score float32 | label int16 | text string |
|---|---|---|
0.956754 | 1 | Kids Math Glossary and Terms: Probability and Statistics Average - The average is a number that is one way to find the typical value of a set of numbers. You find the average by adding up all the numbers and then dividing the total by the number of numbers in the set. To find the average of the data set (1, 3, 3, 4, 4, 5, 8): Add all the values together 1+3+3+4+4+5+8 = 28. Then divide by the total number of values 28 ÷ 7 = 4. The average value is 4. Correlation - A measurement of how closely related two variables are. Dependent event - Events are dependent if the occurrence of either event affects the probability of the occurrence of the other event. In other words, one event depends on the other. Event - A collection of outcomes from an experiment. Extrapolate - Extrapolation is a way to estimate values beyond the known data. You can use patterns and graphs to determine other possible data points that were not actually measured. Frequency - The frequency is how often an event occurs during a specific amount of time. Interpolate - Interpolation is a way to estimate data. When you interpolate you estimate the data between two known points on a graph. This can be done by drawing a curve or line between the two points. Interval - The set of numbers between two other numbers in a data set. It often refers to a period of time between two events. Mean - The mean is the same as the average. It is a way of determining the typical value of a data set. The mean is found by adding up all the numbers and then dividing them by the total number of numbers. See average above for an example. Median - The median is the halfway point in a set of numbers. It can be different from the mean or average. If you line up the numbers in a data set from least to greatest, the median would be the middle number. Example: The median of the data set (2, 3, 7, 12, 45) is 7. Mode - The mode is the number that occurs the most often in a data set.
Example: The mode of the data set (2, 2, 7, 8, 12, 7, 2, 14) is 2. Outcome - The result of an experiment. Percent - A percent is a special type of fraction where the denominator is 100. It can be written using the % sign. Example: 50% is the same as ½ or 50/100. Probability - The probability is the chance that an event will or will not occur. Random - If something is random, then all possible events have an equal chance of occurring. Range - The range is the difference between the largest number and the smallest number in a data set. Example: The range of the data set (2, 2, 7, 8, 12, 7, 2, 14) is 14 - 2 = 12. Ratio - A ratio is a comparison of two numbers. It can be written a few different ways. Example: The following are all ways to write the same ratio: 1/2, 1:2, 1 to 2. Slope - A number that indicates the incline or steepness of a line on a graph. Slope equals the "rise" over the "run" on a graph. This can also be written as the change in y over the change in x. Example: If two points on a line are (x1, y1) and (x2, y2), then the slope = (y2 - y1) ÷ (x2 - x1). Statistics - Statistics are a set of data and numbers that are collected about a specific event or subject. |
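The average, median, mode, and range defined in the glossary above can be checked with Python's standard `statistics` module, using the glossary's own example data sets:

```python
from statistics import mean, median, mode

data = [1, 3, 3, 4, 4, 5, 8]
print(mean(data))                  # average: 28 / 7 = 4

print(median([2, 3, 7, 12, 45]))   # middle value of the sorted list: 7

sample = [2, 2, 7, 8, 12, 7, 2, 14]
print(mode(sample))                # most frequent value: 2
print(max(sample) - min(sample))   # range: 14 - 2 = 12
```

Each result matches the worked example given for that term in the glossary.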
0.913033 | 1 | Next: , Previous: Value Labels Records, Up: System File Format B.4 Document Record The document record, if present, has the following format: int32 rec_type; int32 n_lines; char lines[][80]; int32 rec_type; Record type. Always set to 6. int32 n_lines; Number of lines of documents present. char lines[][80]; Document lines. The number of elements is defined by n_lines. Lines shorter than 80 characters are padded on the right with spaces. |
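The document-record layout above can be read with a few lines of `struct`-based Python. This is a sketch, not an official reader: it assumes little-endian byte order and ASCII text, while real system files may use either byte order depending on the machine that wrote them, and the function name is mine:

```python
import struct

def read_document_record(buf, offset=0):
    """Parse a document record: int32 rec_type (always 6), int32 n_lines,
    then n_lines lines of 80 chars each, space-padded on the right.
    Assumes little-endian byte order."""
    rec_type, n_lines = struct.unpack_from("<ii", buf, offset)
    assert rec_type == 6, "not a document record"
    offset += 8
    lines = []
    for _ in range(n_lines):
        raw = buf[offset:offset + 80]
        lines.append(raw.decode("ascii").rstrip())  # drop the space padding
        offset += 80
    return lines, offset

# Build a tiny one-line record by hand and read it back.
doc = struct.pack("<ii", 6, 1) + b"Example document line".ljust(80)
lines, _ = read_document_record(doc)
print(lines)  # ['Example document line']
```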
0.908933 | 1 | Repeat Exercise 11 with the text file integers.txt generated in Exercise 12. (Exercise 11) Write a program that reads the data from the file integers.dat generated in Exercise 10. After the data are read, display the smallest, the largest, and the average. (Exercise 10) Write a program that randomly generates N integers and stores them in a binary file integers.dat. The value for N is input by the user. Open the file with a text editor and see what the contents of a binary file look like. (Exercise 12) Repeat Exercise 10, but this time, store the numbers in a text file integers.txt. Open this file with a text editor and verify that you can read the contents. |
0.915093 | 1 | John Cook is an applied mathematician working in Houston, Texas. His career has been a blend of research, software development, consulting, and management. You can read more from him at his website. More Sides or More Dice? My previous post looked at rolling 5 six-sided dice as an approximation of a normal distribution. If you wanted a better approximation, you could roll dice with more sides, or you could roll more dice. Which helps more? Whether you double the number of sides per die or double the number of dice, you have the same total number of spots possible. But which approach helps more? Here's a plot. We start with 5 six-sided dice and either double the number of sides per die (the blue dots) or double the number of dice (the green triangles). When the number of sides n gets big, it's easier to think of a spinner with n equally likely stopping points than an n-sided die. At first, increasing the number of sides per die reduces the maximum error more than the same increase in the number of dice. But after doubling six times, i.e. increasing by a factor of 64, both approaches have the same error. Further increasing the number of sides per die makes little difference, while continuing to increase the number of dice decreases the error. The long-term advantage goes to increasing the number of dice. By the central limit theorem, the error will approach zero as the number of dice increases. But with a fixed number of dice, increasing the number of sides only makes each die a better approximation to a uniform distribution. In the limit, your sum approximates a normal distribution no better or worse than the sum of five uniform distributions. But in the near term, increasing the number of sides helps more than adding more dice.
The central limit theorem may guide you to the right answer eventually, but it might mislead you at first. Published at DZone with permission of John Cook, author and DZone MVB. (source) |
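A rough numerical check of the comparison in the post: compute the exact distribution of a dice sum by convolution and measure the largest gap between its CDF and the matching normal CDF. This is a sketch — the post does not say exactly which error measure its plot uses, so the metric here (maximum CDF error with a continuity correction) is my assumption, as are the function names.

```python
from math import erf, sqrt

def dice_sum_pmf(n_dice, sides):
    """Exact distribution of the sum of fair dice, by repeated convolution."""
    pmf = {0: 1.0}
    for _ in range(n_dice):
        new = {}
        for total, p in pmf.items():
            for face in range(1, sides + 1):
                new[total + face] = new.get(total + face, 0.0) + p / sides
        pmf = new
    return pmf

def max_cdf_error(n_dice, sides):
    """Largest gap between the dice-sum CDF and a normal CDF with the
    same mean and variance (continuity-corrected)."""
    mu = n_dice * (sides + 1) / 2
    var = n_dice * (sides * sides - 1) / 12
    pmf = dice_sum_pmf(n_dice, sides)
    cdf, err = 0.0, 0.0
    for total in sorted(pmf):
        cdf += pmf[total]
        normal = 0.5 * (1 + erf((total + 0.5 - mu) / sqrt(2 * var)))
        err = max(err, abs(cdf - normal))
    return err

# Starting from 5 six-sided dice: double the sides, or double the dice?
print(max_cdf_error(5, 12), max_cdf_error(10, 6))
```

Repeating the doubling and comparing the two error sequences reproduces the qualitative story: more sides helps at first, but only more dice drives the error toward zero.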
0.948057 | 1 | Abel survives a new world A mouse named Abel and his wife Amanda went on a picnic. Then a hurricane came and they found shelter in a tree. Amanda left food out in the rain, so Abel went to get it. Many bad things happened when Abel was swept away by floodwaters. That is how he found the island. Still many bad things happened again. He was lost for many days. Then he found his town. He went home and his wife was so happy he was home. |
0.905228 | 1 | The Hindi meaning of the word "iteration" is not yet in our database. Showing English to English meaning. ITERATION ===> (computer science) a single execution of a set of instructions that are to be repeated; "the solution took hundreds of iterations" [NOUN] ITERATION ===> (computer science) executing the same set of instructions a given number of times or until a specified result is obtained; "the solution is obtained by iteration" [NOUN] ITERATION ===> doing or saying again; a repeated performance [NOUN] |
0.946374 | 1 | 1st Grade Math Worksheets 1st grade math worksheets in this page are organized based on a standard curriculum. Click on the quick links and explore the activities based on the given topics. Answer keys are provided at the end of each worksheet. Some important charts are also provided for kids to put up at home or school. Number chart 1-50 Arrange the numbers in increasing order Count and add the pictures Color a quarter part of the shapes Match the coins and their values Numbers Charts A variety of charts is provided here to read and write the numbers. Make use of the number charts of various levels. Missing number charts and blank charts can be used to check the number skills of kids. Counting Numbers Worksheets Interactive worksheets on counting numbers are given below. Kids love to learn counting with fun! Hey kids! Enjoy your day by doing these picture counting activities. Place Value Worksheets It's important to know the position value of any number. For grade 1 kids, it's necessary to know the place values of ones and tens. Test your skills by working these sheets. Check your answers with the given answer keys. Number Names Worksheets Once the kids are well versed with counting numbers and identifying place value, it's time to work on writing number names for the given numerals and writing numerals for the given number names. Number and Picture Patterns Follow the picture and number sequence and fill in the missing parts. Check your skill at writing the missing numbers before and after a given number and between two given numbers. Ordering Numbers Worksheets Arrange the given set of numbers in ascending and descending order. Addition Worksheets Begin the addition practice for your kids using pictures. Addition using pictures helps the kids to understand the concept easily, instead of addition using numbers only.
Once they get thorough with picture addition, they can proceed to adding numbers. Addition charts are also provided to develop speed in adding. Subtraction Worksheets 1st grade math worksheets based on subtraction, including picture, horizontal, and vertical subtraction and word problems, are provided in this section. Multiplication and Division Worksheets Basic multiplication and division worksheets are given in this section. Fractions Worksheets This is the best place to start fractions for the kids. Coloring activities on fractions help them to understand the concept easily. Shapes Worksheets Grade one math worksheets based on shapes provide the basic knowledge of identifying and drawing 2- and 3-dimensional shapes. Print and use the worksheets to gain mastery in shapes. Money Worksheets Coin charts are provided to identify the coins and their values. Also check out the match-the-coins worksheet. These worksheets help the kids to learn real-life money values. Time Worksheets Free printable 1st grade math worksheets based on time are provided here. Free Math Test Practice |
0.99928 | 1 | The purpose of this lab is to design a Fibonacci sequence detector circuit. By definition, the first two numbers in the Fibonacci sequence are 0 and 1, and each subsequent number is the sum of the previous two. The first few numbers in the sequence are: 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, ... You will design a system that displays any number less than 13 that belongs in the sequence. The display should be blank whenever a non-sequence number occurs. Besides displaying the Fibonacci number, 2 alternating blinking lights will alert the user that a sequence number has been entered. I need to develop a truth table and get the simplified logic equations to build my circuit. I'm struggling with this lab. Please help!!! |
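A starting point for the truth table: assuming the input is a 4-bit value (bits a b c d, with a the most significant bit), the detector output is 1 exactly when the input is one of the Fibonacci numbers below 13, namely {0, 1, 2, 3, 5, 8}. This short script prints the full table; the simplified logic equations would then come from a Karnaugh map over these 16 rows.

```python
# Fibonacci numbers below 13 - the values the display must recognize
FIB_UNDER_13 = {0, 1, 2, 3, 5, 8}

def detector(n):
    """1 if the 4-bit input value is in the sequence, else 0 (display blank)."""
    return int(n in FIB_UNDER_13)

print("a b c d | F")
for n in range(16):
    bits = [(n >> k) & 1 for k in (3, 2, 1, 0)]  # a = MSB ... d = LSB
    print(" ".join(map(str, bits)), "|", detector(n))
```

Exactly six of the sixteen rows have output 1, so the minterm list for the K-map is m(0, 1, 2, 3, 5, 8).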
0.924648 | 1 | Search for any word you like, such as fleek: A variation on the Cincinnati bowtie, where the giver flatulates into the victim's face. When giving a Cincinnati bowtie to his girlfriend, he decided to spice it up and threw a windy city bowtie into the mix. by Count Spankula, 23 April 2006 |
0.96622 | 1 | The Math Forum » Discussions » Inactive » Historia-Matematica Topic: [HM] Proof of Theorem of Theaetetus. Replies: 15 Last Post: Jul 25, 2006 6:12 PM Robert (Bob) Eldon Taylor Posts: 101 Registered: 12/3/04 [HM] Proof of Theorem of Theaetetus. Posted: Aug 26, 2003 2:24 PM Dear Friends, In "Stetigkeit und irrationale Zahlen" section 4, Dedekind introduces the Theorem of Theaetetus (Eu. X.9): if a whole number D is not the square of a whole number, it is not the square of any rational number. He gives an indirect proof summarized below. What is the source of this proof? Robert Eldon Taylor philologos at mindspring dot com First, if D is a whole number, but not the square of a whole number, then it lies between two squares, i.e. there is a whole number n such that n^2 < D < (n + 1)^2. Assume there is a rational number whose square is D; then there are two positive whole numbers t, u which satisfy the equation t^2 - Du^2 = 0, and one may assume that u is the smallest positive number which possesses the property that its square may be transformed into the square of a whole number t by multiplication with D. Now, since nu < t < (n + 1)u, the number u' = t - nu is a positive whole number, and indeed smaller than u. Further, if one sets t' = Du - nt, then t' is likewise a positive whole number, and there results t'^2 - Du'^2 = (n^2 - D)(t^2 - Du^2) = 0, which stands in contradiction to the assumption about u. © Drexel University 1994-2014. All Rights Reserved. |
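The descent step in Dedekind's proof rests on the algebraic identity t'^2 - Du'^2 = (n^2 - D)(t^2 - Du^2) for t' = Du - nt and u' = t - nu, which holds for arbitrary values of the variables. A quick mechanical check over a grid of small integers (the variable names follow the proof; this only verifies the identity, not the full argument):

```python
# Check t'^2 - D*u'^2 == (n^2 - D)*(t^2 - D*u^2)
# for t' = D*u - n*t and u' = t - n*u, over small values.
ok = True
for D in range(2, 10):
    for n in range(1, 5):
        for t in range(1, 8):
            for u in range(1, 8):
                lhs = (D * u - n * t) ** 2 - D * (t - n * u) ** 2
                rhs = (n * n - D) * (t * t - D * u * u)
                ok = ok and (lhs == rhs)
print(ok)  # True
```

Since n^2 - D is nonzero when D is not a square, t^2 - Du^2 = 0 forces t'^2 - Du'^2 = 0 as well, with the smaller witness u' < u — the contradiction the proof needs.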
0.965912 | 1 | Vectorial Envelope Generators The opcodes to generate vectors containing envelopes are vlinseg and vexpseg. These opcodes are similar to linseg and expseg, but operate with vectorial signals instead of with scalar signals. Output is a vector hosted by an f-table (that must be previously allocated), while each break-point of the envelope is actually a vector of values. All break-points must contain the same number of elements (ielements). |
0.901881 | 1 | getting ready for a game of chess... getting ready for a game of chess setting up a chess board at four is impressive knowing how the pieces move impresses me even more chess is a game that takes a lifetime to learn 1 comment: gwadzilla said... yesterday Grant and I were playing a game of Chess he knows where the pieces go he knows how the pieces move he is pretty good at formulating a plan he will exchange pieces he will think a few moves ahead but it is tough for a four year old to corner the king and it is tough for a four year old to play a full game of chess after a bit of a chess game blood bath I could see that Grant's interest was starting to wane so I tried to accelerate the process of putting him into check he was distracted his mind was on other things I had just enough pieces remaining to corner his king but instead of playing the full game after a few approaches on his King Grant reached across the board and tipped over his king and grinned he knew the game could not be won so he forfeited |
0.921526 | 1 | Hi all, I have a problem and was wondering if anyone could help. I have a friend who has a computer that he would like me to work on. He has some software problems, and he wants to use his computer as a game server because he has a 2-player online game he wants me to be his teammate in. The problem is that he lives in California and I live in Indiana. I would use Microsoft's remote access to log on to his computer, but I'm using a Mac, and my only Windows computer is getting fixed. Can anyone help me? |
0.994915 | 1 | Canadian Mathematical Society Solutions should be submitted to Dr. Valeria Pendelieva 708 - 195 Clearview Avenue Ottawa, ON K1Z 6S1 Solutions to these problems should be postmarked no later than October 31, 2000. 1. Let x, y, z be positive real numbers for which x^2 + y^2 + z^2 = 1. Find the minimum value of S = xy/z + yz/x + zx/y. 2. The segments BE and CF are altitudes of the acute triangle ABC, where E and F are points on the segments AC and AB, respectively. ABC is inscribed in the circle Q with centre O. Denote the orthocentre of ABC by H, and the midpoints of BC and AH by M and K, respectively. Let ∠CAB = 45°. (a) Prove that the quadrilateral MEKF is a square. (b) Prove that the midpoint of both diagonals of MEKF is also the midpoint of the segment OH. (c) Find the length of EF, if the radius of Q has length 1 unit. 3. Prove the inequality a^2 + b^2 + c^2 + 2abc < 2, if the numbers a, b, c are the lengths of the sides of a triangle with perimeter 2. 4. Each of the edges of a cube is 1 unit in length, and is divided by two points into three equal parts. Denote by K the solid with vertices at these points. (a) Find the volume of K. (b) Every pair of vertices of K is connected by a segment. Some of the segments are coloured. Prove that it is always possible to find two vertices which are endpoints of the same number of coloured segments. 5. There are n points on a circle whose radius is 1 unit. What is the greatest number of segments between two of them whose length exceeds √3? 6. Prove that there are no three rational numbers x, y, z such that x^2 + y^2 + z^2 + 3(x + y + z) + 5 = 0. © Canadian Mathematical Society, 2014 : https://cms.math.ca/ |
0.904774 | 1 | Greatest common divisor From Citizendium, the Citizens' Compendium Revision as of 09:57, 21 April 2010 by Peter Schmitt This article is developing and not approved. The greatest common divisor (often abbreviated to gcd, or g.c.d., sometimes also called highest common factor) of two or more natural numbers is the largest number which divides evenly both (or all) of the numbers. Since 1 divides all numbers, and since a divisor of a number cannot be larger than that number, the greatest common divisor of some numbers is a number between 1 and the smallest of the numbers inclusive, and therefore can be determined (at least in principle) by testing finitely many numbers. Numbers for which the greatest common divisor is 1 are called relatively prime. If (for three or more numbers) any two of them are relatively prime, they are called pairwise relatively prime. The greatest common divisor of two numbers a and b is usually written as gcd(a,b) or, if no confusion is to be expected, simply as (a,b). Finding the greatest common divisor A theoretically important method to determine the greatest common divisor uses prime factorization: every prime factor of a common divisor must be a prime factor of all the numbers. The greatest common divisor therefore is the product of all common prime factors, each taken with the highest power common to all the numbers. However, since prime factorization is not efficient, this is at most practical for small numbers (or for numbers whose factorization is already known). This product expression shows that the greatest common divisor can be characterized by the following property: the gcd is a common divisor, and every common divisor divides it evenly. Fortunately, the Euclidean algorithm provides an efficient means to calculate the greatest common divisor.
It also shows that the greatest common divisor can be expressed as an integer linear combination of the numbers: (a,b) = ra + sb (with integers r and s). Since every such linear combination is divisible by all divisors common to a and b, this, in turn, shows that it is the smallest positive linear combination, and therefore (in the language of ring theory) the ideal generated by a and b is a principal ideal generated by (a,b). For instance: • (4,9) = 1, (4,6) = 2, and (4,12) = 4, because: the divisors of 4 are 1, 2, 4; the divisors of 9 are 1, 3, 9; the only common divisor, and hence the gcd, is 1. The divisors of 4 are 1, 2, 4; the divisors of 6 are 1, 2, 3, 6; the common divisors are 1 and 2, and the gcd is 2. The divisors of 4 are 1, 2, 4; the divisors of 12 are 1, 2, 3, 4, 6, 12; the common divisors are 1, 2, 4, and the gcd is 4. • (72,108) = 36 because the prime factorizations 72 = 2^3 • 3^2 and 108 = 2^2 • 3^3 have the common factor 2^2 • 3^2 = 36. • 6, 10, and 15 are relatively prime, but not pairwise relatively prime, because gcd(6,10,15) = 1, but (6,10) = 2, (6,15) = 3, (10,15) = 5, as can be seen either from the prime factorizations 6 = 2 • 3, 10 = 2 • 5, 15 = 3 • 5, in which no prime occurs in all three products, or from the lists of divisors: 1,2,3,6 for 6; 1,2,5,10 for 10; and 1,3,5,15 for 15. • 4, 9, and 10 are relatively prime, but not pairwise relatively prime, because gcd(4,9,10) = 1, but (4,10) = 2, even though the two pairs (4,9) = (9,10) = 1 are relatively prime. • 7, 9, 10 are pairwise relatively prime, and therefore also relatively prime, because (7,9) = (7,10) = (9,10) = (7,9,10) = 1. See also the Tutorial. In elementary arithmetic, the greatest common divisor is used to simplify expressions by reducing the size of the numbers involved: a fraction p/q has the reduced representation (p/(p,q)) / (q/(p,q)). For instance: the reduced form of 9/12 is 3/4 because (9,12) = 3.
Similarly, equations can be simplified: the quadratic equation 9x^2 + 12x = 0 is equivalent to 3x^2 + 4x = 0. Moreover, the gcd can be used to calculate the least common multiple: lcm(a,b) = ab/gcd(a,b); for example, lcm(9,12) = 9•12 / gcd(9,12) = 108/3 = 36. The notion of divisibility can be generalized to the context of rings; the idea of a greatest common divisor, however, is not always applicable. But in Euclidean rings, i.e., in rings for which there is an analogue to the Euclidean algorithm, a greatest common divisor does exist. An important example is the ring of polynomials: the greatest common divisor of two polynomials is a common factor of greatest degree. In this case the gcd is only unique up to a constant factor. More generally, a greatest common divisor can be defined for rings with unique prime factorization. In summary: 1 ≤ gcd(a,b) ≤ min(a,b); gcd(a,b) • lcm(a,b) = ab; and if a = ∏ p^a(p) and b = ∏ p^b(p) (products over primes p), then gcd(a,b) = ∏ p^min(a(p),b(p)). |
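The Euclidean algorithm mentioned in the article, together with its extended form (which produces the linear combination (a,b) = ra + sb) and the gcd–lcm relation, fits in a few lines of Python:

```python
def gcd(a, b):
    """Euclidean algorithm: repeatedly replace (a, b) by (b, a mod b)."""
    while b:
        a, b = b, a % b
    return a

def ext_gcd(a, b):
    """Extended version: returns (g, r, s) with g = gcd(a, b) = r*a + s*b."""
    if b == 0:
        return a, 1, 0
    g, r, s = ext_gcd(b, a % b)
    return g, s, r - (a // b) * s

def lcm(a, b):
    return a * b // gcd(a, b)

print(gcd(72, 108))           # 36, matching the factorization example
print(lcm(9, 12))             # 36
g, r, s = ext_gcd(72, 108)
print(g == r * 72 + s * 108)  # True: the gcd is a linear combination
```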
0.989991 | 1 | Though there are differences between them, they complement each other, in the sense that they learn how to deal with each other by observing each other's individuality. They learn from each other by teaching each other new stuff. Misunderstandings happen, however. Rose, however stubborn, is the firmer one in negotiating and compromising with Pom to get to a final understanding. After all, they need to liaise with each other to set a good example for the rest of the staff. |
0.965737 | 1 | 36. He Takes the DMV Test Gap-fill exercise Please read the instructions above the ads. He goes to DMV. He stands in ____. He waits with everyone else. He moves ____ the front of the line. Someone says, "____!" He goes to the counter. He pays for the test. He gets a receipt. ____ gets a number. He sits down in ____ chair. He waits for his number. |
0.947451 | 1 | How does the value of x determine the value of y in the equation y = 4x + 1? In the equation y = 4x + 1, y is a dependent variable and x is an independent variable. The value of x determines the value of y: for each value of x, there is one value of y. Just put in various values for x to get the respective y values. Explore this Topic: The equation x + y = 9 does not express y explicitly as a function of x. If the equation were written y = 9 - x, it would be expressing y as a function of x. |
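The dependence described above is easy to see by tabulating a few values: each x goes in, exactly one y comes out.

```python
def y(x):
    """The rule y = 4x + 1 assigns exactly one y to each x."""
    return 4 * x + 1

for x in [-1, 0, 1, 2]:
    print(x, "->", y(x))  # -1 -> -3, 0 -> 1, 1 -> 5, 2 -> 9
```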
0.979387 | 1 | Copyright © University of Cambridge. All rights reserved. 'Quadruple Clue Sudoku' printed from Show menu By Henry Kwok Rules of Quadruple Clue Sudoku This is a variation of sudoku on a "standard" 9x9 grid which contains a set of special clue-numbers. These are small numbers provided by sets of 4 small digits. Each set of 4 small digits in the intersection of two grid lines stands for the numbers in the four cells of the grid adjacent to this set. Here is a brief explanation of how the special clue-numbers work. It can be seen that the 4 adjacent cells around each set of 4 small digits overlap one or more sets of adjacent cells with 4 small digits. For example, in the puzzle, taking the two sets of adjacent cells with small digits {4568} and {1789}, we find that they overlap at the cell with the number 8. |
0.998207 | 1 | A Star not a Tree? Time Limit: 1000MS Memory Limit: 65536K Total Submissions: 3658 Accepted: 1824 Luke wants to upgrade his home computer network from 10mbs to 100mbs. His existing network uses 10base2 (coaxial) cables that allow you to connect any number of computers together in a linear arrangement. Luke is particularly proud that he solved a nasty NP-complete problem in order to minimize the total cable length. Unfortunately, Luke cannot use his existing cabling. The 100mbs system uses 100baseT (twisted pair) cables. Each 100baseT cable connects only two devices: either two network cards or a network card and a hub. (A hub is an electronic device that interconnects several cables.) Luke has a choice: He can buy 2N-2 network cards and connect his N computers together by inserting one or more cards into each computer and connecting them all together. Or he can buy N network cards and a hub and connect each of his N computers to the hub. The first approach would require that Luke configure his operating system to forward network traffic. However, with the installation of Winux 2007.2, Luke discovered that network forwarding no longer worked. He couldn't figure out how to re-enable forwarding, and he had never heard of Prim or Kruskal, so he settled on the second approach: N network cards and a hub. Luke lives in a loft and so is prepared to run the cables and place the hub anywhere. But he won't move his computers. He wants to minimize the total length of cable he must buy. The first line of input contains a positive integer N <= 100, the number of computers. N lines follow; each gives the (x,y) coordinates (in mm.) of a computer within the room. All coordinates are integers between 0 and 10,000.
Output consists of one number, the total length of the cable segments, rounded to the nearest mm. Sample Input 4 0 0 0 10000 10000 10000 10000 0 Sample Output 28284 |
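The hub position that minimizes total cable length is the geometric median of the computer locations. One common way to compute it (not necessarily what accepted solutions use — simulated annealing is also popular for this problem) is Weiszfeld's iteration, sketched here:

```python
from math import hypot

def geometric_median(points, iters=1000, eps=1e-7):
    """Weiszfeld's iteration for the point minimizing the total distance
    to the given points (the optimal hub location). Starts at the centroid."""
    x = sum(p[0] for p in points) / len(points)
    y = sum(p[1] for p in points) / len(points)
    for _ in range(iters):
        num_x = num_y = denom = 0.0
        for px, py in points:
            d = hypot(x - px, y - py)
            if d < eps:           # hub sits on a computer: stop here
                return x, y
            num_x += px / d       # each point weighted by 1/distance
            num_y += py / d
            denom += 1 / d
        x, y = num_x / denom, num_y / denom
    return x, y

# The four corners of a 10 m x 10 m square: the best hub is the center,
# and the total cable length is 4 * 5000 * sqrt(2) ~ 28284 mm.
pts = [(0, 0), (0, 10000), (10000, 10000), (10000, 0)]
hx, hy = geometric_median(pts)
total = sum(hypot(hx - px, hy - py) for px, py in pts)
print(round(total))  # 28284
```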
0.978993 | 1 | Physics Forums - Quantum Physics - Quantum number. benzun_1999 Nov19-03 05:18 AM Quantum number. hi all, What is a quantum number? I know a bit about quantum numbers but I am not clear about them, so can anyone please help me? All for God. jcsd Nov19-03 07:57 AM When a property of a system can only take certain discrete values, each discrete value of that property has a quantum number whose value is determined by which discrete state it is in. For example, the angular momentum of an electron in a bound state of an atom can only take values of nh/2π, where n is an integer and is the quantum number. lethe Nov19-03 10:43 AM a quantum number is an eigenvalue of an observable from some maximally commuting set of observables of your system. quantum numbers include n for energy level, spin, mass, charge, and more... i would not say that a quantum number must only take discrete values, although this is usually the case. benzun_1999 Nov20-03 06:01 AM could you please explain a bit more (use, properties, formula, etc.) drag Nov20-03 01:20 PM Greetings! Try this and follow the other links: Peace and long life. lethe Nov21-03 07:03 AM every time the system has a symmetry, there is a quantum number that labels which state of the symmetry the system is in. for example, there are many ways that rotations can act on a quantum system, and if the system is a rotationally symmetric one, then whenever the system starts in one of those states, it has to remain in that state. the spin quantum number is just a number to label which of those states the system is in. Luquido Lumino Nov28-03 02:24 AM Single unit differentiation A quantum number, as far as I can recognize, is the single unit value in a non-abstractive measure; therefore, if the number 3 has a value in a chronograph, it would be subject relative to its effect of angular definition. I am still only a student so I may be wrong.
mormonator_rm Dec1-03 10:22 AM Quantum numbers There are a whole lot of quantum numbers associated with different fields and particles. It's all about quantization; various properties can only be had in certain discrete values. The quantum numbers are actually just a description of how many of these discrete units are present in a given object. For example, I deal a lot with mesons (particles that consist of a quark and an antiquark in a bound system), and there are a number of quantum numbers to deal with. First of all, there are quantum numbers for angular momentum and spin momentum, called l and s respectively. There are also flavor quantum numbers, including isospin (I), strangeness (S), charm (C), bottom (B) and top (T). The isospin is a property of the lightest quarks (up and down), while the others are properties of the heavier quarks. There is also the baryon number (b). All quarks have an intrinsic baryon number of 1/3 and their antiquarks have baryon number -1/3. The result is that baryons have b = 1 and mesons have b = 0 (which is the natural result, after all). The l and s quantum numbers can be combined through a process called "coupling", which is like addition: j = l '+' s = {[l+s], [l+s-1], ..., [l-s]}, but it allows all the values in between the sum and difference of the two, as shown above. The result of coupling is the total momentum number j. There are also quantum numbers associated with symmetries here. There is a parity number P, which is either +1 or -1, based on the equation P = (-1)^(l+1), a charge conjugation number based on the formula C = (-1)^(l+s), and a G-parity number based on the formula G = (-1)^(l+s+I), which includes the isospin in the symmetry. There is also a radial excitation quantum number N that is useful. When we represent the quantum states that are occupied by mesons, we generally form the multiplets of mesons based on the quantum numbers N, l, s, j, P, and C.
Within these multiplets are members with different values of the I, G, S, C, B and T numbers as well. All mesons have b = 0. So we generally represent the mesons, in written form, by the statement I^G(J^PC). For example, the pion can be represented as 1-(0-+), the eta meson as 0+(0-+), and the kaon as 1/2(0-). They all occur in the same multiplet, the ground state pseudoscalar multiplet, with (0-+) being the key defining numbers there. *The kaon is an isospin-1/2 particle and is not an eigenstate of C; thus the C and G numbers are omitted. So there's some examples of how quantum numbers are used to keep track of which particles are which and how they are related to each other. © 2014 Physics Forums |
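The coupling rule and the symmetry formulas in the post above can be tabulated mechanically. This sketch handles only integer l and s (half-integer spins would need fractions), and the function names are mine:

```python
def couple(l, s):
    """j = l '+' s runs from |l - s| up to l + s in integer steps."""
    return list(range(abs(l - s), l + s + 1))

def parity(l):          # meson parity P = (-1)^(l+1)
    return (-1) ** (l + 1)

def c_parity(l, s):     # charge conjugation C = (-1)^(l+s)
    return (-1) ** (l + s)

def g_parity(l, s, i):  # G-parity G = (-1)^(l+s+I)
    return (-1) ** (l + s + i)

# Ground-state pseudoscalars (l=0, s=0): J = 0, P = -1, C = +1, i.e. (0-+)
print(couple(0, 0), parity(0), c_parity(0, 0))
# l=0, s=1 gives the vector mesons: J = 1, P = -1, C = -1, i.e. (1--)
print(couple(0, 1), parity(0), c_parity(0, 1))
```

The two printed cases reproduce the J^PC values of the pseudoscalar and vector ground-state multiplets quoted in the thread.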
0.980067 | 1 | The Math Forum - Ask Dr. Math High School Archive. This page: permutations/combinations. See also the Dr. Math FAQ. Browse High School Permutations/Combinations. Stars indicate particularly interesting answers or good places to begin browsing. Combinations of Pegs [02/04/1997] Given a square peg board with sixteen pegs, how many triangles can you form by connecting three pegs? Combinations of Poker Hands [05/15/1998] Counting three of a kind, two pair, and one pair poker hands. Combinations of Prisoners [09/02/1997] Nine prisoners are taken for their daily exercise handcuffed together in threes. How would the warden arrange the men each day so that no two men are handcuffed together more than once over a six day period? Combinations of Three Words [8/28/1996] I have three columns of 20 words each... Combinations of Toppings when Ordering a Pizza [05/19/2005] I was working on calculating how many combinations of pizza can be made using 6 different toppings with no double toppings allowed, and I found that both c(6,0) + c(6,1) + c(6,2) + ... + c(6,6) and 2^6 gave me the same correct answer of 64. Why do both methods work? Combinations of X's and Y's [10/27/1999] X's and Y's are written in a row (e.g. XX...XXYY...Y). How many different arrangements of the letters can there be? Combinations Totaling 100 [09/27/1999] In how many ways can I achieve a sum of 100 adding together only 6 integers taken from the set of integers from 1 to 44? Combinations with Duplicate Objects [06/22/1999] How many different combinations are there when choosing 3 letters from the group {ABBCCC}? Combinatorial Proof [6/13/1996] Please prove this combinatorial proof.
Combinatorial Proof [7/16/1996] How do I prove that C(n,r)C(r,k) = C(n,k)C(n-k,r-k) where k <= r <= n? Combinatorial Proof: Identity [02/10/2003] Prove: C(2n,4) = 2C(n,4) + 2C(n,3)C(n,1) + (C(n,2))^2 for all n in N; n greater than or equal to 4. Combinatorics [08/19/1997] sum((-1)^k*((4n choose 2k)/(2n choose k)) for k = 1 to 2n Combinatorics in a 4x4 Board with White and Black Tiles [07/27/2006] 8 white and 8 black tiles are arranged in a 4x4 square such that in each row and each column there are 2 white and 2 black tiles. In how many different patterns can the tiles be arranged? Combinatorics: Ramsey Theory [01/12/1998] Combinatorics: Unique Groupings [5/30/1996] Twenty-four friends want to play as many rounds of golf as they can... how many unique rounds of golf can they play? Comparing Corresponding Factors [03/04/2003] Show that C(p-1,k) = ((p-1)*...*(p-k))/(1*...*k) is congruent to (-1)^k mod p. Connect 4 - Number of Winning Arrangements [6/20/1996] Can you help us find the formula for the number of winning lines on a 7 x 6 Connect-4 board? Connecting the Dots [03/14/1999] Counterfeit Coin Challenge [05/12/2007] Counting Alternating Subsets [03/29/2003] Find the number of alternating subsets of A_n. Counting Arrangements of Objects in a Set [01/17/2006] How many different ways can you arrange the elements in a set? Two elements (a and b) can be ab or ba, but it's harder with bigger sets. An explanation of the math underlying the Multiplication Principle. Counting Digits [10/23/1998] Counting Handshakes [9/7/1995] There are 20 people in a room, and each person shakes hands just once with everyone else. Wouldn't this be 380 total handshakes? 19 * 20 = 380. Counting Paths on a Grid [08/21/2006] On a 4 x 5 grid, how many possible routes are there to go from one corner to the diagonally opposite corner? Counting Paths With Factorials [07/24/2002] In the diagram, how can I find the number of paths from point A to point B, using factorials?
Counting Patterns in Stacking Bricks [01/04/2009] I have a number of bricks which are each 3 units long, 1 unit deep and 1 unit wide. I want to stack them in a tower 3 units wide, 1 unit deep and 10 units high. How many ways can that be done? Counting Possible Combinations of Weights [10/07/2004] If you have a set of weights, in sizes 50g, 25g, 15g, and 5g, how would you go about determining how many possible combinations of weights you could make to equal 85g? Counting Possible Letter Arrangements [10/25/2004] In rearranging the letters of the word GUMTREE, how many ways can the letter M be to the left of the two E's? Counting Possible Paths [05/30/2002] How do I find the number of pathways from A to B? Counting Rectangles [05/23/2001] Counting Squares in Bigger Squares [02/29/2000] How many edge 2 squares (2x2 squares) can be found in an edge 4 square (a 4x4 square)? Counting Triangles [05/27/1999] Creating a List of Permutations with a Computer Program [07/31/2007] I'm writing a computer program to list all the permutations of a given set of numbers, but am not sure of the best way to do it. Any ideas? Creating an Organized List of Combinations [11/25/2008] I know how to use the formula to determine the number of combinations of 6 numbers drawn from 12 numbers (1 to 12 inclusive). But without the formula, how can I make a list of all the possible combinations and be confident that I've found them all? Cuisenaire Rod Combinations [11/13/2001] We are to record 512 different combinations of each level (or color) of rod, with the orange rod (the largest) being the last. Does a formula exist to find these combinations without literally manipulating the rods? Darts Tournament with Eight Players [10/05/2002] There are eight players in a darts tournament. Each player plays one game against each of the other players. How many dart games will be played in the tournament?
Data Compression [08/10/2003] Is there an algorithm that can reduce any binary number to a much smaller binary number, then later be reversed to regain the original number? It has to work for any binary number. Dealing with Duplicate Elements [05/17/2002] Digits that Average [7/19/1996] How many integers from 100 to 999 inclusive have one digit that is the average of the other two? © 1994-2014 Drexel University. All rights reserved. |
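Several of the questions above (handshakes among 20 people, a darts tournament with 8 players) reduce to counting unordered pairs, C(n,2). A quick Python check, using nothing beyond the standard library:

```python
from math import comb

# 20 people, each pair shakes hands exactly once:
# 19 * 20 counts every handshake twice, so divide by 2.
print(19 * 20 // 2)   # 190
print(comb(20, 2))    # 190 -- the same thing as "20 choose 2"

# Darts tournament: 8 players, each pair plays one game.
print(comb(8, 2))     # 28
```

This is why the handshake questioner's 19 * 20 = 380 is exactly double the right answer: it counts each handshake once from each participant's side.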
0.975325 | 1 | Digit sum From Wikipedia, the free encyclopedia In mathematics, the digit sum of a given integer is the sum of all its digits (e.g. the digit sum of 84001 is calculated as 8+4+0+0+1 = 13). Digit sums are most often computed using the decimal representation of the given number, but they may be calculated in any other base; different bases give different digit sums, with the digit sums for binary being on average smaller than those for any other base.[1] The digit sum of a number x in base b is given by \sum_{n=0}^{\lfloor \log_b x\rfloor} \frac{1}{b^n}(x \bmod b^{n + 1} - x \bmod b^n). Let S(r,N) be the digit sum for radix r of all non-negative integers less than N. For any 2 ≤ r1 < r2 and for sufficiently large N, S(r1,N) < S(r2,N).[1] The sum of the decimal digits of the integers 0, 1, 2, ... is given by OEIS A007953 in the On-Line Encyclopedia of Integer Sequences. Borwein & Borwein (1992) use the generating function of this integer sequence (and of the analogous sequence for binary digit sums) to derive several rapidly converging series with rational and transcendental sums.[2] The concept of a decimal digit sum is closely related to, but not the same as, the digital root, which is the result of repeatedly applying the digit sum operation until the remaining value is only a single digit. The digital root of any non-zero integer will be a number in the range 1 to 9, whereas the digit sum can take any value. Digit sums and digital roots can be used for quick divisibility tests: a natural number is divisible by 3 or 9 if and only if its digit sum (or digital root) is divisible by 3 or 9, respectively. For divisibility by 9, this test is called the rule of nines, and is the basis of the casting out nines technique for checking calculations.
Digit sums are also a common ingredient in checksum algorithms and were used in this way to check the arithmetic operations of early computers.[3] Earlier, in an era of hand calculation, Edgeworth (1888) suggested using sums of 50 digits taken from mathematical tables of logarithms as a form of random number generation; if one assumes that each digit is random, then by the central limit theorem, these digit sums will have a random distribution closely approximating a Gaussian distribution.[4] The digit sum of the binary representation of a number is known as its Hamming weight or population count; algorithms for performing this operation have been studied, and it has been included as a built-in operation in some computer architectures and some programming languages. These operations are used in computing applications including cryptography, coding theory, and computer chess. Harshad numbers are defined in terms of divisibility by their digit sums, and Smith numbers are defined by the equality of their digit sums with the digit sums of their prime factorizations. 1. ^ a b Bush, L. E. (1940), "An asymptotic formula for the average sum of the digits of integers", American Mathematical Monthly (Mathematical Association of America) 47 (3): 154–156, doi:10.2307/2304217, JSTOR 2304217. 2. ^ Borwein, J. M.; Borwein, P. B. (1992), "Strange series and high precision fraud", American Mathematical Monthly 99 (7): 622–640, doi:10.2307/2324993, JSTOR 2324993. 3. ^ Bloch, R. M.; Campbell, R. V. D.; Ellis, M. (1948), "The Logical Design of the Raytheon Computer", Mathematical Tables and Other Aids to Computation (American Mathematical Society) 3 (24): 286–295, doi:10.2307/2002859, JSTOR 2002859. 4. ^ Edgeworth, F. Y. (1888), "The Mathematical Theory of Banking", Journal of the Royal Statistical Society 51 (1): 113–127. |
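The definitions above (digit sum, digital root, casting out nines) can be sketched in a few lines of Python (an illustrative sketch, not taken from the article):

```python
def digit_sum(x, base=10):
    """Sum of the digits of a non-negative integer x in the given base."""
    s = 0
    while x:
        s += x % base
        x //= base
    return s

def digital_root(x):
    """Apply the digit sum repeatedly until a single digit remains."""
    while x >= 10:
        x = digit_sum(x)
    return x

print(digit_sum(84001))      # 13  (8+4+0+0+1, as in the article)
print(digital_root(84001))   # 4   (13 -> 1+3)
# Basis of casting out nines: a number and its digit sum agree mod 9.
print(digit_sum(84001) % 9 == 84001 % 9)   # True
```

Passing `base=2` gives the Hamming weight (population count) mentioned above.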
0.95477 | 1 | Physics is nothing but the ABC's. Nature is an equation with an unknown, a Hebrew word which is written only with consonants to which reason has to add the dots. Johann Georg Hamann |
1.000005 | 1 | Ranchi To Rourkela Trains Following is the list of all the trains running between Ranchi and Rourkela railway stations (train number, name, departure from Ranchi, arrival at Rourkela, and the seven-day running pattern): 18312 Bsb Sbp Express, Ranchi 04:25, Rourkela 07:55, N Y N N Y N N; 13352 Dhanbad Express, Ranchi 08:55, Rourkela 05:10, Y Y Y Y Y Y Y; 18110 Jat Muri Rou Ex, Ranchi 09:45, Rourkela 14:05, Y Y Y Y Y Y Y; 18102 Muri Express, Ranchi 09:50, Rourkela 14:05, Y Y Y Y Y Y Y; 13351 Dhn Alappuzha E, Ranchi 15:05, Rourkela 18:30, Y Y Y Y Y Y Y; 12831 Bbs Garib Rath, Ranchi 20:10, Rourkela 23:30, Y N Y N N Y N; 17006 Dbg Hyb Express, Ranchi 21:25, Rourkela 00:35, N N N N N N Y. |
0.929515 | 1 | nuclear species [noo-klahyd, nyoo-] A nuclide (from Latin nucleus) is a species of atom characterized by the constitution of its nucleus and hence by the number of protons, the number of neutrons, and the energy content. The various nuclides of a particular chemical element with equal proton number (atomic number) but different neutron numbers are called isotopes of this element. Before the term "nuclide" was internationally accepted (ca. 1950), the term "isotope" was also loosely used to describe a nuclear species, i.e., a nuclide. Nuclides with equal mass number but different atomic number are called isobars (isobar = equal in weight). Isotones are nuclides of equal neutron number but different proton numbers. Nuclear isomers are atomic nuclei of a particular nuclide that have equal proton number and equal mass number, differ in energy content, and are long-lived (for example the two states shown among the decay schemes). Unstable nuclides are radioactive and are called radionuclides. Their decay products ('daughter' products) are called radiogenic nuclides. In summary: isotopes have equal proton number; isotones have equal neutron number; isobars have equal mass number (see beta decay); mirror nuclei have neutron and proton number exchanged; nuclear isomers are different energy states, long-lived or stable. About 270 stable and about 70 unstable (radioactive) nuclides exist in nature. There are three main types of natural radionuclides. Firstly, those whose half-lives T1/2 are at least 10% as long as the age of the earth (4.6×10^9 years). These are remnants of nucleosynthesis that occurred in stars before the formation of the solar system. For example, the isotope (T1/2 = 4.5×10^9 a) of uranium occurs in nature, but the shorter-lived isotope (T1/2 = 0.7×10^9 a) is 138 times rarer. The second group consists of isotopes such as (T1/2 = 1602 a), an isotope of radium, which are formed in the radioactive decay chains of uranium or thorium.
The third group consists of nuclides such as (radiocarbon) that are made by cosmic-ray bombardment of other elements. Many more than 1000 nuclides have been artificially produced. The known nuclides are shown in charts of the nuclides (see Weblinks). |
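The survival of a radionuclide over the age of the earth follows the standard decay law N/N0 = 2^(-t/T1/2). A small Python sketch, using the half-lives quoted above for the two uranium isotopes (the code and rounding are my own, illustrative only):

```python
def remaining_fraction(t, half_life):
    """Fraction of a radionuclide surviving after time t:
    N/N0 = 2**(-t / T_half)."""
    return 2.0 ** (-t / half_life)

# The long-lived uranium isotope (T1/2 = 4.5e9 a) over the age of the
# earth (4.6e9 a): roughly half survives, so it is still found in nature.
print(round(remaining_fraction(4.6e9, 4.5e9), 2))   # 0.49
# The shorter-lived isotope (T1/2 = 0.7e9 a) is far more depleted.
print(round(remaining_fraction(4.6e9, 0.7e9), 4))   # 0.0105
```

This is why half-lives shorter than about 10% of the earth's age leave no primordial survivors at all, matching the article's classification.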
0.995922 | 1 | Next: , Previous: Random Numbers, Up: Scientific Functions 9.6 Combinatorial Functions Commands relating to combinatorics and number theory begin with the k key prefix. The k g (calc-gcd) [gcd] command computes the Greatest Common Divisor of two integers. It also accepts fractions; the GCD of two fractions is defined by taking the GCD of the numerators, and the LCM of the denominators. This definition is consistent with the idea that ‘a / gcd(a,x)’ should yield an integer for any ‘a’ and ‘x’. For other types of arguments, the operation is left in symbolic form. The k l (calc-lcm) [lcm] command computes the Least Common Multiple of two integers or fractions. The product of the LCM and GCD of two numbers is equal to the product of the numbers. The k E (calc-extended-gcd) [egcd] command computes the GCD of two integers ‘x’ and ‘y’ and returns a vector ‘[g, a, b]’ where ‘g = gcd(x,y) = a x + b y’. The ! (calc-factorial) [fact] command computes the factorial of the number at the top of the stack. If the number is an integer, the result is an exact integer. If the number is an integer-valued float, the result is a floating-point approximation. If the number is a non-integral real number, the generalized factorial is used, as defined by the Euler Gamma function. Please note that computation of large factorials can be slow; using floating-point format will help since fewer digits must be maintained. The same is true of many of the commands in this section. The k d (calc-double-factorial) [dfact] command computes the “double factorial” of an integer. For an even integer, this is the product of even integers from 2 to ‘N’. For an odd integer, this is the product of odd integers from 3 to ‘N’. If the argument is an integer-valued float, the result is a floating-point approximation. This function is undefined for negative even integers. The notation ‘N!!’ is also recognized for double factorials.
The k c (calc-choose) [choose] command computes the binomial coefficient ‘N’-choose-‘M’, where ‘M’ is the number on the top of the stack and ‘N’ is second-to-top. If both arguments are integers, the result is an exact integer. Otherwise, the result is a floating-point approximation. The binomial coefficient is defined for all real numbers by ‘N! / (M! (N-M)!)’. The H k c (calc-perm) [perm] command computes the number-of-permutations function ‘N! / (N-M)!’. The k b (calc-bernoulli-number) [bern] command computes a given Bernoulli number. The value at the top of the stack is a nonnegative integer ‘n’ that specifies which Bernoulli number is desired. The H k b command computes a Bernoulli polynomial, taking ‘n’ from the second-to-top position and ‘x’ from the top of the stack. If ‘x’ is a variable or formula the result is a polynomial in ‘x’; if ‘x’ is a number the result is a number. The k e (calc-euler-number) [euler] command similarly computes an Euler number, and H k e computes an Euler polynomial. Bernoulli and Euler numbers occur in the Taylor expansions of several functions. The k s (calc-stirling-number) [stir1] command computes a Stirling number of the first kind, given two integers ‘n’ and ‘m’ on the stack. The H k s [stir2] command computes a Stirling number of the second kind. These are the number of ‘m’-cycle permutations of ‘n’ objects, and the number of ways to partition ‘n’ objects into ‘m’ non-empty sets, respectively. The k p (calc-prime-test) command checks if the integer on the top of the stack is prime. For integers less than eight million, the answer is always exact and reasonably fast. For larger integers, a probabilistic method is used (see Knuth vol. II, section 4.5.4, algorithm P). The number is first checked against small prime factors (up to 13). Then, any number of iterations of the algorithm are performed. Each step either discovers that the number is non-prime, or substantially increases the certainty that the number is prime.
After a few steps, the chance that a number was mistakenly described as prime will be less than one percent. (Indeed, this is a worst-case estimate of the probability; in practice even a single iteration is quite reliable.) After the k p command, the number will be reported as definitely prime or non-prime if possible, or otherwise “probably” prime with a certain probability of error. The normal k p command performs one iteration of the primality test. Pressing k p repeatedly for the same integer will perform additional iterations. Also, k p with a numeric prefix performs the specified number of iterations. There is also an algebraic function ‘prime(n)’ or ‘prime(n,iters)’ which returns 1 if ‘n’ is (probably) prime and 0 if not. The k f (calc-prime-factors) [prfac] command attempts to decompose an integer into its prime factors. For numbers up to 25 million, the answer is exact although it may take some time. The result is a vector of the prime factors in increasing order. For larger inputs, prime factors above 5000 may not be found, in which case the last number in the vector will be an unfactored integer greater than 25 million (with a warning message). For negative integers, the first element of the list will be -1. For inputs -1, 0, and 1, the result is a list of the same number. The k n (calc-next-prime) [nextprime] command finds the next prime above a given number. Essentially, it searches by calling calc-prime-test on successive integers until it finds one that passes the test. This is quite fast for integers less than eight million, but once the probabilistic test comes into play the search may be rather slow. Ordinarily this command stops for any prime that passes one iteration of the primality test. With a numeric prefix argument, a number must pass the specified number of iterations before the search stops. (This only matters when searching above eight million.) 
You can always use additional k p commands to increase your certainty that the number is indeed prime. The I k n (calc-prev-prime) [prevprime] command analogously finds the next prime less than a given number. The k t (calc-totient) [totient] command computes the Euler “totient” function, the number of integers less than ‘n’ which are relatively prime to ‘n’. The k m (calc-moebius) [moebius] command computes the Moebius “mu” function. If the input number is a product of ‘k’ distinct factors, this is ‘(-1)^k’. If the input number has any duplicate factors (i.e., can be divided by the same prime more than once), the result is zero. |
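As an illustration of what k E (egcd) returns, here is a minimal Python sketch of the extended Euclidean algorithm (my own code, not Calc's implementation), producing the same ‘[g, a, b]’ vector:

```python
def egcd(x, y):
    """Return [g, a, b] with g = gcd(x, y) = a*x + b*y,
    mirroring what Calc's `k E` (egcd) pushes on the stack."""
    a0, b0, a1, b1 = 1, 0, 0, 1
    while y:
        q, x, y = x // y, y, x % y
        a0, a1 = a1, a0 - q * a1
        b0, b1 = b1, b0 - q * b1
    return [x, a0, b0]

g, a, b = egcd(240, 46)
print(g, a, b)                  # 2 -9 47
assert g == a * 240 + b * 46    # the defining Bezout identity
```

The coefficients a and b are what make egcd useful for computing modular inverses, where plain gcd would not suffice.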
0.982684 | 1 | 9. Remove parentheses and simplify (x + y)^5 We start with an x^5 term, and each succeeding term has one less factor of x and one more factor of y until we get to y^5. We get the coefficients from the 5th row of Pascal's triangle. x^5 + 5x^4y + 10x^3y^2 + 10x^2y^3 + 5xy^4 + y^5 |
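The coefficients 1, 5, 10, 10, 5, 1 are the binomial coefficients C(5,k); a quick Python check:

```python
from math import comb

# Row 5 of Pascal's triangle = binomial coefficients C(5, k)
row = [comb(5, k) for k in range(6)]
print(row)                 # [1, 5, 10, 10, 5, 1]
# Sanity check: any row of Pascal's triangle sums to 2**n
print(sum(row) == 2 ** 5)  # True
```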
0.967622 | 1 | Slots Rules Slots is a game that has been around for centuries. Players and casinos alike have found mutual benefit in this game; that is why casinos devote their largest gaming rooms to the slots parlors and players play this game more than any other casino game. The rules of the game are extremely easy. If you want to feel the retro effect of the slot machines, you should turn to the land casinos, where it is still possible to find them. But now it is really difficult to find any reel machine that does not operate with a random number generator. How to Play Slots Nowadays all the slots are very similar in operation. You put your bill or your coin into the machine, push the button (rarely pull the lever) and play the game. Actually you are expecting the miracle to happen. In the video slots you can also choose the pay lines. There can be several dozen pay lines, and the bet is increased each time you want to add another pay line, which can possibly bring you winnings. If you are not playing progressive slots, you need not play the maximum quantity of coins. There are different types of symbols used in slots. Besides average symbols, there are special symbols. One type is called wild symbols. This type of symbol can replace any symbol in the pay line. That is how a winning pay line can be formed. There is another type called scatter symbols. If this type of symbol shows up on a pay line in a certain combination, it can give more chances to win. For example, a scatter symbol can trigger a series of free spins or some bonus games. There are also multiplier symbols, which multiply the winnings of the player. It is really good if the credits are increased by a large sum of money. Also the winnings can be added to the player's account and he will be able just to push the "Replay" button to repeat the game.
There is only one conclusion, which is called "Use your brain in gaming!" No matter how simple and easy the slots can seem, always check the rules of each machine you are going to gamble. You should also read the pay table as it reveals various peculiarities of the machine and the possible winnings. |
0.944599 | 1 | If you think too much about playing an icon, it will immobilize you. You have to treat it like a fresh character. Sure, there are guidelines so that you don't upset people, but you have to find your own way. Erica Durance |
0.972918 | 1 | time loop perfect chess machine Chess solver Combining tricks from both quantum and time-traveling computers, this circuit searches the move/countermove possibilities for all possible games of chess to find the best possible next move. It consists of a chain of about 100 "single-move-units." |
0.915596 | 1 | Super Escape He is an innocent person. Let's start designing a road map and hand-making tools able to drill through the wall for his escape mission. |
0.953829 | 1 | You may also like: Lunar Leaper, High Jumping, Escape from Planet Earth. If a projectile is fired at an angle of 45° to the horizontal, then the initial speed of the projectile and the distance it travels are related by $d = \frac{v^2}{g}$. Use the conservation of energy in order to find the speed of the stone. The energy stored in a strip which is extended by $\delta x$ is equal to $E = \frac{k \delta x^2}{2}$. |
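The 45° range relation d = v²/g can be inverted to recover the launch speed from a measured distance. A small Python sketch (g = 9.8 m/s² and the sample numbers are my own illustrative values, not from the problem):

```python
from math import sqrt

g = 9.8  # m/s^2, standard gravitational acceleration (assumed value)

def range_45(v):
    """Range of a projectile fired at 45 degrees with initial speed v."""
    return v ** 2 / g

def speed_for_range(d):
    """Invert d = v**2 / g to recover the launch speed."""
    return sqrt(g * d)

print(round(range_45(14.0), 3))        # 20.0  (metres)
print(round(speed_for_range(20.0), 3)) # 14.0  (m/s)
```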
0.931521 | 1 | C# Programming > Web Javascript Multi-line String Multiline Strings in Javascript Most web developers nowadays have had to deal with javascript at one point or another. While javascript is an extremely powerful tool for programming web applications, it has many shortcomings. Among them is a particularly minor annoyance: there is no straightforward way to declare multi-line strings in javascript. An alternative is to use a regular string for each line and concatenate them together like so: "line 1" + "line 2" + "line 3" This can be very time consuming to write, especially if you have an existing block of text you want to assign to a variable during coding. So what we are going to do is write a tool in C# to convert a block of text into a javascript-friendly string. Formatting a Line Formatting a single line is extremely simple: add quotes around the line of text and follow it up with an addition symbol (except for the last line of course): "[line]" + There is one thing we have to consider: if the original string already has quotes, this will produce an invalid javascript string. So instead of double-quotes let's use single-quotes and escape any existing single quotes: original text: I don't know; escaped text: I don\'t know; javascript string: 'I don\'t know' + //C# code string text = "I don't know"; string escaped = text.Replace("'", @"\'"); string javascript = "'" + escaped + "'" + " +"; Blocks of Text To format a block of text, we can simply go through each line of text and format it. In C# an easy way to read a string line-by-line is with the System.IO.StringReader class.
using System.IO; StringBuilder writer = new StringBuilder(); using (StringReader reader = new StringReader(blockText)) { while (reader.Peek() != -1) { string line = reader.ReadLine(); //Format the line line = line.Replace("'", @"\'"); line = "'" + line + "'"; //Add a '+' if this is not the last line if (reader.Peek() != -1) line += " +"; writer.AppendLine(line); } } Here we can add a very useful optional feature. Imagine that you copy a chunk of HTML code that you want to convert into a string. Chances are, that HTML code is going to be indented to give it structure (easier to read). We can have our javascript multi-line tool remove as much whitespace as possible from the beginning of each line while preserving the structure. The trick is to first scan all the lines in the text and look for the minimum amount of space to remove, then remove that amount from all the lines. (Side Note: Our String Processing Library PRO component comes with this function). un-indenting example If you would like to see the source code for this function download the project files at the end of the article, it is included. String to Plain-text Another useful feature is to convert those javascript strings back to plain text; this way we can easily make modifications and then re-convert the string back to a multi-line string. This part is easier since we just have to remove the quotes around the text and the concatenation operators: 'javascript string' + becomes javascript string Then we remove the backslashes that escaped existing single-quotes: I don\'t know becomes I don't know That's it; if we process each line individually, the result will be a string back in its original format. Final Notes Keep in mind that this is intended to make it easier for someone writing a web application to visualize multi-line strings in the javascript code. When javascript executes the code, the string will actually be a single line in memory.
To create a string that is actually multiple lines then you have to explicitly add a line-separator (<br /> for example) at the end of each line. For the actual C# implementation of the ideas discussed above, please download the c# source code below. Back to C# Article List |
0.953977 | 1 | Use Derivatives to solve problems: Distance-time Optimization A problem to minimize (optimization) the time taken to walk from one point to another is presented. First an applet is used to fully understand the problem, and then an analytical method, using derivatives and other calculus concepts and theorems, is developed in order to find an analytical solution to the problem. Problem: You decide to walk from point A (see figure below) to point C. To the south of the road through BC, the terrain is difficult and you can only walk at 3 km/hr. However, along the road BC you can walk at 5 km/hr. The distance from point A to the road is 5 km. The distance from B to C is 10 km. What path do you have to follow in order to arrive at point C in the shortest (minimum) time possible? Interactive Tutorial We first try to understand the problem using the applet below. There are several possible paths one can follow to go from A to C. On the left panel of the applet possible paths are shown: you may walk from point A to a certain point P, somewhere on the road between B and C, and continue along the road to get to point C. The question is: what is the position of point P that will minimize the time taken to go from A to C? Use the mouse to press and drag point P. What you are doing here is changing the distance BP = x. On the right panel you have the time plotted against x. As you can see, there seems to be one value of x for which the time is smallest (minimum). You may also plot the whole graph using the "on" and "off" buttons above it. The total time t taken from A to C is calculated as follows: t = distance AP / 3 km/hr + distance PC / 5 km/hr Analytical Tutorial We now look at a solution using derivatives and other calculus concepts. Let distance BP be equal to x. Let us find a formula for the distances AP and PC.
Using the Pythagorean theorem, we can write: distance AP = sqrt(5^2 + x^2), distance PC = 10 - x. We now find time t1 to walk distance AP (time = distance / speed): t1 = distance AP / 3 = sqrt(5^2 + x^2) / 3. Time t2 to walk distance PC is given by t2 = distance PC / 5 = (10 - x) / 5. The total time t is found by adding t1 and t2: t = sqrt(5^2 + x^2) / 3 + (10 - x) / 5. We might consider the domain of function t as being all values of x in the closed interval [0 , 10]; for values of x such that point P is to the left of B or to the right of C, time t will increase. To find the value of x that gives t its minimum, we need the first derivative dt/dx (t is a function of x): dt/dx = (x/3) / sqrt(5^2 + x^2) - 1/5. If t has a minimum value, it happens at x such that dt/dx = 0: (x/3) / sqrt(5^2 + x^2) - 1/5 = 0. Solve the above for x. Rewrite the equation as follows: 5x = 3 sqrt(5^2 + x^2). Square both sides: 25x^2 = 9(5^2 + x^2). Group like terms and simplify: 16x^2 = 225. Solve for x (x > 0): x = sqrt(225/16) = 3.75 km. dt/dx has one zero. The table of sign of the first derivative dt/dx is shown below. [table of sign of dt/dx] The first derivative dt/dx is negative for x < 3.75, equal to zero at x = 3.75 and positive for x > 3.75. Also the values of t at x = 0 and x = 10 (the endpoints of the domain of t) are respectively about 3.67 hrs and 3.73 hrs. The value of t at x = 3.75 is about 3.33 hrs, and it is the smallest. The answer to our problem is that one has to walk to point P such that BP = 3.75 km, then proceed along the road to C in order to get there in the shortest possible time. 1 - Solve the same problem as above but with the following values. [diagram for exercise] Solution to the above exercise: x = 6.26 km (rounded to 2 decimal places). More references on calculus problems |
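The analytic answer is easy to verify numerically. The sketch below (Python, illustrative) evaluates the time function and its derivative from the derivation above:

```python
from math import sqrt

def t(x):
    """Total walking time: rough terrain A->P at 3 km/hr, road P->C at 5 km/hr."""
    return sqrt(5**2 + x**2) / 3 + (10 - x) / 5

def dt_dx(x):
    """First derivative of t with respect to x."""
    return (x / 3) / sqrt(5**2 + x**2) - 1 / 5

# The analytic minimum is at x = 15/4 = 3.75 km
print(abs(dt_dx(3.75)) < 1e-12)            # True: the derivative vanishes there
print(round(t(3.75), 2))                   # 3.33  hrs at the minimum
print(round(t(0), 2), round(t(10), 2))     # 3.67 3.73  hrs at the endpoints
```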
0.92083 | 1 | Electric Flux Assignment Help Electrostatics - Electric Flux Electric Flux We know that the number of field lines crossing a unit area placed normal to the field at a point is a measure of the strength of the electric field E at that point. If we place a small planar element of area ΔS normal to E at this point, the number of electric field lines crossing this area element is proportional to E(ΔS). Note that it is not proper to say that the number of field lines crossing the area is equal to E(ΔS); the number of field lines is, after all, a matter of how many field lines we choose to draw. What is physically significant is the relative number of field lines crossing a given area at different points. If we tilt the area element by angle θ (or we tilt E with respect to the area element by angle θ, fig. 1(c)), the number of field lines crossing the area will be smaller. As the projection of the area element normal to E is ΔS cos θ (or the component of E along the normal to the area element is E cos θ), the number of electric field lines crossing area ΔS is proportional to E ΔS cos θ. This is shown in fig. 1(c). Hence the electric flux ΔΦ through an area element ΔS in an electric field E is defined as ΔΦ = E · ΔS = E (ΔS) cos θ. This is proportional to the number of field lines cutting the area element. Here θ is the smaller angle between E and ΔS; for a closed surface, θ is the angle between E and the outward normal to the area element. Now, ΔΦ = E · ΔS = E ΔS cos θ = E (ΔS cos θ), i.e. E times the projection of the area normal to E. Also, ΔΦ = E · ΔS = (E cos θ) ΔS, i.e. the component of E along the normal to the area element times the magnitude of the area element. When E is normal to the area element, θ = 0 and the electric flux is maximum; when E is along the area element, θ = 90° and the electric flux is zero. When θ > 90°, cos θ is negative and therefore the electric flux is negative. To calculate the total electric flux through any given surface, we have to divide the given surface into small area elements, calculate the flux at each element and add them up.
Therefore, the total electric flux through a surface S is Φ_E ≈ Σ E · ΔS. However, when we take the limit ΔS → 0, the summation can be written as an integral and we obtain the exact value of the electric flux: Φ_E = ∮ E · dS. The circle on the integral sign indicates that the surface of integration is a closed surface. Electric flux is a scalar quantity. Units of Φ_E = unit of E × unit of S = N C⁻¹ × m² = N m² C⁻¹. Dimensional formula of Φ_E: [MLT⁻²][AT]⁻¹ × [L²] = [ML³T⁻³A⁻¹]. |
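The element-by-element summation Φ_E ≈ Σ E · ΔS can be sketched numerically. A minimal Python example for a uniform field through a flat tilted surface (all numerical values are hypothetical, for illustration only), checked against the exact result E A cos θ:

```python
import math

# Uniform field (N/C) and a flat surface tilted w.r.t. the field
E = (0.0, 0.0, 100.0)                            # field along +z
theta = math.radians(60.0)                       # angle between E and the surface normal
n_hat = (math.sin(theta), 0.0, math.cos(theta))  # unit normal of the surface
area, n_elements = 2.0, 400                      # total area (m^2), number of elements

dS = area / n_elements
E_dot_n = sum(Ei * ni for Ei, ni in zip(E, n_hat))

# Flux as the sum over small area elements, Phi ~ sum E . dS
# (the field is uniform here, so every element contributes equally)
flux = sum(E_dot_n * dS for _ in range(n_elements))

exact = 100.0 * area * math.cos(theta)           # E * A * cos(theta)
```

For a non-uniform field the per-element dot product would vary, but the summation structure is the same.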
0.911784 | 1 | What is 17 in Roman numerals? Quick Answer The Arabic number "17" is written as "XVII" in Roman numerals. The "X" represents a quantity of ten, the "V" represents a quantity of five, and each "I" represents a quantity of one. The sum of the Roman numerals gives the numeric value. Full Answer There are several letters in the Roman numeral system used to represent numeric values. Each individual letter is translated from the Roman numeral to its Arabic value, and the values are then added together to give the Arabic numeral equivalent. "XVII", for example, can be translated as "10+5+1+1", which equals 17. The Arabic numeral is then "17". Similarly, "XXVIII" can be translated as "10+10+5+1+1+1", which equals 28. The Arabic numeral is then "28". |
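The letter-by-letter translation described above can be sketched in Python; `roman_to_arabic` is a hypothetical helper name, and the subtractive-pair rule (e.g. "IV" = 4) is included for completeness even though the examples here are purely additive:

```python
ROMAN_VALUES = {'I': 1, 'V': 5, 'X': 10, 'L': 50, 'C': 100, 'D': 500, 'M': 1000}

def roman_to_arabic(numeral: str) -> int:
    """Translate each letter to its value and add; a letter is subtracted
    instead when it precedes a larger one (e.g. the 'I' in 'IV')."""
    total = 0
    for pos, ch in enumerate(numeral):
        value = ROMAN_VALUES[ch]
        if pos + 1 < len(numeral) and ROMAN_VALUES[numeral[pos + 1]] > value:
            total -= value
        else:
            total += value
    return total
```

For example, `roman_to_arabic("XVII")` adds 10+5+1+1 to give 17.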
0.992533 | 1 | All-Star Puzzles Number Square 1 Put the numbers in their places in this deceptively tough Logic Puzzle. March 15, 2004 A Number Square is a 3x3 arrangement with a different number 1-9 in each cell. Given the clues below, can you determine the composition of Number Square 1? 1. The three numbers in the upper-left to lower-right diagonal sum to 24. 2. The four numbers in the corner cells of the square add to 24. 3. The three digits in the lower-left to upper-right diagonal sum to 14. 4. Adding the three bottom-row numbers equals 18. 5. Summing the three digits in the rightmost column gives 16. 6. The number in the middle cell of the leftmost column isn't 2. |
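Since a 3x3 square admits only 9! fillings, the clues can be checked exhaustively. A brute-force sketch in Python (an illustrative check, not the puzzle's intended solving path; cell labels a-i are our own convention):

```python
from itertools import permutations

# Cells a..i fill the square left-to-right, top-to-bottom:
#   a b c
#   d e f
#   g h i
def solve():
    """Exhaustively test all 9! placements of 1-9 against the six clues."""
    solutions = []
    for a, b, c, d, e, f, g, h, i in permutations(range(1, 10)):
        if (a + e + i == 24               # clue 1: main diagonal
                and a + c + g + i == 24   # clue 2: four corners
                and g + e + c == 14       # clue 3: anti-diagonal
                and g + h + i == 18       # clue 4: bottom row
                and c + f + i == 16       # clue 5: rightmost column
                and d != 2):              # clue 6: middle-left cell
            solutions.append((a, b, c, d, e, f, g, h, i))
    return solutions
```

Running the search confirms the clues pin down exactly one square.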
0.940111 | 1 | Open Access Quantum conductance of silicon-doped carbon wire nanojunctions Dominik Szczȩśniak¹,², Antoine Khater¹, Zygmunt Ba̧k², Radosław Szczȩśniak³ and Michel Abou Ghantous⁴ Nanoscale Research Letters 2012, 7:616 DOI: 10.1186/1556-276X-7-616 Received: 28 July 2012 Accepted: 11 October 2012 Published: 7 November 2012 The quantum electronic conductance across nanojunctions made of silicon-doped carbon wires between carbon leads is investigated. This is done by an appropriate generalization of the phase field matching theory for the multi-scattering processes of electronic excitations at the nanojunction and the use of the tight-binding method. Our calculations of the electronic band structures for carbon, silicon, and diatomic silicon carbide are matched with the available corresponding density functional theory results to optimize the required tight-binding parameters. Silicon and carbon atoms are treated on the same footing by characterizing each with their corresponding orbitals. Several types of nanojunctions are analyzed to sample their behavior under different atomic configurations. We calculate for each nanojunction the individual contributions to the quantum conductance for the propagating σ, π, and σ* electron incidents from the carbon leads. The calculated results show a number of remarkable features, which include the influence of the ordered periodic configurations of silicon-carbon pairs and the suppression of quantum conductance due to minimum substitutional disorder and artificially organized symmetry on these nanojunctions. Our results also demonstrate that the phase field matching theory is an efficient tool to treat the quantum conductance of complex molecular nanojunctions. Keywords: Nanoelectronics; Quantum wires; Electronic transport; Finite-difference methods. PACS: 85.35.-p; 73.63.Nm; 31.15.xf. Quantitative analysis of electronic quantum transport in nanostructures is essential for the development of nanoelectronic devices [1].
The monatomic linear carbon wire (MLCW) systems are expected in this context to have potentially interesting technological applications, in particular as connecting junction elements between larger device components [2]. In this respect, electronic quantum transport properties are the key features of such wire nanojunctions [3]. Carbon exists in nature under a wide range of allotropic forms, such as two-dimensional graphene [4], the cage fullerenes [5], and the quasi one-dimensional carbon nanotubes [6]. These forms exhibit exceptional physical properties and can be considered as promising components for future nanodevices [7]. The discovery of MLCW [8–14] turns attention to another intriguing carbon allotropic form. In the experiment conducted recently by Jin et al. [14], MLCW was produced by directly removing carbon atoms row by row from graphene sheets, leading to a relatively stable freestanding nanostructure. At present, the available experimental data do not provide essential knowledge about the electronic properties of MLCW systems, and only theoretical studies shed some light on these properties. Furthermore, although the MLCW systems were investigated for a long time from the theoretical point of view [15–26], their interest was not highlighted until recently due to the attention paid to other carbon allotropic forms. It has been shown in particular that, from the structural point of view, MLCW can form either as cumulene wires (interatomic double bonds) or polyyne wires (alternating interatomic single and triple bonds) [14, 17, 19, 27, 28]. However, there is no straightforward answer as to which of these two structures is the favorable one; experimental studies do not give a satisfactory answer, and theoretical calculations yield predictions which depend on the applied computational methods.
Density functional theory (DFT) calculations predict double-bond structures [29, 30], whereas ab initio Hartree-Fock (HF) results favor alternating-bond systems [15–18, 27]. This situation arises from the fact that DFT tends to underestimate bond alternation (second-order Jahn-Teller effect), while HF overestimates it [27]. More recently, first-principle calculations have indicated [31] that both structures are stable and present the mechanical characteristics of a purely one-dimensional nanomaterial. Moreover, on the basis of first-principle calculations [31–42], the cumulene MLCW wires are expected to be almost perfect conductors, even better than linear gold wires [29], while the corresponding polyyne wires are semiconducting [41]. It is also worth noting that the MLCW cumulene system may exhibit conductance oscillations with the even and odd numbers of the wire atoms [28, 42]. In the present work, we consider in particular the problem of the electronic quantum transport across molecular nanojunctions made up of silicon-doped carbon wires, prepared in ordered or substitutionally disordered configurations as in the schematic representation of Figure 1, where the nanojunctions are between pure MLCW wire leads. This problem has not been considered previously and is still unsolved to our knowledge. The interest in the quantum transport of such nanojunctions arises from the fact that chemical defects or substitutional disorder may have a significant impact on their transport properties [43]. Chemical impurities doping the nanojunction may even allow the control of the transport for such nanostructures [44]. The properties of the nanoelectronic device and its functionality may hence be greatly affected by, or even built on, such ordered and disordered configurations.
The interest in silicon carbide, furthermore, stems from the fact that it is considered a good substrate material for the growth of graphene [45] and may produce interesting effects in its interactions with Si or C [46]. Figure 1 Schematic representation of a finite silicon-doped carbon wire nanojunction between two semi-infinite quasi one-dimensional carbon leads. The irreducible region and matching domains are distinguished (please see the ‘Phase field matching theory’ subsection in the ‘Methods’ section for more details). The binding energies for a given atomic site and the coupling terms between neighbor atoms, with the corresponding interatomic distances, are depicted. The n and n′ indices for the coupling parameters are dropped for simplicity. The electrons which contribute to transport present characteristic wavelengths comparable to the size of molecular nanojunctions, leading to quantum coherent effects. The transport properties of a given nanojunction are then described in terms of the Landauer-Büttiker theory [47, 48], which relates transmission scattering to quantum conductance. Several approaches have been developed in order to calculate the scattering transmission and reflection cross sections in nanostructures, where the most popular are based on first-principle calculations [49, 50] and semiempirical methods using the non-equilibrium Green’s function formalism [51, 52]. In the present work, we investigate the electronic scattering processes on the basis of the phase field matching theory (PFMT) [53, 54], originally developed for the scattering of phonons and magnons in nanostructures [55–59]. Our theoretical method is based on the appropriate phase matching of the Bloch states of the ideal leads to the local states in the scattering region.
In this approach, the electronic properties of the system are described in the framework of the tight-binding formalism (TB), which is widely exploited for electronic transport calculations [54, 60–63] and for simulating the STM images of nanostructures [64, 65]. In particular, we employ appropriate Slater-Koster [66] type Hamiltonian parameters calculated on the basis of Harrison’s tight-binding theory (HTBT) [67]. The PFMT method, which is formally equivalent to the method of non-equilibrium Green’s functions [68], can consequently be considered as a transparent and efficient mathematical tool for the calculation of the electronic quantum transport properties for a wide range of molecular-sized nanojunction systems. The present paper is organized in the following manner. In the ‘Methods’ section, we give a detailed discussion of the theoretical PFMT formalism. Our numerical results, which incorporate propagating and evanescent electronic states, are presented per individual lead mode in the ‘Results and discussion’ section. Also presented are the total conductance spectra; they are compared with results based on first-principle calculations when available. Finally, the discussion and conclusions are given in the ‘Conclusions’ section. Appropriate appendices which supplement the theoretical model are also presented. Theoretical model and propagating states The schematic representation of the system under study, with an arbitrary nanojunction region, is presented in Figure 1. With reference to the Landauer-Büttiker theory for the analysis of the electronic scattering processes [47, 48], this system is divided into three main parts, namely the finite silicon-doped carbon wire nanojunction region, made up of a given composition of carbon (black) and silicon (orange) atoms, and two other regions to the right and left of the nanojunction, which are semi-infinite quasi one-dimensional carbon leads.
Moreover, for the purpose of quantum conductance calculations, the so-called irreducible region and the matching domains are depicted (see the ‘Phase field matching theory’ subsection for more details). Figure 1 is used throughout the ‘Methods’ section as a graphical reference for the analytical discussion. The system presented in Figure 1 is described by the general tight-binding Hamiltonian block matrix: H = ( ⋱ ⋱ ; ⋱ E_{N−1,N−1} H_{N,N−1} 0 ; … H_{N,N−1} E_{N,N} H_{N+1,N} … ; 0 H_{N+1,N} E_{N+1,N+1} ⋱ ; ⋱ ⋱ ). (1) This is defined in general for a system of N_x inequivalent atoms per unit cell, where N_l denotes the number of basis orbitals per atomic site, assuming spin degeneracy. In Equation 1, E_{i,j} denotes the on-diagonal matrices composed of both diagonal ε_l^{n,α} and off-diagonal h_{l,l′,m}^{n,n′,β} elements for a selected unit cell. In contrast, the H_{i,j} matrices contain only off-diagonal elements for interactions between different unit cells. The index α identifies the atom type, C or Si, on the n-th site in a unit cell. Each diagonal element is characterized by the lower index l for the angular momentum state. The off-diagonal elements h_{l,l′,m}^{n,n′,β} describe the m-type bond (m = σ, π) between l and l′ nearest-neighbor states. The index β identifies the type of interacting neighbors, C-C, Si-Si, or Si-C. The h_{l,l′,m}^{n,n′,β} elements are consistent with the Slater-Koster convention [66] and may be expressed in the framework of the HTBT [67] as: h_{l,l′,m}^{n,n′,β} = η_{l,l′,m} ħ² / (m_e d_β²), (2) where the η_{l,l′,m} values are the dimensionless Harrison coefficients; m_e, the electron mass in vacuum; and d_β, the interatomic distance for the interacting neighbors. Explicit forms of the E_{i,j} and H_{i,j} matrices are given in Appendix 1. The tight-binding parameter schemes are illustrated in Figure 1; however, it is noteworthy that the n and n′ indices for the coupling parameters are dropped for simplicity in this figure.
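The Harrison-style scaling of Equation 2 can be evaluated directly. A minimal Python sketch with exact SI constants; the function name `hopping_eV` and the sample inputs are illustrative, not the paper's Table 1 parameters:

```python
HBAR = 1.054571817e-34    # reduced Planck constant, J*s
M_E = 9.1093837015e-31    # electron mass, kg
EV = 1.602176634e-19      # 1 eV in J
ANGSTROM = 1e-10          # 1 angstrom in m

def hopping_eV(eta: float, d_angstrom: float) -> float:
    """Harrison-style hopping integral h = eta * hbar^2 / (m_e * d^2), in eV.
    eta is the dimensionless coefficient, d the interatomic distance."""
    d = d_angstrom * ANGSTROM
    return eta * HBAR**2 / (M_E * d**2) / EV
```

With eta = 1 and d = 1 Å this gives the familiar scale ħ²/(m_e d²) ≈ 7.62 eV, and the 1/d² falloff reproduces the distance dependence plotted in Figure 2.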
In our calculations, the single-particle electronic wave functions are expanded in the orthonormal basis of local atomic wave functions φ_l(r) as follows: Ψ(r, k) = Σ_{l,n,N} c_l(r_n − R_N, k) φ_l(r − R_N − r_n). (3) In Equation 3, k is the real wave vector; R_N, the position vector of the selected unit cell; and r_n, the position vector of the n-th atom in the selected unit cell. For the ideal leads, the wave function coefficients c_l(r_n − R_N, k) are characterized under the Bloch-Floquet theorem in consecutive unit cells by the following phase relation: c_l(r_n − R_{N+1}, k) = z c_l(r_n − R_N, k), (4) where z is the phase factor z_± = e^{±ikR_N}, which corresponds here to waves propagating to the right (+) or to the left (−). The electronic equations of motion for a unit cell of the leads, independent of N, may be expressed in square matrix form, with an orthonormal minimal basis set of local wave functions, as follows: (E I − M_d) × c(k, E) = 0. (5) E stands for the electron eigenvalues, and I is the identity matrix, while the dynamical matrix M_d contains the Hamiltonian matrix elements and the z phase factors; c(k, E) is the vector of size N_x × N_l defined as follows: c(k, E) = [c_s(r_1, k, E), c_{p_x}(r_1, k, E), c_{p_y}(r_1, k, E), c_{p_z}(r_1, k, E), …, c_s(r_n, k, E), …, c_{p_z}(r_n, k, E)]^T. (6) Equations 5 and 6 give the N_x × N_l eigenvalues with corresponding eigenvectors, which determine the electronic structure of the lead system, where l under the vector c_l runs over the N_l = 4 orbitals s, p_x, p_y, p_z. Note that the choice of an orthonormal minimal basis set of local wave functions may result initially in an inadequate description of the considered electronic eigenvalues. However, as can be seen later, a proper choice of the TB on-site energies and coupling terms allows us to obtain agreement with the DFT results. This is a systematic procedure in our calculations.
Evanescent states The complete description of the electronic states on the ideal leads requires a full understanding of both the propagating and evanescent electronic states on the leads. This arises because the silicon-doped nanojunction breaks the perfect periodicity of the infinite leads and forbids a formulation of the problem only in terms of the pure Bloch states as given in Equation 5. Depending on the complexity of a given electronic state, it follows that the evanescent waves may be defined by the phase factors for purely imaginary wave vectors k = iκ, such that z = z_± = e^{∓κR_N}, (8) or for complex wave vectors k = κ₁ + iκ₂, such that z = z_± = e^{(±iκ₁∓κ₂)R_N}. (9) The phase factors of Equations 8 and 9 correspond to pairs of Hermitian evanescent and divergent solutions on the leads. Only the evanescent states are physically retained, with spatial evanescence to the right and left, away from the states localized at the nanojunction. It is important to note that an l-type evanescent state corresponds to energies beyond the propagating band structure for this state. The functional behavior of z(E) for the propagating and evanescent states on the leads may be obtained by various techniques. An elegant method presented previously for phonon and magnon excitations [59] is adapted here for the electrons. It is described, on the basis of Equations 4 and 6, by the generalized eigenvalue problem for z: ( [ EI − E_{N,N}  −H_{N,N−1} ; I  0 ] − z [ H_{N,N−1}^†  0 ; 0  I ] ) × [ c(R_N, z, E) ; c(R_{N−1}, z, E) ] = 0. (10) Equation 10 gives 2N_xN_l eigenvalues as an ensemble of N_xN_l pairs of z and z⁻¹. Only the solutions with |z| = 1 (propagating waves) and |z| < 1 (evanescent waves) are retained as physical ones. In Equation 10, k is then replaced by the appropriate energy variable E. Furthermore, for systems with more than one atom per unit cell, the matrices H_{N,N−1} and H_{N,N−1}^† in this procedure are singular.
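For a hypothetical single-orbital chain with on-site energy ε and hopping t, the generalized eigenvalue problem of Equation 10 collapses to the quadratic t z² − (E − ε) z + t = 0, and its companion-matrix form mirrors the linearization above. A minimal numerical sketch (parameters illustrative, not the carbon-lead values):

```python
import numpy as np

def z_factors(E: float, eps: float = 0.0, t: float = -1.0):
    """Roots z of t*z**2 - (E - eps)*z + t = 0, obtained as the eigenvalues
    of the companion matrix; hypothetical single-orbital chain parameters."""
    M = np.array([[(E - eps) / t, -1.0],
                  [1.0, 0.0]])
    return np.linalg.eigvals(M)

# Inside the band (|E - eps| < 2|t|) both roots lie on the unit circle
# (propagating Bloch waves); outside the band they split into an evanescent
# root |z| < 1 and its divergent partner |z| > 1, as discussed in the text.
```

The roots come in (z, 1/z) pairs, matching the statement that Equation 10 yields pairs of z and z⁻¹.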
In order to obtain the physical solutions, the eigenvalue problem of Equation 10 is reduced from the 2N_xN_l size problem to the appropriate 2N_l one, using the partitioning technique (please see Appendix 2). Phase field matching theory The scattering problem at the nanojunction is considered next. An electron incident along the leads has a given energy E and wave vector k, where E = E_γ(k) denotes the available dispersion curves for the γ = 1, 2, …, Γ propagating eigenmodes, where Γ corresponds to the total number of allowed solutions for the eigenvalue problem of the phase factors in Equation 10. In any given energy interval, however, these may be evanescent or propagating eigenmodes, and together they constitute a complete set of available channels necessary for the scattering analysis. The irreducible domain of atomic sites for the scattering problem includes the nanojunction domain itself, (N ∈ [0, D−1]), and the atomic sites on the left and right leads which interact with the nanojunction, as in Figure 1. This constitutes a necessary and sufficient region for our considerations, i.e., any supplementary atoms from the leads included in the calculations do not change the final results. The scattering at the boundary then yields the coherent reflected and transmitted fields, and in order to calculate these, we establish the system of equations of motion for the atomic sites (N ∈ [−1, D]) of the irreducible nanojunction domain. This procedure leads to the following general matrix equation: M_nano × V = 0. (11) M_nano is a (D+2)×(D+4) matrix composed of block rows of the form (−H_{N,N−1}, EI − E_{N,N}, −H_{N,N−1}^†), and the state vector V, of dimension D+4, is given as follows: V = [c_l(r_1 − R_{−2}, E), …, c_l(r_n − R_{−2}, E), …, c_l(r_1 − R_{D+1}, E), …, c_l(r_n − R_{D+1}, E)]^T. (12) Since the number of unknown coefficients in Equation 11 is always greater than the number of equations, such a set of equations cannot be solved directly.
Assuming that the incoming electron wave propagates from left to right in the eigenmode γ over the interval of energies E = E_γ, the field coefficients on the left and right sides of the irreducible nanojunction domain may be written as follows: c_l^L(r_n − R_N, z_γ, E_γ) = c_l(r_n, z_γ, E_γ) z_γ^N + Σ_{γ′∈Γ} c_l(r_n, z_{γ′}, E_γ) z_{γ′}^{−N} r_{γ,γ′}(E_γ) for N ≤ −1, (13) c_l^R(r_n − R_N, z_γ, E_γ) = Σ_{γ′∈Γ} c_l(r_n, z_{γ′}, E_γ) z_{γ′}^N t_{γ,γ′}(E_γ) for N ≥ D, (14) where γ′ ∈ Γ is an arbitrary channel into which the incident electron wave scatters, and c_l(r_n, z_{γ′}, E_γ) denotes the eigenvector of the lead dynamical matrix of Equation 6 for the inequivalent site n at z_{γ′} and E_γ. The terms r_{γ,γ′} and t_{γ,γ′} denote the scattering amplitudes for backscattering and transmission, respectively, from the γ into the γ′ eigenmodes, and constitute the basis of the Hilbert space which describes the reflection and transmission processes. Equations 13 and 14 are next used to transform the (D+2)×(D+4) matrix of the system of equations of motion, Equation 11, into an inhomogeneous (D+2)×(D+2) matrix for the scattering problem. This procedure leads to the new form of the vector: V = S × [r_{γ,γ′}, c_l(r_1 − R_0, E_γ), …, c_l(r_n − R_{D−1}, E_γ), t_{γ,γ′}]^T + [c_l(r_1, z_γ, E_γ) z_γ^{−2}, …, c_l(r_n, z_γ, E_γ) z_γ^{−1}, 0, …, 0]^T. (15) Here S is a rectangular sparse matrix of size (D+4)×(D+2): its first column carries the phase factors z_{γ′}^{−2} and z_{γ′}^{−1} of the reflected field on the left matching domain, its last column carries the corresponding phase factors of the transmitted field on the right matching domain, and its central block is the identity for the sites of the nanojunction domain. The vectors r_{γ,γ′} and t_{γ,γ′} are column vectors in the backscattering and transmission Hilbert basis. Substituting Equation 15 into Equation 11 yields an inhomogeneous system of equations as follows: M × [r_{γ,γ′}, c_l(r_1 − R_0, E_γ), …, c_l(r_n − R_{D−1}, E_γ), t_{γ,γ′}]^T = [M_1^in, M_2^in, 0, …, 0]^T. (16)
In Equation 16, M is the matched (D+2)×(D+2) square matrix, and the vector of dimension (D+2) which incorporates the M_1^in and M_2^in elements regroups the inhomogeneous terms of the incident wave. The explicit forms of the M matrix elements and the M_N^in vectors are presented in Appendix 3. In practice, Equation 16 can be solved using standard numerical procedures over the entire range of available electronic energies, yielding the coefficients c_l for the atomic sites on the nanojunction domain itself, as well as the reflection r_{γ,γ′}(E) and transmission t_{γ,γ′}(E) coefficients. The reflection and transmission coefficients give the reflection R_{γ,γ′}(E) and transmission T_{γ,γ′}(E) probabilities, respectively, by normalizing with respect to the group velocities in order to obtain the unitarity of the scattering matrix, as follows: R_{γ,γ′}(E) = (v_{γ′}/v_γ) |r_{γ,γ′}(E)|², (17) T_{γ,γ′}(E) = (v_{γ′}/v_γ) |t_{γ,γ′}(E)|², (18) where v_γ ≡ v_γ(E) denotes the group velocity of the incident electron wave in the eigenmode γ. The group velocities are calculated by a straightforward procedure as in Appendix 4. For evanescent eigenmodes, v_{γ′} = 0. Although the evanescent eigenmodes do not contribute to the electronic transport, they are required for the complete description of the scattering processes. Furthermore, using Equations 17 and 18, the overall reflection probability R_γ(E) for an electron incident in the γ eigenmode, and the total electronic reflection probability R(E) from all the eigenmodes, may be expressed, respectively, as follows: R_γ(E) = Σ_{γ′∈Γ} R_{γ,γ′}(E) and R(E) = Σ_{γ∈Γ} R_γ(E). (19) Similarly, for the transmission probabilities we may write the equivalent equations: T_γ(E) = Σ_{γ′∈Γ} T_{γ,γ′}(E) and T(E) = Σ_{γ∈Γ} T_γ(E). (20) The T_γ(E) and T(E) probabilities are very important for the electronic scattering processes since they correspond directly to the experimentally measurable observables.
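The unitarity relation R(E) + T(E) = 1 can be illustrated on the simplest nontrivial case: a single substitutional impurity in a single-orbital chain, where the wave-matching conditions admit closed-form amplitudes. A hedged Python sketch (the chain parameters are hypothetical and unrelated to the paper's Si-C values):

```python
import numpy as np

def scatter_single_impurity(E: float, eps0: float, t: float = -1.0):
    """Reflection and transmission probabilities (R, T) for a Bloch wave
    hitting one substitutional impurity of on-site energy eps0 in a 1D
    single-orbital chain with hopping t; a minimal matching-style analogue."""
    k = np.arccos(E / (2 * t))      # lead dispersion E = 2 t cos k (needs |E| < 2|t|)
    v = 2j * t * np.sin(k)          # velocity-like factor entering the amplitudes
    tau = v / (v + eps0)            # transmission amplitude from wave matching
    r = tau - 1.0                   # reflection amplitude (wave continuity at the impurity)
    return abs(r) ** 2, abs(tau) ** 2
```

For any in-band energy the two probabilities sum to one, which is the same unitarity check the paper applies to its numerical results.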
Likewise, the total transmission T(E_γ) allows us to calculate the overall electronic conductance. In this work, we assume the zero-bias limit and write the total conductance in the following way: G(E_F) = G₀ T(E_F). (21) In Equation 21, G₀ is the conductance quantum and equals 2e²/h. Due to the Fermi-Dirac distribution, G(E_F) is calculated at the Fermi level of the perfect lead band structure, since only the electrons at this level give an important contribution to the electronic conductance. The Fermi energy can be determined using various methods; in the present work, E_F is calculated on the basis of the density of states calculations. Results and discussion The tight-binding model and basic electronic properties In this section, we present the results of our model calculations for the electronic structure of the carbon, silicon, and silicon carbide wires under study. Our results are validated by comparison with DFT calculations [29, 69], which allow us to establish unambiguously our choice of the tight-binding parameters for these systems. In principle, we can develop our model calculations for the nanojunctions and their leads using any adequate type of orbitals; even a single orbital suffices to calculate the electronic quantum transport for carbon nanojunctions [44]. However, this approximation is inadequate for silicon atoms. To treat both types of atoms on the same footing, we thus characterize the atoms by the electronic states 2s and 2p for carbon and by 3s and 3p for silicon. Such a scheme gives us four different orbitals, namely s, p_x, p_y, and p_z, for both types of atoms. In the present work, our TB parameters are effectively rescaled from Harrison’s data in order to match our model calculations for the electronic structure with those given by the DFT. The TB parameters used are presented in Table 1 in comparison with the values given by Harrison.
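The zero-bias relation G(E_F) = G₀ T(E_F) of Equation 21 is a one-line computation once T(E_F) is known. A minimal sketch using exact SI constants (`landauer_conductance` and its input are illustrative):

```python
E_CHARGE = 1.602176634e-19   # elementary charge, C (exact SI value)
PLANCK_H = 6.62607015e-34    # Planck constant, J*s (exact SI value)

# Conductance quantum G0 = 2 e^2 / h, about 7.748e-5 S
G0 = 2 * E_CHARGE**2 / PLANCK_H

def landauer_conductance(T_fermi: float) -> float:
    """Zero-bias Landauer conductance G = G0 * T(E_F), in siemens."""
    return G0 * T_fermi
```

A perfectly transmitting single channel (T = 1) then carries one conductance quantum, the benchmark against which the nanojunction spectra are read.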
It is worth noting that the values of the on-site Hamiltonian matrix elements ε_p^{n,α} are identical for the states p_x, p_y, and p_z. The off-diagonal distance-dependent h_{l,l′,m}^{n,n′,β} elements are calculated on the basis of Equation 2. For symmetry considerations, these latter elements are positive or negative; also h_{s,p,σ} = η_{s,p,σ} = 0 and h_{p,p,σ} = η_{p,p,σ} = 0 for p_y and p_z, while h_{p,p,π} ≡ h_{p_y,p_y,π} = h_{p_z,p_z,π} and h_{p_x,p_x,π} = 0 [70]. Table 1 is supplemented for the reader by Figure 2, which gives the dependence of the hopping integrals on distance as calculated in the present paper (continuous curves), in comparison with Harrison’s data (open symbols). Table 1 Tight-binding parameters and Harrison’s dimensionless coefficients proposed in this work and compared with the original values (columns: Harrison parameters vs. present TB parameters; ε_s, ε_p, η_{s,s,σ}, η_{s,p,σ}, η_{p,p,σ}, η_{p,p,π}, h_{s,s,σ}, h_{s,p,σ}, h_{p,p,σ}, h_{p,p,π}, d_β). The values of the tight-binding parameters ε_l^{n,α} and h_{l,l′,m}^{n,n′,β} (in eV), and Harrison’s dimensionless coefficients η_{l,l′,m}, are proposed in this work and compared with the original values by Harrison [67]. Please note that the distance-dependent h_{l,l′,σ}^{n,n′,β} parameters are computed for the appropriate interatomic spacings d_β (in Å) tabulated therein, assumed after the works of Tongay et al. [29] and Bekaroglu et al. [69]. To keep the table transparent, the indices n and n′ for ε_l^{n,α} and h_{l,l′,m}^{n,n′,β} are omitted. Figure 2 The nearest-neighbor tight-binding coupling parameters with the interatomic distance (A, B, C). The curves represent our calculated TB results in comparison with those calculated using the Harrison parameters (squares, triangles, circles).
Figure 2 clearly indicates that, qualitatively, both Harrison’s and our rescaled coupling parameters for the silicon, carbon, and diatomic silicon carbide wires present the same functional behavior, confirming the desired conservation of their physical character. However, most of the rescaled coupling parameters have somewhat smaller values than those initially proposed by Harrison; this trend can also be traced in Table 1 for the on-site parameters. This difference stems from the fact that low-coordinated systems are considered here, whereas the initial Harrison values are given to match tetrahedral phases [67]. Another general observation can be made for the tight-binding parameters of the σ-type interactions (the h_{s,p,σ} and h_{p,p,σ} ones), which present much closer values over the considered interatomic distance range than in the case of Harrison’s data. Our calculated electronic band structures for the silicon, carbon, and diatomic silicon carbide infinite wires (continuous curves) are presented in Figure 3 in comparison with the DFT results [29, 69], shown on the right-hand side of the figures. We note for the carbon and silicon structures that our TB parameters correctly reproduce the DFT results up to energies slightly above the Fermi level. The electronic branches in the regions of high energies are in qualitative agreement. In the case of the diatomic silicon carbide structure, some of the electronic states perfectly match the DFT results even in the high-energy domains. The left-hand side of Figure 3 compares our results (continuous curves) with those from the older TB values given by Harrison (open symbols); as is seen, our TB parameters constitute the most optimal set for the electronic transport calculations, since their corresponding electronic band structures conform to the appropriate energy ranges highlighted by the DFT results and, what is even more important, correctly reproduce the Fermi level.
Figure 3 Electronic structures of carbon (A), silicon (B), and diatomic silicon carbide (C). These structures are for infinite linear atomic wires presented over the first Brillouin zone, φ = kd ∈ [−π, π]. Our calculated results (continuous curves), represented by a color scheme (details in the text), are compared on the right-hand side with the first-principle results (closed circles, φ ∈ [0, π]) [29, 69] and on the left-hand side with results calculated using the Harrison TB parameters [67] (diamonds, φ ∈ [−π, 0]). Our calculated Fermi levels are given as the zero-reference energies, and the calculated electronic DOS, in arbitrary units, are presented in the right-hand column. In Figure 3A,B for silicon and carbon, the red and blue colors correspond, respectively, to the σ and σ* bands. These arise from the s-p_x orbital hybrids, where the lowest lying bands are always occupied by two electrons. Bands marked by the red color have the π character and are degenerate. Their origin in the p_y and p_z orbitals allows them to hold up to four electrons. In Figure 3C for the diatomic silicon carbide, starting from the band structure minimum, the consecutive bands have their origin in the following orbitals: carbon 2s (red band), silicon 3s (green band), carbon 2p (blue and black bands), and silicon 3p (orange and violet bands). The blue and orange colors for the silicon carbide electronic structure indicate two doubly degenerate π-type bands. The metallic or insulating character of the considered atomic wires, following the Fermi level, is appropriate only when the wires are infinite. It is well known that this character can change for finite size wires with a limited number of atoms, or due to the type and quality of the leads.
Numerical characteristics for the carbon leads In general, the infinite carbon wires which are considered as the leads in our work present electronic band structure characteristics which incorporate not only propagating (see Figure 3A), but also evanescent states. Both of these types of states, which are derivable from the generalized eigenvalue problem presented in Equation 10, constitute a complete set over the allowed energies for the electrons incident along the leads, which can be further scattered at the considered nanojunction. This complete set of eigenstates is used as the basis for the numerical calculations of the quantum conductance presented in the ‘Transport properties’ subsection. Figure 4A presents the three-dimensional representation of the solutions of Equation 10 as a set of generalized functionals z(E) for the σ, σ*, and π electronic states of the carbon leads. As described by Equations 5, 8, and 9, the eigenstates in Figure 4A characterized by |z| = 1 correspond to the propagating electronic waves described by real wave vectors, whereas those with |z| < 1 correspond to the evanescent and divergent eigenstates for complex wave vectors. Furthermore, for convenience, the corresponding moduli of the complex z factors are presented in Figure 4B. Note that the |z| = 1 solutions may be grouped into pairs for the two directions of propagation linked by time-reversal symmetry. Due to the fact that each of these two solutions provides the same information, we consider waves propagating only from left to right. However, this is not true for the |z| < 1 solutions, which are always considered for both left and right as spatially evanescent. As can be seen in Figure 4, the generalized results for the σ, σ*, and π states are represented by the same colors as the corresponding states in Figure 3A, following their propagating character for |z| = 1, and further extended to the physical |z| < 1 evanescent solutions.
Figure 4 Three-dimensional representation of the functionals z(E) and the evolution of their absolute values for carbon leads. (A) Three-dimensional representation of the functionals z(E) on a complex plane and (B) the evolution of their absolute values as a function of energy for carbon leads. The color scheme here is the same as that for carbon in Figure 3A. Figure 4 provides a more complete description for the electronic states of a given system compared to a typical band structure representation as in Figure 3, since both the propagating and evanescent states are shown. Such a general representation clearly indicates the importance of the evanescent eigenstates for a full description of the scattering problem presented in the ‘Transport properties’ subsection. The energies considered in our calculations correspond to the range within the band structure boundaries, marked by two vertical dotted lines in Figure 4B. As a consequence, not only the propagating states, but also the evanescent solutions are included in the quantum conductance calculations in the ‘Transport properties’ subsection. Transport properties In this subsection, the electronic transport properties of nanojunction systems composed of silicon-doped carbon wires between carbon leads are calculated using the PFMT method. Figure 5A presents a number of these systems where we indicate the irreducible domains by the shaded grey areas. Note that these systems are always composed of finite nanojunction regions of silicon and carbon atoms, coupled with two carbon semi-infinite leads. The first three systems of Figure 5 correspond to periodic diatomic silicon carbide nanojunctions composed of 1, 2, and 3 Si-C atomic pairs, respectively. The next system corresponds to a nanojunction with a substitutional disorder, composed of three carbon and three silicon atoms. The last is a symmetric nanojunction of five silicon atoms and only one carbon atom in the middle. 
Figure 5B presents the group velocities of electrons in the carbon leads. Figure 5 Schematic representation of the five nanojunction systems and group velocities for propagating band structure modes. (A) Schematic representation of the five nanojunction systems composed of silicon and carbon atoms between one-dimensional carbon leads considered in the present work. The irreducible domains are marked by the shaded grey areas; for the remaining cases, only the irreducible domains are shown. (B) The group velocities for the propagating band structure modes on the carbon leads. The calculated transmission and reflection scattering cross sections for each of the four available transport channels are presented in Figure 6. Each row of the figure corresponds to a nanojunction system (NS) as follows: Figure 6A,B,C for NS 1, Figure 6D,E,F for NS 2, Figure 6G,H,I for NS 3, Figure 6J,K,L for NS 4, and Figure 6M,N,O for NS 5. The red and green continuous curves represent the transmission and reflection spectra, respectively. The blue histograms correspond to free electronic transport on the carbon leads, i.e., to electronic transport on the perfect infinite quasi one-dimensional carbon wire over the different propagating states. These histograms constitute the reference for the unitarity condition, which is used systematically as a check on the numerical results. The leads’ Fermi level is marked by a dashed line and set as the zero-energy reference. In the zero-bias limit, the total conductance is calculated at this Fermi level. Figure 6 Transmission and reflection probabilities across five types of silicon-doped carbon wires between two semi-infinite one-dimensional carbon leads. The arrangement of the figure is as follows: (A, B, C) for case 1, (D, E, F) for case 2, (G, H, I) for case 3, (J, K, L) for case 4, and (M, N, O) for case 5. The Fermi level is set at the zero-energy reference position.
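The unitarity check T + R = 1 invoked above can be sketched on a scalar analogue: one substitutional site coupled to two semi-infinite single-band leads, treated with a textbook surface-Green's-function construction rather than the paper's PFMT code. The numbers eps0, t, and eps_d are hypothetical.

```python
import numpy as np

# Scalar sketch of the unitarity condition T + R = 1 for one impurity
# site between two semi-infinite single-band leads. Hypothetical parameters.
eps0, t = 0.0, -1.0   # lead on-site energy and hopping (eV, hypothetical)

def scatter(E, eps_d):
    """Return (T, R) for a single impurity site of energy eps_d, E inside the lead band."""
    Ep = E - eps0
    g_s = (Ep - 1j * np.sqrt(4 * t**2 - Ep**2)) / (2 * t**2)  # lead surface Green's function
    sigma = t**2 * g_s                 # self-energy of each lead
    gamma = -2.0 * sigma.imag          # level broadening from each lead
    G = 1.0 / (E - eps_d - 2 * sigma)  # retarded Green's function of the impurity site
    T = gamma**2 * abs(G)**2           # transmission probability
    R = abs(-1.0 + 1j * gamma * G)**2  # reflection probability
    return T, R

for eps_d in (0.0, 1.5):
    T, R = scatter(0.3, eps_d)
    print(eps_d, round(T, 4), round(R, 4), round(T + R, 4))  # T + R = 1
```

For eps_d = eps0 the "impurity" is just a lead site and the transmission is perfect; any detuning transfers weight from T to R while their sum stays exactly one, which is the scalar version of the histogram reference used in Figure 6.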
In Figure 6, the transmission spectra present strong scattering resonances, showing an increasing complexity with the increasing size and configurational order of the nanojunctions. The valence σ state exhibits negligible transmission for all of the considered nanojunctions. In contrast, the degenerate π states and the σ* state present finite transmission spectra. However, it is only the π states which cross the Fermi level, giving rise to electronic conductance across the nanojunction within the zero-bias limit. In particular, the first three considered systems represent increasing lengths of the diatomic silicon carbide nanojunction with an increasing number of ordered Si-C atomic pairs. The transmission at the Fermi level for these systems is nonzero, which contrasts with the insulating character of the infinite silicon carbide wire (see Figure 3C). One can connect this finite transmission to the indirect bandgap (Δ) around the Fermi level for the diatomic silicon carbide infinite wire (for more details, please see Figure 3C). This gap, Δ ≈ 1.5 eV, is indeed related to the difference between the binding energies of the silicon and carbon atoms and corresponds to an effective potential barrier for the propagating π-state electrons. As the wire length increases by adding Si-C atomic pairs, as for systems 1 to 3 of Figure 5A, the transmission decreases due to cumulative barrier effects. We note that a similar effect has been observed for the monovalent diatomic copper-cobalt wire nanojunctions in a previous work [54]. Furthermore, it is instructive to compare the scattering spectra for the degenerate π states for nanojunction systems 3 and 4. These two systems contain identical numbers of silicon and carbon atoms; however, system 3 is an ordered configuration of Si-C pairs, whereas system 4 presents substitutional disorder of the atoms. It is seen that the disorder suppresses the conductance of the π-state electrons at the Fermi level within the zero-bias limit.
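The cumulative-barrier effect described above can be sketched in a scalar model: transmission through N identical "barrier" sites of raised on-site energy between two single-band leads decreases as the barrier lengthens, the one-channel analogue of adding Si-C pairs in systems 1 to 3. The construction is a standard Green's-function one with hypothetical parameters, not the paper's PFMT calculation.

```python
import numpy as np

# Transmission through a chain of n_sites identical barrier sites between
# two semi-infinite single-band leads; illustrates the decay of T with
# barrier length. All parameters are hypothetical.
eps0, t, eps_b = 0.0, -1.0, 3.0   # lead on-site, hopping, barrier on-site (eV)

def transmission(E, n_sites):
    H = (np.diag([eps_b] * n_sites)
         + np.diag([t] * (n_sites - 1), 1)
         + np.diag([t] * (n_sites - 1), -1))
    Ep = E - eps0
    g_s = (Ep - 1j * np.sqrt(4 * t**2 - Ep**2)) / (2 * t**2)  # lead surface GF (in-band E)
    sigma = t**2 * g_s
    gamma = -2.0 * sigma.imag
    Heff = H.astype(complex)
    Heff[0, 0] += sigma        # left lead self-energy
    Heff[-1, -1] += sigma      # right lead self-energy
    G = np.linalg.inv(E * np.eye(n_sites) - Heff)
    return gamma**2 * abs(G[-1, 0])**2   # T = Gamma_L |G_1N|^2 Gamma_R

Ts = [transmission(0.3, n) for n in (1, 2, 3)]
print([round(T, 6) for T in Ts])   # monotonically decreasing with length
```

Since E lies well below the barrier band here, the wave is evanescent inside the barrier and T falls off roughly exponentially with each added site, mirroring the trend reported for systems 1 to 3.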
Another general observation can be made from the results for nanojunction system 5, which contains more silicon than carbon atoms. Despite the finite size of this system, which is comparable to system 4, and despite the structural symmetry of its atomic configuration, the electronic transmission is suppressed at the Fermi level within the zero-bias limit. One of the main observations of our paper is hence that structural symmetry of the nanojunction is not a guarantee of finite transmission in the case of the multivalence diatomic wire nanojunctions. Figure 6 also shows that the transmission spectra for the σ* state are close to unity over a significant range of energies, from approximately 1 to 7 eV, for all of the five nanojunction systems. This result may prove useful for the electronic conductance across silicon-doped carbon nanojunctions under finite bias voltages. In Figure 7, we present the total electronic conductance G(E) as a function of energy E, in units of G_0 = 2e²/h, for the considered nanojunction systems of a given length as depicted in Figure 5 (red). Moreover, the perfect electronic conductance on the carbon leads (blue) is given for comparison and constitutes effectively the conductance of the infinite and perfect quasi one-dimensional carbon wire. In Figure 7, the Fermi level is indicated by the dashed line as the zero-reference energy, and G(E) is calculated from all the contributing eigenstates of Figure 6, including the two degenerate π states. Figure 7 Total electronic conductance. Total electronic conductance G(E) (A, B, C, D, E) as a function of energy E in units of G_0 = 2e²/h for silicon-doped carbon wires. See text for details. We note that the conclusions drawn from the results presented in Figure 6 are also borne out by the more general representation of the electronic transport depicted in Figure 7.
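The assembly of G(E) from the per-channel transmissions is a plain Landauer sum, with the π channel counted twice for its degeneracy. The per-channel values below are hypothetical illustrative numbers, not values read off Figure 6.

```python
# Total conductance as the Landauer sum G(E) = G0 * sum_i T_i(E) over the
# available channels, counting the degenerate pi channel twice.
# The transmissions are hypothetical illustrative values.
G0 = 2 * (1.602e-19)**2 / 6.626e-34   # conductance quantum 2e^2/h, in siemens

T_channels = {"sigma": 0.0, "sigma_star": 0.95, "pi": 0.60}
degeneracy = {"sigma": 1, "sigma_star": 1, "pi": 2}

G_over_G0 = sum(T_channels[c] * degeneracy[c] for c in T_channels)
print(G_over_G0, G_over_G0 * G0)   # dimensionless sum, and conductance in S
```

With four channels in total the sum is bounded by 4; at the Fermi level, where only the two π channels transmit, it is bounded by 2, consistent with the 2G_0 ceiling discussed below.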
Furthermore, the results presented in Figure 7 confirm that only the electrons incident from the leads in the π states are responsible for the electronic conductance in the zero-bias limit, which is readable from the Fermi level position. However, for all considered systems, the conductance at the Fermi level is theoretically limited to the value of 2G_0, and the largest conductance maxima, close to the perfect infinite carbon wire value of 3G_0, can be observed only in the energy interval from approximately 1 to 7 eV, hence for energies above the Fermi level. Once again, this follows our previous observations for the transmission results concluded from Figure 6. Moreover, on the basis of the results presented in Figure 7, we can note that, due to the summation over all possible state contributions which constitute the G(E) spectra, not only the σ*-state electrons but also some of those in the degenerate π states contribute to the high conductance values in the cited energy interval. This important observation proves that the σ*- and π-state electrons are of crucial importance for both the zero-bias quantum conductance of the silicon-doped carbon wires and the possible finite-bias one. This implies that the use of only a single orbital for the description of the carbon atoms will result in an inadequate description of the transport processes across low-coordinated systems containing these atoms. In the present work, the previously unexplored properties of the quantum electronic conductance for nanojunctions made of silicon-doped carbon wires between carbon leads are studied in depth. This is done using the phase field matching theory and the tight-binding method. The local basis for the electronic wave functions is assumed to be composed of four different atomic orbitals for silicon and carbon, namely the s, p_x, p_y, and p_z states.
In the first step, we calculate the electronic band structures for three nanomaterials, namely the one-dimensional infinite wires of silicon, carbon, and diatomic silicon carbide. This permits a matching comparison with the available corresponding DFT results, with the objective of selecting the optimal TB parameters for the three nanomaterials. This optimal set of tight-binding parameters is then used to calculate the electronic conductance across the silicon-doped carbon wire nanojunctions. Five different nanojunction cases are analyzed to sample their behavior under different atomic configurations. We show that despite the nonconducting character of the infinite silicon carbide wires, their finite implementations as nanojunctions exhibit a finite conductance. This outcome is explained by the difference between the binding energies of the silicon and carbon atoms, which corresponds to an effective potential barrier for the degenerate π-state electrons transmitted across the nanojunction under zero-bias field. The conductance effects that may arise due to minimal substitutional disorder and to artificially imposed symmetry on the silicon carbide wire nanojunction are also investigated. By exchanging the positions of two silicon and carbon atoms on an initial nanojunction to generate a substitutional disorder, we show that the total quantum conductance is suppressed at the Fermi level. This is in sharp contrast with the finite and significant conductance for the initial, atomically ordered nanojunction with periodic configurations of the silicon and carbon atoms. Also, the analysis of a silicon carbide nanojunction of a size comparable to the one above, presenting symmetry properties, shows that the quantum conductance is suppressed at the Fermi level. In summary, we note that the largest maxima of the conductance spectra in the zero-bias limit can be observed at high energies for all of the considered systems.
This conclusion reveals the fact that electrons incident from the leads in both the σ* and π states are crucial for the considerations of the electronic transport properties of the silicon-doped carbon wire nanojunctions. Appendix 1 Explicit forms of the E_{i,j} and H_{i,j} matrices The explicit forms of the submatrices of Equation 1 are given in the following manner:

\[
E_{i,j}=\begin{pmatrix}
\varepsilon_{1} & h_{2,1} & \cdots & h_{n,1}\\
h_{2,1} & \varepsilon_{2} & & \vdots\\
\vdots & & \ddots & h_{n,n-1}\\
h_{n,1} & \cdots & h_{n,n-1} & \varepsilon_{n}
\end{pmatrix},
\qquad
H_{i,j}=\begin{pmatrix}
0 & h_{1,2} & \cdots & h_{1,n-1} & h_{1,n}\\
 & 0 & & h_{2,n-1} & h_{2,n}\\
 & & \ddots & & \vdots\\
 & & & 0 & h_{n-1,n}\\
 & & & & 0
\end{pmatrix},
\]

with the component matrices

\[
\varepsilon_{i,j}=\begin{pmatrix}
\varepsilon_{s}^{n,\alpha} & 0 & \cdots & 0\\
0 & \varepsilon_{p_x}^{n,\alpha} & & \vdots\\
\vdots & & \varepsilon_{l-1}^{n,\alpha} & 0\\
0 & \cdots & 0 & \varepsilon_{l}^{n,\alpha}
\end{pmatrix},
\qquad
h_{i,j}=\begin{pmatrix}
h_{s,s,\sigma}^{n,n',\beta} & h_{s,p_x,\sigma}^{n,n',\beta} & \cdots & h_{s,l,m}^{n,n',\beta}\\
h_{p_x,s,\sigma}^{n,n',\beta} & h_{p_x,p_x,\sigma}^{n,n',\beta} & & \vdots\\
\vdots & & h_{l-1,l-1,m}^{n,n',\beta} & h_{l-1,l,m}^{n,n',\beta}\\
h_{l,s,m}^{n,n',\beta} & \cdots & h_{l,l-1,m}^{n,n',\beta} & h_{l,l,m}^{n,n',\beta}
\end{pmatrix}.
\]

Equations 22 and 23 denote square matrices of dimension N_x N_l, where the matrix of Equation 23 is upper triangular. The component matrices of Equations 24 and 25 are of dimension N_l × N_l. Additionally, the ε_{i,j} matrix is always diagonal, while the h_{i,j} matrix is more complex, with possible nonzero elements at every position. Please note that some of the h_{l,l,m}^{n,n',β} elements can vanish due to symmetry conditions and simplify the notation of the h_{i,j} matrix. Appendix 2 Partitioning technique The partitioning technique is a suitable method which allows one to avoid the singularity problem of the H_{N,N−1} and H_{N,N+1} matrices and to calculate only the nontrivial solutions of Equation 10. A detailed discussion of the partitioning technique is presented in the work of Khomyakov and Brocks [71]; this section gives only our short remarks on this method. Following Khomyakov and Brocks [71], Equation 10 is partitioned into two parts of sizes D_1 − D_2 and D_2, where D_1 = N_x N_l and D_2 = N_n N_l.
In Equation 27, the parameter N_n stands for the order of nearest-neighbor interactions assumed in the calculations, e.g., N_n = 1 for the first nearest-neighbor interactions. On the basis of Equations 26 and 27, the reduced 2N_l eigenvalue problem is written as follows:

\[
\left[
\begin{pmatrix}
A_{1,1} & A_{1,2}\\
I_{2,2} & 0
\end{pmatrix}
- z
\begin{pmatrix}
B_{1,1} & B_{1,2}\\
0 & I_{2,2}
\end{pmatrix}
\right]
\times
\begin{pmatrix}
c_2(x_N, k)\\
c_2(x_{N-1}, k)
\end{pmatrix}
= 0.
\]

At this point, we correct the misprint from the study of Khomyakov and Brocks [71] and write the submatrices of Equation 28 in the following form:

\[
A_{1,1} = E I_{2,2} - E_{2,2} - E_{2,1}\left(E I_{1,1} - E_{1,1}\right)^{-1} E_{1,2},
\]
\[
A_{1,2} = H_{2,2} - E_{2,1}\left(E I_{1,1} - E_{1,1}\right)^{-1} H_{1,2},
\]
\[
B_{1,1} = H_{2,2} + H_{1,2}\left(E I_{1,1} - E_{1,1}\right)^{-1} E_{1,2},
\]
\[
B_{1,2} = H_{1,2}\left(E I_{1,1} - E_{1,1}\right)^{-1} H_{1,2}.
\]

Please note that the reduced problem of Equation 28 gives 2N_l eigenvalues with 2N_l corresponding eigenvectors; this is N_x times fewer than can be expected from a physical point of view. Nevertheless, those solutions can be easily separated into the N_x N_l eigenvalues and N_x N_l eigenvectors of a purely physical character. Appendix 3 Explicit forms of the M_{i,j}, M_1^in, and M_2^in components The submatrices of the matched (D + 2) × (D + 2) square matrix M in Equation 16, for given i and j indices, are given as follows:

\[
M_{i,j} = E I - E_{i,i} \quad \text{for } i = j,\ D > i > 1,\ D > j > 1,
\]
\[
M_{i,j} = H_{i,i-1} \quad \text{for } i \neq j,\ i > 2,\ j = i - 1,
\]
\[
M_{i,j} = H_{i,i+1} \quad \text{for } i \neq j,\ i < D + 1,\ j = i + 1,
\]

except for the submatrices which describe the boundary atoms of the system; those are expressed in the following manner:

\[
M_{1,1} = H_{1,2}\, c_l(r_n, z_\gamma, E_\gamma)\, z_\gamma^{2} + \left(E I - E_{1,1}\right) c_l(r_n, z_\gamma, E_\gamma)\, z_\gamma,
\]
\[
M_{2,1} = H_{0,1}\, c_l(r_n, z_\gamma, E_\gamma)\, z_\gamma,
\]
\[
M_{D+1,D+2} = H_{D-1,D}\, c_l(r_n, z_\gamma, E_\gamma)\, z_\gamma^{D},
\]
\[
M_{D+2,D+2} = H_{D,D+1}\, c_l(r_n, z_\gamma, E_\gamma)\, z_\gamma^{D+1} + \left(E I - E_{D,D}\right) c_l(r_n, z_\gamma, E_\gamma)\, z_\gamma^{D}.
\]

Finally, the M_1^in and M_2^in vector components of Equation 16 are written as follows:

\[
M_2^{in} = H_{0,1}\, c_l(r_n, z_\gamma, E_\gamma)\, z_\gamma^{-1}.
\]
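The reduction used in Appendix 2 linearizes a polynomial problem in z by doubling the vector, acting on (c(x_N), c(x_{N−1})). A scalar sketch of the same idea, under hypothetical single-band parameters: the quadratic t z² + (ε − E) z + t = 0 becomes a linear eigenvalue problem for a 2 × 2 companion matrix.

```python
import numpy as np

# Scalar analogue of the linearized (reduced) eigenvalue problem:
# the recursion psi_{n+1} = ((E - eps)/t) psi_n - psi_{n-1} written as a
# companion matrix acting on (psi_n, psi_{n-1}). Hypothetical parameters.
eps, t, E = 0.0, -1.0, 0.5

C = np.array([[(E - eps) / t, -1.0],
              [1.0,            0.0]])   # companion (linearized) form
z_lin = np.linalg.eigvals(C)            # eigenvalues of the linear problem
z_poly = np.roots([t, eps - E, t])      # direct roots of the quadratic

print(np.sort_complex(z_lin))
print(np.sort_complex(z_poly))          # same spectrum either way
```

Both routes produce the same set of z factors, which is the point of the partitioning: the singular quadratic problem is traded for a well-posed linear one of doubled (here 2 × 2) size.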
Appendix 4 Group velocities As specified in the ‘Phase field matching theory’ subsection, the group velocities for individual states can be calculated on the basis of Equation 6, rewritten in the following manner:

\[
\left[ v\, I - V \right] v(R_N, k) = 0,
\]

where v denotes the eigenvalues of Equation 39, which yield all the required electron group velocities for each propagating state. Further, V is the square matrix of dimension N_x N_l of the following form:

\[
V = \frac{dM}{dk}.
\]

Finally, v(R_N, k) stands for the eigenvectors of the problem of Equation 39. We note that, usually, Equation 40 includes the constant part d_β/h, where h is the Planck constant. However, for the purpose of electronic conductance calculations within the PFMT approach, this term can be omitted due to the fact that only the ratios of the given group velocities are important (please see Equations 17 and 18). D Szczȩśniak would like to thank the French Ministry of Foreign Affairs for his PhD scholarship grant CNOUS 2009-2374, the Polish National Science Center for their research grant DEC-2011/01/N/ST3/04492, and the Graduate School of Sciences at the Université du Maine for their support. Authors’ Affiliations Institute for Molecules and Materials UMR 6283, Université du Maine Institute of Physics, Jan Długosz University in Czȩstochowa Institute of Physics, Czȩstochowa University of Technology Department of Physics, Texas A&M University
1. Agraït N, Levy-Yeyati A, van Ruitenbeek JM: Quantum properties of atomic-sized conductors. Phys Rep 2003, 377: 81–279. doi:10.1016/S0370-1573(02)00633-6
2. Nitzan A, Ratner M: Electron transport in molecular wire junctions. Science 2003, 300: 1384–1389. doi:10.1126/science.1081572
3. Wan CC, Mozos JL, Taraschi G, Wang J, Guo H: Quantum transport through atomic wires. Appl Phys Lett 1997, 71: 419–421. doi:10.1063/1.119328
5. Kroto HW, Heath JR, O’Brien SC, Curl RF, Smalley RE: C60: buckminsterfullerene. Nature 1985, 318: 162–163.
doi:10.1038/318162a0
6. Iijima S, Ichihashi T: Single-shell carbon nanotubes of 1-nm diameter. Nature 1993, 363: 603–605. doi:10.1038/363603a0
7. McEuen PL: Nanotechnology: carbon-based electronics. Nature 1998, 393: 15–16. doi:10.1038/29874
8. Heath JR, Zhang Q, O’Brien SC, Curl RF, Kroto HW, Smalley RE: The formation of long carbon chain molecules during laser vaporization of graphite. J Am Chem Soc 1987, 109: 359–363. doi:10.1021/ja00236a012
9. Lagow RJ, Kampa JJ, Wei HC, Battle SL, Genge JW, Laude DA, Harper CJ, Bau R, Stevens RC, Haw JF, Munson E: Synthesis of linear acetylenic carbon: the “sp” carbon allotrope. Science 1995, 267: 362–367. doi:10.1126/science.267.5196.362
10. Derycke V, Soukiassian P, Mayne A, Dujardin D, Gautier J: Carbon atomic chain formation on the β-SiC(100) surface by controlled sp→sp3 transformation. Phys Rev Lett 1998, 81: 5868–5871. doi:10.1103/PhysRevLett.81.5868
11. Troiani HE, Miki-Yoshida M, Camacho-Bragado GA, Marques MAL, Rubio A, Ascencio JA, Jose-Yacaman M: Direct observation of the mechanical properties of single-walled carbon nanotubes and their junctions at the atomic level. Nano Lett 2003, 3: 751–755. doi:10.1021/nl0341640
13. Yuzvinsky TD, Mickelson W, Aloni S, Begtrup GE, Kis A, Zettl A: Shrinking a carbon nanotube. Nano Lett 2006, 6: 2718–2722. doi:10.1021/nl061671j
14. Jin C, Lan H, Peng L, Suenaga K, Iijima S: Deriving carbon atomic chains from graphene. Phys Rev Lett 2009, 102: 205501.
15. Kértesz M, Koller J, Ažman A: Ab initio Hartree-Fock crystal orbital studies. II. Energy bands of an infinite carbon chain. J Chem Phys 1978, 68: 2779–2782. doi:10.1063/1.436070
16.
Kértesz M, Koller J, Ažman A: Different orbitals for different spins for solids: fully variational ab initio studies on hydrogen and carbon atomic chains, polyene, and poly(sulphur nitride). Phys Rev B 1979, 19: 2034–2040. doi:10.1103/PhysRevB.19.2034
17. Karpfen A: Ab initio studies on polymers. I. The linear infinite polyyne. J Phys C Solid State Phys 1979, 12: 3227–3237. doi:10.1088/0022-3719/12/16/011
18. Teramae M, Yamabe T, Imamura A: Ab initio effective core potential studies on polymers. Theor Chim Acta 1983, 64: 1–12.
19. Springborg M: Self-consistent, first principles calculations of the electronic structures of a linear, infinite carbon chain. J Phys C 1986, 19: 4473–4482. doi:10.1088/0022-3719/19/23/010
20. Rice MJ, Phillpot SR, Bishop AR, Campbell DK: Solitons, polarons, and phonons in the infinite polyyne chain. Phys Rev B 1986, 34: 4139–4149. doi:10.1103/PhysRevB.34.4139
21. Springborg M, Drechsler SL, Málek J: Anharmonic model for polyyne. Phys Rev B 1990, 41: 11954–11966. doi:10.1103/PhysRevB.41.11954
22. Watts JD, Bartlett RJ: A theoretical study of linear carbon cluster monoanions, Cn−, and dianions, Cn2− (n=2−10). J Chem Phys 1992, 97: 3445–3457. doi:10.1063/1.462980
23. Xu CH, Wang CZ, Chan CT, Ho KM: A transferable tight-binding potential for carbon. J Phys Condens Matter 1992, 4: 6047–6054. doi:10.1088/0953-8984/4/28/006
24. Lou L, Nordlander P: Carbon atomic chains in strong electric fields. Phys Rev B 1996, 54: 16659–16662. doi:10.1103/PhysRevB.54.16659
25. Jones RO, Seifert G: Density functional study of carbon clusters and their ions. Phys Rev Lett 1997, 79: 443–446. doi:10.1103/PhysRevLett.79.443
26. Fuentealba P: Static dipole polarizabilities of small neutral carbon clusters Cn (n ≤ 8). Phys Rev A 1998, 58: 4232–4234.
doi:10.1103/PhysRevA.58.4232
27. Abdurahman A, Shukla A, Dolg M: Ab initio many-body calculations of static dipole polarizabilities of linear carbon chains and chainlike boron clusters. Phys Rev B 2002, 65: 115106.
29. Tongay S, Ciraci S: Atomic strings of group IV, III-V, and II-VI elements. Appl Phys Lett 2004, 85: 6179–6181. doi:10.1063/1.1839647
30. Bylaska EJ, Weare JH, Kawai R: Development of bond-length alternation in very large carbon rings: LDA pseudopotential results. Phys Rev B 1998, 58: R7488–R7491.
31. Zhang Y, Su Y, Wang L, Kong ESW, Chen X, Zhang Y: A one-dimensional extremely covalent material: monatomic carbon linear chain. Nanoscale Res Lett 2011, 6: 577. doi:10.1186/1556-276X-6-577
32. Lang ND, Avouris P: Oscillatory conductance of carbon-atom wires. Phys Rev Lett 1998, 81: 3515–3518. doi:10.1103/PhysRevLett.81.3515
33. Lang ND, Avouris P: Carbon-atom wires: charge-transfer doping, voltage drop, and the effect of distortions. Phys Rev Lett 2000, 84: 358–361. doi:10.1103/PhysRevLett.84.358
34. Larade B, Taylor J, Mehrez H, Guo H: Conductance, I-V curves, and negative differential resistance of carbon atomic wires. Phys Rev B 2001, 64: 075420.
35. Tongay S, Dag S, Durgun E, Senger RT, Ciraci S: Atomic and electronic structure of carbon strings. J Phys Cond Matter 2005, 17: 3823–3836. doi:10.1088/0953-8984/17/25/009
36. Senger RT, Tongay S, Durgun E, Ciraci S: Atomic chains of group-IV elements and III-V and II-VI binary compounds studied by a first-principles pseudopotential method. Phys Rev B 2005, 72: 075419.
37. Baranović G, Crljen Ž: Unusual conductance of polyyne-based molecular wires. Phys Rev Lett 2007, 98: 116801.
38.
Okano S, Tománek D: Effect of electron and hole doping on the structure of C, Si, and S nanowires. Phys Rev B 2007, 75: 195409.
39. Chen W, Andreev AV, Bertsch GF: Conductance of a single-atom carbon chain with graphene leads. Phys Rev B 2009, 80: 085410.
40. Wang Y, Lin ZZ, Zhang W, Zhuang J, Ning XJ: Pulling long linear atomic chains from graphene: molecular dynamics simulations. Phys Rev B 2009, 80: 233403.
41. Song B, Sanvito S, Fang H: Anomalous I-V curve for mono-atomic carbon chains. New J Phys 2010, 12: 103017. doi:10.1088/1367-2630/12/10/103017
42. Zhang GP, Fang XW, Yao YX, Wang CZ, Ding ZJ, Ho KM: Electronic structure and transport of a carbon chain between graphene nanoribbon leads. J Phys Cond Matter 2011, 23: 025302. doi:10.1088/0953-8984/23/2/025302
43. Ke Y, Xia K, Guo H: Disorder scattering in magnetic tunnel junctions: theory of nonequilibrium vertex correction. Phys Rev Lett 2008, 100: 166805.
44. Nozaki D, Pastawski HM, Cuniberti G: Controlling the conductance of molecular wires by defect engineering. New J Phys 2010, 12: 063004. doi:10.1088/1367-2630/12/6/063004
45. Strupiński W, Grodecki K, Wysmołek A, Stȩpniewski R, Szkopek T, Gaskell PE, Grüneis A, Haberer D, BoŻek R, Krupka J, Baranowski JM: Graphene epitaxy by chemical vapor deposition on SiC. Nano Lett 2011, 11: 1786–1791. doi:10.1021/nl200390e
46. Wang F, Shepperd K, Hicks J, Nevius MS, Tinkey H, Tejeda A, Taleb-Ibrahimi A, Bertran F, Fèvre PL, Torrance DB, First PN, de Heer WA, Zakharov AA, Conrad EH: Silicon intercalation into the graphene-SiC interface. Phys Rev B 2012, 85: 165449.
47. Landauer R: Spatial variation of currents and fields due to localized scatterers in metallic conduction. IBM J Res Dev 1957, 1: 223–231.
48.
Büttiker M: Four-terminal phase-coherent conductance. Phys Rev Lett 1986, 57: 1761–1764. doi:10.1103/PhysRevLett.57.1761
49. Zwierzycki M, Xia K, Kelly PJ, Bauer GEW, Turek I: Spin injection through an Fe/InAs interface. Phys Rev B 2003, 67: 092401.
50. Pauly F, Viljas JK, Huniar U, Häfner M, Wohlthat S, Bürkle M, Cuevas JC, Schön G: Cluster-based density-functional approach to quantum transport through molecular and atomic contacts. New J Phys 2008, 10: 125019. doi:10.1088/1367-2630/10/12/125019
51. Caroli C, Combescot R, Nozières P, Saint-James D: Direct calculation of the tunneling currents. J Phys C 1971, 8: 916–929.
52. Deretzis I, La Magna A: Coherent electron transport in quasi one-dimensional carbon-based systems. Eur Phys J B 2011, 81: 15. doi:10.1140/epjb/e2011-20134-x
53. Khater A, Szczȩśniak D: A simple analytical model for electronic conductance in a one dimensional atomic chain across a defect. J Phys Conf Ser 2011, 289: 012013.
54. Szczȩśniak D, Khater A: Electronic conductance via atomic wires: a phase field matching theory approach. Eur Phys J B 2012, 85: 174.
55. Khater A, Bourahla B, Abou Ghantous M, Tigrine R, Chadli R: Magnons coherent transmission and heat transport at ultrathin insulating ferromagnetic nanojunctions. Eur Phys J B 2011, 82: 53–61. doi:10.1140/epjb/e2011-10935-2
56. Khater A, Belhadi M, Abou Ghantous M: Phonons heat transport at an atomic well boundary in ultrathin solid films. Eur Phys J B 2011, 80: 363–369. doi:10.1140/epjb/e2011-10892-8
57. Tigrine R, Khater A, Bourahla B, Abou Ghantous M, Rafli O: Magnon scattering by a symmetric atomic well in free standing very thin magnetic films. Eur Phys J B 2008, 62: 59–64. doi:10.1140/epjb/e2008-00125-x
58.
Virlouvet A, Khater A, Aouchiche H, Rafli O, Maschke K: Scattering of vibrational waves in perturbed two-dimensional multichannel asymmetric waveguides as on an isolated step. Phys Rev B 1999, 59: 4933–4942. doi:10.1103/PhysRevB.59.4933
59. Fellay A, Gagel F, Maschke K, Virlouvet A, Khater A: Scattering of vibrational waves in perturbed quasi-one-dimensional multichannel waveguides. Phys Rev B 1997, 55: 1707–1717. doi:10.1103/PhysRevB.55.1707
60. Mardaani M, Rabani H, Esmaeili A: An analytical study on electronic density of states and conductance of typical nanowires. Solid State Commun 2011, 151: 928–932. doi:10.1016/j.ssc.2011.04.010
61. Rabani H, Mardaani M: Exact analytical results on electronic transport of conjugated polymer junctions: renormalization method. Solid State Commun 2012, 152: 235–239. doi:10.1016/j.ssc.2011.09.026
63. Chen J, Yang L, Yang H, Dong J: Electronic and transport properties of a carbon-atom chain in the core of semiconducting carbon nanotubes. Phys Lett A 2003, 316: 101–106. doi:10.1016/S0375-9601(03)01132-0
64. Hands ID, Dunn JL, Bates CA: Visualization of static Jahn-Teller effects in the fullerene anion C60. Phys Rev B 2010, 82: 155425.
65. Delga A, Lagoute J, Repain V, Chacon C, Girard Y, Marathe M, Narasimhan S, Rousset S: Electronic properties of Fe clusters on a Au(111) surface. Phys Rev B 2011, 84: 035416.
66. Slater JC, Koster GF: Simplified LCAO method for the periodic potential problem. Phys Rev 1954, 94: 1498–1524. doi:10.1103/PhysRev.94.1498
67. Harrison WA: Elementary Electronic Structure. Singapore: World Scientific; 2004.
68. Zhang L, Wang JS, Li B: Ballistic magnetothermal transport in a Heisenberg spin chain at low temperatures. Phys Rev B 2008, 78: 144416.
69.
Bekaroglu E, Topsakal M, Cahangirov S, Ciraci S: First-principles study of defects and adatoms in silicon carbide honeycomb structures. Phys Rev B 2010, 81: 075433.
70. Kaxiras E: Atomic and Electronic Structure of Solids. New York: Cambridge University Press; 2003.
71. Khomyakov PA, Brocks G: Real-space finite-difference method for conductance calculations. Phys Rev B 2004, 70: 195402.

© Szczesniak et al.; licensee Springer. 2012
0.913899 | 1 | English version string in Computers topic From Longman Dictionary of Contemporary Englishstringstring1 /strɪŋ/ ●●● S3 W2 noun 1 parcel.jpg thread [countable, uncountable]D a strong thread made of several threads twisted together, used for tying or fastening thingsrope Her key hung on a string around her neck. a ball of string I need a piece of string to tie this package.2 group/series [countable] a) SERIESa number of similar things or events coming one after another syn seriesstring of a string of hit albums b) GROUP OF THINGSa group of similar thingsstring of She owns a string of health clubs. c) technicalTD a group of letters, words, or numbers, especially in a computer program3 no strings (attached)4 string of pearls/lights/beads etc5 music a) [countable]APM one of the long thin pieces of wire, nylon etc that are stretched across a musical instrument and produce sound b) the strings/the string sectionAPM the people in an orchestra or band who play musical instruments that have strings, such as violins6 first-string/second-string etc7 have somebody on a string8 have more than one string to your bow G-string, → how long is a piece of string? 
at long1(9), → pull strings at pull1(8), → pull the/somebody’s strings at pull1(9), → the purse strings at purse1(5) Examples from the Corpus stringThe pen was hanging from a string on the wall.Is this a string of isolated anecdotes or a pattern of substandard care?Almost every trainer with a string of 20 horses shares the same ultimate ambition - to win the Cheltenham Gold Cup.It will give that tight West Coast strum with the bass strings becoming almost percussive.I need a piece of string to tie this package.She pulled the string tight, strangling him.The strings vibrate again, underscoring my panic.piece of stringIt attracted everyone from stunt flying professionals to kids with an old plastic carrier bag and a piece of string.A weight suspended on the end of a piece of string and then set in motion acts as a pendulum.The pendulum consists of a weight on the end of a piece of string, thread or chain usually a few inches long.You can use your tie, your belt or a piece of string.Almost falling out, he tied the door to one of the gas cans with a piece of string.When she got flirting around with a twig or piece of string in her bill she was not to be balked.Lay several pieces of string across a board; put the lamb on top of them, skin side downwards.string ofJackson was imprisoned in 1934 for a string of sensational crimes.O'Neill had a string of successes with his first four plays.They asked me a string of questions about Gerald and Bob.She owns a string of health clubs.a string of tiny islands off the coast of Florida |
0.901316 | 1 | Dota 2 has officially exited its two year beta and is now publicly available on Steam, Valve has announced. Given the size of the Dota 2 audience, the initial launch of the game will be done in waves, Valve has said. The staggered launch is to ensure there is no disruption for those already playing. If you want access to Dota 2 you'll need to visit the Steam page, sign up, and wait for the email which grants entry. During the two year beta, Dota 2 built a huge audience, with a monthly user base of more than 3 million users. Source: Valve press release |
0.940572 | 1 | jeans collection For most women it’s very difficult to find the perfect pair of jeans. Once you find them you never know if they will fit the same the next time you buy them. Red Button makes a superb fit for almost every woman. Simply choose the style and the washing you like, we will take care of the rest. |
0.985197 | 1 | at(position) public If you pass a single Fixnum, returns a substring of one character at that position. The first character of the string is at position 0, the next at position 1, and so on. If a range is supplied, a substring containing characters at offsets given by the range is returned. In both cases, if an offset is negative, it is counted from the end of the string. Returns nil if the initial offset falls outside the string. Returns an empty string if the beginning of the range is greater than the end of the string. str = "hello" str.at(0) #=> "h" str.at(1..3) #=> "ell" str.at(-2) #=> "l" str.at(-2..-1) #=> "lo" str.at(5) #=> nil str.at(5..-1) #=> "" If a Regexp is given, the matching portion of the string is returned. If a String is given, that given string is returned if it occurs in the string. In both cases, nil is returned if there is no match. str = "hello" str.at(/lo/) #=> "lo" str.at(/ol/) #=> nil str.at("lo") #=> "lo" str.at("ol") #=> nil |
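The lookup rules above are Ruby's (`String#at` is an Active Support extension built on `String#[]`). As a cross-check, here is a rough Python analogue of the same semantics; the helper name `at` and the use of a `(start, stop)` tuple to stand in for a Ruby inclusive range are our own conventions, not any library's API:

```python
import re

def at(s, spec):
    # Integer: one character at that offset; negative counts from the end.
    if isinstance(spec, int):
        if spec >= len(s) or spec < -len(s):
            return None          # offset outside the string -> nil in Ruby
        return s[spec]
    # (start, stop) tuple standing in for a Ruby range (inclusive ends).
    if isinstance(spec, tuple):
        start, stop = spec
        if start > len(s):
            return None          # beginning of range outside the string
        stop = stop + 1 if stop != -1 else len(s)
        return s[start:stop]
    # Compiled regex: return the matching portion, or None.
    if isinstance(spec, re.Pattern):
        m = spec.search(s)
        return m.group(0) if m else None
    # Plain string: return it if it occurs as a substring.
    return spec if spec in s else None

print(at("hello", 0))                 # h
print(at("hello", (1, 3)))            # ell
print(at("hello", (-2, -1)))          # lo
print(at("hello", 5))                 # None (Ruby: nil)
print(at("hello", (5, -1)))           # '' (empty string)
print(at("hello", re.compile("lo")))  # lo
print(at("hello", "ol"))              # None
```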
0.9444 | 1 | Yeah. Good Times.: Let's play a guessing game!!!!!!! Wednesday, October 10, 2012 1. Because it's orange instead of white 3. Because there are two different kinds of pasta |
0.919366 | 1 | Shannon's source coding theorem From Wikipedia, the free encyclopedia This article is about the theory of source coding in data compression. For the term in computer programming, see Source code. In information theory, Shannon's source coding theorem (or noiseless coding theorem) establishes the limits to possible data compression, and the operational meaning of the Shannon entropy. The source coding theorem shows that (in the limit, as the length of a stream of independent and identically-distributed random variable (i.i.d.) data tends to infinity) it is impossible to compress the data such that the code rate (average number of bits per symbol) is less than the Shannon entropy of the source, without it being virtually certain that information will be lost. However, it is possible to get the code rate arbitrarily close to the Shannon entropy, with negligible probability of loss. The source coding theorem for symbol codes places an upper and a lower bound on the minimal possible expected length of codewords as a function of the entropy of the input word (which is viewed as a random variable) and of the size of the target alphabet. Source coding is a mapping from (a sequence of) symbols from an information source to a sequence of alphabet symbols (usually bits) such that the source symbols can be exactly recovered from the binary bits (lossless source coding) or recovered within some distortion (lossy source coding). This is the concept behind data compression. Source coding theorem[edit] In information theory, the source coding theorem (Shannon 1948)[1] informally states that (MacKay 2003, pg. 81,[2] Cover: Chapter 5[3]): N i.i.d. random variables each with entropy H(X) can be compressed into more than N H(X) bits with negligible risk of information loss, as N → ∞; but conversely, if they are compressed into fewer than N H(X) bits it is virtually certain that information will be lost.
Source coding theorem for symbol codes[edit] Let Σ1, Σ2 denote two finite alphabets and let Σ1* and Σ2* denote the set of all finite words from those alphabets (respectively). Suppose that X is a random variable taking values in Σ1 and let f be a uniquely decodable code from Σ1* to Σ2* where |Σ2| = a. Let S denote the random variable given by the word length f(X). If f is optimal in the sense that it has the minimal expected word length for X, then (Shannon 1948):

H(X)/log_2 a ≤ E[S] < H(X)/log_2 a + 1

Proof: Source coding theorem[edit] Given X is an i.i.d. source, its time series X1, ..., Xn is i.i.d. with entropy H(X) in the discrete-valued case and differential entropy in the continuous-valued case. The Source coding theorem states that for any ε > 0 for any rate larger than the entropy of the source, there is large enough n and an encoder that takes n i.i.d. repetition of the source, X1:n, and maps it to n(H(X) + ε) binary bits such that the source symbols X1:n are recoverable from the binary bits with probability at least 1 − ε. Proof of Achievability. Fix some ε > 0. The typical set, A_ε^(n), is defined as follows:

A_ε^(n) = { (x1, ..., xn) : |−(1/n) log p(x1, ..., xn) − H(X)| < ε }

The Asymptotic Equipartition Property (AEP) shows that for large enough n, the probability that a sequence generated by the source lies in the typical set, A_ε^(n), as defined approaches one. In particular, for sufficiently large n, Pr(A_ε^(n)) can be made arbitrarily close to 1, and specifically, greater than 1 − ε (See AEP for a proof). The definition of typical sets implies that those sequences that lie in the typical set satisfy:

2^(−n(H(X)+ε)) ≤ p(x1, ..., xn) ≤ 2^(−n(H(X)−ε))

Note that:
• The probability of a sequence being drawn from A_ε^(n) is greater than 1 − ε.
• |A_ε^(n)| ≤ 2^(n(H(X)+ε)), which follows from the left hand side (lower bound) for p(x1, ..., xn).
• |A_ε^(n)| ≥ (1 − ε) 2^(n(H(X)−ε)), which follows from the upper bound for p(x1, ..., xn) and the lower bound on the total probability of the whole set A_ε^(n).

Since n(H(X) + ε) bits are enough to point to any string in this set.
The encoding algorithm: The encoder checks if the input sequence lies within the typical set; if yes, it outputs the index of the input sequence within the typical set; if not, the encoder outputs an arbitrary n(H(X) + ε) digit number. As long as the input sequence lies within the typical set (with probability at least 1 − ε), the encoder doesn't make any error. So, the probability of error of the encoder is bounded above by ε. Proof of Converse. The converse is proved by showing that any set of size smaller than A_ε^(n) (in the sense of exponent) would cover a set of probability bounded away from 1. Proof: Source coding theorem for symbol codes[edit] For 1 ≤ i ≤ n let s_i denote the word length of each possible x_i. Define q_i = a^(−s_i)/C, where C is chosen so that q_1 + ... + q_n = 1. Then

H(X) = −Σ_{i=1..n} p_i log_2 p_i
     ≤ −Σ_{i=1..n} p_i log_2 q_i
     = −Σ_{i=1..n} p_i log_2 a^(−s_i) + Σ_{i=1..n} p_i log_2 C
     = −Σ_{i=1..n} p_i log_2 a^(−s_i) + log_2 C
     ≤ −Σ_{i=1..n} (−s_i) p_i log_2 a
     = E[S] log_2 a

where the second line follows from Gibbs' inequality and the fifth line follows from Kraft's inequality: C = Σ_{i=1..n} a^(−s_i) ≤ 1, so log C ≤ 0. For the second inequality we may set s_i = ⌈−log_a p_i⌉ so that −log_a p_i ≤ s_i < −log_a p_i + 1, and so a^(−s_i) ≤ p_i and Σ a^(−s_i) ≤ Σ p_i = 1, and so by Kraft's inequality there exists a prefix-free code having those word lengths. Thus the minimal S satisfies

E[S] = Σ p_i s_i < Σ p_i (−log_a p_i + 1) = H(X)/log_2 a + 1

Extension to non-stationary independent sources[edit] Fixed Rate lossless source coding for discrete time non-stationary independent sources[edit] Define the typical set A_ε^(n) as:

A_ε^(n) = { x_1^n : |−(1/n) log p(X_1, ..., X_n) − H_n(X)| < ε }

Then, for given δ > 0, for n large enough, Pr(A_ε^(n)) > 1 − δ. Now we just encode the sequences in the typical set, and usual methods in source coding show that the cardinality of this set is smaller than 2^(n(H_n(X)+ε)). Thus, on an average, H_n(X) + ε bits suffice for encoding with probability greater than 1 − δ, where ε and δ can be made arbitrarily small, by making n larger. References[edit] 3. ^ Cover, Thomas M. (2006). "Chapter 5: Data Compression". Elements of Information Theory. John Wiley & Sons. ISBN 0-471-24195-4. |
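The symbol-code bound can be probed numerically. The sketch below is our own illustration (the distribution is invented): with codeword lengths s_i = ⌈−log2 p_i⌉, Kraft's inequality holds and the expected length lands within one bit of H(X), as the theorem states.

```python
import math

probs = {"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}

# Shannon entropy H(X) in bits per symbol.
H = -sum(p * math.log2(p) for p in probs.values())

# Codeword lengths ceil(-log2 p) satisfy Kraft's inequality,
# so a prefix-free code with these lengths exists.
lengths = {x: math.ceil(-math.log2(p)) for x, p in probs.items()}
kraft = sum(2 ** -l for l in lengths.values())

# Expected length is within one bit of the entropy.
expected_len = sum(p * lengths[x] for x, p in probs.items())
print(H, expected_len, kraft)  # dyadic probabilities, so E[S] equals H(X) exactly
```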
0.937566 | 1 | Electric motor winding calculator With the winding calculator you can quickly and conveniently find the optimum configuration for your electric motor winding. You can investigate integer-slot, fractional-slot and concentrated windings, both single and double layer windings. You can compare the maximum fundamental winding factor for different combinations of number of poles and number of slots, display and compare the winding layout for different coil spans, or display and compare the harmonic spectrum of the winding factor for different electric motor windings. To get started, choose if you want to display the number of slots per pole per phase, the maximum fundamental winding factor, the number of winding symmetries, or the least common multiple between the number of poles and the number of slots. Make your choice in the drop-down list below. |
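For reference, the fundamental winding factor that such a calculator tabulates is conventionally computed, for an integer-slot winding, as the product of a distribution factor and a pitch factor. The sketch below uses that standard machine-design formula; it is an assumption on our part, not taken from this tool's documentation:

```python
import math

def fundamental_winding_factor(slots, poles, phases=3, coil_span=None):
    """Distribution factor x pitch factor for an integer-slot winding.

    coil_span is in slot pitches; defaults to full pitch."""
    q = slots / (poles * phases)          # slots per pole per phase
    gamma = math.pi * poles / slots       # slot angle in electrical radians
    k_d = math.sin(q * gamma / 2) / (q * math.sin(gamma / 2))
    full_pitch = slots / poles            # full coil span in slot pitches
    span = coil_span if coil_span is not None else full_pitch
    k_p = math.sin((span / full_pitch) * math.pi / 2)
    return k_d * k_p

# 36 slots, 4 poles, full pitch: q = 3, k_d ≈ 0.960, k_p = 1
print(round(fundamental_winding_factor(36, 4), 3))
```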
0.946634 | 1 | Integration, or antidifferentiation, is the process of finding a function when its derivative is already known. It can be thought of as differentiation in reverse. Integration Rules If we assume that: F'(x) = f(x) Then we can write the integral of f(x) as: ∫ f(x) dx = F(x) + C Where C is an arbitrary constant in the function F(x); it cannot be recovered from the function f(x) alone. Definite Integral The definite integral can be calculated as: ∫_a^b f(x) dx = F(b) − F(a) Here we can see that if we left in the arbitrary constant for the definite integral it would cancel out when we subtract F(a) from F(b). |
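The rule can be checked numerically. In the sketch below (the example function is our own choice), f(x) = 3x² has antiderivative F(x) = x³, and a Riemann sum over [a, b] approaches F(b) − F(a):

```python
def definite_integral(f, a, b, n=100_000):
    # Midpoint Riemann sum approximating the definite integral of f on [a, b].
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

f = lambda x: 3 * x**2   # the known derivative
F = lambda x: x**3       # an antiderivative (any constant C would cancel)

a, b = 1.0, 2.0
print(definite_integral(f, a, b))  # approaches F(b) - F(a) = 8 - 1 = 7
```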
0.923066 | 1 | Definition of Exponential Functions Exponential functions are generally given in the form f(x) = a^x (with a > 0 and a ≠ 1). They are continuous for all values of x, and have inverses that are logarithmic functions. The derivative of an exponential function is d/dx a^x = a^x ln a, which is based on the chain rule and a restatement of a^x as e^(x ln a) (because the natural exponent and the natural log are inverse functions of one another). The power rule does not apply to exponential functions, because the variable is the exponent rather than the base. In the special case of the natural exponent e^x (where e is also known as Euler's number), the first derivative is d/dx e^x = e^x. Euler's number e is special because it is irrational and not a root of any non-zero polynomial that has rational coefficients. |
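A quick numerical sanity check of the rule d/dx a^x = a^x ln a (our own illustration, with a = 2 chosen arbitrarily):

```python
import math

def numerical_derivative(f, x, h=1e-6):
    # Central difference approximation of f'(x).
    return (f(x + h) - f(x - h)) / (2 * h)

a = 2.0
f = lambda x: a ** x
x = 1.5

approx = numerical_derivative(f, x)
exact = a ** x * math.log(a)   # a^x * ln(a)
print(approx, exact)           # the two agree to several decimal places
```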
0.939222 | 1 | Extreme Sudoku Sudoku Too Easy? Then try some of our extra-fiendish puzzles. The basic rules of Sudoku are easy. Just place the digits from 1 to 9 in each empty cell. Each row, column, and 3 x 3 box must contain only one of each of the 9 digits. Solving these puzzles is a different matter entirely, since these are the most difficult puzzles we create. Difficulty depends on the type of steps required to solve them, and also on the number of each type of step. To print the puzzles use either the Print button below the grid, or if you want to print the pencilmarks as well, use the browser Print option under the File menu. The Show Conflicts button does not apply the solving logic - just checks whether there are any conflicting digits already in the grid. So it's possible to have incorrect digits that don't conflict, but eventually you will get stuck. There are automatic pencilmarks that appear if you check the Pencilmarks box. These update by themselves as you solve particular cells and cannot be edited manually. If you prefer to enter your own pencilmarks, up to six digits can be entered in each cell. You can add symbols too, such as question marks. To save any position, right-click on Permalink under the grid, and click Add To Favorites or Bookmark This Link. The position is saved as 81 digits at the end of the URL string, with hyphens used for empty cells. Extreme Sudoku posts five new puzzles every day. Each puzzle has a unique solution and can be solved with pure logic. Lots of it. Guessing is never required - but it may help! |
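The Show Conflicts behaviour described above (flag duplicate digits in any row, column, or 3 x 3 box, with no solving logic applied) can be sketched as follows; this is our own illustration, not the site's code:

```python
def conflicts(grid):
    """grid: 9x9 list of lists, 0 for an empty cell.
    Returns the set of (row, col) cells that clash with another cell."""
    bad = set()

    def check(cells):
        seen = {}
        for (r, c) in cells:
            d = grid[r][c]
            if d == 0:
                continue
            if d in seen:          # same digit twice in this unit
                bad.add((r, c))
                bad.add(seen[d])
            seen[d] = (r, c)

    for i in range(9):
        check([(i, j) for j in range(9)])   # row i
        check([(j, i) for j in range(9)])   # column i
    for br in range(0, 9, 3):
        for bc in range(0, 9, 3):           # each 3x3 box
            check([(br + r, bc + c) for r in range(3) for c in range(3)])
    return bad

g = [[0] * 9 for _ in range(9)]
g[0][0], g[0][5] = 7, 7       # duplicate 7 in row 0
print(conflicts(g))            # the clashing cells: (0, 0) and (0, 5)
```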
0.952222 | 1 | Presenting the task: Initially the class is to work in groups of three. Each group should have a supply of Unifix® cubes of two different colors — about 40 cubes of each color. The teacher should explain that the task is to build towers of Unifix® cubes, saying something like this: "Each tower is to be three cubes tall. You may use the cubes on your table, which include cubes of two different colors. Please build as many different towers as possible. "Besides building the towers, please explain your work to the other students at your table, to convince them that you have not left any out, and that you have no duplicates. Please make only towers that are right-side up, and do not make any "upside down" towers." (Illustrations of an upright tower and an upside-down tower accompanied the original.) Then the teacher should pass out copies of the student sheet and read through the directions to be sure that everyone understands the task. Student assessment activity: See the next page. Copyright © National Academy of Sciences. All rights reserved. Terms of Use and Privacy Statement |
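With two colors and towers three cubes tall there are 2 × 2 × 2 = 8 distinct towers, which a short enumeration confirms (the color names are arbitrary):

```python
from itertools import product

colors = ["red", "blue"]                   # any two colors work
towers = list(product(colors, repeat=3))   # each tower read bottom to top

for t in towers:
    print("-".join(t))
print(len(towers))  # 8 = 2**3
```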
0.917014 | 1 | Galilaei (lunar crater) From Wikipedia, the free encyclopedia Coordinates 10°30′N 62°42′W / 10.5°N 62.7°W Diameter 15.5 km Depth 1.4 km Colongitude 63° at sunrise Eponym Galileo Galilei Galilaei is a lunar impact crater located in the western Oceanus Procellarum. Some distance to the southeast is the crater Reiner, while to the south-southwest is Cavalerius. Northeast of the crater is a meandering rille named the Rima Galilaei. To the southeast is the unusual Reiner Gamma formation, a swirling arrangement of light-hued ray-like material. Galilaei is relatively undistinguished, with a sharp-edged rim that has a higher albedo than the surrounding maria. The inner walls slope down to a ring of debris on the outer edges of the interior floor. There is a small central rise near the midpoint. About 40 kilometers to the south is the landing site of the Luna 9 robotic probe, the first such vehicle to make a controlled landing on the lunar surface. Despite being the first person to publish astronomical observations of the Moon with a telescope, Galileo Galilei is honored only with this unremarkable formation. Initially, the name Galilaeus had been applied by Giovanni Battista Riccioli, an Italian Jesuit who produced one of the first detailed maps of the Moon in 1651, to a large and bright nearby albedo feature (now known as Reiner Gamma). The name was transferred to its present location by Johann Heinrich Mädler in his influential Mappa Selenographica, published in collaboration with Wilhelm Beer in four parts between 1834 and 1836. Mädler's motive for this change was the fact that his lunar map did not name albedo features, forcing him to transfer Galileo's name to an insignificant nearby crater. 
Satellite craters[edit]

Galilaei | Latitude | Longitude | Diameter
A | 11.7° N | 62.9° W | 11 km
B | 11.4° N | 67.6° W | 15 km
D | 8.7° N | 62.7° W | 1 km
E | 14.0° N | 61.8° W | 7 km
F | 12.3° N | 66.2° W | 3 km
G | 12.7° N | 67.1° W | 1 km
H | 11.5° N | 68.7° W | 7 km
J | 13.0° N | 61.9° W | 4 km
K | 13.0° N | 62.7° W | 3 km
L | 13.2° N | 58.5° W | 3 km
M | 13.3° N | 56.8° W | 3 km
S | 15.4° N | 64.7° W | 2 km
T | 16.2° N | 61.4° W | 2 km
V | 17.1° N | 60.3° W | 3 km
W | 17.8° N | 60.5° W | 4 km |
0.989538 | 1 | We roll two fair six-sided dice. Find the probability of the following events: (a) The two dice show the same number. (b) The number that appears on the first die is larger than the number on the second. (c) The sum of the dice is even. (d) The product of the dice is a perfect square. |
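Since the page shows no answer, here is one way to check all four parts by enumerating the 36 equally likely outcomes:

```python
from fractions import Fraction
from math import isqrt

outcomes = [(d1, d2) for d1 in range(1, 7) for d2 in range(1, 7)]
n = len(outcomes)  # 36 equally likely outcomes

def prob(event):
    # Count favourable outcomes and return an exact fraction.
    return Fraction(sum(1 for o in outcomes if event(o)), n)

same   = prob(lambda o: o[0] == o[1])                             # (a) 6/36  = 1/6
larger = prob(lambda o: o[0] > o[1])                              # (b) 15/36 = 5/12
even   = prob(lambda o: (o[0] + o[1]) % 2 == 0)                   # (c) 18/36 = 1/2
square = prob(lambda o: isqrt(o[0] * o[1]) ** 2 == o[0] * o[1])   # (d) 8/36  = 2/9
print(same, larger, even, square)
```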
0.953578 | 1 | Genre Strategy -> Puzzle Today's Rank 0 Date N/A Publisher N/A The game is a classic physics-engine game. How to play: Wooden boxes, tires and balloons can be bombed. Click an item to bomb it. Black iron can't be bombed; it is safe for the robot to fall on it. Be careful: you have a fixed number of bombs. If, after you have used all your bombs, the robot doesn't fall to the ground, you pass the level; otherwise you fail. 200 levels. Enjoy the process of the game. You can try different ways to complete the task. Beautiful game backgrounds and dynamic game music. You can choose any level to begin. Human interface descriptions and game tips. |
0.973995 | 1 | The Great Calculation According to the Indians, of Maximus Planudes - The Digits of the System, II Peter G. Brown For greater clarity let me also say the following: The number lying in the first place indicates the number of units there are. The second is the number of tens, the third the number of hundreds, the fourth the number of thousands and so on, through as many places as the number occupies. Note also that as the number proceeds through the four places it changes its literal name each time.9 Then in the fifth place it takes again its original name, not just the number itself, but tied to its place value. This continues 'till the eighth place in which it takes the name of the fourth, and so it continues in turn. Thus, in the previous example given above, 2 indicates and is read as 'two', 9 as 'ninety', 5 as 'five hundred' and 4 as 'four thousand'. The sign 7 is then 'seven (units of) myriads', just as we referred to 'two' (units) in the first position. Similarly the next sign is for seven, but is in fact seven myriads,10 and 2 is for twenty myriads; just as we had ninety in the second position, so here the sign means twenty, for both numbers are decadic. This is the same as the case with the monadic numbers that preceded them, and so on in turn. The cipher is never placed at the left-hand end of the digits but can appear in the middle of the number or at the right-hand side, that is, at the extreme side before the smallest (non-zero) place digit.11 Not only one, but two, three, four or as many zeros as are required may be placed in the middle or in the other aforementioned place. Just as the (number of) places increases the size of the number, so too does the number of ciphers. For example, one cipher lying at the end makes the number decadic, 50 is fifty in fact, two ciphers make it hecatontadic, thus 400 is four hundred, and so on in turn. 
If one cipher lies in the middle and there is only one symbol before it, it makes that number hecatontadic, thus 302 is three hundred and two, but if there are two such signs, the number is chiliadic, thus 6005 is six thousand and five. If there is a single cipher with two signs after it, this indicates a chiliadic number, thus 6043 is six thousand and forty three, but if there are two then the number is myriadic, thus 60043 is six myriads and forty three, and so on in turn. To put it simply12, the number is to be understood by the order in which the symbols are placed. |
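Planudes' reading rule is exactly positional notation: each place contributes its digit times the next power of ten, and the place names recycle every four places (units through thousands, then units of myriads, where a myriad is 10,000). A small sketch using the running example, which from the digit readings above appears to be 274592:

```python
place_names = ["units", "tens", "hundreds", "thousands"]

def read_digits(number_string):
    """Spell out each digit's contribution, smallest place first."""
    parts = []
    for i, ch in enumerate(reversed(number_string)):
        myriads, small = divmod(i, 4)   # place names recycle every four places
        name = place_names[small] + " of myriads" * myriads
        parts.append((int(ch) * 10 ** i, name))
    return parts

total = 0
for value, name in read_digits("274592"):
    total += value
    print(value, name)
print(total)  # 274592
```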
0.977874 | 1 | ii- find the radiation resistance. v- design a non-uniform linear array to have the pattern: E = [(sin(3Ψ/2))/(sin(Ψ/2))]^2 where Ψ = π cos φ, and φ is measured from the array line; namely, find the number of elements, the element spacing, and the current amplitudes. Also, what is the relative side lobe level? (Note: this one has been answered before, but there was a mistake in my typing of the question, so I am resubmitting it here.) |
0.90633 | 1 | factor group Quotient group In mathematics, given a group G and a normal subgroup N of G, the quotient group, or factor group, of G over N is intuitively a group that "collapses" the normal subgroup N to the identity element. The quotient group is written G/N and is usually spoken in English as G mod N (mod is short for modulo). If N is not a normal subgroup, a quotient may still be taken, but the result will not be a group; rather, it will be a homogeneous space. The product of subsets of a group In the following discussion, we will use a binary operation on the subsets of G: if two subsets S and T of G are given, we define their product as ST = { st : s in S and t in T }. This operation is associative and has as identity element the singleton {e}, where e is the identity element of G. Thus, the set of all subsets of G forms a monoid under this operation. In terms of this operation we can first explain what a quotient group is, and then explain what a normal subgroup is: A quotient group of a group G is a partition of G which is itself a group under this operation. It is fully determined by the subset containing e. A normal subgroup of G is the set containing e in any such partition. The subsets in the partition are the cosets of this normal subgroup. A subgroup N of a group G is normal if and only if the coset equality aN = Na holds for all a in G. In terms of the binary operation on subsets defined above, a normal subgroup of G is a subgroup that commutes with every subset of G and is denoted N ◁ G. A subgroup that permutes with every subgroup of G is called a permutable subgroup. Let N be a normal subgroup of a group G. We define the set G/N to be the set of all left cosets of N in G, i.e., G/N = { aN : a in G }. The group operation on G/N is the product of subsets defined above. In other words, for each aN and bN in G/N, the product of aN and bN is (aN)(bN). 
This operation is closed, because (aN)(bN) really is a left coset: (aN)(bN) = a(Nb)N = a(bN)N = (ab)NN = (ab)N. The normality of N is used in this equation. Because of the normality of N, the left cosets and right cosets of N in G are equal, and so G/N could be defined as the set of right cosets of N in G. Because the operation is derived from the product of subsets of G, the operation is well-defined (does not depend on the particular choice of representatives), associative, and has identity element N. The inverse of an element aN of G/N is a−1N. Motivation for definition The reason G/N is called a quotient group comes from division of integers. When dividing 12 by 3 one obtains the answer 4 because one can regroup 12 objects into 4 subcollections of 3 objects. The quotient group is the same idea, however we end up with a group for a final answer instead of a number because groups have more structure than a random collection of objects. To elaborate, when looking at G/N with N a normal subgroup of G, the group structure is used to form a natural "regrouping". These are the cosets of N in G. Because we started with a group and normal subgroup the final quotient contains more information than just the number of cosets (which is what regular division yields), but instead has a group structure itself. • Consider the group of integers Z (under addition) and the subgroup 2Z consisting of all even integers. This is a normal subgroup, because Z is abelian. There are only two cosets: the set of even integers and the set of odd integers; therefore, the quotient group Z/2Z is the cyclic group with two elements. This quotient group is isomorphic with the set { 0, 1 } with addition modulo 2; informally, it is sometimes said that Z/2Z equals the set { 0, 1 } with addition modulo 2. • A slight generalization of the last example. Once again consider the group of integers Z under addition. Let n be any positive integer. 
We will consider the subgroup nZ of Z consisting of all multiples of n. Once again nZ is normal in Z because Z is abelian. The cosets are the collection {nZ,1+nZ,...,(n−2)+nZ,(n−1)+nZ}. An integer k belongs to the coset r+nZ, where r is the remainder when dividing k by n. The quotient Z/nZ can be thought of as the group of "remainders" modulo n. This is a cyclic group of order n. • Consider the multiplicative abelian group G of complex twelfth roots of unity, which are points on the unit circle, shown on the picture on the right as colored balls with the number at each point giving its complex argument. Consider its subgroup N made of the fourth roots of unity, shown as red balls. This normal subgroup splits the group into three cosets, shown in red, green and blue. One can check that the cosets form a group of three elements (the product of a red element with a blue element is blue, the inverse of a blue element is green, etc.). Thus, the quotient group G/N is the group of three colors, which turns out to be the cyclic group with three elements. • Consider the group of real numbers R under addition, and the subgroup Z of integers. The cosets of Z in R are all sets of the form a + Z, with 0 ≤ a < 1 a real number. Adding such cosets is done by adding the corresponding real numbers, and subtracting 1 if the result is greater than or equal to 1. The quotient group R/Z is isomorphic to the circle group S1, the group of complex numbers of absolute value 1 under multiplication, or correspondingly, the group of rotations in 2D about the origin, i.e., the special orthogonal group SO(2). An isomorphism is given by f(a + Z) = exp(2πia) (see Euler's identity). • If G is the group of invertible 3 × 3 real matrices, and N is the subgroup of 3 × 3 real matrices with determinant 1, then N is normal in G (since it is the kernel of the determinant homomorphism). 
The cosets of N are the sets of matrices with a given determinant, and hence G/N is isomorphic to the multiplicative group of non-zero real numbers. • Consider the abelian group Z4 = Z/4Z (that is, the set { 0, 1, 2, 3 } with addition modulo 4), and its subgroup { 0, 2 }. The quotient group Z4 / { 0, 2 } is { { 0, 2 }, { 1, 3 } }. This is a group with identity element { 0, 2 }, and group operations such as { 0, 2 } + { 1, 3 } = { 1, 3 }. Both the subgroup { 0, 2 } and the quotient group { { 0, 2 }, { 1, 3 } } are isomorphic with Z2. • Consider the multiplicative group G = Z*_{n²}. The set N of nth residues is a multiplicative subgroup of order φ(n). Then N is normal in G and the factor group G/N has the cosets N, (1+n)N, (1+n)²N, …, (1+n)^(n−1)N. The Paillier cryptosystem is based on the conjecture that it is difficult to determine the coset of a random element of G without knowing the factorization of n. The quotient group G / G is isomorphic to the trivial group (the group with one element), and G / {e} is isomorphic to G. The order of G / N is by definition equal to [G : N], the index of N in G. If G is finite, the index is also equal to the order of G divided by the order of N. Note that G / N may be finite, although both G and N are infinite (e.g. Z / 2Z). There is a "natural" surjective group homomorphism π : G → G / N, sending each element g of G to the coset of N to which g belongs, that is: π(g) = gN. The mapping π is sometimes called the canonical projection of G onto G / N. Its kernel is N. There is a bijective correspondence between the subgroups of G that contain N and the subgroups of G / N; if H is a subgroup of G containing N, then the corresponding subgroup of G / N is π(H). This correspondence holds for normal subgroups of G and G / N as well, and is formalized in the lattice theorem. Several important properties of quotient groups are recorded in the fundamental theorem on homomorphisms and the isomorphism theorems. 
If G is abelian, nilpotent or solvable, then so is G / N. If G is cyclic or finitely generated, then so is G / N. If N is contained in the center of G, then G is called the central extension of the quotient group. If H is a subgroup in a finite group G, and the order of H is one half of the order of G, then H is guaranteed to be a normal subgroup, so G / H exists and is isomorphic to C2. This result can also be stated as "any subgroup of index 2 is normal", and in this form it applies also to infinite groups. Every group is isomorphic to a quotient of a free group. Sometimes, but not necessarily, a group G can be reconstructed from G / N and N, as a direct product or semidirect product. The problem of determining when this is the case is known as the extension problem. An example where it is not possible is as follows. Z4 / { 0, 2 } is isomorphic to Z2, and { 0, 2 } also, but the only semidirect product is the direct product, because Z2 has only the trivial automorphism. Therefore Z4, which is different from Z2 × Z2, cannot be reconstructed. Copyright © 2015 Dictionary.com, LLC. All rights reserved. |
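The Z4 / { 0, 2 } example can be made concrete in a few lines; in the sketch below (our own illustration), frozensets stand in for cosets:

```python
n = 4
G = set(range(n))               # Z4 under addition mod 4
N = frozenset({0, 2})           # a (normal) subgroup

# The cosets a + N partition G.
cosets = {frozenset((a + x) % n for x in N) for a in G}

# Coset addition: (a + N) + (b + N) = (a + b) + N, well defined since N is normal.
def add(A, B):
    a, b = next(iter(A)), next(iter(B))   # any representatives work
    return frozenset((a + b + x) % n for x in N)

evens = frozenset({0, 2})
odds  = frozenset({1, 3})
print(len(cosets))                 # two cosets: {0, 2} and {1, 3}
print(add(odds, odds) == evens)    # True: Z4/N behaves like Z2
```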
0.916349 | 1 | Probabilities and Arrays This is a discussion on Probabilities and Arrays within the C Programming forums, part of the General Programming Boards category; Hey, i'm having trouble with having to simulate a user entered number of groups and user entered number of group ... 1. #1 Registered User Join Date Mar 2010 Probabilities and Arrays Hey, i'm having trouble with having to simulate a user entered number of groups and user entered number of group members. I am supposed to have a function with the prototype int group(int n) that will simulate one group at a time with n members. I have to assign each member with a random birthday and have the program check to see if two of members have the same birthday. If one group has two members with the same birthday, the function returns 1. If all the birthdays at one party are different, the function returns 0. Then main() counts the number of 1's, divides it by the number of parties and prints out a percentage. Here's what I have so far, it assigns each member with a bday,but i'm not sure how to make a loop if two or more members have the same bday and return a 1 or 0... #include <stdlib.h> #include <time.h> int group(int g, int p); int members; long int groups, samebday; group(members, groups); return 0; int group(int g, int p) int members, count, i, j, bdays[40]; long int groups; printf("Enter the number of groups: "); scanf("%ld", &groups); printf("Enter the number of members: "); scanf("%d", &members); for (j = 1; j <= groups; j++) for(i = 0; i < members; i++) bdays[i] = 1 + rand() % 365; Any help will be appreciated, thanks! 2. #2 and the Hat of Guessing tabstop's Avatar Join Date Nov 2007 You can go through all the pairs (0&1, 0&2, 0&3, ..., 0&39, 1&2, 1&3, 1&4, .........) and check if they're the same; if they are, set some flag to 1. 3. #3 Registered User Join Date Sep 2006 Welcome to the forum, Simplicity. 
There is a slick way of doing this, called distribution counting or counting sort or just "binning". Anyway, whatever the name, here's how it works:

#include <stdio.h>

int main() {
    int i;
    int num[10] = { 4, 1, 4, 12, 3, 8, 12, 4, 9, 4 };
    int count[13] = { 0 };

    for(i = 0; i < 10; i++)      /* tally each value into its bin */
        count[num[i]]++;

    for(i = 0; i < 13; i++)      /* report any value seen more than once */
        if(count[i] > 1)
            printf("\n%2d: %2d", count[i], i);

    printf("\n\n\t\t\t press enter when ready");
    getchar();
    return 0;
}

When you run it, it shows you have 4 4's and 2 12's, in your num[] array. Note that the size of the count[] array MUST be able to go to the highest value in num[]. If the highest number in num was 126, then count would have to have at least 127 elements in it (the +1 is because arrays in C start with 0's, not 1's). Back to your program. After every party, you run this bit of code, and it will quickly tell you which parties have guests with duplicate bdays. Remember to reset the count array to all zero's, in a for loop, before the next party. This is a good programming trick to remember, btw. |
0.904478 | 1 | Sun Identity Manager Deployment Reference General Session Workflow Services The com.waveset.session.WorkflowServices class contains a set of services that support a range of operations, including general operations (get, checkout, checkin, etc.) on views and objects. General Session Workflow Services Call Structure Workflows have an internal hierarchical structure that constrains both flow of control and scope of variables. A workflow (also called a Case or WFCase) contains a list of workflow Activity elements. A workflow activity contains a list of Action elements. A variable declared at the WFCase level is visible to all Activity and Action elements. If a variable named color is declared at the WFCase level, and then again in an Action, they are effectively two different variables, such that changing the value of the color variable in the Action will not affect the value of the color variable from the WFCase. Workflow services are called from workflow actions. WorkflowServices provides a set of operations that are selected through the value of the op argument. Each operation can have a different set of arguments, so the calling 'signature' must match the specified service itself. The general form of a workflow service action is shown in the following code example:

<Action class='com.waveset.session.WorkflowServices'>
  <Argument name='op' value='workflowServiceOp'/>
  <Argument name='argname1'/>
  <Argument name='argname2'/>
  <Argument name='argnameN'/>
</Action>

Each of the supported workflow services has a variable number of required and optional arguments. The op argument to the session workflow services call must specify one of the provided services. This is similar to calling a method by reflection, where the name of the method to be called is similar to the name of the workflow service to be executed. 
If an op argument is given that is not on the following list, the workflow services return 'Unknown WorkflowServices op' and the workflow context variable WF_ACTION_ERROR will be non-null. Supported Session Workflow Services The Javadoc for com.waveset.session.WorkflowServices documents the internal Identity Manager Java string fields rather than the nomenclature required by XPRESS. For example, the documentation for the workflow services operation addDeferredTask is provided by the Javadoc for the WorkflowServices OP_ADD_DEFERRED_TASK string field. In general, to find the Javadoc for an operation, take the operation name, capitalize the words making up the operation name, separate them with underscores, and prefix the result with OP_. Operation argument names are treated similarly, but prefixed with ARG_. |
0.916824 | 1 | 99 questions/Solutions/22 From HaskellWiki Create a list containing all integers within a given range.

range x y = [x..y]

range = enumFromTo

range x y = take (y-x+1) $ iterate (+1) x

range start stop
    | start > stop  = reverse (range stop start)
    | start == stop = [stop]
    | start < stop  = start : range (start+1) stop

The following does the same but without using a reverse function

range :: Int -> Int -> [Int]
range n m
    | n == m = [n]
    | n < m  = n : range (n+1) m
    | n > m  = n : range (n-1) m

or, a generic and shorter version of the above

range :: (Ord a, Enum a) => a -> a -> [a]
range a b | a == b = [a]
range a b = a : range ((if a < b then succ else pred) a) b

or with scanl

range l r = scanl (+) l (replicate (r - l) 1)

Since there's already syntactic sugar for ranges, there's usually no reason to define a function like 'range' in Haskell. In fact, the syntactic sugar is implemented using the enumFromTo function, which is exactly what 'range' should be. |
0.901959 | 1 | Aspect Two- Hope Hope is to believe Hope is to never give up Hope is to fight for your friends Hope is to fall down but rise again Let no one stop you Let no one tell you lies Let no one hurt your friends Let no one make you feel useless For you can shine bright For you are your own hope For you are the friend of many For you are with a hopeful heart For no one can change that |
Basic Structure of a Digital Computer [Part One] This Essay is meant as a preparation for the next Essay, which is about the ontological status of ARTIFICIAL LIFE (a-life), mainly that brand of a-life that is computer-generated. For an assessment of the ontological status of the creations of a-life it is paramount to know something about the general and basic structure of a computer, in our case a digital computer. Only after possessing some insights into the structure of these interesting machines can we rationally speak about the role and nature of the substrate of artificial-life creatures in order to assess the reality-status of those creatures. For this purpose we do not need a description of all the details and design features of modern computers. Just a general lay-out of the very basics is necessary. I will devote much attention to the general workings of Boolean logic (hardware) circuits (but only a very simple example of them will be treated here [Part Two of this Essay]). Thereby it is important to realize that the computer-hardware is a physical device that obeys the laws of physics. Further one must realize that a computer can simulate phenomena of the outside world, and although these simulations are not the same as the phenomena which are being simulated, those simulations are something in their own right. What they are (in their own right) depends on their general and detailed structure and/or behavior. In the next Essay (about artificial life) these subjects will be addressed fully.
In Part One of this Essay I will treat of the general CONCEPT of a digital computer, in terms of Turing machines, a concept already worked out in the 1930s. There are two main types of computers, analog computers and digital computers. Digital computers are discrete machines having access to a finite number of internal states only, while analog machines have access, in principle, to an infinite number of internal states and could therefore be expected to outperform digital machines. Examples of this enhanced ability of analog computers would include solving halting-problems (i.e. in principle being able to determine in advance whether any program, fed into the computer, will or will not yield a definite result in a finite number of steps), and generating non-computable numbers. For a digital computer one cannot write a test-program that could determine for any other program, to be run on such a computer, whether that program (to be tested) will, when run, give a definite result after a finite number of steps. Also a digital computer cannot compute certain numbers (so-called non-computable numbers). Further we have serial machines and parallel machines. While a serial machine can only perform one computational step after another, a parallel machine can execute more than one such step simultaneously. Parallel machines accordingly consist of more than one processor, which operate in harmony. Many processes in Nature are in fact proceeding in a parallel fashion, and can therefore adequately be simulated by such machines. But it is possible to simulate such a parallel machine on a serial machine. In our discussions and explanations we will confine ourselves mainly to DIGITAL SERIAL COMPUTING MACHINES, but the principle (concept) expounded covers parallel machines as well. The Digital (serial) Computer The main components of a digital computer are a central processing unit (CPU), a memory, and input and output devices (see Figure 1). Besides these main components we find slow secondary storage devices, such as floppy disks and hard disks.
These can contain data and programs that can be used as input. They also can receive output. Further there may be one or more control units that check and regulate information-flow (information-traffic). Figure 1. Von Neumann's computer architecture -- the layout of a typical serial machine. Except with respect to input and output devices, the information-flow is in a back-and-forth fashion. To program such a computer, in order that it will solve a certain problem or generate some desired result, the programmer first writes an algorithm, which is a solution of the problem in the form of a sequence of steps, written in ordinary language. This algorithm is then coded into a suitable programming language that will enable the computer to 'understand' and successfully execute the corresponding instructions. Usually, this involves one of the 'higher' programming languages, so called because they are reasonably close to human language. But because the Central Processing Unit (CPU) is composed of a set of Boolean logic circuits, it is capable only of performing elementary arithmetic operations such as addition, subtraction, multiplication, and so on. Thus the original programming language fed into the computer must first be converted by means of an interpreter (translates one line of code and executes it, then translates the next line, etc.), or a compiler (translates the whole program and then executes it) into a machine-readable assembly language (instructions coded in this assembly language are then directly translated into machine-code, which consists of the electronic equivalents of 0's and 1's). Only then is the machine able to execute the fed-in instructions. The input is coded up in memory, which is a grid of electronic on-off switches. The processor, which is a chip of integrated circuitry, alters what is in the memory, resulting in a different on-off pattern of switches, and then the output decodes and displays the new contents of the memory.
So the actual computation consists of the processor's activities on the memory. Accordingly the processor and the memory stand in mutual contact with each other. Turing Machines Now exactly what is it that the processor has done? [See among other sources, RUCKER, Mind Tools, 1988] 1. The processor is able to read the symbol stored in whatever location it is looking at. That is, it can tell if the switch is set to ON or OFF. 2. The processor is able to change the contents of the memory location it is currently scanning. That is, it can change the position of the switch it is observing. 3. The processor is able to move its attention to a new memory location. 4. The processor is able to change its internal state. That is, it can change its predilections about what to do next. In fact this is a concept of what a computer does, outlined by TURING in 1936. A machine, stripped to the bare bones of this concept is nowadays called a Turing Machine. A Turing machine [See, among others, PETERSON, 1988, The Mathematical Tourist, pp. 194] consists of a Head that can scan a Tape, and that can be in one out of a finite number of internal states. Such a Tape, which can be interpreted as memory, consists of a linear arrangement of cells. Each cell can itself be in one of a finite number of cell-states. Because we are aiming at electronic computers which have a memory board consisting of on-off switches, we consider cells which can be in only one out of two states, ON or OFF, which can be represented by the cell being BLACK or WHITE respectively. The Head of the Turing Machine can READ the cell (state) on which it is currently placed (we can conceive of the Head being moved along the Tape, or the Tape being moved over the Head), i.e. it can determine if that cell is BLACK or WHITE. If the cell is BLACK it can erase the black, or leave it BLACK. If the cell is WHITE it can leave it like that or make it a BLACK cell.
After this the Head can move one cell to the left or one cell to the right. When this is accomplished the machine can either stay in the same internal state, or change its state into another. After it has performed a certain number of such tasks, the machine will turn itself off. An action table stipulates what a Turing Machine will do for each possible and relevant combination of cell-state (BLACK or WHITE) and internal state [ In a real (i.e. physical) computer this table is in fact one or another Boolean function, fed into the machine as a code, and will then be physically represented by an electrical circuit that gives a certain output depending on its input (for example numbers -- ultimately in the form of an on-off pattern of memory elements)]. The first part of the instruction specifies what the machine should write, if anything, depending on which cell-state (BLACK or WHITE) it encounters. The second part specifies whether the machine is to shift one cell to the left or to the right along the tape. The third part determines whether the machine stays in the same internal state or shifts to another state, which usually has a different set of instructions. Suppose [PETERSON, pp. 195] a Turing Machine must add two integers (this accordingly is a special Turing Machine -- out of many possible -- that specializes in executing this particular task, the adding of any two whole numbers). We can represent such a whole number by a consecutive series of BLACK cells, for example the number 3 can be represented by three consecutive BLACK cells on the tape, and the number 4 can be represented by four such cells. If we now want to ADD these two numbers, we write them both on the tape, with a WHITE cell in between. When we think of the cell-states as being either OFF (= WHITE) or ON (= BLACK), then our INPUT will be written on the tape as follows : ...0001110111100... In order to ADD these numbers the machine fills in the blank cell, giving : ...00011111111000...
, and then goes to the end of the string (of 1's) and erases the last 1 in the row, which results in the correct answer, namely a consecutive series of seven 1's : ...0001111111000... An action table is needed to instruct the machine how to perform this addition (The foregoing was just an algorithm, a recipe, that still must be translated into an executable program). The table's first column gives the machine's possible internal states, and the first row lists all the cell-states being used (in our case the cell-states BLACK and WHITE).

                       Cell-state encountered :
                       BLACK                            WHITE
(Internal) State 0     move right, get in state 0.      put BLACK, move right, get in state 1.
(Internal) State 1     move right, get in state 1.      move left, get in state 2.
(Internal) State 2     erase, stop.

Each combination of (internal) State and Cell-state specifies what, if anything, needs to be done to a cell, in which direction to move after the action, and the (internal) state of the machine, that is, which set of instructions it will follow for its next move. The above action-table can now be written down in a more compact way by coding the details as follows : BLACK = X, WHITE = B, move right = R, move left = L. For example 1XR2 means : IF the internal state of the machine is 1, and the cell encountered is BLACK, THEN the Head of the machine must move to the next cell on the right, and the machine must enter internal state 2. Two other examples of instructions are, say, 2BX2, or 2BL3. They signify the following : 2BX2 : IF the internal state is 2, and the encountered cell is WHITE, THEN the Head must make that cell BLACK, and remain in state 2. 2BL3 : IF the internal state is 2, and the encountered cell is WHITE, THEN the machine must move to the next cell to the left, and go into (internal) state 3. So we can now write down the above action-table (i.e. the program) more compactly : 1. 0XR0 2. 1XR1 3. 2XB stop 4. 0BXR1 5.
1BL2 Each one of these five instructions specifies, among other things, the next internal state of the machine. If this (new) internal state is, say, 1, and the machine has settled on a, say, BLACK (= X) cell, then, in the action-table, an instruction must be looked up that begins with 1X. This is instruction (2) of the action-table : 1XR1. If no such entry is to be found in the action-table, or if there are more than one such entries, then (it is so stipulated) the machine will turn itself off, because no definite course could then be followed. The input pattern (in our example) is BLACK, BLACK, BLACK, WHITE, BLACK, BLACK, BLACK, BLACK, or, equivalently, X X X B X X X X. With the above action-table we can now compute the sum, i.e. 3 + 4 = 7. To begin with, the machine is set in state 0 and is placed at the BLACK (= X) cell farthest to the left. After execution of (1) [ See the above table ] it finds itself in state 0 and has moved one cell to the right, this new cell is again BLACK (= X). So it must execute (1) once more, resulting in having shifted one more cell to the right which is also BLACK (= X). Again it must execute (1), and this results in encountering the WHITE (= B) cell and still being in internal state 0. So next it must execute (4), which means it must make that cell BLACK, move one cell to the right and go into internal state 1. By doing so it encounters a BLACK cell, so it must execute (2), resulting in moving one cell to the right and remaining in state 1. It thereby finds again a BLACK cell, so it must again execute (2), resulting in finding another BLACK cell to the right. Again it must execute (2), resulting in encountering the last BLACK cell to the right. Once again it must execute (2), now resulting in finding a WHITE cell to the right. This implies that the machine must now execute (5). According to that instruction it must go one cell to the left and go into internal state 2. There it finds a BLACK cell.
This implies that it must now execute (3), which means that the Head must make the encountered cell WHITE, and then turn itself off. The result is a string of seven consecutive BLACK cells : X X X X X X X. With this the calculation is completed. The figure below pictures the computational steps of this calculation : Figure 2. At each step, a Turing Machine may move one space to the left or right. By following a simple set of rules, this particular machine can add two whole numbers. The strategy of this particular machine is : Starting with separate groups of -- as in the example -- three and four BLACK cells, and ending up with one group of -- as in the example -- seven BLACK cells. The same action-table can generate the sum of any two whole numbers, no matter what their size, as long as it is finite. But adding two numbers such as 49985 and 51664, by itself, would require a tape with at least 100000 cells. To be capable of adding any two numbers, the tape would have to be infinitely long, which does not, however, mean an actually infinite tape, but only a potentially infinite tape, which means that whatever the length of the tape already is, we can always add some tape if necessary. Similar tables can be worked out for subtraction and for practically any other mathematical operation. The sole condition is that the number of internal states of the machine, and the number of different cell-states listed in the action-table, is finite, which ensures that a routine, mechanical process can do the job. For every calculation a digital computer can perform there is a corresponding Turing machine which can do that same calculation. Let us consider some more of such Turing Machines. In expounding them we will use the above notation : The input is one or more BLACK cells. At the start (i.e. when the machine is switched on) the machine is placed at the left-most BLACK cell, and enters (internal) state 1.
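The addition walkthrough above can also be checked mechanically. The following Python sketch simulates the five-instruction adding machine; the dictionary encoding of the action-table and the 'S' stop marker are my own choices (the essay prescribes no programming language), but the rules themselves are exactly the five instructions 0XR0, 0BXR1, 1XR1, 1BL2 and 2XB-stop.

```python
# Each rule maps (state, symbol) to (symbol to write, head move, next state).
# 'S' marks the stop state; these encoding choices are mine, not the essay's.
RULES = {
    (0, 'X'): ('X', +1, 0),   # 0XR0: skip the first block of black cells
    (0, 'B'): ('X', +1, 1),   # 0BXR1: fill the gap, enter state 1
    (1, 'X'): ('X', +1, 1),   # 1XR1: skip the second block
    (1, 'B'): ('B', -1, 2),   # 1BL2: step back onto the last black cell
    (2, 'X'): ('B',  0, 'S'), # 2XB stop: erase one black cell and halt
}

def run(tape, state=0, head=0):
    tape = list(tape)
    while state != 'S':
        # grow the potentially infinite tape with white cells as needed
        if head < 0:
            tape.insert(0, 'B'); head = 0
        if head >= len(tape):
            tape.append('B')
        write, move, state = RULES[(state, tape[head])]
        tape[head] = write
        head += move
    return ''.join(tape).strip('B')

print(run('XXXBXXXX'))  # 3 + 4 -> 'XXXXXXX' (seven black cells)
```

Running it on the input X X X B X X X X reproduces the seven consecutive BLACK cells computed step by step in the text.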
Further we will describe the instructions using only strings of four of the above defined symbols, so a typical instruction could read : 2XB3, which means : IF the machine is currently in state 2 and reads a BLACK cell, THEN it must erase this black, i.e. it must make the cell WHITE, and must enter state 3. Or (a typical instruction) could read : 1BL1, which means : IF the machine is currently in state 1, and reads a WHITE cell, THEN it must move one cell to the left, and remain in state 1. So the first two symbols together constitute a condition to be satisfied, and the last two symbols together constitute an action, that must be taken if and when that condition is indeed satisfied. If and when the machine reaches an instruction that tells it to enter state 0, it turns itself off. Not every machine (i.e. not every program or action-table) reaches an instruction that leads to 0. Some machines go into various sorts of endless behavior loops, and so do not yield a definite result. It is not at all unusual for a Turing machine to run forever. TURING's theorem says that there is NO general method that could determine in advance whether a particular machine (i.e. a particular program for such a machine) will run forever -- and consequently not yielding any definite result in a finite amount of time -- or that it will calculate a result and turn itself off. Here is an example of a Turing machine's action table that results in a loop and never gives any output at all : 1. 1XL2 2. 2BR1 At the start the machine enters state 1 and reads the left-most BLACK cell. So it must execute the first instruction, which means it must go one cell to the left and enter state 2. There it encounters a WHITE cell, so it must now follow the second instruction, which means that it must move one cell to the right and enter state 1. There it finds a BLACK cell, so it must follow the first rule, i.e. it has returned to the original situation, and from now on these actions will be repeated ad infinitum.
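A bounded simulation makes this looping behavior visible. The Python sketch below (the step budget, rule encoding and function name are my own) runs the two instructions just described, a move left into state 2 on a BLACK cell and a move right into state 1 on a WHITE cell, and reports whether the step budget runs out. Exhausting the budget is only evidence of a loop, not a proof, which is exactly the point of Turing's theorem.

```python
# The looping machine from the text, in four-symbol form: 1XL2 and 2BR1.
RULES = {
    (1, 'X'): ('X', -1, 2),  # on a black cell in state 1: step left, state 2
    (2, 'B'): ('B', +1, 1),  # on a white cell in state 2: step right, state 1
}

def runs_forever(tape, max_steps=1000):
    tape, state, head = list(tape), 1, 0
    for _ in range(max_steps):
        if head < 0:
            tape.insert(0, 'B'); head = 0
        if head >= len(tape):
            tape.append('B')
        key = (state, tape[head])
        if key not in RULES:      # no matching instruction: the machine halts
            return False
        write, move, state = RULES[key]
        tape[head] = write
        head += move
    return True                   # step budget exhausted: apparently looping

print(runs_forever('X'))  # True -- it bounces between the same two cells
print(runs_forever('B'))  # False -- no instruction begins with 1B, so it stops
```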
If we denote the number of the (particular) machine's non-zero (internal) states with K, then the machine's total program contains at most 2K instructions, each of which begins with one of the 2K possible prefixes 1B--, 1X--, 2B--, 2X--, ..., KB--, KX--. This follows from the fact that any time the machine is on, its state is one of the numbers 1, 2, ..., K, and the cell it is examining is either a B or an X. Each " -- " can be filled in 4(K + 1) ways, because, to fill in the last entry of " -- " we have K + 1 possible internal states (now including the 0-state), and to fill in the first entry of " -- " we have four possible symbol-types : X, B, L and R. This makes for [4(K + 1)]^(2K) possible programs (action-tables) in all, i.e. 4(K + 1), multiplied with itself 2K times. Let me explain this last assessment. When we have, say, three symbol-types, a, b, c, and when we want to make strings, each consisting of four such symbols, then we can ask how many such strings are possible. For the first entry of such a four-symbol string we have three possibilities, a, b, or c. For the second entry we also have three possibilities, a, b, and c. So each of the possible three first-entries has three possibilities with respect to the second entry. That makes 3 x 3 = 9 possibilities. For the third entry of the four-symbol string we again have three possibilities, a, b, and c. So each of the nine possibilities regarding the first two entries has three possibilities regarding the third entry, so that makes 3 x 9 = 27 possibilities. For the fourth (and last) entry of the string we again have three possibilities, a, b, and c. So each of the twenty-seven possibilities regarding the first, second and third entries has three possibilities regarding the choice of the fourth entry of the string, and that makes 3 x 27 = 81 possibilities in total for a four-symbol string using three different kinds of symbol. 81 equals 3 x 3 x 3 x 3 = 3^4.
It is easy to generalize on that : The number of possible strings, consisting of S different types of symbols, and with a length of L symbols equals S^L. Applying this to the assessment of the number of possible Turing Machine programs (action-tables) in dependence on the number of non-zero internal states, we can say the following : A Turing Machine program can be considered as a string of instructions. Each instruction, like for instance 2XR1, can be interpreted as a conditional statement of the form IF .... THEN. The first half represents the condition to be satisfied, the second half represents the instruction proper. In the above example 2X is the condition, and R1 is the action to be taken (go one cell to the right and enter internal state 1). We have 4(K + 1) different kinds of instruction proper, which we can call 'symbols'. These symbols must be carried by at most 2K instructions. So the length of the symbol-string is 2K. It follows that the number of strings with length 2K, consisting of 4(K + 1) different symbol-types equals [4(K + 1)]^(2K), and this is thus the number of possible Turing Machine programs. This is quite a large number even for small K's : for K = 2 it is already about twenty thousand, but the number of possible K-state Turing Machines is finite, and for large K, the value is roughly the same size as K^(2K). Of course an instruction (in a program), like for instance 3XB2, can be used more than one time during the running of the program. A Turing Machine can do just about anything. It can perform the same numerical tasks as any digital computer, but in general it is extremely slow. The importance therefore of the Turing machine is its being the fundamental concept of any digital computer. We are now ready to describe a few Turing machines. A Turing machine computes a function T(X) = Y, where X is some input and Y the output. We shall confine ourselves to the manipulation of non-zero natural numbers, thus the numbers 1, 2, 3, 4, ... .
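Both counting claims above are small enough to verify directly in Python (the helper name num_programs is mine):

```python
# Strings of length 4 over 3 symbol types: S**L possibilities.
assert 3 ** 4 == 81

# Number of possible K-state Turing machine programs: [4(K + 1)]**(2K).
def num_programs(K):
    return (4 * (K + 1)) ** (2 * K)

print(num_programs(2))  # 20736 -- "about twenty thousand", as the text says
```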
Such a function could be : T(N) = N + 3. This means that we can give any such natural number N as input, and the Turing Machine will compute the addition of 3, i.e. it will compute N + 3. This can be done with a five-state Turing Machine, that adds three BLACK cells to the right end of the starting string (of BLACK cells) [See RUCKER, 1988, Mind Tools, p. 232] : 1. 1XR1 2. 1BX2 3. 2XR3 4. 3BX3 5. 3XR4 6. 4BX0 Let us execute this program, for N = 2. Thus we will compute T(2) = 2 + 3. [said in a more elegant way : We will compute the function T(N) = N + 3, for N = 2]. The input is XX, two BLACK cells on the tape. To start with, the machine is placed on the left-most BLACK cell and set in internal state 1. So the first instruction to execute is one (from the list) beginning with 1X, thus 1XR1. This will cause the machine to move one cell to the right while the machine remains in state 1. It encounters a BLACK cell, so again the instruction beginning with 1X, thus 1XR1, must be executed. The machine goes one cell to the right, encounters a WHITE cell, and remains in state 1. Now the machine must look for an instruction that begins with 1B, hence 1BX2. This tells the machine to make that cell BLACK and to enter state 2. Now the machine must look in the list for an instruction that begins with 2X, it finds it : 2XR3. According to this instruction the machine must move one cell to the right and enter state 3. It encounters a WHITE cell, so the machine must look for an instruction beginning with 3B, this is instruction 3BX3. So it must make this cell BLACK and remain in state 3. Now it must look up an instruction beginning with 3X, and that is 3XR4. Again it must move one cell to the right and enter state 4. It encounters a WHITE cell, so it must find an instruction beginning with 4B, that is 4BX0, so the machine must make this cell BLACK and turn itself off. The calculation is completed, we started with two consecutive BLACK cells and ended up with five consecutive BLACK cells.
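The six instructions used in this walkthrough (1XR1, 1BX2, 2XR3, 3BX3, 3XR4 and 4BX0) can be run by the same kind of simulator. A Python sketch, with a dictionary encoding of my own choosing (state 0 is the stop state, as in the essay's notation):

```python
# Each rule maps (state, symbol) to (symbol to write, head move, next state).
RULES = {
    (1, 'X'): ('X', +1, 1),  # 1XR1: run to the right end of the input
    (1, 'B'): ('X',  0, 2),  # 1BX2: write the first extra black cell
    (2, 'X'): ('X', +1, 3),  # 2XR3
    (3, 'B'): ('X',  0, 3),  # 3BX3: write the second
    (3, 'X'): ('X', +1, 4),  # 3XR4
    (4, 'B'): ('X',  0, 0),  # 4BX0: write the third and stop
}

def run(tape):
    tape, state, head = list(tape), 1, 0
    while state != 0:
        if head >= len(tape):       # extend the potentially infinite tape
            tape.append('B')
        write, move, state = RULES[(state, tape[head])]
        tape[head] = write
        head += move
    return ''.join(tape).strip('B')

print(run('XX'))  # 'XXXXX': T(2) = 2 + 3 = 5
```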
The next example of a Turing machine is a machine with a program that computes the function T(N) = N + N. This program uses the original number of BLACK cells (i.e. the input, the value of N) as a counter : it erases them one by one and replaces each of them with two new BLACK cells. When the original BLACK cells are all erased then the program will halt, and the computation is done. This can be performed with an 8-state Turing machine, one 0-state (the STOP sign) and 7 non-zero states, so K = 7. Let us calculate the above function for N = 3, thus the calculation T(3) = 3 + 3. The machine erases the first of the original BLACK cells and writes two new BLACK cells at the end of the string. The program now starts all over again, beginning (again) with the first instruction. This repetition will go on until all the original BLACK cells are erased and replaced -- at the end of the string -- by two new BLACK cells each. As can be seen, when we anticipate needing more tape, we just add more tape. In this way the memory (capacity) of the Turing Machine is potentially infinite. Again the program will revert to the first instruction. The program now sees that all the original BLACK cells are erased. The machine will now turn itself off, and the calculation is completed. We started off with X X X (input), and ended up with X X X X X X (output). It took 47 steps for this machine to accomplish this, i.e. to calculate the function T(N) = N + N for N = 3.
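The essay does not reproduce the 8-state action table itself, but its counter strategy, erase one original cell and append two new ones, can be sketched directly. The Python function below (my own reconstruction) models the tape contents only, abstracting away the machine's internal states:

```python
# The N + N machine uses the input as a counter: erase one original BLACK
# cell, write two new BLACK cells at the end of the tape, and repeat until
# the originals are gone. States are abstracted away in this sketch.
def double(n):
    original = ['X'] * n        # the input: N black cells
    result = []                 # the doubled string grown at the end
    while original:
        original.pop()          # erase one original black cell ...
        result += ['X', 'X']    # ... and append two new ones
    return ''.join(result)

print(double(3))  # 'XXXXXX': T(3) = 3 + 3 = 6
```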
In the following I will describe the construction of a universal Turing Machine, fully based on the expositions given in PENROSE, 1990, The Emperor's New Mind, chapter 2. The Universal Turing Machine For constructing a universal Turing Machine it is necessary to make new coding agreements, which will however -- in the process of the exposition -- successively be replaced by more suitable and efficient ones. We start with a coding that looks like the one we used in the beginning, but with a slightly different notation. Further we will employ the binary number system. The binary number system is just a different notation for numbers. Whereas the conventional (denary) notation uses powers of 10 (for example 358 = 3 x 10^2 + 5 x 10^1 + 8 x 10^0), the binary notation uses powers of 2 (for example 1101 = 1 x 2^3 + 1 x 2^2 + 0 x 2^1 + 1 x 2^0 = 13 in denary) [a number raised to the 0th power always equals 1. Proof : a^0 = a^(b-b) = a^b divided by a^b = 1]. Because a computer-design always consists of a pattern of switches that are either ON or OFF, the binary number system of notation is very convenient for coding data and instructions. Each Turing machine characterizes itself by its set of instructions. It will start to operate when we feed in some data. The data will consist of a number (or numbers) on which a certain operation will be carried out by the machine according to its set of instructions. Figure 3. A twelve-state Turing machine. The horizontal bar symbolizes the possible different internal states the machine can be in. The program is the set of instructions (here applying a notation we used earlier). The machine can move along the tape one cell at a time. It can read its current state and the state of the cell it is visiting. (After RUCKER, 1988, Mind Tools.) Since a Turing Machine has only a finite number of distinct internal states it cannot be expected to 'internalize' all the external data nor all the results of its own calculations.
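The positional expansions just given are easy to confirm; Python's built-in int accepts a base argument, and format produces binary digit strings:

```python
# Checking the positional expansions used in the text.
assert 3 * 10**2 + 5 * 10**1 + 8 * 10**0 == 358         # denary 358
assert 1 * 2**3 + 1 * 2**2 + 0 * 2**1 + 1 * 2**0 == 13  # binary 1101
assert int('1101', 2) == 13                             # same check, built in
assert 2 ** 0 == 1                                      # a^0 = 1

print(format(13, 'b'))  # '1101'
```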
Instead it must examine only those parts of the data or (parts of results of) previous calculations that it is immediately dealing with (i.e. which it locally encounters), and then perform whatever operation it is required to perform on them. It can note down, perhaps in the external storage space, the relevant results of that operation, and then proceed in a precisely determined way to the next stage of operation. Let us give a first coding scheme (that will later be altered, in order to be suitable for the construction of a universal Turing machine) for these instructions : A typical instruction could then be 0O implies 0OR. This instruction means : If the machine finds itself in state 0, and if it reads an unmarked cell, then it must remain in state 0, put a O, i.e. leave that cell unmarked, and move one cell to the right (the final R). Another typical instruction could be 11O implies 1001011L. This instruction means : If the machine finds itself in state 11, and reads an unmarked cell, then it must enter state 100101, put a 1, i.e. it must mark that cell, and move one cell to the left (the final L). After the machine has executed such an instruction, it will read its new state, and it will read the cell on which it is now placed, namely whether this cell is marked or unmarked. These two readings together determine which instruction is to be executed next. Our aim is finally to code the instruction list using only the symbols 0 and 1, because that will be convenient for a computer (of which the Turing machine is the bare concept) that operates with switches that can either be ON or OFF. So we must code the symbols L, R, STOP and punctuation marks (like commas, or marks that indicate the termination of, say, an instruction) with the aid of the symbols 0 and 1. Also any kind of other mathematical symbols must be coded by means of only these two symbols. But how is it possible, using only strings of 0's and 1's, to distinguish instructions and all kinds of other marks from just numbers?
Where does the string of 0's and 1's representing not a number end, in a way recognizable by the machine, and then followed by a string of 0's and 1's which represents a number, and where does this string in turn end, to be followed by yet another string of 0's and 1's representing again not a number? Further there will be Turing Machines that need as input two or more numbers, for example a Turing Machine that should calculate the highest common factor of two given numbers. Also, there will be operations that will output a pair of numbers, for example division with a remainder. All these numbers must be separated somehow from each other, for example by means of a comma. So we must find a way how to code items like a comma, an instruction, a mathematical symbol, by means of the symbols 0 and 1. This can indeed be done by the following method : Because representing a number N just by N 1's will be very inefficient for large numbers, we first of all are going to use the binary number notation (thus not just using two symbols instead of ten, but a complete notational scheme). This notation uses only 0's and 1's, but the position of them in the string indicates their value as has been explained above. So for example the number 29 is 11101 (1 x 2^4 + 1 x 2^3 + 1 x 2^2 + 0 x 2^1 + 1 x 2^0). In order to code numbers (in binary notation) as well as instructions, commas, etcetera into strings of 0's and 1's we could apply the following transformations : the digit 0 is coded as 0, the digit 1 is coded as 10, and the comma is coded as 110 (further non-numerical symbols can then be coded as 1110, 11110, and so on). Later on we will change this coding a little (these codings are of course optional, but once settled we must stick to them). We call this : coding by expansion [PENROSE, 1990, The Emperor's New Mind]. First of all we shall concentrate on the first three transformations, that guarantee the unambiguous coding of a sequence of more than one number.
Let us take as an example the number-sequence 5, 13, 0, 1, 1, 4, The last comma serves to mark the end of the sequence, which otherwise would not be clear when this sequence is transformed into a sequence of 0's and 1's, because the tape of the Turing Machine must be potentially infinite in both directions and thus looks like this : .................000000000 any number coded with 0's and 1's 00000000................. The Machine must know for sure that the number sequence has ended, and that there will not appear any 1 further down the (potentially) infinite series of 0's on the right. Let us first change the denary notation of the numbers above into binary. We get the sequence 101, 1101, 0, 1, 1, 100, Now we will code this sequence by expansion according to the above transformations. This will generate the following string (for convenience we set off the coded commas with spaces) : 10010 110 1010010 110 0 110 10 110 10 110 1000 110 It is clear that by this method of coding the actual NUMBERS appear as sequences of 0's and 1's in such a way that there will never appear more than one 1 directly after another (because 0 is coded as 0, and 1 is coded as 10). So we can now use sequences-of-more-than-one-1-directly-after-each-other -- terminated with a 0 (as can be seen in the above transformation list) -- to code all kinds of non-numbers, like we just did concerning the comma (that separates numbers), which was coded as 110. Other sequences, like 1110, 11110, etc. can now be used for other symbols, for instance instructions, the STOP sign, mathematical symbols, etc. One more final point should be made about this coding. In the binary notation (as in the denary notation) one or more 0's in front of a number is redundant, resulting in the fact that for example 0001101 is equal to 1101, and also 000 (or 00 or 0000, etc.) is equal to 0. In not-expanded (but binary) notation this could lead to confusion as to whether a 0 separates two numbers, or whether it belongs to one or the other number, or whether it is a part of one number.
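The coding by expansion described here is easy to mechanize. A short sketch (the function names are our own):

```python
def expand(binary_digits):
    """Code a string of binary digits by expansion: 0 -> 0, 1 -> 10."""
    return "".join("10" if d == "1" else "0" for d in binary_digits)

def code_sequence(numbers):
    """Code a sequence of numbers, each followed by a comma (110)."""
    return "".join(expand(format(n, "b")) + "110" for n in numbers)

coded = code_sequence([5, 13, 0, 1, 1, 4])
print(coded)  # 100101101010010110011010110101101000110
```

Inside a coded number no two 1's ever stand directly after one another, which is exactly what frees 110, 1110, 11110, ... to serve as marks.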
With the expanded binary notation such confusions cannot occur, because of the possibility of using a coding for commas. But then we do not even need to explicitly write down a 0 between two commas, like we see in the above sequence, where the third number is a 0. We can simply code ,0, as two commas directly next to one another (,,). In expanded notation this would give 110110 instead of 1100110. Thus the above set of six numbers can finally be coded as : 10010110101001011011010110101101000110 This string accordingly has one 0 missing in comparison with the string we had before. By means of this coding by expansion we can conveniently code DATA (and output) for a Turing machine, also when they consist of more than one number. Besides the natural numbers [the numbers 0, 1, 2, 3, ... ] on which we have concentrated so far, there are other numbers, like negative numbers (like -34), fractions (like 23/5), numbers as finite decimal expressions (like 3.54789), and numbers as infinite decimal expressions (like pi, 3.14159265358979...). When we must code for negative numbers, then we must have a code for the minus-sign, and such a code can easily be provided (using expanded binary notation) by sequences of consecutive 1's, terminated by a 0, like 1111110. The same can be said for fractions. There we need a code for the / sign. Also a finite decimal expression can be coded as a division, using the (coding for the) / sign again. For example 3.476 = 3476/1000. So all these numbers can be handled by Turing Machines. However numbers that must be represented by infinite decimal expressions can present difficulties. We can conceive of a Turing machine churning out all the successive digits of the number pi (which must be represented by an infinite string of symbols, decimal or otherwise), and consequently running forever. But this is not allowed for a Turing Machine, because when the machine does not halt we don't know whether the result will be changed during further calculation (the machine can visit previous outputs and rework these).
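Decoding is just as mechanical as coding: scan the string and count consecutive 1's. A sketch (again with our own naming; a run of exactly two 1's before a 0 is a comma, and two adjacent commas code a 0, as described above):

```python
def decode_sequence(coded):
    """Parse an expansion-coded string back into the numbers it codes.
    0 -> digit 0, 10 -> digit 1, 110 -> comma (end of a number)."""
    numbers, digits, ones = [], "", 0
    for c in coded:
        if c == "1":
            ones += 1
            continue
        # a 0 terminates the current run of 1's
        if ones == 0:
            digits += "0"
        elif ones == 1:
            digits += "1"
        elif ones == 2:                              # a comma
            numbers.append(int(digits, 2) if digits else 0)  # ,, codes a 0
            digits = ""
        ones = 0
    return numbers

print(decode_sequence("100101101010010110011010110101101000110"))
# [5, 13, 0, 1, 1, 4]
```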
Only after the machine has halted we can trust the result. But there is a way to conceive of a legitimately specified Turing machine that can generate one digit of pi after another : It must produce the 1st digit by acting on the number 1, then it must produce the 2nd digit by acting on the number 2, then it must produce the 3rd digit by acting on the number 3, etcetera. So when the machine starts to act on a next number we know that the calculation of the previous digit was wholly completed. Until now we have described the general construction of one or another specific Turing Machine. With " specific " we mean a Turing Machine supplied with a special program, i.e. supplied with a special set of instructions (instructions like for instance 11010O implies 10011R ) in order to perform a special task, for example multiplying any given number with 25. Such a machine has this set of instructions internalized (see figure 3., but in fact more so than the figure suggests) -- this set of instructions are part of its hardware, while the data is fed in by the tape as input, and the result will be placed on the tape as output. We will now try to describe a Universal Turing Machine. In such a machine we want to externalize the program, so that it becomes software for a universal hardware machine. We can do this by coding the set of instructions for an arbitrary Turing machine T into a string of 0's and 1's that can be represented on a tape, and becomes as such externalized, and so part of the input for the machine. As such it then is not a part of the machine anymore, but an optional input -- a program, instructing the universal machine (how) to execute a certain task. This tape, i.e. this program, is accordingly used as the initial part of the input for the universal machine U which then acts on the remainder of the input (the data) just as T would have done. This universal Turing Machine is a universal mimic. 
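The trick of producing the nth digit by a fresh, halting run on the input n can be illustrated in ordinary code. For simplicity we use the square root of 2 rather than pi, so that the arithmetic stays elementary (this substitution is ours; each call below is a complete, terminating computation):

```python
from math import isqrt  # exact integer square root

def nth_decimal_digit_of_sqrt2(n):
    """A fresh, halting computation for each n: floor(sqrt(2) * 10**n) mod 10."""
    return isqrt(2 * 10 ** (2 * n)) % 10

# sqrt(2) = 1.41421356...
digits = [nth_decimal_digit_of_sqrt2(n) for n in range(9)]
print(digits)  # [1, 4, 1, 4, 2, 1, 3, 5, 6]
```

Because every call halts, each digit, once delivered, can never be reworked, which is exactly the point made above.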
The initial part of the tape provides the full information that the universal machine U needs in order to imitate any given machine T exactly. This universal Turing machine must be instructed how and when to read the initial part of the tape and how and when to shift the reading-activity to the second part (the data part) of the tape. The relevant instructions for performing these activities form the instruction-set of the machine U itself, i.e. its own instruction-set, that we can interpret as its hardware (its wiring-circuit). To see how we can conceptually construct such a universal Turing machine, we need a way of numbering Turing Machines. To accomplish this we must start with a general way of coding the instruction-set of an arbitrary Turing machine. Let us, to understand the procedure, take the Turing machine that will add ONE to any given input. Let us call this Turing machine XN + 1. This Turing machine accepts a number, written in expanded binary notation (where, as we learned above, 0 becomes 0, and 1 becomes 10) and acts on this number, leading to an output that must also be considered as expanded binary, and this output has ONE added to the original number (whether we interpret input and output as ordinary binary or expanded binary). In this particular Turing machine, XN + 1, the instruction-set still figures as its hardware and is (interpreted as) ' soldered ' into the machine (such a machine can only perform one task, in this case ADDING ONE to a given number). Therefore the instruction-set is not coded using only 0's and 1's. The tape only consists of the data, in this case one number, written in expanded binary. The instruction-set for the machine XN + 1 is as follows [See PENROSE, 1990, p.
60] (recall that an instruction such as 110O implies 0OR means : If the machine is in state 110 (binary notation) and reads a O, then it must enter state 0, write a O, and move one cell to the right ) :

0O implies 0OR
01 implies 11R
1O implies 0OR
11 implies 101R
10O implies 11OL
101 implies 101R
11O implies 01STOP
111 implies 100OL
100O implies 1011L
1001 implies 1001L
101O implies 110OR
1011 implies 101R
1101 implies 1111R
111O implies 111R
1111 implies 111OR

When this machine is fed with a number N (in expanded binary notation) it will output a number (in expanded binary notation) N + 1. When we now want this instruction-set to become just a program for the universal Turing machine, we must code this program into a sequence of 0's and 1's so that we can put it as marks on a tape (BLACK cells, or WHITE cells), and in this way externalize the instruction-set. To begin with, we do not have to distinguish between ordinary 0's and 1's and boldface O's and 1's as we see them in the above instruction-set, because those boldface figures occur only once in each condition of an instruction (i.e. the sequence of symbols before IMPLIES), and also only once in each action-term of an instruction (i.e. the sequence of symbols coming after IMPLIES). Thus for, say, 11O we can write just 110. We can further economize considerably by omitting the IMPLIES-term and also omitting all that comes before that term, relying instead upon the numerical ordering of instructions to specify what those instructions must be. Let me explain this. The list of instructions can be ordered according to their conditions (the part of the instruction that precedes the IMPLIES-term) when we read them as binary numbers. When we want to use this ordering (i.e. the place of each instruction in an ordered list) then this ordering must not contain gaps, because in that case the specification, saying which instruction must be followed next, would be spoiled.
So when there is such a gap then we must insert a dummy-instruction. Indeed in the list of instructions for the machine XN + 1 we miss (in the conditions) 110O, so we will insert a dummy-instruction : 110O implies 0OR . This results in a complete ordering of the list (the dummy-instruction will not be visited by this machine, so it does not make a difference in its calculation-procedure). Let us give this list (including the dummy-instruction) and indicate its ordering by a series of consecutive indexes:

1. 0O implies 0OR
2. 01 implies 11R
3. 1O implies 0OR
4. 11 implies 101R
5. 10O implies 11OL
6. 101 implies 101R
7. 11O implies 01STOP
8. 111 implies 100OL
9. 100O implies 1011L
10. 1001 implies 1001L
11. 101O implies 110OR
12. 1011 implies 101R
13. 110O implies 0OR (dummy)
14. 1101 implies 1111R
15. 111O implies 111R
16. 1111 implies 111OR

We see that the binary numbers, composed of the ordinary 0's and 1's together with the boldface O's and 1's (of the sequence before IMPLIES), indeed form a numerically successive series. Well, when an instruction is executed by the machine, it will find itself in a certain internal state (i.e. it reads its new state, in the form of a binary number), and reads an unmarked (0) or marked (1) cell. These two readings together form a binary number that directly specifies the sequential number of the next instruction to be executed. So in this way we now can omit the term IMPLIES and everything preceding it. Moreover, as has been said, we do not have to distinguish between normal 1's and 0's and boldface 1's and O's anymore; we can write them either all as normal 1's and 0's or as boldface 1's and 0's. Further, in the process of coding we do not need to code for commas, (for example) at the end of each instruction, since the symbols R, L and STOP suffice to separate the instructions from one another.
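The ordered table can also simply be run. Below is a small interpreter for it (a sketch with our own encoding choices: states as the decimal values of the binary state names, and, following the comma convention of the text, the input number in expanded binary terminated by a comma 110; STOP is treated as RSTOP as stated later in the essay):

```python
# XN+1: (state, symbol read) -> (new state, symbol written, move).
# States 0..111 (binary) are written here as the decimals 0..7.
TABLE = {
    (0, 0): (0, 0, "R"), (0, 1): (1, 1, "R"),
    (1, 0): (0, 0, "R"), (1, 1): (2, 1, "R"),
    (2, 0): (3, 0, "L"), (2, 1): (2, 1, "R"),
    (3, 0): (0, 1, "STOP"), (3, 1): (4, 0, "L"),
    (4, 0): (5, 1, "L"), (4, 1): (4, 1, "L"),
    (5, 0): (6, 0, "R"), (5, 1): (2, 1, "R"),
    (6, 0): (0, 0, "R"),  # the dummy instruction
    (6, 1): (7, 1, "R"),
    (7, 0): (3, 1, "R"), (7, 1): (7, 0, "R"),
}

def expand(bits):                       # 0 -> 0, 1 -> 10
    return "".join("10" if b == "1" else "0" for b in bits)

def contract(bits):                     # inverse of expand
    return bits.replace("10", "1")      # safe: numbers never contain "11"

def xn_plus_1(n):
    """Run the XN+1 table on n, given in expanded binary with a comma."""
    tape = {i: int(c) for i, c in enumerate(expand(format(n, "b")) + "110")}
    state, pos = 0, -1                  # start to the left of the marks
    while True:
        state, write, move = TABLE[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += 1 if move in ("R", "STOP") else -1
        if move == "STOP":
            break
    marked = [i for i, c in tape.items() if c == 1]
    out = "".join(str(tape.get(i, 0)) for i in range(min(marked), max(marked) + 1))
    return int(contract((out + "0").removesuffix("110")), 2)  # drop the comma

print([xn_plus_1(n) for n in (0, 1, 3, 6)])  # [1, 2, 4, 7]
```

Note how the lookup key (state, symbol) is exactly the "sequential number of the next instruction" described above.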
Having done all this we get the following instruction-list for Turing machine XN + 1 :

00R
11R
00R
101R
110L
101R
01STOP
1000L
1011L
1001L
1100R
101R
00R (dummy)
1111R
111R
1110R

Now we are ready to code these instructions into expanded binary according to the following list of transformations (recall that by coding in this way we are able to distinguish between numbers and non-numbers -- non-numbers are things like R, L, STOP, etc.) :

0 becomes 0
1 becomes 10
R becomes 110
L becomes 1110
STOP becomes 11110

Because these instructions are well separated by R, L and STOP, and correspondingly well separated by 110, 1110 and 11110 (because apart from them we can never encounter more than one 1 directly after another), we can omit any 00 when this is all there is between two such signs (for example between R and L). So ... R 0 0 L ... -- where R is the last part of an instruction, and 0 0 L is the next instruction, meaning : enter state 0, write a 0, and move one cell to the left -- can be written as . . . R L . . . , and thus coded as . . . 1 1 0 1 1 1 0 . . . . It is also clear that 01, when that is all there is between two (separating) signs like R, L or STOP, can be replaced by 1, because this 1 then stands between two separating signs (like R, L or STOP), and the absence of anything before it can be interpreted as a 0 ; so, for example . . . R 0 1 L . . . can be replaced by . . . R 1 L . . . , and coded as . . . 1 1 0 1 1 1 1 0 . . . . In reading . . . 1 1 0 1 1 1 1 0 . . . the machine cannot be confused by the fact that it encounters . . . 1 1 1 1 0 . . . and interprets this (falsely) as " STOP ", because then it would read . . . R STOP . . .
, and because STOP itself already means R STOP, the machine would read R R STOP, which cannot be a proper instruction, because in one and the same individual instruction the movement of the machine is only one cell (to the right or left), and not two times to the right, nor two times to the left, nor to the left and (then) to the right (because this latter is no movement at all and should not be coded). The above already abbreviated instruction-list (including the dummy-instruction) for Turing Machine XN + 1 will accordingly look like this (written horizontally) : R 1 1 R R 1 0 1 R 1 1 0 L 1 0 1 R 1 S T O P 1 0 0 0 L 1 0 1 1 L 1 0 0 1 L 1 1 0 0 R 1 0 1 R R 1 1 1 1 R 1 1 1 R 1 1 1 0 R This is coded as the tape sequence : 11010101101101001011010100111010010110101 11101000011101001010111010001011101010001 1010010110110101010101101010101101010100110 To the left and to the right of this sequence we must think of a potentially infinite string of 0's. We see that the sequence begins with 110 and ends with 110. We can leave out the initial 110, because this means 00R, representing the initial instruction 00 implies 00R that we assume (i.e. stipulate) common to all Turing machines, so that the device can start arbitrarily far to the left of the marks on the tape and run to the right until it comes up to the first mark (1). Also we may delete the final 110, since all Turing Machines must have their descriptions ending this way, because they all end with R ( 110 ), L ( 1110 ) or STOP ( 11110 ). The machine will always start with reading a 110 in front of this economized sequence, and always read a 110 after this sequence. When we moreover imagine leaving out the infinite strings of 0's to the left and to the right, then the resulting BINARY NUMBER (i.e.
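The whole abbreviation-and-expansion procedure can be checked mechanically. The sketch below (function names ours) codes the sixteen abbreviated action-parts and compares the result with the tape sequence given above:

```python
CODE = {"0": "0", "1": "10", "R": "110", "L": "1110", "STOP": "11110"}

def code_action(action):
    """Code one abbreviated action-part, e.g. '101R', by expansion."""
    for mark in ("STOP", "R", "L"):
        if action.endswith(mark):
            digits = action[: -len(mark)]
            return "".join(CODE[d] for d in digits) + CODE[mark]

actions = ["R", "11R", "R", "101R", "110L", "101R", "1STOP", "1000L",
           "1011L", "1001L", "1100R", "101R", "R", "1111R", "111R", "1110R"]
tape = "".join(code_action(a) for a in actions)

expected = ("11010101101101001011010100111010010110101"
            "11101000011101001010111010001011101010001"
            "1010010110110101010101101010101101010100110")
assert tape == expected
```

The check passing confirms that the separating marks 110, 1110 and 11110 really do delimit the sixteen action-parts unambiguously.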
the resulting sequence interpreted as an ordinary binary number) is THE NUMBER OF THE TURING MACHINE, which in our case of the Turing machine XN + 1 is : 10101101101001011010100111010010110101 11101000011101001010111010001011101010001 1010010110110101010101101010101101010100 In standard denary notation, this particular number is : 450813704461593958982113775643437908 Sometimes one loosely refers to a Turing machine whose number is N as the Nth Turing machine TN. Thus XN + 1 is the 450813704461593958982113775643437908th Turing machine. In principle, to each binary number there should correspond a certain Turing Machine, which could perform a certain task. But many of them are not really genuine Turing machines at all, because many of them turn out not to be defined unambiguously, or run forever without giving a final output [See PENROSE, 1990, p.70/1]. So we have now succeeded in coding the instruction-set of any Turing machine into a string of 0's and 1's, and this string can be represented on a tape and serve as a program for a universal Turing machine. The machine reads this string by adding 110 at the beginning and at the end, and by interpreting this whole string as expanded binary it reads in fact its instruction-set. But of course this is not enough. Every Turing machine needs DATA, and so does our universal Turing machine. The DATA must be coded on the tape after the coding (sequence) of the instruction-set (the program), separated by the sign 111110. This coding of DATA must also be done by means of expanded binary, because sometimes we need more than one number as input, and these numbers must be separated by signs themselves also consisting of 0's and 1's only. Moreover one can have Turing Machines which operate directly on mathematical formulae, such as algebraic or trigonometric expressions. In these expressions all kinds of signs occur that are themselves not numbers, such as integrals, sines and cosines. They all must be coded using only 0's and 1's.
And this can be done conveniently by the method of expansion. And the Machine must accordingly read such a string as a result of this expansion. So far so good. Besides the fact that the machine interprets such a string in such and such a way (in this case as the result of expansion), WE ourselves can (also) interpret (i.e. read) such a string in another way (namely for the purpose of generating a concept for a universal Turing Machine) : Because the data-string always consists of 0's and 1's only, we can always (also) interpret it as just the representation of ONE binary number (We did this already with the string representing the instructions, and in that way obtained the number of the Turing Machine). But because the series of 0's and 1's always ends up with a potentially infinite series of 0's, it can as such not be interpreted as a definite binary number (the value of this number would be indefinitely large), so we must terminate it somewhere. Because we now interpret that string NOT as the result of expansion (which the machine does), we cannot interpret sequences such as 110 as commas, one of them terminating the data-string. One way of reading the string as an ordinary binary number could be by only reading this string till the last 1 appears, and including this last 1. But then our reading would result in odd numbers only. To remedy this problem we, in our effort to read the data-string as ONE number (the machine reads it another way), will NOT read the last 1, and then readings of both odd and even numbers are indeed possible. For example we can read the input-string 1 1 0 1 1 1 0 0 1 0 0 0 0 0 . . . as the following binary number : 1 1 0 1 1 1 0 0, and the input-string 1 1 0 1 1 1 0 0 1 1 0 0 0 0 0 . . . as the following binary number : 1 1 0 1 1 1 0 0 1 And this interpretation of the input-string can be used to generate the concept of the universal Turing machine, as follows : Our discussion will first of all use only valid Turing Machines, i.e.
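This reading rule -- everything up to, but not including, the last 1 -- is simple to state in code (our naming):

```python
def string_to_number(tape):
    """Read a 0/1 tape as ONE binary number: take everything up to,
    but not including, the final 1 (so even numbers are reachable too)."""
    last_one = tape.rstrip("0").rfind("1")   # position of the last 1
    return int(tape[:last_one], 2) if last_one > 0 else 0

# the two examples from the text
print(string_to_number("11011100100000"))    # reads 11011100
print(string_to_number("110111001100000"))   # reads 110111001
```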
correctly specified machines, which means that the machine, when turned on, will come to a halt after a finite number of steps, and deliver an output. We call the data-string, interpreted as a single binary number, the number M. When the Nth Turing Machine TN is fed with the number M, then it will, provided that this Turing machine is correctly specified, generate an output after a finite number of steps. This output can in fact be a complex expression containing all kinds of signs coded in expanded binary. But, just as was the case with the data-string, we can (also) interpret this output-string as representing ONE (ordinary) binary number. Let us call this number P. So when we feed TN with M, then we get P. We can express this as follows : TN (M) = P But we can look at this relation in a slightly different way also. When we know the numbers N and M, then we can work out what P is, by seeing what the Nth Turing machine does to M. This particular operation is entirely algorithmic and can therefore be carried out by a Turing machine U. This machine U works on the pair (N,M) and outputs P : U (N,M) = TN (M) = P Since the machine U has to act on both N and M, and since we want to put both N and M on a tape, we need some way of separating the two. We can do this by the same type of sign we already use in N to specify things like R ( 110 ), L ( 1110 ), STOP ( 11110 ), etc. So if we use a not yet used sign, say, 111110, to code for this separation-mark, then we can separate N and M easily. When the Machine is working after this mark, then it is working on the data, and here we can use for example 110 for coding a comma, and other such marks for other signs and symbols. So it is now possible to code both N and M on the tape of U. When we start U it will work on M according to N, i.e. it simulates the Turing Machine TN. So this Machine U can simulate any Turing Machine, and thus is a Universal Turing Machine. Let us quote an example, adapted from PENROSE, p.
73, for U simulating the eleventh Turing Machine, T11. We give this Turing machine T11 the number 722 as input. So we then have a Universal Turing Machine working on the numbers N and M, where N = 11 (denary notation) and M = 722. When we express the number 11 in (ordinary) binary notation, then we get N = 1011. We know that the machine reads this sequence as being preceded by 110, and also terminated by 110. So it reads (we space the digits for clarity) : 1 1 0 1 0 1 1 1 1 0 This represents the action-parts of the instructions (i.e. the instruction-set, represented by the sequence of the -- ordered -- action-parts of the instructions) : 1 1 0 = R, 1 1 1 1 0 = STOP Between these we find 1 0, which must be interpreted (read) as 1, because the expanded form of 1 is 1 0. So we have : R 1 STOP Before R we see nothing. This means that we must read this " nothing " as 0 0, because before signs like STOP, R, L, etc. we expect TWO symbols. Likewise we must interpret the 1 . Because before STOP we expect TWO symbols, we must interpret this 1 as 0 1. So we accordingly have : 0 0 R 0 1 STOP This corresponds to the action-parts of two instructions, namely 00R and 01STOP 00R means : enter internal state 0, write 0, and move one cell to the right. 01STOP means : enter internal state 0, write 1, move one cell to the right and stop (recall that STOP means RSTOP). We know that each action-part (00R and 01STOP) must be connected with the term IMPLIES, and before that term the conditional-part of the instruction must be understood to be present : the FIRST action-part must relate to the numerically FIRST conditional-part ( 00 ), the SECOND action-part must relate to the numerically SECOND conditional-part ( 01 ) [Recall that the full instructions are ORDERED according to the numerical value of the conditional-parts, which means that the machine, after having executed a complete instruction, knows which instruction to execute next.
For example when, after having executed a complete instruction, it reads its internal state as 1101 (binary notation for 13), and it reads a marked (1) cell, then it must look for the 11011th instruction (in denary notation this is the 27th instruction). This next instruction to be executed can be found in the ordered list of action-parts, and the machine knows that the corresponding conditional-part is 11011, which means : IF the machine finds itself in state 1101, and reads a marked cell, then it must ..... ]. For our example we found that the conditional part of 00R is 00, and the conditional part of 01STOP is 01. So the complete instruction-set for T11, and consequently (the complete instruction-set, i.e. its program) for U (11, 722), is : 00 implies 00R, 01 implies 01STOP This instruction set means : IF the machine is in internal state 0, and finds itself on a blank cell (it is stipulated that every Turing machine starts this way), THEN it must remain in state 0, write a 0, and move one cell to the right. IF the machine is in internal state 0, and reads a marked cell, THEN it must write a 1, move one cell to the right and switch itself off (recall again that STOP = RSTOP). This instruction-set will now become the program (that could easily be replaced by another such program) of the Universal Turing machine U. This program (the instruction-set of T11) will be represented on the tape of U by the binary representation of the number 11 (N = 11) : 1 0 1 1. This number will be separated from the number M (M = 722) representing the data, by the sign 1 1 1 1 1 0 , and after this termination-sign should come the binary representation of 722, that is 1 0 1 1 0 1 0 0 1 0 , but this must be represented on the tape as 1 0 1 1 0 1 0 0 1 0 1 , where the last 1 is read as a termination-mark. So we feed U with the following string : . . . 0 0 0 1 0 1 1 1 1 1 1 1 0 1 0 1 1 0 1 0 0 1 0 1 0 0 0 . . . . . .
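Putting the pieces together, the full input tape for U (11, 722) can be assembled and checked mechanically (a sketch; the function name is ours):

```python
def u_input(n, m):
    """Tape for the universal machine: N, the separator 111110,
    then M with a final 1 as termination mark."""
    return format(n, "b") + "111110" + format(m, "b") + "1"

tape = u_input(11, 722)
print(tape)  # 101111111010110100101
```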
0 0 0 is the initial blank tape, 1 0 1 1 is the binary representation of 11, 1 1 1 1 1 0 is the termination mark of N, 1 0 1 1 0 1 0 0 1 0 is the binary representation of 722, 1 0 0 0 . . . is the remainder of the tape. What the Turing machine U would have to do, at each successive step of the operation of TN on M, would be to examine the structure of the succession of digits in the expression for N (i.e. reading the successive instructions for TN), so that the appropriate replacement (the calculation activities) in the digits for M (i.e. TN's ' tape ') can be made. U's own list of instructions would simply be providing a means of reading the appropriate entry in that list (which is encoded in the number N) at each stage of application to the digits of the data-string as given by M. There would admittedly be a lot of dodging backwards and forwards between the digits of M and those of N, and the procedure would tend to be exceedingly slow. Nevertheless a list of instructions for such a machine can certainly be provided [PENROSE, p. 73], and we call such a machine a universal Turing Machine. The machine U, when first fed with the number N, precisely imitates the Nth Turing machine. Since U is a Turing Machine, it will itself have a number, i.e. we have U = Tu, for some number u. At last we have described one of the possible versions of a UNIVERSAL TURING MACHINE. We can feed this machine with any correctly specified program (the number N), and then it will execute the task that the program dictates. We can perhaps interpret its own instruction-set (described with the number u ) as the hardware of the machine, i.e. the totally internalized specification, that should be realized by means of electrical circuits inside the machine [See PART TWO of this Essay]. All modern (serial and parallel) general purpose computers are in effect universal Turing machines, but of course their logical design need not be identical with the machine just described.
What we have done here is to describe the general CONCEPT of a general purpose digital computer, by means of the description of a universal Turing machine. The continuation (Part Two) of this Essay treats of electrical circuits that can compute.
A polygon is a two-dimensional figure with three or more straight sides. (So triangles are actually a type of polygon.) Polygons are named according to the number of sides they have. All polygons, no matter how many sides they possess, share certain characteristics: • The sum of the interior angles of a polygon with n sides is (n – 2) x 180°. For instance, the sum of the interior angles of an octagon is (8 – 2) x 180° = 6 x 180° = 1080°. • The sum of the exterior angles of any polygon is 360°. • The perimeter of a polygon is the sum of the lengths of its sides. The perimeter of the hexagon below is 5 + 4 + 3 + 8 + 6 + 9 = 35. Regular Polygons The polygon whose perimeter you just calculated was an irregular polygon. But most of the polygons on the SAT are regular: Their sides are of equal length and their angles congruent. A regular polygon must satisfy both conditions at once: all sides equal in length and all angles congruent. In the diagram below, you’ll see, from left to right, a regular pentagon, a regular octagon, and a square (also known as a regular quadrilateral): Good news: Most polygons on the SAT have just four sides. You won’t have to tangle with any dodecagons on the SAT you take. But this silver cloud actually has a dark lining: There are five different types of quadrilaterals that pop up on the test. These five quadrilaterals are trapezoids, parallelograms, rectangles, rhombuses, and squares. A trapezoid may sound like a new Star Wars character. Certainly, it would be less annoying than Jar Jar Binks. But it’s actually the name of a quadrilateral with one pair of parallel sides and one pair of nonparallel sides. In this trapezoid, AB is parallel to CD (shown by the arrow marks), whereas AC and BD are not parallel. The formula for the area of a trapezoid is A = ((s1 + s2) / 2) x h, where s1 and s2 are the lengths of the parallel sides (also called the bases of the trapezoid), and h is the height.
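The angle and perimeter facts above are quick to sanity-check in code (a sketch; function names are ours):

```python
def interior_angle_sum(n):
    """Sum of the interior angles of an n-sided polygon, in degrees."""
    return (n - 2) * 180

def perimeter(sides):
    """Perimeter of a polygon: the sum of its side lengths."""
    return sum(sides)

assert interior_angle_sum(3) == 180   # triangle
assert interior_angle_sum(8) == 1080  # octagon
# the exterior angles of any polygon total 360 degrees
assert all(n * 180 - interior_angle_sum(n) == 360 for n in range(3, 20))
assert perimeter([5, 4, 3, 8, 6, 9]) == 35  # the hexagon example
```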
In a trapezoid, the height is the perpendicular distance from one base to the other. To find the area of a trapezoid on the SAT, you’ll often have to use your knowledge of triangles. Try to find the area of the trapezoid pictured below: The question tells you the lengths of the bases of this trapezoid, 6 and 10. But to find the area, you first need to find the height. To do that, split the trapezoid into a rectangle and a 45-45-90 triangle by drawing in the height. Once you’ve drawn in the height, you can split the base that’s equal to 10 into two parts: The base of the rectangle is 6, and the leg of the triangle is 4. Since the triangle is 45-45-90, the two legs must be equal. This leg, though, is also the height of the trapezoid. So the height of the trapezoid is 4. Now you can plug the numbers into the formula: A = ((6 + 10) / 2) x 4 = 32. A parallelogram is a quadrilateral whose opposite sides are parallel. In a parallelogram, • Opposite sides are equal in length: BC = AD and AB = DC • Opposite angles are equal: ∠ABC = ∠ADC and ∠BAD = ∠BCD • Adjacent angles are supplementary: ∠ABC + ∠BAD = 180° • The diagonals bisect (split) each other: BE = ED and AE = EC • One diagonal splits a parallelogram into two congruent triangles: △ABD ≅ △CDB • Two diagonals split a parallelogram into two pairs of congruent triangles: △AEB ≅ △CED and △AED ≅ △CEB The area of a parallelogram is given by the formula A = bh, where b is the length of the base, and h is the height. A rectangle is a quadrilateral in which the opposite sides are parallel and the interior angles are all right angles. Another way to look at rectangles is as parallelograms in which the angles are all right angles. As with parallelograms, the opposite sides of a rectangle are equal. The formula for the area of a rectangle is A = lw, where l is the length and w is the width. The diagonals of a rectangle are always equal to each other. And one diagonal through the rectangle cuts the rectangle into two equal right triangles. In the figure below, the diagonal BD cuts rectangle ABCD into congruent right triangles ABD and BCD.
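The trapezoid computation, and the parallelogram formula, in code (a sketch with our own function names):

```python
def trapezoid_area(s1, s2, h):
    """Area of a trapezoid with parallel sides s1, s2 and height h."""
    return (s1 + s2) / 2 * h

def parallelogram_area(b, h):
    """Area of a parallelogram with base b and height h."""
    return b * h

# the worked example: bases 6 and 10, height 4 from the 45-45-90 triangle
assert trapezoid_area(6, 10, 4) == 32
assert parallelogram_area(6, 4) == 24
```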
Since the diagonal of the rectangle forms right triangles that include the diagonal and two sides of the rectangle, if you know two of these values, you can always calculate the third with the Pythagorean theorem. If you know the side lengths of the rectangle, you can calculate the diagonal. If you know the diagonal and one side length, you can calculate the other side. Also, keep in mind that the diagonal might cut the rectangle into a 30-60-90 triangle. That would make your calculating job even easier. A rhombus is a specialized parallelogram in which all four sides are of equal length. In a rhombus, • All four sides are equal: AD = DC = CB = BA • The diagonals bisect each other and form perpendicular lines (but note that the diagonals are not equal in length) • The diagonals bisect the vertex angles The formula for the area of a rhombus is A = bh, where b is the length of the base and h is the height. To find the area of a rhombus on the SAT (you guessed it), you’ll probably have to split it into triangles: If ABCD is a rhombus, AC = 4, and ABD is an equilateral triangle, what is the area of the rhombus? Since ABD is an equilateral triangle, the length of each side of the rhombus must be 4, and angles ADB and ABD are 60º. All you have to do is find the height of the rhombus. Draw an altitude from A to DC to create a 30-60-90 triangle. Since the hypotenuse of this 30-60-90 triangle is 4, you can use the ratio 1 : √3 : 2 to calculate that the length of the altitude is 2√3. The area formula for a rhombus is bh, so the area of this rhombus is 4 x 2√3 = 8√3. A square combines the special features of the rectangle and rhombus: All its angles are 90º, and all four of its sides are equal in length. The square has two more crucial special qualities. In a square, • Diagonals bisect each other at right angles and are equal in length. • Diagonals bisect the vertex angles to create 45º angles.
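The rhombus example leans on the 30-60-90 side ratio 1 : √3 : 2; in code (a sketch):

```python
import math

def rhombus_area(b, h):
    """Area of a rhombus: base times height."""
    return b * h

# side 4; the altitude from the 30-60-90 triangle is (4 / 2) * sqrt(3)
height = (4 / 2) * math.sqrt(3)
area = rhombus_area(4, height)
assert abs(area - 8 * math.sqrt(3)) < 1e-12  # area = 8 * sqrt(3)
```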
(This means that one diagonal will cut the square into two 45-45-90 triangles, while two diagonals break the square into four 45-45-90 triangles.) The formula for the area of a square is A = s², where s is the length of a side of the square. Because a diagonal drawn into the square forms two congruent 45-45-90 triangles, if you know the length of one side of the square, you can always calculate the length of the diagonal: Since d is the hypotenuse of the 45-45-90 triangle that has legs of length 5, according to the ratio 1:1:√2, you know that d = 5√2. Similarly, if you know the length of the diagonal, you can calculate the length of the sides of the square.
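For quick checking, the area formulas in this section can be wrapped up in a few lines of Python (this sketch is illustrative, not part of the original guide):

```python
import math

def trapezoid_area(b1, b2, h):
    # average of the two bases times the height
    return (b1 + b2) / 2 * h

def parallelogram_area(b, h):
    # also covers the rhombus, which is a special parallelogram
    return b * h

def square_area(s):
    return s * s

print(trapezoid_area(6, 10, 4))  # the worked trapezoid example, bases 6 and 10, height 4 → 32.0
print(math.sqrt(2) * 5)          # diagonal of a square with side 5, by the 1:1:√2 ratio
```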
Statement: For every $n > 1$ there is always at least one prime $p$ such that $n < p < 2n$.

I am curious to know: if I replace that $2n$ by $2n-\epsilon$ ($\epsilon>0$), then what is the $\inf (\epsilon)$ so that the inequality still holds, meaning there is always a prime between $n$ and $2n-\epsilon$?

4 Answers

(Accepted answer) Three related points are worthy of mention, showing that epsilon can be close to n. There is a result of Finsler that approximates how many primes lie between n and 2n, which is of order n/log(n), as is to be expected by the Prime Number Theorem. Literature on prime gaps will tell you the exponent delta such that there is (for sufficiently large n) at least one prime in the interval (n, n + n^delta). I think delta is less than 11/20. Observed data suggests that n^delta can be replaced by something much smaller: for n between something like 3 and 10^14, some function like 2(log(n))^2 works.

Bertrand's postulate says: if n > 3 is an integer, then there always exists at least one prime number p with n < p < 2n − 2. Thus ε < 2 for n > 3. What if n ≤ 3?

• For n = 3, 3 < 5 < 6 - ε ⇒ ε < 1
• For n = 2, 2 < 3 < 4 - ε ⇒ ε < 1

Hence we have 0 < ε < 1, if ε is a constant.

Comments: now the question may be more interesting. I actually wanted to find the least positive $\epsilon$ such that the condition remains true. – anonymous Aug 11 '10 at 16:21

@Chandru1: Any ε between 0 and 1 will do, so the infimum of all possible positive ε is 0. This is not surprising. Did you want the supremum instead? (The supremum is 1 if you want it to hold for n=2 or 3, and infinity if you only want sufficiently large n.) – ShreevatsaR Aug 11 '10 at 16:44

@ShreevatsaR: Hi, I got it. – anonymous Aug 11 '10 at 16:50

The answer depends on whether you want an answer that is true "for all n" or an answer that is true "for all sufficiently large n."
For instance, there is not always a prime in an interval of the form (n, 3n/2). Take n=7, for instance. There is always a prime in such an interval "for sufficiently large n," however.

This was there in the proof of Bertrand's theorem: if $n>60$, then $\varepsilon=\frac{2n}{3}$ works.

Comment: Hi! Welcome to math.SE. This solution would be more helpful if you pointed readers to the proof you are referring to. Might you be able to add that? – rschwieb Dec 28 '12 at 14:30
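These bounds are easy to probe numerically. Below is a small stdlib-only Python check of Bertrand's postulate for n up to 1000, plus the n = 7 counterexample for the interval (n, 3n/2):

```python
def is_prime(k):
    if k < 2:
        return False
    i = 2
    while i * i <= k:
        if k % i == 0:
            return False
        i += 1
    return True

# Bertrand's postulate: a prime strictly between n and 2n for every n > 1
for n in range(2, 1001):
    assert any(is_prime(p) for p in range(n + 1, 2 * n))

# the counterexample above: no prime in (7, 3*7/2) = (7, 10.5)
assert not any(is_prime(p) for p in range(8, 11))
print("checked up to n = 1000")
```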
Ironically, he takes a swipe at PHP. And the swipe is a bit silly, since PHP does have a strlen function. His point is that "Learn X in 10 minutes"-type books are so superficial as to miss even basic string functions. Except strlen does not give you the length of a string, but the size of a string (in bytes), unless of course mbstring's func_overload directive is enabled.
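The character-versus-byte distinction is not PHP-specific; the same point shows up in Python, for instance:

```python
s = "café"
print(len(s))                  # 4: number of characters
print(len(s.encode("utf-8")))  # 5: number of bytes, since é takes two bytes in UTF-8
```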
Adjusting path segments

Irregular shapes, such as polygons, polylines, freehand paths, elliptical arcs, and Bézier curves are made up of a number of path segments. Path segments can be straight or curved: polygons and polylines contain one or more straight path segments; freehand paths, elliptical arcs, and Bézier curves can include straight and curved path segments. Path segments are defined by the position of the following two kinds of points:

• Anchor points specify the beginning and end of the path segment
• Control points change the direction and shape of a curve

In the Plazmic® Composer, anchor points appear as red squares along the path. When you select an anchor point on a curve, the control points appear. Control points appear as black circles, connected to an anchor point by a direction line. You can use the Transformation or Select tool to move anchor points and control points to refine a shape. You can use the Add Points and Remove Point tools to add or remove anchor points. When you add anchor points to a path, you increase the number of path segments and you have more control over the shape. When you remove anchor points, you simplify the path.
(1) Design and build an even parity generator for a 3-bit word. This circuit will have three inputs and one output. This output should be 1 if there is an odd number of 1’s in the input word; otherwise, the output should be 0. Produce a design using gate equivalency rules containing: (a) A truth table defining the function as previously described
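Before drawing the truth table, note that the required function is simply the XOR of the three inputs, realizable with two XOR gates. A Python sketch of the truth table:

```python
def even_parity_bit(a, b, c):
    # XOR of the three inputs: 1 exactly when an odd number of inputs are 1,
    # which makes the total number of 1s (inputs plus parity bit) even
    return a ^ b ^ c

# print the full truth table for the 3-bit word
for bits in [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]:
    print(*bits, '->', even_parity_bit(*bits))
```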
"""Polynomial interpolation."""
from polynomial import Polynomial
from reduce import System

def interpolator(data):
    """Construct a Polynomial through desired points.

    The single argument, data, is a dictionary mapping inputs to the
    polynomial to their desired outputs.  The current implementation only
    supports integer keys; it should ultimately be more liberal, at least
    as to outputs.  The returned polynomial has as many coefficients as
    there are entries in the supplied dictionary.

    We're solving for a list c of coefficients given:
        value = sum(c[i] * key**i for i in range(len(data)))
    for each key, value in our dictionary.  This is a simple matrix
    problem :-)
    """
    if any(not isinstance(k, int) for k in data):
        raise NotImplementedError('Only integral arguments supported for now')

    n = len(data)
    solve = System(n, [[k ** i for i in range(n)] for k in data])
    coeffs = solve.obtain(list(data.values()))
    if len(coeffs) > n:
        # the solver may report an extra overall scale factor
        assert len(coeffs) == n + 1
        coeffs, scale = coeffs[:n], coeffs[n]
        if scale != 1:
            return Polynomial(coeffs) * (1. / scale)
    return Polynomial(coeffs)
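The module above depends on local polynomial and reduce helpers. For a self-contained illustration of the same matrix idea, here is a stdlib-only sketch (my own code, not part of this module) that builds the Vandermonde system and solves it with exact rational arithmetic:

```python
from fractions import Fraction

def interpolate(data):
    """Return coefficients c with sum(c[i] * key**i) == value for each entry."""
    keys = list(data)
    n = len(keys)
    # Vandermonde matrix rows, augmented with the desired outputs
    rows = [[Fraction(k) ** i for i in range(n)] + [Fraction(data[k])] for k in keys]
    for col in range(n):
        # pick a pivot row with a nonzero entry in this column, then
        # eliminate the column from every other row (Gauss-Jordan)
        pivot = next(r for r in range(col, n) if rows[r][col] != 0)
        rows[col], rows[pivot] = rows[pivot], rows[col]
        for r in range(n):
            if r != col and rows[r][col] != 0:
                f = rows[r][col] / rows[col][col]
                rows[r] = [a - f * b for a, b in zip(rows[r], rows[col])]
    return [rows[i][n] / rows[i][i] for i in range(n)]

print(interpolate({0: 1, 1: 3, 2: 7}))  # coefficients of 1 + x + x², i.e. 1, 1, 1
```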
Cumulative distribution function of the logistic distribution

Use only in the MuPAD Notebook Interface. This functionality does not run in MATLAB.

stats::logisticCDF(m, s)

stats::logisticCDF(m, s) returns the cumulative distribution function of the logistic distribution with mean m and standard deviation s > 0 as a procedure. The procedure f := stats::logisticCDF(m, s) can be called in the form f(x) with an arithmetical expression x. The return value of f(x) is either a floating-point number or a symbolic expression: If x is a floating-point number and m and s can be converted to floating-point numbers, then f(x) returns a floating-point number between 0.0 and 1.0. The call f(- infinity ) returns 0; the call f( infinity ) returns 1. In all other cases, the expression 1/2*(1 + tanh(PI*(x - m)/(2*sqrt(3)*s))) is returned symbolically. Numerical values for m and s are only accepted if they are real and s is positive.

Environment Interactions

Example 1

We evaluate the cumulative distribution function with m = 0 and s = 1 at various points:

f := stats::logisticCDF(0, 1):
f(-infinity), f(-3), f(0.5), f(2/3), f(PI), f(infinity)
delete f:

Example 2

We use symbolic arguments:

f := stats::logisticCDF(m, s):
f(x)

When numerical values are assigned to m and s, the function f starts to produce numerical values:

m := 0: s := 1:
f(3), f(3.0)
delete f, m, s:

The mean: an arithmetical expression representing a real value
The standard deviation: an arithmetical expression representing a positive real value

Return Values
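Outside MuPAD, the same closed form is easy to reproduce; a Python sketch (the function name here is an invention for illustration, not MuPAD API):

```python
import math

def logistic_cdf(m, s):
    """CDF of the logistic distribution with mean m and standard deviation s > 0,
    using the closed form 1/2*(1 + tanh(pi*(x - m)/(2*sqrt(3)*s)))."""
    if s <= 0:
        raise ValueError("standard deviation must be positive")
    return lambda x: 0.5 * (1 + math.tanh(math.pi * (x - m) / (2 * math.sqrt(3) * s)))

f = logistic_cdf(0, 1)
print(f(0))   # 0.5 exactly at the mean
print(f(-3))  # small value in (0, 0.5)
```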
Slothouber–Graatsma puzzle

From Wikipedia, the free encyclopedia

The Slothouber–Graatsma puzzle is a packing problem that calls for packing six 1 × 2 × 2 blocks and three 1 × 1 × 1 blocks into a 3 × 3 × 3 box. The solution to this puzzle is unique (up to mirror reflections and rotations). The puzzle is essentially the same if the three 1 × 1 × 1 blocks are left out, so that the task is to pack six 1 × 2 × 2 blocks into a cubic box with volume 27. The Slothouber–Graatsma puzzle is regarded as the smallest nontrivial 3D packing problem.[citation needed]

[Solution of Slothouber-Graatsma puzzle showing the six 1 x 2 x 2 pieces in exploded view.]

The solution of the Slothouber–Graatsma puzzle is straightforward when one realizes that the three 1 × 1 × 1 blocks (or the three holes) need to be placed along a body diagonal of the box, as each of the 3 x 3 layers in the various directions needs to contain such a unit block. This follows from parity considerations, because the larger blocks can only fill an even number of the 9 cells in each 3 x 3 layer.[1]

The Slothouber–Graatsma puzzle is an example of a cube-packing puzzle using convex polycubes. More general puzzles involving the packing of convex rectangular blocks exist. The best known example is the Conway puzzle which asks for the packing of eighteen convex rectangular blocks into a 5 x 5 x 5 box. A harder convex rectangular block packing problem is to pack forty-one 1 x 2 x 4 blocks into a 7 x 7 x 7 box (thereby leaving 15 holes).[1]

1. ^ a b Elwyn R. Berlekamp, John H. Conway and Richard K. Guy: Winning ways for your mathematical plays, 2nd ed, vol. 4, 2004.
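The parity claim can be verified by brute force. The sketch below (my own, not from the article) enumerates every axis-aligned placement of a 1 × 2 × 2 block inside the 3 × 3 × 3 cube and confirms it always covers an even number of cells in each of the nine 3 × 3 layers:

```python
from itertools import product

def block_cells(anchor, thin_axis):
    # cells of a 1x2x2 block anchored at `anchor`, thin along `thin_axis`
    spans = []
    for axis in range(3):
        size = 1 if axis == thin_axis else 2
        spans.append(range(anchor[axis], anchor[axis] + size))
    return set(product(*spans))

even_everywhere = True
for thin_axis in range(3):
    sizes = [1 if a == thin_axis else 2 for a in range(3)]
    for anchor in product(*(range(3 - s + 1) for s in sizes)):
        cells = block_cells(anchor, thin_axis)
        # count the block's cells in each of the nine 3x3 layers
        for layer_axis, layer in product(range(3), range(3)):
            hits = sum(1 for c in cells if c[layer_axis] == layer)
            if hits % 2:
                even_everywhere = False
print(even_everywhere)  # → True
```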
Basic Input / Output

A String is a sequence of characters. In Java, String is a class from which you make String objects, like:

String name = "Alan Turing";
String course = "CS 303E";
String ssn = "123456789";

A String has an associated length. To obtain the length of a String do:

int nameLen = name.length();

You can also concatenate two Strings to get a third String using the + operator.

String zip9 = "78712" + "-" + "1059";

You can convert data that is entered as a primitive type into a String by using the valueOf() method.

int i = 10;
String ten = String.valueOf ( i );

Basic Output

There are no I/O statements in the Java language. The I/O methods belong to classes in the package. Any source or destination for I/O is considered a stream of bytes. There are three standard stream objects that can be used for input and output: System.in, System.out, and System.err. The System.out object has two methods - print() and println(). The println() method will print a carriage return and line feed after printing so that the next output will be printed on a new line. The method print() will keep the cursor on the same line after printing. In Java 1.5 you can also use the method printf(). The advantage of using the printf() method is that you can now format your output easily. The structure of the printf syntax is as follows:

System.out.printf (format, args);

Here is a complete specification of the format string. Let us look at a couple of examples of how this is used:

int i = 10;
double x = 3.1415;
System.out.printf ("The value of i is %d \n", i);
System.out.printf ("The value of x is %4.2f \n", x);

Basic Input

In Java 1.5 we have the new Scanner class. It has methods of the form hasNext() that return a boolean if there is more input. There are also methods of the form next() that read primitive types and strings. The Scanner object breaks the input into tokens using whitespace as delimiter. Other delimiters can also be used. Here is a short code snippet that shows the use of the Scanner class.
The following program snippet shows how to read and write to the console.

import java.util.*;

public class CalcArea {
    public static void main(String[] args) {
        System.out.print("Enter the radius: ");
        Scanner sc = new Scanner(System.in);
        double radius = sc.nextDouble();
        double area = 3.14159 * radius * radius;
        System.out.println("Area is: " + area);
    }
}
Gene Tree Reconciliation

Given a gene tree and a species tree (e.g., Fig. 1), Notung-2.6 will determine: Notung-2.6 calculates the Duplication/Loss Score of a tree using the formula:

D/L Score = cL*losses + cD*duplications + cCD*conditionalDuplications

The values of cL, cD, and cCD may be set by the user. By default, cL = 1, cD = 1.5, and cCD = 0. These items can be seen in Notung's GUI (Fig. 1) or presented in a text file, generated from the command line.

Fig. 1: Gene Tree Reconciliation
Fig. 1a: Gene Tree
Fig. 1b: Species Tree
Fig. 1c: Reconciled Gene Tree
Fig. 2: Command Line Output

This is an example of Notung-2.6 text file output. It shows the results of the gene tree shown in Fig. 1a reconciled with the species tree shown in Fig. 1b. This output is easily parsable, allowing it to fit into automated processes for high-throughput genomic analysis.

COST: 2.5

Duplication | L. Bound | U. Bound
n4 | tetrapod | N/A
Duplications: 1

Species | No. of Losses
tetrapod | 0
HUMAN | 0
FROG | 1
MOUSE | 0
mammalia | 0
Losses: 1
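The score itself is a one-line weighted sum. A Python sketch (the function name is mine, not Notung's API) reproducing the COST value from the sample output above:

```python
def dl_score(losses, duplications, conditional_duplications,
             cL=1.0, cD=1.5, cCD=0.0):
    """Duplication/Loss Score with Notung's default weights."""
    return cL * losses + cD * duplications + cCD * conditional_duplications

# the sample reconciliation above: 1 duplication, 1 loss, default weights
print(dl_score(losses=1, duplications=1, conditional_duplications=0))  # → 2.5
```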
How tall is the tree when it casts a 96 ft shadow? A vertical yardstick next to the tree casts a 6 ft shadow.

asked Apr 11, 2013 in Geometry Answers by anonymous
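The question is a similar-triangles proportion; a quick check (recalling that a yardstick is 3 ft tall):

```python
yardstick_height = 3   # a yardstick is 3 ft
yardstick_shadow = 6
tree_shadow = 96

# shadows scale with heights, so height/shadow is the same for both objects
tree_height = tree_shadow * yardstick_height / yardstick_shadow
print(tree_height)  # → 48.0
```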
Conceptual - Position Vs. Time

An object moves back and forth, as shown in the position-time graph below. Which of the following statements are correct/incorrect?

At point 5 the velocity is negative.
At points 5 and 7 the object is on the negative side of the origin.
The velocity is positive at point 7.
At points 2 and 5 the object is the same distance from the origin.
At points 1 and 3 the velocities are about the same.
At point 1 the object is closer to the origin than at any other numbered point.
The velocity is zero at point 8.
Definition of: counting sort

counting sort  A sorting technique that is used when the range of keys is relatively small and there are duplicate keys. Unlike sorts that compare records against each other, counting sorts work in multiple passes over the data. They work by creating an array of counters as large as the largest key value in the list; therefore, the keys must be integers or data that can be readily converted to integers. In the first pass, each key value is used to index into the array and add 1. For example, if the key field contains 0000123, the 123rd counter in the array is incremented by 1.

Counting Sort Vs. Rapid Sort

Space equal to the original unordered list is allocated in memory, and the second pass of the counting sort scans the original list and uses the data in the counters to move each record from the original list into the sorted list. A "rapid sort" is similar to a counting sort, except that it is used only to count the number of occurrences of key fields, with no additional data. Instead of using the data in the counters to move records into a new sequence, the counter data in a rapid sort are used to print each key field along with its corresponding count. See sort algorithm.

A Counter Array

The array contains a counter for each key value. For example, if keys range from 1 to 1,000 there would be 1,000 counters. In the final pass, the counter data is used to move the unsorted records to a new sorted order (counting sort) or to print the number of occurrences of just the key (rapid sort).
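A minimal Python sketch of the two passes, sorting bare integer keys (the record-moving variant described above works the same way, but carries the full records alongside the counts):

```python
def counting_sort(keys, max_key):
    counts = [0] * (max_key + 1)
    for k in keys:                 # first pass: use each key to index and tally
        counts[k] += 1
    ordered = []                   # second pass: rebuild the list in sorted order
    for value, c in enumerate(counts):
        ordered.extend([value] * c)
    return ordered

print(counting_sort([3, 1, 4, 1, 5, 9, 2, 6], 9))  # → [1, 1, 2, 3, 4, 5, 6, 9]
```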
As stated, the problem of quantisation of an image is related only to the image's histogram (not the actual layout of the pixels). A common error function takes the l2 distance between the original and quantised histograms (minimise the sum of the squares of the distances between each pixel in the original image and the pixel representing it in the new image). Differentiating this error function, we can see that at a local minimum, the following holds:

• The average value of all the pixels represented by some color in the quantised image is equal to that color.

This suggests the following optimization algorithm, which works quite nicely in practice:

1. Pick some initial set of colors.
2. Divide the image histogram into the regions of points closest to each of the colors (this is a Voronoi tesselation of the color cube!).
3. Move each color to the average of the points in its region, weighted by the histogram.
4. Repeat from 2 until the weighted sum-of-squares error is small enough.

Note that convergence to the global minimum is not guaranteed; in practice this does not matter.
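The loop described above is Lloyd's algorithm (the same iteration as k-means). A toy one-dimensional sketch, treating "colors" as scalars rather than RGB triples:

```python
def quantize(values, centers, iters=20):
    """One-dimensional Lloyd iteration: steps 2 and 3 of the algorithm above."""
    centers = list(centers)
    for _ in range(iters):
        # step 2: Voronoi regions - each value joins its nearest center
        regions = [[] for _ in centers]
        for v in values:
            nearest = min(range(len(centers)), key=lambda j: (v - centers[j]) ** 2)
            regions[nearest].append(v)
        # step 3: move each center to the average of the points in its region
        centers = [sum(r) / len(r) if r else centers[j]
                   for j, r in enumerate(regions)]
    return sorted(centers)

print(quantize([0, 1, 2, 10, 11, 12], [0.0, 5.0]))  # → [1.0, 11.0]
```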
Gamma function

The $\Gamma$ function

Idea and Definition

Leonhard Euler solved the problem of finding a function of a continuous variable $x$ which for integer values of $x = n$ agrees with the factorial function $n \mapsto n!$. In fact, the gamma function is a shift by one of the solution of this problem. For a complex variable $x \neq 0, -1, -2, \ldots$, we define $\Gamma(x)$ by the formula

$\Gamma(x) = \lim_{k\to \infty} \frac{k!\, k^{x-1}}{(x)_k}$

where $(x)_0 = 1$ and, for positive integers $k = 1, 2, \ldots$,

$(x)_k = x (x+1) (x+2) \cdots (x+k-1)$

is the "Pochhammer symbol" (or rising factorial). It easily follows that $\Gamma(n+1) = n!$ for natural numbers $n = 0, 1, 2, \ldots$.

As a function of a complex variable, the Gamma function $\Gamma(x)$ is a meromorphic function with simple poles at $x = 0, -1, -2, \ldots$. Extending the recursive definition of the ordinary factorial function, the Gamma function satisfies the following translation formula:

$\Gamma(x+1) = x \Gamma(x)$

away from $x = 0, -1, -2, \ldots$. It also satisfies a reflection formula, due to Euler:

$\Gamma(x) \Gamma(1-x) = \frac{\pi}{\sin(\pi x)}.$

Quite remarkably, the Gamma function (this time as a function of a real variable) is uniquely characterized in the following theorem:

Theorem (Bohr-Mollerup). The restriction of the Gamma function to the interval $(0, \infty)$ is the unique function $f$ such that $f(x+1) = x f(x)$, $f(1) = 1$, and $\log f$ is convex.

A number of other representations of the Gamma function are known and frequently utilized, e.g.,

• Product representation: $\frac{1}{\Gamma(x)} = x e^{\gamma x} \prod_{n=1}^{\infty} \left(1 + \frac{x}{n}\right) e^{-x/n}$, where $\gamma$ is Euler's constant.

• Integral representation: $\Gamma(x) = \int_{0}^{\infty} t^x e^{-t} \frac{d t}{t}$.

• completed Riemann zeta function
• George Andrews, Richard Askey, Ranjan Roy, Special Functions. Encyclopedia of Mathematics and Its Applications 71, Cambridge University Press, 1999.

Revised on June 26, 2015 13:29:03 by Noam Zeilberger
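The factorial and reflection identities are easy to sanity-check numerically with the Python standard library's math.gamma:

```python
import math

# Gamma(n+1) = n! for small natural numbers
for n in range(6):
    assert abs(math.gamma(n + 1) - math.factorial(n)) < 1e-9

# Euler's reflection formula Gamma(x)Gamma(1-x) = pi / sin(pi x) at a sample point
x = 0.3
lhs = math.gamma(x) * math.gamma(1 - x)
rhs = math.pi / math.sin(math.pi * x)
assert abs(lhs - rhs) < 1e-9
print("identities verified")
```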
Time limit: 1 second | Memory limit: 128 MB | Submissions: 24 | Accepted: 6 | Solvers: 1 | Ratio: 5.556%

Every student needs help getting new knowledge by asking questions. Surveys suggest that some similar questions are repeated frequently. So it will be nice to develop an automatic question-answering system to answer these questions. Your algorithm should not have any prior knowledge, but it must be able to read sentences and remember the mentioned facts. Whenever a question is asked about such a fact, the system has to answer it properly. The input consists of many dialogues. There is a single positive integer T on the first line of input. (T <= 500, but note that 95% of them are relatively small.) It denotes the number of following dialogues. Each dialogue includes one or more lines. Each line contains one sentence: either a statement or a question. Statements end with a dot character (.) while questions end with a question mark (?). There is one extra line after each dialogue. That line ends with an exclamation mark (!). The definitions of the statements and questions will be discussed later. Sentences can contain words, spaces and punctuation characters. All words contain only Latin letters and are case-sensitive. Unlike the normal English writing rules, the first letter of a sentence should keep lowercase unless the first word itself should begin with a capital letter. There are no extra spaces between words. No word will have more than 10 characters. There will be at most 1000 lines per dialogue. Each statement has one of the following forms: noun_phrase are noun_phrase. noun_phrase can verb_phrase. everything which can verb_phrase can verb_phrase. everything which can verb_phrase are noun_phrase. noun_phrase and verb_phrase are both single words. The meanings of the four forms are: A are B: If X is A, then X is B. A can B: If X is A, then X has the ability to B. everything which can A can B: If X has the ability to A, X has the ability to B.
everything which can A are B: If X has the ability to A, X is B. Each question has one of the following forms: are noun_phrase noun_phrase? can noun_phrase verb_phrase? can everything which can verb_phrase verb_phrase? are everything which can verb_phrase noun_phrase? They are the question forms of the statements. In each test case, the number of different noun phrases will not exceed 100; the number of different verb phrases will not exceed 100. For each test case, output two lines. The first line describes the test case number counting from 1, while the second line contains the same number of characters as the number of questions in this test case. Each character is either ‘Y’ (denoting you can derive that fact logically) or ‘M’ (otherwise), without quotes.

Sample Input

flies can fly. flies are insects. everything which can fly are animals. are everything which can fly insects? are flies animals? can flies eat? everything which can fly can eat. can flies eat?

Sample Output

Case #1:
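One way to sketch a solution (an illustration, not an official one): treat each statement as a directed implication edge between ('is', noun) and ('can', verb) nodes, and answer each question by depth-first reachability, accumulating facts as the dialogue proceeds. For the sample dialogue this derives the answer string MYMY:

```python
import re
from collections import defaultdict

edges = defaultdict(set)

def reachable(src, dst):
    seen, stack = set(), [src]
    while stack:
        node = stack.pop()
        if node == dst:
            return True
        if node not in seen:
            seen.add(node)
            stack.extend(edges[node])
    return False

def process(line):
    words = line[:-1].split()
    if line.endswith('.'):                      # statement: record an implication
        if words[0] == 'everything':            # everything which can A {can|are} B
            src = ('can', words[3])
            dst = ('can' if words[4] == 'can' else 'is', words[5])
        else:                                   # A {are|can} B
            src = ('is', words[0])
            dst = ('can' if words[1] == 'can' else 'is', words[2])
        edges[src].add(dst)
        return None
    # question: the verb is moved to the front of the statement form
    if words[1] == 'everything':                # {are|can} everything which can A B
        src = ('can', words[4])
        dst = ('can' if words[0] == 'can' else 'is', words[5])
    else:                                       # {are|can} A B
        src = ('is', words[1])
        dst = ('can' if words[0] == 'can' else 'is', words[2])
    return 'Y' if reachable(src, dst) else 'M'

dialogue = ("flies can fly. flies are insects. everything which can fly are animals. "
            "are everything which can fly insects? are flies animals? can flies eat? "
            "everything which can fly can eat. can flies eat?")
answers = [a for s in re.findall(r'[^.?]+[.?]', dialogue)
           if (a := process(s.strip()))]
print(''.join(answers))  # → MYMY
```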
7.23.4 (texinfo indexing)

Overview

Given a piece of stexi, return an index of a specified variety. Note that currently, stexi-extract-index doesn’t differentiate between different kinds of index entries. That’s a bug ;)

Usage

Function: stexi-extract-index tree manual-name kind

Given an stexi tree tree, index all of the entries of type kind. kind can be one of the predefined texinfo indices (concept, variable, function, key, program, type) or one of the special symbols auto or all. auto will scan the stexi for a (printindex) statement, and all will generate an index from all entries, regardless of type. The returned index is a list of pairs, the CAR of which is the entry (a string) and the CDR of which is a node name (a string).
The mathematical study of the chance of occurrence of random events, or the chance with which a specific event will occur.

ID | Title | Solved By
IPRB | Mendel's First Law | 7611
IEV | Calculating Expected Offspring | 4625
LIA | Independent Alleles | 2447
PROB | Introduction to Random Strings | 2110
RSTR | Matching Random Motifs | 915
EVAL | Expected Number of Restriction Sites | 706
INDC | Independent Segregation of Chromosomes | 464
AFRQ | Counting Disease Carriers | 384
MEND | Inferring Genotype from a Pedigree | 189
SEXL | Sex-Linked Inheritance | 299
WFMD | The Wright-Fisher Model of Genetic Drift | 229
EBIN | Wright-Fisher's Expected Behavior | 194
FOUN | The Founder Effect and Genetic Drift | 184
0.981234 | 1 | Here are all the current TS resumes in the Dothan, Alabama area. Each adult TS entertainer's TS resume consists of pics, previous TS work history, ratings, downloadable and saveable resumes, and a list of skill ratings. At the end of each page is a list of additional pages. Thousands of TS escorts are available in Dothan and other areas |
In mixed number coding questions, a few groups of numbers, each coding a certain short message, are given. By comparing the given coded messages, taken two at a time, the candidate is required to find the number code for each word and then formulate the code for the given message.
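A toy illustration of the comparison technique, with invented messages and codes: a word shared by exactly two messages must be coded by the number those two groups share.

```python
# two coded messages that share exactly one word ('is'); the one number the
# two code groups have in common must therefore be the code for that word
codes = {('sky', 'is', 'blue'): {5, 3, 9},
         ('sea', 'is', 'deep'): {7, 3, 2}}
shared = set.intersection(*codes.values())
print(shared)  # → {3}, so 'is' is coded by 3
```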
Holiday Season Pizza

To celebrate the Holiday Season, two men, Tiger and Stevie, and two women, Jaimee and Rachel, go out for pizza. They order a pizza and have it sliced into eight pieces. Each person orders something to drink. Two people ordered Merlot, one ordered Sauvignon and one ordered Beaujolais. To remain incognito during the holiday season, each person signs the restaurant registry with an assumed family name. The assumed last names are Nike, Taghauer, Gillette and Cadillac.

Everyone had at least one piece. The two Merlot drinkers ate less than half the pizza between them. Jaimee had exactly two more pieces than Gillette. Tiger (who is known to enjoy many pieces) had exactly one more piece than Nike. Mr. Cadillac had at least as many pieces as the Beaujolais drinker.

Question: Can you match each person's full name with the number of pieces each had and the drink he or she ordered?
From Wikipedia, the free encyclopedia

[3D projection of a tesseract, rotating along one axis]

A tesseract is a four-dimensional object, much like a cube is a three-dimensional object. A square has two dimensions. You can make a cube from six squares. A cube has three dimensions. The tesseract is made in the same way, but in four dimensions, so it looks different from what a person might expect. The tesseract has eight cells; each cell is a cube. The tesseract is one of the six convex regular 4-polytopes. Unlike three-dimensional objects which rotate on both an axis and a plane (the plane being of length and width and the axis being of the leftover dimension, height), a tesseract rotates on two planes, one made up of length and width, and one made up of height and the fourth dimension. It is the simplest hypercube.
0.905122 | 1 | (Analysis by Nick Wu) In this problem, we have $N$ cows scattered on the plane and want to compute the minimum distance $D$ such that all cows are able to communicate with each other (perhaps through some intermediate cows), given that two cows can only communicate directly if the distance between them is less than or equal to $D$. We can model this problem as a graph problem where each cow is a vertex connected to every other cow with an edge with the weight being the distance between the two cows. We therefore want to compute the minimum edge weight such that the graph is connected. We can solve this problem using a union-find data structure. A simple union-find data structure supports two operations: $Find(x)$ - return a unique identifier for vertex $x$. $Find(x) = Find(y)$ if and only if $x$ and $y$ are in the same connected component. $Merge(x, y)$ - connect $x$ and $y$ with an edge. Let's assume that we have a sufficiently efficient implementation of a union-find data structure. Given the original $N$ vertices are all pairwise disconnected, we can loop over the edges in nondecreasing order by weight and add an edge between two vertices if they are not already in the same component. After we add $N-1$ edges, the graph is necessarily connected, and the last edge gives us our answer. It remains to implement the union-find data structure efficiently. We will describe an implementation that supports both operations in essentially $O(1)$. We start by describing an approach that supports both operations in linear time, which will not be fast enough for our purposes. We start by annotating each node with a "parent" field - each node starts out with itself being the parent. When we call $Find(x)$, either $x$ is its own parent, in which case we return $x$, or we return $Find(parent(x))$. When we call $Merge(x, y)$, we set the parent of $Find(x)$ to $Find(y)$. 
Intuitively, what we are doing here is building a rooted tree, and when we want to merge two trees, we have the root of one tree point to the root of the other tree. The reason that this runs in linear time is that the depth of the tree may be linear. There are a couple of ways to improve the performance here - if we are always careful to set the parent of the root of the smaller tree to the root of the larger tree, then the depth of the tree is bounded by $O(\log N)$. Another approach we can take is path compression - when we compute $Find(parent(x))$, we can memoize that value. My Java solution uses the latter approach.

import java.io.*;
import java.util.*;
public class moocast {
    static int[] parent;
    public static void main(String[] args) throws IOException {
        BufferedReader br = new BufferedReader(new FileReader("moocast.in"));
        PrintWriter pw = new PrintWriter(new BufferedWriter(new FileWriter("moocast.out")));
        int n = Integer.parseInt(br.readLine());
        int[] x = new int[n];
        int[] y = new int[n];
        for (int i = 0; i < n; i++) {
            StringTokenizer st = new StringTokenizer(br.readLine());
            x[i] = Integer.parseInt(st.nextToken());
            y[i] = Integer.parseInt(st.nextToken());
        }
        parent = new int[n];
        for (int i = 0; i < n; i++) parent[i] = i;
        ArrayList<Edge> edges = new ArrayList<Edge>();
        for (int i = 0; i < n; i++) {
            for (int j = i + 1; j < n; j++) {
                int distance = (x[i] - x[j]) * (x[i] - x[j]) + (y[i] - y[j]) * (y[i] - y[j]);
                edges.add(new Edge(i, j, distance));
            }
        }
        Collections.sort(edges); // process edges in nondecreasing order by weight
        int lastWeight = 0;
        int numComponents = n;
        for (Edge curr : edges) {
            if (find(curr.i) != find(curr.j)) {
                merge(curr.i, curr.j);
                lastWeight = curr.w;
                if (--numComponents == 1) {
                    break;
                }
            }
        }
        pw.println(lastWeight);
        pw.close();
    }
    public static int find(int curr) {
        return parent[curr] == curr ? curr : (parent[curr] = find(parent[curr]));
    }
    public static void merge(int x, int y) {
        parent[find(x)] = find(y);
    }
    static class Edge implements Comparable<Edge> {
        public int i, j, w;
        public Edge(int a, int b, int c) { i = a; j = b; w = c; }
        public int compareTo(Edge e) { return w - e.w; }
    }
} |
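The same Kruskal-style sweep with a union-find can be written compactly in Python (the function names and the iterative path-halving variant of compression are my choices, not part of the original solution):

```python
def find(parent, x):
    # Path halving: point every other node on the walk directly at its grandparent.
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def merge(parent, x, y):
    parent[find(parent, x)] = find(parent, y)

def min_broadcast_radius_sq(points):
    """Smallest squared distance D such that the graph whose edges join
    pairs at squared distance <= D is connected."""
    n = len(points)
    edges = sorted(
        ((xi - xj) ** 2 + (yi - yj) ** 2, i, j)
        for i, (xi, yi) in enumerate(points)
        for j, (xj, yj) in enumerate(points) if i < j
    )
    parent = list(range(n))
    components = n
    for w, i, j in edges:
        if find(parent, i) != find(parent, j):
            merge(parent, i, j)
            components -= 1
            if components == 1:
                return w  # the last (heaviest) edge added is the answer
    return 0
```

Sorting all O(N²) edges and sweeping them in order is exactly the strategy the editorial describes.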
0.975386 | 1 | cs381-fall03-prelim1-solutions - CS381 Fall 2003 First Mid Term. CS381 First Mid Term Solution, Friday Oct 10, 2003, Fall 2003, Olin 255, 9:05-9:55. This is a 50-minute in-class closed book exam. All questions are straightforward and you should have no trouble doing them. Please show all work and write legibly. Thank you. There are many ways to do each problem. The answers given below are only one of many possible solutions. 1. Let L be the set of strings of 0's and 1's with an even number of 0's. Strings with zero 0's have an even number of 0's. Write a regular expression for L. Answer: (1*01*01*)*+1* 2. Consider the set $\{0^i10^{2i}1 \mid i \geq 1\}^* \;\cap\; 01\{0^i10^{2i}1 \mid i \geq 1\}^*0^*1$. Write down a string of length 19 in the set. Answer: 0100100001000000001 (that is, $010^210^410^81$). What is the length of the shortest string in the set of length greater than 19? Answer: 19+16+1+32+1 = 69, the string $010^210^410^810^{16}10^{32}1$. 3. Let L be a set of strings. In each string in L delete every b immediately ... |
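The answer to problem 1 can be sanity-checked by brute force. The sketch below (my code, not part of the solutions) compares the regular expression against a direct count of 0's, writing the regular-expression union "+" as "|" in Python syntax:

```python
import re
from itertools import product

# (1*01*01*)* + 1*  should match exactly the strings over {0,1}
# with an even number of 0's (each iteration consumes two 0's;
# the 1* branch covers the all-1 strings, including the empty string).
EVEN_ZEROS = re.compile(r"(?:1*01*01*)*|1*")

def brute_check(max_len=10):
    # Compare the regex verdict with a direct parity count on every string.
    for n in range(max_len + 1):
        for bits in product("01", repeat=n):
            s = "".join(bits)
            expected = s.count("0") % 2 == 0
            got = EVEN_ZEROS.fullmatch(s) is not None
            if expected != got:
                return False
    return True
```

Running `brute_check()` over all short strings confirms the expression is correct.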
0.974777 | 1 | The Old Goats. Stage: 3, Challenge Level: 1. A rectangular field measuring 17.5 metres by 10 metres has two posts 7.5 metres apart with a ring on top of each post. The posts are approximately 7 metres from the nearest corners, as shown in the diagram. You have two quarrelsome goats and plenty of rope which you can tie to their collars, and you want them to be able to graze the whole field between them. How can you secure them, using the posts, so they can't fight each other, and how much rope is needed? |
0.997641 | 1 | Addition of matrices consists of adding the corresponding elements of the two matrices. You can only add matrices having the same dimensions, and the result of the addition is a matrix of the same dimensions. Similarly, the difference matrix is formed by subtracting the corresponding elements of two matrices of the same dimensions; again, the result is a matrix of the same dimensions. Examples are added in the annex. In matrix algebra there are two simple rules for addition and subtraction. 1. In each case the order of both the given matrices must be the same. 2. Each element of the first matrix is added to, or subtracted from, the corresponding element of the second matrix. |
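The two rules translate directly into code; here is a minimal Python sketch (the function names are mine):

```python
def mat_add(A, B):
    # Rule 1: both matrices must have the same order (dimensions).
    if len(A) != len(B) or any(len(ra) != len(rb) for ra, rb in zip(A, B)):
        raise ValueError("matrices must have the same order")
    # Rule 2: add corresponding elements.
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_sub(A, B):
    if len(A) != len(B) or any(len(ra) != len(rb) for ra, rb in zip(A, B)):
        raise ValueError("matrices must have the same order")
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]
```

The result in both cases has the same dimensions as the inputs, as the text states.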
0.964252 | 1 | Math Tools for Students Set 1 This is a set of simple tools for students of math, engineering, and science, including: ASCII Converter; Binary to Hex or Decimal Converter; Decimal to Hex or Binary Converter; Hexadecimal to Decimal or Binary Converter; Right Triangle Calculator; String to ASCII or Hex or Binary Converter; Trigonometry Calculator. |
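Most of the listed converters can be sketched with Python built-ins; the app's exact behaviour is not documented here, so this is only illustrative:

```python
# Binary, hex, and ASCII conversions via Python built-ins.
assert int("1010", 2) == 10           # binary -> decimal
assert int("ff", 16) == 255           # hexadecimal -> decimal
assert bin(255) == "0b11111111"       # decimal -> binary
assert hex(255) == "0xff"             # decimal -> hexadecimal
assert ord("A") == 65                 # character -> ASCII code
assert "".join(format(ord(c), "02x") for c in "Hi") == "4869"  # string -> hex
```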
0.936629 | 1 | Candy Saga Deluxe Candy Saga Deluxe is an addictive and delicious game! Game Features: Arcade Mode and Time Mode. Swap and match 3 candies in a line to remove them. Match 4 candies to win a candy bomb and 1 lightning. Match 5 candies to win a candy bomb and 2 lightnings. The candy bomb eliminates the candies around it. The color-changing candy can match any other colored candy. The lightning candy eliminates candies in one row or column. Arcade Mode: more than 200 challenging levels. It's a real challenge to get 3 stars in each level! |
0.923124 | 1 | ex·po·nen·tial func·tion 1. the function y = e^x. 2. any function of the form y = b·a^x, where a and b are positive constants. 3. any function in which a variable appears as an exponent and may also appear as a base, as y = x^(2x). See also: exponential curve, exponential horn. |
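Senses 1 and 2 are related: y = e^x is the special case of y = b·a^x with b = 1 and a = e. A small sketch (the function name is mine):

```python
import math

def exp_general(x, a=math.e, b=1.0):
    # Sense 2: y = b * a**x; with the defaults it reduces to sense 1, y = e**x.
    return b * a ** x

assert math.isclose(exp_general(2), math.e ** 2)
assert math.isclose(exp_general(3, a=2, b=5), 40.0)  # 5 * 2**3
```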
0.950921 | 1 | Copyright © University of Cambridge. All rights reserved. 'Line of Four'. You will need a sheet of squared paper or 'grid' paper. If you don't have any, just rule some lines down some lined writing paper to form squares. One hundred squares (10 by 10) makes a good size game board, but you can use a smaller or larger grid if you prefer. Two players, using two different colours, take turns to mark a 'cross-road' on the grid. The aim is to make a line of four: across, down or diagonally. Each 'cross-road' can only be used once. Try to block the other player's path. Each time you make a line-of-four, draw a line through it so it can be counted as a point. When all the 'cross-roads' have been used, the player with the most points wins. In the example below, each player has had 10 turns so far. The green player has already scored 2 points, and the red player has only one point. |
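Checking for a completed line of four is easy to automate. The sketch below (my code, with one player's marks stored as (row, column) pairs) scans the four possible directions from each marked cross-road:

```python
def has_line_of_four(marks):
    """marks: set of (row, col) cross-roads claimed by one player."""
    directions = [(0, 1), (1, 0), (1, 1), (1, -1)]  # across, down, two diagonals
    for (r, c) in marks:
        for dr, dc in directions:
            # A line of four starting at (r, c) in direction (dr, dc)?
            if all((r + i * dr, c + i * dc) in marks for i in range(4)):
                return True
    return False
```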
0.959231 | 1 | Students in Miss Moseley's fourth grade class are learning multiplication, and they demonstrate mastery by passing assessments. Travis has passed 11 tests, and his classmate, Jenifer, has passed 2 tests. Going forward, Travis plans to pass 2 tests per week. Meanwhile, Jenifer plans to pass 5 tests per week. Eventually Jenifer will catch up to Travis. When the number of tests each student has passed are equal, how many tests will each student have passed and how many weeks will it take? Answer: each will have passed 17 tests, and it will take 3 weeks. Setting the totals equal gives 11 + 2w = 2 + 5w, so 3w = 9 and w = 3; then 11 + 2(3) = 2 + 5(3) = 17. |
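The arithmetic can be checked in a couple of lines (the helper name is mine):

```python
# Travis: 11 tests so far, 2 more per week.  Jenifer: 2 so far, 5 per week.
# Solve start_a + rate_a * w == start_b + rate_b * w for w.
def weeks_to_catch_up(start_a, rate_a, start_b, rate_b):
    # Assumes rate_b > rate_a, so the trailing student actually catches up.
    weeks = (start_a - start_b) / (rate_b - rate_a)
    return weeks, start_a + rate_a * weeks

weeks, tests = weeks_to_catch_up(11, 2, 2, 5)  # 3 weeks, 17 tests each
```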
0.940019 | 1 | An Isaac Newton Institute Programme: Logic and Algorithms. Complete problems for higher order logics. 13th February 2006. Authors: Lauri Hella (University of Tampere), Jose Maria Turull-Torres (Massey University). Let $i, j \geq 1$, and let $\Sigma^i_j$ denote the class of higher order logic formulas of order $i+1$ with $j-1$ alternations of quantifier blocks of variables of order $i+1$, starting with an existential quantifier block. There is a precise correspondence between the nondeterministic exponential time hierarchy and the different fragments of higher order logics $\Sigma^i_j$, namely $NEXP^j_i = \Sigma^{i+1}_j$. In this article we present a complete problem for each level of the nondeterministic exponential time hierarchy, under a very weak sort of reduction, namely quantifier-free first order reductions. Moreover, we do not assume the existence of an order on the input structures in these reductions. From the logical point of view, our main result says that every fragment $\Sigma^i_j$ of higher order logics can be captured with a first order logic Lindström quantifier. Moreover, as our reductions are quantifier-free first order formulas, we get a normal form stating that each $\Sigma^i_j$ sentence is equivalent to a single occurrence of the quantifier and a tuple of quantifier-free first order formulas. Our complete problems are a generalization of the well known problem of quantified Boolean formulas with bounded alternation. |
0.950267 | 1 | Derivative (finance) In finance, a derivative is a special type of contract. In it, the two parties agree to sell (or to buy) certain goods, at a given price, on a given date. Derivatives can be used in two ways. The first is called speculation: one party hopes that the market price will differ from the price agreed upon in the contract, so that he can pocket the difference between the two. The second is called hedging: one party wants to make sure that the market price doesn't move in a direction that would hurt his profits, so he makes sure that the price is agreed upon long before the transaction takes place. For a seller, hedging means that he can be certain to receive the agreed-upon price, and for the buyer hedging means that he can be certain not to pay more than the agreed-upon price. One of the oldest derivatives is rice futures, which have been traded on the Dojima Rice Exchange since the eighteenth century.[1] References |
0.963525 | 1 | Presentation on theme: "Ift6802 - H03 Commerce electronique: Agents. Presentation adapted from the notes of Adina Florea, course at Worcester Polytechnic."— Presentation transcript: 1 Ift6802 - H03 Commerce electronique: Agents. Presentation adapted from the notes of Adina Florea, course at Worcester Polytechnic 2 Lecture outline n Motivation for agents n Definitions of agents: agent characteristics, taxonomy n Agents and objects n Multi-Agent Systems n Agent intelligence n Examples 3 Context of agent technology n Need for robust and flexible software components n Opportunism: take advantage of available resources n Plan to exploit the unpredictable n Autonomy, intelligence, evolution 4 Motivations for agents n Large-scale, complex, distributed systems: understand, build, manage n Open and heterogeneous systems - build components independently n Distribution of resources n Distribution of expertise n Needs for personalization and customization n Interoperability with legacy systems 5 Questions: Programs or agents? a thermostat with a sensor for detecting room temperature; an electronic calendar (arranges meetings); a system with messages sorted by date, or with messages sorted by order of importance and SPAM deleted; a bidding program for electronic auctions 6 Agent? The term agent is used frequently nowadays in: Sociology, Biology, Cognitive Psychology, Social Psychology, and Computer Science / AI. Why agents? What are they in Computer Science? Do they bring us anything new in modelling and constructing our applications? Much discussion of what (software) agents are and of how they differ from programs in general 7 What is an agent (in computer science)?
n There is no universally accepted definition of the term agent, and there is a good deal of ongoing debate and controversy on this subject n The situation is somewhat comparable with the one encountered when defining artificial intelligence. n Confluence of several research groups: - Distributed computing - AI - Sociology & Economics - Artificial life / simulation 8 n More than 30 years ago, computer scientists set out to create artefacts with the capacities of an intelligent person (artificial intelligence). n Now we are facing the challenge to emulate or simulate the way humans act in their environment, interact with one another, cooperatively solve problems or act on behalf of others, solve more and more complex problems by distributing tasks, or enhance their problem solving performance by competition. 9 n Are agents intelligent? n Not necessarily. n We will not enter a debate about what intelligence is n Agent = more often defined by its characteristics - many of them may be considered as a manifestation of some aspect of intelligent behaviour. 10 Agent definitions n Most often, when people use the term agent they refer to an entity that functions continuously and autonomously in an environment in which other processes take place and other agents exist. (Shoham, 1993) n An agent is an entity that senses its environment and acts upon it (Russell, 1997) 11 Agent & Environment [diagram: the agent receives sensor input from the environment and produces action output upon it] 12 n Autonomous agents are computational systems that inhabit some complex dynamic environment, sense and act autonomously in this environment, and by doing so realise a set of goals or tasks for which they are designed. (Maes 1995) n Intelligent agents continuously perform three functions: perception of dynamic conditions in the environment; action to affect conditions in the environment; and reasoning to interpret perceptions, solve problems, draw inferences, and determine actions.
(Hayes-Roth 1995) n Intelligent agents are software entities that carry out some set of operations on behalf of a user or another program, with some degree of independence or autonomy, and in so doing, employ some knowledge or representation of the user's goals or desires. (the IBM Agent) 13 Levels of agents - Loop: perceive the environment and wait for something to do; decide what to do; take action n Reactive controllers - Software daemons - Thermostats, in/out spoolers, servers 14 Identified characteristics Two main streams of definitions n Define an agent in isolation n Define an agent in the context of a society of agents - social dimension - MAS Two types of definitions n Does not necessarily incorporate intelligence n Must incorporate a kind of AI behaviour - intelligent agents 15 n Interesting Agent = a hardware or software-based computer system that enjoys the following properties: - autonomy: agents operate without the direct intervention of humans or others, and control their actions and state; - reactivity: agents perceive their environment and respond in a timely fashion to changes that occur in it; - complexity: manipulating the environment is not trivial; - pro-activeness: agents do not simply act in response to their environment, they are able to exhibit goal-directed behaviour by taking the initiative; - social ability: agents interact with other agents (and possibly humans) via some kind of agent language. (Wooldridge and Jennings, 1995) 16 Agent characteristics n function continuously / persistent software n sense the environment and act upon it / reactivity n act on behalf of a user or another program n autonomous n purposeful action / pro-activity - goal-directed behaviour vs reactive behaviour? n mobility - intelligence? n Goals, rationality - cognitive n Reasoning, decision making - cognitive n Learning/adaptation n Interaction with other agents - social dimension Other bases for intelligence?
17 Environment properties - Accessible vs inaccessible - Deterministic vs nondeterministic - Episodic vs non-episodic - Static vs dynamic - Open vs closed - Contains other agents or not 18 MAS - many agents in the same environment n Interactions among agents - high-level interactions n Interactions for: coordination, communication, organization o Coordination: collectively motivated / interested; self-interested - own goals / indifferent - own goals / competition / competing for the same resources - own goals / competition / contradictory goals - own goals / coalitions 19 o Communication: communication protocol, communication language - negotiation to reach agreement - ontology o Organizational structures: centralized vs decentralized, hierarchical / markets "cognitive agent" approach MAS systems? n Electronic calendars n Air-traffic control system 20 The wise men problem A king wishing to know which of his three wise men is the wisest, paints a white spot on each of their foreheads, tells them at least one spot is white, and asks each to determine the color of his spot. After a while the smartest announces that his spot is white 21 Agents vs Objects n Agents are the result of a continuous progression in the evolution of abstraction concepts in computer science. 22 Agents vs Objects n Autonomy - stronger - agents have sole control over their actions; an agent may refuse or ask for compensation n Flexibility - Agents are reactive, like objects, but also pro-active n Agents are usually persistent n Own thread of control Agents vs MAS n Coordination - as defined by the designer, no contradictory goals n Communication - higher level communication than object messages n Organization - no explicit organizational structures for objects n No prescribed rational/intelligent behaviour 23 Ferber: n An agent is a physical or virtual entity 1. which is capable of acting in an environment. 2.
which can communicate directly with other agents. 3. which is driven by a set of tendencies (in the form of individual objectives or of a satisfaction/survival function which it tries to optimize). 4. which possesses resources of its own. 5. which is capable of perceiving its environment (but to a limited extent). 6. which has only a partial representation of its environment (and perhaps none at all). 7. which possesses skills and can offer services. 8. which may be able to reproduce itself. 9. whose behaviour tends towards satisfying its objectives, taking account of the resources and skills available to it and depending on its perception, its representation and the communications it receives. 23 24 Ferber: n Multi Agent System 1. An environment E, that is, a space which generally has volume. 2. A set of objects, O. These objects are situated, that is to say, it is possible at a given moment to associate any object with a position in E. 3. An assembly of agents, A, which are specific objects (a subset of O), represent the active entities in the system. 4. An assembly of relations, R, which link objects (and therefore, agents) to one another. 5. An assembly of operations, Op, making it possible for the agents of A to perceive, produce, transform, and manipulate objects in O. 6. Operators with the task of representing the application of these operations and the reaction of the world to this attempt at modification, which we shall call the laws of the universe. 24 25 There are two important special cases of this general definition. n Purely situated agents n An example would be a robot. In this case E, the environment, is Euclidean 3- space. A are the robots, and O, not only other robots but physical objects such as obstacles. These are situated agents. n Pure Communication Agents n If A = O and E is empty, then the agents are all interlinked in a communication networks and communicate by sending messages. We have a purely communicating MAS. 
26 How do agents acquire intelligence? Cognitive agents The model of human intelligence and the human perspective of the world characterise an intelligent agent using symbolic representations and mentalistic notions: - knowledge - John knows humans are mortal - beliefs - John took his umbrella because he believed it was going to rain - desires, goals - John wants to possess a PhD - intentions - John intends to work hard in order to have a PhD - choices - John decided to apply for a PhD - commitments - John will not stop working until getting his PhD - obligations - John has to work to make a living (Shoham, 1993) 27 Premises n Such a mentalistic or intentional view of agents - a kind of "folk psychology" - is not just another invention of computer scientists but is a useful paradigm for describing complex distributed systems. n The complexity of such a system, or the fact that we cannot know or predict the internal structure of all components, seems to imply that we must rely on animistic, intentional explanations of system functioning and behaviour. Is this the only way agents can acquire intelligence? 28 n Comparison with AI - an alternate approach to realizing intelligence - the sub-symbolic level of neural networks n An alternate model of intelligence in agent systems. Reactive agents n Simple processing units that perceive and react to changes in their environment. n Do not have a symbolic representation of the world and do not use complex symbolic reasoning. n The advocates of reactive agent systems claim that intelligence is not a property of the active entity but is distributed in the system, and stems from the interaction between the many entities of the distributed structure and the environment.
29 The problem of prey and predators Reactive approach The prey emit a signal whose intensity decreases in proportion to distance - it plays the role of an attractor for the predators Hunters emit a signal which acts as a repellent for other hunters, so as not to find themselves at the same place Each hunter is attracted by the prey and (weakly) repelled by the other hunters Cognitive approach Detection of prey animals Setting up the hunting team; allocation of roles Reorganisation of teams Necessity for dialogue/communication and for coordination Predator agents have goals; they appoint a leader that organizes the distribution of work and coordinates actions 30 The problem of the Prisoner's Dilemma
Player A \ Player B | Cooperate | Defect
Cooperate | Fairly good (+5) | Bad (-10)
Defect | Good (+10) | Mediocre (0)
Outcomes for actor A (in words, and in hypothetical "points") depending on the combination of A's action and B's action, in the "prisoner's dilemma" game situation. A similar scheme applies to the outcomes for B. The wise men problem A king wishing to know which of his three wise men is the wisest, paints a white spot on each of their foreheads, tells them at least one spot is white, and asks each to determine the color of his spot. After a while the smartest announces that his spot is white 31 n Is intelligence only optimal action towards a goal? Only rational behaviour?
Emotional agents n A computable science of emotions n Virtual actors - Listen through speech recognition software to people - Respond, in real time, with morphing faces, music, text, and speech n Emotions: - Appraisal of a situation as an event: joy, distress; - Presumed value of a situation as an effect affecting another: happy-for, gloating, resentment, jealousy, envy, sorry-for; - Appraisal of a situation as a prospective event: hope, fear; - Appraisal of a situation as confirming or disconfirming an expectation: satisfaction, relief, fears-confirmed, disappointment n Manifest temperament, control of emotions 32 Areas of R&D in MAS n Agent architectures n Knowledge representation: of the world, of itself, of the other agents n Distributed search n Coordination n Planning: task sharing, result sharing, distributed planning n Communication: languages, protocols n Decision making: negotiation, markets, coalition formation n Organizational theories n Learning n Implementation: - Agent programming: paradigms, languages - Agent platforms - Middleware, mobility, security 33 Agents and MAS Applications n Industrial applications: real-time monitoring and management of manufacturing and production processes, telecommunication networks, transportation systems, electricity distribution systems, etc. n Business process management, decision support n e-commerce, e-markets n Information retrieval and filtering n PDAs n Human-computer interaction n Entertainment n CAI, Web-based learning n CSCW 34 Some examples PDAs used as an interface with the user: manage the interactions with other assistant agents (meeting organisation, ...) and with other types of agent (information search, service, contract negotiation, ...) n Server agents - terminal services: bound to applications such as databases, thematic servers, etc.; achieve a terminal service encapsulated in agents - intermediate services: provide services with added value, since they bring together terminal services.
In a travel agency scenario this corresponds to the integration of services such as hotel reservation, flight reservation … 35 Some examples Marketing agents - commercial agents: used to inform potential users about their services, offers, etc. (broker, intermediate services and assistants) - buyer agents: negotiate the prices of services - broker agents that move on the net and look for interesting information for the user. 36 Intelligent Interfaces (server) [diagram: natural-language interaction with linguistic analysers; learning by observation and trace analysis; a user-clone agent modelling the user's habits; application interaction via specialised agents, dialog boxes, speech acts and communication protocols. Example: the agent Artimis (CNET)] 37 Information Agents [diagram: a sort/selection agent subscribes to thematic channels on the Internet (theme 1 ... theme k) and pushes relevant publications to the user - push technology] 38 Are these examples of agents? If yes, are they intelligent? n Thermostat example n Electronic calendar n Present a list of messages sorted by date n Present a list of messages sorted by order of importance - act on behalf of a user or another program - autonomous - sense the environment and act upon it / reactivity - purposeful action / pro-activity - function continuously / persistent software - goals, rationality - reasoning, decision making - learning/adaptation - social dimension |
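The prisoner's-dilemma payoffs from slide 30 can be encoded and checked in a few lines of Python (the dictionary encoding is mine). Whatever the opponent does, defecting scores more for A, which is exactly the dilemma:

```python
# Payoff matrix for player A from the slide ("C" = cooperate, "D" = defect).
PAYOFF_A = {
    ("C", "C"): 5,    # fairly good
    ("C", "D"): -10,  # bad
    ("D", "C"): 10,   # good
    ("D", "D"): 0,    # mediocre
}

def best_response(opponent_action):
    # Pick A's action that maximizes A's payoff against the given opponent move.
    return max("CD", key=lambda a: PAYOFF_A[(a, opponent_action)])

# Defecting dominates: it is the best response to both opponent actions,
# even though mutual cooperation (+5 each) beats mutual defection (0 each).
assert best_response("C") == "D" and best_response("D") == "D"
```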
0.952953 | 1 | #written by usingpython.com
#allows us to access a random 'key' in the dictionary
import random

#the questions/answers dictionary
my_dict = {
    "Base-2 number system": "binary",
    "Number system that uses the characters 0-F": "hexadecimal",
    "7-bit text encoding standard": "ascii",
    "16-bit text encoding standard": "unicode",
    "A number that is bigger than the maximum number that can be stored": "overflow",
    "8 bits": "byte",
    "1024 bytes": "kilobyte",
    "Picture Element. The smallest component of a bitmapped image": "pixel",
    "A continuously changing wave, such as natural sound": "analogue",
    "The number of times per second that a wave is measured": "sample rate",
    "A binary representation of a program": "machine code"
}

#welcome message
print("Computing Revision Quiz")
print("=======================")

#the quiz will end when this variable becomes 'False'
playing = True

#while the game is running
while playing == True:
    #set the score to 0
    score = 0
    #get the number of questions the player wants to answer
    num = int(input("\nHow many questions would you like: "))
    #loop the correct number of times
    for i in range(num):
        #the question is one of the dictionary keys, picked at random
        question = random.choice(list(my_dict.keys()))
        #the answer is the string mapped to the question key
        answer = my_dict[question]
        #print the question, along with the question number
        print("\nQuestion " + str(i + 1))
        print(question + "?")
        #get the user's answer attempt
        guess = input("> ")
        #if their guess is the same as the answer
        if guess.lower() == answer.lower():
            #add 1 to the score and print a message
            print("Correct!")
            score += 1
        else:
            print("Nope!")
    #after the quiz, print their final score
    print("\nYour final score was " + str(score))
    #store the user's input...
    again = input("Enter any key to play again, or 'q' to quit.")
    #...and quit if they type 'q'
    if again.lower() == 'q':
        playing = False |
0.902387 | 1 | Bashundhara LP Gas Ltd Bashundhara LP Gas Ltd is the first private LPG plant in the country. It has a higher production rate than any other filling station in Bangladesh, and at 3,000 MT it possesses the largest storage capacity of any LPG plant in Bangladesh. LPG is an intermediate product situated between natural gas and crude oil. LPG is a common fuel in parts of our country where natural gas is not available, and it has gradually become a very popular fuel. |
0.997162 | 1 | Calculus BC: Series. Series With Positive and Negative Terms. In this section we briefly state two results concerning series $\sum a_n$ with terms $a_n$ that are not necessarily $\geq 0$. The first result has to do with absolute convergence and the second with alternating series. 1. A series $\sum a_n$ is said to converge absolutely if $\sum |a_n|$ converges. It is a theorem that if any series converges absolutely, then it also converges. 2. A series $\sum a_n$ is said to be alternating if the $a_n$ alternate between being positive and negative. If $\sum a_n$ is an alternating series such that $|a_{n+1}| \leq |a_n|$ for all $n \geq 1$ and $\lim_{n \to \infty} a_n = 0$, then $\sum a_n$ converges. This is called the alternating series test. |
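A quick numeric illustration (my code): the alternating harmonic series 1 − 1/2 + 1/3 − ... meets both hypotheses of the alternating series test and converges to ln 2, even though it does not converge absolutely; for alternating series of this kind the truncation error is at most the first omitted term:

```python
import math

def partial_sum(n):
    # S_n = sum_{k=1}^{n} (-1)**(k+1) / k, the alternating harmonic series.
    return sum((-1) ** (k + 1) / k for k in range(1, n + 1))

# The error after n terms is bounded by the first omitted term, 1/(n+1).
assert abs(partial_sum(1000) - math.log(2)) < 1 / 1001
```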
0.908888 | 1 | Physical Characteristics of People in English. The table below lists the English and Turkish words used to identify, depict, and describe people physically: grey - gri saçlı; slim - zayıf; has a good figure - iyi görünümlü; tall - uzun; adult - yetişkin; has freckles - çilli; in his twenties - yirmisinde; thin - sıska; bald - dazlak; wavy - dalgalı saçlı; blonde, fair - sarışın; long - uzun; short - kısa; wears glasses - gözlüklü; child - evlat; middle-aged - orta yaşlı; well-built - boyu posu yerinde; curly - kıvırcık; average height - orta boylu; well-dressed - iyi giyimli; dark - esmer; elderly - oldukça yaşlı. Example sentences: Paul is tall and slim with blonde hair. He's about twenty-five and is wearing a suit. Mandy is in her thirties and is rather fat. She has dark, curly hair and is well-dressed. Emma is middle-aged and is about 162 cm tall. She has short, wavy, blonde hair and wears glasses. She is slim and is wearing a dress. Pamela is about twenty-four and is of average height. She has a good figure and has long, dark hair. She is wearing a jumper, jeans and a pair of boots. Ken is middle-aged and is of average height. He is well-built with short, dark hair. He is wearing a suit. Brian is an elderly man who is short and fat. He is bald and wears glasses. He is wearing a jacket. Timothy is a teenager with short, curly, dark hair. He is about 160 cm tall and has freckles. He is wearing a jumper, jeans and a pair of trainers. Caroline is about seventeen with short, blonde hair. She is very tall and thin and is wearing a short skirt and a blouse. |
0.970581 | 1 | real function cmach(job)
      integer job
c
c     smach computes machine parameters of floating point
c     arithmetic for use in testing only.  not required by
c     linpack proper.
c
c     if trouble with automatic computation of these quantities,
c     they can be set by direct assignment statements.
c     assume the computer has
c
c        b = base of arithmetic
c        t = number of base b digits
c        l = smallest possible exponent
c        u = largest possible exponent
c
c     then
c
c        eps = b**(1-t)
c        tiny = 100.0*b**(-l+t)
c        huge = 0.01*b**(u-t)
c
c     dmach same as smach except t, l, u apply to
c     double precision.
c
c     cmach same as smach except if complex division
c     is done by
c
c        1/(x+i*y) = (x-i*y)/(x**2+y**2)
c
c     then
c
c        tiny = sqrt(tiny)
c        huge = sqrt(huge)
c
c     job is 1, 2 or 3 for epsilon, tiny and huge, respectively.
c
      real eps,tiny,huge,s
c
      eps = 1.0
   10 eps = eps/2.0
      s = 1.0 + eps
      if (s .gt. 1.0) go to 10
      eps = 2.0*eps
      cmach = eps
      if (job .eq. 1) return
c
      s = 1.0
   20 tiny = s
      s = s/16.0
      if (s*1.0 .ne. 0.0) go to 20
      tiny = (tiny/eps)*100.
      s = real((1.0,0.0)/cmplx(tiny,0.0))
      if (s .ne. 1.0/tiny) tiny = sqrt(tiny)
      huge = 1.0/tiny
      if (job .eq. 1) cmach = eps
      if (job .eq. 2) cmach = tiny
      if (job .eq. 3) cmach = huge
      return
      end |
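The eps loop at the start of cmach translates almost line for line into Python; for IEEE double precision it recovers 2^-52, which Python exposes as sys.float_info.epsilon:

```python
import sys

def machine_eps():
    # Same idea as cmach's eps computation: halve eps until 1.0 + eps/2
    # is no longer distinguishable from 1.0, then report the last eps
    # for which 1.0 + eps > 1.0 still holds.
    eps = 1.0
    while 1.0 + eps / 2.0 > 1.0:
        eps /= 2.0
    return eps

assert machine_eps() == sys.float_info.epsilon  # 2**-52 for IEEE doubles
```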