Monday, January 25, 2016

Faith: Where's the boundary?

   The world is full of invisible boundaries that are hard to define.   We like to draw borders.   Some boundaries are easy, like rivers, mountains, or oceans.  This is my side and that is your side.   Boundaries help us simplify our world.  I don't have to fix everyone's problems, I just have to fix our problems.   I can take care of my side of the fence in my backyard without worrying about your side of the fence.

    But not all borders are easily identified.  Take, for example, the boundary of our solar system.   When I was a kid we were all taught that Pluto was the last of the 9 planets in our solar system, so to many, that defined the end of our solar system.  Simple, right?  Then Pluto got demoted from a planet to a dwarf planet.   Did the boundary move in to Neptune then? No.  We just changed our definition of a planet.   Some scientists thought the edge of our solar system goes out to the edge of the heliosphere, which is the point at which the solar wind from our sun meets the incoming wind from other stars and changes direction.    But even now there is speculation that this is not correct, as calculations suggest there may well be a Neptune-sized planet circling the sun far past this point that could be causing comets from the far regions of space to enter our system every several thousand years or so.


    So where does our solar system end and the rest of the galaxy begin?  No one really knows, and there does not seem to be a definable boundary to point to either.

   Another one of those invisible boundaries is the line between personal faith and public policy.   Where does my faith-sphere end and the public-sphere begin?   Does it end when I leave my church or Christian school?   What about in my house or on my front lawn?   What about my car's bumper?   What about my time at work?   Does it cease to exist in these realms?

    This question is as old as time itself.   In ancient Rome, the public was expected to show allegiance to the emperor by coming to the city square, lighting a candle, and saying "Caesar is god!"   Today some get their heads all out of joint over just the use of "God" in the Pledge of Allegiance, but here you were asked to say the leader is God himself!   Imagine today having to say "Obama is god!"  Would that be appalling to almost everyone in our country?   This was an issue for the early Christians, since this statement of faith went in direct opposition to their core beliefs.   But by their non-conformity, they placed themselves in a perilous position, as many considered them to be "traitors" to the empire and worthy of death.

     In Israel, the Jews also took issue with the worship of the emperor.   Ancient Roman coins bore the inscription "To the Divine Augustus," which called Augustus a god.   Jews took issue with this and often refused to use the Roman coin because it made them break the First Commandment (Thou shalt not have any gods before me).  When Jesus was asked if they should pay taxes to Caesar (this was before Rome gave them the new coins), Jesus asked for a coin and asked, "Whose image is this and whose coin is this?"   The crowd answered "Caesar's," to which Jesus said, "Give unto Caesar that which is Caesar's and to God that which is God's."   Meaning: Caesar minted the coin and therefore owns what is printed on it.   You did not mint it, therefore you are not held responsible for what it says.  We are to give to God and others what God demands.

    Does Jesus give us a definable border?   Not exactly.   In some ways I think Jesus is telling us "There are no clear answers here. You figure it out for yourselves".

    Today in our country no one is going to prison for their faith (yet) and no one is being thrown into a den of lions for anyone's viewing pleasure, but some are being forced to pay hundreds of thousands of dollars to a government for their non-conformity, for refusing to pay homage to gay marriage.  To these brave people, their faith was not left at the doorway of the church.   They were not the bigots others in the media have made them out to be.  Instead they have reached out to the gay community with love and respect to show them that their decision has nothing to do with "hating gays" but with not wanting to make a confession that was in direct conflict with their faith.   This invisible boundary is like the heliosphere mentioned earlier, where the solar wind pushing out from our sun meets the solar wind of other stars pushing in.   To them, making a gay-wedding cake was a confession about marriage that was not in keeping with their biblical faith that marriage is only between a man and a woman; it was no different than lighting a candle and saying "Caesar is god!" or minting the coin that calls the emperor a god.

    Today the external solar wind is strong and seeks to subdue those of faith with large fines, court-ordered "re-education classes," and regular government reviews of their progress.   These cases may well be the proving grounds for other cases that arise in the future.  If people of faith must subdue their beliefs whenever they conflict with the state, then there may be no boundary left from which to fight or resist.

    To be fair, there have been times when the faith wind blew too strong and extended farther than it should have.   This happens whenever a group, no matter how noble, comes to power as the majority.    We want to make our own little heaven on earth rather than wait for the real one that is to come.   We wrongly imprisoned homosexuals, put them into mental hospitals, and labelled them deviants so we didn't have to interact with them.   We ostracized divorced women and treated them shabbily, along with women who were victims of domestic abuse or rape.   These things were wrong.  We shut the doors to these people and cut off any conversation we might have had with them in the future.

    To some extent, faith is personal.  Some have it.  Some don't.   We just have to let them go.   Take, for example, Jesus' interaction with a "rich young ruler" who wants to go to heaven but doesn't want to sell all his possessions and follow Jesus as he was asked.   The gospel writer says, "at this the man's face fell and he walked away sad because he had great wealth."    Did Jesus order him to be beaten or punished?   No.  Did he call him names?  No.  Did he beg for the man to come back? No.  He simply let him go and kept the communication channels open in case the man wanted to change his mind in the future.
 
    We must be the same way in our dealings with those who have no faith or are opposed to what we believe.   Disagree, but love them anyway.

     In the end I think we must all agree that there is no defined boundary or border that says, "this far you can go and no further."  The "winds of change" will strengthen and weaken over time, the boundary will move, and someone will always be unhappy with where it is located.

Friday, January 15, 2016

Future of Semiconductors

   People love to extrapolate the future by taking what has happened in the last 50 years and using it
as the measuring stick for what will happen in the next 50 years, especially when it comes to computers and the digital age.  When we look back at the 1940's and the ENIAC computer, which was built using vacuum tubes and took up a whole room the size of a medium house, and compare that with the processing power we have in our pockets with our smart phones, we just can't believe what is yet to come.  We take that little piece of data and try to predict what the processing power of the future will be like in our phones, or on our wrists, or even in our brains in the NEXT 50 years.  Will computers have the processing power of our brains?  Will they become self-aware?

   Most of this has been built on a prediction by Gordon Moore, who in 1965 predicted that the number of transistors on a chip would double every 2 years, thus doubling our computation power every 2 years as well.   This has been called Moore's Law. For much of the latter half of the 20th century this "law" (which is more a prediction than a law) was very accurate and seemingly unstoppable, as every 2 years companies like Intel punched out chips with twice as many transistors on them.  This prediction also allowed companies like Intel to plan far into the future and develop highly complex chips long before there were processes capable of building them.
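   To see what a strict every-2-years doubling implies, here is a toy calculation.  The 1971 starting point (Intel's 4004, roughly 2,300 transistors) is a well-known historical figure; everything after that is just the doubling rule applied blindly, not real product data.

```python
def transistors(year, base_year=1971, base_count=2300, doubling_period=2):
    """Transistor count predicted by a strict doubling every 2 years."""
    doublings = (year - base_year) / doubling_period
    return base_count * 2 ** doublings

for year in (1971, 1981, 1991, 2001, 2011):
    print(year, f"{transistors(year):,.0f}")
```

   Run it and the numbers climb from a few thousand to a few billion in 40 years, which is roughly what the industry actually delivered.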

   Sadly, however, this "law" was recently broken.  For the first time in over 50 years, the law "stumbled": it took Intel 3 years to reach its next doubling instead of 2.   But even if it was a simple stumble, the question remains: can we double the number of transistors forever?

    The answer to that question is, of course, "No, of course not!"   Doubling the number of transistors on the same size die (or chip) requires making each transistor half its former area.   That means reducing both the width and the length by a factor of 0.7071 (the square root of 2 divided by 2).   Chip manufacturers have done this by reducing their process size from 80 nano-meters (nm) to 56nm to 40nm to 28nm to 20nm to 14nm and so on.
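   That 0.7071 shrink factor reproduces the node sequence almost exactly; here is a quick sketch, starting from the 80 nm node in the list above:

```python
import math

shrink = 1 / math.sqrt(2)  # ≈ 0.7071: halves transistor area each step

node = 80.0  # starting process size in nm
for _ in range(6):
    node *= shrink
    print(f"{node:.1f} nm")  # prints 56.6, 40.0, 28.3, 20.0, 14.1, 10.0
```

   Six shrinks take you from 80 nm to 10 nm, matching the named nodes to within rounding.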

   But how big is 1 nano-meter?   To give you some scale, the lattice spacing of crystalline silicon (which is what makes up most of a computer chip) is about 0.543 nano-meters.   This means that 1 nano-meter is only about 2 silicon atoms wide.   A chip using a 14nm process therefore has features approximately 14/0.543 ≈ 26 atoms wide on average, and a 10nm process only about 18 atoms wide.   Of course, the absolute smallest you could go is one atom, which leaves only a handful of "Moore's-Law doublings" (each shrink multiplying widths by 0.7071: roughly 18, 13, 9, 6, 5, 3, 2 atoms...) before you run out of atoms entirely, if you could go that far, but in all practicality you cannot.
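   A quick scale check of those atom counts, using the standard ~0.543 nm lattice spacing of crystalline silicon as the "atom width," following the text's approximation:

```python
SI_LATTICE_NM = 0.543  # silicon lattice spacing, in nanometers

for node_nm in (14, 10, 7, 5):
    atoms = node_nm / SI_LATTICE_NM
    print(f"{node_nm} nm process ≈ {atoms:.0f} atoms wide")
```

   By 5 nm a feature is only about 9 atoms across, which is why the limits below start to bite.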

   Most physicists believe that around 5nm is about as far as you can go, for the following reasons:

Quantum Tunneling 
      Around 5nm you begin to run into some quantum-physics issues where electrons can cross barriers they classically should not be able to pass.  This is called quantum tunneling, and it makes transistors "leak" electrons from one side of the transistor to the other.   Since the transistor's only purpose is to act as a tiny electrical switch, a switch that allows electrons to flow through even in the "off" position is not a good switch anymore.
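      The key point is how violently tunneling grows as barriers shrink.  A minimal sketch using the textbook rectangular-barrier estimate T ≈ exp(-2κL); the 1 eV barrier height here is an illustrative assumption, not a real device parameter:

```python
import math

HBAR = 1.054571817e-34   # reduced Planck constant (J*s)
M_E = 9.1093837015e-31   # electron rest mass (kg)
EV = 1.602176634e-19     # one electron-volt (J)

def tunneling_probability(barrier_nm, barrier_ev=1.0):
    """Rectangular-barrier estimate T ~ exp(-2*kappa*L)."""
    kappa = math.sqrt(2 * M_E * barrier_ev * EV) / HBAR  # decay constant (1/m)
    return math.exp(-2 * kappa * barrier_nm * 1e-9)

for width_nm in (5, 2, 1):
    print(f"{width_nm} nm barrier: T ≈ {tunneling_probability(width_nm):.1e}")
```

      Halving the barrier width doesn't halve the leakage, it multiplies it by many orders of magnitude, which is why a shrink that was harmless at 20nm becomes fatal near 5nm.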

Silicon Migration
      Electrons flowing through silicon act as tiny bullets or cannonballs: they occasionally "hit" an atom's nucleus (they do not actually collide, but their electric fields interact very strongly and they bounce off each other) and physically move the atoms around over time.   When transistors are large, a little atomic movement is acceptable and not even noticeable.  But in very small transistors that are only a few atoms wide, it can be disastrous and make the transistor stop working.  This means that chips will not be able to last as long as they once did.   For some applications this is not an issue, but for areas like automobiles and safety it will be a problem.

Defect Effect
     When chips are made, the small electrical connections are laid out using a process called photo-lithography.   A chip is not made with just a single photo-lithography step; instead the process is repeated hundreds of times to draw different parts of the design, ranging from the transistors to the intricate levels of metal connections that wire it all together.  The smaller the geometries of the devices being drawn, the more difficult it becomes to make sure things are adequately lined up, so they connect where they should connect and don't connect where they should not.  Each process step must line up with all the process steps before it.  If there are 100 steps, then the likelihood that a chip makes it through correctly is P to the 100th power, where P is the probability of each step aligning correctly with the silicon.   If you want a 90% yield you would need P = 0.9^(1/100) ≈ 99.895%.  As the geometries decrease this target becomes more and more difficult to hit, as the tolerances for aligning become increasingly tighter.   These alignment issues stem from 2 main sources: thermal vibration and physical vibration.

    Thermal vibration is caused by the heat of the chip.  The warmer the material, the more its atoms vibrate (heat is simply a measure of atomic vibration).  This vibration is not noticeable to the naked eye, but at the microscopic level it can look like a massive earthquake.  Since all matter naturally vibrates from thermal interactions, the tolerances become such that super-cooling will be necessary to limit these vibrations during the manufacturing process.

    Physical vibrations stem from factory-induced causes such as noise, floor vibrations, and earth vibrations (small tremors).  Even the smallest sound can sometimes be enough to affect production, so much so that many workers in semiconductor fabs use sign language to communicate rather than speaking to each other. To reduce vibration further may require fabrication to either be done in orbit above the earth or use superconducting magnets to let the fab hover above the earth.  Both of these technologies would be prohibitively expensive.
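    The yield arithmetic from the Defect Effect above is easy to check: if the overall yield is P raised to the number of steps, then the required per-step accuracy is the N-th root of the target yield.

```python
def required_step_accuracy(target_yield, steps):
    """Per-step alignment accuracy P such that P**steps equals the target yield."""
    return target_yield ** (1 / steps)

p = required_step_accuracy(0.90, 100)
print(f"per-step accuracy needed: {p:.5%}")  # ≈ 99.895% per step
```

    Note how unforgiving the exponent is: even 99% per-step accuracy over 100 steps would yield only 0.99^100 ≈ 37% good chips.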

   Of course, there have been some laboratory experiments showing transistors as small as 3nm using other substances like graphene, a sheet of carbon a single atom thick (carbon nanotubes are rolled-up sheets of graphene).  But these "experiments" only work on single transistors, with no real way to produce them in the billions and assemble them in such a way that they can be considered production worthy.

The Economic Factor
   This last fact is really what brings to light the most limiting factor of Moore's Law.  It's not just "can we do it," but "can we do it cost efficiently?"   Lots of smart people working in academia today are trying to solve the problem, but most of them are only looking at the physics of the problem and not so much the economics.   Sure, you can show a single 3nm transistor functioning under a microscope.   Now repeat that process 10 billion times and do it for less than $50.   This is where the rubber meets the road, and where most academic papers skid off into the ditch.

    What does this mean for the future of computing?

     It means things are going to change in a hurry.

Removing the "fat"
   Designs will have to go on a diet.  Many designs have built in "fat" (unused logic) for a variety of purposes.
  1. Over-sized buffers and memories which could be reduced 
  2. Redundant logic
  3. Extra modes of operation that very few customers use
  4. Test-mode or Debug logic which might be unnecessary if you are not changing the design much anymore
   Getting rid of this will be a first order of business.  Another will be tailoring the design to meet each customer's needs.  Today, one chip is made to meet multiple customers' needs, but in the future each customer may have to get their own special chip with only the features they request.

Hand Layout
   Much of our designs today are laid out (deciding where transistors go and how signals are routed to different logic on the chip) by computer programs.  These programs are good, but many times they can't see the forest for the trees and waste a lot of space on chips.  In many cases, humans can still do a better job on some of this logic by thinking creatively and knowing what is important and what is not.   In the 1980's and 1990's much of our processor chips were laid out this way, and in the future we may return to it again.

Re-use, Re-use, Re-use
   Many companies today are already moving toward the re-use model of technology.  They are developing Intellectual Property blocks (also known as "Hard-IP blocks") that can be assembled quickly and efficiently by engineers to reduce their R&D costs to the absolute minimum.  Coupled with the previous change of hand layout, this will help them pack more logic onto their chips, since these Hard-IP blocks can be packed into smaller spaces.

     The other advantage of this method is that development and validation times can be reduced as well, along with all the added costs of the engineering tools that go with them.   It is even conceivable that in the future, customers would be able to place orders on-line and have their designs automatically assembled and tested without any human effort at all.  This is possible through FPGA technology, which has been in use for almost 30 years.  FPGA stands for "Field Programmable Gate Array," an array of logic cells that can be re-programmed at any time to be whatever you want it to be.  Coupling this logic with the "Hard-IP blocks" would give customers a flexible platform from which they could design their own circuits and chips, and reduce the need for large R&D companies to build costly custom chips.  This may explain Intel's recent purchase of FPGA-maker Altera for almost 17 billion dollars.

Chip Stacking
     Some companies will look at going 3D in their chip designs by "stacking" chips on top of each other.   Memory chips are a good use of this: typically only 1 chip is being accessed at a time, and so some area could be saved.  Also, memories are not normally big generators of heat, so stacking should not create thermal issues.  But the same cannot be said for processors (general purpose and special purpose).   These typically generate gobs of heat, and stacking makes it difficult to remove this heat in an efficient manner.

      Another issue is how to evenly distribute the power and ground connections so that chips farther away from the board (where the power is delivered) do not incur unmanageable amounts of inductance and noise that would cause them to malfunction.  When a chip is connected to a board, it has many connections dedicated to this purpose, spread around the bottom of the chip and directly connected to the board.   Chips stacked on top of other chips will not have this luxury and will have a limited number of connections to use.

      But even if both of these issues could be solved, stacking doesn't really solve the main cost issue that Moore's Law implies.   Chip stacking simply provides denser packaging of the chips and cannot achieve Moore's Law results.  Let's say we have a memory chip that holds 8G bytes, costs $5 to produce, and sells for $10.   Under Moore's Law, in 2 years I will sell you 16G bytes in the same package for $10 with the same $5 profit.   But with chip stacking I need to sell you 2 chips (8G each) for $10, and now my profit is $0.  I would have to find ways to produce the chip cheaper (salaries, equipment, etc.) so that each chip costs $4 and I can eke out a $2 profit.  But what about the next generation, when I need to stack 4 chips in the same package and my cost becomes $16?   Can I reduce the cost of each chip to $2?   You see where this is going.
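      That arithmetic fits in a few lines; the $10 selling price and per-chip costs are the hypothetical numbers from the example above.

```python
PRICE = 10.0  # fixed selling price per package, from the example

def profit(chips_in_stack, cost_per_chip):
    """Profit on one package when stacking `chips_in_stack` chips."""
    return PRICE - chips_in_stack * cost_per_chip

print(profit(1, 5.0))  # one chip at $5: $5 profit (the starting point)
print(profit(2, 5.0))  # stack of two at $5 each: profit drops to $0
print(profit(2, 4.0))  # cut production cost to $4: $2 profit
print(profit(4, 4.0))  # stack of four at $4: a $6 loss unless cost falls again
```

      With a fixed selling price, per-chip cost must halve every generation just to stand still, which is exactly the treadmill Moore's Law used to pay for.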

Multi-bit computing
    For almost all of the history of computing, 1 bit could only represent 2 values: 0 or 1.  Computer chips traditionally use a high voltage of greater than 1 volt to represent a "1" (in the early days this value was 5 volts) and a voltage of 0 volts to represent a "0" (although there are some exceptions).  It has been shown that some technologies, such as memories, can use 4-value logic instead of binary logic and have a signal carry 4 values (0, 1, 2, 3).  Intel showed this back in the 1990's with a Flash memory chip capable of storing 2 bits inside a single memory cell.  It does this by storing different amounts of voltage to represent the different values (0=0v, 1=1.0v, 2=3.0v, 3=4.0v), effectively packing 2 bits in the space that previously stored only 1 bit of information.  This works well for memory cells but not so much for logic, as logic gates cannot measure a voltage to make decisions.

    But even this capability has its limits, as you would need to subdivide the voltage range into smaller and smaller steps to store more and more bits of information.  For example, storing 3 bits would require 8 voltage levels (0=0v, 1=0.5v, 2=1.0v, 3=1.5v, 4=2.0v, 5=2.5v, 6=3.0v, 7=3.5v), so the margin for error drops to 0.5 volts rather than 1 volt, and errors become more likely. 4 bits would require 16 voltage levels and drop the margin for error to about 0.25 volts, and that assumes you let your highest voltage go up to 4 volts, which is simply not the case today, as much logic now runs under 2 volts so that it does not consume too much power.
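    The shrinking margin can be sketched directly.  This assumes evenly spaced levels across a fixed 0-3.5v range, matching the 3-bit example above (the 2-bit Flash example uses unevenly spaced levels, which real devices also do):

```python
def level_spacing(bits, v_max=3.5):
    """Voltage gap between adjacent levels when 2**bits levels span 0..v_max."""
    levels = 2 ** bits
    return v_max / (levels - 1)

for bits in (1, 2, 3, 4):
    print(f"{bits} bit(s)/cell: {2**bits} levels, {level_spacing(bits):.3f} V apart")
```

    Each extra bit per cell doubles the number of levels and roughly halves the noise margin, so the error rate climbs quickly.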

Processing Re-partitioning
   Next, there may be a change in HOW we compute.  Our current computer model is over 70 years old, and it separates processing, memory, and IO.   In the future, these may be re-partitioned to put them together more efficiently and reduce the overhead of communicating between them.  Today, much of a chip's logic is dedicated explicitly to moving data quickly from one side of the chip to the other.   It is conceivable that by combining memory, IO, and processing into a small "neuro-processor" we could lessen the communication logic and pack the functions together more efficiently.  Of course, this would require a major rewriting of our OS and software layers, but it could be done.

Software Improvements
    Eventually all the hardware improvements will come to a grinding halt, and all future improvements will depend on the software.   More efficient languages will need to be developed that improve performance and memory usage, as many of today's languages (like C++) trade both of these away in order to improve development time.   Programs will need to be optimized (either by hand or by other tools) to reduce undesired waste in processing.  (Who knows! Assembly language may even come back into fashion once again!)

    But all of these solutions are just futile attempts to put off the inevitable.  Like death, in the end, we will reach a limit to what we can achieve in processor computing.

    The question is, however, WHEN WILL THAT HAPPEN?

My Prediction
    To me, I think we have only about 1-2 more levels of Moore's Law in terms of transistor reduction.  Companies will invest in hand layout of large parts of their designs to squeeze out another 10-20% of their die area, and after that we will see about 2 years of advancement from chip stacking and other compaction techniques.  Adding it all together, I would say we have only about 10 years at most before we see computer technology advancement come to a halt. After that, companies will continue to reduce their costs of production through Hard-IP, but the perceivable technological advancements to the end user will be minimal while costs slowly come down (like how early calculators cost $200 but can now be bought for less than $10.  They don't do anything new, but they sure are cheap!).