Write a letter of application for the above job.

Chapter 3




ABA-PGT, Inc. specializes in both external and internal spur and helical plastic gears, in addition to worm, face and bevel gears. Glenn Ellis, senior gear engineer at ABA-PGT, says, “Plastic gears have a place in the industry just as metal gears do. They both have their own marketplace, this being size, strength, weight and even the quantity required.”

There’s always a push for reducing cost and weight and that continues to increase the interest in plastic gearing, according to David Sheridan, senior design engineer at Ticona. “Certainly lately, with all the bells and whistles added to automobiles outside of the drive train, we’re seeing huge gains in automotive applications. Many plastic gear applications were once found only in luxury car models, but these features are now being integrated into standard models as well.” “We’re not back to pre-recession numbers but business is good,” adds John Winzeler, president of Winzeler Gear in Harwood Heights, Illinois. “Today there are more opportunities for plastic gears, especially where both sound and cost reduction are a factor. More and more, we’re getting interest in transmitting power, not just motion.”

A Tale of Two Segments

Plastic gears can be cut like their metal counterparts and machined to high precision with close tolerances. Cut plastic gears can also be used for developing prototypes. Injection molded plastic gears are fast and economical to produce, and can cost significantly less than machined, stamped or powder metal gears. When determining which type to use for a specific application, cost, quantity, quality and performance must all be weighed.

“Historically, molded gears have been considered lightweight, quiet, resistant to corrosion and usable without external lubrication. While they held these properties, plastic gears were also considered less accurate and flimsy. There has been significant progress on many fronts to address these disadvantages,” Ulrich at Thermotech says. “First, considerable work has been completed on developing engineering materials and understanding their mechanical properties. Secondly, computer programs have been developed, along with routine tooth proportion management, to leverage the ability to build molds without restriction to standard steel gear manufacturing tooling.” Kleiss adds, “Cut plastic gears can replace metal with plastic. This can be a solution to a specific problem if materials replacement is the answer. Molded plastic gears offer a few more opportunities. The gear design can be easily optimized for the specific application. We use a method we call shape forming to fit the needs of the transmission. The molded solution offers unique part characteristics outside of the gear itself that would be difficult — if not impossible — to build into cut gears.”

“High production is much easier on molded gears, which leads to a lower price point. With a quality mold, the repeatability is very high,” Ellis says. “Once the mold has been qualified, the future production runs should not have much variation. The potential quality of a cut gear is still higher than the molded gear. One of the things a designer must know is what quality is required for their application. Why request and pay for a quality higher than needed?”

In the end, both methods have advantages and disadvantages and it’s up to the customer to determine what plastic gear solution will best fit their specific application.

Overcoming the Limitations of Plastic

The limitations of plastic gearing remain fairly straightforward. “Quite simply, plastic gears are weaker than metal. They can’t operate at the same high temperatures. The most precise plastic gear will not be as accurate as the most precise metal gear, unless we start talking about micro-gears, which can be much smaller and more accurate than their cut metal counterparts,” Kleiss says. “I think a bright spot for plastics is PEEK (polyetheretherketone); it and its derivatives promise much improved performance at high temperatures and high loads. New compositions of nylons are hitting the market now with improved properties. I expect even further material improvements in the coming years.”

“The biggest limitation is strength, especially for higher RPM and horsepower requirements,” adds Billmeyer. “The future does hold some intriguing solutions with metal-plastic hybrids, or over-molded metal frameworks. Some of the new high-temperature combination plastics, such as nylon with phenylpolysulfone, look promising.”

Load capacity at temperature is the most significant limitation, according to Sheridan. “The automotive transmission is all metal for obvious reasons. More needs to be done in the future to address life expectancy and critical failures. Most plastic gears don’t run continuously, but I believe new materials will become available in the future that will address strength, wear and friction modifications.”

Plastic Gear Lubrication

How has lubrication evolved in plastic gearing? Plastic gear manufacturers believe that many factors affect the compatibility between lubricants and plastics.

“Plastic gears can be internally lubricated. Anything from silicone to Teflon can be molded into the material for self-lubrication. Most engineering plastics are inherently low friction. Unfilled nylon is a particularly good example. In addition, external lubricants can be used to good effect in specific applications,” Kleiss says.

“Some plastics do not require any lubrication because they are internally lubricated. However, even some of these will work better if a break-in grease is used. Some other plastics work best if they are well greased,” Ellis says. “Caution must be taken, as some plastics will react with certain lubricants.”

“External lubrication does not have to be a challenge. Start with the basic soap-based products and escalate from there. Care must be taken to ensure all of the ingredients in the lube are compatible with the molding material,” Ulrich at Thermotech says.

Spreading the Plastic Gospel

The AGMA Plastic Gearing Committee evaluates materials, design, rating, manufacturing, inspection and application of molded or cut-tooth plastic gearing. The committee recently met in Michigan to discuss test methods for plastic gears, the inspection of molded plastic gears and the identification of plastic gear failures.

“AGMA’s Plastics Gear Committee works on various documents to assist design engineers with the unique aspects of the design, manufacture and metrology of plastic gears. With the release of these documents, designers and manufacturers will have more uniform knowledge and understanding for the application of plastic materials into the gear industry,” McNamara at Thermotech says.

“I am not aware of any real focused effort on the part of AGMA to understand or further develop the potential for molded gears or to truly bracket the molded accuracy of a plastic gear,” Kleiss adds. “This would require a different kind of inspection analysis than has proved successful for cut metal gears. We use our own internal software for everything, from the design to the inspection and testing of molded gears and their transmissions.” Perhaps there are other ways to promote technological solutions in plastics. Education is one area that has proven successful for Winzeler Gear.

“Our Ultra Light Urban Vehicle project, in cooperation with Bradley University, continues to evolve,” Winzeler says. “This project has given us knowledge of power transmission in small vehicles and allows us the opportunity to present the benefits of plastic gearing from a weight, friction reduction and sound quality perspective. The project continues to grow, as well as the interest from transmission manufacturers.”

If meetings and educational collaborations can’t get the job done, Sheridan at Ticona turns to the tried and true initiatives of other areas of gear manufacturing. “Gear Expo is always a great venue to start discussions on the latest in plastic gear technology. We also hold in-house training sessions as well as webinars to provide as much assistance as we can to our customers now and in the future.”

An Alternative to the Alternative

For several issues, Gear Technology has considered plastics an alternative form of gear manufacturing, along with powder metals and forging. Can the argument be made that plastics are no longer on the outside of gear manufacturing looking in?

“It is actually becoming the other way around these days,” Kleiss says. “Metal is considered as a possible alternative manufacturing method, but only if every possible solution in plastic has been rejected. We promote performance as the key goal. Performance is cost-effective. Cost-effective means dollars saved and a better product.”

“Molded plastic gearing has considerable potential still. With new molding materials continuously entering the market, coupled with the ability to design and build highly accurate mold tooling and injection molding machines capable of producing and maintaining a consistent process shot after shot, injection molded gears are replacing machined gears at a higher rate than ever before. It still remains the most economical method of producing high volumes of gears,” Ulrich says.

“We are continually trying to research and develop higher temperature materials that behave more like conventional gear materials,” Winzeler says. “The challenge is that we see very little R&D activity outside of advanced product design. Most R&D has a timetable and there’s no extra time to experiment. Metal gears have had years of knowledge and once plastic gearing can attain the same levels of research and development, more and more plastic applications will become available to us.” [Gear Technology, March/April 2012]

POWER ENGINEERING

Text 7. STEAM TURBINE REHABS DELIVER GREATER OUTPUT AND LONGER LIFE

By R. Ray

A large chunk of America’s coal-fired power plants will be phased out in favor of cleaner-burning gas-fired generation. The transition to gas is being driven by low gas prices, stricter emission standards and a tough economy. But the vast majority of U.S. coal-fired generation will survive as power producers spend billions to bring these aging units into compliance with new emission limits on a wide range of pollutants. These old coal-fired units, upgraded with new pollution control technology, will remain online for another 20 years, providing the bulk of America’s power supplies for years to come.

Coal will remain the dominant source of power generation in the U.S. through 2040, according to the Department of Energy’s Annual Energy Outlook. Coal will account for 35 percent of the nation’s power in 2040, while gas will supply 30 percent, the report showed.

The problem is this: The average age of a coal-fired power plant in the U.S. is 38 years. To remain online, many of these plants will require a major steam turbine rehabilitation. Worn and tattered after decades of operation, many of the rotating components in a steam turbine must be replaced to extend the life of the unit.

The market for steam turbine rehabs is strong, as power producers spend billions on a wide range of pollution control equipment, including scrubbers and dry sorbent injection systems, to comply with stricter emission limits and preserve their coal-fired assets.

“If they decide this is a plant they want to keep online for 20 more years, they need to look at the rotating equipment and evaluate its condition,” said Kent Rockaway, manager of strategic marketing for Mitsubishi Power Systems Americas Inc. “The problems will often be with the blade path. The rotating and stationary blades can reach an end-of-life situation where the amount of erosion makes low-cost repairs no longer feasible. The objectives for most steam turbine rehabs are longer life, increased output and greater efficiency. To justify the expense, they need to see performance improvement. Their goal is to have the performance improvement pay for the upgrade. To increase the steam output, you would accommodate it with more efficient blading. If you have a 1970s vintage turbine, then going with a totally new blade path will get you overall heat rate improvement for the plant,” he said.

Mitsubishi is now rehabilitating two 40-year-old units of an unnamed plant at its Savannah Machinery Works facility, a service and manufacturing center for steam turbines, gas turbines and generators in Savannah, Ga. The project calls for upgrading each unit’s high-pressure/intermediate-pressure turbines and replacing some of the blading on the low-pressure (LP) turbine of each unit. The units have a capacity of 250 MW each. “The existing blades were showing signs of fatigue,” Rockaway said. “For the long-term safety and reliability of the units, a decision was made to replace them.” Unit 1 is scheduled to be installed this fall, while Unit 2 will be installed in the fall of 2014.

Adding emission control technology to a coal-fired power plant can cause a meaningful reduction in power production, as much as 20 percent in some cases. Much or all of that lost output can be recovered through efficiencies achieved with a steam turbine rehab. “By rehabbing, you can help offset the lost output,” Rockaway said.
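A minimal back-of-the-envelope sketch in Python of the trade-off described above; the 250 MW rating (the Savannah project units) and the 20 percent derate come from the article, while the 5 percent heat-rate improvement is purely an assumed, illustrative number:

```python
# Rough arithmetic behind "offsetting the lost output". The 250 MW rating
# and the 20% derate are the article's figures; the 5% heat-rate
# improvement is an assumed, illustrative number.
rated_mw = 250.0
derate = 0.20                             # "as much as 20 percent in some cases"
print(f"Output lost to emission controls: {rated_mw * derate:.0f} MW")  # 50 MW

heat_rate_gain = 0.05                     # assumed rehab efficiency gain
print(f"Recovered by the rehab: {rated_mw * heat_rate_gain:.1f} MW")    # 12.5 MW
```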

In April 2012, Alstom completed a steam turbine upgrade at Dominion Power’s two-unit, 1,863 MW North Anna Power Station in Louisa County, Va. The project entailed a new high-pressure (HP) rotor and two new low-pressure rotors for each nuclear generating unit, enabling the 140-ton units to handle increased steam output. The rotors installed were among the first produced at Alstom’s new turbomachine manufacturing facility in Chattanooga, Tenn.

Alstom also increased the blade length on North Anna’s LP rotors from 48 to 57 inches to maximize energy capture from the steam flow. “The advent of computational fluid dynamics has allowed us to accelerate our technology and blades,” said Charlie Athanasia, vice president of thermal services, North America. “That not only allows for more efficiency and more power output, it prolongs the life.”

Alstom’s retrofit work at North Anna units 1 and 2 resulted in a power output capacity increase of 60 MW per unit. Prior to the North Anna upgrade, Alstom completed a similar project at the Surry Power Station in southeastern Virginia. The uprate for Surry Unit 2 was completed in June 2011, and the Surry Unit 1 uprate was completed in December 2010. Prior to the uprates, each Surry unit was rated at 799 net MW; after the uprates, each is rated at 838 net MW.

While the efficiency increase is welcomed, Alstom’s focus during its steam turbine upgrades is not simply on ramping up the turbine, but rather on optimizing the entire shaft line and accessory system configuration, often including balance of plant. Each upgrade comes with an added cost, but Alstom has developed a cost solution for its upgrades: instead of conducting extensive turbine maintenance all at once, Alstom spreads out the implementation and cost of maintenance over a long period of time. “As we continue to advance technology, we look at component design options to prolong lifetime and thus outage periods,” Athanasia said. “In doing so, customers get much higher value and return on their maintenance costs.”

In addition to implementing a unique cost mechanism, Alstom is focusing much attention on lowering the costs of steam turbine upgrades in an effort to keep coal competitive with natural gas generation. Although the market for new steam turbines in conventional coal-fired generation is perceived as “suppressed,” the need for new gas turbines and steam turbines in combined-cycle plants is increasing.

Therefore, options for both new and retrofitted steam turbines must be considered. “Alstom is looking at how to better position steam turbine technologies, application and service capabilities and capacity for what we see as a coming surge in the gas turbine driven combined-cycle application,” Athanasia said.

Additionally, steam turbine upgrades at nuclear plants, such as those performed by Alstom at North Anna and Surry, allow nuclear facilities to produce even more megawatts. By undergoing a steam turbine upgrade, both nuclear and coal-fired facilities can gain significant improvements in efficiency and reliability [Power Engineering, January 2013].

COMPUTER ENGINEERING

Text 8. COMPUTER ENGINEERING: FEELING THE HEAT

By Ph. Ball

A laptop computer can double as an effective portable knee-warmer — pleasant in a cold office. But a bigger desktop machine needs a fan. A data centre as large as those used by Google needs a high-volume flow of cooling water. And with cutting-edge supercomputers, the trick is to keep them from melting. Current trends suggest that the next milestone in computing — an exaflop machine performing at 10¹⁸ flops — would consume hundreds of megawatts of power (equivalent to the output of a small nuclear plant) and turn virtually all of that energy into heat.
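The power figure above implies a striking energy cost per operation. A minimal sketch, where the 300 MW draw is one assumed reading of “hundreds of megawatts”:

```python
# Energy per operation implied by the exaflop projection above. The 300 MW
# draw is an assumed reading of "hundreds of megawatts".
flops = 1e18               # one exaflop: 10^18 operations per second
power_w = 300e6            # assumed power draw, W

print(f"Energy per operation: {power_w / flops:.1e} J/flop")  # 3.0e-10 J
# Virtually all of this energy becomes heat the cooling system must remove.
```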

Increasingly, heat looms as the single largest obstacle to computing’s continued advancement. The problem is fundamental: the smaller and more densely packed circuits become, the hotter they get. “The heat flux generated by today’s microprocessors is loosely comparable to that on the Sun’s surface,” says Suresh Garimella, a specialist in computer energy management at Purdue University in West Lafayette, Indiana. “But unlike the Sun, the devices must be cooled to temperatures lower than 100 °C to function properly,” he says.

To achieve that ever more difficult goal, engineers are exploring new ways of cooling — by pumping liquid coolants directly on to chips, for example, rather than circulating air around them. In a more radical vein, researchers are also seeking to reduce heat flux by exploring ways to package the circuitry. Instead of being confined to two-dimensional (2D) slabs, for example, circuits might be arrayed in 3D grids and networks inspired by the architecture of the brain, which manages to carry out massive computations without any special cooling gear. Perhaps future supercomputers will not even be powered by electrical currents borne along metal wires, but driven electrochemically by ions in the coolant flow.

Go with the flow

The problem is as old as computers. The first modern electronic computer — a 30-tonne machine called ENIAC that was built at the University of Pennsylvania in Philadelphia at the end of the Second World War — used 18,000 vacuum tubes, which had to be cooled by an array of fans. The transition to solid-state silicon devices in the 1960s offered some respite, but the need for cooling returned as device densities climbed. In the early 1990s, a shift from earlier “bipolar” transistor technology to complementary metal oxide semiconductor (CMOS) devices offered another respite by greatly reducing the power dissipation per device. But chip-level computing power doubles roughly every 18 months, as famously described by Moore’s Law, and this exponential growth has brought the problem to the fore yet again. Some of today’s microprocessors pump out heat from more than one billion transistors.
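As a quick illustration of the doubling rate just mentioned (the 15-year horizon is an arbitrary, illustrative choice):

```python
# The doubling rate described by Moore's Law, made concrete. The 15-year
# horizon is an arbitrary illustrative choice.
years = 15
doublings = years * 12 / 18              # one doubling every 18 months
print(f"Doublings in {years} years: {doublings:.0f}")        # 10
print(f"Growth in computing power: {2 ** doublings:.0f}x")   # ~1024x
```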

That is why computers have fans. Air that has been warmed by the chips carries some heat away by convection, but not enough: the fan circulates enough air to keep temperatures at a workable 75 °C or so. But a fan also consumes power — for a laptop, that is an extra drain on the battery. And fans alone are not always sufficient to cool the computer arrays used in data centres, many of which rely on heat exchangers that use liquid to cool the air flowing over the hot chips. Still larger machines demand more drastic measures. As Bruno Michel, manager of the advanced thermal packaging group at IBM in Switzerland, explains: “An advanced supercomputer would need a few cubic kilometres of air for cooling per day.” That simply is not practical, so computer engineers must resort to liquid cooling instead.
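Michel’s figure can be sanity-checked with a simple sensible-heat balance, Q = ρ·V̇·c_p·ΔT. In the sketch below, the 300 MW heat load and the 10 K coolant temperature rise are assumptions chosen for illustration; the fluid properties are standard room-temperature values:

```python
# Checking "a few cubic kilometres of air per day" with a sensible-heat
# balance, Q = rho * V_dot * c_p * dT. The 300 MW load and 10 K rise are
# illustrative assumptions, not figures from the article.
Q = 300e6                     # heat load to remove, W
dT = 10.0                     # allowed coolant temperature rise, K

cp_air, rho_air = 1005.0, 1.2         # J/(kg K), kg/m^3, near room temperature
cp_water, rho_water = 4186.0, 1000.0

vdot_air = Q / (rho_air * cp_air * dT)        # volumetric flow of air, m^3/s
vdot_water = Q / (rho_water * cp_water * dT)  # volumetric flow of water, m^3/s
print(f"Air:   {vdot_air * 86400 / 1e9:.1f} km^3/day")  # ~2.1 km^3/day
print(f"Water: {vdot_water:.1f} m^3/s")                 # ~7.2 m^3/s
```

Water’s far higher density and heat capacity are what make the liquid route practical where air is not.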

Water-cooled computers were commercially available as early as 1964, and several generations of mainframe computers built in the 1980s and 1990s were cooled by water. Today, non-aqueous, non-reactive liquid coolants such as fluorocarbons are sometimes used, coming into direct contact with the chips. These substances generally cool by boiling — they absorb heat and the vapour carries it away. Other systems involve liquid sprays or refrigeration of the circuitry.

SuperMUC, an IBM-built supercomputer housed at the Leibniz Supercomputing Centre, became operational in 2012. The 3-petaflop machine is one of the world’s most powerful supercomputers. It has a water-based cooling system, but the water is warm — around 45 °C. The water is pumped through microchannels carved into a customized copper heat sink above the central processing unit, which concentrates cooling in the parts of the system where it is most needed. The use of warm water may seem odd, but it consumes less energy than other cooling methods, because the hot water that emerges from the system requires less chilling before it is reintroduced. Using the hot-water outflow to heat nearby office buildings yields further energy savings.

Michel and his colleagues at IBM believe that flowing water could be used not just to extract heat, but also to provide power for the circuitry in the first place, by carrying dissolved ions that engage in electrochemical reactions at energy-harvesting electrodes. In effect, the coolant doubles as an electrolyte “fuel”. “The idea is not entirely new. It has been used for many years in thermal management of aircraft electronics, which are cooled by jet fuel,” says Yogendra Joshi, a mechanical engineer at the Georgia Institute of Technology in Atlanta.

Delivering electrical power with an electrolyte flow is already a burgeoning technology. In a type of fuel cell known as a redox flow battery, for example, two electrolyte solutions are pumped into an electrochemical cell, where they are kept separate by a membrane that ions can flow through. Electrons travel between ions in the solutions in a process known as a reduction-oxidation (redox) reaction — but they are forced to do so through an external circuit, generating energy that can be tapped to provide electrical power.

Salty logic

Redox-flow cells can be miniaturized using microfluidic technology, in which the fluid flows are confined to microscopic channels etched into a substrate such as silicon. At small scales, the liquids flow without mixing, so there is no need for a membrane to separate them. With this simplification, the devices are easier and cheaper to make, and they are compatible with silicon-chip technology.

Michel and his colleagues have begun to develop microfluidic cells for powering microprocessors, using a redox process based on vanadium ions. The electrolyte is pumped along microchannels that are 100-200 micrometres wide, similar to those used to carry coolant flows around some chips. Power is harvested at electrodes spaced along the channel, then distributed to individual devices by conventional metal wiring. The researchers unveiled their preliminary results in August, at a meeting of the International Society of Electrochemistry in Prague. But they remain some way from actually powering circuits this way. At present, the power density of microfluidic redox-flow cells is less than 1 watt per square centimetre at 1 volt — two or three orders of magnitude too low to drive today’s microprocessors. However, Michel believes that future processors will have significantly lower power requirements. And, he says, delivering power with microfluidic electrochemical cells should at least halve the power losses that occur with conventional metal wiring, which squanders around 50% of the electrical energy it carries as resistive heating.
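To put those numbers side by side, here is a minimal sketch; the 100 W/cm² chip requirement is a hypothetical figure consistent with the stated “two or three orders of magnitude” gap, and the delivery efficiencies merely restate the halving claim:

```python
# The power-density gap quoted above, in rough numbers. The chip figure is
# a hypothetical assumption consistent with the stated "two or three orders
# of magnitude" shortfall; the cell figure is from the article.
cell_w_per_cm2 = 1.0       # microfluidic redox-flow cell today, upper bound
chip_w_per_cm2 = 100.0     # assumed requirement of a current microprocessor
print(f"Shortfall: {chip_w_per_cm2 / cell_w_per_cm2:.0f}x")

# The wiring claim: conventional metal delivery loses ~50% to resistive
# heating; the electrochemical route should "at least halve" those losses.
wire_eff = 0.50            # from the article
fluidic_eff = 0.75         # assumed: losses halved from 50% to 25%
input_saved = 1 - wire_eff / fluidic_eff   # same net power, less input
print(f"Input power saved: {input_saved:.0%}")              # 33%
```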

Becoming brainier

Electrochemical powering could help to reduce processors’ heat dissipation, but there is a way to make a much bigger difference. Most of the heat from a chip is generated not by the switching of transistors, but by resistance in the wires that carry signals between them. The problem is not the logic, then, but the legwork. During the late 1990s, when transistors were about 250 nanometres across, “logic” and “legwork” accounted for roughly equal amounts of dissipation. But today, says Michel, “wire energy losses are now more than ten times larger than the transistor-switching energy losses”. In fact, he says, “because all components have to stay active while waiting for information to arrive, transport-induced power loss accounts for 99% of the total”.

This is why “the industry is moving away from traditional chip architectures, where communication losses drastically hinder performance and efficiency”, says Garimella. The solution seems obvious: reduce the distance over which information-carrying pulses of electricity must travel between logic operations. Transistors are already packed onto 2D chips about as densely as they can be. If they were stacked in 3D arrays instead, the energy lost in data transport could be cut drastically. The transport would also be faster. “If you reduce the linear dimension by a factor of ten, you save that much in wire-related energy, and your information arrives almost ten times faster,” says Michel. He foresees 3D supercomputers as small as sugar lumps.
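Michel’s scaling argument in miniature (the factor of ten is taken straight from his quote):

```python
# Michel's scaling argument: wire energy and signal delay both fall roughly
# linearly with the distance a signal travels between logic operations.
shrink = 10                              # linear-dimension reduction (his figure)
print(f"Wire energy saved: {1 - 1 / shrink:.0%}")   # 90%
print(f"Signal arrival: ~{shrink}x faster")
```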

What might 3D packaging look like? “We have to look for examples with better communication architecture,” Michel says. “The human brain is such an example.” The brain’s task is demanding: on average, neural tissue consumes roughly ten times more power per unit volume than other human tissues — an energy appetite unmatched even in an Olympic runner’s quadriceps. The brain accounts for just 2% of the body’s volume, but 20% of its total energy demand.

The brain is fantastically efficient compared to electronic computers. It can achieve five or six orders of magnitude more computation for each joule of energy consumed. Michel is convinced that the brain’s efficiency is due to its architecture: it is a 3D, hierarchical network of interconnections, not a grid-like arrangement of circuits.

Smart build

This helps the brain to make much more efficient use of space. In a computer, as much as 96% of the machine’s volume is used to transport heat, 1% is used for communication (transporting information) and just one-millionth of one per cent is used for transistors and other logic devices. By contrast, the brain uses only 10% of its volume for energy supply and thermal transport, 70% for communication and 20% for computation. Moreover, the brain’s memory and computational modules are positioned close together, so that data stored long ago can be recalled in an instant. In computers, by contrast, the two elements are usually separate. “Computers will continue to be poor at fast recall unless architectures become more memory-centric,” says Michel. Three-dimensional packaging would bring the respective elements into much closer proximity.

All of this suggests to Michel that, if computers are going to be packaged three-dimensionally, it would be worthwhile to emulate the brain’s hierarchical architecture. Such a hierarchy is implicit in some proposed 3D designs: individual microprocessor chips are stacked into towers and interconnected on circuit boards, and these, in turn, are stacked together, enabling vertical communication between them. The result is a kind of “orderly fractal” structure, a regular subdivision of space that looks the same at every scale.

Michel estimates that 3D packaging could, in principle, reduce computer volume by a factor of 1,000, and power consumption by a factor of 100, compared to current 2D architectures. But the introduction of brain-like, “bionic” packaging structures, he says, could cut power needs by another factor of 30 or so, and volumes by another factor of 1,000. The heat output would also drop: 1-petaflop computers, which are now large enough to occupy a small warehouse, could be shrunk to a volume of 10 litres.
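Multiplying out the reduction factors Michel cites shows how a warehouse-sized machine shrinks to roughly 10 litres; the 10,000 m³ starting volume is an assumed, illustrative figure:

```python
# Multiplying out the packaging gains cited above. The reduction factors are
# the article's; the warehouse volume is an assumed, illustrative figure.
vol_3d, vol_bionic = 1000, 1000   # volume reduction: 3D, then brain-like
pow_3d, pow_bionic = 100, 30      # power reduction: 3D, then brain-like

print(f"Total volume reduction: {vol_3d * vol_bionic:,}x")  # 1,000,000x
print(f"Total power reduction:  {pow_3d * pow_bionic:,}x")  # 3,000x

warehouse_m3 = 10_000             # assumed 1-petaflop footprint today, m^3
litres = warehouse_m3 / (vol_3d * vol_bionic) * 1000        # m^3 -> litres
print(f"Shrunk volume: {litres:.0f} litres")                # 10 litres
```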

If computer engineers aspire to the awesome heights of zettaflop computing (10²¹ flops), a brain-like structure will be necessary: with today’s architectures, such a device would be larger than Mount Everest and consume more power than the current total global demand. Only with a method such as bionic packaging does zettaflop computing seem remotely feasible. Michel and his colleagues believe that such innovations should enable computers to reach the efficiency — if not necessarily the capability — of the human brain by around 2060. That is something to think about [Nature, No. 492, December 2012].

AUTOMATION ENGINEERING

Text 9. ADVANCED CONTROL SYSTEMS FOR CEMENT PLANTS

By K. Nakase, T. Aizawa

Interest in the automation of the Japanese cement industry has increased enormously in recent years, following the diversification of customer needs, the intensification of international price competition, a slow-down in energy-saving gains and rises in employees’ wages. This report outlines Onoda’s plant modernisation, along with the results of recent developments such as real-time quality prediction systems, kiln controls, an optimising control system for ball mills and an on-line cement fineness control system.


