overshot the puck for sale


The Mylec Floor Hockey Puck is perfect for the indoor or outdoor hockey player. This lightweight puck is softer than a traditional roller hockey puck, making it safer for youngsters as well as for surrounding walls, vehicles, and other property. It can be used for floor hockey or roller hockey, and the bright fluorescent color makes the puck easy to see during play or when looking for one that overshot the net. The other advantage is that this floor hockey puck won't leave puck marks the way a traditional roller or ice hockey puck would. Weight: 2.0 oz


overshot the puck for sale

As hockey great Wayne Gretzky once said, the key to winning is skating first to where the puck will be next. Business success is similar. We all want to go where the greatest profits will be—but by the time most of us get there, the “puck” has moved on.

Consider IBM: Riding high in the early 1980s, the company clung to where the money had been—computer-system design/assembly—outsourcing its processor chips and operating system to Intel and Microsoft. A 10-year decline followed, as Intel and Microsoft—navigating to where the money would be—captured industry profits.

Stage 1: A Tight Fit. Early products’ functionalities do not yet meet key customers’ needs (e.g., the first mainframe computers weren’t powerful or fast enough). Companies compete on performance, making the highest-quality products for their most demanding—and profitable—customers.

Firms also push technological frontiers—developing and combining product components more efficiently, using interdependent, proprietary product architectures. Large, established, vertically integrated companies dominate, because all their units communicate under one roof. Products for end-users constitute the most profitable point on the value chain. Example:

Telephone companies still dominate in high-speed Internet access via phone lines—because too many unpredictable interdependencies exist between DSL providers and phone companies. By spanning the entire value chain, incumbent phone companies provide more reliable service.

Stage 2: Going to Pieces. As companies stretch to meet their most demanding customers’ needs, product performance overshoots mainstream consumers’ needs. Disruptive companies enter this less demanding market, displacing incumbents by quickly delivering flexible, customized, and cheaper products. Example:

In the 1990s, computer-industry overshooting shifted competitiveness to speed, convenience, and customization. Dell Computer’s well-timed business model—featuring outsourced subsystems, custom assembly, quick delivery, and competitive prices—garnered astounding success.

As an industry continues to mature, the most profitable point along the value chain shifts from end-use products to components and subsystems—which still have technologically interdependent internal architectures.

Rather than redesign everything, successful companies at this stage mix and match the best components from top suppliers to meet customers’ needs—creating interdependent links between components and subsystems.

When products’ architectures are interdependent and proprietary, competitors can’t easily copy them. Therefore, companies who control the interdependent links in their industry’s value chain dominate.

How to control those links? As your industry matures and fragments, don’t spin off or outsource asset-intensive businesses to companies that will create subsystems with progressively more interdependent architectures. Instead, flexibly couple and decouple operations. Learning from earlier mistakes, IBM now chops up its integrated value chain—selling its technology, components, and subsystems in the open market—and has created a high-end systems-integration business. Skating to value-chain points requiring complex, nonstandard integration, IBM now earns impressive profits.

When IBM decided to outsource its operating system and processor chips in the early 1980s, it was, or appeared to be, at the top of its game. It owned 70% of the entire mainframe market, controlled 95% of its profits, and had long dominated the industry. Yet disaster famously ensued, as Intel and Microsoft subsequently captured the lion’s share of the computer industry’s profits, and Big Blue entered a decade of decline.

It’s easy to look back and ask, “What were they thinking?” but, in truth, IBM’s decision fit well with prevailing orthodoxies, particularly with the idea that companies should outsource all but their core competencies—that is, sell off or outsource any function that another company could do better or cheaper than it could. Indeed, at the time, many observers hailed IBM’s move as a masterstroke of strategy, forward-looking and astute.

Of course it turned out not to be, but what lessons should we draw from IBM’s spectacular mistake? They’re far from clear. It’s easy to say, “Don’t outsource the thing that’s going to make lots of money next,” but existing models of industry competitiveness offer very little help in predicting where, in an industry’s value chain, future profitability will be most attractive. Executives and investors all wish they could be like Wayne Gretzky, with his uncanny ability to sense where the puck is about to go. But many companies discover that once they get to the place where the money is, there’s very little of it left to go around.

Over the past six years, we’ve been studying the evolution of industry value chains, and we’ve seen a recurring pattern that goes a long way toward explaining why companies so often make strategic errors in choosing where to focus their efforts and resources. Understanding the pattern helps answer some of the enduring questions that IBM’s leaders, and thousands of others before and since, grappled with: Where will attractive profits be earned in the value chain of the future? Under what circumstances will integrated corporations wield powerful competitive advantages? What changes in circumstances will shift competitive advantage to specialized, nonintegrated companies? What causes an industry to fragment? How can a dominant, integrated player determine what to outsource and what to hold on to as its industry begins to break into pieces? How can new entrants figure out where to target their efforts to maximize profitability?

The pattern we observed arises out of a key tenet of the concept of “disruptive technologies”—that the pace of technological progress generated by established players inevitably outstrips customers’ ability to absorb it, creating opportunity for upstarts to displace incumbents. This model has long been used to predict how an industry will change as customers’ needs are exceeded. (See the sidebar “The Disruptive Technologies Model.”) Building on that ground, this new theory provides a useful gauge for measuring not only where competition will arise under those circumstances but also where, in an industry’s shifting value chain, the money will be made in the future.

The disruptive technologies model contrasts the pace of technological progress with customers’ ability to use that progress. According to the model, there are two types of performance trajectories in every market. One trajectory, depicted by the shaded area, shows how much improvement in a product or service customers can absorb over time. The other trajectory, shown by the solid lines, depicts the improvement that innovators in the industry generate as they introduce new and enhanced products. Almost always, this second trajectory—the pace of technological innovation—outstrips the ability of customers in a given tier of the market to absorb it. This creates the potential for innovative companies to enter the lower tiers of the market with “disruptive technologies”—cheaper, simpler, more convenient products or services. Almost always, the leading companies are so absorbed with upmarket innovations addressed to their most sophisticated and profitable customers that they miss the disruptive innovations. Disruptive technologies have caused many of history’s best companies to plunge into crisis and fail.
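To make the two trajectories concrete, here is a minimal numerical sketch in Python. The growth rates and starting points are invented purely for illustration; the only claim is structural: when the pace of improvement exceeds the pace of absorption, performance eventually overshoots what a given customer tier can use.

```python
# Minimal sketch of the disruptive-technologies model, with invented numbers.
# performance(t): what the industry's products deliver in year t
# absorbable(t):  how much improvement a given customer tier can actually use

def performance(year, start=100.0, improvement_rate=0.40):
    """Pace of technological progress (hypothetical 40% per year)."""
    return start * (1 + improvement_rate) ** year

def absorbable(year, start=150.0, absorption_rate=0.10):
    """Pace at which a customer tier can absorb improvement (hypothetical 10% per year)."""
    return start * (1 + absorption_rate) ** year

# Find the first year in which the product overshoots this tier's needs.
for year in range(15):
    if performance(year) > absorbable(year):
        print(f"Overshoot in year {year}: product delivers {performance(year):.0f}, "
              f"tier can absorb only {absorbable(year):.0f}")
        break
```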

The implications of our theory will surprise many readers because, if we’re right, the money will not be made where most companies are headed, as they busily outsource exactly the things they should be holding on to and hold on to precisely the things they should unload. But we’ll get to that later…

Companies compete differently at different stages of a product’s evolution. In the early days, when a product’s functionality does not yet meet the needs of key customers, companies compete on the basis of product performance. Later, as the underlying technology improves and mainstream customers’ needs are met, companies are forced to compete on the basis of convenience, customization, price, and flexibility. These different bases of competition call for very different organizational structures at both the company and industry levels.

When products aren’t yet good enough for mainstream customers, competitive pressures force engineers to focus on wringing the best possible performance out of each succeeding product generation by developing and combining proprietary components in ever more efficient ways. They can’t assemble off-the-shelf components using standard interfaces because that would force them to back away from the frontier of what’s technologically possible. When the product is not good enough, backing off from the best you can do spells competitive trouble. To make the highest-performing products possible, then, companies typically need to adopt interdependent, proprietary product architectures.

During the early days of the computer industry, for example, when mainframes were not yet powerful or fast enough to satisfy mainstream customers’ needs, an independent contract manufacturer assembling machines from suppliers’ components could not have survived because the way the machines were designed depended on the way they were manufactured and vice versa. Nor could an independent supplier of operating systems, core memory, or logic circuitry have survived because these key subsystems had to be designed interdependently, too.

When the product isn’t good enough, in other words, being an integrated company is critical to success. As the most integrated company during the early era of the computer industry, IBM dominated its world. Ford and General Motors, as the most integrated automakers, dominated their industry during the era when cars were not good enough. For the same reasons, RCA, Xerox, AT&T, Alcoa, Standard Oil, and U.S. Steel dominated their industries at similar stages. Their products were based on the sorts of proprietary, interdependent value chains that are necessary when pushing the frontier of what is possible.

When a nonintegrated company tries to compete under these circumstances, it usually fails. Stitching together a system with other “partner” companies is extremely difficult when the subsystems and expertise those companies provide are interdependent. We could offer numerous historical examples, but there are plenty of illustrations from industries that are still emerging. In the late 1990s, for example, many nonintegrated companies attempted to offer high-speed DSL access to the Internet over phone lines operated by telephone companies. Most of these attempts failed. Many believe that low prices for DSL service that were rooted in regulatory peculiarities of the Telecommunications Act of 1996 are what drove the competitive local exchange carriers toward bankruptcy. This was only the proximate cause of their demise, however. The fundamental issue is that at this point in the industry’s evolution, DSL technology isn’t good enough yet, and there are, as a result, too many unpredictable interdependencies between what focused DSL providers need to do and what the telephone companies must do in response. The incumbent phone companies’ capacity to span the whole value chain has been a powerful advantage. They understand their own network architectures and can consequently offer service more quickly, with fewer concerns about the unintended consequences of reconfiguring their own central-office facilities. Regulatory mandates cannot decouple an industry at an interdependent interface. As long as DSL service is not good enough to satisfy most users, the integrated telephone companies will be able to provide better, more reliable service than nonintegrated competitors.

Product performance almost always improves beyond the needs of the general consumer, as companies stretch to meet the needs of the most demanding (and most profitable) customers. When technological progress overshoots what mainstream customers can make use of, companies that want to win the business of the overserved customers in less-demanding tiers of the market are forced to change the way they compete. They must bring more flexible products to market faster and customize their products to meet the needs of customers in ever smaller market niches.

To compete on these new dimensions, companies must design modular products, in which the interfaces between components and subsystems are clearly specified. Ultimately, these interfaces coalesce into industry standards. Modular architectures help companies introduce new products faster because subsystems can be improved without having to redesign everything. Companies can mix and match the best components from the best suppliers to respond to the specific needs of individual customers. Although standard interfaces invariably force compromises in system performance, competitors aiming at overserved customers can comfortably trade off some performance to achieve the benefits of speed and flexibility.

Once a modular architecture and the requisite industry standards have been defined, integration is no longer crucial to a company’s success. In fact, it becomes a competitive disadvantage in terms of speed, flexibility, and price, and the industry tends to dis-integrate as a consequence. The exhibit “The Dis-Integration of the Computer Industry” illustrates how this happened in that field. During its early decades, the dominant companies were integrated across most value-chain links because competitive conditions mandated integration. As the personal computer disrupted the industry, however, it was as if the industry got pushed through a bologna slicer. The dominant, integrated companies were displaced by specialists that competed in horizontal strata within the value chain.

The Dis-Integration of the Computer Industry: Mainframes and minicomputers were never good enough or fast enough or cheap enough to create a mass market and were therefore always the province of large, integrated players who built their machines from their own proprietary designs and components. The PC, though, very quickly became good enough for the average consumer, giving rise to an army of specialized players.

This shift explains why Dell Computer was so successful in the 1990s. Dell did not succeed because its products were better than those of competitors IBM, Compaq, and the like. Rather, overshooting triggered a shift in the basis of competition to speed, convenience, and customization, and Dell’s business model was a perfect match for that environment. Customers were delighted to buy computers with outsourced subsystems, custom-assembled to their own specifications and delivered incredibly quickly at competitive prices. This also explains how Cisco, with its disruptive router and its nonintegrated business model, bested more integrated competitors like Lucent in the market for telecommunications equipment.

The careful reader will have noticed that the interfaces between stages in the value chain are central to our argument—both to the forces that support integration in the early years of an industry and to those that ultimately pull an industry apart into component pieces. They’ll become even more important when we move on to profitability flows in a moment. So let’s look more closely at what we mean by “the interfaces between components and subsystems.”

Say a company is considering whether it’s feasible to procure a subsystem from a supplier or partner rather than make it in-house. Three conditions must be met. First, managers need to know what to specify—which attributes of the item they’re procuring are crucial and which are not. Second, they must be able to measure those attributes so they can verify that they have received what they need. Third, there can’t be any unpredictable interdependencies: They need to understand how the subsystem will perform with the other pieces of the system so that it can be used with predictable effect. These conditions—specifiability, verifiability, and predictability—are prerequisites to modular designs that enable companies to work efficiently with suppliers and partners. They constitute what economists would term “sufficient information” for an efficient market to emerge for a particular component or subsystem.

Typically, when product performance has become more than good enough, the technologies being used are mature enough for these conditions to be met—facilitating the decoupling of the value chain. It is when performance is not good enough that new technologies are used in new ways—and in those circumstances the conditions of specifiability, verifiability, and predictability often are not met. When sufficient information does not exist at an interface, managerial coordination will always trump market mechanisms, reinforcing the strength of integrated companies.

The evolving structure of the lending industry offers a good example of these forces at work. Integrated banks such as Chase and Deutsche Bank have powerful competitive advantages in the most complex tiers of the lending market. Integration is key to their ability to knit together huge, complex financing packages for sophisticated and demanding global customers. Decisions about whether and how much to lend cannot be made according to fixed formulas and measures; they can only be made through the intuition of experienced lending officers. The high-end bankers who create innovative, complex financial instruments for these customers play a similar role to engineers who push the technological envelope when product functionality is not good enough. In both cases, meeting the needs of the most demanding customers requires that all the constituent parts be under one roof, able to communicate through organizational rather than market mechanisms.

The simpler tiers of the lending market, on the other hand, are being disrupted by innovations in the way creditworthiness is established—specifically by credit-scoring technology and advances in asset securitization. In these tiers, lenders know and can measure precisely those attributes that determine the likelihood that a borrower will repay a loan. Verifiable information about borrowers—how long they have lived, where they live, how long they have worked, where they work, what their income is, and whether they’ve paid bills on time—is fed into powerful algorithms, which are used to automate lending decisions. Credit scoring took root in the 1960s in the lowest tier of the market, as department stores began to issue their own credit cards. Then, unfortunately for the big banks, the specialist horde of nonbank institutions moved inexorably upmarket in pursuit of profits—first to general consumer credit-card loans, then to automobile and mortgage loans, and now to small-business loans. True to form, the lending industry in these simpler tiers of the market has largely dis-integrated, as these specialist companies have emerged, each focusing on just a slice of added value.
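For readers who want to see the mechanics, here is an illustrative sketch of how such automated lending decisions work. The attributes mirror the ones listed above, but the point values and approval cutoff are invented; no actual lender’s scorecard is implied.

```python
# Illustrative credit-scoring sketch. The attributes mirror those named above
# (residence, employment, income, payment history); the point values and the
# approval cutoff are invented and do not represent any lender's actual model.

def credit_score(years_at_address, years_at_job, annual_income, late_payments):
    score = 0.0
    score += min(years_at_address, 10) * 5        # stability of residence
    score += min(years_at_job, 10) * 8            # stability of employment
    score += min(annual_income / 10_000, 15) * 4  # income, capped
    score -= late_payments * 20                   # payment history
    return score

def automated_decision(score, cutoff=100):
    # The disruption in a nutshell: a fixed rule replaces a loan officer's intuition.
    return "approve" if score >= cutoff else "refer to a loan officer or decline"

s = credit_score(years_at_address=6, years_at_job=4, annual_income=58_000, late_payments=0)
print(s, "->", automated_decision(s))   # 85.2 -> refer to a loan officer or decline
```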

Clearly, companies competing in an integrated market face very different challenges from those competing in a fragmented market—the ball game changes fundamentally once components become modular and customers’ thoughts turn to speed or convenience rather than functionality. Sources of profitability change as well. Our model can help managers, strategists, and investors assess how the power to grab profits is likely to shift in the future. The bedrock principle is this: Those who control the interdependent links in a value chain capture the most profit.

In periods when product functionality is not yet good enough, integrated companies that design and make end-use products typically make the most money, for two reasons. First, the interdependent, proprietary architecture of their products makes differentiation straightforward. Second, the high ratio of fixed to variable costs, which is inherent to the design and manufacture of architecturally interdependent products, creates steep economies of scale. Larger competitors can amortize high fixed costs over greater volume, giving them strong cost advantages over smaller competitors. Making highly differentiated products with strong cost advantages is a license to print money, and lots of it.
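The economics behind the second point are simple arithmetic. The sketch below uses invented cost figures only to show the shape of the effect: when fixed costs dominate, unit cost falls steeply with volume, handing the largest integrated competitor a wide cost advantage.

```python
# Invented cost structure for an architecturally interdependent product:
# heavy fixed cost (design, integration, capacity), small variable cost per unit.
FIXED_COST = 500_000_000       # hypothetical
VARIABLE_COST_PER_UNIT = 200   # hypothetical

def unit_cost(volume):
    return FIXED_COST / volume + VARIABLE_COST_PER_UNIT

for volume in (500_000, 1_000_000, 2_000_000):
    print(f"{volume:>9,} units -> unit cost ${unit_cost(volume):,.0f}")

# Output:
#   500,000 units -> unit cost $1,200
# 1,000,000 units -> unit cost $700
# 2,000,000 units -> unit cost $450
# The biggest competitor can undercut smaller rivals and still keep the fattest margin.
```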

Few industries are exempt from the forces of disruption and dis-integration, management education included. This industry is changing, and whether these changes prove to be a boon or a bane to leading schools of management depends on how they address these forces.

At the top of the heap, big-name business schools offer top-tier MBA students a premium, expensive product. It’s worth it: Graduates easily command starting salaries of $130,000 or more, and they’re in high demand. True to the model, the architecture of top-tier MBA programs is interdependent. Their premise is that future managers can’t understand marketing, for example, unless they study product development, and they can’t study product development without studying manufacturing, and so on. The programs are also integrated in the sense that the faculty members do everything, soup to nuts: conduct research, write cases and articles, design courses, and teach.

But the familiar pattern of overshooting and subsequent modularization is becoming evident. As graduates of these top-tier schools have become more expensive to employ, a significant portion of graduates now take jobs with consulting firms, investment banks, and high-tech start-ups. The established operating companies that historically had been major employers of MBAs increasingly find these graduates to be too expensive to fit into their salary structures and career paths.

Increasingly, those companies, and even some consulting firms, are opting to train their own. They hire people with bachelor’s or graduate technical degrees, then help them build managerial skills in formally organized institutions like Motorola University and GE’s Crotonville. Other companies have less-structured, but equally extensive, management-training programs. Last year, IBM spent more than $500 million on management training, for example, and announced it would begin selling management education programs to other companies’ executives as well.

Like most disruptions, these on-the-job training programs are probably not as good as what they’re replacing, at least in the way the elite schools define “good.” They’re certainly not as thorough, and their students aren’t, on average, quite as polished and prepared as the best MBA students. But like other disruptive businesses, they compete on different terms. On-the-job training programs are modular, custom-assembled courses whose content is tailored to specific issues the manager-students face. Managers will take a three-day course on strategic thinking, for instance, then use what they’ve learned to define a better strategy. It may not be as comprehensive as an MBA strategy class, but because it’s better targeted to the students’ immediate needs, it often proves more useful to them and to their employers. And in contrast to the leading schools’ integrated structure, on-the-job management education is dis-integrated. Hundreds of specialized companies develop materials; others design courses; still others produce and teach them.

How should the top management schools react? They could, of course, ignore the trend—there won’t be a dearth of MBA students anytime soon, and these institutions will likely survive in their current form for years. If they ignore the disruption, though, they will gradually lose influence because the vast majority of learning about management already occurs on the job. A second alternative is to skate to where the money is: to the design and assembly of customized courses for on-the-job training. This is tempting because the custom executive-education market is growing, but it would be hard to compete against the focused, flexible specialists already in that space.

A better idea is to skate to where the money will be—to become the “Intel Inside” of corporate-training programs. That means providing not just single components in the form of cases or articles but rather “subsystems,” modules with proprietary internal architectures. These would be predefined sets of cases, articles, news clips, and video materials from which well-defined insights can cumulatively but interdependently be built. Teaching notes that make explicit the connections within these materials—connections that historically have resided only in the intuition of the professors who wrote the materials—would make it simple for a larger set of less well-trained instructors in a corporate setting to do a great job teaching powerful concepts. Companies that design courses could mix and match such materials to address students’ needs.

Always, disruption facilitates new waves of growth in an industry because it enables more people to buy and consume. If our model is right, future profits in the growing portions of this industry will come not from the design and assembly of courses, anyway, but from the development of the subsystems that make up those courses. That’s where the steep scale economies and differentiated materials should reside. If the leading management schools worked in this way to facilitate their own disruption, they would find they can continue to teach MBA students within their conventional model for the foreseeable future, even as they participate in the growth of the total management education industry—and continue to enjoy much of the profit as well.

Hence IBM, as the most integrated competitor in the mainframe computer industry, made 95% of the industry’s profits from just a 70% market share. And from the 1950s through the 1970s, General Motors garnered 80% of the profits from about 55% of the U.S. auto market. Most of IBM’s and GM’s suppliers, by contrast, survived on subsistence profits year after year.

But when the large integrated players overshoot what their mainstream customers can use, the tables begin to turn. Disruptive competitors begin to move upmarket, and the power to make money shifts away from companies that design and assemble the end-use product toward the back end of the value chain to those companies that supply subsystems with internal architectures that are still technologically interdependent.

A good way to visualize this is to imagine an engineer employed at Compaq whose boss just told her to design a desktop computer better than Dell’s, IBM’s, or Hewlett-Packard’s. How would she do it? When designing and assembling a modular product, your competitors can replicate anything you can do very quickly. And because most of the costs in an outsourcing-intensive business model are variable rather than fixed, there are minimal economies of scale, so that large and small competitors have similar costs. Making an undifferentiated product at undifferentiated costs is a recipe for earning undifferentiated profits.

Overshooting at the system level often throws the subsystem suppliers back to a stage where their product is not good enough for what the system assembler needs. Competitive forces consequently compel the subsystem suppliers to create architectures that are increasingly interdependent and proprietary as they try to push the bleeding edge of performance. They have to do this to win the business of their immediate customers, who are the designers and manufacturers of modular products. Hence, as a natural and inescapable result of the shift in industry structure, the place where companies are used to making a lot of money—the end-user stage—becomes unlikely to be the place where money will be made in the future. And, conversely, the places where attractive profits were rarely made in the past—components and subsystems—often become highly profitable.

The exhibit “Where the Money Went in the PC Industry” illustrates how this worked in the desktop computer market in the 1990s. Initially, money flowed from the customer to the companies that designed and assembled computers; but as the decade progressed, less and less of it stopped there as profit. Quite a bit of this money flowed over to operating system maker Microsoft and lodged there. Another chunk flowed to processor manufacturer Intel and stopped there. Money flowed to the DRAM chip makers such as Samsung and Micron Technology as well, but not much of it stopped there. It flowed through them and accumulated instead at companies like Applied Materials, which supplied the chip-manufacturing equipment that the DRAM makers used. Similarly, money flowed right through the assemblers of disk drives such as Quantum and lodged at the stage where heads and disks were made.

Where the Money Went in the PC Industry: As PCs became good enough for mainstream users, profits flowed from the customers through the assemblers (the IBMs and Compaqs of the world) to lodge in the component makers—the operating system maker (Microsoft), the processor maker (Intel), and initially to the memory chip makers and disk drive manufacturers. But as DRAM chips and drives became good enough for the assemblers, the money flowed even further up the value chain to DRAM equipment makers and head and disk suppliers.

What’s different about the places where the money collected and those where it didn’t? For most of this period, profits lodged with the products that were the ones not yet good enough for what their immediate customers needed. The architectures of those products therefore tended to be interdependent and proprietary. Companies in the blue boxes could only hang onto subsistence profits because the functionality of their products tended to be more than good enough, and so their architectures had become modular.

Consider the DRAM industry. Because the architecture of their chips was modular, DRAM makers could not be satisfied with even the very best manufacturing equipment available. To succeed, DRAM producers needed to make their products at ever higher yields and ever lower costs. This rendered the functionality of the equipment that Applied Materials and other such companies made not good enough. As a consequence, the architecture of this equipment became interdependent and proprietary, as the equipment makers strove to inch closer to the performance their customers needed.

Once an industry starts to fragment, a very predictable thing happens to companies that design and assemble modular products. They face investor pressure to improve their return on assets but find that because they can’t differentiate their products or make them at a lower cost than competitors, they can’t improve the numerator of their ROA ratio. So they shrink the denominator; they sell off asset-intensive units that design and manufacture components to companies that see in those same operations the opportunity to create subsystems whose architectures are progressively more interdependent—thus improving the numerator of their ROA ratio. Lucent’s recent spin-offs of its component and manufacturing operations are an example. This seems perfectly logical and necessary, given the increasingly modular character of many of Lucent’s systems. But with perfect predictability, this pressure from Wall Street to boost ROA forces companies to skate away from the place where the money will be made in the future.
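The ROA arithmetic driving this behavior is worth spelling out. The figures below are invented; assuming for simplicity that the divested unit contributed little to current earnings, shedding its assets lifts the ratio even though no new profit has been created.

```python
# Return on assets = net income / total assets. Invented figures throughout.
# Assume, for simplicity, that the divested component unit was roughly break-even,
# so net income is unchanged by the spin-off; only the asset base shrinks.

net_income            = 1_000_000_000   # hypothetical
total_assets          = 20_000_000_000  # hypothetical
component_unit_assets = 8_000_000_000   # asset-intensive operations to be spun off

roa_before = net_income / total_assets
roa_after  = net_income / (total_assets - component_unit_assets)

print(f"ROA before spin-off: {roa_before:.1%}")  # 5.0%
print(f"ROA after spin-off:  {roa_after:.1%}")   # 8.3%
# The ratio looks better, yet the company has just skated away from the stage of
# the value chain where the interdependent architectures (and the profits) will be.
```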

This scenario could soon play out in one of IBM’s businesses. Through the 1990s, the capacity of the 2.5-inch disk drives used in notebook computers tended to be inadequate. True to form, their architectures were interdependent, and the design and assembly stage was very profitable. As the leading manufacturer, IBM enjoyed 40% gross margins. Now, drive capacity is becoming more than good enough for notebook computer makers, presaging the decline of what has been a beautiful business.

If IBM plays its cards right, however, it is actually in a very attractive position. As the most integrated drive maker, it can skate to where the money will be by using the advent of modularity to detach its head and disk operations from its disk drive design-and-assembly business. If IBM begins to sell its components aggressively to competing disk drive makers, it can continue to enjoy the most attractive profit levels in the industry. There was a time IBM could fight this particular war and win. Now, a better strategy is to sell bullets to the combatants.

IBM has already made similar moves in its computer business through its decisions to chop up its integrated value chain and aggressively sell its technology, components, and subsystems in the open market. Simultaneously, it has created a consulting and systems integration business at the high end and de-emphasized the design and assembly of computers. As IBM has skated to those points in the value chain where complex, nonstandard integration needs to occur, the result has been a remarkable—and remarkably profitable—transformation of a huge company. To the extent that an integrated company like IBM can flexibly couple and decouple its operations, rather than irrevocably sell off operations, it has greater potential than a nonintegrated company to thrive from one cycle to the next.

We believe this model can help managers, strategists, and investors in a wide variety of industries see into the future with greater clarity than the traditional tools of historical data analysis have allowed. When we consider, for example, where the money in the automobile industry will go in the future, the car companies seem to be falling into exactly the same trap that IBM did some 15 years ago.

While automobiles often used to rust or fall apart mechanically well before their owners were ready to part with them, auto quality now has overshot what most customers want or need. In fact, the most reliable cars usually go out of style long before they wear out. As a result, the basis of competition is changing. Whereas it used to take six years to design a new car model, today it takes less than two. Car companies routinely compete by customizing features to the whims of smaller and smaller market niches. In the 1960s, it was not unusual for a model to sell a million units a year. Today, the market is far more fragmented: If you sell 200,000 units of a particular model, you’re doing fine. Some makers now promise that you can walk into a dealership, custom order a car exactly to your desired configuration, and have it delivered in five days—roughly the response time that Dell Computer offers.

To compete in this way, automakers are adopting modular architectures for their mainstream models. Rather than knitting together individual components from diverse suppliers, they’re procuring subsystems from fewer tier-one suppliers. The architecture within each subsystem—braking, steering, chassis, and the like—is becoming progressively more interdependent as these suppliers work to meet the auto assemblers’ performance and cost demands. Inevitably, the subsystems’ external interfaces are becoming more modular because the economics of using the same subsystem in several car models more than compensates for any compromises in performance that might result.

As the basis of competition has shifted, the vertically integrated automakers have had to break up their value chains so they can more quickly and flexibly incorporate the best components from the best suppliers. GM subsequently spun out its component operations into a separate company, Delphi Automotive Systems, and Ford has spun out its component operations as Visteon. Thus, the same thing is happening to the auto industry that happened to computers: Overshooting has precipitated a change in the basis of competition, which has precipitated a change in architecture, which has forced the dominant, integrated firms to dis-integrate.

To become fast and flexible, IBM’s PC business outsourced its microprocessor to Intel and its operating system to Microsoft. But in the process, IBM hung onto where the money had been—the design and assembly of the computer system—and put into business the two companies that were positioned where the money would be. GM and Ford, with the encouragement of their investment bankers, have just done exactly the same thing. They have spun out the pieces of the value chain where the money will be in order to stay where the money has been.

Ford and GM had no choice but to decouple their component operations from their design-and-assembly businesses. Indeed, they gave their shareholders the option of owning one or both. But rather than an irreversible divestiture, they might have taken a page from IBM’s recent forays into opportunistic decoupling, ignored the siren song of investment bankers, and found a way not to shed those asset- and scale-intensive businesses where the numerator of the ROA ratio will likely be more attractive in the future. This will be especially true if shifts in customer demand mandate some sort of reintegration in the future.

Managers of the slimmed-down automakers can still do well, but they’ll need to dramatically change the way they do business in the design-and-assembly stage. They need to do in their industry what Dell did in the computer industry—become consummately fast, flexible, and convenient. Overshooting changes the game. If GM and Ford can play this new game better than competitors, they can still prosper, much as Dell did in the 1990s against competitors who hadn’t mastered the new rules as effectively.

The implications of these findings are clear. The power to capture attractive profits will shift in the value chain to those activities where the immediate customer is not yet satisfied with the functionality of available products. It is in these stages that complex, interdependent integration occurs—activities that create steeper economies of scale and greater opportunities for differentiation. The power will shift away from activities where the immediate customer is more than satisfied because it is there that standard, modular integration occurs. In most markets, this power shift occurs tier by tier in a way that is quite predictable.

Executives whose companies are currently making lots of money ought not to wonder whether the power to earn attractive profits will shift, but when. If they watch for the signals, quite possibly they can prosper in all cycles, rather than in only one.

overshot the puck for sale

Penguins fans have been able to watch players considered to be among the best on the planet almost constantly since 1984, when Mario Lemieux arrived in town.

Lemieux crossed over with and then gave way to Jaromir Jagr, before coming out of retirement to carry the torch again until Sidney Crosby and Evgeni Malkin arrived on the scene in 2005 and 2006 respectively.

Over that span, there have been occasions when we’ve all taken for granted what we’ve had the luxury of watching on a nightly basis for the past 30-plus years, and we need to be reminded.

Malkin took it upon himself to do just that on Saturday night against the Edmonton Oilers. He and the Penguins spotted the NHL’s last place team a 2-0 first period lead, before he single-handedly eliminated it, giving his team a chance to earn a point in a 3-2 shootout loss.

It took just 48 seconds of period two for him to score one of the finest goals of his career and that is saying something because he has plenty to choose from.

The Oilers were breaking out of their own zone and defenseman Oscar Klefbom was carrying the puck across the blue line. Malkin pounced, lifted his stick and quickly curled and took the puck down the right wing where defenseman Mark Fayne was waiting for him.

One spin-o-rama and a blistering rising backhand shot that caught the corner over goalie Anders Nilsson’s left shoulder later, the Penguins pulled within one.

“It’s up there. The one against Carolina comes to mind when I think about that but the whole steal to create that was pretty impressive too,” Crosby said of Malkin’s effort. “It was a big goal to get us going and to get us in the game.”

He’d strike again just 3:19 later during a power play. Crosby won a faceoff in the right circle and flipped the puck to Malkin, where he made a quick give-and-go play with Kris Letang. When Letang slid the puck back to him he didn’t release a shot immediately, but hesitated just enough to bring defenseman Brandon Davidson down to a knee. Davidson created a perfect screen in the lane and Malkin lashed a slap shot past Nilsson that beat him to the far post -- tie game.

Malkin, who finished the game with 11 shots on goal and 14 total shot attempts, had multiple chances to notch the hat trick, but he couldn’t get another puck past Nilsson.

Malkin and Crosby had their opportunities to win the game in overtime. Each made dazzling plays during the first two minutes of the extra period, but neither was able to capitalize.

The point in the standings and the reminders of what each player is capable of doing were very nice, but it was all marred by another slow start -- something that has plagued them since the earliest moments of the season.

“Every game we start slow. I don’t know why. The coach talks before the game -- the first 10 minutes is very important for us, but we started slow, a couple of mistakes, a penalty and it’s zero-two after the first. It’s not what we want. We talked after the first here in the locker room and started to play better. It’s not our game when we start slow.”

overshot the puck for sale

One argument is that flushing the grouphead after a shot can remove any leftover puck residue from the previous shot... but if you use a puck screen, a flush for that purpose would be unnecessary. Also, flushing the Bianca before or after every shot offers no benefit to the extraction. Many owners who have watched too many E61 espresso machine videos mistakenly believe they need to flush the Bianca's grouphead prior to their shot. As a result, there are multiple YouTube videos of people proudly demonstrating their Bianca and doing a cooling flush before mounting the portafilter, which only serves to mislead other Bianca owners.

The confusion arises because the Bianca's grouphead bears an uncanny resemblance to many of the earlier-model E61 groups, when in reality there is a huge technology gap between the two. Flushing the grouphead was, and still is, a common practice for owners of heat exchanger espresso machines because their groups have a tendency to overheat when idling. For people familiar with a heat exchanger or lever machine, or another E61 group that likes to overheat, flushing the group is a popular trick to lower the grouphead temperature prior to pulling the shot. Another trick sometimes employed is to mount a cold portafilter, which helps to draw excess heat from the group. However... the Lelit Bianca is a state-of-the-art dual boiler machine with dual PIDs, which hold the group and brew temperatures accurate to within a degree or two. NET: Doing a cooling flush with the Bianca is not only unnecessary, it actually serves to defeat the benefit of the technology the owners paid for.

overshot the puck for sale

So you’ve got your shuffleboard table, and now it’s time to get your hands on a quality set of pucks (woohoo!). You can’t just go out and buy any old set, though. Like shuffleboard tables, puck quality varies depending on the manufacturer, so this post will help you understand which puck is right for your table. We want you to be able to enjoy the game to its full extent!

The best shuffleboard weights are the right size for your table and made from heavy chrome-plated steel with removable plastic caps. These will last for years. But some less expensive models could work too.

We studied the best shuffleboard pucks on the market and analyzed each in detail to give you the necessary information to make a buying decision. We have also created a buyer’s guide that specifies the features you should seek in your pucks.

High-quality pucks are built to better standards; they are more durable, have replaceable parts, and are less likely to get stuck on wax. This means they both last longer and are more functional, which makes them more enjoyable to use.

Recreational pucks are ideal for players who are new to the game and lack the skill to use larger pucks. They are also more suitable for the smaller shuffleboard tables that you likely have.

As with all things, there is a sweet spot for shuffleboard weights. Too light, and you’ll constantly be overshooting, and your opponents will easily knock your pucks around. Too heavy, and the pucks become harder to control.

According to the shuffleboard rules dictated by the Shuffleboard Federation, regulation-sized pucks must be within the 310 g-355 g range to be eligible for tournament play.

Nearly all shuffleboard table pucks are sold in sets of eight, with four going to each team — enough for matches. But you may come across pucks that aren’t sold in sets of eight.

Beveled edge pucks are often superior to flat-bottom pucks. This is because they don’t cause the wax to accumulate as they travel down the length of the table. Instead, they smoothly glide over the shuffle wax. However, flat-bottom pucks often behave rather differently.

Due to their sharp bottom edges, they do not smoothly glide over wax. Instead, they plow through the wax, leaving a trail of wax devastation in their wake, which requires smoothing.

While shuffleboard puck tops are not particularly prone to damage, they can show signs of serious wear if used regularly or mistreated. When this point comes, you may feel the need to replace your shuffleboard pucks.

However, if you have selected a model with removable screw caps, you need not buy another set. Instead, simply unscrew the plastic cap and replace it with a new one.

Colors give you a clear indication of the field of play and which team is winning. They must be distinguishable. The most common colors you find among pucks are red and blue, but many more colors are available.

While having additional features with your purchase is a big selling point, it’s no good if it comes at the cost of the quality of the tabletop shuffleboard pucks. Here are a few extras to look out for:

Storage case: A case helps protect your pucks from moisture and excessive wear. It is also a means of carrying your pucks and helps make you look like a seasoned player!

Shuffleboard wax: You can never have enough shuffleboard wax, and select pucks come with a can or two. It is a great addition to any shuffleboard puck set.

If you are after a set of regulation-sized table shuffleboard pucks, you won’t get any better than these pucks by American. They fall within the regulation weight of 310 g-355 g and are made from heavy chrome-plated steel. This ensures they have the resilience to last for many years, even with harsh impacts.

The plastic-topped caps are also easy to screw off and replaceable, which is a big advantage. If they show signs of serious wear, you need only replace them rather than the whole puck itself.

If quality is your only concern, the Zieglerworld pucks are the ones for you. They are the most expensive pucks on this list, but with this price comes superior quality. Besides being beveled and well-weighted, the screw caps are replaceable. This is a sure-fire sign they are premium pucks as cheaper models are snapped on or glued on, which means you can’t easily replace them.

There are also plenty of colors to choose from. So if you want to show a bit of flair on the table with your kit, these pucks will surely achieve that. Besides the gold listing pictured above, there are 21 other colors you can choose from.

If money is tight and you want inexpensive shuffleboard pucks, we recommend this set by TORPSPORTS. They are less than half the cost of the American brand and fairly close in quality. They are also made from heavy chrome-plated steel and have replaceable screw caps, which are both very sought-after features.

Again, this set comes with no accessories, which is a bit of a shame. And remember, these pucks are recreational size, not tournament size, so ideally, they are best used on tables that are around 15ft or less. That being said, they are very high-quality for cheap shuffleboard pucks, so we heavily endorse them.

Up next is the YDDS shuffleboard puck set, another regulation-size bundle. While these pucks are undoubtedly high quality, with chrome-plated steel like the rest, their greatest standout feature is the accompanying extras.

A mini dustpan, brush, and two 14 oz cans of shuffleboard wax are included with the pucks. While the dustpan and brush are clearly inexpensive, the wax is worth a fair sum of money, so you’re getting a great bang for your buck when you purchase this bundle.

We also like that you can purchase a recreational-sized set, perfect for players with smaller shuffleboard tables. Both versions have medium and high-speed pucks for you to choose from.

Yet unlike our top budget pick, the TORPSPORT set, these pucks are regulation-size, making them a better choice for longer tables. They are also a little brighter than others, making it easier to distinguish teams, although we suspect it shouldn’t be too difficult to tell colors apart anyway.

However, the plastic caps appear lower quality than we like and do not seem removable. We also would rather the shuffleboard pucks have beveled edges. Without this, the pucks snowplow the wax leaving wax streaks that no one wants. We recommend going for one of the more expensive sets if you have the money to spend.

The Hathaway shuffleboard puck set is the only set on this list to come with a carry case, so they possess a distinct advantage in this department. As for the pucks themselves, we find they are of decent quality. The edges are beveled, and the plastic caps provide good grip when you throw them.

They list the pucks as regulation size, which is 2-5/16 inches, when in fact they are 2-1/8 inches, which is recreational size. This is a pretty big oversight on their part but doesn’t affect the actual quality of the pucks themselves. Make sure to consider this set only if you have a small shuffleboard table.

We hope you found this guide useful and learned what you should look out for when shopping around for shuffleboard pucks. In our opinion, the best shuffleboard pucks right now are those by the American brand. They ever-so-slightly edge out Zieglerworld’s pucks simply because they are a little bit cheaper, even though they’re both expensive models.

So if you have the money to spend, those are the pucks that we recommend. However, if you are looking for a more affordable option, we think the TORPSPORT pucks are probably your best choice. That is provided you have a small shuffleboard table. If you are using a full-size table, it is best to go for the Billiard Evolution pucks instead.

There are two shuffleboard puck sizes: 2-1/8 inch and 2-5/16 inch. The smaller 2-1/8 inch pucks suit recreational players who play on smaller shuffleboard tables.

On the other hand, the larger 2-5/16 inch pucks are for skilled players who play on the largest shuffleboard tables. This size is standard and meets tournament regulations.

Sellers rarely list the height of shuffleboard pucks, given that the diameter is the property that changes rather than height. Most pucks measure around 24 mm tall.

This depends on what kind of player you are. If you have a tendency to overshoot your pucks, a lighter set is best for you. Conversely, if you often come up short, invest in a heavier set instead. Just remember that your pucks need to be in the 310 g to 355 g range if you want to compete.

The slowest shuffleboard wax you can buy is Sun-Glo Speed 7 wax. Each type of shuffleboard wax has a rating from 1 to 7, with 7 being the slowest wax you can buy. Slow shuffleboard wax is great for smaller tables.

“Sand” is another term for shuffleboard wax. It helps pucks travel faster down the table by acting as a lubricant that minimizes friction. This makes gameplay smoother and far more enjoyable.

Unlike shuffleboard tables, shuffleboard pucks require very little maintenance. To clean your pucks, wipe them down with a clean cloth to remove any dirt or wax that they may have picked up.

No, shuffleboard pucks vary in many characteristics such as size, weight, and durability. Therefore, you should take your time when researching to ensure you select high-quality pucks.

overshot the puck for sale

The fan celebrated his new souvenir, which was totally not intended for him, right in front of the goaltender. A sassy and perhaps seething Holtby stared back in disbelief, as if to say: I thought humanity was better than this, but I was wrong. So very wrong.

Eleven days ago, Holtbeast mode was engaged again after another adult got his signals crossed with the goaltender and likewise failed to hand a warmup puck over to a child.

overshot the puck for sale

A little over 40 days have passed since the tragic plane crash in Yaroslavl that took the lives of Lokomotiv's players and the plane's crew. Forty days is a significant date for the Russians for religious reasons. A special memorial service was held in Yaroslavl to mark the 40 days.

On Wednesday, the news emerged that investigators are working on the final report on the causes of the crash. The report is expected to list pilot error as the cause.

Based on the data recovered from the flight recorders, investigators were able to recreate the conditions the plane was in when it overshot the runway and didn't take off as it should have. Scientists concluded that the doomed plane crashed not due to technical difficulties, and not due to the quality of fuel, but due to braking applied to the landing gear. This braking force was not created by a technical problem with the plane, but by the pilot, who had his feet on the brake pedals, according to the Russian newspaper Kommersant.

"A special device - dynamometer - was attached to the plane column during the tests to measure the efforts of the pilots" hands. As it followed from the flight data recorder, the plane"s elevating rudder was inclined at first to 10 degrees during the acceleration, and then up to 13 degrees during the take-off.

The testers claim that, when they tried to reproduce such a deflection, they had to exert a force of tens of kilograms with their hands, according to the dynamometer. They could manage it only by pushing their feet into the brake pedals. The pilot had managed to keep the elevator in that position, at 13 degrees, for several seconds. In order to produce the required force of 64 kg, the tester had to apply the brakes with full force.

After accelerating and setting the elevator to its normal position of five degrees, the testers simply took their hands and feet off the controls, and the Yak-42 rose into the air with ease.

This fact suggests that if the pilot of the crashed plane had simply let go of the controls altogether, the takeoff would have proceeded normally. The results of the flight tests will form the basis of the technical committee's report on the causes of the Yak-42 disaster."

The official online KHL store has released commemorative Lokomotiv gear, stating that all the money raised from the sale of these goods will go to the families of those who lost their lives.

A commemorative T-shirt with the names of all those who perished printed on the back is selling for under $30. A commemorative hoodie is going for a little over $30. And a commemorative scarf that reads "Loving. Remembering. Grieving." is selling for under $10.

The store offers overseas shipping, but it is difficult for fans in North America to purchase these items at this time because there is no English-language version of the online store. Our requests to the KHL for comment regarding the availability of the items to North American consumers have not yet been returned, although some hockey fans in North America have figured out a way around it by calling the phone number listed on the store's page and ordering over the phone.