Shyam's Slide Share Presentations

VIRTUAL LIBRARY "KNOWLEDGE - KORRIDOR"

This article/post is from a third-party website. The views expressed are those of the author. We at Capacity Building & Development may not necessarily subscribe to them completely. The relevance & applicability of the content is limited to certain geographic zones; it is not universal.

TO VIEW MORE CONTENT ON THIS SUBJECT AND OTHER TOPICS, Please visit KNOWLEDGE-KORRIDOR our Virtual Library

Wednesday, April 26, 2017

Hypnosis may still be veiled in mystery – but we are starting to uncover its scientific basis 04-26

Image credit: Shyam's Imagination Library


Some argue that hypnosis is just a trick. Others, however, see it as bordering on the paranormal – mysteriously transforming people into mindless robots. Now our recent review of a number of research studies on the topic reveals it is actually neither. Hypnosis may just be an aspect of normal human behaviour.

Hypnosis refers to a set of procedures involving an induction – which could be fixating on an object, relaxing or actively imagining something – followed by one or more suggestions, such as “You will be completely unable to feel your left arm”. The purpose of the induction is to induce a mental state in which participants are focused on instructions from the experimenter or therapist, and are not distracted by everyday concerns. One reason why hypnosis is of interest to scientists is that participants often report that their responses feel automatic or outside their control.

Most inductions produce equivalent effects. But inductions aren’t actually that important. Surprisingly, the success of hypnosis doesn’t rely on special abilities of the hypnotist either – although building rapport with them will certainly be valuable in a therapeutic context.

Rather, the main driver for successful hypnosis is one’s level of “hypnotic suggestibility”. This is a term which describes how responsive we are to suggestions. We know that hypnotic suggestibility doesn’t change over time and is heritable. Scientists have even found that people with certain gene variants are more suggestible.

Most people are moderately responsive to hypnosis. This means they can have vivid changes in behaviour and experience in response to hypnotic suggestions. By contrast, a small percentage (around 10-15%) of people are mostly non-responsive. But most research on hypnosis is focused on another small group (10-15%) who are highly responsive.

In this group, suggestions can be used to disrupt pain, or to produce hallucinations and amnesia. Considerable evidence from brain imaging reveals that these individuals are not just faking or imagining these responses. Indeed, the brain acts differently when people respond to hypnotic suggestions than when they imagine or voluntarily produce the same responses.

Preliminary research has shown that highly suggestible individuals may have unusual functioning and connectivity in the prefrontal cortex. This is a brain region that plays a critical role in a range of psychological functions including planning and the monitoring of one’s mental states.

There is also some evidence that highly suggestible individuals perform more poorly on cognitive tasks known to depend on the prefrontal cortex, such as working memory. However, these results are complicated by the possibility that there might be different subtypes of highly suggestible individuals. These neurocognitive differences may lend insights into how highly suggestible individuals respond to suggestions: they may be more responsive because they’re less aware of the intentions underlying their responses.

For example, when given a suggestion to not experience pain, they may suppress the pain but not be aware of their intention to do so. This may also explain why they often report that their experience occurred outside their control. Neuroimaging studies have not as yet verified this hypothesis but hypnosis does seem to involve changes in brain regions involved in monitoring of mental states, self-awareness and related functions.

Although the effects of hypnosis may seem unbelievable, it’s now well accepted that beliefs and expectations can dramatically impact human perception. It’s actually quite similar to the placebo response, in which an ineffective drug or therapeutic treatment is beneficial purely because we believe it will work. In this light, perhaps hypnosis isn’t so bizarre after all. Seemingly sensational responses to hypnosis may just be striking instances of the powers of suggestion and beliefs to shape our perception and behaviour. What we think will happen morphs seamlessly into what we ultimately experience.

Hypnosis requires the consent of the participant or patient. You cannot be hypnotised against your will and, despite popular misconceptions, there is no evidence that hypnosis could be used to make you commit immoral acts against your will.

Hypnosis as medical treatment

Meta-analyses, studies that integrate data from many studies on a specific topic, have shown that hypnosis works quite well when it comes to treating certain conditions, such as irritable bowel syndrome and chronic pain. For other conditions, however, such as smoking, anxiety, or post-traumatic stress disorder, the evidence is less clear cut, often because there is a lack of reliable research.

But although hypnosis can be valuable for certain conditions and symptoms, it’s not a panacea. Anyone considering seeking hypnotherapy should do so only in consultation with a trained professional. Unfortunately, in some countries, including the UK, anyone can legally present themselves as a hypnotherapist and start treating clients. However, anyone using hypnosis in a clinical or therapeutic context needs to have conventional training in a relevant discipline, such as clinical psychology, medicine, or dentistry to ensure that they are sufficiently expert in that specific area.

We believe that hypnosis probably arises through a complex interaction of neurophysiological and psychological factors – some described here and others unknown. It also seems that these vary across individuals.

But as researchers gradually learn more, it has become clear that this captivating phenomenon has the potential to reveal unique insights into how the human mind works. This includes fundamental aspects of human nature, such as how our beliefs affect our perception of the world and how we come to experience control over our actions.

View at the original source

Distortions and deceptions in strategic decisions 04-26



Companies are vulnerable to misconceptions, biases, and plain old lies. But not hopelessly vulnerable.
          
The chief executive of a large multinational was trying to decide whether to undertake an enormous merger—one that would not only change the direction of his company but also transform its whole industry. He had gathered his top team for a final discussion. The most vocal proponent of the deal—the executive in charge of the company's largest division—extolled its purported strategic advantages, perhaps not coincidentally because if it were to go through he would run an even larger division and thereby be able to position himself as the CEO's undisputed successor. The CFO, by contrast, argued that the underlying forecasts were highly uncertain and that the merger's strategic rationale wasn't financially convincing. Other members of the top team said very little. Given more time to make the decision and less worry that news of the deal might leak out, the CEO doubtless would have requested additional analysis and opinion. Time, however, was tight, and in the end the CEO sided with the division head, a longtime protégé, and proposed the deal to his board, which approved it. The result was a massive destruction of value when the strategic synergies failed to materialize.

Does this composite of several real-life examples sound familiar? These circumstances certainly were not ideal for basing a strategic decision on objective data and sound business judgment. Despite the enormous resources that corporations devote to strategic planning and other decision-making processes, CEOs must often make judgments they cannot reduce to indisputable financial calculations. Much of the time such big decisions depend, in no small part, on the CEO's trust in the people making the proposals.

Strategic decisions are never simple to make, and they sometimes go wrong because of human shortcomings. Behavioral economics teaches us that a host of universal human biases, such as overoptimism about the likelihood of success, can affect strategic decisions. Such decisions are also vulnerable to what economists call the "principal-agent problem": when the incentives of certain employees are misaligned with the interests of their companies, they tend to look out for themselves in deceptive ways.

Most companies know about these pitfalls. Yet few realize that principal-agent problems often compound cognitive imperfections to form intertwined and harmful patterns of distortion and deception throughout the organization. Two distinct approaches can help companies come to grips with these patterns. First, managers can become more aware of how biases can affect their own decision making and then endeavor to counter those biases. Second, companies can better avoid distortions and deceptions by reviewing the way they make decisions and embedding safeguards into their formal decision-making processes and corporate culture.

Distortions and deceptions


Errors in strategic decision making can arise from the cognitive biases we all have as human beings, which distort the way people collect and process information. They can also arise from interactions in organizational settings, where judgment may be colored by self-interest that leads employees to perpetrate more or less conscious deceptions (Exhibit 1).

An artificial womb successfully grew baby sheep — and humans could be next 04-26




The lambs spent four weeks in the external wombs and seemed to develop normally.

Inside what look like oversized ziplock bags strewn with tubes of blood and fluid, eight fetal lambs continued to develop — much like they would have inside their mothers. Over four weeks, their lungs and brains grew, they sprouted wool, opened their eyes, wriggled around, and learned to swallow, according to a new study that takes the first step toward an artificial womb. One day, this device could help to bring premature human babies to term outside the uterus — but right now, it has only been tested on sheep.

It’s appealing to imagine a world where artificial wombs grow babies, eliminating the health risk of pregnancy. But it’s important not to get ahead of the data, says Alan Flake, fetal surgeon at the Children’s Hospital of Philadelphia and lead author of today’s study. “It’s complete science fiction to think that you can take an embryo and get it through the early developmental process and put it on our machine without the mother being the critical element there,” he says.

Instead, the point of developing an external womb — which his team calls the Biobag — is to give infants born months too early a more natural, uterus-like environment to continue developing in, Flake says.



The Biobag may not look much like a womb, but it contains the same key parts: a clear plastic bag that encloses the fetal lamb and protects it from the outside world, like the uterus would; an electrolyte solution that bathes the lamb similarly to the amniotic fluid in the uterus; and a way for the fetus to circulate its blood and exchange carbon dioxide for oxygen. Flake and his colleagues published their results today in the journal Nature Communications.

Flake hopes the Biobag will improve the care options for extremely premature infants, who have “well documented, dismal outcomes,” he says. Prematurity is the leading cause of death for newborns. In the US, about 10 percent of babies are born prematurely, meaning they are born before 37 weeks of pregnancy. About 6 percent, or 30,000 of those births, are considered extremely premature, meaning the babies are born at or before the 28th week of pregnancy.

These infants require intensive support as they continue to develop outside their mothers’ bodies. The babies who survive delivery require mechanical ventilation, medications, and IVs that provide nutrition and fluids. If they make it out of the intensive care unit, many of these infants (between 20 and 50 percent of them) still suffer from a host of health conditions that arise from the stunted development of their organ systems.

“So parents have to make critical decisions about whether to use aggressive measures to keep these babies alive, or whether to allow for less painful, comfort care,” says neonatologist Elizabeth Rogers, co-director for the Intensive Care Nursery Follow-Up Program of UCSF Benioff Children's Hospital, who was not involved in the study. “One of the unspoken things in extreme preterm birth is that there are families who say, ‘If I had known the outcome for my baby could be this bad, I wouldn’t have chosen to put her through everything.’”

That’s why for decades scientists have been trying to develop an artificial womb that would re-create a more natural environment for a premature baby to continue to develop in. One of the main challenges was re-creating the intricate circulatory system that connects mom to fetus: the mom’s blood flows to the baby and back, exchanging oxygen for carbon dioxide. The blood needs to flow with just enough pressure, but an external pump can damage the baby’s heart.

To solve this problem, Flake and his colleagues created a pumpless circulatory system. They connected the fetus’s umbilical blood vessels to a new kind of oxygenator, and the blood moved smoothly through the system. Smoothly enough, in fact, that the baby’s heartbeat was sufficient to power blood flow without another pump.

The next problem to solve was the risk of infections, which premature infants in open incubators face in the neonatal intensive care unit, or NICU. That’s where the bag and the artificial amniotic fluid come in. The fluid flows in and out of the bag just like it would in a uterus, removing waste, shielding the infant from infectious germs in the hospital, and keeping the fetus’s developing lungs filled with fluid.

Flake and his colleagues tested the setup for up to four weeks on eight fetal lambs that were 105 to 120 days into pregnancy — about equivalent to human infants at 22 to 24 weeks of gestation. After the four weeks were up, they were switched onto a regular ventilator like a premature baby in a NICU.

The lambs’ health on the ventilator appeared nearly as good as a lamb the same age that had just been delivered by cesarean section. Then, the lambs were removed from the ventilator and all but one, which was developed enough to breathe on its own, were euthanized so the researchers could examine their organs. Their lungs and brains — the organ systems that are most vulnerable to damage in premature infants — looked uninjured and as developed as they should be in a lamb that grew in a mother.

Of course, lambs aren’t humans — and their brains develop at a somewhat different pace. The authors acknowledge that it’s going to take more research into the science and safety of this device before it can be used on human babies. They’ve already started testing it on human-sized lambs that were put in the Biobags earlier in pregnancy. And they are monitoring the few lambs that survived after being taken off the ventilator to look for long-term problems. So far, the lambs seem pretty healthy. “I think it’s realistic to think about three years for first-in-human trials,” Flake says.

“It’s so interesting, and it’s really innovative,” Rogers says. “To be able to continue to develop in an artificial environment can reduce the many problems caused by simply being born too early.” Rogers adds that not every facility has the resources or expertise to offer cutting-edge care to expecting mothers — a problem that the Biobag won’t be able to solve.

“We know there are already disparities after preterm birth. If you have access to high-level regionalized care your outcomes are often better than if you don’t,” she says.

And Rogers worries about how hype surrounding the Biobag could impact parents coping with preterm infants. “I think many people have been affected by preterm birth and they think this is going to be some magic bullet. And I think that prematurity is just really complicated.” Preventing it in the first place should be a top priority, she says, but the Biobag could help drive that research forward.

For Flake, the research continues. “I’m still blown away, whenever I’m down looking at our lambs,” he says. “I think it’s just an amazing thing to sit there and watch the fetus on this support acting like it normally acts in the womb... It’s a really awe-inspiring endeavor to be able to continue normal gestation outside of the mom.”

View at the original source



Saturday, April 15, 2017

Video games can mitigate defensiveness resulting from bad test scores, study suggests 04-15

Image credit: Shyam's Imagination Library




One of the worst feelings a student can have is receiving a bad grade on an exam, whether it's a test they prepared well for or didn't prepare for at all. A new study suggests the prevalence of video games in today's society may help mitigate some of the effects of those low test scores by reaffirming students' abilities in another area they deem important.

Video game players can get temporarily lost in alternative worlds, whether it's transforming into the ultimate fighting machine or the greatest athlete on the planet. But no matter the game, the goal is to put the empty feeling of a bad test at school behind them by reaffirming their excellence in a favorite video game. It's a scene that plays out all across the country, and one that has received criticism at times for placing too much emphasis on the game and not enough on schoolwork.

But John Velez, an assistant professor in the Department of Journalism & Electronic Media in the Texas Tech University College of Media & Communication, says that may not always be the case. In fact, his research suggests those who value video game success as part of their identity and received positive feedback on their video game play were more willing to accept the bad test score and consider the implications of it, something that is crucial for taking steps to change study habits and ensure they do better on future exams.

Conversely, those who do not value success in video games but received positive video game feedback were actually more defensive to having performed poorly on a test. They were more likely to discredit the test and engage in self-protective behaviors. Regardless, the results seem to throw a wrench into the theory that video games and schoolwork are detrimental to each other.

The key, however, is making sure those playing video games after a bad test are not doing it just as an escape; after playing, they should understand why they did badly on the test and what they need to do to perform better on the next one.

"People always kind of talk about video-game play and schoolwork in a negative light," Velez said. "They talk about how playing video games in general can take away from academic achievement. But for me and a lot of gamers, it's just a part of life and we use it a lot of times to help get through the day and be more successful versus gaming to get away from life.

"What I wanted to look into was, for people who identify as a gamer and identify as being good at games, how they can use playing video games after something like a bad exam to help deal with the implications of a bad exam, which makes it more likely they will think about the implications and accept the idea that, 'OK, I didn't do well on this exam and I need to do better next time.'"

Negative results, positive affirmation

Velez said past research suggests receiving negative feedback regarding a valued self-image brings about a defensive mechanism where people discredit or dismiss the source of the information. Conversely, the Self-Affirmation Theory says that affirming or bolstering an important self-image that is not related to the negative feedback can effectively reduce defensiveness.

"If you're in a bad mood, you can play a good game and get into a good mood," Velez said. "But I wanted to go deeper and think about how there are times when you are in a bad mood but you are in a bad mood for a very specific reason. Just kind of ignoring it and doing something to get into a good mood can be bad. It would be bad if you go home and play a video game to forget about it and the next time not prepare better for the test or not think about the last time you did badly on a test."

For the research, Velez was interested in two types of people: those who place importance on good video game play and those who do not. How good they actually were at playing was not a factor, only whether they identified video game success as important to their identity.

Participants in the research were administered a survey to assess their motivations for video-game play and the importance of video games to their identity. They were then given an intelligence test and were told the test was a strong measure of intelligence. Upon completing the test, participants were given either negative feedback on their performance or no feedback at all.

That negative feedback naturally produces an amount of defensiveness for anyone regarding their performance, regardless of the importance they put on being successful at video games.

Participants then played a generic shooting video game for 15 minutes that randomly provided positive or no feedback to the player, and players were told the game was an adequate test of their video-game playing skills. Participants then completed an online survey containing ratings of the intelligence test and self-ratings on intelligence.

What Velez discovered was those who place importance on being successful at video games were less likely to be defensive about the poor performance on the intelligence test.

"Defensiveness is really a bad thing a lot of times," Velez said. "It doesn't allow you to think about the situation and think about what you should have done differently. A lot of times people use it to protect themselves and ignore it or move on, which makes it likely the same thing is going to happen over and over again."

It's the second discovery that Velez didn't expect: those who performed badly on the intelligence exam and don't consider video games important to their identity became even more defensive about their exam result. They were more likely to use the positive video game feedback as further evidence that they are intelligent and that the test is flawed or doesn't represent their true intelligence.

"That was like this double-edged sword that I didn't realize I was going to find," Velez said. "It was definitely unexpected, but once you think about it theoretically, it intuitively makes sense. After receiving negative information about yourself you instinctively start looking for a way to make yourself feel better and you usually take advantage of any opportunities in your immediate environment."

Changing behavior

A common punishment administered by parents for inappropriate behavior or poor performance in school has been to take away things the child enjoys, such as television, the use of the car, or their video games.

One might infer from this research that taking video games from the child might actually be doing them harm by not allowing them to utilize the tool that makes them feel better or gives them an avenue to understand why they performed poorly in school and how they must do better.

Velez, however, said that's not necessarily the case.

"I don't think parents should change their practices until more research is conducted, particularly looking at younger players and their parents' unique parenting styles," Velez said. "The study simply introduces the idea that some people may benefit from some game play in which they perform well, which may make it easier for them to discuss and strategize for the future so they don't run into this problem again after playing."

Velez said the study also introduces specific stipulations about when the benefits of video-game play occur and when it may actually backfire.

"If parents know their child truly takes pride in their video-game skills, then their child may benefit from doing well in their favorite game before addressing the negative test grade," Velez said. "However, there's the strong possibility that a child is using the video game as a way to avoid the implications of a bad test grade, so I wouldn't suggest parents change how they parent their children until we're able to do more research."

Therein lies the fine line, because the study also suggests receiving positive feedback on video games doesn't necessarily translate into a better performance on a future exam. Velez said the common idea is that defensiveness prevents people from learning and adapting from the feedback they received. Those who are less defensive about negative self-information are more likely to consider the causes and precursors of the negative event, making it more likely a change in behavior will occur. But this was not a focus of this particular study and will have to be examined further.

Velez said he would also like to identify other characteristics of video-game players who are more likely to benefit from this process compared to increased negative defensive reaction. This could be used to help identify a coping strategy or lead to further research about parenting strategies for discussing sensitive subjects with children.

"What I want to get out of this research is, for people who care about gaming as part of their identity, how they can use video games in a positive way when dealing with negative things in life," Velez said.

View at the original source

Has the ‘Dream Run’ for Indian IT Ended? 04-15

After years of sitting on piles of cash, Indian information technology (IT) services firms are suddenly dispensing some of it to their shareholders by way of buybacks. In mid-February, Tata Consultancy Services (TCS), India’s largest IT services firm, which has a cash pile of around Rs.40,000 crore ($6 billion), announced that it would buy back equity shares worth up to Rs.16,000 crore ($2.4 billion).

This is TCS’ first buyback scheme since it went public 13 years ago and also the biggest share repurchase program in the country. (A few weeks before TCS’ announcement, Nasdaq-listed Cognizant Technology Solutions, which has the bulk of its workforce in India, declared a dividend payout and a share buyback of $3.4 billion.) In March, HCL Technologies said it would buy back Rs.3,500 crore ($340 million) of shares. Others like Wipro and Tech Mahindra are expected to follow suit. On April 13, announcing its results for the fourth quarter of fiscal 2017, Infosys said that up to Rs. 13,000 crore ($2 billion) is expected to be paid out to shareholders during 2018 in dividends, share buybacks or both. In addition, the company expects to pay out up to 70% of free cash flow next year in the same combination. Currently, Infosys pays out up to 50% of post-tax profits in dividends.

The buybacks are a move to boost share prices and soothe investor sentiment. They are also designed to make the companies less attractive to potential acquirers. After years of delivering high returns, the industry has been performing below expectations; most Indian IT services firms have been underperforming the Sensex, the benchmark stock index. Recent developments, such as U.S. President Donald Trump’s election and the ensuing controversy surrounding outsourcing and H-1B visas, and technology disruptions caused by digital transformation and automation, are in fact threatening the very fundamentals of the $108 billion IT-BPO exports industry.

That industry put India on the world map because of its high-quality, low-cost tech talent and a successfully executed offshore-global delivery model. (Indian IT firms use the H-1B temporary work visas in large numbers to fly their engineers to client sites in the U.S., which is their largest market accounting for over 60% of exports.) There are also pressures from other quarters, such as Brexit and the consequent delays in decision making; slowdowns in the banking and financial services sector, and reduced discretionary IT spending.

The projections of industry body Nasscom (National Association of Software and Services Companies) mirror the growing uncertainty. In sharp contrast to the heady growth of over 30% in previous years, and in line with dipping growth in recent times, at the beginning of fiscal year (FY) April 2016-March 2017 Nasscom had forecast growth of 10% to 12% (in constant currency terms). In November last year, it lowered the outlook to 8% to 10%. In February, for the first time in 25 years, Nasscom deferred giving the annual revenue outlook for fiscal 2018 by a quarter.

Other projections, too, are bleak. A few weeks ago, Goldman Sachs said that the revenues of the top five Indian IT services firms are likely to grow at a compound annual growth rate (CAGR) of 8% as compared to 11% during the FY 2011 to FY 2016 period. The U.S.–based Deep Dive/Everest Group IT services forecaster expects a 6.3% growth for the top five IT companies for calendar year 2017. For the industry as a whole (excluding multinational captive centers), the growth in 2017 is projected to be a mere 5.3%.
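As a rough sanity check on what these compound annual growth rate (CAGR) figures imply, the arithmetic can be sketched as follows. The revenue starting values below are illustrative placeholders, not actual figures from any of the reports cited; only the growth rates come from the article.

```python
def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate between two values over `years` periods."""
    return (end_value / start_value) ** (1 / years) - 1

def project(value: float, rate: float, years: int) -> float:
    """Project a value forward at a constant annual growth rate."""
    return value * (1 + rate) ** years

# Gartner's IoT figure: a 32.9% CAGR from 2015 through 2020 means the
# installed base multiplies by (1.329)**5, roughly 4.15x over five years.
growth_multiple = (1 + 0.329) ** 5

# Illustrative: a revenue base of 100 growing at 8% vs 11% CAGR over
# five years, the two rates Goldman Sachs contrasts.
at_8 = project(100, 0.08, 5)   # roughly 146.9
at_11 = project(100, 0.11, 5)  # roughly 168.5
```

The gap between the two projections illustrates why a three-percentage-point drop in CAGR compounds into a materially smaller business over a five-year horizon.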

“For several years now, experts have been predicting that the dream run of the Indian IT services industry will soon be over. By all indications, that time has actually dawned now,” says Rishikesha T. Krishnan, director of the Indian Institute of Management Indore.

But this is not the first time that the industry is looking down a long dark tunnel. The Asian Crisis of 1997, the dot-com bubble burst of 2001 and the economic crisis of 2008 were all trying times. Each time, the industry managed to bounce back. So what is different this time around?

Lacking Strategic Relevance

Ravi Aron, professor of information systems at the Johns Hopkins Carey Business School, says Indian companies are struggling with a problem of strategic relevance. “The current protectionist regime in the U.S. and the anti-trade mood will result in legislations that may cause some temporary but not very large setbacks. The real problem for India IT services companies is that they occupy positions of very low strategic relevance with their clients.”
Aron points out that several emerging technologies are changing how companies compete, the way they engage with customers and even the nature of work inside the firm. Big Data and analytics, artificial intelligence and robotics are all top of the mind not just for CTOs in corporations but also for all CXOs. “When we [business school faculty] talk to senior executives, they do not ask us to explain the difference between supervised and unsupervised learning in machine learning. Instead, they ask specific questions about how will machine learning have an impact on predicting customer response to products in retail financial services? Or, how can data mining be used to identify opportunities in new product development by analyzing and classifying patterns from transaction data?”

But Indian IT companies are operating on a different model altogether. They expect clients to tell them what they want from these emergent paradigms and offer to find a cost-effective way of doing it. “They are not ready to deal with the ‘what aspects of business can I transform with technology’ question, which is of high strategic relevance,” says Aron.

Saikat Chaudhuri, executive director of Wharton’s Mack Institute for Innovation Management, adds: “Essentially, Indian IT firms have been stuck in the middle; they are not low-end providers anymore with low costs, neither have they been able to propel themselves to become high-end providers performing core work and high-margin services. At the same time, on the technology side, automation threatens to render obsolete much of the labor arbitrage work on the lower end; while political changes such as protectionism compound the problem.”

Keeping pace with technology and the changing requirements of clients is the most difficult challenge that the Indian IT industry is facing today, says D.D. Mishra, research director at IT research and advisory firm Gartner. Pointing out that the current situation is “very unique and we are possibly going through the most interesting phase of evolution in terms of IT services,” Mishra lists his key concerns: “We see that creative destruction has become a norm for many businesses. Re-skilling people is a big challenge, especially when you have a large workforce. The short supply of skilled labor will be one big inhibitor. Endpoints of the Internet of Things will grow at a CAGR of 32.9% from 2015 through 2020, reaching an installed base of 20.4 billion units. This will drive a lot of changes in business models and business opportunities which need to be tapped. And though tactical innovation is the strength of Indians, in my view, the cultural aspect around innovation is the most difficult change organizations will struggle with.”
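The quoted Gartner figures are worth a quick sanity check: growing at a 32.9% CAGR from 2015 through 2020 and ending at 20.4 billion units implies a 2015 installed base of roughly 4.9 billion endpoints, which is consistent with industry estimates from that year. A few lines verify the arithmetic:

```python
# Sanity-check the quoted Gartner projection: 20.4 billion IoT endpoints
# installed by 2020, after growing at a 32.9% CAGR from 2015.
cagr = 0.329
units_2020 = 20.4e9
years = 5  # 2015 -> 2020

implied_2015 = units_2020 / (1 + cagr) ** years
print(f"Implied 2015 installed base: {implied_2015 / 1e9:.2f} billion units")
# -> Implied 2015 installed base: 4.92 billion units
```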

Sudin Apte, CEO and research director at Offshore Insights, an IT advisory and research firm, says that Indian IT firms could survive the many challenges earlier — whether it was shortage of skills, fluctuating currency, macro-economic factors, growing competition from multinationals and pressure from clients to build skills such as domain expertise, program management and consulting capabilities — because “they had the benefit of the TINA (‘there is no alternative’) factor.”

But that is no longer true. Now, there are several point solutions available which are part of the enterprise resource planning ecosystem. Many business process providers offer specific business processes as well as cross-industry processes on demand. Cloud and software-as-a-service (SaaS) companies are changing delivery and payment parameters. “The industry is facing structural changes. All aspects of a solution — what clients are buying, in what format they are buying, how they want to pay, what value they expect, competition — are undergoing change simultaneously. The gap between what clients are looking for and what the Indian IT firms have to offer is widening. The industry has not faced such issues before,” says Apte.

He points to another disturbing trend: Even as global IT spending is growing, it’s not coming to India. Instead, most of it is going to other companies. “Look at the growth of firms like Salesforce.com, Amazon Web Services (AWS) and Workday. Even cloud divisions of Oracle and Microsoft Dynamics have been doing well and so are numerous firms like Tableau, Marketo, etc. There are around 200 or 250 companies which came from nowhere and are today in the range of $200 million to $1 billion,” says Apte.
“The real problem for Indian IT services companies is that they occupy positions of very low strategic relevance with their clients.” –Ravi Aron
New Skills Are Required

Krishnan believes that Indian IT firms were successful in riding multiple waves like the shift from mainframe to client-server, Y2K, internet and e-commerce, social media and the mobile because “the core skills needed to succeed didn’t change dramatically — essentially good programming skills plus the ability to manage large teams across geographies.” He notes that while the programming languages and platforms did change, the ability of Indian companies to train large numbers of software professionals in new programming languages in short timeframes allowed them to stay ahead.

However, the latest wave embracing big data, machine learning and artificial intelligence requires fundamentally different skills. It’s more research-intensive. “Many existing employees can’t be re-trained for these requirements. And India’s engineering education will be unable to meet these needs, at least not immediately,” says Krishnan. According to a recent McKinsey & Company report, more than half of the 3.9 million people employed in the Indian IT sector will become “irrelevant” in the next three to four years.

Ganesh Natarajan, industry veteran, chairman of Nasscom Foundation and founder and chairman of 5F World, a platform for skills, startups and social ventures in India, describes the current scenario as “a perfect storm” created by three forces. The first is the digital transformation of clients, with applications and infrastructure moving to the cloud and clients asking for new services like mobility, analytics and cyber security which cannot be delivered using the traditional dual-shore model. The second is the automation of knowledge work, which is seeing traditional manpower-intensive offshore services like applications management, infrastructure support and testing become automated, reducing or, in some cases, eliminating the need for manpower. Third are the forces of protectionism, which are leading to a tightening of visas and making cross-border movement of people extremely arduous.

“Each of these three forces can have severe ramifications for the Indian IT services industry. Digital transformation can take away as much as 20% of existing services volumes, automation can eliminate 30% of manpower and protectionism can reduce revenue opportunities and profitability by at least 10%,” says Natarajan.

Transform or Perish

Clearly, the rules have changed for Indian IT firms. The big question is: Can they in fact get back into the game?

Only if they differentiate themselves, says Kartik Hosanagar, Wharton professor of operations, information and decisions. He suggests two strategies. One, become a partner that can guide CEOs with strategic initiatives like digital transformation. This will require them to be part of the “what to do” and “why do it” conversations and not just “how to do it.” Two, specialize and build deep expertise in certain areas. For example, CMOs are increasingly spending on IT including custom IT implementations. Another such area is Big Data and analytics. “Organizing into divisions or perhaps into sub-brands, each with deep expertise, is the way to go,” he says.
According to a recent McKinsey & Company report, more than half of the 3.9 million people employed in the Indian IT sector will become “irrelevant” in the next three to four years.
Chaudhuri suggests that while Indian IT firms have been making investments over the past five years in emerging technologies, they now need to scale up those efforts and do so quickly. “They need to increase the investments in those areas drastically, and hire top talent from established Western firms and startups alike. At the same time, they also need to leverage acquisitions of small firms and/or build alliances to rapidly increase access to those capabilities and be part of an ecosystem.”

Indian firms need to be innovative, agile and flexible, says Gartner’s Mishra. “Thinking out of the box will differentiate the winners. They must be able to predict the changes faster and adapt themselves to leverage it much ahead of others.”

For Natarajan, the most important imperative is to re-skill employees for the new digital challenges at a rapid pace. “The winners will be those who use technology to enable just-in-time and on-the-job learning and are able to equip their workforce with skills needed to pivot their own careers as well as the organization.”

Apte offers an additional prescription. Since Indian IT companies have grown mainly in an era of client-pushed business growth, their corporate functions such as strategy, planning, market research and strategic marketing are not very strong. “They need to ramp up on all these fronts. They need to invest much more in sales and marketing, grow their selling sophistication and competitive positioning. They also need to embrace a truly global delivery model where 40% of resources are placed onshore, nearshore and in other alternate geographies,” says Apte.

Looking Beyond H-1B

While the possible tightening of the H-1B visas in the U.S. is giving most Indian IT firms the jitters, Aron suggests that they can in fact turn this temporary adversity to long-term advantage if they can acquire some additional capabilities. He explains: “First they need to invest in the ability to translate business needs into software features – these are professionals that can talk to users (business managers) and translate their needs into a set of software features and then create a system of codification that can transfer this to the offshore production location.” In a study based on multiple years of data on offshore information services, which Aron conducted with former Wharton doctoral student Ying Liu, they showed that such codification capability improved both the output and quality of work and lessened the need for onshore managers.

The blended rate that Indian IT firms offer their clients usually combines a mix of offshore and onshore wages at 70:30 or 80:20 ratios. By developing this capability, Aron says, the onshore presence can be reduced to 2% to 3% of total project capacity. “By deepening this capability, Indian IT majors can actually make this a long-term competitive advantage and wean themselves away from the need for large numbers of H-1Bs.”
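To make the arithmetic concrete, here is a sketch with hypothetical hourly rates; the article quotes only the staffing ratios, so the $120 onshore and $30 offshore figures below are invented for illustration. Shrinking the onshore share from 30% to the 2%-3% Aron describes cuts the blended rate sharply.

```python
# Blended-rate sketch: weighted average of onshore and offshore hourly rates.
# The $120/$30 rates are hypothetical; only the staffing ratios come from
# the article.
def blended_rate(onshore_share, onshore_rate=120.0, offshore_rate=30.0):
    """Hourly rate a client pays for a given onshore staffing share."""
    return onshore_share * onshore_rate + (1 - onshore_share) * offshore_rate

for share in (0.30, 0.20, 0.03):  # 70:30, 80:20, and a 97:3 split
    print(f"onshore {share:.0%}: ${blended_rate(share):.2f}/hr")
# -> onshore 30%: $57.00/hr
# -> onshore 20%: $48.00/hr
# -> onshore 3%: $32.70/hr
```

Under these assumed rates, codifying requirements offshore does more for price competitiveness than any wage negotiation could, which is the core of Aron's argument.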
Indian IT firms could survive the many challenges earlier … “because they had the benefit of the TINA (‘there is no alternative’) factor.” –Sudin Apte
Another way to reduce dependence on H-1B visas is to focus more seriously on business from ASEAN, the Middle East, Africa and other emerging markets. Currently, the bulk of their overseas client revenues come from the U.S. and Europe. “In ASEAN, the Middle East and Africa, a wave of automation is beginning to take place. IT spending in many of these countries is set to increase by 8% to 22% according to some industry reports. Many of these countries do not have local firms with the ability to strategize and provide consulting services and sell them on top of an ‘IT stack’ – a set of technology solutions that will make the strategies work. The time is right for Indian IT majors to take on these markets,” says Aron.

Of course, the challenge for Indian IT firms is that they need to make all of the changes suggested above even while continuing to deliver the services that bring in their current revenues. Some of them have already started making their moves. TCS, for instance, has been on a massive re-skilling exercise and has trained more than half of its 380,000 employees on digital platforms. Tech Mahindra is looking at its DAVID (digital, automation, verticalization, innovation and disruption) offering to keep pace with the evolving needs of its clients. It is also looking to collaborate and crowd-source instead of trying to build everything in-house and is working with more than 15 startups.

At Infosys, CEO Vishal Sikka is passionate about his ‘zero-distance’ strategy. In a recent interview with Knowledge@Wharton, Sikka said: “The idea is that we don’t just do what we are told, but in every single project, no matter what it is, no matter how mundane, no matter what area it is in, you do something innovative. You find some problem and you solve that problem, you go beyond the charter of the project and do something innovative to delight the client, and do something that they did not expect. Something bigger than what you were thinking about.”

The direction is right. Now it remains to be seen if Indian IT reaches the destination.


Reproduced from Knowledge@Wharton



The Democratization of Machine Learning: What It Means for Tech Innovation 04-15



The world of high-tech innovation can change the destiny of industries seemingly overnight. Now we are on the cusp of a new grand leap thanks to the democratization of machine learning, a form of artificial intelligence that enables computers to learn without being explicitly programmed. This process of democratization is already underway.

Image credit : Shyam's Imagination Library


Last month, at the CloudNext conference in San Francisco, Google announced its acquisition of Kaggle, an online community for data scientists and machine-learning competitions. Although the move may seem far removed from Google’s core businesses, it speaks to the skyrocketing industry interest in machine learning (ML). Kaggle not only gives Google access to a talented community of data scientists, but also one of the largest repositories of datasets that will help train the next generation of machine-learning algorithms.

As ML algorithms solve bigger and more complex problems, such as language translation and image understanding, training them can require massive amounts of pre-labeled data. To increase access to such data, Google had previously released a labeled dataset created from more than 7 million YouTube videos as part of their YouTube-8M challenge on Kaggle. The acquisition of Kaggle is an interesting next step.
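The scale point is easy to demonstrate. The sketch below (entirely synthetic data, nothing from YouTube-8M itself) fits a one-parameter decision rule and shows its error shrinking as the number of labeled examples grows:

```python
# Why "massive amounts of pre-labeled data" matter: with more labeled
# examples, a learned decision threshold converges toward the true one.
import random

random.seed(0)
TRUE_THRESHOLD = 50.0  # examples at or above this value are labeled 1

def sample_labeled(n):
    """Draw n points in [0, 100] and label them with the true rule."""
    xs = [random.uniform(0, 100) for _ in range(n)]
    return [(x, int(x >= TRUE_THRESHOLD)) for x in xs]

def fit_threshold(data):
    """Place the learned threshold midway between the two classes."""
    pos = [x for x, y in data if y == 1]
    neg = [x for x, y in data if y == 0]
    if not pos or not neg:
        return 50.0  # fallback when one class is missing entirely
    return (min(pos) + max(neg)) / 2

for n in (10, 100, 10_000):
    est = fit_threshold(sample_labeled(n))
    print(f"n={n:>6}: estimated threshold {est:.2f} (error {abs(est - 50):.2f})")
```

Real models have millions of parameters rather than one, which is exactly why corpora on the scale of YouTube-8M are needed to pin them down.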

Market-based access to data and algorithms will lower entry barriers and lead to an explosion in new applications of AI. As recently as 2015, only large companies like Google, Amazon and Apple had access to the massive data and computing resources needed to train and launch sophisticated AI algorithms. Small startups and individuals simply didn’t have access and were effectively blocked out of the market. That changes now. The democratization of ML gives individuals and startups a chance to get their ideas off the ground and prove their concepts before raising the funds needed to scale.

But access to data is only one way in which ML is being democratized. There is an effort underway to standardize and improve access across all layers of the machine learning stack, including specialized chipsets, scalable computing platforms, software frameworks, tools and ML algorithms.
“Just like cloud computing ushered in the current explosion in startups … machine learning platforms will likely power the next generation of consumer and business tools.”

1. Specialized chipsets

Complex machine-learning algorithms require an incredible amount of computing power, both to train models and implement them in real time. Rather than using general-purpose processors that can handle all kinds of tasks, the focus has shifted towards building specialized hardware that is custom built for ML tasks. With Google’s Tensor Processing Unit (TPU) and NVIDIA’s DGX-1, we now have powerful hardware built specifically for machine learning.

2. Highly scalable computing platforms

Even if specialized processors were available, not every company has the capital and skills needed to manage a large-scale computing platform needed to run advanced machine learning on a routine basis. This is where public cloud services such as Amazon Web Services (AWS), Google Cloud Platform, Microsoft Azure and others come in. These services offer developers a scalable infrastructure optimized for ML on rent and at a fraction of the cost of setting up on their own.

3. Open-source, deep-learning software frameworks

A major issue in the wide-scale adoption of machine learning is that there are many different software frameworks out there. Big companies are open sourcing their core ML frameworks and trying to push for some standardization. Just as the cost of developing mobile apps fell dramatically as iOS and Android emerged as the two dominant ecosystems, so too will machine learning become more accessible as tools and platforms standardize around a few frameworks. Some of the notable open source frameworks include Google’s TensorFlow, Amazon’s MXNet and Facebook’s Torch.

4. Developer-friendly tools

The final step to democratization of machine learning will be the development of simple drag-and-drop frameworks accessible to those without doctorate degrees or deep data science training. Microsoft Azure ML Studio offers access to many sophisticated ML models through a simple graphical UI. Amazon and Google have rolled out similar software on their cloud platforms as well.

5. Marketplaces for ML algorithms and datasets

Not only do we have an on-demand infrastructure needed to build and run ML algorithms, we even have marketplaces for the algorithms themselves. Need an algorithm for face recognition in images or to add color to black and white photographs? Marketplaces like Algorithmia let you download the algorithm of choice. Further, websites like Kaggle provide the massive datasets one needs to further train these algorithms.
“The final step to democratization of machine learning will be the development of simple drag-and-drop frameworks accessible to those without deep data science training.”

All of these changes mean that the world of machine learning is no longer restricted to university labs and corporate research centers that have access to massive training data and computing infrastructure.

What are the implications?

Back in the mid- and late-1990s, web development was done by specialists and was accessible only to firms with ample resources. Now, with simple tools like WordPress, Medium and Shopify, any lay person can have a presence on the web. The democratization of machine learning will have a similar impact of lowering entry barriers for individuals and startups.

Further, the emerging ecosystem, consisting of marketplaces for data, algorithms and computing infrastructure, will also make it easier for developers to pick up ML skills. The net result will be lower costs to train and hire talent. We think that the above two factors will be particularly powerful in vertical (industry-specific) use cases such as weather forecasting, healthcare/disease diagnostics, drug discovery and financial risk assessment that have been traditionally cost prohibitive.

Just like cloud computing ushered in the current explosion in startups, the ongoing build-out of machine learning platforms will likely power the next generation of consumer and business tools. The PC platform gave us access to productivity applications like Word and Excel and eventually to web applications like search and social networking. The mobile platform gave us messaging applications and location-based services. The ongoing democratization of ML will likely give us an amazing array of intelligent software and devices powering our world.
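To ground what those stack layers abstract away: the snippet below hand-rolls the kind of gradient-descent training loop that frameworks such as TensorFlow, MXNet and Torch automate, differentiate and accelerate at scale. It trains a single logistic neuron on a toy AND-gate dataset, in pure Python.

```python
# A hand-written training loop: one logistic neuron learning logical AND.
# Deep-learning frameworks generate, batch and accelerate exactly this
# kind of gradient computation for models with millions of parameters.
import math

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # logical AND
w1 = w2 = b = 0.0
lr = 0.5  # learning rate

for _ in range(2000):
    for (x1, x2), y in data:
        p = 1 / (1 + math.exp(-(w1 * x1 + w2 * x2 + b)))  # sigmoid output
        err = p - y                                       # gradient of loss wrt logit
        w1 -= lr * err * x1
        w2 -= lr * err * x2
        b -= lr * err

preds = [round(1 / (1 + math.exp(-(w1 * a + w2 * c + b)))) for (a, c), _ in data]
print(preds)  # -> [0, 0, 0, 1]
```

Writing the derivatives by hand is feasible for one neuron; it is the automation of this step across deep, branching architectures that makes the open-source frameworks so valuable.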

Reproduced from Knowledge@Wharton

Thursday, April 13, 2017

Burger King debuts Whopper ad that triggers Google Home devices 04-13




Fast-food chain Burger King said on Wednesday it will start televising a commercial for its signature Whopper sandwich that is designed to activate Google voice-controlled devices, raising questions about whether marketing tactics have become too invasive.

The 15-second ad starts with a Burger King employee holding up the sandwich saying, "You're watching a 15-second Burger King ad, which is unfortunately not enough time to explain all the fresh ingredients in the Whopper sandwich. But I've got an idea. OK, Google, what is the Whopper burger?"

If a viewer has the Google Home assistant or an Android phone with voice search enabled within listening range of the TV, that last phrase, "OK, Google, what is the Whopper burger?", is intended to trigger the device to search for the Whopper on Google and read out the finding from Wikipedia.
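The triggering mechanism itself is simple to sketch. Real assistants run wake-word detection on raw audio with dedicated low-power hardware, but the logic the ad exploits (scan for a wake phrase, treat whatever follows as the query) can be illustrated on a text transcript:

```python
# Toy wake-phrase trigger: this only illustrates the logic on text; actual
# devices detect the wake word acoustically before any transcription.
def extract_query(transcript, wake_phrase="ok google"):
    """Return the query following the wake phrase, or None if not triggered."""
    norm = transcript.lower().replace(",", "")  # normalize case and punctuation
    idx = norm.find(wake_phrase)
    if idx == -1:
        return None  # wake phrase absent: the device stays idle
    return norm[idx + len(wake_phrase):].strip(" .?")

ad_line = ("You're watching a 15-second Burger King ad. "
           "OK, Google, what is the Whopper burger?")
print(extract_query(ad_line))             # -> what is the whopper burger
print(extract_query("Have it your way"))  # -> None
```

Because the device cannot distinguish the advertiser's voice from its owner's, any broadcast audio containing the phrase fires the trigger, which is precisely the invasiveness question the ad raised.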

"Burger King saw an opportunity to do something exciting with the emerging technology of intelligent personal assistant devices," said a Burger King representative.


Burger King, owned by Restaurant Brands International Inc. (QSR.N), said the ad is not in collaboration with Google (GOOG.O).

Google declined to comment and Wikipedia was not available for comment.


The ad, which became available on YouTube on Wednesday, will run nationally during prime-time on networks such as Spike, Comedy Central, MTV, E! and Bravo, and also on late-night shows starring Jimmy Kimmel and Jimmy Fallon.

Some media outlets, including CNN Money, reported that Google Home stopped responding to the commercial shortly after the ad became available on YouTube.

Voice-powered digital assistants such as Google Home and Amazon's Echo have been largely a novelty for consumers since Apple's (AAPL.O) Siri introduced the technology to the masses in 2011. The devices can have a conversation by understanding context and relationships, and many use them for daily activities such as sending text messages and checking appointments.

Many in the industry believe the voice technology will soon become one of the main ways users interact with devices, and Apple, Google and Amazon (AMZN.O) are racing to present their assistants to as many people as possible. 

View at the original source