Author’s note: This is my 2006 dissertation submitted as part of my Master’s degree in Science Communication at Imperial College London. Much of it is still relevant today.
Japan’s uptake of western science over the past 130 years is remarkable, considering its self-imposed isolation until the mid-19th century. But that same science led to the most horrific events in the country’s history.
Much has been written about Japan’s relationship with science and technology (S&T), and nuclear science provides an interesting lens through which to view it. Japan’s relationship with nuclear science is unique, not least because it is the only country to have experienced an atomic bomb. Nuclear science offers the Japanese hope for the future and a powerful reminder of the past.
The Japanese sought the atomic bomb during the war, but their research ultimately failed. If World War 2 was the ‘War of Technology,’ the Allied victory was a triumph of western S&T over Japanese S&T. Yet the atomic bombings have not discouraged the Japanese from pursuing scientific research, nor provoked an aversion to nuclear technology.
S&T have been paramount to the country’s phoenix-like rise from the ashes of Hiroshima and Nagasaki to one of the world’s leading economies. The failures of the war renewed determination to build a stronger Japan using S&T. Nuclear power was promoted as the solution to Japan’s long-term energy problems less than ten years after the atomic bombings. Japan is, paradoxically, a pacifist country that abhors nuclear weapons but has built itself into the world’s third largest producer of nuclear power. Key to this is Japan’s relationship with the United States and the Japanese ability to adopt and adapt, the stand-out feature of the country’s progress over the last 150 years.
Japan and Western science
Japan has been recognised as one of the leading powers in the world since the first decade of the 20th century. The integration of western science, technology and modern industrial techniques played a large part in this, and the rapid uptake was fuelled by a unique period of isolation followed by reintroduction to the rest of the world.
Until the early 17th century, Japan was open to foreign countries and new ideas. Chinese culture held great influence and the beginnings of what would become modern science flowed from translated texts via China and Korea. Christians brought knowledge of astronomy, medicine and navigation. Missionaries gave satisfactory explanations for phenomena such as solar and lunar eclipses, earthquakes and floods, breaking the long-held belief in tenjin kannosetsu, the idea that man’s fate is influenced by natural phenomena.
But the increasing influence of foreign ideas alarmed the ruling Tokugawa shogunate. Through a series of edicts issued between 1633 and 1639, it closed Japan’s borders, isolating the country from foreign visitors and ideas. With the lack of foreign influence, Japanese culture thrived and the period of peace and stability reinforced the shogunate’s power.
The island of Dejima, in the port of Nagasaki, was the sole point of contact with the outside world. Through this aperture, western science in the form of Rangaku (Dutch learning) continued to enter Japan via Dutch traders. Holland came to represent a type of discourse related to imported objects such as clocks, telescopes, microscopes, spectacles, and kaleidoscopes. These helped the Japanese understand the need for precision, and to see the world in new ways.
Perhaps because of the fascination with imported objects, and the government’s wish to increase productivity, Japanese science developed an attachment to technology. It was difficult to give prominence to experiments and laws connected with the essentials of science; scientific work tended to be confined to practical techniques, partly to avoid trouble with the government.
Isolation instilled elements that would facilitate the later assimilation of western science into Japan. The Tokugawa legacy of learning, at its best, consisted of a deep love of learning, a disciplined pursuit of detailed knowledge, an appreciation of rational criticism, and a respect for the scholarly profession. This legacy was manifested in an ever-growing body of professionals with a thorough grounding in traditional learning and science.
The Meiji Restoration (1853–1912)
Isolation lasted for over two centuries. Then in 1853, Commodore Matthew Perry (1794–1858) arrived in Edo (present-day Tokyo) Bay with American steam-powered warships and modern firearms. His ‘black ships’ forced Japan to open its borders and sign trade agreements with western countries. Perry brought gifts, including a miniature steam engine and two wire-telegraph machines. He reported how these were a “triumphant revelation, to a partially enlightened people, of the success of science and enterprise”. Steam and electricity came to characterise western civilisation and power.
Under the Japanese feudal system, the relationship between people was understood only in a vertical manner: superior-inferior. The rulers could not conceive of any relationship with western nations except one in which Japan would either be dominant or dominated. The British victory over China in the First Opium War (1839–42) had demonstrated the East’s vulnerability to western powers. Japan came to recognise the technological superiority of western civilisation and realised that it had to acquire that civilisation’s knowledge. There were times when the development of science meant the development, even survival, of the nation.
Reunion with the west led to the overthrow of the feudal Tokugawa shogunate and the restoration of the Meiji Emperor Mutsuhito in 1868. Japan began a programme of ‘defensive modernisation.’ Under the slogan fukoku kyōhei (‘rich country, strong army’) the government sought to transform Japan into a great power. The Japanese motto was, ‘Seek western knowledge and strengthen the imperial domain.’
At this point, Japan was already one of the world’s most populous and urbanised countries. There was a relatively high rate of literacy among upper-class merchants and farmers who, with proper leadership, were able to participate in the nation’s goals as local agents for marshalling the masses. The relatively uniform social structure and language allowed for easy communications. A nationally integrated economy and transportation network was also in place.
The administratively competent samurai class formed the core of the new Meiji bureaucracy, creating a stable and efficient government and converting their fiefdoms into an efficient network for implementing reform. Physicists and engineers were recruited from the ranks of the samurai, whose traditionally hierarchical and competitive cast of mind was utilised in the management of large research units, focused on the rapid solution of well-defined technical problems.
The goal was to build a modern Japan, achieved not only by borrowing from the west but also by considerable innovation and actively building on existing resources. The Japanese looked for western organisational models that were adopted and adapted along with manufacturing and communications technologies.
Japan adopted western knowledge and institutions in one fell swoop. Schools, universities, the army and navy, and systems of central and local administration, provided the framework through which the cultural heritage of Japan could be translated. Japan was arguably the first nation in the world to implement a nationwide policy for science education and research (1872), introducing modern natural sciences and techniques to the public. Modern educational institutions were created that began to train their own research and teaching staffs.
Japanese students were sent overseas to study and European and American educators were imported to advise on national education and research policy. This was the fastest method of obtaining western knowledge. However, once Japan began its own campaign of imperial expansion, the foreigners were sent home and the students encouraged to spend less time abroad.
The Europeans were increasingly impressed by Japan’s progress. In a 1903 Royal Society presidential address, Norman Lockyer (1836–1920) highlighted Japan’s unprecedented ability to harness science to national, not just personal, needs. In just 35 years (between 1869 and 1904), Japan had moved from a declining feudal order, through national consolidation, and finally to an imperial world power.
In 1905, Japan won the Russo-Japanese War and its period of catch-up appeared complete. It was the first victory of a non-western power over a western power on the basis of superior technical skill and new technology. This came as a shock to the Europeans, who presupposed that material and intellectual development required passing through a fixed sequence of stages, which together constituted the logic underlying European history. The Japanese were bemused by such a superstitious sense of historical destiny. What impressed them was the Greek impulse of opportunistically borrowing from other cultures, improving on what was borrowed and then using it to gain the respect of those cultures.
By 1910, Japan had successfully transformed itself into a unified political state and won two major wars (the Sino-Japanese and Russo-Japanese Wars). It achieved this despite a 200–300 year time-lapse between the western and Japanese scientific and industrial revolutions. The aim to survive the onslaught of western pressure — the motivation behind modernisation in the first place — had been achieved.
The First World War (1914–1918) interrupted the academic exchange between Japan and Europe, cutting Japan off from access to sources and equipment just as it was gaining the confidence to sustain its own scientific enterprise. However, the interruption did prove useful in encouraging greater links between Japanese science and industry. Japan embarked on an altered, more independent course, only to be stymied again by global and domestic depressions. Ultranationalism was the response to repeated setbacks, and it led to Japan’s entry into the Pacific War.
Atoms for war
By the end of the Meiji period Japan had achieved general parity with the west in the world of science. Japan joined World War 2 — a war among advanced nations — as one of the principal powers.
When the discovery of nuclear fission was announced at the end of 1938, it stirred great hopes and fears in the world scientific community. As war spread across Europe, scientists grew anxious that Germany might create an atomic super-weapon. In response to this potential threat, the United States (US) established the Manhattan Project to develop nuclear weapons first.
There were no such fears about Japan, and the Allies made no appreciable effort to gather information on Japanese nuclear research during the war. Scientists at the University of California, Berkeley, who had trained some of Japan’s younger physicists, were confident that Japan did not possess the resources, industrial capacity or large number of qualified scientists needed to succeed. They would prove to be right.
By the 1930s the Japanese were up to the level of most western nations in nuclear physics. Japanese physicists had followed nuclear fission research with great interest, learning what they could from imported western periodicals and foreign correspondence. They were aware of the explosive potential of the fast nuclear chain reaction but, unlike their western counterparts, they did not feel any urgency to develop an atomic bomb, nor were they likely to have thought themselves in a nuclear arms race at the time. Japan’s impetus for nuclear weapons research came not from civilian scientists, as in the US and Germany, but from the military services.
A tale of two militaries
The navy was the first to become interested in nuclear weapons; a modest investigation in 1934 concluded that such weapons were not, at the time, feasible. Nevertheless, technical officers endeavoured to stay informed. The army’s interest began in the spring of 1940, when the Army Aeronautics Department (AAD) explored the possibility of an atomic bomb.
In 1939, rumour spread among top navy officers that scientists in California had succeeded in powering a small turbine with nuclear energy. This, together with reports of a US embargo on uranium exports, convinced the navy to take action. It had already been keen to find a new source of motive power as well as a powerful new weapon. Nuclear power seemed to have potential for both. Captain Yoji Ito of the Navy Technical Research Institute (NTRI) was among the first naval technical officers to take up nuclear research. He raised the possibility of an atomic bomb at an NTRI meeting in November 1941. Those in attendance were reportedly excited by the prospect and wondered if it might be something to emerge in the ‘next’ war. NTRI took up nuclear research in the spring of 1942, forming a special committee to oversee it.
Meanwhile, the army slowly stepped up its nuclear research program. A report of October 1940 concluded that Japan might have sufficient deposits of uranium ore at home and abroad to produce a weapon. But the army did not act until April 1941, when it requested a feasibility study from Tokyo’s Physical and Chemical Research Institute, Riken. The task fell to Japan’s leading nuclear physicist, Yoshio Nishina.
Nishina’s laboratory was the centre of Japanese nuclear physics research at the time, employing many of Japan’s best nuclear physicists. Nishina, more an advocate of pure science, was unenthusiastic about the army assignment. Nevertheless, he was given official responsibility for the army project in August 1942.
In June 1942, the navy lost the Battle of Midway. NTRI was specifically ordered to produce new weapons for the war effort. Ito’s committee was upgraded to the Committee for Research on the Application of Nuclear Physics (CRANP), recruiting some of the most prominent scientists in Japan. In an interesting turn, Yoshio Nishina was consulted and soon made chair.
This put Nishina in an awkward position: in charge of the army’s study but also leader of a similar navy project — both top-secret. Nishina asked his army contacts to unite the military efforts, but this was unlikely because of intense rivalry between the military services. As a result, nuclear fission research proceeded for months under separate lines of authority with no practical efforts taken to facilitate collaboration.
At CRANP’s first meeting (July 1942), Ito asked if the US and Great Britain could produce a nuclear weapon during the war and, if so, could Japan do it first? The US embargo on uranium, thorium and radium exports suggested that the Americans were already at work on an atomic bomb. Opinion was that, for self-defence, Japan should develop one too.
The main problem was acquiring sufficient amounts of uranium. Tokutaro Hagiwara of Kyoto Imperial University had given a lecture in May 1941, concluding that massive quantities of the uranium-235 (U-235) isotope, isolated from uranium-238 (U-238), were necessary for a chain reaction. Adapting uranium separation methods to large-scale, mass production levels would require advanced engineering and vast technological and industrial resources, which only the US, Germany and possibly Great Britain possessed at the time.
No one in Japan knew exactly how much was needed to create a chain reaction and there was no known mechanism to induce it even if they did. Moreover, no one in Japan had ever successfully separated uranium isotopes. CRANP scientists were unable to say whether they could produce a nuclear weapon or how soon they might know. CRANP held its final meeting in March 1943, concluding that, while an atomic bomb was possible, they doubted that Japan or even the US could produce one in time for the war.
This was not, however, the end of the navy’s interest in nuclear research. Technical officers elsewhere in the navy had been separately investigating the potential military applications of nuclear fission. Two captains from the navy Department of Ships took interest and sometime in mid-1942 asked Bunsaku Arakatsu (1890–1973) of Kyoto Imperial University — an ex-pupil of Albert Einstein and Ernest Rutherford — to conduct initial research. Arakatsu was given an initial ¥3,000 (US$750) and an additional ¥5,000 (US$1,250) after the Department of Ships agreed to take primary responsibility. But this was nowhere near enough to launch a viable nuclear weapons programme.
Meanwhile, rumours of American and German nuclear programmes reached top army officials. At a cabinet meeting in early spring 1943, Hideki Tōjō, Prime Minister and Minister of the Army, called for the acceleration of the army’s nuclear research efforts. Nishina was pressed and told that the army also needed a new source of power for aircraft, due to diminishing petroleum supplies. He was promised as much money and materials as he needed. The Riken project was upgraded to an official army programme, NI-gō, in May 1943 with ¥700,000 ($175,000) and a new building allocated to Nishina. NI-gō was given the highest classification of secrecy within the army, but Nishina was able to seek assistance from other physicists as needed.
Meanwhile, the navy high command was also anxious for new weapons to stop the Allied advance in the Pacific. By coincidence (or so it seems) the Department of Ships upgraded Arakatsu’s project around the same time that the army upgraded Nishina’s. In May 1943, Arakatsu’s project was given the official designation F-gō and an additional ¥300,000 ($75,000). Arakatsu was made director of the project, with a team that included future Nobel Prize winner Hideki Yukawa. Arakatsu expressed doubts about the success of the project, but reasoned that the funding and material support would help with the construction of his cyclotron.
Yet the fundamental problems remained: how to make the bomb; how to achieve critical mass; and how much uranium was needed. By this point Japan’s supplies of uranium were minimal, leaving little for basic research. Nishina speculated that about 10kg of U-235, enriched to 50 percent purity, might be enough for a bomb. Arakatsu calculated that one tonne or more of uranium oxide mixed with water would do. But whether they could get that much uranium, or get the process working, was uncertain.
It is unknown whether Nishina or Arakatsu knew of each other’s projects. There was little communication between them, the exception being a single visit Arakatsu paid to Nishina around late 1943 or early 1944. There he learnt that Nishina was pursuing a thermal diffusion method for separating isotopes and therefore chose to work on a separate process using a centrifuge.
Arakatsu struggled with his centrifuges, which spun far slower than the speeds required for uranium separation. Meanwhile, Nishina’s team focused on constructing a Clusius tube to isolate the U-235. They succeeded in spring 1944, but struggled to produce enough uranium hexafluoride gas to put in it. In any case, by November it became apparent that the Clusius tube was ineffective. Even if it had worked, thousands of tubes would have been needed to produce the minimum amount of U-235 necessary, with an energy requirement of one-tenth of the total electricity available to all of Japan. The team also had problems with their cyclotrons, which were to be used for neutron generation. Despite the pessimistic outlook, they carried on, hoping that the research might be of some use after the war.
By the summer of 1944, both army and navy projects were progressing slowly and representatives finally began to discuss collaboration. All military services were having problems producing advanced weapons, resulting in the formation of the Army-Navy Technology Enforcement Committee in September 1944. But rivalry was not easily overcome. It took almost a year of negotiations before a tentative agreement on nuclear research could be reached. The navy finally agreed to take a supporting role in June 1945, after determining that the army project had progressed well beyond theirs.
Nevertheless, the Department of Ships continued to fund Arakatsu, allocating him an additional ¥300,000 just before the end of the war. The money was of little use, as Japan was already all but defeated and the F-gō remained stuck at the isotope separation stage.
On the night of 13 April 1945, Allied bombs struck close to Riken and the NI-gō building was badly damaged by fire. NI-gō was effectively abandoned in May 1945 for a variety of economic, technical and material reasons, not least of which was low uranium stocks. On 21 July 1945, the navy was told that F-gō would be unlikely to succeed in time for the war effort and the project was terminated. Two weeks later the Americans dropped their atomic bombs.
Japan’s failure to produce an atomic bomb, or to anticipate the Americans’ success, revealed a widening gap between western and Japanese nuclear physics during the war. Crucially, the Manhattan Project had access to plentiful uranium-bearing ores as well as an expansive industrial infrastructure to process them in massive quantities. The Allied effort also involved top international physicists such as Oppenheimer, Fermi, and Szilard, driven by fear of a German bomb. Japanese efforts suffered a paucity of resources, isolation from the international community, and a lack of urgency — at least until the later, more desperate, stages of the war.
Their cause was not helped by the absence of internal collaboration. Although both the Japanese army and navy undertook nuclear research, they did so independently and on a comparatively small scale. There was no extensive organisation, nor the capacity to form special agencies at the highest levels of government like the US National Defence Research Council and Office of Scientific Research and Development that facilitated the Manhattan Project. The rivalry between military services meant cooperation came only out of necessity, and then only toward the end of the war, when such efforts were too late.
Despite Nishina’s involvement in both the army and navy feasibility studies, he was never given the ‘Oppenheimer-role’ of coordinating a national effort, nor was there much communication between project scientists. This meant that Japan’s technical resources were never used to their fullest. Osaka Imperial University, for example, had a vast range of equipment and a Clusius tube that could have been combined with Riken’s to form a cascade apparatus. Japan also had more cyclotrons than any country apart from the US, and these could have been converted into mass spectrographs and used to concentrate U-235 by an electromagnetic method, invented by Ernest Lawrence in 1941. Japan’s nuclear weapons program ultimately exposed deficiencies in the organisation and technical abilities of Japanese science. It was unable to adapt to ‘big science’, as demonstrated for the first time by the Manhattan Project.
Japan’s wartime scientists and the bomb
The motivations of Japanese physicists during wartime military research are difficult to gauge. Post-war reflections vary and are often revisionist. Many give the view that research was conducted under pressure and done in order to prevent young scientists from being sent to war.
Japanese physicists, such as Mitsuo Taketani (1911–2000), have argued that their less than active participation indicates passive resistance to the military leadership. Taketani carried out calculations for NI-gō but was at the same time being detained by the Japanese Thought Police for subversive ideas.
It is claimed that, as there seemed little likelihood of producing a bomb, physicists were able to accommodate the demands of the military while maintaining their own values. The military’s requests were seen more as an assigned scholarly project, a means by which related research could also be funded and conducted under the banner of building a bomb. This certainly seemed to be the case for Nishina and Arakatsu, who used military funds to advance the construction of their own cyclotrons.
In wartime Japan there was a tension between loyalty to the nation and a commitment to the internationalisation of science. Physicists had multiple identities, none more so than Yoshio Nishina. Nishina sold the ability of physicists to produce weapons of war in order to attract funding. During the war, Riken had, like many Japanese industrial and research enterprises, grown with Japan’s investment in military research. Much of its work was war-related, from research on cosmic rays to the experiments in the Nuclear Research Lab. Isotopes produced at Riken, including radioactive carbon, sodium, phosphorus, and copper, were used as tracers in other scientific research — especially by biologists who came to work with Nishina. By ‘using’ the military, scientists like Nishina kept Japanese science developing irrespective of the outcome of the war.
Nishina, like a number of Japanese scientists, was also quite nationalistic. His desire to see Japan catch up with the advanced countries of the world gave him an irrepressible drive. But Nishina was also keen to replicate the liberal and democratic environment he had experienced when studying under Niels Bohr at Copenhagen. He was also deeply impressed by American science. The decision to wage war with the US concerned Nishina, who thought that the military had underestimated the strength of the Americans. Paradoxically, the war enabled Nishina to continue the ‘Americanisation’ of Japanese physics with the building of his cyclotrons.
In Japan, as elsewhere in the war, it was not uncommon for senior scientists to attempt to ‘save’ their younger colleagues in order to preserve the next generation of Japanese scientists. Military research meant self-preservation for younger scientists faced with the option of going to war. Both Nishina and Arakatsu used their projects to protect younger scientists from military conscription. For NI-gō, Nishina was allowed to select ten young scientists, who would be granted draft deferrals from the army. When Arakatsu took charge of F-gō, the navy permitted him to request additional technical assistants and junior scientists.
However, there seem to have been few moral misgivings about creating a weapon of mass destruction — and if there were any, they were soon dismissed. Masa Takeuchi, one of Nishina’s recruits, said he was initially taken aback by the idea of working on an atomic bomb. But like many physicists throughout the world, he was drawn by the challenge of nuclear fission research.
Since the war, Japan has maintained a non-nuclear-weapons policy based on its postwar constitution, which pledges not to “possess war-making potential”. This is formally articulated in the Three Non-Nuclear Principles of non-possession, non-production, and non-introduction of nuclear weapons, outlined by Prime Minister Eisaku Satō in 1967 and officially adopted by the government in 1971, though never written into law. Japan also signed the Nuclear Non-Proliferation Treaty in 1970, though it did not ratify the treaty until 1976.
The Japanese public has been understandably averse to the idea of nuclear weapons since Hiroshima and Nagasaki, protesting against the presence of American nuclear weapons on Japanese soil. Anti-nuclear-weapons sentiment was particularly strong after the 1954 Bikini incident, when a Japanese fishing boat was caught in the fallout of a US hydrogen bomb test.
However, since the 1950s the Japanese government has maintained that the possession of strictly defensive nuclear weapons is not unconstitutional. Various Prime Ministers (Kishi in 1957, Ohira in 1979 and Nakasone in 1984) have stated that Japan’s non-nuclear status could be changed. Eisaku Satō, who received the Nobel Peace Prize for his part in Japan’s nuclear non-proliferation efforts, was said to be privately supportive of Japanese development of nuclear weapons — introducing the Three Principles only because of public anti-nuclear sentiment at the time.
Japan maintains a pacifist constitution, but the position has been debated in recent years. The aggressive stances of North Korea, for example, have caused concern. Japanese officials reiterated in 1999 and 2002 that it is acceptable, and may become necessary, for Japan to develop nuclear weapons. With the encouragement of the US, Japan has been taking greater responsibility for its defence. And, at the time of writing, it has been suggested that the US government would not object if Japan chose to develop nuclear weapons.
Atoms for peace
The atomic bombing and the vision of US B-29 bombers in the Tokyo skies convinced many Japanese that they had lost the war because of technological inferiority to the west. Japanese-style technology had been put to the test and failed, as symbolised by the atomic bomb, or rather the lack of one. Yet there remained a belief that scientific research would ultimately contribute to Japan’s long-term economic prosperity. The special sensitivities of atomic energy in Japan distinctly shaped debates over Japanese science policy after the war.
Nuclear research was strictly prohibited during the Allied Occupation (1945–1952), with Japanese scientists and laboratories kept under strict surveillance. But neither the atomic bombings nor the prohibition killed the growing interest of physicists and industrialists in the scientific and commercial possibilities offered by nuclear energy. As early as 1951, Kōji Fushimi, a professor at Osaka University, argued to the Science Council of Japan that the soon-to-be-signed peace treaty should not contain any clauses prohibiting nuclear energy. Although Japan did not seek nuclear arms capability after the war, it was determined to establish a civilian nuclear power program.
The Japanese energy crisis
Japan’s nuclear power program is deeply bound with the question of resource security. Japan relies on imports for 80 percent of its energy requirements and Japanese industry is heavily dependent on imported raw materials and fuels. According to a 2005 white paper on energy by the Ministry of Economy, Trade and Industry, Japan’s energy self-sufficiency rate (the ratio of energy the nation secures solely at home) is just four percent, with hydroelectric power generation the main source of energy. With the inclusion of nuclear power, considered ‘semi-domestic’ energy due to the import of uranium, the rate is still only 19 percent — compared to 73 percent in the US, 51 percent in France and 39 percent in Germany.
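The ‘self-sufficiency rate’ here is simply the share of total primary energy supply secured domestically, with nuclear power optionally counted as ‘semi-domestic’. A minimal sketch of the arithmetic behind the quoted figures for Japan (the 4/15 supply split is an illustrative assumption consistent with the rates above, not data taken from the white paper):

```python
def self_sufficiency(domestic, semi_domestic, total, include_nuclear=False):
    """Percentage of total energy supply secured domestically.

    Nuclear power is treated as 'semi-domestic' (the plants are run at home
    but the uranium is imported), so it counts only when include_nuclear=True.
    """
    secured = domestic + (semi_domestic if include_nuclear else 0)
    return round(100 * secured / total)

# Illustrative supply shares chosen to match the rates quoted above.
domestic = 4       # hydro and other purely domestic sources
nuclear = 15       # 'semi-domestic' nuclear share
total = 100

print(self_sufficiency(domestic, nuclear, total))                        # 4
print(self_sufficiency(domestic, nuclear, total, include_nuclear=True))  # 19
```

On these assumed shares, counting nuclear as semi-domestic is what lifts the rate from 4 to 19 percent, which is the comparison the white paper is drawing.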
Since the early 20th century, Japan’s political and business leaders have been acutely aware of the nation’s vulnerability in resources, particularly after suffering oil embargoes from the US, United Kingdom and the Netherlands in 1941. This was a major motivation for entering the Pacific War.
The 1950s and 1960s were an era of cheap and abundant oil. Japan made the most of the favourable circumstances by switching its energy source from coal to cheap petroleum and shifting from the development of hydroelectric power projects to the expansion of thermal power stations run on imported oil. But the first white paper (1958) of the Science and Technology Agency (STA) predicted that Japan’s energy consumption would increase rapidly over the next two decades. Energy problems, it said, were likely to become the greatest impediment to economic development.
The oil crises of 1973–74 and 1979 highlighted Japan’s dependency on foreign resources. In the post-oil-crisis period, Japan devoted the world’s highest percentage of government research and development (R&D) funding to energy problems, and nuclear power became a national strategic priority.
‘Atoms for Peace’
The physicist Mitsuo Taketani was the first academic to push for the civilian use of atomic energy in Japan, exploring the idea in his book Genshiryoku (Atomic Energy, 1950). He felt that Japan, as the first victim of the atomic bomb, had a right to conduct research on the peaceful uses of atomic energy, with the assistance of overseas countries, who should unconditionally supply Japan with uranium.
On 8 December 1953, President Dwight Eisenhower launched his ‘Atoms for Peace’ plan at the General Assembly of the United Nations. Under the plan, nuclear powers would donate fissionable material to other countries for use in atomic energy. He imagined the provision of abundant electrical energy for power-starved areas of the world, with the US the market leader.
It was only after Eisenhower’s proposal that the Japanese scientific and business communities took nuclear power seriously. The concept of ‘atoms for peace’ provided Japan — and the rest of the world — with the ideal of a prosperous future courtesy of science. The Japanese were able to reconcile the establishment of a nuclear industry with the devastation of Hiroshima and Nagasaki. Eisenhower’s words were seen by many Japanese scientists as rubber-stamping nuclear energy as a safe and efficient way of generating domestic energy. ‘Atoms for Peace’ also presented a good business opportunity. Japan’s rush to establish an atomic energy program was fuelled partly by the desire to take full advantage of nuclear technology transfer from the US under the plan.
In March 1954, the Diet (Japanese parliament) passed a ¥235 million budget to be used for the construction of a nuclear reactor. The Atomic Energy Law was passed in 1955, promoting atomic energy development and its utilisation toward peaceful objectives. On 14 November 1955, the US-Japan Atomic Energy Agreement was signed in Washington, creating an upsurge of interest among Japanese industrial circles in the commercialisation of atomic energy. A hastily organised infrastructure was devised in academic, public, and industrial sectors with a staggering number of bodies established in the following years. These included the Japan Atomic Energy Commission (JAEC, est. 1955), STA (est. 1956) and the Japan Atomic Energy Research Institute (JAERI, est. 1956). Responsibility for commercial nuclear power generation fell to the Ministry of International Trade and Industry (MITI).
As Japan experienced rapid economic growth in the 1960s, private sector research into atomic energy flourished. Funding after the 1970s oil crisis boosted the industry further. By 1986, Japan had 32 nuclear reactors generating 26 percent of the country’s electricity. At the time of writing (August 2006), Japan has 55 reactors and receives around 30–35 percent of its electricity from nuclear power.
A tale of two agencies
Since its conception, the Japanese nuclear industry has fostered two different approaches to nuclear development. Some physicists were committed to the importance of a domestic research base, advocating that atomic energy policy should be based on sound scientific research carried out in Japan. But Japan’s 1950s economic and industrial activities tended to be based on imported technology, and those industrialists who became involved tended to see nuclear energy as another moneymaking venture.
Therefore, two groups came to promote nuclear energy separately under different motivations. The STA and public corporations for research (including JAERI and the Power Reactor and Nuclear Fuel Development Corporation (PNC)) endorsed the domestic development of technology. Meanwhile a MITI-industrial complex, comprising MITI, the nuclear power industry and electric power companies, followed the postwar method of importing foreign technology.
The STA wanted to establish completely autonomous technology from the start. Officials were worried that the strengthening of nuclear non-proliferation treaties might restrict future access to advances in nuclear technology, and so advocated the development of independent nuclear generating technology specially designed to meet Japan’s needs. But political and financial establishments were happy to start new enterprises around the foreign reactors. Both were keen to get started; technologists in the JAEC and STA, as well as the industrial world, were concerned about falling further behind international trends in atomic energy.
The formation of the Japan Atomic Power Company (JAPCO) represented something of a compromise. JAPCO was a joint concern of government and industry, 40 percent funded by private utilities, 20 percent by government utilities and the rest from private sources. Government policy encouraged JAPCO and other electric power companies to use their own funds to build nuclear power stations with proven (i.e. foreign) types of reactors, while government funds were intended to promote R&D to advance Japanese technology.
JAPCO purchased two commercial nuclear power plants, one from the UK and one from the US. The British gas-cooled reactor was chosen for Japan’s first nuclear reactor project in 1956 and situated at JAERI’s site in Tokai-mura, Ibaraki prefecture (north of Tokyo). An American experimental boiling water reactor began operation at Tokai-mura in 1957 and generated electricity by 1963. In 1959, MITI selected a British Calder-Hall model as Japan’s first commercial nuclear reactor. This caused considerable controversy after the same model was involved in a reactor accident at Sellafield in the UK in 1957.
The decision to go ahead with the Calder-Hall reactor marked the beginning of a split between the import and autonomy groups. This widened in the mid-1960s. The STA began investing heavily in national projects developing Advanced Thermal Reactors (ATR) and Fast Breeder Reactors (FBR) for autonomous energy (see later). But the import group sped ahead and in November 1965, commercial nuclear-powered electricity was generated in Japan for the first time.
Numerous problems with the British Calder-Hall reactor led to a switch to American Light Water Reactors (LWR). Through agreements with leading US firms, Japanese companies gathered expertise in nuclear technology. Since the Meiji period, there had been good relations between General Electric and Tokyo Electric Power Company, and between Westinghouse and Kansai Electric Power Company. There was a clamour to develop nuclear reactors using American technology. JAPCO’s first large LWR started operation at Tsuruga, Fukui prefecture in 1970, and this was followed by a Kansai Electric operation in Mihama, also in Fukui, using Westinghouse pressurised water technology. In March 1971, Tokyo Electric started a station using General Electric boiling water technology in Fukushima, Fukushima prefecture. Some of Japan’s most powerful corporations thus came to have a stake in the continued existence of the nuclear industry.
By the 1980s, the STA had started privatising its ATR and FBR projects, the goal being for the STA to conduct the initial R&D and eventually pass the projects on to industry for commercialisation. The nuclear industry became commercially profitable and by 1989 Japan was the fourth largest nuclear country, with 30 percent of total electric power consumption generated through nuclear means.
But the rush to commercialise nuclear power meant that a strong R&D base was not established first and an over-reliance on American expertise was the result. The 1990s saw the end of steady growth in the number of commercial nuclear power reactors, the failure of the STA’s ATR and FBR programs and uproar after a number of accidents.
It has been argued that if the whole industry had cooperated it would have met with greater success in terms of technological development. Most practical advances of the 1960s and 1970s came from the strategy of importing foreign knowledge. Postwar technology transfer to Japan has been successfully carried out, first by importing foreign technology, then by gradually gaining autonomy from the previous position of dependence. Unlike areas such as microelectronics, the STA and its associated public corporations followed their own, ambitious program to develop advanced domestic nuclear technologies such as ATR and FBR. This required larger government expenditure than MITI’s line of development. In the general structure of Japanese R&D funding, however, priority has always been given to the industrial sectors under the protection of MITI.
While MITI’s strategy has been successful, the Japanese nuclear industry is still heavily dependent on imported licenses. This prohibits the possibility of export — the customary goal in other Japanese industrial areas. And most enriched uranium for use in LWR — all but one of Japan’s commercial reactors — is still imported from the US and Europe. If MITI’s line of development had been backed with adequate subsidy, Japan might have reached the stage of technology export by now. Instead, a great deal of government funding went to STA projects.
The key tension is the different ideologies of the governmental organisations at the head of the two groups. As its name suggests, MITI (the Ministry of International Trade and Industry) is the promoter of international trade. The STA, on the other hand, evolved from the wartime Office of Technology, which was concerned by Japan’s scarcity of resources and subsequently encouraged Imperial expansion. The STA believed nuclear energy too important to leave to the unstable private sector — civil strategic resources, like energy and foodstuffs, are public goods and should be nationalised under control of the public sector in case of war. This proved overly cautious, as it became clear that Japan would stay out of future wars. With the rise of the private sector, MITI’s strategy has proved successful. Public science cannot compete with the private sector due to its structure and lack of profit incentives.
The final goal of both groups is actually the same: attainment of technologically independent commercial facilities. The complicated path of Japan’s nuclear development is the result of the dynamics of internal cooperation and conflict. This has produced a dualistic structure worryingly reminiscent of the multi-layered organisation of Japan’s wartime S&T.
Towards an autonomous future
For Japan, the most important variable in nuclear energy is the efficient use of uranium. As much as 99.3 percent of natural uranium (the part consisting of uranium-238) goes to waste in conventional nuclear power technology — with just 0.7 percent (consisting of uranium-235) used as fuel. Enrichment can increase the uranium-235 content to around three percent, but even this implies a massive waste of a limited and expensive raw material. Since Japan is wholly dependent on imported uranium, the principal aim of domestic research has been to design a reactor that would use it as sparingly as possible.
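The scale of this inefficiency can be illustrated with the standard enrichment mass balance. The sketch below is illustrative rather than drawn from the dissertation, and the assay figures (0.711 percent uranium-235 in natural uranium, a 0.25 percent ‘tails’ assay left behind by the enrichment plant) are typical assumed values:

```python
def feed_per_product(x_product, x_feed=0.711, x_tails=0.25):
    """Kilograms of natural uranium feed needed per kilogram of
    enriched product, from the uranium-235 mass balance:
        F * x_feed = P * x_product + (F - P) * x_tails
    All assays are percentages by mass. The assay values here are
    typical assumptions, not figures from the text.
    """
    return (x_product - x_tails) / (x_feed - x_tails)

# Enriching to ~3 percent uranium-235 for a light water reactor:
print(round(feed_per_product(3.0), 1))  # roughly 6 kg of natural uranium per kg of fuel
```

On these assumptions, roughly six kilograms of imported natural uranium must be processed for every kilogram of reactor fuel — which is why a reactor design that stretches uranium further mattered so much to Japanese planners.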
For over 30 years the JAEC has declared the ultimate technical goal of nuclear development to be the establishment of a domestic nuclear fuel cycle to recycle spent nuclear fuel. If this were achieved, the efficiency of atomic energy production, and of energy production overall, would increase substantially, strengthening Japanese economic and national security through a stable energy supply. This has ostensibly been the most influential political goal of nuclear development in Japan.
The majority of Japan’s spent fuel is sent abroad for reprocessing in Britain and France, at a cost. Because of this, emphasis has been placed on developing reprocessing facilities within Japan’s borders. Japan planned to reduce the amount sent abroad from 60 percent in 1980 to 35 percent by 1990. To this end, the country’s first uranium reprocessing plant was established in 1977 at Tokai-mura. But by 1987 only around 10 percent of Japan’s spent fuel could be handled by Tokai-mura. Therefore, a larger plant was opened at Rokkasho in the northern Aomori prefecture in 1995, at a cost of ¥1 trillion (£4,300 million).
However, major efforts into nuclear fuel cycle development have concentrated on ‘fast reactors’ that make more efficient use of uranium. The Nuclear Reactor and Fuel Development Agency (Dônen) started research into Advanced Thermal Converter Reactors (ATR) in 1967. Some ¥23 billion (US$64 million) was spent constructing the Fugen experimental reactor, built in Fukui prefecture on Japan’s coast. It was expected to pave the way for a new generation of power plants by the late 1980s. But Fugen closed soon after opening in 1979 after unexpected technical problems. By then, construction costs had spiralled to more than ¥50 billion and falling oil prices made the expensive technology less attractive. JAEC terminated all ATR projects in August 1995.
Fugen was just a stepping-stone to the government’s more ambitious goal — Fast Breeder Reactor (FBR) technology. These are seen as ‘reactors that create their own fuel’ because they use a mix of uranium and plutonium — obtained from the waste of other nuclear plants — and produce additional plutonium that can in turn be used as a fuel. By converting the noxious wastes of existing nuclear plants into a valuable source of energy, Japan, by the miracle of technology, could be turned into a resource-rich country.
JAERI began work on Jōyō, a small-scale experimental FBR, in 1965, later transferring the project to PNC. Jōyō became operational in 1977 and work proceeded on a larger developmental FBR, Monju, at Tsuruga in the coastal Fukui prefecture. At a cost of ¥600 billion (US$4,300 million), Monju was operational by 1994 and succeeded in generating electricity by August 1995.
Though FBR has been promoted with a considerable portion of the STA budget it has been plagued by delays and cost overruns. There also remain serious questions about safety and the disposal of toxic wastes.
FBR actually creates more plutonium than it consumes, necessitating more and more reactors and making FBR very difficult to abandon once started. Plutonium can of course be used in nuclear weapons and some fear that the Japanese may try to recover the costs of development and dispose of surplus plutonium by selling the technique to other parts of the world — particularly neighbouring Asian countries whose industrial expansion is creating a rapidly growing demand for energy. Japan’s search for resource security could, ironically, lead to political and military insecurity in the region.
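The self-reinforcing dynamic described above can be sketched numerically. In the toy model below, each fuel cycle multiplies the fissile inventory by the reactor’s ‘breeding ratio’; a ratio above one means a growing surplus that must either fuel new reactors or be stockpiled. The breeding ratio, starting inventory and cycle count are illustrative assumptions, not figures from the dissertation:

```python
def inventory_after_cycles(initial_kg, breeding_ratio, cycles):
    """Fissile plutonium inventory after a number of fuel cycles,
    assuming each cycle multiplies the inventory by the breeding
    ratio (fissile atoms created per fissile atom consumed).
    All inputs are illustrative assumptions."""
    inventory = initial_kg
    for _ in range(cycles):
        inventory *= breeding_ratio
    return inventory

# With an assumed breeding ratio of 1.2, one tonne of plutonium grows to:
print(round(inventory_after_cycles(1000, 1.2, 5)))  # 2488 kg after five cycles
```

A conventional reactor corresponds to a ratio below one, where the inventory shrinks each cycle; the moment the ratio exceeds one, the surplus compounds — which is why an FBR programme, once started, is so difficult to abandon.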
In December 1995, a liquid sodium leak occurred at the Monju plant. No one was hurt, but a scandal emerged after PNC officials tried to conceal the seriousness of the accident. Following further problems and a fire in 1997 it is not certain whether Monju will ever operate again. After years of investigations, inspections and modifications and some ¥780 billion (US$6.56 billion) spent on the project so far, JAEC and the Japan Nuclear Cycle Development Institute (JNC) are keen to reopen the reactor — at the time of writing scheduled for 2008. But they have met considerable public resistance and gone through several court battles.
The Monju accident raised questions about the future of plutonium as nuclear fuel in Japan. Many countries have abandoned FBR, yet it is still supported in Japan by prominent bureaucrats and technicians (though Japan’s power-generating companies, as well as consumers, view it with greater caution). The reasons lie in the intricate power plays of Japan’s nuclear industry and the need for bureaucrats to look out for themselves. The demise of FBR would have undermined an STA already under pressure. Nuclear development is the STA’s largest policy domain and without FBR the amount of development under the STA control would be reduced by at least half — at a time when space exploration, STA’s second largest policy domain, is already facing uncertainty.
The government continues to pursue this line of development, regardless of costs or industry concerns. It has plans for a new demonstration reactor in 2015 and is also promoting the use of MOX — the mix of plutonium and uranium used in FBR — for conventional reactors. The government hopes to replace uranium with MOX at 16–18 of Japan’s 52 nuclear reactors by 2010. To this end, it has spent some ¥2 trillion (US$17.2 billion) on another fuel-recycling plant at Rokkasho to open in 2007, with plans for another in Genkai, Saga prefecture, by 2010. Further encouragement has been gained from the US, which in February 2006 announced the resumption of research into reprocessing for the first time since the 1970s.
Against nuclear power
On 1 March 1954, the same day that the first nuclear power budget was passed, the US conducted a hydrogen bomb test at Bikini Atoll. A Japanese fishing boat, the Fukuryu Maru (Lucky Dragon) was caught in the subsequent fallout and its 23 crewmen suffered advanced radiation poisoning. One died after a few months; many of the rest perished in their 40s or 50s from cancer, liver disease or hepatitis. Radiation poisoned fish stocks in the area, with some 457 tonnes of tuna destroyed. The radioactive fallout stretched to the Japanese mainland and as far as Australia, India, parts of Europe and the US.
The Japanese government reacted angrily, but the incident shocked the Japanese public. At the time, knowledge of the harmful effects of nuclear weapons was not widespread because the occupation had suppressed protests against the use of the bomb. Coverage of the Bikini incident sparked grassroots opposition to nuclear policy. Plans to establish an Institute for Nuclear Study in Tanashi, Tokyo prefecture, for example, encountered no major obstacles until March 1954. After Bikini, locals became gravely concerned as to the character, function and environmental effects of the Institute, and their anxiety escalated into a campaign against its construction.
The 1960s saw a change in attitude toward public participation in policy making as local residents became more empowered and able to resist contentious projects. This was seen in many large-scale projects, such as the violent disputes surrounding the building of Narita airport. Still, anti-nuclear opposition tended to be locally organised at the site of reactors, usually at remote locations along the coast where a great deal of money was spent on compensation for the loss of local fishermen’s livelihood. Only in the 1970s was nationwide opposition organised, coinciding with greater public concern over environmental issues.
By the 1980s, however, there was a general complacency about the environment and little nationwide resistance to nuclear power, reflecting a belief that technology would fix any problems that emerged. With greater affluence, Japanese people were more concerned with quality of life issues than those that dominated during reconstruction and high economic growth postwar. In recent years, global environmental problems have received more attention than local pollution and nuclear power has been promoted as a solution to the world’s carbon emissions problem.
However, opposition to nuclear power development rose again after the Chernobyl disaster of April 1986, the fallout of which was detected even in Japan. The movement has gained strength particularly from concerned housewives, who make up a sizeable voting bloc. For the first time, the opposition movement was taken seriously and the nuclear establishment spent a considerable amount of money on public relations.
Blocs of rural voters, such as local fishing cooperatives, have considerable power in Japan. As such, economic appeasements have been offered. The Rokkasho reprocessing plant project promised ¥177 billion (£750 million) in contracts for local companies and 1400 jobs to win approval from sections of the local community. Towns throughout Japan, hoping for government aid to prop up local economies, have put their names forward as candidates to host nuclear waste facilities. Recently, the Japanese government has offered local governments up to ¥6 billion in subsidies if they agree to accept MOX fuel operations by the end of 2006.
Insufficient energy sources are still used as a rationale for nuclear development, but this became less convincing after energy-savings projects of the late 1970s proved an unexpected success. Advocates have also found grounds for argument in the global increase in energy usage, particularly in the developing world.
The nuclear establishment has also stressed the incomparable safety measures of Japanese nuclear technology. Japan has an enviable safety record with far fewer shutdowns due to accidents than the US. Official statistics list 424 notified faults and problems at nuclear power stations between 1968 and 1990. Most incidents are minor, but several serious accidents over the last ten years have heightened public concern.
A fire at Tokai-mura in March 1997 exposed 37 workers to low-level radiation. In July 1999, a leak at a Tsuruga plant produced radiation levels 11,500 times the safety limit. Japan’s worst nuclear accident took place on 30 September 1999, when workers at a Tokai-mura plant mixed too much uranium in a tank, which subsequently went critical. Three workers were exposed to radiation, two of whom died, and hundreds of thousands of residents were forced to stay indoors. In 2004, an accident occurred at a Mihama plant when a corroded pipe split and sprayed workers with steam and boiling water, killing five. The Kansai Electric Power Corporation (Kepco), which runs the plant, later admitted that the pipe had not been properly checked since it was installed in 1976.
Following the 1999 Tokai-mura accident, a Ministry of Labour study said that 15 of 17 leading nuclear facilities had inadequate safety measures. A 2001 white paper on nuclear safety claimed that Japan’s safety record had improved, with accidents falling to 14 in 2001 compared to 30 in 2000. But the Japan Nuclear Safety Commission reported 24 accidents and malfunctions in 2004.
There is growing discontent as to whether the government’s safety inspections are adequate, particularly as the industry deals with the problem of ageing power plants. There are concerns about the safety of workers, especially casual labourers employed by many plants to perform routine jobs in the most heavily contaminated areas. It was only in May 1993 that the Ministry of Labour acknowledged for the first time that the death of a former plant worker had been caused by radiation.
Trust is a major issue after a succession of scandals. The 1995 Monju cover-up was followed in 1999 by revelations that staff at British Nuclear Fuels had falsified data relating to a shipment of mixed uranium-plutonium fuel bound for Japan. After the 1999 Tokai-mura accident, the operating company admitted to using illegal working procedures for four years prior to the accident. In 2002, Tokyo Electric Power Company (Tepco) admitted it had covered up structural damage at its nuclear power plants, resulting in the temporary suspension of 17 plants in April 2003. Recently, Toshiba has been hit by revelations of faked test data at three of its nuclear power plants.
The recent accidents and scandals, on top of a string of minor leakages, fires and safety failures, have inevitably led to demonstrations. The effect has been profound. A government-sponsored survey on public attitudes to nuclear power conducted in 2000 found that just 19 percent of Japanese favoured continued development of nuclear plants and nearly two-thirds opposed further development or wanted Japan to stop nuclear power generation entirely. There has also been unexpected support from young politicians who, unlike their elders, have no ties to the nuclear power lobby and are tired of the scandals.
Court action has been taken on a number of occasions with mixed results. Legal challenges are often filed years before a plant is opened and go on for years after. A legal challenge against the Rokkasho experimental reprocessing plant was filed two years before the plant opened in 1992, but was ultimately dismissed in May 2006. Locals claimed the plant was susceptible to earthquakes and plane crashes. But the same reasons were successful against a new reactor in Kanazawa, Ishikawa prefecture, northwest of Tokyo. Following a 1999 lawsuit, a court ordered the plant to be shut down in March 2006, just nine days after opening. If the verdict is upheld by the high court, it would place enormous pressure on the authorities to close down other reactors as most designs in Japan are similar and the Kanazawa plant is the most modern and, in theory, the safest.
Responsibility for electricity supply and ownership of nuclear fuel and commercial power reactors is concentrated in the hands of private industry. Some scientists and members of the public feel that this tends to make public safety secondary to self-interest. The government’s persistence with FBR has attracted much of the criticism. Along with safety issues, anti-nuclear groups are concerned that worldwide proliferation of plutonium will increase proliferation of nuclear weapons.
The determination of government and industry to promote nuclear power in the face of opposition has been a feature of the nuclear power industry since the beginning. Japan’s nuclear industry has been stubbornly resilient, but its continued development brings with it escalated concerns for the Japanese public. The magnitude of environmental damage caused by an accident at a reprocessing plant or FBR would be far more serious than that at a conventional reactor. If a major accident were to occur in the future the Japanese government and industry may well have no choice but to abandon the promotion of nuclear power.
The advent of nuclear research has shaped Japanese science over the past 60 years. Nuclear fission represented a big test for Japan, after decades of remarkable progress in S&T.
But if this was a test, the Japanese seemed unaware of it. By researching nuclear weapons Japan unknowingly entered an arms race — and finished some distance behind the US and Germany. This may be partly put down to attitude. Japanese physicists and their backers, though very interested in nuclear research, did not take it as a serious war pursuit until it was too late.
The Manhattan Project was propelled by scientists’ deep fear that the Nazis might succeed first. This powerful motivator was absent from Japan’s nuclear weapons programs. Military sponsors were supremely confident of the superiority of the Japanese ‘spirit’ and their destined victory. Japanese physicists, suffering limited resources, were pessimistic of success in the projects during wartime, taking part only at the request of the military. Breakthroughs in the US were certainly unknown to them.
Given the slow and moderate manner in which both army and navy approached nuclear fission research, it seems as if nuclear research was not a high military priority at the time. Atomic bombs were seen as something for the ‘next war’ until urgency required otherwise. Even when that urgency arrived, the financial backing was too late and nowhere near the scale of the Manhattan Project.
Japan’s failure to produce an atomic bomb exposed deficiencies in the organisation and conduct of Japanese science. The reluctance of the two main military organisations to collaborate at a time of national necessity showed a distinct lack of foresight — particularly given the deficiency of resources at the time. This was perhaps the major reason why Japan’s nuclear program never really got off the ground.
The rivalry of the army and navy, and the scientists’ failure to communicate between institutions, even when permitted, harks back to feudal customs like hiden — secretiveness and sectarianism to safeguard their own groups, which Japanese scientists thought they had left behind in the 19th Century. Elements of it are still rife in Japanese society today, as demonstrated by secretive government decision-making and attempted cover-ups in the nuclear power industry.
But the failures of wartime Japanese S&T and the devastation of the atomic bombings did not discourage Japanese faith in science. If anything, they allowed Japan to start again. It seems paradoxical that the only country to fall victim to the atomic bomb would become the world’s third biggest user of nuclear energy. The Japanese oppose nuclear weapons with one hand, but have enthusiastically pursued nuclear energy with the other.
It is the speed with which Japan took up nuclear energy research that is most surprising. Within ten years of the Hiroshima and Nagasaki bombings, the Japanese government began promoting nuclear power for Japan’s energy needs. Then again, perhaps this is not so surprising. In some ways, it seems appropriate that science should provide for Japan’s energy needs. The Charter of Riken, Japan’s foremost research institute, states, “In our country especially, given its dense population and paucity of industrial raw materials, science is really the only means by which industrial development and national power can be made to grow.”
The key motivations behind Japan’s development of nuclear power seem to have been good business opportunity and the pursuit of long-term self-sustaining energy. Like Germany, Japan is densely populated and lacking in natural resources. This was one of the prime motivators of Japan’s entry into the Second World War and the problem remains, with Japan’s energy demands increasing with its economic success. S&T have replaced Imperial expansion as the method to solve this so it seems logical that nuclear energy would be pursued as a possible solution. Since the 1973–74 oil crisis, the Japanese have feared energy starvation, but have come to realise that too much dependence on atomic energy is risky because almost all natural and enriched uranium is imported and nuclear energy by its nature is not a stable energy source.
Discussion of Japan and S&T often throws up two common themes: Japan as a resource-poor nation and Japan as an imitator. Through nuclear development the Japanese sought to solve the former via the latter. Japan’s successful nuclear industry was developed using the same practice as in the Meiji Restoration — importing foreign technology and adapting it. Japan’s history is a combination of domestic development and waves of Chinese and western influence. Indigenisation of foreign technology, diffusion of it through the economy and efforts to encourage Japanese enterprises to take advantage of it have been seen as fundamental to national security since the 19th Century. World War 2 left Japanese institutions, human skills and public attitudes toward S&T remarkably well prepared for the import of western technology in the postwar years. But balancing the need for indigenous development against the relative ease of foreign borrowing has been a problem throughout Japanese history.
Dependency on foreign resources continues to be a worry and a source of conflict within Japan’s nuclear industry. Autonomous energy is the Japanese nuclear industry’s long-term goal, but slow R&D has conflicted with the desire for quick commercial development. This has resulted in a dualistic structure that complicates the long-term future of the industry and needs to be resolved if the industry is to progress. With commercial reactors still dependent on foreign imports and the relative failure of R&D into autonomous (or at least semi-autonomous) reactors, Japan’s nuclear industry has stagnated. Moreover, the industry is losing public approval and trust.
Safety issues are of primary concern. The string of accidents and scandals has put nuclear energy in the spotlight and split opinions in the media. Critics say the safety appraisal process — which takes place before a power plant is even built — is extremely lax, while the inspections carried out afterwards are haphazard. They point to the inadequacy of government regulation and a culture within the industry’s management of covering up mistakes — hiden again. But others say the industry is finally learning from its mistakes, pointing to Kepco’s prompt and open response to the 2004 Mihama accident.
People ask whether nuclear energy is needed any more. Even Mitsuo Taketani, one of the first advocates for nuclear power, has argued that the transition of Japanese industrial structure from heavy to light industry means that there is no real need for nuclear energy. Some are of the opinion that the industry serves mostly to justify its own existence, run by bureaucrats more concerned with business than Japan’s long-term energy resources. It is said that the promotion of nuclear energy is preventing the uptake of renewable energy sources. Japan has some of the most advanced solar energy technology in the world and the government aims to have natural energy sources account for 7 percent of the national energy supply by 2010. But the relative cheapness of conventional nuclear power has made Japanese utility companies reluctant to take up alternative means.
Nuclear power now supplies a third of Japan’s electricity and is so ingrained into the energy system as to make its removal extremely difficult. When 17 Tepco reactors were shut down for safety checks in 2003, it threatened power shortages throughout Tokyo. Japanese business still sees nuclear power as enormously profitable, and continues to make investments at home and abroad.
The expansion of Japan’s nuclear industry is impressive, particularly since it lacks the economic input of the military. Other countries are able to recover some of the costs through the strategic value of nuclear weapons, but Japan’s pacifist postwar constitution forbids it from such uses. Though Japan does not conduct military research, it emphasises R&D for economic development and both government and industry invest heavily. In 2004, Japan spent more of its GDP on R&D than any other nation — 2.12 percent compared to 1.97 percent in the US and 1.22 percent in the UK.
North Korea's aggression has again raised the topic of nuclear weapons. Commentators have speculated as to whether Japan (and South Korea) might feel pressured to develop their own. Few doubt that Japan has the capability: its reactors have produced a reported plutonium stockpile of some 38 tonnes, enough to make thousands of bombs, and more if Japan succeeds in developing FBR or MOX fuel. It has been suggested that if Japan wanted to develop nuclear weapons it would find scientists willing to do the work. A 2003 Nature article claimed that their current reticence stems from an aversion to political involvement rather than from any objection to nuclear weapons. Scientists also seem unwilling to make individual peace pledges if their organisations have not done so. The Japanese say 'the nail that sticks out gets hit by the hammer', and most are keen to conform. Critically, public opinion remains largely against nuclear weapons, at least as far as opinion polls indicate. But according to a 2003 Reuters poll, 42 percent of people favoured revising Article Nine, which renounces the use of force in settling international disputes.
The history of Japanese nuclear science has been a series of dualistic and conflicting relationships: the wartime army and navy, MITI and the STA, atoms for war versus atoms for peace. But one relationship above all will influence Japan’s nuclear future, and it has been a constant factor throughout Japan’s scientific history: the United States.
Commodore Perry’s arrival in 1853 sparked Japan’s love affair with S&T and ignited a desire to catch up with, and surpass, the West, an ambition that still motivates the Japanese today. The last years of the 19th century and the early years of the 20th century saw the establishment of associations between large Japanese and American companies, such as Toshiba and General Electric, Mitsubishi and Westinghouse, and ITT and Nippon Electric. The Allied occupation facilitated further ties between US industry and Japanese utilities and equipment manufacturers, often with militaristic overtones. Inter-firm transfers of technology between the US and Japan have been important ever since, and these associations have been crucial to the development of Japan’s nuclear industry.
American science exerted a great influence on Japanese physicists such as Yoshio Nishina, and the Pacific War lent urgency to the pursuit of nuclear weapons. Though it was the Americans who dropped the atomic bombs on Hiroshima and Nagasaki, for some Japanese the tragedy was arguably more the result of the backwardness of Japanese S&T than of American malice. Such beliefs have facilitated the Americanisation of both Japanese society and science.
The Allied occupation and US policy shaped Japan’s science policy. The 1946 US Atomic Energy Act placed severe restrictions on the dissemination of nuclear research. The prohibition on Japanese nuclear research provided the US with a ready customer when nuclear energy was promoted under ‘Atoms for Peace’. As in Germany’s postwar nuclear energy development, most of Japan’s nuclear technology and materials were imported from the US. Most of Japan’s reactors still operate under US licence and the Americans continue to supply most of Japan’s uranium. This has caused problems, particularly over the reprocessing of nuclear fuel.
Despite its increased defence capability, Japan still relies heavily on the US nuclear umbrella for deterrence. Some feel Japan’s participation in the 2003 Iraq war was intended to maintain that relationship in case of a possible North Korean attack.
The US-Japan relationship continues to be one of the most important bilateral relationships in the world. Underpinning this is the strength of the Japanese economy, the belief that Japan’s future is tied to technological innovation, and the maintenance of a healthy export trade in value-added goods. Whatever Japan’s nuclear future, the US seems certain to be a part of it.