Search results for: condensed matter physics
402 Agricultural Mechanization for Transformation
Authors: Lawrence Gumbe
Abstract:
Kenya Vision 2030 is the country's programme for transformation covering the period 2008 to 2030. Its objective is to help transform Kenya into a newly industrializing, middle-income country, with income exceeding US$10,000, providing a high quality of life to all its citizens by 2030 in a clean and secure environment. Increased agricultural production and productivity is crucial for the realization of Vision 2030. Mechanization of agriculture in order to achieve greater yields is the only way to achieve these objectives. There are contending groups and views on the strategy for agricultural mechanization. The first group comprises those who oppose the widespread adoption of advanced technologies (mostly internal combustion engines and tractors) in agricultural mechanization as entirely inappropriate in most situations in developing countries. This group argues that mechanically powered agricultural mechanization often leads to displacement of labour and hence increased unemployment, and this results in a host of other socio-economic problems, amongst them rural-urban migration, inequitable distribution of wealth and, in many cases, an increase in absolute poverty, as well as balance-of-payments problems due to the need to import machinery, fuel and sometimes technical assistance to manage them. The second group comprises those who view improved hand tools and animal-powered technology as a transitional step between the most rudimentary stage of technological development (characterized by entire reliance on human muscle power) and the advanced technologies (characterized by reliance on tractors and other machinery). The third group comprises those who regard these intermediate technologies (i.e. improved hand tools and draught animal technology in agriculture) as a 'delaying' tactic, and they advocate the use of mechanical technologies as the most appropriate. This group argues that alternatives to the mechanical technologies simply do not exist as a practical matter, or, if they are available, they are inefficient and cannot be compared to the mechanical technologies in terms of economics and productivity. The fourth group advocates a compromise between the second and third groups above. This group views improved hand tools and draught animal technology as more of an 18th-century technology and the modern tractor and combine harvester as too advanced for developing countries. This group has been busy designing an 'intermediate', 'appropriate', 'mini' or 'micro' tractor for use by farmers in developing countries. This paper analyses and draws conclusions on the different agricultural mechanization strategies available to Kenya and other third world countries.
Keywords: agriculture, mechanization, transformation, industrialization
Procedia PDF Downloads 336
401 Use and Effects of Kanban Board from the Aspects of Brothers Furniture Limited
Authors: Kazi Rizvan, Yamin Rekhu
Abstract:
Due to high competitiveness in industries throughout the world, every industry is trying hard to utilize all its resources to keep productivity as high as possible. Many tools have been used to ensure a smoother flow of operations, to balance tasks, to maintain proper schedules and sequences for tasks, and to reduce unproductive time. All of these tools are used to augment productivity within an industry. The Kanban board is one of them, and one of the many important tools of the lean production system. A Kanban board is a visual depiction of the status of tasks; it conveys their actual progress and any issues as well. Using a Kanban board, tasks can be distributed among workers and operation targets can be visually represented to them. In this paper, an example of a Kanban board at Brothers Furniture Limited is presented, showing how the Kanban board system was implemented, how the board was designed, and how it was made easily perceivable for less literate or illiterate workers. The Kanban board was designed for the packing section of Brothers Furniture Limited. It was implemented to represent the task flow to the workers and to reduce the time wasted while workers wondered which task to start after finishing one. The Kanban board comprised seven columns, including a comments column for recording any problems that occurred while working on the tasks. The board was helpful to the workers as it showed the urgency of the tasks. It was also helpful for the store section, as they could understand which products, and in what quantities, could be delivered to the store at any given time. The Kanban board centralized all the information, which paced up the workflow and minimized idle time. Even though many workers were illiterate or less literate, the Kanban board was still explicable to them because the Kanban cards were colored. Since the significance of colors is conveniently interpretable, the colored cards helped a great deal in that matter; the workers did not have to spend time wondering about the meaning of the cards. Even when the workers were not told the significance of the colored cards, they could develop a sense of their meaning, since color alone can prompt the mind to perceive the situation. As a result, the board made clear to the workers what they were required to do, when to do it, and what to do next. The Kanban board reduced excessive time between tasks by setting a day plan for targeted tasks, and it also reduced time during tasks, since the workers were informed of the forthcoming tasks for the day. Being very specific to the tasks, the Kanban board helped the workers stay focused and do their jobs with more precision. As a result, the Kanban board helped achieve an 8.75% increase in productivity over the level before the Kanban board was implemented.
Keywords: color, Kanban board, lean tool, literacy, packing, productivity
Procedia PDF Downloads 232
400 A Pilot Study on the Development and Validation of an Instrument to Evaluate Inpatient Beliefs, Expectations and Attitudes toward Reflexology (IBEAR)-16
Authors: Samuel Attias, Elad Schiff, Zahi Arnon, Eran Ben-Arye, Yael Keshet, Ibrahim Matter, Boker Lital Keinan
Abstract:
Background: Despite the extensive use of manual therapies, reflexology in particular, no validated tools have been developed to evaluate patients' beliefs, attitudes and expectations regarding reflexology. Such tools, however, are essential to improve the results of reflexology treatment by better adjusting it to patients' attitudes and expectations. Such a tool also enables assessing correlations with the clinical results of interventional studies using reflexology. Methods: The IBEAR (Inpatient Beliefs, Expectations and Attitudes toward Reflexology) tool contains 25 questions (8 demographic and 17 specifically addressing reflexology) and was constructed in several stages: brainstorming by a multidisciplinary team of experts; evaluation of each of the proposed questions by the expert team; and assessment of the experts' degree of agreement per question, based on a Likert 1-7 scale (1 – don't agree at all; 7 – agree completely). Cronbach's alpha was computed to evaluate the questionnaire's reliability, while factor analysis was used for further validation (228 patients). The questionnaire was tested and re-tested (48 h apart) on a group of 199 patients to assure clarity and reliability, using the Pearson coefficient and the Kappa test, and was modified based on these results into its final form. Results: After its construction, the IBEAR questionnaire passed the expert group's preliminary consensus, evaluation of the questions' clarity (from 5.1 to 7.0), inner validation (from 5.5 to 7) and structural validation (from 5.5 to 6.75). Factor analysis pointed to two content domains: 4 questions discussing attitudes and expectations versus 5 questions on beliefs and attitudes. Of the 221 questionnaires collected, a Cronbach's alpha coefficient was calculated on nine questions relating to beliefs, expectations, and attitudes regarding reflexology. This measure stood at 0.716 (satisfactory reliability). At the test-retest stage, 199 research participants filled in the questionnaire a second time. The Pearson coefficient for all questions ranged between 0.73 and 0.94 (good to excellent reliability). As for dichotomous answers, Kappa scores ranged between 0.66 and 1.0 (mediocre to high). One of the questions was removed from the IBEAR following questionnaire validation. Conclusions: The present study provides evidence that the proposed IBEAR-16 questionnaire is a valid and reliable tool for the characterization of potential reflexology patients and may be effectively used in settings that include the evaluation of inpatients' beliefs, expectations, and attitudes toward reflexology.
Keywords: reflexology, attitude, expectation, belief, CAM, inpatient
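As an aside for readers unfamiliar with the reliability statistic cited above, the following is a minimal sketch (not the authors' code; the item scores are simulated) of how Cronbach's alpha is computed for a set of Likert-scored questionnaire items:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of summed scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Simulated Likert 1-7 responses: 221 respondents x 9 items, mirroring the
# study design (correlated items via a shared per-respondent baseline)
rng = np.random.default_rng(0)
base = rng.integers(1, 8, size=(221, 1))
items = np.clip(base + rng.integers(-1, 2, size=(221, 9)), 1, 7)
print(f"alpha = {cronbach_alpha(items):.3f}")
```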
Procedia PDF Downloads 228
399 On the Dwindling Supply of the Observable Cosmic Microwave Background Radiation
Authors: Jia-Chao Wang
Abstract:
The cosmic microwave background radiation (CMB) freed during the recombination era can be considered a photon source of short duration: a one-time event that happened everywhere in the universe simultaneously. If space is divided into concentric shells centered at an observer's location, one can imagine that the CMB photons originating from the nearby shells would reach and pass the observer first, and those in shells farther away would follow as time goes forward. In the Big Bang model, space expands rapidly in a time-dependent manner as described by the scale factor. This expansion results in an event horizon coincident with one of the shells, and its radius can be calculated using cosmological calculators available online. Using Planck 2015 results, its value during the recombination era at cosmological time t = 0.379 million years (My) is calculated to be Revent = 56.95 million light-years (Mly). The event horizon sets a boundary beyond which the freed CMB photons will never reach the observer. The photons within the event horizon also exhibit a peculiar behavior. Calculated results show that the CMB observed today was freed in a shell located 41.8 Mly away (inside the boundary set by Revent) at t = 0.379 My. These photons traveled 13.8 billion years (Gy) to reach here. Similarly, the CMB reaching the observer at t = 1, 5, 10, 20, 40, 60, 80, 100 and 120 Gy is calculated to originate from shells at R = 16.98, 29.96, 37.79, 46.47, 53.66, 55.91, 56.62, 56.85 and 56.92 Mly, respectively. The results show that as time goes by, the R value approaches Revent = 56.95 Mly but never exceeds it, consistent with the earlier statement that beyond Revent the freed CMB photons will never reach the observer. The difference Revent - R can be used as a measure of the remaining observable CMB photons. Its value becomes smaller and smaller as R approaches Revent, indicating a dwindling supply of the observable CMB radiation. In this paper, detailed dwindling effects near the event horizon are analyzed with the help of online cosmological calculators based on the lambda cold dark matter (ΛCDM) model. It is demonstrated in the literature that if the CMB is assumed to be a blackbody at recombination (about 3000 K), it will remain so over time under cosmological redshift and homogeneous expansion of space, but with the temperature lowered (2.725 K now). The present result suggests that the observable CMB photon density, besides changing with space expansion, can also be affected by the dwindling supply associated with the event horizon. This raises the question of whether the blackbody spectrum of the CMB at recombination can remain so over time. Being able to explain the blackbody nature of the observed CMB is an important part of the success of the Big Bang model. The present results cast some doubt on that and suggest that the model may have an additional challenge to deal with.
Keywords: blackbody of CMB, CMB radiation, dwindling supply of CMB, event horizon
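For readers who wish to reproduce the quoted horizon figure, the following is a minimal sketch (not the author's online calculator; the flat-ΛCDM parameter values are approximate Planck 2015 numbers assumed for illustration) that integrates the Friedmann expansion history to obtain the proper event-horizon distance at recombination:

```python
import numpy as np
from scipy.integrate import quad

# Approximate Planck 2015 flat-LCDM parameters (assumed for illustration)
H0 = 67.74 / 977.8                 # Hubble constant in Gyr^-1 (from km/s/Mpc)
Om, Or = 0.3089, 9.2e-5            # matter and radiation density today
OL = 1.0 - Om - Or                 # dark-energy density (flatness)

def H(a):
    """Hubble rate H(a) in Gyr^-1 for flat LambdaCDM."""
    return H0 * np.sqrt(Om / a**3 + Or / a**4 + OL)

def event_horizon_proper_gly(a_emit):
    """Proper event-horizon distance at scale factor a_emit, in Gly:
    d = a_emit * c * Int_{a_emit}^{inf} da / (a^2 H(a)), with c = 1 Gly/Gyr."""
    chi, _ = quad(lambda a: 1.0 / (a * a * H(a)), a_emit, np.inf)
    return a_emit * chi

a_rec = 1.0 / 1101.0               # recombination at redshift z ~ 1100
print(f"R_event ~ {1000 * event_horizon_proper_gly(a_rec):.1f} Mly")
# prints roughly 57 Mly, in line with the Revent = 56.95 Mly quoted above
```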
Procedia PDF Downloads 118
398 From Stalemate to Progress: Navigating the Restitution Maze in Belgium and DRCongo
Authors: Gracia Lwanzo Kasongo
Abstract:
In the realm of cultural heritage, few issues loom larger than the ongoing battle for restitution faced by European and African museums. In Belgium, this contentious process was set in motion by two pivotal events: firstly, the resounding revelations of the French report on restitution, which boldly declared that 'over 90% of African cultural heritage resides outside of Africa'; secondly, the seismic impact of the Black Lives Matter movement following the tragic death of George Floyd. These two events unleashed a wave of outrage among Afro-descendants, who viewed the possession of colonial collections as an enduring symbol of colonial dominance and a stark validation of the systemic racism deeply ingrained within Belgian society. The instrumentalization of cultural property as a means of wielding political power is by no means a novel concept. Its roots can be traced back to the constructed justifications that emerged in the 1950s, during which the Royal Museum for Central Africa in Tervuren played a pivotal role as the self-proclaimed 'guardian of Congolese cultural heritage'. This legacy of legitimizing colonial presence permeates the fabric of Belgium's museum reform policies and the structural management of museums in the Democratic Republic of Congo (DRC). Employing a dialectical approach, I embark on an exploration of the intricate historical interplay between the Royal Museum for Central Africa and the Institute of National Museums of Congo. From this vantage point, I delve into the arduous struggles faced by museums in both the DRC and Belgium as they grapple with the complex and contentious issue of cultural heritage restitution. Central to these struggles is the profound quest for meaning and (re)definition of museums, particularly for Congolese and Afro-descendant communities whose identities and narratives have long been marginalized and suppressed. As the narrative unfolds, I shed light on the prospects for cooperation that have emerged from my extensive fieldwork. Within the interplay of historical entanglements, struggles for restitution, and the search for a more inclusive and equitable museum landscape, glimmers of hope emerge. Collaborative efforts and potential avenues for mutual understanding between Belgium and the DRC begin to take shape, offering a beacon of possibility amidst the often tumultuous discourse surrounding cultural heritage.
Keywords: restitution, museum struggles, Belgium, DRCongo
Procedia PDF Downloads 73
397 Environmental Related Mortality Rates through Artificial Intelligence Tools
Authors: Stamatis Zoras, Vasilis Evagelopoulos, Theodoros Staurakas
Abstract:
The association between elevated air pollution levels, extreme climate conditions (temperature, particulate matter, ozone levels, etc.) and health consequences has recently been the focus of a significant number of studies. The association varies depending on the time of year, whether during hot or cold periods, and is most pronounced when extreme air pollution and weather events are observed, e.g., air pollution episodes and persistent heatwaves. It also varies spatially, owing to the different effects of air quality and climate extremes on human health in metropolitan versus rural areas. An air pollutant concentration and a climate extreme take a different form of impact depending on whether the focus area is the countryside or the urban environment. In the built environment, the effects of climate extremes are driven through the resulting microclimate, which must be studied more thoroughly. Variables such as biology and age group may be affected by different environmental factors, such as increased air pollution/noise levels and the overheating of buildings, in comparison to rural areas. Gridded air quality and climate variables derived from the land surface observation network of West Macedonia in Greece will be analysed against mortality data in a spatial format for the region of West Macedonia. Artificial intelligence (AI) tools will be used for data correction and for the prediction of health deterioration under given climatic conditions and air pollution at the local scale. This would reveal the implications for the built environment as against the countryside. The air pollution and climatic data have been collected from meteorological stations and span the period from 2000 to 2009. These will be projected against the mortality rate data in daily, monthly, seasonal and annual grids. The grids will be operated as AI-based warning models for decision makers, in order to map the health conditions in rural and urban areas and to ensure improved awareness in the healthcare system by taking into account the predicted changing climate conditions. Gridded data of climate conditions and air quality levels against mortality rates will be presented through AI-analysed gridded indicators of the implicated variables. An AI-based gridded warning platform at local scales is then developed as a future awareness platform at the regional level.
Keywords: air quality, artificial intelligence, climatic conditions, mortality
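The gridded analysis described above can be schematized as follows; this is a minimal sketch (the file name, column names, and monthly aggregation are assumptions, not the project's code) of correlating a gridded pollutant with mortality per grid cell:

```python
import pandas as pd

# Hypothetical input: one row per (date, grid cell) with station-derived
# PM10, temperature, and death counts for that cell
df = pd.read_csv("west_macedonia_daily.csv", parse_dates=["date"])

# Aggregate the daily grid to monthly means per cell
monthly = (df.assign(month=df["date"].dt.to_period("M"))
             .groupby(["cell", "month"])[["pm10", "temp", "deaths"]]
             .mean())

# Pearson correlation between PM10 and mortality within each grid cell
corr = monthly.groupby(level="cell").apply(lambda g: g["pm10"].corr(g["deaths"]))
print(corr.sort_values(ascending=False).head())
```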
Procedia PDF Downloads 112
396 Origins of the Tattoo: Decoding the Ancient Meanings of Terrestrial Body Art to Establish a Connection between the Natural World and Humans Today
Authors: Sangeet Anand
Abstract:
Body art and tattooing have been practiced as forms of self-expression for centuries, and this study analyzes the pertinence of tattoo culture in our everyday lives and in our ancient past. Individuals of different cultures represent ideas, practices, and elements of their cultures through symbolic representation. These symbols come in all shapes and sizes and can be as simple as the makeup you put on every day or something more permanent, such as a tattoo. In the long run, individuals who choose to display art on their bodies are seeking to express their individuality. In addition, these visuals are ultimately a reflection of what our own cultures deem beautiful, important, and powerful to the human eye. They make us known to the world and give us a plausible identity in an ever-changing world. We have lived through and seen a rise in hippie culture today, and the bodily decoration displayed by this fad has made it seem as though body art is a relatively new visual language. Quite to the contrary, it is not. Through cultural symbolic exploration, we can answer key questions about ideas that have been raised for centuries. Through careful, in-depth interviews, this study takes a broad subject matter (art and symbolism) and distills it into a deeper philosophical connection between the world and its past. The basic methodologies used in this sociocultural study include interview questionnaires and textual analysis, which encompass a subject and interviewer as well as source material. The major findings of this study reveal a distinct connection between cultural heritage and the day-to-day likings of an individual. The participant studied during this project demonstrated a clear passion for hobbies that were practiced even by her ancestors. We can conclude, through these findings, that there is a deeper cultural connection between modern-day humans, the first humans, and the surrounding environments. Our symbols today are a direct reflection of the elements of nature that our human ancestors were exposed to, and, through cultural acceptance, we can adorn ourselves with these representations to help others identify our pasts. Body art embraces different aspects of different cultures and holds significance, tells stories, and persists, even as the human population rapidly integrates. Following this pattern, our human descendants will continue to represent their cultures and identities in the future. Body art is an integral element in understanding how and why people identify with certain aspects of life over others, and it broadens the scope for conducting more analysis cross-culturally.
Keywords: natural, symbolism, tattoo, terrestrial
Procedia PDF Downloads 105
395 Member States 'Perception of Threat' to Migration Crises as a Determinant Factor of Change in Cooperation: A Comparison between the Yugoslav Migration Crisis and the Syrian Refugees' Crisis
Authors: Diego Caballero Vélez
Abstract:
In 1997, the Schengen Convention was incorporated into the mainstream of EU law by the Amsterdam Treaty. It came into effect in 1999 with the abolition of internal border controls in the EU, a milestone in the European integration project. In the meantime, due to the Yugoslav wars, nearly 700,000 asylum applications were filed in European countries, provoking a major refugee crisis. During this period, the opening of Eastern Europe fostered more cooperation and policy-making at the EU level on migration issues. Currently, a similar migratory crisis is taking place in Europe. The Syrian war has caused the most massive influx of immigrants into Europe since World War II. Nevertheless, the EU is adopting migration policies different from those implemented during the Yugoslav migration crisis. The current crisis has not led to a common European position; instead, national responses have been offered on migration policies and on responsibility for border security and asylum-seekers. Many factors can explain this change from a cooperation scenario to a non-cooperation one, such as the economic crisis, but this research is focused on the premise that 'threat perception' lies at the core of some states' grand strategies towards migration and also influences multilateral or unilateral responses. Migration rests at the nexus of three dimensions of security: geopolitical interests, material production, and internal security. According to some scholars, migration policy is an 'integral instrument' of state grand strategy in that context. Political integration in the EU might be altered by the emergence of existential threats. In other words, some areas of European cooperation can be transformed when a 'critical juncture' occurs, for instance a migration crisis. In that instance, Member States could see migration as a matter of threat that modifies their national interests and their willingness to embrace international cooperation. This research will focus on EU Member States' perceptions of the 1990s migration crisis and the current one. The goal is to evaluate to what extent perceptions of threat are one of the main factors explaining the transition from a cooperation scenario to a non-cooperation one in European asylum and security policies. To analyze threat perception in both migration crises, some relevant Member States are treated as case studies and a comparative analysis is carried out based on public opinion polls, public and policy discourse on migration, voting practices and the deconstruction of the migration policies themselves, both at the EU level and the national one.
Keywords: cooperation, migration crisis, national responses, threat perception
Procedia PDF Downloads 239
394 Simons, Ehrlichs and the Case for Polycentricity – Why Growth-Enthusiasts and Growth-Sceptics Must Embrace Polycentricity
Authors: Justus Enninga
Abstract:
Enthusiasts and skeptics about economic growth have little in common in their preferences for institutional arrangements that solve ecological conflicts. This paper argues that agreement between the two opposing schools can be found in the Bloomington School's concept of polycentricity. Growth-enthusiasts, who will be referred to as Simons after the economist Julian Simon, and growth-skeptics, named Ehrlichs after the ecologist Paul R. Ehrlich, both profit from a governance structure where many officials and decision structures are assigned limited and relatively autonomous prerogatives to determine, enforce and alter legal relationships. The paper advances this argument in four steps. First, it provides clarification of what Simons and Ehrlichs mean when they talk about growth and what the arguments for and against growth-enhancing or degrowth policies are, for them and for the other side. Second, the paper advances the concept of polycentricity as first introduced by Michael Polanyi and later refined for the study of governance by the Bloomington School of institutional analysis around the Nobel Prize laureate Elinor Ostrom. The Bloomington School defines polycentricity as a non-hierarchical, institutional, and cultural framework that makes possible the coexistence of multiple centers of decision making with different objectives and values, and that sets the stage for an evolutionary competition between the complementary ideas and methods of those different decision centers. In the third and fourth parts, it is shown how the concept of polycentricity is of crucial importance for growth-enthusiasts and growth-skeptics alike. The shorter third part surveys the literature on growth-enhancing policies and argues that large parts of the literature already accept that polycentric forms of governance like markets, the rule of law and federalism are an important part of economic growth. Part four delves into the more nuanced question of why a stagnant steady-state economy, or even an economy that degrows, will still find polycentric governance desirable. While the majority of degrowth proposals follow a top-down approach requiring direct governmental control, a contrasting bottom-up approach is advanced. A decentralized, polycentric approach is desirable because it allows for the utilization of tacit information dispersed in society and provides an institutionalized discovery process for new solutions to the problem of ecological collective action, no matter whether you belong to the Simons or the Ehrlichs in a green political economy.
Keywords: degrowth, green political theory, polycentricity, institutional robustness
Procedia PDF Downloads 182
393 Spectrogram Pre-Processing to Improve Isotopic Identification to Discriminate Gamma and Neutrons Sources
Authors: Mustafa Alhamdi
Abstract:
The industrial application of classifying gamma-ray and neutron events is investigated in this study using deep machine learning. Identification using a convolutional neural network and a recursive neural network showed a significant improvement in prediction accuracy in a variety of applications. The ability to identify the isotope type and activity from spectral information depends on feature extraction methods, followed by classification. The features extracted from the spectrum profiles aim to find patterns and relationships that represent the actual spectrum energy in a low-dimensional space. Increasing the level of separation between classes in feature space improves the possibility of enhancing classification accuracy. Feature extraction by a neural network is nonlinear in nature, involving a variety of transformations and mathematical optimizations, while principal component analysis depends on linear transformations to extract features and subsequently improve classification accuracy. In this paper, the isotope spectrum information has been preprocessed by finding the frequency components relative to time and using them as a training dataset. The Fourier transform implementation used to extract the frequency components has been optimized with a suitable windowing function. Training and validation samples of different isotope profiles interacting with a CdTe crystal have been simulated using Geant4. The readout electronic noise has been simulated by optimizing the mean and variance of a normal distribution. Ensemble learning, by combining the votes of many models, managed to improve the classification accuracy of the neural networks. The ability to discriminate gamma and neutron events in a single prediction approach has shown high accuracy using deep learning. The paper's findings show that classification accuracy can be improved by applying the spectrogram preprocessing stage to the gamma and neutron spectra of different isotopes. Tuning the deep machine learning models by hyperparameter optimization enhanced the separation in the latent space and provided the ability to extend the number of detected isotopes in the training database. Ensemble learning contributed significantly to improving the final prediction.
Keywords: machine learning, nuclear physics, Monte Carlo simulation, noise estimation, feature extraction, classification
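To make the preprocessing step concrete, here is a minimal sketch (the sampling rate, pulse shape, and window parameters are assumptions for illustration, not the paper's settings) of turning a one-dimensional detector signal into a windowed spectrogram suitable as input to a convolutional network:

```python
import numpy as np
from scipy.signal import spectrogram

fs = 1_000_000                                # sampling rate in Hz (assumed)
t = np.arange(0, 0.01, 1 / fs)
signal = np.exp(-t / 2e-4) * np.sin(2 * np.pi * 50e3 * t)   # toy detector pulse
signal += np.random.normal(0.0, 0.05, signal.size)          # toy readout noise

# A Hann window suppresses spectral leakage at the cost of frequency resolution;
# the choice of window is the 'suitable windowing function' referred to above
f, frames, Sxx = spectrogram(signal, fs=fs, window="hann",
                             nperseg=256, noverlap=192)
features = np.log1p(Sxx)   # log-compressed time-frequency image for the CNN
print(features.shape)      # (n_frequency_bins, n_time_frames)
```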
Procedia PDF Downloads 150
392 Adopting New Knowledge and Approaches to Sustainable Urban Drainage in Saudi Arabia
Authors: Ali Alahmari
Abstract:
Urban drainage in Saudi Arabia is an increasingly challenging issue due to factors such as climate change and rapid urban expansion. The existing infrastructure, based on traditional drainage systems, is not always able to cope with the increased precipitation, sometimes leading to rainwater runoff and floods that cause disturbances and damage to property. Therefore, there is a need to find new ways of managing drainage, such as Sustainable Urban Drainage Systems (SUDS). The research highlights the main driving forces, revealed by the participants, behind the need to adopt new ideas and approaches for urban drainage. While moving towards this, however, certain factors may hinder the aim of using the experiences of other countries and taking advantage of innovative solutions. The research illustrates an initial conceptual model of these factors emerging from the analysis. It identifies some of the fundamental issues behind resistance to change towards the adoption of the concept of sustainability in Saudi Arabia, with Riyadh city as a case study. A qualitative approach was used, whereby, through two phases of fieldwork during 2013 and 2014, twenty-six semi-structured interviews were conducted with a number of representative officials and professionals from key government departments and organisations related to urban drainage management. A Grounded Theory approach was followed to analyse the qualitative data obtained. Resistance to change was classified into, firstly, individual inertia (e.g. familiarity with conventional solutions and approaches, lack of awareness, and treating sustainability as a marginal matter in urban planning), which results in insufficient attention being paid and affects planning and the setting of priorities for development; and secondly, institutionalised inertia (e.g. lack of technical and design specifications for other, unconventional drainage solutions; lack of consideration by decision makers of contributions from other disciplines such as environmental and geographical studies; and routine work and bureaucracy), which contributes to weak decision-making, a weak role for research, and a lack of human resources. It seems that attitudes towards change may have reduced the ability to move forward towards sustainable development, in addition to contributing to difficulties in some aspects of the decision-making process. Thus, the study provides insights into the current situation in Saudi Arabia and contributes to understanding the decisions that are made regarding change.
Keywords: climate change, new knowledge and approaches, resistance to change, Saudi Arabia, SUDS, urban drainage, urban expansion
Procedia PDF Downloads 173
391 Microplastic Storages in Riverbed Sediments: Experimental on the Settling Process and Its Deposits
Authors: Alvarez Barrantes, Robert Dorrell, Christopher Hackney, Anne Baar, Roberto Fernandez, Daniel Parsons
Abstract:
Microplastic particles entering fluvial environments are deposited with natural sediments. Their settling properties can be changed by the absorption or adsorption of contaminants, organic matter, and organisms. These deposits include positively, neutrally, and negatively buoyant particles. This study aims to understand how plastic particles of different densities interact with natural sediments as they settle and how they are stored within the sediment deposit. The results of this study contribute to a better understanding of the deposition of microplastic particles and the associated pollution in rivers. A set of 48 experiments was designed to investigate the settling process of microplastic particles in freshwater. The experimental work describes the vertical variation of cohesive and/or non-cohesive sediment versus microplastic density in the deposited sediment. Each experiment consisted of adding microplastic particles, sediment, and water to a waterproof cardboard tube 24 cm high and 5 cm in diameter. The plastics selected are positively, neutrally, and negatively buoyant. The sediments consist of sand and clay at four different concentrations. The mixture of materials was shaken until thoroughly mixed and then left to settle for 24 hours. After settlement, the tubes were frozen at -20 °C so that they could be cut to measure the thickness of the deposits and analyze the sediment and plastic distribution. The most representative experiments were repeated in a glass tube of the same size to analyse the influence of current flows on the depositional process. Finally, the glass tube experiments were used to study the adsorption of organic materials onto plastic, settling the sample for four months. Defined microplastic layers were identified as the density of the plastic changed. Preliminary results show that most positively buoyant particles floated, neutrally buoyant particles formed a layer above the sediment, and negatively buoyant particles mixed with the sediment. The vertical grain size distribution of the deposits was analysed to determine the variation in deposition with and without plastic. It is expected that positively buoyant particles are trapped in the sediment by current flows and sink due to the adsorption of organic material. Finally, the experiments will explain how microplastic particles, including positively buoyant ones, are stored in natural sediment deposits.
Keywords: microplastic adsorption process, microplastic deposition in natural sediment, microplastic pollution in rivers, storage of positively buoyant microplastic particles
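As background for why particle density controls the layering observed (a standard hydraulics result, not part of the authors' analysis), the terminal settling velocity of a small sphere in the laminar Stokes regime is

$$ w_s = \frac{(\rho_p - \rho_f)\, g\, d^2}{18\, \mu}, $$

where $\rho_p$ is the particle density, $\rho_f$ the fluid density, $g$ the gravitational acceleration, $d$ the particle diameter, and $\mu$ the dynamic viscosity of the water. The sign makes the buoyancy classes explicit: $w_s > 0$ (sinking) for negatively buoyant plastics, $w_s \approx 0$ for neutrally buoyant ones, and $w_s < 0$ (rising) for positively buoyant ones, which is why the latter are only expected in the deposit if trapped by currents or ballasted by adsorbed material.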
Procedia PDF Downloads 194
390 A Framework of Virtualized Software Controller for Smart Manufacturing
Authors: Pin Xiu Chen, Shang Liang Chen
Abstract:
A virtualized software controller is developed in this research to replace traditional hardware control units. This virtualized software controller transfers motion interpolation calculations from the motion control units of end devices to edge computing platforms, thereby reducing the end devices' computational load and hardware requirements and making maintenance and updates easier. The study also applies the concept of microservices, dividing the control system into several small functional modules that are then deployed to a cloud data server. This reduces the interdependency among modules and enhances the overall system's flexibility and scalability. Finally, with containerization technology, the system can be deployed and started in a matter of seconds, which is more efficient than traditional virtual machine deployment methods. Furthermore, this virtualized software controller communicates with end control devices via wireless networks, making the placement of production equipment and the redesign of processes more flexible, no longer limited by physical wiring. To handle the large data flow and maintain low-latency transmission, this study integrates 5G technology, fully utilizing its high speed, wide bandwidth, and low latency to achieve rapid and stable remote machine control. An experimental setup is designed to verify the feasibility and test the performance of this framework. The study designs a smart manufacturing site with a 5G communication architecture, serving as a field for experimental data collection and performance testing. The smart manufacturing site includes one robotic arm, three Computer Numerical Control machine tools, several Input/Output ports, and an edge computing architecture. All machinery information is uploaded to edge computing servers and cloud servers via 5G communication and an Internet of Things framework. After analysis and computation, this information is converted into motion control commands, which are transmitted back to the relevant machinery through 5G communication. The communication time intervals at each stage are calculated using the C++ chrono library to measure the time difference for each command transmission. The relevant test results will be organized and displayed in the full text.
Keywords: 5G, MEC, microservices, virtualized software controller, smart manufacturing
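The timing methodology can be illustrated with a short sketch. The authors measured stage-by-stage intervals with the C++ chrono library; the following Python analogue (host, port, and message format are assumptions) shows the same round-trip measurement pattern for a command sent from an edge server to a machine controller:

```python
import socket
import time

def mean_roundtrip_ms(host: str, port: int, command: bytes, n: int = 100) -> float:
    """Mean round-trip time in milliseconds over n command exchanges."""
    samples = []
    with socket.create_connection((host, port)) as sock:
        for _ in range(n):
            t0 = time.perf_counter()      # high-resolution monotonic clock
            sock.sendall(command)         # transmit the motion command
            sock.recv(1024)               # block until the device acknowledges
            samples.append(time.perf_counter() - t0)
    return 1000.0 * sum(samples) / len(samples)

# Hypothetical usage against a CNC controller endpoint:
# print(mean_roundtrip_ms("192.168.1.50", 9000, b"MOVE X10 Y20\n"))
```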
Procedia PDF Downloads 81
389 Application of Neutron Stimulated Gamma Spectroscopy for Soil Elemental Analysis and Mapping
Authors: Aleksandr Kavetskiy, Galina Yakubova, Nikolay Sargsyan, Stephen A. Prior, H. Allen Torbert
Abstract:
Determining soil elemental content and its distribution (mapping) within a field is a key feature of modern agricultural practice. While traditional chemical analysis is a time-consuming and labor-intensive multi-step process (e.g., sample collection, transport to the laboratory, physical preparation, and chemical analysis), neutron-gamma soil analysis can be performed in situ. This analysis is based on the registration of gamma rays emitted from nuclei upon interaction with neutrons. Soil elements such as Si, C, Fe, O, Al, K, and H (moisture) can be assessed with this method. Data received from the analysis can be used directly for creating soil elemental distribution maps (based on ArcGIS software) suitable for agricultural purposes. The neutron-gamma analysis system developed for field application consists of an MP320 Neutron Generator (Thermo Fisher Scientific, Inc.), 3 sodium iodide gamma detectors (SCIONIX, Inc.) with a total volume of 7 liters, 'split electronics' (XIA, LLC), a power system, and an operational computer. Paired with GPS, this system can be used in scanning mode to acquire gamma spectra while traversing a field. Using the acquired spectra, soil elemental content can be calculated. These data can be combined with geographical coordinates in a geographical information system (i.e., ArcGIS) to produce elemental distribution maps suitable for agricultural purposes. Special software has been developed that acquires gamma spectra, processes and sorts data, calculates soil elemental content, and combines these data with measured geographic coordinates to create soil elemental distribution maps. For example, 5.5 hours were needed to acquire the data necessary for creating a carbon distribution map of an 8.5 ha field. This paper will briefly describe the physics behind the neutron-gamma analysis method, the physical construction of the measurement system, and its main characteristics and modes of operation when conducting field surveys. Soil elemental distribution maps resulting from field surveys will be presented and discussed. These maps were similar to maps created on the basis of chemical analysis and of soil moisture measurements determined by soil electrical conductivity. The maps created by neutron-gamma analysis were reproducible as well. Based on these facts, it can be asserted that neutron-stimulated soil gamma spectroscopy paired with a GPS system is fully applicable for agricultural field mapping of soil elements.
Keywords: ArcGIS mapping, neutron gamma analysis, soil elemental content, soil gamma spectroscopy
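The map-making step (spectra to elemental estimates to a georeferenced grid) can be sketched as follows; the file and column names are assumptions, not the authors' special software:

```python
import pandas as pd

# Hypothetical scan log: one row per acquired spectrum, already reduced to an
# elemental estimate, with the GPS fix recorded at acquisition time
scan = pd.read_csv("survey_scan.csv")   # columns: lon, lat, carbon_pct

# Snap points to ~10 m cells (rough degree equivalent assumed) and average
cell = 0.0001
scan["ix"] = (scan["lon"] / cell).round().astype(int)
scan["iy"] = (scan["lat"] / cell).round().astype(int)
grid = scan.groupby(["ix", "iy"], as_index=False)["carbon_pct"].mean()

# The gridded table can then be exported for rendering in ArcGIS
grid.to_csv("carbon_grid.csv", index=False)
```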
Procedia PDF Downloads 131
388 Research on Tight Sandstone Oil Accumulation Process of the Third Member of Shahejie Formation in Dongpu Depression, China
Authors: Hui Li, Xiongqi Pang
Abstract:
In recent years, tight oil has become a hot spot for unconventional oil and gas exploration and development in the world. The Dongpu Depression is a typical hydrocarbon-rich basin in the southwest of the Bohai Bay Basin, in which tight sandstone oil and gas have been discovered in deep reservoirs, most of them buried more than 3500 m. The distribution and development characteristics of these deep tight sandstone reservoirs need to be studied. The main source rocks in the study area are the dark mudstone and shale of the middle and lower third sub-member of the Shahejie Formation. The Total Organic Carbon (TOC) content of the source rock is between 0.08% and 11.54%, generally higher than 0.6%, and the value of S1+S2 is between 0.04 and 72.93 mg/g, generally higher than 2 mg/g; overall, the source rocks can be evaluated as moderate to good. The kerogen type of the organic matter is predominantly type II1 and II2. Vitrinite reflectance (Ro) is mostly greater than 0.6%, indicating that the source rock has entered the hydrocarbon generation threshold. The physical properties of the reservoir are poor: most reservoirs have a porosity lower than 12% and a permeability of less than 1×10⁻³ μm². The rocks in this area show great heterogeneity, and some areas developed sweet spots with high porosity and permeability. According to SEM, thin-section images, inclusion tests and other analyses, the reservoir was affected by compaction and cementation during the early diagenesis stage (44–31 Ma). This diagenesis produced tight reservoirs in the Huzhuangji, Pucheng and Weicheng areas, while the porosity in the Machang, Qiaokou and Wenliu areas remained over 12%. During stage A of the middle diagenesis phase (31–17 Ma), reservoir porosity in the Machang, Pucheng and Huzhuangji areas increased due to dissolution; after that, the source rock reached the oil generation window for the first phase of hydrocarbon charging (31–23 Ma), which formed conventional oil deposits in the Machang, Qiaokou, Wenliu and Huzhuangji areas and unconventional tight reservoirs in the Pucheng and Weicheng areas. Then, in stage B of the middle diagenesis phase (17–7 Ma), reservoir porosity continued to decrease after dissolution, leaving the reservoirs generally compacted. Since then, a second phase of hydrocarbon charging has been in progress from 7 Ma onward; most of the pools charged and formed during this process are tight sandstone oil reservoirs. In conclusion, tight sandstone oil formed in two patterns in the Dongpu Depression, which can be summarized as the 'densification first, then accumulation' pattern and the 'accumulation first, then densification' pattern.
Keywords: accumulation process, diagenesis, Dongpu Depression, tight sandstone oil
Procedia PDF Downloads 115
387 The Ballistics Case Study of the Enrica Lexie Incident
Authors: Diego Abbo
Abstract:
On February 15, 2012, off the Indian coast of Kerala, in position 091702N-0760180E, bursts of 5.56×45 mm caliber shots were fired from Italian-made Beretta AR/70 assault rifles from the oil tanker Enrica Lexie, flying the Italian flag, towards the Indian fishing boat St. Anthony. Six shots hit the St. Anthony, of which two killed the Indian fishermen Ajesh Pink and Valentine Jelestine. From the analysis of the kinematic engagement of the two ships and from the autopsy and ballistic results of the Indian judicial authorities, it is possible to reconstruct the trajectories of the six aforementioned shots. This essay reconstructs those trajectories, which cannot have been direct shots but must have undergone a rebound on the water. The investigation carried out scientifically demonstrates, as regards intermediate ballistics, the rebound of the shots on the water, the gyrostatic deviation due to the rebound, and the tumbling effect, likewise due to the rebound. In consideration of the four shots that directly impacted the fishing vessel, the present examination proves, with scientific value, that the trajectories could not be downwards but only upwards. Likewise, the trajectories of the two shots that fatally struck the fishermen could not be downwards but only upwards. In fact, this paper demonstrates, with scientific value: the loss of speed of the projectiles due to the rebound on the water; the tumbling effect in the ballistic medium within the two victims; the permanent cavities relevant to injury ballistics and the related ballistic trauma that prevented homeostasis, causing bleeding in one case; the thermo-hardening deformation of the bullet found in Valentine Jelestine's skull; and the upward, not downward, trajectories. The paper constitutes a tool in forensic ballistics in that it manages to reconstruct, from the final resting spots of the projectiles fired, all phases of ballistics: the internal ballistics of the weapons that fired, the intermediate phase, the terminal phase and the penetrative structural one. In general terms, the ballistic reconstruction is based on measurable parameters whose values are contained with certainty within a lower and an upper limit. Therefore, quantities that refer to angles, speed, impact energy and the firing position of the shooter can be identified within the aforementioned limits. Finally, the investigation of the internal bullet track, obtained from the autopsy examination, offers a significant 'lesson learned' and, above all, a starting point for containing or mitigating bleeding as a rescue measure for future gunshot wounds.
Keywords: impact physics, intermediate ballistics, terminal ballistics, tumbling effect
Procedia PDF Downloads 175
386 Understanding the Role of Concussions as a Risk Factor for Multiple Sclerosis
Authors: Alvin Han, Reema Shafi, Alishba Afaq, Jennifer Gommerman, Valeria Ramaglia, Shannon E. Dunn
Abstract:
Adolescents engaged in contact sports can suffer recurrent brain concussions with no loss of consciousness and no need for hospitalization, yet they face the possibility of long-term neurocognitive problems. Recent studies suggest that head concussive injuries during adolescence can also predispose individuals to multiple sclerosis (MS). The underlying mechanisms of how brain concussions predispose to MS are not understood. Here, we hypothesize that: (1) recurrent brain concussions prime microglial cells, the tissue-resident myeloid cells of the brain, setting them up for exacerbated responses when exposed to additional challenges later in life; and (2) brain concussions lead to the sensitization of myelin-specific T cells in the peripheral lymphoid organs. Towards addressing these hypotheses, we implemented a mouse model of closed head injury that uses a weight-drop device. First, we calibrated the model in male 12-week-old mice and established that a weight drop from a 3 cm height induced mild neurological symptoms (mean neurological score of 1.6 ± 0.4 at 1 hour post-injury) from which the mice fully recovered by 72 hours post-trauma. Then, we performed immunohistochemistry on the brains of concussed mice at 72 hours post-trauma. Although the mice had recovered from all neurological symptoms, immunostaining for leukocytes (CD45) and IBA-1 revealed no peripheral immune infiltration but an increase in the intensity of IBA-1+ staining compared to uninjured controls, suggesting that resident microglia had acquired a more active phenotype. This microglial activation was most apparent in the white matter tracts of the brain and in the olfactory bulb. Immunostaining for the microglia-specific homeostatic marker TMEM119 showed a reduction in the TMEM119+ area in the brains of concussed mice compared to uninjured controls, confirming a loss of this homeostatic signal by microglia after injury. Future studies will test whether single or repetitive concussive injury can worsen or accelerate autoimmunity in male and female mice. Understanding these mechanisms will guide the development of timed and targeted therapies to prevent MS from getting started in people at risk.
Keywords: concussion, microglia, microglial priming, multiple sclerosis
Procedia PDF Downloads 100
385 Holographic Art as an Approach to Enhance Visual Communication in Egyptian Community: Experimental Study
Authors: Diaa Ahmed Mohamed Ahmedien
Abstract:
Nowadays, it cannot be denied that the most important interactive art trends have appeared as a result of significant transformations in the modern sciences, and holographic art is no exception; it is considered one of the major contemporary interactive art trends in the visual arts. The holographic technique was first evoked through an application of modern physics in the late 1940s, when Dennis Gabor sought to improve the quality of electron microscope images, and it later arrived at Margaret Benyon's art exhibitions; over more than 70 years in the visual arts it has passed through many procedures to enhance its quality and artistic applications, technically and visually. As a modest extension of these great efforts, this research aimed to make an unusual attempt to enroll a sample of ordinary people in the Egyptian community in a holographic recording program to record their appreciated objects or antiques, and thereby examine their ability to interact with modern techniques in the visual communication arts. This research thus tried to answer three main questions: 'Can we use analog holographic techniques to unleash new theoretical and practical knowledge in interactive arts for the public in the Egyptian community?', 'To what extent can holographic art become familiar to the public and enable them to produce interactive artistic samples?', and 'Are there possibilities to build a holographic interactive program for ordinary people which leads them to enhance their understanding of visual communication in public and become aware of interactive art trends?'. In its first part, this research depended on experimental methods: it was conducted in the laser lab at Cairo University, using a 532 nm Nd:YAG laser and a holographic optical layout, with selected samples of Egyptian people who were asked to record their appreciated objects after they had learned the recording methods. In its second part, it depended on a series of discussion panels conducted to discuss the results and how participants felt about their holographic artistic products, through surveys, questionnaires, note-taking, and critiques of the holographic artworks. Our practical experiments and final discussions lead us to say that this experimental research was able to take most participants through a paradigm shift in their visual and conceptual experiences. This is an attempt to emphasize the role of a mature relationship between art, science and technology, and to spread interactive arts in our community through the latest scientific and artistic developments around the world, particularly among those who have never been enrolled in practical arts programs before.
Keywords: Egyptian community, holographic art, laser art, visual art
Procedia PDF Downloads 479
384 Ecosystem, Environment Being Threatened by the Activities of Major Industries
Authors: Charles Akinola Imolehin
Abstract:
According to world population records, with over 6.6 billion people on Earth and almost a quarter of a million added each day, the scale of human activity and environmental impact is unprecedented. Soaring human population growth over the past century has created a visible challenge to Earth's life-support systems. Critical natural resources such as clean groundwater, fertile topsoil, and biodiversity are diminishing at an exponential rate, orders of magnitude above that at which they can be regenerated. In addition, the world faces an onslaught of other environmental threats, including worsening global climate change, global warming, intensified acid rain, stratospheric ozone depletion and health-threatening pollution. Overpopulation and the use of deleterious technologies combine to increase the scale of human activities to a level that underlies all of these problems. These intensifying trends cannot continue indefinitely; hopefully, through increased understanding and valuation of ecosystems and their services, Earth's basic life-support system will be protected for the future. In truth, human civilization is now the dominant cause of change in the global environment. Now that the human relationship to the Earth has changed so utterly, there is a need to recognize that change and understand its implications. There are two aspects to this challenge which all should grasp. The first is to realize that human activity has the power to harm the Earth and can indeed have global and even permanent effects. The second is to realize that the only way to understand humanity's new role as a co-architect of nature is to see human activities as part of a complex system that does not operate according to the same simple rules of cause and effect we are used to. So, understanding the physical and biological dimensions of the Earth system is an important precondition for making sensible policy to protect our environment, because believing in sustainable development is a matter of reconciling respect for the environment, social equity, and economic profitability. There is also a strong belief that environmental protection is naturally about reducing air and water pollution, but it also includes improving the environmental performance of existing processes. That is why it is important to always have it at the heart of business policy that the environmental problem is not so much our effect on the environment as the relationship of production activities to the environment. There should be positive thinking in all operations to always be environmentally friendly, especially in planning, and sustainability awareness should be considered at all sites of operation.
Keywords: earth's ocean, marine animal life under threat, flooding, critical natural resources polluted
Procedia PDF Downloads 17
383 Management of Soil Borne Plant Diseases Using Agricultural Waste Residues as Green Waste and Organic Amendment
Authors: Temitayo Tosin Alawiye
Abstract:
Plant disease control is important in maintaining plant vigour, grain quantity, and the abundance of food, feed, and fibre produced by farmers all over the world. Farmers make use of different methods of controlling these diseases, but one of the most commonly used methods is the use of chemicals. However, the continuous and excessive use of these agrochemicals poses a danger to the environment, man and wildlife. The greater the population growth, the greater the food security challenge, which leads to more pressure on agronomic growth. Agricultural waste, also known as green waste, comprises the residues from the growing and processing of raw agricultural products such as fruits, vegetables, rice husks, corn cobs, mushroom growth medium waste, and coconut husks. These residues are widely used in land bioremediation and in crop production and protection, which includes disease control. They help the crop by improving soil fertility, increasing soil organic matter, and in many cases reducing the incidence and severity of disease. The objective was to review the agricultural wastes that have worked effectively against certain soil-borne pathogens such as Fusarium oxysporum, Pythium spp. and Rhizoctonia spp., so as to help minimize the use of chemicals. Climate change is a major problem for agriculture and vice versa; climate change and agriculture are interrelated. Changes in climatic conditions are already affecting agriculture, with effects unevenly distributed across the world. They will increase the risk of food insecurity for some vulnerable groups, such as the poor in Sub-Saharan Africa. The food security challenge will become more difficult as the world will need to produce more food, estimated to feed billions of people in the near future, with Africa likely to be hit hardest. In order to surmount this hurdle, smallholder farmers in Africa must embrace climate-smart agricultural techniques and innovations, which include the use of green waste in agriculture, conservation agriculture, pasture and manure management, mulching, and intercropping. Training and retraining of smallholder farmers in the use of green waste to mitigate the effects of climate change should be encouraged. Policy makers, academia, researchers, donors, and farmers should pay more attention to the use of green waste as a way of reducing the incidence and severity of soil-borne plant diseases in order to solve looming food security challenges.
Keywords: agricultural waste, climate change, green energy, soil borne plant disease
Procedia PDF Downloads 268
382 Reconstructing the Segmental System of Proto-Graeco-Phrygian: a Bottom-Up Approach
Authors: Aljoša Šorgo
Abstract:
Recent scholarship on Phrygian has begun to examine more closely the long-held belief that Greek and Phrygian are two very closely related languages. It is now clear that Graeco-Phrygian can be firmly postulated as a subclade of the Indo-European languages. The present paper will focus on the reconstruction of the phonological and phonetic segments of Proto-Graeco-Phrygian (= PGPh.) by providing relevant correspondence sets and reconstructing the classes of segments. The PGPh. basic vowel system consisted of ten phonemic oral vowels: */a e i o u ā ē ī ō ū/. The correspondences of the vowels are clear and leave little open to ambiguity. There were four resonants and two semi-vowels in PGPh.: */r l m n i̯ u̯/, which could appear in both a consonantal and a syllabic function, with the distribution between the two still being phonotactically predictable. Of note is the fact that the segments *m and *n seem to have merged when their phonotactic position would see them used in a syllabic function. Whether the segment resulting from this merger was a nasalized vowel (most likely *[ã]) or a syllabic nasal *[N̥] (underspecified for place of articulation) cannot be determined at this stage. There were three fricatives in PGPh.: */s h ç/. *s and *h are easily identifiable. The existence of *ç, which may seem unexpected, is postulated on the basis of the correspondence Gr. ὅς ~ Phr. yos/ιος. It is of note that Bozzone has previously proposed the existence of *ç ( < PIE *h₁i̯-) in an early stage of Greek even without taking Phrygian data into account. Finally, the system of stops in PGPh. distinguished four places of articulation (labial, dental, velar, and labiovelar) and three phonation types. The question of which three phonation types were actually present in PGPh. is one of great importance for the ongoing debate on the realization of the three series in PIE. Since the matter is still very much in dispute, we ought, at this stage, to endeavour to reconstruct the PGPh. system without recourse to the other IE languages. The three series of correspondences are: 1. Gr. T (= tenuis) ~ Phr. T; 2. Gr. D (= media) ~ Phr. T; 3. Gr. TA (= tenuis aspirata) ~ Phr. M. The first series must clearly be reconstructed as composed of voiceless stops. The second and third series are more problematic. With a bottom-up approach, neither the second nor the third series of correspondences is compatible with simple modal voicing, and the reflexes differ greatly in voice onset time. Rather, the defining feature distinguishing the two series was [±spread glottis], with ancillary vibration of the vocal cords. In PGPh. the second series was undergoing further spreading of the glottis. As the two languages split, this process would continue, but be affected by dissimilar changes in VOT, which was ultimately phonemicized in both languages as the defining feature distinguishing between their series of stops.
Keywords: bottom-up reconstruction, Proto-Graeco-Phrygian, spread glottis, syllabic resonant
Procedia PDF Downloads 48
381 Managing Shallow Gas for Offshore Platforms via Fit-For-Purpose Solutions: Case Study for Offshore Malaysia
Authors: Noorizal Huang, Christian Girsang, Mohamad Razi Mansoor
Abstract:
Shallow gas seepage was first spotted at a central processing platform offshore Malaysia in 2010, referred to as Platform T in this paper. Frequent monitoring of the gas seepage was performed through remotely operated vehicle (ROV) baseline surveys, and a comprehensive geophysical survey was conducted to understand the characteristics of the gas seepage and to ensure that the integrity of the foundation at Platform T was not compromised. At that time, the origin of the gas was unknown. A soil investigation campaign was performed in 2016 to study the origin of the gas seepage. Two boreholes were drilled: a composite borehole to 150 m below the seabed for the purpose of soil sampling and in-situ testing, and a pilot hole to 155 m below the seabed, which was later converted to a fit-for-purpose relief well serving as an alternative migration path for the gas. During the soil investigation campaign, dissipation tests were performed in several layers that were potentially the source or migration path of the gas. Five (5) soil samples were set aside for headspace testing to identify the gas type, which subsequently could be used to identify the origin of the gas. Dissipation tests performed at four depth intervals indicate pore water pressures of less than 20% of the effective vertical stress, which appeared to be still decreasing when the tests were stopped. It was concluded that low to negligible excess pore pressure exists in the clayey silt layers. Results from the headspace tests show the presence of methane corresponding to the clayey silt layers reported in the boring logs. The gas most likely comes from biogenic sources, feeding on organic matter in situ over a large depth range. It is unlikely that there are large pockets of gas in the soil, given its homogeneous clayey nature and the lack of excess pore pressure in the other permeable clayey silt layers encountered. Instead, it is more likely that when pore water at a certain depth encounters a more permeable path, such as a borehole, it rises up through this path due to the temperature gradient in the soil. As the water rises, the pressure decreases, which could cause gases dissolved in the water to come out of solution and form bubbles. As a result, the gas will have no impact on the integrity of the foundation at Platform T. The fit-for-purpose relief well design, as well as the adoption of headspace testing, can be used to address the shallow gas issue at Platform T in a cost-effective and efficient manner.
Keywords: dissipation test, headspace test, excess pore pressure, relief well, shallow gas
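For readers unfamiliar with the criterion quoted above, the sketch below shows the arithmetic behind the "less than 20% of effective vertical stress" observation. The unit weight, depth, and measured excess pressure are hypothetical illustrative values, not the Platform T measurements.

```python
# Illustrative sketch: excess pore pressure ratio against effective vertical
# stress, taking sigma_v' = (gamma_sat - gamma_w) * z for a uniform deposit.

GAMMA_W = 9.81  # unit weight of water, kN/m^3

def excess_pore_pressure_ratio(u_excess_kpa: float, depth_m: float,
                               gamma_sat: float = 18.0) -> float:
    """Ratio of measured excess pore pressure to effective vertical stress."""
    sigma_v_eff = (gamma_sat - GAMMA_W) * depth_m  # kPa
    return u_excess_kpa / sigma_v_eff

# Example: 25 kPa excess pressure measured at 30 m below seabed
ratio = excess_pore_pressure_ratio(25.0, 30.0)
print(f"u/sigma_v' = {ratio:.2f}")  # ~0.10, i.e. below the 20% threshold
```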
Procedia PDF Downloads 272
380 Geothermal Resources to Ensure Energy Security During Climate Change
Authors: Debasmita Misra, Arthur Nash
Abstract:
Energy security and sufficiency enable the economic development and welfare of a nation or a society. Currently, the global energy system is dominated by fossil fuels, a non-renewable energy resource, which leaves energy security vulnerable. Hence, many nations have begun augmenting their energy systems with renewable energy resources such as solar, wind, biomass, and hydro. However, under climate change, the future sustainability of some of these renewable energy resources is a matter of concern. Geothermal energy resources have been underexplored or underexploited in global renewable energy production and security, although they are gaining attractiveness as a renewable energy resource. The question is whether geothermal energy resources are more sustainable than other renewable energy resources. High-temperature reservoirs (> 220 °F) can produce electricity from flash/dry steam plants as well as binary cycle production facilities. Most of the world's high-enthalpy geothermal resources lie within the seismo-tectonic belt. However, exploration for geothermal energy is of great importance for conventional geothermal systems in order to improve their economic viability. In recent years, there has been an increase in the use and development of several exploration methods for geothermal resources, such as seismic or electromagnetic methods. The thermal infrared band of Landsat reflects land surface temperature differences, so ETM+ data with specific grey-stretch enhancement have been used to explore for underground hot water. Another way of exploring for potential power is fairway play analysis, applicable to sites without surface expression and in rift zones. Utilizing this type of analysis can improve the success rate of project development by reducing exploration costs. Identifying the basin-wide distribution of the geologic factors that control the geothermal environment would help in identifying the controls on resource concentration aside from heat flow, thus improving the probability of success. The first step is compiling existing geophysical data. This leads to constructing conceptual models of potential geothermal concentrations, which can then be utilized in creating a geodatabase to analyze risk maps. Geospatial analysis and other GIS tools can be used in such efforts to produce spatial distribution maps. The goal of this paper is to discuss how climate change may impact renewable energy resources and how a synthesized analysis could be developed for geothermal resources to ensure sustainable and cost-effective exploitation of the resource.
Keywords: exploration, geothermal, renewable energy, sustainable
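As an illustration of the risk-map step, the sketch below shows a minimal weighted-overlay calculation of the kind used in play-fairway analysis. The evidence layers (heat flow, fault density, a permeability proxy) and their weights are assumptions for illustration only; a real analysis would read rasters from the geodatabase described above.

```python
# Minimal weighted-overlay sketch: normalize evidence grids to 0-1 and
# combine them into a single favorability map.
import numpy as np

def fairway_favorability(heat_flow, fault_density, permeability,
                         weights=(0.5, 0.3, 0.2)):
    """Combine normalized evidence grids into a 0-1 favorability map."""
    def normalize(grid):
        g = np.asarray(grid, dtype=float)
        return (g - g.min()) / (g.max() - g.min())

    layers = [normalize(g) for g in (heat_flow, fault_density, permeability)]
    return sum(w * layer for w, layer in zip(weights, layers))

# Toy 2x2 grids standing in for rasters
favorability = fairway_favorability(
    heat_flow=[[80, 120], [60, 150]],        # mW/m^2
    fault_density=[[0.1, 0.4], [0.2, 0.9]],  # km/km^2
    permeability=[[1, 3], [2, 5]],           # arbitrary proxy
)
print(favorability)
```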
Procedia PDF Downloads 152
379 Systematic Mapping Study of Digitization and Analysis of Manufacturing Data
Authors: R. Clancy, M. Ahern, D. O’Sullivan, K. Bruton
Abstract:
The manufacturing industry is currently undergoing a digital transformation as part of the mega-trend Industry 4.0. As part of this phase of the industrial revolution, traditional manufacturing processes are being combined with digital technologies to achieve smarter and more efficient production. To successfully digitally transform a manufacturing facility, the processes must first be digitized: the conversion of information from an analogue format to a digital format. The objective of this study was to explore the research area of digitizing manufacturing data as part of the worldwide paradigm, Industry 4.0. The formal methodology of a systematic mapping study was utilized to capture a representative sample of the research area and assess its current state. Specific research questions were defined to assess the key benefits and limitations associated with the digitization of manufacturing data. Research papers were classified according to the type of research and type of contribution to the research area. Upon analyzing the 54 papers identified in this area, it was noted that 23 of them originated in Germany. This is an unsurprising finding, as Industry 4.0 is originally a German strategy, with strong supporting policy instruments utilized in Germany to aid its implementation. It was also found that the Fraunhofer Institute for Mechatronic Systems Design, in collaboration with the University of Paderborn in Germany, was the most frequent contributing institution, with three papers published. The literature suggested future research directions and highlighted one specific gap in the area: there exists an unresolved divide between the data science experts and the manufacturing process experts in industry. Data analytics expertise is not useful unless the manufacturing process information is utilized; a legitimate understanding of the data is crucial to perform accurate analytics and gain true, valuable insights into the manufacturing process. There lies a gap between the manufacturing operations and the information technology/data analytics departments within enterprises, which was borne out by the results of many of the case studies reviewed as part of this work. To test the concept of this gap existing, the researcher initiated an industrial case study in which they embedded themselves between the subject matter expert of the manufacturing process and the data scientist. Of the papers resulting from the systematic mapping study, 12 contributed a framework, another 12 were based on a case study, and 11 focused on theory; however, only three papers contributed a methodology. This provides further evidence for the need for an industry-focused methodology for digitizing and analyzing manufacturing data, which will be developed in future research.
Keywords: analytics, digitization, industry 4.0, manufacturing
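The classification step of a systematic mapping study reduces to tallying papers along the chosen facets; a minimal sketch follows, with hypothetical records standing in for the 54 reviewed papers.

```python
# Illustrative sketch: tallying mapped papers by contribution type and
# country of origin, the facets used in the study described above.
from collections import Counter

papers = [
    {"country": "Germany", "contribution": "framework"},
    {"country": "Germany", "contribution": "case study"},
    {"country": "Ireland", "contribution": "methodology"},
    # ... one record per reviewed paper
]

by_country = Counter(p["country"] for p in papers)
by_contribution = Counter(p["contribution"] for p in papers)

print(by_country.most_common(3))      # which countries dominate the area
print(by_contribution.most_common())  # e.g. frameworks vs methodologies
```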
Procedia PDF Downloads 111
378 Fahr Disease vs Fahr Syndrome in the Field of a Case Report
Authors: Angelis P. Barlampas
Abstract:
Objective: The confusion of terms is a common practice in many situations of everyday life. But in some circumstances, such as in medicine, the precise meaning of a word carries a critical role for the health of the patient. Fahr disease and Fahr syndrome are often falsely used interchangeably, but they are two different conditions with different natural histories, different etiologies, and different medical management. A case of the seldom-seen Fahr disease is presented, and a comparison with the more common Fahr syndrome follows. Materials and method: A 72-year-old patient came to the emergency department complaining of non-specific mental disturbances, such as anxiety, difficulty concentrating, and tremor. The problems had a long course, but he had the impression that they had been getting worse lately, so he decided to have them checked. Past history and laboratory tests were unremarkable. A computed tomography examination was then ordered. Results: The CT exam showed bilateral, hyperattenuating areas of heavy, dense calcium-type deposits in the basal ganglia, striatum, pallidum, thalami, the dentate nucleus, and the cerebral white matter of the frontal, parietal, and occipital lobes, as well as small areas of the pons. Taking into account the absence of any known preexisting illness and the fact that the emergency laboratory tests were without findings, the rare Fahr disease was hypothesized. The suspicion was confirmed by further, more specific tests, which excluded any other condition that could share the same radiological image. Differentiating between Fahr disease and Fahr syndrome: Fahr disease is primarily autosomal dominant, with symmetrical and bilateral intracranial calcifications; the patient is healthy until middle age; biochemical abnormalities are absent; and the family history is consistent with autosomal dominant inheritance. Fahr syndrome presents earlier, between 30 and 40 years old, though it may appear at any age, also with symmetrical and bilateral intracranial calcifications; it is accompanied by endocrinopathies (idiopathic hypoparathyroidism, secondary hypoparathyroidism, hyperparathyroidism, pseudohypoparathyroidism, pseudopseudohypoparathyroidism, etc.) and by abnormal laboratory or imaging findings. Conclusion: Fahr disease and Fahr syndrome are not the same illness, although this is not well known to inexperienced doctors. As clinical radiologists, we have to inform our colleagues when a radiological image, along with the patient's history, probably implies a rare condition rather than something more usual, and prompt the investigation along the right route. In our case, a genetic test could have been done earlier to reveal the problem, thus avoiding unnecessary specialized tests that cost time and are uncomfortable for the patient.
Keywords: fahr disease, fahr syndrome, CT, brain calcifications
Procedia PDF Downloads 61
377 Experimental Investigation of the Thermal Conductivity of Neodymium and Samarium Melts by a Laser Flash Technique
Authors: Igor V. Savchenko, Dmitrii A. Samoshkin
Abstract:
The active study of the properties of lanthanides began in the late 1950s, when methods for their purification were developed and metals with a relatively low content of impurities were obtained. Nevertheless, to date, many properties of the rare earth metals (REM) have not been experimentally investigated, or have been insufficiently studied. Currently, the thermal conductivity and thermal diffusivity of lanthanides have been studied most thoroughly in the low-temperature region and at moderate temperatures (near 293 K). In the high-temperature region corresponding to the solid phase, data on the thermophysical characteristics of the REM are fragmentary and in some cases contradictory. Analysis of the literature showed that the data on the thermal conductivity and thermal diffusivity of the light REM in the liquid state are few in number and not very informative (only one point corresponds to the liquid-state region), that they are contradictory (the nature of the change in thermal conductivity with temperature is not reproduced), and that the measurement results diverge significantly beyond the limits of the combined errors. Our experimental results therefore fill this gap and clarify the existing information on the heat transfer coefficients of neodymium and samarium in a wide temperature range from the melting point up to 1770 K. The thermal conductivity of the investigated metallic melts was measured by the laser flash technique on an automated experimental setup LFA-427. A neodymium sample of brand NM-1 (99.21 wt % purity) and a samarium sample of brand SmM-1 (99.94 wt % purity) were cut from metal ingots and then annealed in a vacuum (1 mPa) at a temperature of 1400 K for 3 hours. Specially designed tantalum measuring cells were used for the experiments. Sealing of the cell with a sample inside it was carried out by argon-arc welding in the protective atmosphere of a glovebox. The glovebox was filled with argon of 99.998 vol. % purity; the argon was additionally purified by continuously running it through titanium sponge heated to 900–1000 K. The overall systematic error in determining the thermal conductivity of the investigated metallic melts was 2–5%. Approximation dependences and reference tables of the thermal conductivity and thermal diffusivity coefficients were developed. New reliable experimental data on the transport properties of the REM and their changes in phase transitions can serve as a scientific basis for optimizing the industrial processes of production and use of these materials, and are also of interest for the theory of the thermophysical properties of substances, the physics of metals and liquids, and phase transformations.
Keywords: high temperatures, laser flash technique, liquid state, metallic melt, rare earth metals, thermal conductivity, thermal diffusivity
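The data reduction behind the laser flash technique can be sketched with the classical Parker relation, where thermal diffusivity follows from the sample thickness and the half-rise time of the rear-face temperature signal, and conductivity follows as k = αρc_p. The sample thickness, half-rise time, density, and heat capacity below are hypothetical placeholders, not the reported Nd or Sm values.

```python
# Minimal sketch of laser-flash data reduction (Parker relation).

def thermal_diffusivity(thickness_m: float, t_half_s: float) -> float:
    """alpha = 0.1388 * L^2 / t_half, in m^2/s."""
    return 0.1388 * thickness_m ** 2 / t_half_s

def thermal_conductivity(alpha: float, density: float, cp: float) -> float:
    """k = alpha * rho * cp, in W/(m K)."""
    return alpha * density * cp

alpha = thermal_diffusivity(thickness_m=2e-3, t_half_s=0.05)
k = thermal_conductivity(alpha, density=7000.0, cp=190.0)  # rough melt values
print(f"alpha = {alpha:.2e} m^2/s, k = {k:.1f} W/(m K)")
```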
Procedia PDF Downloads 198
376 Photocatalytic Disintegration of Naphthalene and Naphthalene-Similar Compounds in Indoor Air
Authors: Tobias Schnabel
Abstract:
Naphthalene and naphthalene-similar compounds are a common problem in the indoor air of buildings from the 1960s and 1970s in Germany. Tar-containing roof felt was often laid under the concrete floor to prevent moisture from coming up through the floor. This tar-containing roof felt has high concentrations of PAH (polycyclic aromatic hydrocarbons) and naphthalene. Naphthalene evaporates easily and contaminates the indoor air. Especially after renovations and the energy-efficiency modernization of the buildings, the naphthalene concentration rises, because no forced air exchange can take place. Because of this problem, it is often necessary to replace the floors after renovation of the buildings. The MFPA Weimar (materials research and testing facility), in cooperation with LEJ GmbH and Reichmann Gebäudetechnik GmbH, developed a technical solution for the disintegration of naphthalene and naphthalene-similar compounds in indoor air by photocatalytic reforming. Photocatalytic systems produce active oxygen species (hydroxyl radicals) by irradiating semiconductors at a wavelength corresponding to their bandgap. The light energy separates the charges in the semiconductor, producing free electrons in the conduction band and defect electrons (holes). The defect electrons can react with hydroxide ions to form hydroxyl radicals. The hydroxyl radicals produced are a strong oxidation agent and can oxidize organic matter to carbon dioxide and water. During the research, new titanium dioxide catalyst surface coatings were developed. This coating technology allows very porous titanium dioxide layers to be produced on temperature-stable carrier materials. The porosity allows the naphthalene to be easily adsorbed by the surface coating, which accelerates the reaction of the heterogeneous photocatalysis. The photocatalytic reaction is induced by high-power, high-efficiency UV-A (ultraviolet) LEDs with a wavelength of 365 nm. Various tests in emission chambers and on the reformer itself show that a reduction of naphthalene at relevant concentrations between 2 and 250 µg/m³ is possible. The disintegration rate was at least 80%. To reduce the concentration of naphthalene from 30 µg/m³ to a level below 5 µg/m³ in a usual 50 m² classroom, an energy of 6 kWh is needed. The benefit of photocatalytic indoor air treatment is that every organic compound in the air can be disintegrated and reduced. The use of new photocatalytic materials in combination with highly efficient UV LEDs makes a safe and energy-efficient reduction of organic compounds in indoor air possible. At the moment, the air cleaning systems are taking the step from the prototype stage into use in real buildings.
Keywords: naphthalene, titanium dioxide, indoor air, photocatalysis
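To make such sizing figures concrete, here is a sketch under an assumed well-mixed single-zone model with a clean-air delivery rate (CADR), in which the concentration decays as C(t) = C₀·exp(−CADR·t/V). The CADR, room height, and device power below are illustrative assumptions, not measured values from the reformer tests.

```python
# Illustrative single-zone decay sketch (assumed model, not the authors').
import math

def time_to_target(c0: float, c_target: float, cadr_m3_h: float,
                   volume_m3: float) -> float:
    """Hours needed to reduce concentration from c0 to c_target."""
    return volume_m3 / cadr_m3_h * math.log(c0 / c_target)

# Classroom of 50 m^2 floor area with an assumed 3 m ceiling -> 150 m^3
hours = time_to_target(c0=30.0, c_target=5.0, cadr_m3_h=100.0, volume_m3=150.0)
energy_kwh = hours * 0.25  # assuming a hypothetical 250 W device
print(f"{hours:.1f} h, about {energy_kwh:.2f} kWh")
```

The result depends strongly on the assumed CADR and device power; the 6 kWh figure quoted above presumably reflects the actual reformer's throughput and re-emission from the floor.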
Procedia PDF Downloads 142
375 Resonant Tunnelling Diode Output Characteristics Dependence on Structural Parameters: Simulations Based on Non-Equilibrium Green Functions
Authors: Saif Alomari
Abstract:
The paper aims at giving physical and mathematical descriptions of how the structural parameters of a resonant tunnelling diode (RTD) affect its output characteristics: specifically, the values of the peak voltage, peak current, peak-to-valley current ratio (PVCR), and the differences between the peak and valley voltages and currents, ΔV and ΔI. A simulation-based approach using the Non-Equilibrium Green Function (NEGF) formalism, based on the Silvaco ATLAS simulator, is employed to conduct a series of designed experiments. These experiments show how the doping concentrations in the emitter and collector layers, their thicknesses, and the widths of the barriers and the quantum well influence the above-mentioned output characteristics. Each of these parameters was systematically changed while holding the others fixed in each set of experiments. Factorial experiments are outside the scope of this work and will be investigated in the future. The physics involved in the operation of the device is thoroughly explained, and mathematical models based on curve fitting and the underlying physical principles are deduced. The models can be used to design devices with predictable output characteristics; such models were found to be absent from the literature the author scanned. Results show that the doping concentration in each region has an effect on the value of the peak voltage. It is found that increasing the carrier concentration in the collector region shifts the peak to lower values, whereas increasing it in the emitter shifts the peak to higher values. In the collector's case, the shift is controlled either by the built-in potential resulting from the concentration gradient or by the conductivity enhancement in the collector. The shift to higher voltages is found to be related also to the location of the Fermi level. The thicknesses of these layers play a role in the location of the peak as well: increasing the thickness of each region shifts the peak to higher values up to a specific characteristic length, beyond which the peak becomes independent of the thickness. Finally, it is shown that the thickness of the barriers can be optimized for a particular well width to produce the highest PVCR or the highest ΔV and ΔI. The location of the peak voltage is important in optoelectronic applications of RTDs, where the operating point of the device is usually the peak voltage point. Furthermore, the PVCR, ΔV, and ΔI are of great importance for building RTD-based oscillators, as they affect the frequency response and output power of the oscillator.
Keywords: peak to valley ratio, peak voltage shift, resonant tunneling diodes, structural parameters
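As an aside on how these figures of merit are extracted, the sketch below pulls the peak/valley quantities, PVCR, ΔV, and ΔI from a simulated I-V sweep. The toy curve is hypothetical, and the routine is generic post-processing, not part of the ATLAS simulator itself.

```python
# Minimal sketch: figures of merit from an RTD I-V curve with one NDR region.
import numpy as np

def rtd_figures_of_merit(v: np.ndarray, i: np.ndarray) -> dict:
    """Peak/valley voltages and currents, PVCR, delta-V and delta-I."""
    p = int(np.argmax(i))               # peak = current maximum
    valley = p + int(np.argmin(i[p:]))  # valley = minimum after the peak
    return {
        "Vp": v[p], "Ip": i[p],
        "Vv": v[valley], "Iv": i[valley],
        "PVCR": i[p] / i[valley],
        "dV": v[valley] - v[p],
        "dI": i[p] - i[valley],
    }

v = np.linspace(0.0, 1.0, 11)
i = np.array([0, 2, 5, 9, 7, 3, 2, 3, 4, 5, 6], dtype=float)  # toy NDR shape
print(rtd_figures_of_merit(v, i))  # PVCR = 4.5, dV = 0.3 V for this toy curve
```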
Procedia PDF Downloads 141
374 Factors Controlling Marine Shale Porosity: A Case Study between Lower Cambrian and Lower Silurian of Upper Yangtze Area, South China
Authors: Xin Li, Zhenxue Jiang, Zhuo Li
Abstract:
Generally, shale gas is trapped within shale systems of low porosity and ultralow permeability, in both free and adsorbed states. Its production is controlled by reservoir properties in terms of occurrence phases, gas contents, and percolation characteristics, all of which are influenced by pore features. In this paper, porosity differences were explored between the Lower Cambrian shale and the Lower Silurian shale of the Sichuan Basin, South China. Both are marine shales with abundant oil-prone kerogen and rich siliceous minerals, but the Lower Cambrian shale (3.56% Ro) has reached a higher thermal maturity than the Lower Silurian shale (2.31% Ro). Samples were measured by a combination of organic geochemistry measurements, organic matter (OM) isolation, X-ray diffraction (XRD), N2 adsorption, and focused ion beam milling and scanning electron microscopy (FIB-SEM). The Lower Cambrian shale presented relatively low pore properties, averaging 0.008 ml/g in pore volume (PV), 7.99 m²/g in pore surface area (PSA), and 5.94 nm in average pore diameter (APD). The Lower Silurian shale showed relatively high pore properties, averaging 0.015 ml/g in PV, 10.53 m²/g in PSA, and 18.60 nm in APD. Additionally, fractal analysis indicated that the two shales present discrepant pore morphologies, mainly caused by differences in the combination of pore types. More specifically, OM-hosted pores with a pin-hole shape and dissolved pores with dead-end openings are the main types in the Lower Cambrian shale, while OM-hosted pores with a cellular structure are the main type in the Lower Silurian shale. Moreover, the porous characteristics of the isolated OM suggest that the OM of the Lower Silurian shale is more capable of contributing pores than that of the Lower Cambrian shale: the PV of isolated OM in the Lower Silurian shale is almost 6.6 times that in the Lower Cambrian shale, and the PSA of isolated OM in the Lower Silurian shale is almost 4.3 times that in the Lower Cambrian shale. However, no apparent differences existed among samples with various matrix compositions: at the late diagenetic or metamorphic stage, extensive diagenesis overprints the effects of minerals on pore properties, and OM plays the dominant role in pore development. Hence, the differences in pore features between the two marine shales highlight the effect of diagenetic degree on OM-hosted pore development. Consequently, the distinctive pore characteristics may be caused by the different degrees of diagenetic evolution, even with similar matrix compositions.
Keywords: marine shale, lower Cambrian, lower Silurian, OM isolation, pore properties, OM-hosted pore
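For the fractal analysis mentioned above, one common route is the Frenkel-Halsey-Hill (FHH) plot applied to the N2 adsorption branch, in which ln(V) is regressed against ln(ln(p₀/p)) and the fractal dimension is commonly taken as D = 3 + slope. The sketch below assumes this convention and uses hypothetical isotherm points, not the measured data of the study.

```python
# Illustrative FHH fractal-dimension sketch from N2 adsorption data.
import math

def fhh_fractal_dimension(rel_pressures, volumes):
    """Least-squares slope of ln(V) against ln(ln(p0/p)), then D = 3 + slope."""
    xs = [math.log(math.log(1.0 / p)) for p in rel_pressures]
    ys = [math.log(v) for v in volumes]
    n = len(xs)
    x_mean, y_mean = sum(xs) / n, sum(ys) / n
    slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys))
             / sum((x - x_mean) ** 2 for x in xs))
    return 3.0 + slope

# Toy isotherm points: (p/p0, adsorbed volume in cm^3/g STP)
p_rel = [0.40, 0.55, 0.70, 0.85]
vol = [6.0, 7.5, 9.5, 13.0]
print(f"D = {fhh_fractal_dimension(p_rel, vol):.2f}")  # between 2 and 3
```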
Procedia PDF Downloads 132
373 An Institutional Mapping and Stakeholder Analysis of ASEAN's Preparedness for Nuclear Power Disaster
Authors: Nur Azha Putra Abdul Azim, Denise Cheong, S. Nivedita
Abstract:
Currently, there are no nuclear power reactors among the Association of Southeast Asian Nations (ASEAN) member states (AMS), but there are seven operational nuclear research reactors, and Indonesia is set to construct the region's first experimental power reactor by the end of the decade. If successful, the experimental power reactor will lay the foundation for the country's, and the region's, first nuclear power plant. Despite projecting confidence during the period of the nuclear power renaissance in the region in the last decade, none of the AMS has committed to a political decision on the use of nuclear energy, largely due to the Fukushima nuclear power accident in 2011. Of the ten AMS, Vietnam, Indonesia, and Malaysia have demonstrated the most progress in developing nuclear energy, based on the nuclear power infrastructure development assessments made by the International Atomic Energy Agency. Of these three states, Vietnam came closest to building its first nuclear power plant but decided to delay construction further due to safety and security concerns. Meanwhile, Vietnam, along with Indonesia and Malaysia, continues with its nuclear power infrastructure development, and the remaining SEA states, with the exception of Brunei and Singapore, continue to build their expertise and capacity for nuclear power. At the current rate of progress, Indonesia is expected to make a national decision on the use of nuclear power by 2023, while Malaysia, the Philippines, and Thailand have included the use of nuclear power in their mid- to long-term power development plans. Vietnam remains open to nuclear power but has not set a timeline. The short- to medium-term power development projection in the region suggests that the use of nuclear energy in the region is a matter of 'when' rather than 'if'. In light of the prospects for nuclear energy in Southeast Asia (SEA), this presentation will review the literature on ASEAN radiological emergency preparedness and response (EPR) plans and examine ASEAN's disaster management and emergency framework. Through a combination of institutional mapping and stakeholder analysis methods, which we examine in the context of the international EPR and nuclear safety and security regimes, we will identify the issues and challenges in developing a regional radiological EPR framework in SEA. We will conclude with the observation that ASEAN faces serious structural, institutional, and governance challenges due to the AMS' inherent political structures and history of interstate conflicts, and propose that ASEAN either enlarge the existing scope of its disaster management and response framework or establish its radiological EPR framework as a separate entity.
Keywords: nuclear power, nuclear accident, ASEAN, Southeast Asia
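One standard device in stakeholder analysis is the power/interest grid; the sketch below shows its classification logic with hypothetical actors and scores, not the study's actual assessments.

```python
# Minimal power/interest grid sketch for stakeholder analysis.

def classify(power: float, interest: float, threshold: float = 0.5) -> str:
    """Standard four-quadrant power/interest classification."""
    if power >= threshold and interest >= threshold:
        return "manage closely"
    if power >= threshold:
        return "keep satisfied"
    if interest >= threshold:
        return "keep informed"
    return "monitor"

# Hypothetical actors with (power, interest) scores on a 0-1 scale
stakeholders = {
    "national nuclear regulator": (0.9, 0.9),
    "regional disaster agency": (0.6, 0.8),
    "civil society group": (0.3, 0.9),
}

for name, (power, interest) in stakeholders.items():
    print(f"{name}: {classify(power, interest)}")
```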
Procedia PDF Downloads 150