Search results for: Extended arithmetic precision.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2174

404 Integrating Data Mining with Case-Based Reasoning for Diagnosing Sorghum Anthracnose

Authors: Mariamawit T. Belete

Abstract:

Cereal production and marketing are the means of livelihood for millions of households in Ethiopia. However, cereal production is constrained by technical and socio-economic factors. Among the technical factors, cereal crop diseases are the major contributors to low yields. The aim of this research is to develop an integrated data mining and knowledge-based system for sorghum anthracnose disease diagnosis that assists agricultural experts and development agents in making timely decisions. The anthracnose diagnosis system gathers information from the Melkassa Agricultural Research Center and scores anthracnose on a severity scale. Empirical research is designed for data exploration, modeling, and confirmatory procedures for testing hypotheses and predictions to draw sound conclusions. WEKA (Waikato Environment for Knowledge Analysis) was employed for the modeling. Knowledge-based systems have adopted a variety of approaches based on the knowledge representation method; case-based reasoning (CBR) is one of the most popular. CBR is a problem-solving strategy that uses previous cases to solve new problems. The system utilizes hidden knowledge extracted by applying a clustering algorithm, specifically K-means, to a sampled anthracnose dataset. Clustered cases with their centroid values are mapped to jCOLIBRI, and the integrator application is built using NetBeans with JDK 8.0.2. The main stages of a case-based reasoning model are retrieval, the similarity-measuring stage; reuse, which allows a domain expert to adapt a retrieved case solution to suit the current case; revise, to test the solution; and retain, to store the confirmed solution in the case base for future use. The system was evaluated for both performance and user acceptance. For testing the prototype, seven test cases were used. Experimental results show that the system achieves average precision and recall values of 70% and 83%, respectively. User acceptance testing was also performed with five domain experts, and an average acceptance of 83% was achieved. Although the results of this study are promising, further investigation of hybrid approaches, such as rule-based reasoning, and of pictorial retrieval processes is recommended.
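The pipeline described above, K-means clustering of cases followed by similarity-based retrieval, can be sketched in a few lines. This is a minimal illustration under assumed inputs, not the authors' WEKA/jCOLIBRI implementation; the feature names and case values are hypothetical.

```python
import math
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal K-means over feature tuples (e.g., severity-related scores)."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each case to its nearest centroid
            i = min(range(k), key=lambda c: math.dist(p, centroids[c]))
            clusters[i].append(p)
        for i, cl in enumerate(clusters):
            if cl:  # recompute centroid as the per-dimension mean
                centroids[i] = tuple(sum(x) / len(cl) for x in zip(*cl))
    return centroids

def retrieve(query, case_base):
    """CBR retrieval step: return the stored case nearest to the query."""
    return min(case_base, key=lambda c: math.dist(query, c["features"]))

# Hypothetical cases: (leaf-lesion score, humidity index) -> severity label
case_base = [
    {"features": (1.0, 0.2), "severity": "low"},
    {"features": (4.5, 0.8), "severity": "high"},
    {"features": (4.0, 0.9), "severity": "high"},
]
best = retrieve((4.2, 0.85), case_base)
```

In the full system, the retrieved solution would then pass through the reuse, revise, and retain stages before being stored back into the case base.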

Keywords: sorghum anthracnose, data mining, case based reasoning, integration

Procedia PDF Downloads 73
403 Embedding Looping Concept into Corporate CSR Strategy for Sustainable Growth: An Exploratory Study

Authors: Vani Tanggamani, Azlan Amran

Abstract:

The issues of Corporate Social Responsibility (CSR) have extended from development economics to corporate business in recent years. Research into CSR is deemed to have high impact, as CSR encourages long-term economic and business success without neglecting social and environmental risks, obligations, and opportunities. CSR is therefore a key matter for any organisation aiming for long-term sustainability, since such a business incorporates principles of social responsibility into each of its decisions. This paper presents a theoretical proposition based on stakeholder theory, from the organisational perspective, as a foundation for better CSR practices. Its primary subject is how the looping concept can be effectively embedded into corporate CSR strategy to foster sustainable long-term growth. In general, a loop is a structure or process whose end is connected to its beginning; the narrow view of a loop in the business field means plan, do, check, and improve. In this sense, the looping concept is a blend of balance and agility, with the awareness of when to apply which. Organisations can introduce similar pull mechanisms by formulating CSR strategies so as to perform the best plan of action in real time, with the chance to change those actions, pushing them toward well-organised planning and successful performance. Through the analysis of an exploratory study, this paper demonstrates that approaching the looping concept in the context of corporate CSR strategy is an important source of new ideas for propelling CSR practices. It deepens the basic understanding that is increasingly necessary to attract and retain business stakeholders, including employees, customers, suppliers, and other communities, for long-term business survival. This paper contributes to the literature by explaining how organisations can face less financial and reputational risk if the looping-concept logic is integrated into the core business CSR strategy. The value of the paper rests in the treatment of the looping concept as a corporate CSR strategy, demonstrated in a "looping concept implementation framework for CSR" that could further foster business sustainability and help organisations move along the path from laggards to leaders.

Keywords: corporate social responsibility, looping concept, stakeholder theory, sustainable growth

Procedia PDF Downloads 386
402 High Resolution Satellite Imagery and Lidar Data for Object-Based Tree Species Classification in Quebec, Canada

Authors: Bilel Chalghaf, Mathieu Varin

Abstract:

Forest characterization in Quebec, Canada, is usually assessed through photo-interpretation at the stand level. For species identification, this often results in a lack of precision. Very high spatial resolution imagery, such as DigitalGlobe, and Light Detection and Ranging (LiDAR) have the potential to overcome the limitations of aerial imagery. To date, few studies have used such data to map a large number of species at the tree level using machine learning techniques. The main objective of this study is to map 11 individual tall tree species (>17 m) at the tree level using an object-based approach in the broadleaf forest of Kenauk Nature, Quebec. For individual tree crown segmentation, three canopy-height models (CHMs) from LiDAR data were assessed: 1) the original, 2) a filtered, and 3) a corrected model. The corrected CHM gave the best accuracy and was then coupled with imagery to refine tree species crown identification. When compared with photo-interpretation, 90% of the objects represented a single species. For modeling, 313 variables were derived from 16-band WorldView-3 imagery and LiDAR data, using radiance, reflectance, pixel, and object-based calculation techniques. Variable selection procedures reduced their number from 313 to 16, using only 11 bands to aid reproducibility. For classification, a global approach using all 11 species was compared to a semi-hierarchical hybrid classification approach at two levels: (1) tree type (broadleaf/conifer) and (2) individual broadleaf (five) and conifer (six) species. Five model techniques were used: (1) support vector machine (SVM), (2) classification and regression tree (CART), (3) random forest (RF), (4) k-nearest neighbors (k-NN), and (5) linear discriminant analysis (LDA). Each model was tuned separately for all approaches and levels. For the global approach, the best model was the SVM using eight variables (overall accuracy (OA): 80%, Kappa: 0.77).
With the semi-hierarchical hybrid approach, at the tree type level, the best model was the k-NN using six variables (OA: 100% and Kappa: 1.00). At the level of identifying broadleaf and conifer species, the best model was the SVM, with OA of 80% and 97% and Kappa values of 0.74 and 0.97, respectively, using seven variables for both models. This paper demonstrates that a hybrid classification approach gives better results and that using 16-band WorldView-3 with LiDAR data leads to more precise predictions for tree segmentation and classification, especially when the number of tree species is large.
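As a minimal sketch of the object-based classification step, the following illustrates a k-NN classifier of the kind the study found best at the tree-type level. The per-object features (mean crown height, mean NIR reflectance) and training values are hypothetical stand-ins, not the study's 313 derived variables.

```python
import math
from collections import Counter

def knn_predict(query, samples, k=3):
    """Classify a tree-crown object by majority vote among its k nearest
    training objects in feature space (Euclidean distance)."""
    nearest = sorted(samples, key=lambda s: math.dist(query, s[0]))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Hypothetical per-object features: (mean CHM height in m, mean NIR reflectance)
training = [
    ((22.0, 0.45), "broadleaf"),
    ((24.0, 0.48), "broadleaf"),
    ((19.0, 0.30), "conifer"),
    ((21.0, 0.28), "conifer"),
    ((25.0, 0.50), "broadleaf"),
]
label = knn_predict((23.0, 0.46), training, k=3)
```

In the semi-hierarchical approach, a first classifier of this kind separates broadleaf from conifer, and separate per-type models then identify the individual species.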

Keywords: tree species, object-based, classification, multispectral, machine learning, WorldView-3, LiDAR

Procedia PDF Downloads 123
401 The Foundation Binary-Signals Mechanics and Actual-Information Model of Universe

Authors: Elsadig Naseraddeen Ahmed Mohamed

Abstract:

In contrast to the uncertainty and complementarity principles, it will be shown in the present paper that the probability of the simultaneous occupation of any definite values of coordinates by any definite values of momentum and energy, at any definite instant of time, can be described by a binary definite function. This function equals the difference between the numbers of occupation and evacuation epochs up to that time, and equivalently the number of exchanges between occupation and evacuation epochs up to that time, modulo two. These binary definite quantities can be defined at every point on the real time line, so they form a binary signal that represents a complete mechanical description of physical reality. The times of these exchanges mark the boundaries of the occupation and evacuation epochs, from which the binary signals can be calculated, using the fact that the universe's events actually extend along the positive and negative real time line in a single direction of extension as the number of exchanges increases. There therefore exists a noninvertible transformation matrix, defined as the product of an invertible rotation matrix and a noninvertible scaling matrix, which change the direction and magnitude of the exchange-event vector, respectively. These noninvertible transformations will be called actual transformations, in contrast to information transformations, by which the universe's events transformed by actual transformations can be navigated backward and forward along the real time line; the information transformations will be derived as elements of a group associated with their corresponding actual transformations.
The actual-information model of the universe will be derived by assuming the existence of a time instant zero, before and at which no coordinate is occupied by any definite values of momentum and energy; after that time, the universe begins expanding in spacetime. This assumption makes Laplace's demon, who at one moment could measure the positions and momenta of all constituent particles of the universe and then use the laws of classical mechanics to predict all of its future and past events, superfluous. We only need to establish analog-to-digital converters to sense the binary signals that determine the boundaries of the occupation and evacuation epochs of the definite values of coordinates, relative to their origin, by the definite values of momentum and energy, as present events of the universe; from these, its past and future events can be predicted approximately with high precision.
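The modulo-two relation described above, in which the occupation state at time t equals the number of exchange epochs up to t modulo two, can be illustrated with a toy computation over an assumed list of exchange times (the times themselves are hypothetical):

```python
def occupancy_signal(exchange_times, t):
    """Binary occupation signal: parity (mod 2) of the number of
    occupation/evacuation exchanges that occurred up to time t.
    Starts in the 'evacuated' state (0) before the first exchange."""
    return sum(1 for e in exchange_times if e <= t) % 2

exchanges = [1.0, 2.5, 4.0]   # hypothetical exchange epochs
signal = [occupancy_signal(exchanges, t) for t in (0.5, 1.5, 3.0, 4.5)]
```

Sampling such signals at the epoch boundaries is what the paper's proposed analog-to-digital converters would do.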

Keywords: binary-signal mechanics, actual-information model of the universe, actual-transformation, information-transformation, uncertainty principle, Laplace's demon

Procedia PDF Downloads 170
400 Measuring the Resilience of e-Governments Using an Ontology

Authors: Onyekachi Onwudike, Russell Lock, Iain Phillips

Abstract:

The variability that exists across governments, their departments, and the provisioning of services has been an area of concern in the E-Government domain. There is a need for reuse and integration across government departments, which is accompanied by varying degrees of risks and threats. There is also a need for assessment, prevention, preparation, response, and recovery when dealing with these risks and threats. The ability of a government to cope with the emerging changes that occur within it is known as resilience. In order to forge ahead with concerted efforts to manage the risks and threats induced by reuse and integration, the ambiguities contained within resilience must be addressed. Enhancing resilience in the E-Government domain is synonymous with reducing the risks governments face in provisioning services and in reusing components across departments. It can therefore be said that resilience is responsible for reducing a government's vulnerability to changes. In this paper, we present the use of an ontology to measure the resilience of governments. This ontology contains a well-defined construct for the taxonomy of resilience. A specific class known as 'Resilience Requirements' is added to the ontology; this class embraces the concept of resilience within the E-Government domain ontology. Considering that the E-Government domain is a highly complex one, made up of different departments offering different services, its reliability and resilience have become more complex and critical to understand. We present questions that can help a government assess how prepared it is in the face of risks and what steps can be taken to recover from them. These questions can be asked with the use of queries. The ontology includes a case study section that is used to explore ways in which government departments can become resilient to the different kinds of risks and threats they may face.
A collection of resilience tools and resources have been developed in our ontology to encourage governments to take steps to prepare for emergencies and risks that a government may face with the integration of departments and reuse of components across government departments. To achieve this, the ontology has been extended by rules. We present two tools for understanding resilience in the E-Government domain as a risk analysis target and the output of these tools when applied to resilience in the E-Government domain. We introduce the classification of resilience using the defined taxonomy and modelling of existent relationships based on the defined taxonomy. The ontology is constructed on formal theory and it provides a semantic reference framework for the concept of resilience. Key terms which fall under the purview of resilience with respect to E-Governments are defined. Terms are made explicit and the relationships that exist between risks and resilience are made explicit. The overall aim of the ontology is to use it within standards that would be followed by all governments for government-based resilience measures.
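As an illustration of how such resilience queries might look, the following sketches a toy triple store with a 'ResilienceRequirement' class and a mitigation query. The vocabulary and instances are hypothetical, not the authors' actual ontology, and a real implementation would use an OWL/SPARQL toolchain rather than plain tuples.

```python
# Hypothetical (subject, predicate, object) triples; "a" denotes rdf:type.
triples = {
    ("FloodRisk", "a", "Risk"),
    ("FloodRisk", "threatens", "PaymentService"),
    ("BackupDataCentre", "a", "ResilienceRequirement"),
    ("BackupDataCentre", "mitigates", "FloodRisk"),
    ("PaymentService", "a", "EGovernmentService"),
}

def query(subject=None, predicate=None, obj=None):
    """Return all triples matching the given pattern (None = wildcard),
    mimicking a basic graph-pattern query over the ontology."""
    return [
        (s, p, o) for (s, p, o) in triples
        if (subject is None or s == subject)
        and (predicate is None or p == predicate)
        and (obj is None or o == obj)
    ]

# Preparedness question as a query: what mitigates the flood risk?
mitigators = [s for s, _, _ in query(predicate="mitigates", obj="FloodRisk")]
```

A department answering "how prepared are we?" would issue such pattern queries against the shared ontology and inspect which risks have no mitigating resilience requirement.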

Keywords: E-Government, Ontology, Relationships, Resilience, Risks, Threats

Procedia PDF Downloads 331
399 Characterising Performative Technological Innovation: Developing a Strategic Framework That Incorporates the Social Mechanisms That Promote Change within a Technological Environment

Authors: Joan Edwards, J. Lawlor

Abstract:

Technological innovation is frequently defined as bringing a new invention to market through a relatively straightforward process of diffusion. In reality, this process is complex and non-linear in nature, and includes social and cognitive factors that influence the development of an emerging technology and its related market or environment. As recent studies contend that technological trajectories form part of technological paradigms, which arise from the expectations and desires of industry agents and result in co-evolution, it becomes clear that social factors play a major role in the development of a technology. It is conjectured that collective social behaviour is fuelled by individual motivations and expectations, which inform the possibilities and uses for a new technology. The individual outlook highlights the issues present at the micro level of developing a technology. Zooming out, one can see how these embedded social structures influence activities and expectations at a macro level and can ultimately strategically shape the development and use of a technology. These social factors rely on communication to foster the innovation process. As innovation may be defined as the implementation of inventions, technological change results from the complex interactions and feedback occurring within an extended environment. The framework presented in this paper recognises that social mechanisms provide the basis for an iterative dialogue between an innovator, a new technology, and an environment, within which the social and cognitive 'identity-shaping' elements of the innovation process occur. These identity-shaping characteristics indicate that an emerging technology has a performative nature that transforms, alters, and ultimately configures the environment that it joins. This identity-shaping quality is termed 'performative'.
This paper examines how technologies evolve within a socio-technological sphere and how 'performativity' facilitates the process. A framework is proposed that incorporates the performative elements which are identified as feedback, iteration, routine, expectations, and motivations. Additionally, the concept of affordances is employed to determine how the role of the innovator and technology change over time - constituting a more conducive environment for successful innovation.

Keywords: affordances, framework, performativity, strategic innovation

Procedia PDF Downloads 200
398 Jan’s Life-History: Changing Faces of Managerial Masculinities and Consequences for Health

Authors: Susanne Gustafsson

Abstract:

Life-history research is an extraordinarily fruitful method for social analysis, and for gendered health analysis in particular. Its potential is illustrated through a case study drawn from a Swedish project. It reveals an old type of masculinity that faces difficulties when carrying out two sets of demands simultaneously: as a worker/manager and as a father/husband. The paper illuminates the historical transformation of masculinity and its consequences for health. We draw on the idea of the "changing faces of masculinity" to explore the dynamism and complexity of gendered health, using an empirical case for its illustrative abilities. Jan, a middle-level manager and father employed in the energy sector in urban Sweden, is the subject of this paper. Jan's story is one of 32 semi-structured interviews included in an extended study focusing on well-being at work. The results reveal a face of masculinity, conceived of in middle-level management, that is tacitly linked to the neoliberal doctrine. Over a couple of decades, the idea of "flexibility" was turned into a valuable characteristic that everyone was supposed to strive for. This resulted in increased workloads. Quite a few employees, and managers in particular, find themselves working both day and night. This may explain why not having enough time to spend with children and family members is a recurring theme in the data. Can this way of doing be linked to masculinity and health? The first author's research has revealed that the use of gender in health science is not sufficiently or critically questioned. This lack of critical questioning is a serious problem, especially since ways of doing gender affect health. We suggest that gender reproduction and gender transformation are interconnected, regardless of how they affect health. They are recognized as two sides of the same phenomenon, and minor movements in one direction or the other become crucial for understanding their relation to health.
More or less, at the same time, as Jan’s masculinity was reproduced in response to workplace practices, Jan’s family position was transformed—not totally but by a degree or two, and these degrees became significant for the family’s health and well-being. By moving back and forth between varied events in Jan’s biographical history and his sociohistorical life span, it becomes possible to show that in a time of gender transformations, power relations can be renegotiated, leading to consequences for health.

Keywords: changing faces of masculinity, gendered health, life-history research method, subverter

Procedia PDF Downloads 103
397 A Literature Review and a Proposed Conceptual Framework for Learning Activities in Business Process Management

Authors: Carin Lindskog

Abstract:

Introduction: Long-term success requires an organizational balance between continuity (exploitation) and change (exploration). The problem of balancing exploitation and exploration is a common issue in studies of organizational learning. In order to better face tough competition in times of change, organizations need to exploit their current business and explore new business fields by developing new capabilities. The purpose of this work in progress is to develop a conceptual framework to shed light on the relevance of 'learning activities', i.e., exploitation and exploration, at different levels. The research questions to be addressed are as follows: What sorts of learning activities are found in the Business Process Management (BPM) field? How can these activities be linked to the individual, group, and organizational levels? First, a literature review will be conducted to explore the status of learning activities in the BPM field. An outcome of the literature review will be a conceptual framework of learning activities based on the included publications. The learning activities will be categorized by focus (exploitation, exploration, or both) and by level (individual, group, and organization). The proposed conceptual framework will be a valuable tool for analyzing the research field as well as for identifying future research directions. Related work: BPM has increased in popularity as a way of working to strengthen the quality of work and meet demands for efficiency. With this increase in popularity, more and more organizations are reporting BPM failures. One reason for this is a lack of knowledge about extending the scope of BPM to other business contexts, including, for example, more creative business fields. Yet another reason for the failures is employees' resistance to change. The learning process in an organization is an ongoing cycle of reflection and action, and it is a process that can be initiated, developed, and practiced. Furthermore, organizational learning is multilevel; therefore, a theory of organizational learning needs to consider the individual, group, and organization levels. Learning happens over time and across levels, but it also creates a tension between incorporating new learning (feed-forward) and exploiting or using what has already been learned (feedback). Through feed-forward processes, new ideas and actions move from the individual to the group to the organization level. At the same time, what has already been learned feeds back from the organization to the group to the individual and has an impact on how people act and think.

Keywords: business process management, exploitation, exploration, learning activities

Procedia PDF Downloads 116
396 Hydraulic Performance of Curtain Wall Breakwaters Based on Improved Moving Particle Semi-Implicit Method

Authors: Iddy Iddy, Qin Jiang, Changkuan Zhang

Abstract:

This paper addresses the hydraulic performance of curtain-wall breakwaters as coastal protection structures, based on particle-method modelling. A curtain wall functions as a wave barrier by reflecting a large part of the incident waves off the vertical wall; a part is transmitted, and a further part of the wave energy is dissipated through the eddy flows formed beneath the lower end of the plate. As a Lagrangian particle method with a robust capability for numerical representation, the Moving Particle Semi-implicit (MPS) method has proven useful for the design of structures involving free-surface hydrodynamic flows, such as wave breaking and overtopping. In this study, a vertical two-dimensional numerical model for simulating the violent flow associated with the interaction between curtain-wall breakwaters and progressive water waves is developed using the MPS method, in which a higher-precision pressure gradient model and a free-surface particle recognition model are proposed. The wave transmission, reflection, and energy dissipation of the vertical wall were examined experimentally and theoretically. With the particle-method numerical wave flume, very detailed velocity and pressure fields around the curtain walls under wave action can be computed at each calculation step, and the effect of different wave and structural parameters on the hydrodynamic characteristics was investigated. The simulated temporal profiles and distributions of velocity and pressure in the vicinity of the curtain-wall breakwaters are also compared with experimental data. The numerical investigation indicated that the incident wave is largely reflected from the structure, while large eddies or turbulent flows occur beneath the curtain wall, resulting in large energy losses. The improved MPS method shows good agreement between the numerical results and the analytical/experimental data of related studies. It is thus verified that the improved pressure gradient model and free-surface particle recognition method enhance the stability and accuracy of the MPS model for water waves and marine structures. With further study, the particle (MPS) method should therefore reach a level of correctness appropriate for application in engineering fields.
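The partition of incident wave energy into reflected, transmitted, and dissipated parts can be summarized by the standard linear-wave energy balance Kr² + Kt² + Kd = 1, where Kr and Kt are the reflection and transmission coefficients. A small helper, with hypothetical coefficient values for a curtain wall, illustrates it:

```python
def dissipation_coefficient(kr, kt):
    """Fraction of incident wave energy dissipated by the barrier,
    from the linear-wave energy balance  Kr^2 + Kt^2 + Kd = 1.
    kr, kt: reflection and transmission coefficients (ratios of
    reflected/transmitted to incident wave height)."""
    kd = 1.0 - kr**2 - kt**2
    if kd < 0:
        raise ValueError("Kr and Kt violate energy conservation")
    return kd

# Hypothetical measured coefficients for a curtain-wall breakwater:
# strong reflection, modest transmission under the plate
kd = dissipation_coefficient(kr=0.8, kt=0.4)
```

In the study's terms, Kd captures the energy lost to the eddy flows beneath the lower end of the plate.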

Keywords: curtain wall breakwaters, free surface flow, hydraulic performance, improved MPS method

Procedia PDF Downloads 142
395 Core-Shell Nanofibers for Prevention of Postsurgical Adhesion

Authors: Jyh-Ping Chen, Chia-Lin Sheu

Abstract:

In this study, we propose to use electrospinning to fabricate porous nanofibrous membranes as postsurgical anti-adhesion barriers and to improve upon the properties of current anti-adhesion products. We combine FDA-approved biomaterials with anti-adhesion properties, polycaprolactone (PCL), polyethylene glycol (PEG), and hyaluronic acid (HA), with silver nanoparticles (Ag) and ibuprofen (IBU) to produce anti-adhesion barrier nanofibrous membranes. For this purpose, PEG/PCL/Ag/HA/IBU core-shell nanofibers were prepared. The shell layer contains PEG and PCL to provide mechanical support, and Ag was added to the outer PEG-PCL shell layer during electrospinning to endow the membrane with antibacterial properties. The core contains HA to exert anti-adhesion effects and IBU to exert anti-inflammatory effects, respectively. The nanofibrous structure of the membranes can reduce cell penetration while allowing nutrient and waste transport to prevent postsurgical adhesion. Nanofibers with different core/shell thickness ratios were prepared. The membranes were first characterized in detail for their physico-chemical properties, followed by in vitro cell culture studies of cell attachment and proliferation. The HA released from the core showed extended release up to 21 days for prolonged anti-adhesion effects. DNA assays and confocal microscopic observation of the expression of the adhesion protein vinculin showed reduced attachment of adhesion-forming fibroblasts on the nanofibrous membrane. The Ag released from the shell showed burst release, preventing E. coli and S. aureus infection immediately and limiting bacterial resistance to Ag. Minimal cytotoxicity from Ag and IBU was observed when fibroblasts were cultured with the extraction medium of the nanofibrous membranes.
The peritendinous anti-adhesion model in rabbits and the peritoneal anti-adhesion model in rats were used to test the efficacy of the anti-adhesion barriers as determined by gross observation, histology, and biomechanical tests. Within all membranes, the PEG/PCL/Ag/HA/IBU core-shell nanofibers showed the best reduction in cell attachment and proliferation when tested with fibroblasts in vitro. The PEG/PCL/Ag/HA/IBU nanofibrous membranes also showed significant improvement in preventing both peritendinous and peritoneal adhesions when compared with other groups and a commercial adhesion barrier film.

Keywords: anti-adhesion, electrospinning, hyaluronic acid, ibuprofen, nanofibers

Procedia PDF Downloads 174
394 Digital Phase Shifting Holography in a Non-Linear Interferometer using Undetected Photons

Authors: Sebastian Töpfer, Marta Gilaberte Basset, Jorge Fuenzalida, Fabian Steinlechner, Juan P. Torres, Markus Gräfe

Abstract:

This work introduces a combination of digital phase-shifting holography with a non-linear interferometer using undetected photons. Non-linear interferometers can be used with a measurement scheme called quantum imaging with undetected photons, which allows the wavelength used for sampling an object to differ from the wavelength detected at the imaging sensor. This method has recently attracted increasing attention, as it allows the use of exotic wavelengths (e.g., mid-infrared, ultraviolet) for object interaction while keeping detection in spectral regions with highly developed, comparably low-cost imaging sensors. The object information, including its transmission and phase influence, is recorded in the form of an interferometric pattern. To collect these patterns, this work combines quantum imaging with undetected photons with digital phase-shifting holography using a minimal sampling of the interference. This extends the measurement capabilities of the quantum imaging scheme and brings it one step closer to application. Quantum imaging with undetected photons uses correlated photons generated by spontaneous parametric down-conversion in a non-linear interferometer to create indistinguishable photon pairs, which leads to an effect called induced coherence without induced emission. Placing an object inside the interferometer changes the interferometric pattern depending on the object's properties. Digital phase-shifting holography records multiple images of the interference at determined phase shifts to reconstruct the complete interference shape, which can afterward be used to analyze the changes introduced by the object and infer its properties. An extensive characterization of this method was performed using a proof-of-principle setup. The measured spatial resolution, phase accuracy, and transmission accuracy are compared for different combinations of camera exposure time and number of interference sampling steps. The current limits of this method are identified, pointing toward further improvements. To summarize, this work presents an alternative holographic measurement method using non-linear interferometers in combination with quantum imaging, enabling new ways of measuring and motivating continued research.
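As a sketch of the phase-reconstruction step, the standard four-step phase-shifting algorithm recovers the interferometric phase from four intensity samples taken at shifts of 0, π/2, π, 3π/2. The paper's minimal-sampling scheme may use a different number of steps, and the fringe parameters below are synthetic:

```python
import math

def phase_from_four_steps(i1, i2, i3, i4):
    """Standard four-step phase-shifting reconstruction.  With samples
    I_k = A + B*cos(phi + k*pi/2), k = 0..3, the background A and
    modulation B cancel out and  phi = atan2(I4 - I2, I1 - I3)."""
    return math.atan2(i4 - i2, i1 - i3)

# Synthetic fringe samples for a known phase phi = 0.7 rad
A, B, phi = 2.0, 1.0, 0.7
samples = [A + B * math.cos(phi + k * math.pi / 2) for k in range(4)]
recovered = phase_from_four_steps(*samples)
```

Applied pixel-wise to the recorded interferograms, this yields the phase map from which the object's transmission and phase influence are analyzed.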

Keywords: digital holography, quantum imaging, quantum holography, quantum metrology

Procedia PDF Downloads 83
393 Designing Offshore Pipelines Facing the Geohazard of Active Seismic Faults

Authors: Maria Trimintziou, Michael Sakellariou, Prodromos Psarropoulos

Abstract:

Nowadays, the exploitation of hydrocarbon reserves in deep seas and oceans, in combination with the need to transport hydrocarbons among countries, has made the design, construction, and operation of offshore pipelines very significant. Under this perspective, it is evident that many more offshore pipelines will be constructed in the near future. Since offshore pipelines usually cross extended areas, they may face a variety of geohazards that impose substantial permanent ground deformations (PGDs) on the pipeline and potentially threaten its integrity. When a geohazard area is encountered, there are three options. The first is to avoid the problematic area through rerouting, which is usually regarded as an unfavorable solution due to its high cost. The second is to apply (if possible) mitigation/protection measures in order to eliminate the geohazard itself. The third, and often most appealing, option is to allow the pipeline to cross the geohazard area, provided that the pipeline has been verified against the expected PGDs. In areas of moderate or high seismicity, the design of an offshore pipeline is more demanding because of earthquake-related geohazards, such as landslides, soil liquefaction phenomena, and active faults. It is worth mentioning that although there is great worldwide experience in offshore geotechnics and pipeline design, experience in the seismic design of offshore pipelines is rather limited, since most pipelines have been constructed in non-seismic regions (e.g., the North Sea, Western Australia, the Gulf of Mexico). The current study focuses on the seismic design of offshore pipelines against active faults.
After an extensive literature review of the provisions of seismic norms worldwide and of the available analytical methods, the study numerically simulates (through finite-element modeling and strain-based criteria) the distress of offshore pipelines subjected to PGDs induced by active seismic faults at the seabed. Factors such as the geometrical properties of the fault, the mechanical properties of the ruptured soil formations, and the pipeline characteristics are examined. After some interesting conclusions regarding the seismic vulnerability of offshore pipelines, potential cost-effective mitigation measures are proposed, taking into account constructability issues.

Keywords: offshore pipelines, seismic design, active faults, permanent ground deformations (PGDs)

Procedia PDF Downloads 576
392 Optimization of Heat Insulation Structure and Heat Flux Calculation Method of Slug Calorimeter

Authors: Zhu Xinxin, Wang Hui, Yang Kai

Abstract:

Heat flux is one of the most important test parameters in ground thermal protection tests. The slug calorimeter is selected as the main heat flux sensor in arc wind tunnel tests due to its convenience and low cost. However, because of excessive lateral heat transfer and shortcomings of the calculation method, the heat flux measurement error of the slug calorimeter is large. In order to enhance measurement accuracy, the heat insulation structure and the heat flux calculation method of the slug calorimeter were improved. A heat transfer model of the slug calorimeter was built according to the energy conservation principle. Based on this model, an insulating sleeve with a hollow structure was designed, which greatly decreases lateral heat transfer, and the slug with the hollow insulating sleeve was encapsulated in a package shell. The improved insulation structure reduced heat loss and ensured that the heat transfer characteristics were almost the same during calibration and testing. A heat flux calibration test was carried out in an arc lamp calibration system, and the results show that the test accuracy and precision of the slug calorimeter are greatly improved. Meanwhile, a simulation model of the slug calorimeter was built, and the heat flux values in different temperature rise periods were calculated with it. The results show that extracting the temperature rise rate as early as possible yields a smaller heat flux calculation error. The effect of different thermal contact resistances on the calculation error was then analyzed with the simulation model; the contact resistance between the slug and the insulating sleeve was identified as the main influencing factor. A direct-comparison calibration correction method was proposed based on heat flux calibration alone.
A numerical calculation correction method was also proposed, based on the heat flux calibration and the simulation model of the slug calorimeter, once the contact resistance between the slug and the insulating sleeve had been determined from the model. The simulation and test results show that both methods greatly reduce the heat flux measurement error. Finally, the improved slug calorimeter was tested in the arc wind tunnel. The test results show that the repeatability of the improved slug calorimeter is within 3%. The deviation among measurements from different slug calorimeters is less than 3% in the same flow field, and the deviation between the slug calorimeter and a Gardon gauge is less than 4% in the same flow field.
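The heat flux calculation behind a slug calorimeter reduces to a lumped energy balance on the slug: the absorbed flux is proportional to the measured temperature rise rate. A minimal sketch follows; the copper properties and the 3 mm slug thickness are illustrative assumptions, not values from the paper:

```python
def slug_heat_flux(dT_dt, rho=8960.0, c=385.0, thickness=0.003):
    """Absorbed heat flux (W/m^2) from the slug temperature rise rate (K/s),
    via the lumped energy balance q = rho * c * L * dT/dt.
    Defaults assume a copper slug: rho in kg/m^3, c in J/(kg*K), L in m."""
    return rho * c * thickness * dT_dt

# A 3 mm copper slug heating at 50 K/s absorbs about 517 kW/m^2.
q = slug_heat_flux(50.0)
```

Extracting dT/dt early in the transient, as the abstract recommends, keeps this one-dimensional balance valid before lateral losses and contact resistance distort the rise rate.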

Keywords: correction method, heat flux calculation, heat insulation structure, heat transfer model, slug calorimeter

Procedia PDF Downloads 113
391 Synthesis of Deformed Nuclei 260Rf, 261Rf and 262Rf in the Decay of 266Rf* Formed via Different Fusion Reactions: Entrance Channel Effects

Authors: Niyti, Aman Deep, Rajesh Kharab, Sahila Chopra, Raj. K. Gupta

Abstract:

Relatively long-lived transactinide elements (i.e., elements with atomic number Z ≥ 104) up to Z = 108 have been produced in nuclear reactions between low-Z projectiles (C to Al) and actinide targets. Cross sections have been observed to decrease steeply with increasing Z. Recently, production cross sections of several picobarns have been reported for comparatively neutron-rich nuclides of elements 112 through 118 produced via hot fusion reactions with 48Ca beams and actinide targets. Some of those heavy nuclides are reported to have lifetimes on the order of seconds or longer. The relatively high cross sections in these hot fusion reactions are not fully understood, and this has renewed interest in systematic studies of heavy-ion reactions with actinide targets. The main aim of this work is to understand the dynamics of the hot fusion reactions 18O+248Cm and 22Ne+244Pu (carried out at RIKEN and TASCA, respectively) using the collective clusterization technique, by undertaking the decay of the compound nucleus 266Rf∗ into the 4n, 5n and 6n neutron evaporation channels. Here we extend our earlier study of the excitation functions (EFs) of 266Rf∗, formed in the fusion reaction 18O+248Cm, based on the Dynamical Cluster-decay Model (DCM) using the pocket formula for the nuclear proximity potential, to other nuclear interaction potentials derived from the Skyrme energy density formalism (SEDF) based on the semiclassical extended Thomas-Fermi (ETF) approach, and also study entrance channel effects by considering the synthesis of 266Rf* in the 22Ne+244Pu reaction. The Skyrme forces used are the old force SIII and the new forces GSkI and KDE0(v1). Here, the EFs for the production of the 260Rf, 261Rf and 262Rf isotopes via the 6n, 5n and 4n decay channels from the 266Rf∗ compound nucleus are studied at Elab = 88.2 to 125 MeV, including quadrupole deformations β2i and ‘hot-optimum’ orientations θi.
The calculations are made within the DCM where the neck-length ∆R is the only parameter representing the relative separation distance between two fragments and/or clusters Ai which assimilates the neck formation effects.

Keywords: entrance channel effects, fusion reactions, skyrme force, superheavy nucleus

Procedia PDF Downloads 241
390 The Outcome of Using Machine Learning in Medical Imaging

Authors: Adel Edwar Waheeb Louka

Abstract:

Purpose: AI-driven solutions are at the forefront of many pathology and medical imaging methods. Using algorithms designed to better the experience of medical professionals within their respective fields, the efficiency and accuracy of diagnosis can improve. In particular, X-rays are a fast and relatively inexpensive test that can diagnose diseases. In recent years, X-rays have not been widely used to detect and diagnose COVID-19. The underuse of X-rays is mainly due to low diagnostic accuracy and confounding with pneumonia, another respiratory disease. However, research in this field has suggested that artificial neural networks can successfully diagnose COVID-19 with high accuracy. Models and Data: The dataset used is the COVID-19 Radiography Database. This dataset includes images and masks of chest X-rays under the labels of COVID-19, normal, and pneumonia. The classification model developed uses an autoencoder and a pre-trained convolutional neural network (DenseNet201) to provide transfer learning to the model. The model then uses a deep neural network to finalize the feature extraction and predict the diagnosis for the input image. This model was trained on 4035 images and validated on 807 images separate from those used for training. The images used to train the classification model include an important feature: the pictures are cropped beforehand to eliminate distractions during training. The image segmentation model uses an improved U-Net architecture and is used to extract the lung mask from the chest X-ray image. It is trained on 8577 images and validated on a validation split of 20%. Both models are evaluated on an external dataset, and their accuracy, precision, recall, F1-score, IOU, and loss are calculated. Results: The classification model achieved an accuracy of 97.65% and a loss of 0.1234 when differentiating COVID-19-infected, pneumonia-infected, and normal lung X-rays.
The segmentation model achieved an accuracy of 97.31% and an IOU of 0.928. Conclusion The models proposed can detect COVID-19, pneumonia, and normal lungs with high accuracy and derive the lung mask from a chest X-ray with similarly high accuracy. The hope is for these models to elevate the experience of medical professionals and provide insight into the future of the methods used.
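The IOU reported for the segmentation model is the standard intersection-over-union between the predicted and ground-truth lung masks. A minimal sketch of the metric (pure Python, flat 0/1 mask lists for illustration rather than real image arrays):

```python
def iou(pred, true):
    """Intersection-over-Union of two binary masks, given as flat 0/1 lists."""
    inter = sum(1 for p, t in zip(pred, true) if p and t)
    union = sum(1 for p, t in zip(pred, true) if p or t)
    return inter / union if union else 1.0  # two empty masks match perfectly

# The masks below share 1 of 3 "on" pixels, so the score is 1/3.
score = iou([1, 1, 0, 0], [1, 0, 1, 0])
```

An IOU of 0.928, as reported, means predicted and reference masks overlap on roughly 93% of their combined lung area.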

Keywords: artificial intelligence, convolutional neural networks, deep learning, image processing, machine learning

Procedia PDF Downloads 57
389 Characteristics of Acute Bacterial Prostatitis in Elderly Patients Attended in the Emergency Department

Authors: Carles Ferré, Ferran Llopis, Javier Jacob, Jordi Giol, Xavier Palom, Ignasi Bardés

Abstract:

Objective: To analyze the characteristics of acute bacterial prostatitis (ABP) in elderly patients attended in the emergency department (ED). Methods: Observational cohort study with prospective follow-up including patients with ABP presenting to the ED from January to December 2012. Data were collected for demographic variables, comorbidities, clinical and microbiological findings, treatment, outcome, and reconsultation at 30 days of follow-up. Findings were compared between patients ≥ 75 years (study group) and < 75 years (control group). Results: During the study period, 241 episodes of ABP were included for analysis. Mean age was 62.9 ± 16 years, and 64 patients (26.5%) were ≥ 75 years old. A history of prostate adenoma was reported in 54 cases (22.4%), diabetes mellitus in 47 patients (19.5%) and prior manipulation of the lower urinary tract in 40 (17%). Mean symptom duration was 3.38 ± 4.04 days; voiding symptoms were present in 176 cases (73%) and fever in 154 (64%). Of 216 urine cultures, 128 were positive (59%), as were 24 (17.6%) of 136 blood cultures. Escherichia coli was the main pathogen, found in 58.6% of urine cultures and 64% of blood cultures (with strains resistant to fluoroquinolones in 27.7%, cotrimoxazole in 22.9% and amoxicillin/clavulanate in 27.7% of cases). Seventy patients (29%) were admitted to the hospital, and 3 died. At 30-day follow-up, 29 patients (12%) returned to the ED. In the bivariate analysis, previous manipulation of the urinary tract, history of cancer, previous antibiotic treatment, E. coli strains resistant to amoxicillin-clavulanate and ciprofloxacin or producing extended-spectrum beta-lactamase (ESBL), renal impairment, and admission to the hospital were significantly more frequent (p < 0.05) among patients ≥ 75 years compared with those younger than 75 years.
Conclusions: Ciprofloxacin and amoxicillin-clavulanate appear not to be good options for the empiric treatment of ABP for patients ≥ 75 years given the drug-resistance pattern in our series, and the proportion of ESBL-producing strains of E. coli should be taken into account. Awaiting bacteria identification and antibiogram from urine and/or blood cultures, treatment on an inpatient basis should be considered in older patients with ABP.

Keywords: acute bacterial prostatitis, antibiotic resistance, elderly patients, emergency

Procedia PDF Downloads 368
388 Evaluation of Zr/NH₄ClO₄ and Zr/KClO₄ Compositions for Development of Igniter for Ammonium Perchlorate and Hydroxyl-Terminated Polybutadiene Based Base Bleed System

Authors: Amir Mukhtar, Habib Nasir

Abstract:

To achieve an enhanced range for large calibre artillery, a base bleed unit equipped with an ammonium perchlorate and hydroxyl-terminated polybutadiene (AP/HTPB) based composite propellant grain is installed at the bottom of the projectile; it produces a jet of hot gases and reduces base drag during the flight of the projectile. Upon leaving the muzzle at very high muzzle velocity, the propellant grain gets quenched due to the sudden pressure drop. Therefore, the base bleed unit is equipped with an igniter to ensure ignition as well as re-ignition of the propellant grain. Pyrotechnic compositions based on Zr/NH₄ClO₄ and Zr/KClO₄ mixtures have been studied for the effect of fuel/oxidizer ratio and oxidizer type on ballistic properties. Calorific values of the mixtures were investigated by bomb calorimeter, the average burning rate was measured by the fuse wire technique at ambient conditions, and a high-pressure closed vessel was used to record the pressure-time profile, maximum pressure achieved (Pmax), time to achieve Pmax and differential pressure (dP/dt). It was observed that the Zr content (30, 40, 50 and 60 wt.%) has a very significant effect on the ballistic properties of the mixtures. Compositions with NH₄ClO₄ produced higher values of Pmax, dP/dt and calorific value compared with the Zr/KClO₄ based mixtures. Compositions containing KClO₄ produced comparatively higher burning rates; the maximum burning rate, 8.30 mm/s, was recorded with 60 wt.% Zr in the Zr/KClO₄ pyrotechnic mixture. Zr/KClO₄ with 50 wt.% Zr was test fired in an igniter assembly by the electric initiation method. The igniter assembly was test fired several times, and an average burning time of 3.5 s with an igniter mass burning rate of 6.85 g/s was recorded. The igniter was finally fired in static and dynamic tests with the base bleed unit, which gave successful ignition of the base bleed grain, and extended range was achieved with a 155 mm artillery projectile.

Keywords: base bleed, closed vessel, igniter, zirconium

Procedia PDF Downloads 149
387 Health Psychology Intervention: Identifying Early Symptoms in Neurological Disorders

Authors: Simon B. N. Thompson

Abstract:

An early indicator of neurological disease has been proposed by the expanded Thompson Cortisol Hypothesis, which suggests that yawning is linked to rises in cortisol levels. Cortisol is essential to the regulation of the immune system, and pathological yawning is a symptom of multiple sclerosis (MS). Electromyographic (EMG) activity in the jaw muscles typically rises when the muscles are moved, whether extended or flexed, and yawning has been shown to be highly correlated with cortisol levels in healthy people. It is likely that these elevated cortisol levels are also seen in people with MS. The possible link between EMG activity in the jaw muscles and rises in saliva cortisol levels during yawning was investigated in a randomized controlled trial of 60 volunteers aged 18-69 years who were exposed to conditions designed to elicit the yawning response. Saliva samples were collected at the start and after yawning, or at the end of the presentation of yawning-provoking stimuli in the absence of a yawn, and EMG data were additionally collected during the rest and yawning phases. The Hospital Anxiety and Depression Scale, Yawning Susceptibility Scale, General Health Questionnaire, and demographic and health details were collected, and the following exclusion criteria were adopted: chronic fatigue, diabetes, fibromyalgia, heart condition, high blood pressure, hormone replacement therapy, multiple sclerosis, and stroke. Significant differences were found between the saliva cortisol samples for the yawners, t(23) = -4.263, p < 0.001, whereas the rest versus post-stimuli difference for the non-yawners was non-significant. There were also significant differences between yawners and non-yawners for the EMG potentials, with the yawners having higher rest and post-yawning potentials. Significant evidence was found to support the Thompson Cortisol Hypothesis, suggesting that rises in cortisol levels are associated with the yawning response.
Further research is underway to explore the use of cortisol as a potential diagnostic tool as an assist to the early diagnosis of symptoms related to neurological disorders. Bournemouth University Research & Ethics approval granted: JC28/1/13-KA6/9/13. Professional code of conduct, confidentiality, and safety issues have been addressed and approved in the Ethics submission. Trials identification number: ISRCTN61942768. http://www.controlled-trials.com/isrctn/

Keywords: cortisol, electromyography, neurology, yawning

Procedia PDF Downloads 579
386 The Presence of Dogs in Nursing Homes: Experiences Concerning the Mental Health of Residents

Authors: Ellen Dahl Gundersen, Berit Johannessen

Abstract:

Introduction: Dementia and depression are common mental disorders among nursing home residents. The care of these residents consists of providing physical, social and mental care. Too often, the physical needs are given priority, and municipal health services are urged to focus more on patients' mental and social needs. The presence of dogs may have a positive impact on the mental health of nursing home residents by improving mood, social interaction and enjoyment of the visits. The voluntary organization the Red Cross has given priority to this subject by training and certifying dogs and owners (equipages) committed to regular visits at local nursing homes. Focus of this study: How do the dog owners and employees experience the presence of a dog equipage with respect to the mental health of nursing home residents? Method: Individual interviews with 8-10 certified dog owners, volunteers from the Red Cross who contribute regular visits at local nursing homes, and focus group interviews with 10 employees working in two different nursing homes. Preliminary results: Five to seven residents and one or two employees attended weekly dog equipage visits during a period of six months. The presence of an equipage seems to have made the residents calmer and more socially oriented, with a lighter mood and better verbal expression. Some of the residents with dementia remembered the name of the dog from one week to the next. The informants also reported a positive outcome for the residents through their opportunity to give and receive closeness through physical contact with a dog. Further, the presence of an equipage affected the atmosphere at the nursing home positively by promoting joy and initiating conversations about dogs. A conscious approach by the dog owners towards the residents seems to be of significance in this matter. The positive attitude and support of employees also seem to be of crucial importance for the maintenance of these visits.
Conclusion: The presence of trained dog equipages in nursing homes seems to have had an overall positive impact on the mental health of residents. A conscious approach from the dog owners as well as positive support from employees seems to have a crucial impact on the success and maintenance of the visits. These findings correspond well to former research and can thereby give implications for more extended use of dogs as a mental health promoting initiative towards geriatric consumers of municipal health care services. Further research through larger studies is needed.

Keywords: animal assisted intervention, geriatric mental health, nursing home, resident

Procedia PDF Downloads 254
385 An Adaptive Conversational AI Approach for Self-Learning

Authors: Airy Huang, Fuji Foo, Aries Prasetya Wibowo

Abstract:

In recent years, the focus of Natural Language Processing (NLP) development has been gradually shifting from the semantics-based approach to a deep learning one, which performs faster with fewer resources. Although it performs well in many applications, the deep learning approach, due to its lack of semantic understanding, has difficulty noticing and expressing a novel business case outside a pre-defined scope. In order to meet the requirements of specific robotic services, the deep learning approach is very labor-intensive and time-consuming. It is very difficult to improve the capabilities of a conversational AI in a short time, and it is even more difficult for it to self-learn from experience to deliver the same service in a better way. In this paper, we present an adaptive conversational AI algorithm that combines both semantic knowledge and deep learning to address this issue by learning new business cases through conversations. After self-learning from experience, the robot adapts to business cases originally out of scope. The idea is to build new or extended robotic services in a systematic and fast-training manner with self-configured programs and constructed dialog flows. For every cycle in which a chat bot (conversational AI) delivers a given set of business cases, it is triggered to self-measure its performance and reconsider every unknown dialog flow in order to improve the service by retraining on those new business cases. If the training process reaches a bottleneck and incurs difficulties, human personnel are informed and may give further instructions: they may retrain the chat bot with newly configured programs or with new dialog flows for new services. One approach employs semantic analysis to learn the dialogues for new business cases and then establish the necessary ontology for the new service.
With the newly learned programs, it completes the understanding of the reaction behavior and finally uses dialog flows to connect all the understanding results and programs, achieving the goal of the self-learning process. We have developed a chat bot service mounted on a kiosk, with a camera for facial recognition and a directional microphone array for voice capture. The chat bot serves as a concierge, making polite conversation with visitors. As a proof of concept, we have demonstrated completion of 90% of reception services with limited self-learning capability.

Keywords: conversational AI, chatbot, dialog management, semantic analysis

Procedia PDF Downloads 127
384 Development of a Test Plant for Parabolic Trough Solar Collectors Characterization

Authors: Nelson Ponce Jr., Jonas R. Gazoli, Alessandro Sete, Roberto M. G. Velásquez, Valério L. Borges, Moacir A. S. de Andrade

Abstract:

The search for increased efficiency in generation systems has been of great importance in recent years to reduce the impact of greenhouse gas emissions and global warming. For clean energy sources, such as generation systems that use concentrated solar power technology, this efficiency improvement translates into a lower investment per kW, improving a project's viability. For the specific case of parabolic trough solar concentrators, performance is strongly linked to the geometric precision of assembly and to the individual efficiencies of the main components, such as the parabolic mirrors and receiver tubes. Thus, for accurate results, the efficiency analysis should be conducted empirically, under mounting and operating conditions like those observed in the field. The Brazilian power generation and distribution company Eletrobras Furnas, through the R&D program of the National Agency of Electrical Energy, has developed a plant for testing parabolic trough concentrators located in Aparecida de Goiânia, in the state of Goiás, Brazil. The main objective of this test plant is the characterization of the prototype concentrator being developed by the company itself in partnership with Eudora Energia, seeking to optimize it to obtain the same or better efficiency than concentrators of this type already known commercially. The test plant is a closed pipe system in which a pump circulates a heat transfer fluid, also called HTF, through the concentrator being characterized. A flow meter and two temperature transmitters, installed at the inlet and outlet of the concentrator, record the parameters necessary to determine the power absorbed by the system and then calculate its efficiency based on the direct solar irradiation available during the test period. After the HTF gains heat in the concentrator, it flows through heat exchangers that allow the acquired energy to be dissipated to the environment.
The goal is to keep the concentrator inlet temperature constant throughout the desired test period. The plant performs the tests autonomously: the operator enters the HTF flow rate, the desired concentrator inlet temperature, and the test time into the control system. This paper presents the methodology employed for design and operation, as well as the instrumentation needed, for the development of a parabolic trough test plant, serving as a guideline for standardizing such facilities.
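The efficiency calculation implied by the instrumentation above (absorbed power from the flow meter and the two temperature transmitters, divided by the incident power from direct irradiation) can be sketched as follows. The function and the sample numbers are illustrative assumptions, not values from the Furnas test plant:

```python
def collector_efficiency(m_dot, cp, t_in, t_out, dni, aperture_area):
    """Thermal efficiency of a trough collector from test-loop measurements:
    absorbed power m_dot * cp * (t_out - t_in) over incident power DNI * area.
    m_dot in kg/s, cp in J/(kg*K), temperatures in degC or K, DNI in W/m^2."""
    p_absorbed = m_dot * cp * (t_out - t_in)
    return p_absorbed / (dni * aperture_area)

# 1 kg/s of HTF (cp = 2000 J/kg/K) gaining 10 K under 800 W/m^2 on a
# 50 m^2 aperture yields an efficiency of 0.5.
eta = collector_efficiency(1.0, 2000.0, 300.0, 310.0, 800.0, 50.0)
```

Holding the inlet temperature constant, as the control system does, keeps cp and the receiver losses roughly stationary so that repeated runs are comparable.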

Keywords: parabolic trough, concentrated solar power, CSP, solar power, test plant, energy efficiency, performance characterization, renewable energy

Procedia PDF Downloads 111
383 DEMs: A Multivariate Comparison Approach

Authors: Juan Francisco Reinoso Gordo, Francisco Javier Ariza-López, José Rodríguez Avi, Domingo Barrera Rosillo

Abstract:

The evaluation of the quality of a data product is based on the comparison of the product with a reference of greater accuracy. In the case of DEM data products, quality assessment usually focuses on positional accuracy, and few studies consider other terrain characteristics, such as slope and orientation. The proposal made here consists of evaluating the similarity of two DEMs (a product and a reference) through the joint analysis of the distribution functions of the variables of interest, for example, elevations, slopes and orientations. This is a multivariate approach that focuses on distribution functions, not on single parameters such as mean values or dispersions (e.g. root mean squared error or variance), and is considered more holistic. The use of the Kolmogorov-Smirnov test is proposed due to its non-parametric nature, since the distributions of the variables of interest cannot always be adequately modeled by parametric models (e.g. the normal distribution). In addition, its application to the multivariate case is carried out jointly by means of a single test on the convolution of the distribution functions of the variables considered, which avoids the use of corrections such as Bonferroni when several statistical hypothesis tests are carried out together. In this work, two DEM products have been considered: DEM02, with a resolution of 2x2 meters, and DEM05, with a resolution of 5x5 meters, both generated by the National Geographic Institute of Spain. DEM02 is taken as the reference and DEM05 as the product to be evaluated. In addition, the slope and aspect derived models have been calculated by GIS operations on the two DEM datasets. Through sample simulation processes, the adequate behavior of the Kolmogorov-Smirnov test has been verified when the null hypothesis is true, which allows calibrating the value of the statistic for the desired significance level (e.g. 5%).
Once the process has been calibrated, it can be applied to compare the similarity of different DEM datasets (e.g. DEM05 versus DEM02). In summary, an innovative alternative for the comparison of DEM datasets, based on a multivariate non-parametric perspective, has been proposed by means of a single Kolmogorov-Smirnov test. This new approach could be extended to other DEM features of interest (e.g. curvature) and to more than three variables.
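The two-sample Kolmogorov-Smirnov statistic at the core of this comparison is simply the maximum distance between the two empirical distribution functions. A minimal sketch of the univariate case (pure Python; the multivariate extension the authors propose applies one such test to the convolution of the variables' distributions rather than testing each variable separately):

```python
import bisect

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum absolute
    difference between the empirical CDFs of samples a and b."""
    a, b = sorted(a), sorted(b)
    d = 0.0
    for v in a + b:  # the ECDF difference can only peak at a sample point
        fa = bisect.bisect_right(a, v) / len(a)
        fb = bisect.bisect_right(b, v) / len(b)
        d = max(d, abs(fa - fb))
    return d

# Identical elevation samples give 0.0; fully separated samples give 1.0.
d = ks_statistic([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])
```

In practice the statistic would be computed on elevation, slope and aspect samples drawn from DEM02 and DEM05, and compared against the threshold calibrated by the simulation step described above.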

Keywords: data quality, DEM, kolmogorov-smirnov test, multivariate DEM comparison

Procedia PDF Downloads 106
382 Miniaturized PVC Sensors for Determination of Fe2+, Mn2+ and Zn2+ in Buffalo-Cows’ Cervical Mucus Samples

Authors: Ahmed S. Fayed, Umima M. Mansour

Abstract:

Three polyvinyl chloride membrane sensors were developed for the electrochemical evaluation of ferrous, manganese and zinc ions. The sensors were used for assaying these metal ions in the cervical mucus (CM) of Egyptian river buffalo-cows (Bubalus bubalis), as their levels vary with cyclical hormone variation during the different phases of the estrus cycle. The presented sensors are based on the ionophores β-cyclodextrin (β-CD), hydroxypropyl β-cyclodextrin (HP-β-CD) and sulfocalix-4-arene (SCAL) for sensors 1, 2 and 3, for Fe2+, Mn2+ and Zn2+, respectively. Dioctyl phthalate (DOP) was used as the plasticizer in a polymeric matrix of polyvinyl chloride (PVC). To increase the selectivity and sensitivity, each sensor was enriched with a suitable complexing agent, which enhanced its response. For sensor 1, β-CD was mixed with bathophenanthroline; for sensor 2, porphyrin was incorporated with HP-β-CD; while for sensor 3, oxine was used as the complexing agent with SCAL. Linear responses over 10⁻⁷-10⁻² M, with cationic slopes of 53.46, 45.01 and 50.96 mV/decade over the pH range 4-8, were obtained using coated graphite sensors for ferrous, manganese and zinc ionic solutions, respectively. The three sensors were validated according to the IUPAC guidelines. The results obtained by the presented potentiometric procedures were statistically analyzed and compared with those obtained by an atomic absorption spectrophotometric method (AAS). No significant differences in either accuracy or precision were observed between the two techniques. Successful application to the determination of the three studied cations in CM, in order to determine the proper time for artificial insemination (AI), was achieved. The results were compared with those obtained upon analyzing the samples by AAS. Proper detection of estrus and the correct timing of AI are necessary to maximize the production of buffaloes.
In this experiment, 30 multiparous buffalo-cows in their second to third lactation and weighing 415-530 kg were synchronized with the OVSynch protocol. Samples were taken at three times around ovulation: on day 8 of the OVSynch protocol, on day 9 (20 h before AI) and on day 10 (1 h before AI). Besides the analysis of the trace elements (Fe2+, Mn2+ and Zn2+) in CM using the three sensors, the samples were analyzed for the three cations, as well as Cu2+, by AAS in both the CM and blood samples. The results obtained were correlated with hormonal analysis of serum samples and with ultrasonography for the purpose of determining the optimum time of AI. The results showed significant differences and a strong correlation between the Zn2+ composition of CM during the heat phase and the time of ovulation, indicating that this parameter could be used as a tool to decide the optimal time of AI in buffalo-cows.

Keywords: PVC sensors, buffalo-cows, cyclodextrins, atomic absorption spectrophotometry, artificial insemination, OVSynch protocol

Procedia PDF Downloads 213
381 Application Reliability Method for the Analysis of the Stability Limit States of Large Concrete Dams

Authors: Mustapha Kamel Mihoubi, Essadik Kerkar, Abdelhamid Hebbouche

Abstract:

Given the randomness of most of the factors affecting the stability of a gravity dam, probability theory is generally used to assess the risk of failure; since the transition from a stable state to a failed state is not sharply defined, stability failure is treated as a random event. Controlling the risk of failure is of capital importance, and it rests on a cross-analysis of the severity of the consequences and the probability of occurrence of identified major accidents, which can pose a significant risk to concrete dam structures. Probabilistic risk analysis models are used to better understand the reliability and structural failure of such works, including when calculating the stability of large structures exposed to a major risk in the event of an accident or breakdown. This work studies the probability of failure of concrete dams through the application of the reliability analysis methods used in engineering, in our case level II methods applied to the limit states. The probability of failure is thus estimated by analytical methods of the FORM (First Order Reliability Method) and SORM (Second Order Reliability Method) type. By way of comparison, a level III method was also used, which provides a full analysis of the problem by integrating the joint probability density function of the random variables over the failure domain, using Monte Carlo simulation.
Taking into account the change in stresses under the load combinations acting on the dam (normal, exceptional, and extreme), the calculation results provided acceptable failure probability values that largely corroborate the theory: the probability of failure tends to increase with increasing load intensity, causing a significant decrease in strength, especially in the presence of unusual and extreme load combinations. Shear forces then induce sliding that threatens the reliability of the structure, with intolerable values of the probability of failure, especially when uplift increases under a hypothetical failure of the drainage system.
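As a rough illustration of the level III approach described above, the following sketch estimates a sliding failure probability by direct Monte Carlo sampling of a limit-state function. The limit-state expression and all numerical values (weight, cohesion, uplift, and load distributions) are hypothetical placeholders for illustration, not the paper's actual dam data.

```python
import random

random.seed(0)  # reproducible sampling

def sliding_margin(friction_coef, uplift, horizontal_load,
                   weight=60_000.0, cohesion_force=15_000.0):
    """Limit-state (safety margin) for sliding, in kN: shear resistance
    minus driving horizontal force. g > 0 is stable; g <= 0 is failure."""
    resistance = cohesion_force + (weight - uplift) * friction_coef
    return resistance - horizontal_load

def monte_carlo_pf(n=100_000):
    """Estimate P(g <= 0) by counting failures over n random samples."""
    failures = 0
    for _ in range(n):
        phi = random.gauss(0.7, 0.07)        # friction coefficient tan(phi)
        u = random.gauss(20_000.0, 3_000.0)  # uplift force (kN)
        h = random.gauss(35_000.0, 5_000.0)  # horizontal hydrostatic load (kN)
        if sliding_margin(phi, u, h) <= 0:
            failures += 1
    return failures / n

pf = monte_carlo_pf()
```

FORM/SORM would instead approximate the same probability analytically from the reliability index at the design point; the sampling estimate above serves as the reference a level III method provides.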

Keywords: dam, failure, limit state, Monte Carlo, reliability, probability, sliding, Taylor

Procedia PDF Downloads 311
380 Living or Surviving in an Intercultural Context: A Study on Transformative Learning of UK Students in China and Chinese Students in the UK

Authors: Yiran Wang

Abstract:

As international education continues to expand, the countries providing such opportunities not only benefit but also face challenges. For traditional destinations, including the United States and the United Kingdom, the number of international students has been falling. At the same time, emerging economies such as China are witnessing a rapid increase in the number of international students enrolled in their universities. China is therefore beginning to play an important role in the competitive global market for higher education. This study analyses and compares the experiences of international students in the UK and China using Transformative Learning theory. While there is an extensive literature on both international higher education and Transformative Learning theory, this study makes three contributions. First, it applies the theory to two international student groups: UK students in Chinese universities and Chinese students in UK universities. Second, it includes a focus on the intercultural learning of Chinese doctoral students in the UK, filling a gap in current research. Finally, it extends the very limited number of current research projects on UK students in China. It is generally acknowledged that international students experience various challenges when they are in a culturally different context, yet little research has focused on how and why learners are, or are not, transformed through exposure to their new environment. This study applies Transformative Learning theory to address two research questions: first, do UK international students in Chinese universities and Chinese international students in UK universities experience transformative learning during their overseas studies? Second, what factors foster or impede international students’ experience of transformative learning?
To answer these questions, semi-structured interviews were used to investigate international students’ academic and social experiences. Based on the insights provided by Mezirow, Taylor, and previous studies on international students, this study argues that international students’ intercultural experience is a complex process. Transformation can occur in various ways, and social and personal perspectives underpin the transformative learning of the students studied. Contributing factors include culture shock, educational conventions, the student’s motivation, expectations, personality, gender, and previous work experience. The results reflect the significance of differences in teaching styles in the UK and China and the impact these can have on students’ teaching and learning experience when they move to a new university.

Keywords: intercultural learning, international higher education, transformative learning, UK and Chinese international students

Procedia PDF Downloads 405
379 Messiness and Strategies for Elite Interview: Multi-Sited Ethnographic Research in Mainland China

Authors: Yali Liu

Abstract:

This ethnographic research involved a multi-sited field trip in China to compile in-depth data from Chinese multilingual academics of Korean, Japanese, and Russian. It aimed to create a culturally informed portrait of their values and perceptions regarding their choice of language for academic publishing. Extended and lengthy fieldwork, also known as ‘deep hanging out’, enabled the author to gain a comprehensive understanding of the research context at the macro level and of the participants’ experiences at the micro level. The research involved multiple fieldwork sites, selected in acknowledgment of the diversity of China’s regions with respect to geopolitical context, socio-economic development, cultural traditions, and administrative status. The 14 weeks of data collection took the author over land to five regions in northern China: Hebei province, Tianjin, Jilin province, Gansu province, and Xinjiang. Responding to the fieldwork dynamics, the author positioned herself at different degrees of insiderness and outsiderness at three levels: the regional level, the individual level, and the within-individual level. To enhance her ability to reflect on her researcher subjectivity, the author explored her understanding of the five ‘I’s derived from her natural attributes; this helped her to monitor her subjectivity, particularly during critical decision-making. The methodological challenges navigated were related to interviewing elites: the initial approach, establishing a relationship, and negotiating the unequal power relationship during contact. The author developed a number of strategies to strengthen her authority, gain the confidence of her envisaged participants, and secure their collaboration, and she negotiated a form of reciprocity that reflected their needs and expectations.
The research has both theoretical and practical significance. It contributes to methodological development in multi-sited ethnographic research, and the messiness encountered, and strategies developed, in positioning and interviewing elites provide practical lessons for researchers conducting ethnographic research, especially from power-‘less’ positions.

Keywords: multi-sited ethnographic research, elite interview, multilingual China, subjectivity, reciprocity

Procedia PDF Downloads 106
378 Transport Properties of Alkali Nitrites

Authors: Y. Mateyshina, A. Ulihin, N. Uvarov

Abstract:

Electrolytes with different types of charge carriers find wide application in devices such as sensors, electrochemical equipment, and batteries. The electrolyte is one of the key components ensuring stable functioning of such equipment: it must be characterized by high conductivity, thermal stability, and a wide electrochemical window. In addition to many characteristics advantageous in liquid electrolytes, solid-state electrolytes offer good mechanical stability and a wide working temperature range. The search for new solid electrolyte systems with high conductivity is thus a topical task of solid-state chemistry. Families of alkali perchlorates and nitrates have been investigated by us earlier, whereas literature data on the transport properties of alkali nitrites are absent. Nevertheless, the alkali nitrites MeNO2 (Me = Li+, Na+, K+, Rb+, and Cs+), except for the lithium salt, have high-temperature phases with a crystal structure of the NaCl type. These high-temperature phases are orientationally disordered, i.e., the non-spherical anions are reoriented over several equivalent directions in the crystal lattice. Pure lithium nitrite LiNO2 is characterized by an ionic conductivity near 10-4 S/cm at 180°C, is more stable than lithium nitrate, and can be used as a component for the synthesis of composite electrolytes. In this work, composite solid electrolytes in the binary systems LiNO2 - A (A = MgO, γ-Al2O3, Fe2O3, CeO2, SnO2, SiO2) were synthesized and their structural, thermodynamic, and electrical properties investigated. The alkali nitrite was obtained by an exchange reaction from aqueous solutions of barium nitrite and the alkali sulfate. The synthesized salt was characterized by the X-ray powder diffraction technique using a D8 Advance X-ray diffractometer with Cu Kα radiation. Using thermal analysis, the temperatures of dehydration and thermal decomposition of the salt were determined.
The conductivity was measured using a two-electrode scheme in a fore-vacuum (6.7 Pa) with an HP 4284A Precision LCR meter in the frequency range 20 Hz < ν < 1 MHz. The solid composite electrolytes LiNO2 - A (A = MgO, γ-Al2O3, Fe2O3, CeO2, SnO2, SiO2) were synthesized by mixing the preliminarily dehydrated components followed by sintering at 250°C. In the series of alkali metal nitrites from Li+ to Cs+, the conductivity varies non-monotonically with increasing cation radius: the minimum conductivity is observed for KNO2, but with further increase in cation radius the conductivity tends to increase. The work was supported by the Russian Foundation for Basic Research, grant #14-03-31442.
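Conductivity values such as the ~10-4 S/cm quoted above follow from the bulk resistance extracted from the impedance spectrum and the pellet geometry, via sigma = l / (R * A). A minimal sketch of this conversion is given below; the pellet dimensions and resistance are hypothetical, chosen only to reproduce a conductivity of order 10-4 S/cm.

```python
def conductivity(resistance_ohm, thickness_cm, area_cm2):
    """Ionic conductivity (S/cm) of a pellet from its bulk resistance:
    sigma = l / (R * A)."""
    return thickness_cm / (resistance_ohm * area_cm2)

# Hypothetical pellet geometry and bulk resistance read off the impedance spectrum
sigma = conductivity(resistance_ohm=2.5e3, thickness_cm=0.2, area_cm2=0.8)
# sigma is of order 1e-4 S/cm for these illustrative values
```

In practice R is taken from the intercept of the high-frequency semicircle with the real axis of the complex-impedance plot rather than from a single-frequency reading.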

Keywords: conductivity, alkali nitrites, composite electrolytes, transport properties

Procedia PDF Downloads 307
377 Nonlinear Modelling of Sloshing Waves and Solitary Waves in Shallow Basins

Authors: Mohammad R. Jalali, Mohammad M. Jalali

Abstract:

The earliest theories of sloshing waves and solitary waves, based on potential-theory idealisations and irrotational flow, have been extended to be applicable to more realistic domains. To this end, computational fluid dynamics (CFD) methods are widely used. Three-dimensional CFD methods, such as Navier-Stokes solvers with volume-of-fluid treatment of the free surface or with mappings of the free surface, inherently impose high computational expense; therefore, considerable effort has gone into developing depth-averaged approaches. Examples of such approaches include the Green–Naghdi (GN) equations. In a Cartesian system, the GN velocity profile depends on the horizontal directions, x and y; the effect of the vertical (z) direction is also taken into consideration by applying a weighting function in the approximation. GN theory considers the effect of vertical acceleration and the consequent non-hydrostatic pressure; moreover, in GN theory, the flow is rotational. The present study illustrates the application of the GN equations to the propagation of sloshing waves and solitary waves. For this purpose, the GN equation solver is verified against the benchmark tests of Gaussian hump sloshing and solitary wave propagation in shallow basins. Analysis of the free-surface sloshing of even harmonic components of an initial Gaussian hump demonstrates that the GN model gives predictions in satisfactory agreement with the linear analytical solutions. Discrepancies between the GN predictions and the linear analytical solutions arise from wave nonlinearities, caused by the wave amplitude itself and by wave-wave interactions. Numerically predicted solitary wave propagation indicates that the GN model produces simulations in good agreement with the analytical solution of linearised wave theory.
Comparison between the GN model’s numerical prediction and the result from perturbation analysis confirms that the nonlinear interaction between a solitary wave and a solid wall is satisfactorily modelled. Moreover, solitary wave propagation at an angle to the x-axis and the interaction of solitary waves with each other were simulated to validate the developed model.
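The solitary-wave benchmark referred to above is commonly initialised from the classical first-order sech^2 profile of shallow-water theory. A minimal sketch of that profile follows; the amplitude and depth values are arbitrary illustrative choices, and the expression is the standard first-order form rather than the paper's specific GN-consistent initial condition.

```python
import math

G = 9.81  # gravitational acceleration (m/s^2)

def solitary_wave(x, t, amplitude, depth):
    """First-order solitary-wave free-surface elevation over still-water
    depth h: eta = A * sech^2(kappa * (x - c*t)), with decay scale
    kappa = sqrt(3A / (4 h^3)) and celerity c = sqrt(g (h + A))."""
    kappa = math.sqrt(3.0 * amplitude / (4.0 * depth ** 3))
    c = math.sqrt(G * (depth + amplitude))
    arg = kappa * (x - c * t)
    return amplitude / math.cosh(arg) ** 2

# Crest of a 0.1 m wave in 1 m depth, initially centred at x = 0
eta0 = solitary_wave(0.0, 0.0, amplitude=0.1, depth=1.0)  # equals the amplitude
```

The wave translates without change of form at celerity c, which is what makes it a convenient benchmark: the numerical crest position and shape at a later time can be compared directly against this closed-form profile.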

Keywords: Green–Naghdi equations, nonlinearity, numerical prediction, sloshing waves, solitary waves

Procedia PDF Downloads 276
376 Advanced Techniques in Semiconductor Defect Detection: An Overview of Current Technologies and Future Trends

Authors: Zheng Yuxun

Abstract:

This review critically assesses the advancements and prospective developments in defect detection methodologies within the semiconductor industry, an essential domain that significantly affects the operational efficiency and reliability of electronic components. As semiconductor devices continue to decrease in size and increase in complexity, the precision and efficacy of defect detection strategies become increasingly critical. Tracing the evolution from traditional manual inspections to the adoption of advanced technologies employing automated vision systems, artificial intelligence (AI), and machine learning (ML), the paper highlights the significance of precise defect detection in semiconductor manufacturing. It discusses various defect types, such as crystallographic errors, surface anomalies, and chemical impurities, which profoundly influence the functionality and durability of semiconductor devices, underscoring the necessity for their precise identification. The narrative then turns to the technological evolution in defect detection, depicting a shift from rudimentary methods, such as optical microscopy and basic electronic tests, to more sophisticated techniques including electron microscopy, X-ray imaging, and infrared spectroscopy. The incorporation of AI and ML marks a pivotal advancement towards more adaptive, accurate, and expedited defect detection mechanisms. The paper addresses current challenges, particularly the constraints imposed by the diminutive scale of contemporary semiconductor devices, the elevated costs associated with advanced imaging technologies, and the demand for rapid processing that aligns with mass-production standards. A critical gap is identified between the capabilities of existing technologies and the industry's requirements, especially concerning scalability and processing velocity.
Future research directions are proposed to bridge these gaps, suggesting enhancements in the computational efficiency of AI algorithms, the development of novel materials to improve imaging contrast in defect detection, and the seamless integration of these systems into semiconductor production lines. By offering a synthesis of existing technologies and forecasting upcoming trends, this review aims to foster the dialogue and development of more effective defect detection methods, thereby facilitating the production of more dependable and robust semiconductor devices. This thorough analysis not only elucidates the current technological landscape but also paves the way for forthcoming innovations in semiconductor defect detection.
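To make the classical end of the spectrum discussed above concrete, a toy version of a threshold-based surface-anomaly detector might look as follows; the image patch, threshold, and defect location are hypothetical and purely illustrative of the statistical-thresholding idea that learning-based methods generalise.

```python
import statistics

def detect_anomalies(image, k=3.0):
    """Flag pixel coordinates whose intensity deviates from the global mean
    by more than k standard deviations -- a toy classical surface-anomaly
    detector (real systems use local statistics or learned models)."""
    pixels = [p for row in image for p in row]
    mu = statistics.mean(pixels)
    sigma = statistics.pstdev(pixels)
    return [(r, c)
            for r, row in enumerate(image)
            for c, p in enumerate(row)
            if abs(p - mu) > k * sigma]

# Hypothetical 4x4 grayscale patch with one bright defect pixel at row 1, col 2
patch = [
    [100, 101,  99, 100],
    [ 99, 100, 250, 101],
    [100,  98, 100, 102],
    [101, 100,  99, 100],
]
defects = detect_anomalies(patch, k=3.0)  # flags the outlier pixel
```

The brittleness of such fixed-threshold rules under varying illumination and pattern density is precisely the gap that the AI/ML approaches surveyed in this review aim to close.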

Keywords: semiconductor defect detection, artificial intelligence in semiconductor manufacturing, machine learning applications, technological evolution in defect analysis

Procedia PDF Downloads 29
375 University Clusters Using ICT for Teaching and Learning

Authors: M. Roberts Masillamani

Abstract:

There is a phenomenal difference in the teaching methodology adopted at urban colleges and at rural-area colleges, even though bright and talented students may come from rural backgrounds as well. There is, however, a huge dearth of digitization in rural areas and in less developed countries. Today’s students need new skills to compete and succeed in the future; education should be a combination of practical, intellectual, and social skills. What does this mean for rural classrooms, and how can it be achieved? Rural colleges are unable to hire the best teaching resources, since the best teachers aim to move towards the city; yet if city-quality teaching were available everywhere, the rural-urban divide would effectively disappear. This is possible by forming university clusters (UC). A university cluster is a group of renowned and accredited universities coming together to bridge this dearth. The UC will deliver live lectures and allow students from remote areas to participate actively in the classroom. This paper presents a plan of action for providing a better live classroom teaching and learning system from the city to rural areas and to less developed countries. Titled “University Clusters using ICT for Teaching and Learning”, it proposes opening live digital classroom windows for rural colleges where resources are not available, thus reducing the digital divide. This is different from podcasting a lecture, distance learning, or e-learning: the live lecture is streamed through digital equipment to another classroom. Rural students can collaborate with their peers and critics, be assessed, collect information, and acquire different techniques in the assessment and learning process. The system will benefit rural students and teachers and improve their socio-economic status, and it will also increase the confidence of rural students and teachers, thus bringing the concept of ‘Train the Trainee’ into reality.
An educational university cloud with remote infrastructure facilities (RIF) will be built for each cluster for the above program. Users can be informed about the available lecture schedules through the RIF service, and an RIF with an educational cloud can be set up by the universities under one cluster. The paper further discusses university clusters and the methodology to be adopted, as well as extended features such as tutorial classes, library grids, remote laboratory login, and research and development.

Keywords: lesser developed countries, digital divide, digital learning, education, e-learning, ICT, library grids, live classroom windows, RIF, rural, university clusters and urban

Procedia PDF Downloads 460