Search results for: condition monitoring
548 Subjective Probability and the Intertemporal Dimension of Probability to Correct the Misrelation Between Risk and Return of a Financial Asset as Perceived by Investors. Extension of Prospect Theory to Better Describe Risk Aversion
Authors: Roberta Martino, Viviana Ventre
Abstract:
From a theoretical point of view, the relationship between the risk associated with an investment and its expected value is directly proportional, in the sense that the market allows a greater result to those who are willing to take a greater risk. However, empirical evidence shows that this relationship is distorted in the minds of investors and is perceived as exactly the opposite. To understand the discrepancy between the actual actions of the investor and the theoretical predictions, this paper analyzes the essential parameters used for the valuation of financial assets, with particular attention to two elements: probability and the passage of time. Although these may seem at first glance to be two distinct elements, they are closely related. In particular, the error in the theoretical description of the relationship between risk and return lies in the failure to consider the impatience that is generated in the decision-maker when the decision-making context involves events that have not yet happened. In this context, probability loses its objective meaning and, in relation to the psychological aspects of the investor, can only be understood as the degree of confidence that the investor has in the occurrence or non-occurrence of an event. Moreover, the concept of objective probability considers neither the intertemporality that characterizes financial activities nor the limited cognitive capacity of the decision-maker. Cognitive psychology has made it possible to understand that the mind acts with a compromise between quality and effort when faced with very complex choices. To evaluate an event that has not yet happened, it is necessary to imagine it happening in one's head. This projection into the future requires a cognitive effort and is what differentiates choices under conditions of risk from choices under conditions of uncertainty. In fact, since the receipt of the outcome in choices under risk conditions is imminent, the mechanism of self-projection into the future is not necessary to imagine the consequence of the choice, and decision-makers dwell on the objective analysis of possibilities. Financial activities, on the other hand, develop over time, and objective probability is too static to capture the anticipatory emotions that the self-projection mechanism generates in the investor. Assuming that uncertainty is inherent in valuations of events that have not yet occurred, the focus must shift from risk management to uncertainty management. Only in this way can the intertemporal dimension of the decision-making environment and the haste generated by the financial market be properly accounted for. The work considers an extension of prospect theory with a temporal component, with the aim of providing a description of the attitude towards risk with respect to the passage of time.

Keywords: impatience, risk aversion, subjective probability, uncertainty
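To make the combination of ingredients discussed above concrete, the sketch below pairs a prospect-theory value function with a hyperbolic discount factor, a common model of impatience. The functional forms use Tversky and Kahneman's published 1992 estimates (α = 0.88, λ = 2.25), while the discount rate k and the example gamble are invented for illustration; probability weighting, part of full prospect theory, is omitted. This is not the authors' model, only a minimal sketch of the quantities the abstract discusses.

```python
def value(x, alpha=0.88, lam=2.25):
    # Prospect-theory value function (Tversky & Kahneman 1992 estimates):
    # concave for gains, convex and steeper for losses (loss aversion).
    return x**alpha if x >= 0 else -lam * (-x)**alpha

def hyperbolic_discount(t, k=0.1):
    # Hyperbolic discounting as a simple model of impatience; k is assumed.
    return 1.0 / (1.0 + k * t)

def intertemporal_prospect_value(outcomes, t):
    """Discounted prospect value of (probability, outcome) pairs received at time t."""
    return hyperbolic_discount(t) * sum(p * value(x) for p, x in outcomes)

# Example: the same risky gamble evaluated now vs. in 12 months
gamble = [(0.5, 100.0), (0.5, -50.0)]
for t in (0, 12):
    print(t, round(intertemporal_prospect_value(gamble, t), 2))
```

Under these illustrative parameters, the delayed version of the gamble is worth less, so the same objective probabilities produce a different perceived risk-return trade-off as the horizon lengthens.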
547 Recycling of Sintered NdFeB Magnet Waste Via Oxidative Roasting and Selective Leaching
Authors: W. Kritsarikan, T. Patcharawit, T. Yingnakorn, S. Khumkoa
Abstract:
Neodymium-iron-boron (NdFeB) magnets, classified as high-power magnets, are widely used in various applications such as electrical and medical devices and account for 13.5% of the permanent magnet market. Their typical composition of 29–32% Nd, 64.2–68.5% Fe and 1–1.2% B contains a significant amount of rare earth metals, which will be subject to shortages in the future. Domestic NdFeB magnet waste recycling should therefore be developed in order to reduce social and environmental impacts and move toward a circular economy. Most research works focus on recycling the magnet wastes, both from the manufacturing process and at end of life. Each type of waste has different characteristics and compositions, and these directly affect recycling efficiency as well as the types and purity of the recyclable products. This research therefore focused on the recycling of manufacturing NdFeB magnet waste obtained from the sintering stage of magnet production, containing 23.6% Nd, 60.3% Fe and 0.261% B, in order to recover high-purity neodymium oxide (Nd₂O₃) using a hybrid metallurgical process via oxidative roasting and selective leaching techniques. The sintered NdFeB waste was first ground to under 70 mesh prior to oxidative roasting at 550 - 800 °C to enable selective leaching of neodymium in the subsequent leaching step using H₂SO₄ at 2.5 M over 24 h. The leachate was then subjected to drying and roasting at 700 – 800 °C prior to precipitation by oxalic acid and calcination to obtain neodymium oxide as the recycling product. According to XRD analyses, it was found that increasing the oxidative roasting temperature led to an increasing amount of hematite (Fe₂O₃) as the main composition, with a smaller amount of magnetite (Fe₃O₄) found. Peaks of neodymium oxide (Nd₂O₃) were also observed in a lesser amount. Furthermore, neodymium iron oxide (NdFeO₃) was present, and its XRD peaks were pronounced at higher oxidative roasting temperatures. After acid leaching and drying, iron sulfate and neodymium sulfate were mainly obtained. After the roasting step prior to water leaching, iron sulfate was converted to hematite as the main compound, while neodymium sulfate remained unchanged; however, a small amount of magnetite was still detected by XRD. The higher roasting temperature of 800 °C resulted in a greater Fe₂O₃ to Nd₂(SO₄)₃ ratio, indicating a more effective roasting temperature. Iron oxides were subsequently water leached and filtered out, while the solution contained mainly neodymium sulfate. Therefore, a low oxidative roasting temperature not exceeding 600 °C, followed by acid leaching and roasting at 800 °C, gave the optimum condition for the further steps of precipitation and calcination to finally achieve neodymium oxide.

Keywords: NdFeB magnet waste, oxidative roasting, recycling, selective leaching
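As a back-of-the-envelope check on what the reported 23.6 wt% Nd content implies, the sketch below computes the theoretical maximum Nd₂O₃ yield per kilogram of waste from standard molar masses. The calculation is only illustrative and is not taken from the paper.

```python
# Rough theoretical Nd2O3 yield from 1 kg of the reported sintered waste
# (23.6 wt% Nd); illustrative stoichiometry, not a result from the study.
M_ND = 144.242                 # g/mol, neodymium
M_O = 15.999                   # g/mol, oxygen
M_ND2O3 = 2 * M_ND + 3 * M_O   # ~336.48 g/mol

waste_g = 1000.0
nd_g = waste_g * 0.236             # Nd contained in the waste
mol_nd = nd_g / M_ND
nd2o3_g = (mol_nd / 2) * M_ND2O3   # 2 mol Nd per mol Nd2O3

print(f"Max recoverable Nd2O3: {nd2o3_g:.0f} g per kg waste")  # ~275 g
```

Any real recovery would be lower than this ceiling because of losses in the roasting, leaching, precipitation and calcination steps.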
546 Combating the Practice of Open Defecation through Appropriate Communication Strategies in Rural India
Authors: Santiagomani Alex Parimalam
Abstract:
Lack of awareness of the consequences of open defecation, along with myths and misconceptions related to the use of toilets, has led to the continued practice of open defecation in India. The Government of India initiated a multi-pronged intensive communication campaign against the practice of open defecation in the last few years. The primary vision of this communication campaign was to increase demand for toilets and to ensure that all have access to safe sanitation. The campaign strategy included the use of mass media, group and folk media, and interpersonal communication to expedite achieving its objectives. The campaign used various media such as posters, wall writings, slides in cinema theatres, kiosks, pamphlets, newsletters, flip charts and folk media to bring behavioural changes in the communities. The author carried out concurrent monitoring and process documentation of the campaigns initiated by the state of Tamil Nadu, India, between 2013 and 2016, commissioned by UNICEF India. The study was carried out to assess the effectiveness of the communication campaigns in combating the practice of open defecation and promoting the construction of toilets in the state of Tamil Nadu, India. Initial findings revealed gaps in understanding the audience and in the use of appropriate media. The first phase of the communication campaign, named Chi Chi Chollapa (a shaming concept), also revealed that interpersonal communication and group and community media were the most effective strategies for reaching the rural masses. The failure of various other media, especially print media (posters, handbills, newsletters, kiosks), provides insights into where the government needs to invest its resources to bring about health-seeking behaviour in the community. The findings, shared with the government, enabled it to strengthen the campaign, resulting in an improved response. Taking cues from the study, the government recognized women, school children, youth and community leaders as effective carriers of the message. The government narrowed its focus and invested in voluntary workers in the community (village poverty reduction committee workers, VPRCs). The effectiveness of interpersonal communication and peer education by credible community workers threw light on the need for localising the content and the communicator. From this study, we could derive that only community and group media are preferred by people in the rural community. Children, youth, women and credible local leaders proved to be ambassadors in behaviour change communication. This study discloses the lacunae in the communication campaign and points out that the state should have carried out a proper communication needs analysis and piloting. The study used a survey method with random sampling, employing both quantitative and qualitative tools such as interview schedules, in-depth interviews and focus group discussions in rural areas of Tamil Nadu in phases. The findings of the study would provide directions to any future campaign concerning health and rural development.

Keywords: appropriate, communication, combating, open defecation
545 The Effects of Qigong Exercise Intervention on the Cognitive Function in Aging Adults
Authors: D. Y. Fong, C. Y. Kuo, Y. T. Chiang, W. C. Lin
Abstract:
Objectives: Qigong is an ancient Chinese practice in pursuit of a healthier body and a more peaceful mindset. It emphasizes the restoration of vital energy (Qi) in body, mind, and spirit. The practice combines gentle movements and mild breathing, which help practitioners reach a condition of tranquility. On account of the features of Qigong, we first use a cross-sectional methodology to compare the differences among varied levels of Qigong practitioners on cognitive function with event-related potential (ERP) and electroencephalography (EEG). Second, we use a longitudinal methodology to explore the effects on Qigong trainees between pretest and posttest on ERP and EEG. The current study adopts the Attentional Network Test (ANT) to examine the participants' cognitive function; aging-related research has demonstrated a declining trend in cognition in older adults, which exercise might ameliorate. Qigong exercise integrates physical posture (muscle strength), breathing technique (aerobic ability) and focused intention (attention), so we hypothesize that it might improve cognitive function in aging adults. Method: Sixty participants were involved in this study, including 20 young adults (21.65 ± 2.41 y) with normal physical activity (YA), 20 Qigong experts (60.69 ± 12.42 y) with over 7 years of Qigong practice experience (QE), and 20 normal, healthy adults (52.90 ± 12.37 y) with no Qigong practice experience as the experimental group (EG). The EG participants took Qigong classes 2 times a week, 2 hours per session, for 24 weeks, with the purpose of examining the effect of the Qigong intervention on cognitive function. ANT tasks (alert network, orient network, and executive control) were adopted to evaluate participants' cognitive function via the ERP P300 component and P300 amplitude topography. Results: Behavioral data: 1. The reaction time (RT) of YA was faster than that of the other two groups, and EG was faster than QE in the cue and flanker conditions of the ANT task. 2. The RT at posttest was faster than at pretest in EG in the cue and flanker conditions. 3. There was no difference among the three groups on the orient, alert, and executive control networks. ERP data: 1. P300 amplitude in QE was larger than in EG at the Fz electrode in the orient, alert, and executive control networks. 2. P300 amplitude in EG was larger at pretest than at posttest on the orient network. 3. P300 latency revealed no difference among the three groups in the three networks. Conclusion: Taken together, these findings provide neuro-electrical evidence that older adults involved in Qigong practice may develop a more comprehensive compensatory mechanism, which also benefits behavioral performance.

Keywords: Qigong, cognitive function, aging, event-related potential (ERP)
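For readers unfamiliar with how the three ANT network scores are derived, the sketch below computes them from condition-wise mean reaction times using the standard subtractions of Fan et al. (2002). The RT values are hypothetical placeholders, not data from this study.

```python
# Standard ANT network scores from mean reaction times (Fan et al., 2002);
# the RT values below are hypothetical placeholders, not the study's data.
mean_rt = {  # milliseconds
    "no_cue": 560.0, "double_cue": 530.0,
    "center_cue": 545.0, "spatial_cue": 505.0,
    "congruent": 510.0, "incongruent": 610.0,
}

alerting = mean_rt["no_cue"] - mean_rt["double_cue"]        # alert network
orienting = mean_rt["center_cue"] - mean_rt["spatial_cue"]  # orient network
executive = mean_rt["incongruent"] - mean_rt["congruent"]   # executive control

print(f"alerting={alerting:.0f} ms, orienting={orienting:.0f} ms, "
      f"executive={executive:.0f} ms")
```

Larger alerting and orienting scores indicate greater benefit from cues, while a larger executive score indicates more interference from incongruent flankers.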
544 A Comparison of Proxemics and Postural Head Movements during Pop Music versus Matched Music Videos
Authors: Harry J. Witchel, James Ackah, Carlos P. Santos, Nachiappan Chockalingam, Carina E. I. Westling
Abstract:
Introduction: Proxemics is the study of how people perceive and use space. It is commonly proposed that when people like or engage with a person or object, they will move slightly closer to it, often quite subtly and subconsciously. Music videos are known to add entertainment value to a pop song. Our hypothesis was that adding an appropriately matched video to a pop song would lead to a net approach of the head toward the monitor screen compared to simply listening to an audio-only version of the song. Methods: We presented two musical stimuli, in a counterbalanced order, to 27 participants (ages 21.00 ± 2.89; 15 female) seated in front of a 47.5 x 27 cm monitor; all stimuli were based on music videos by the band OK Go: Here It Goes Again (HIGA, boredom ratings (0-100) = 15.00 ± 4.76, mean ± SEM, standard error of the mean) and Do What You Want (DWYW, boredom ratings = 23.93 ± 5.98), which did not differ in the boredom elicited (P = 0.21, rank-sum test). Each participant experienced each song only once, one song (counterbalanced) as audio-only and the other as a music video. Movement was measured by video tracking using Kinovea 0.8, recording from a lateral aspect; before beginning, each participant had a reflective motion-tracking marker placed on the outer canthus of the left eye. Analysis of the Kinovea X-Y coordinate output in comma-separated-values format was performed in Matlab, as were non-parametric statistical tests. Results: We found that the audio-only stimuli (combined for both HIGA and DWYW, mean ± SEM, 35.71 ± 5.36) were significantly more boring than the music video versions (19.46 ± 3.83, P = 0.0066, Wilcoxon signed-rank test (WSRT), Cohen's d = 0.658, N = 28). We also found that participants' heads moved around twice as much during the audio-only versions (speed = 0.590 ± 0.095 mm/sec) as during the video versions (0.301 ± 0.063 mm/sec, P = 0.00077, WSRT). However, the participants' mean head-to-screen distances were not detectably smaller (i.e. head closer to the screen) during the music videos (74.4 ± 1.8 cm) compared to the audio-only stimuli (73.9 ± 1.8 cm, P = 0.37, WSRT). If anything, during the audio-only condition, they were slightly closer. Interestingly, the ranges of the head-to-screen distances were smaller during the music video (8.6 ± 1.4 cm) than during the audio-only condition (12.9 ± 1.7 cm, P = 0.0057, WSRT), the standard deviations were also smaller (P = 0.0027, WSRT), and head height differed by 7 mm between conditions (video 116.1 ± 0.8 vs. audio-only 116.8 ± 0.8 cm above floor, P = 0.049, WSRT). Discussion: As predicted, sitting and listening to experimenter-selected pop music was more boring than when the music was accompanied by a matched, professionally made video. However, we did not find that the proxemics of the situation led to approaching the screen. Instead, adding video led to efforts to control the head to a more central and upright viewing position and to suppress head fidgeting.

Keywords: boredom, engagement, music videos, posture, proxemics
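The study's movement analysis was performed in Matlab on Kinovea coordinate exports; the sketch below shows the same two steps in Python for illustration: mean marker speed from per-frame x/y positions, then a Wilcoxon signed-rank test across conditions. The random placeholder data (seeded near the reported group means) and the 30 fps assumption are inventions, not the study's data.

```python
import numpy as np
from scipy.stats import wilcoxon

def mean_speed(x_mm, y_mm, fps=30.0):
    """Mean marker speed (mm/s) from per-frame x/y positions."""
    dx, dy = np.diff(x_mm), np.diff(y_mm)
    return float(np.mean(np.hypot(dx, dy)) * fps)

rng = np.random.default_rng(0)

# One tracked trajectory (placeholder random walk standing in for Kinovea data)
x = np.cumsum(rng.normal(0, 0.01, 3000))
y = np.cumsum(rng.normal(0, 0.01, 3000))
print(f"example trajectory speed: {mean_speed(x, y):.3f} mm/s")

# Per-participant speeds in each condition (placeholders near reported means)
audio_only = rng.normal(0.59, 0.10, 28)   # cf. reported 0.590 mm/s
music_video = rng.normal(0.30, 0.07, 28)  # cf. reported 0.301 mm/s
stat, p = wilcoxon(audio_only, music_video)
print(f"Wilcoxon W={stat:.1f}, p={p:.5f}")
```

The paired Wilcoxon test is the appropriate non-parametric choice here because each participant contributes one value per condition.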
543 Controlled Drug Delivery System for Delivery of Poor Water Soluble Drugs
Authors: Raj Kumar, Prem Felix Siril
Abstract:
The poor aqueous solubility of many pharmaceutical drugs and potential drug candidates is a major challenge in drug development. Nanoformulation of such candidates is one of the main solutions for the delivery of such drugs. We initially developed the evaporation-assisted solvent-antisolvent interaction (EASAI) method, which is useful for preparing nanoparticles of poorly water-soluble drugs with spherical morphology and particle sizes below 100 nm. However, to further improve the formulation's effect and reduce the number of doses and side effects, it is important to control the delivery of the drugs. Many drug delivery systems are available; among the nano-drug carrier systems, solid lipid nanoparticles (SLNs) have many advantages over the others, such as high biocompatibility, stability, non-toxicity and the ability to achieve controlled drug release and drug targeting. SLNs can be administered through all existing routes due to the high biocompatibility of lipids. SLNs are usually composed of lipid, surfactant and drug, with the drug encapsulated in the lipid matrix. A number of non-steroidal anti-inflammatory drugs (NSAIDs) have poor bioavailability resulting from their poor aqueous solubility. In the present work, SLNs loaded with NSAIDs such as Nabumetone (NBT), Ketoprofen (KP) and Ibuprofen (IBP) were successfully prepared using different lipids and surfactants. We studied and optimized the experimental parameters using a number of lipids, surfactants and NSAIDs. The effect of different experimental parameters, such as lipid-to-surfactant ratio, volume of water, temperature, drug concentration and sonication time, on the particle size of SLNs prepared by hot-melt sonication was studied. It was found that particle size was directly proportional to drug concentration and inversely proportional to surfactant concentration, volume of water added and water temperature. SLNs prepared under optimized conditions were characterized thoroughly using different techniques, including dynamic light scattering (DLS), field emission scanning electron microscopy (FESEM), transmission electron microscopy (TEM), atomic force microscopy (AFM), X-ray diffraction (XRD), differential scanning calorimetry (DSC) and Fourier transform infrared spectroscopy (FTIR). We successfully prepared SLNs below 220 nm using different lipid and surfactant combinations. The drugs KP, NBT and IBP showed 74%, 69% and 53% entrapment efficiency with drug loadings of 2%, 7% and 6%, respectively, in SLNs of Campul GMS 50K and Gelucire 50/13. The in-vitro drug release profile of the drug-loaded SLNs showed that nearly 100% of the drug was released in 6 h.

Keywords: nanoparticles, delivery, solid lipid nanoparticles, hot-melt sonication, poor water soluble drugs, solubility, bioavailability
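The entrapment efficiency and drug loading figures quoted above are conventionally computed from the total drug added, the unencapsulated (free) drug, and the nanoparticle mass. A minimal sketch of these standard formulas follows; the input masses are hypothetical, chosen only to mirror the reported KP values.

```python
# Standard entrapment-efficiency and drug-loading calculations for SLNs;
# the input masses are hypothetical, chosen to mirror the reported KP values.
def entrapment_efficiency(drug_total_mg, drug_free_mg):
    """EE% = encapsulated drug / total drug added."""
    return 100.0 * (drug_total_mg - drug_free_mg) / drug_total_mg

def drug_loading(drug_total_mg, drug_free_mg, nanoparticle_mg):
    """DL% = encapsulated drug / total nanoparticle mass."""
    return 100.0 * (drug_total_mg - drug_free_mg) / nanoparticle_mg

ee = entrapment_efficiency(10.0, 2.6)   # -> 74%, cf. the value reported for KP
dl = drug_loading(10.0, 2.6, 370.0)     # -> 2%, cf. the loading reported for KP
print(f"EE = {ee:.0f}%, DL = {dl:.0f}%")
```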
542 Developing and Shake Table Testing of Semi-Active Hydraulic Damper as Active Interaction Control Device
Authors: Ming-Hsiang Shih, Wen-Pei Sung, Shih-Heng Tung
Abstract:
Semi-active control systems for structures under earthquake excitation have the advantages of being adaptable and requiring little energy. A DSHD (Displacement Semi-Active Hydraulic Damper) was developed by our research team. Shake table test results for this DSHD installed in a full-scale test structure demonstrated that the device brought its energy-dissipating performance into full play under earthquake excitation. The objective of this research is to develop a new AIC (Active Interaction Control device) and to apply shake table tests to examine its energy-dissipation capability. The proposed AIC converts an improved DSHD into an AIC by adding an accumulator. The main concept of this energy-dissipating AIC is to use the interaction between an affiliated structure (sub-structure) and the protected structure (main structure) to transfer the input seismic force into the sub-structure, reducing the structural deformation of the main structure. This concept was tested using a full-scale multi-degree-of-freedom test structure, installed with the proposed AIC and subjected to external forces of various magnitudes, to examine the shock-absorption influence of predictive control, sub-structure stiffness, synchronous control, non-synchronous control and insufficient control position. The test results confirm: (1) the developed device is capable of effectively diminishing the structural displacement and acceleration responses; (2) even a low-precision semi-active control method provided twice the seismic-proofing efficacy of the passive control method; (3) the active control method can avoid the negative influence of amplifying the acceleration response of the structure; (4) this AIC exhibits a time-delay problem, the same as that of ordinary active control methods, which the proposed predictive control method can overcome; (5) condition switching is an important characteristic of the control type, and the test results show that synchronous control is easy to implement and avoids exciting a high-frequency response. These laboratory results confirm that the device developed in this research can use the mutual interaction between the subordinate structure and the main structure to be protected to transfer the quake energy applied to the main structure into the subordinate structure, so that the objective of minimizing the deformation of the main structure can be achieved.

Keywords: DSHD (Displacement Semi-Active Hydraulic Damper), AIC (Active Interaction Control Device), shake table test, full scale structure test, sub-structure, main-structure
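The core interaction idea, diverting part of the seismic energy from the protected structure into a coupled sub-structure, can be illustrated with a highly simplified two-degree-of-freedom model. The sketch below is a passive analogue with invented masses, stiffnesses, coupling damping and ground motion; it does not reproduce the semi-active DSHD/AIC switching logic of the paper.

```python
# Highly simplified 2-DOF sketch of the interaction idea: a main structure
# coupled to a sub-structure so that part of the seismic energy is diverted.
# All masses, stiffnesses, the coupling damping, and the excitation are
# invented values; this is a passive analogue, not the DSHD/AIC itself.
import numpy as np
from scipy.integrate import solve_ivp

m1, m2 = 1000.0, 200.0   # kg: main and sub-structure masses
k1, k2 = 4.0e5, 1.0e5    # N/m: storey stiffnesses
c_int = 2.0e3            # N*s/m: interaction (damper) coupling

def ground_acc(t):       # toy harmonic ground excitation
    return 2.0 * np.sin(2 * np.pi * 1.5 * t)

def rhs(t, y):
    x1, v1, x2, v2 = y   # relative displacements/velocities w.r.t. ground
    f_int = c_int * (v2 - v1)                 # force exchanged via the device
    a1 = (-k1 * x1 + f_int) / m1 - ground_acc(t)
    a2 = (-k2 * x2 - f_int) / m2 - ground_acc(t)
    return [v1, a1, v2, a2]

sol = solve_ivp(rhs, (0.0, 20.0), [0.0, 0.0, 0.0, 0.0], max_step=0.01)
print(f"peak main-structure drift: {np.max(np.abs(sol.y[0])):.4f} m")
```

Setting c_int to zero in this toy model decouples the structures and raises the peak drift of the main mass, which is the qualitative effect the AIC exploits.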
541 Developing Methodology of Constructing the Unified Action Plan for External and Internal Risks in University
Authors: Keiko Tamura, Munenari Inoguchi, Michiyo Tsuji
Abstract:
When disasters occur, in order to speed up decision-making and response, it is common for delegation of authority to be carried out. This tendency is particularly evident when a department or branch of the organization is separated by physical distance from the main body; however, there are some issues to think about. If the department or branch is too dependent on the head office under normal conditions, it might feel lost in disaster response operations when facing the situation. To avoid this problem, an organization should decide, before the disaster, how to delegate authority and who accepts responsibility for what. This paper discusses a method that presents an approach for executing the delegation-of-authority process, implementing authorities, management by objectives, and preparedness plans and agreements. The paper introduces the efforts of three research centers of Niigata University, Japan, to arrange organizations capable of taking the necessary disaster response actions. Each center has a quality all its own. One is a center carrying out research to conserve the crested ibis (Toki in Japanese), an endangered species. Another is a marine biological laboratory. The third is unique because of the old-growth forests maintained as its experimental field. These research centers are on Sado Island, located off the coast of Niigata Prefecture; Japan's second largest island after Okinawa, it is known for possessing a rich history and culture. It takes a 65-minute jetfoil (high-speed ferry) ride to reach Sado Island from the mainland, so the three centers could easily be isolated at the time of a disaster. This sense of urgency encouraged the three centers to undertake organizational restructuring to enhance resilience. The research team from the risk management headquarters offered the following procedure. Step 1: Offer a hazard scenario based on scientific evidence. Step 2: Design a risk management organization for the disaster response function. Step 3: Conduct a participatory approach to build consensus on the overarching objectives. Step 4: Construct the unified operational action plan for the three centers. Step 5: Simulate how to respond in each phase, based on an understanding of the various phases of the disaster timeline. Step 6: Document the results to measure performance and facilitate corrective action. This paper shows the results of verifying the outputs and effects.

Keywords: delegation of authority, disaster response, risk management, unified command
540 ReactorDesign App: An Interactive Software for Self-Directed Explorative Learning
Authors: Chia Wei Lim, Ning Yan
Abstract:
The subject of reactor design, dealing with the transformation of chemical feedstocks into more valuable products, constitutes the central idea of chemical engineering. Despite its importance, the way it is taught to chemical engineering undergraduates has stayed virtually the same over the past several decades, even as the chemical industry increasingly leans on software for the design and daily monitoring of chemical plants. As a result, a learning gap has widened as chemical engineering graduates transition from university to industry, since they are not exposed to effective platforms that relate the fundamental concepts taught during lectures to industrial applications. While the success of technology-enhanced learning (TEL) has been demonstrated in various chemical engineering subjects, TEL in the teaching of reactor design appears to focus on the simulation of reactor processes, as opposed to arguably more important ideas such as the selection and optimization of reactor configuration for different types of reactions. This presents an opportunity to utilize the readily available, easy-to-use MATLAB App platform to create an educational tool that aids the learning of fundamental reactor design concepts and links these concepts to the industrial context. Here, interactive software for the learning of reactor design has been developed to narrow the learning gap experienced by chemical engineering undergraduates. Dubbed the ReactorDesign App, it enables students to design reactors involving complex design equations for industrial applications without being overly focused on the tedious mathematical steps. With the aid of extensive visualization features, the concepts covered during lectures are explicitly utilized, allowing students to understand how these fundamental concepts are applied in the industrial context and equipping them for their careers. In addition, the software leverages the easily accessible MATLAB App platform to encourage self-directed learning. It is useful for reinforcing concepts taught, complementing homework assignments, and aiding exam revision, so students are able to identify lapses in understanding and clarify them. In terms of topics, the app incorporates the design of different types of isothermal and non-isothermal reactors, in line with the lecture content and industrial relevance. The main features include the design of single reactors, such as batch reactors (BR), continuously stirred tank reactors (CSTR), plug flow reactors (PFR), and recycle reactors (RR), as well as multiple reactors consisting of any combination of ideal reactors. A version of the app, together with some guiding questions to aid explorative learning, was released to the undergraduates taking the reactor design module. A survey was conducted to assess its effectiveness, and an overwhelmingly positive response was received, with 89% of the respondents agreeing or strongly agreeing that the app "helped [them] with understanding the unit" and 87% agreeing or strongly agreeing that the app "offers learning flexibility" compared to the conventional lecture-tutorial learning framework. In conclusion, the interactive ReactorDesign App has been developed to encourage self-directed explorative learning of the subject and to demonstrate the industrial applications of the taught design concepts.

Keywords: explorative learning, reactor design, self-directed learning, technology enhanced learning
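To give a flavour of the kind of single-reactor comparison the app visualizes, the sketch below sizes a CSTR and a PFR for a first-order, constant-density, liquid-phase reaction using the textbook design equations. The app itself is MATLAB-based; this Python version with invented rate data is only an illustration of the underlying concepts, not the app's code.

```python
# Sizing a CSTR vs. a PFR for a first-order, liquid-phase reaction A -> B;
# rate constant, flow rate, and target conversion are invented for illustration.
import math

k = 0.25   # 1/min, assumed rate constant
v0 = 10.0  # L/min, volumetric flow rate
X = 0.90   # target conversion

# Constant-density design equations with -rA = k*CA (CA0 cancels out):
V_cstr = v0 * X / (k * (1.0 - X))             # V = v0*X / (k*(1-X))
V_pfr = (v0 / k) * math.log(1.0 / (1.0 - X))  # V = (v0/k)*ln(1/(1-X))

print(f"CSTR: {V_cstr:.0f} L, PFR: {V_pfr:.0f} L")  # 360 L vs. ~92 L
```

The large volume gap at high conversion is exactly the kind of result that motivates teaching reactor selection, not just reactor simulation.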
539 Recycling of Sintered Neodymium-Iron-Boron (NdFeB) Magnet Waste via Oxidative Roasting and Selective Leaching
Authors: Woranittha Kritsarikan
Abstract:
Neodymium-iron-boron (NdFeB) magnets, classified as high-power magnets, are widely used in various applications such as electrical and medical devices and account for 13.5% of the permanent magnet market. Their typical composition of 29–32% Nd, 64.2–68.5% Fe and 1–1.2% B contains a significant amount of rare earth metals, which will be subject to shortages in the future. Domestic NdFeB magnet waste recycling should therefore be developed in order to reduce social and environmental impacts and move toward a circular economy. Most research works focus on recycling the magnet wastes, both from the manufacturing process and at end of life. Each type of waste has different characteristics and compositions, and these directly affect recycling efficiency as well as the types and purity of the recyclable products. This research therefore focused on the recycling of manufacturing NdFeB magnet waste obtained from the sintering stage of magnet production, containing 23.6% Nd, 60.3% Fe and 0.261% B, in order to recover high-purity neodymium oxide (Nd₂O₃) using a hybrid metallurgical process via oxidative roasting and selective leaching techniques. The sintered NdFeB waste was first ground to under 70 mesh prior to oxidative roasting at 550 - 800 °C to enable selective leaching of neodymium in the subsequent leaching step using H₂SO₄ at 2.5 M over 24 hours. The leachate was then subjected to drying and roasting at 700 – 800 °C prior to precipitation by oxalic acid and calcination to obtain neodymium oxide as the recycling product. According to XRD analyses, it was found that increasing the oxidative roasting temperature led to an increasing amount of hematite (Fe₂O₃) as the main composition, with a smaller amount of magnetite (Fe₃O₄) found. Peaks of neodymium oxide (Nd₂O₃) were also observed in a lesser amount. Furthermore, neodymium iron oxide (NdFeO₃) was present, and its XRD peaks were pronounced at higher oxidative roasting temperatures. After acid leaching and drying, iron sulfate and neodymium sulfate were mainly obtained. After the roasting step prior to water leaching, iron sulfate was converted to hematite as the main compound, while neodymium sulfate remained unchanged; however, a small amount of magnetite was still detected by XRD. The higher roasting temperature of 800 °C resulted in a greater Fe₂O₃ to Nd₂(SO₄)₃ ratio, indicating a more effective roasting temperature. Iron oxides were subsequently water leached and filtered out, while the solution contained mainly neodymium sulfate. Therefore, a low oxidative roasting temperature not exceeding 600 °C, followed by acid leaching and roasting at 800 °C, gave the optimum condition for the further steps of precipitation and calcination to finally achieve neodymium oxide.

Keywords: NdFeB magnet waste, oxidative roasting, recycling, selective leaching
538 The Historical Background of Physical Changing Towards Ancient Mosques in Aceh, Indonesia
Authors: Karima Adilla
Abstract:
Aceh province, through which Islam is believed to have entered Indonesia in the 12th century before spreading throughout the archipelago and the rest of Southeast Asia, has several early Islamic mosques that still exist today. However, due to various circumstances, restorations and rehabilitations of those mosques have been carried out in several periods, against diverse backgrounds. This research examines the physical changes of three prominent historical mosques in Aceh Besar and Banda Aceh: Indrapuri Mosque, Baiturrahman Grand Mosque, and Baiturrahim Mosque, built at the time Islam began to develop in Aceh and regarded as eventful mosques. The existence of Indrapuri Mosque, built on the remains of the Lamuri Kingdom's temple, is a historical trace of the Hindu-Buddhist civilization that existed in Aceh before Islam entered and became the majority religion, now held by about 98% of Aceh's total population. The Dutch colonization of Aceh also lies behind the existence of two famous mosques, namely Baiturrahman Grand Mosque and Baiturrahim Mosque, as the colonizer helped rebuild these two sacred mosques to quell the anger of the Acehnese people after their mosque was burnt by the Dutch. Interestingly, despite undergoing a long history spanning the rise of Islam after the collapse of the Hindu-Buddhist kingdoms, colonization and conflict in Aceh, and even the earthquake and tsunami disaster of 2004, these mosques still exist. They are therefore considered silent witnesses of history. However, it was not merely these events that led the mosques to undergo several physical changes; economic, political, social, cultural and religious factors were also highly influential. Instead of directly illustrating the physical changes of the three mosques, this research intends to identify under what conditions their physical appearance kept changing from the sultanate era through the colonial period until post-independence, in terms of architectural style, detail elements and design philosophy, and how the remnant buildings act as a medium to bridge the history. The framework uses qualitative research methods, collecting actual data on the mosques' physical changes through field studies, investigations, library studies and interviews. This research aims to define every trace of the historical issues embedded in the physical changes of these mosques, as they are intertwined in collecting historical proof. Thus, the results will reveal the interrelation between history, the mosques' architectural styles in given periods, the background of the physical changes and their impact. Eventually, this research will also explicate each mosque's role in representing the history of Aceh Besar and Banda Aceh specifically, and Aceh generally, through architectural design concepts.

Keywords: Aceh ancient mosques, Aceh history, Islamic architecture, physical changing
537 Structural Invertibility and Optimal Sensor Node Placement for Error and Input Reconstruction in Dynamic Systems
Authors: Maik Kschischo, Dominik Kahl, Philipp Wendland, Andreas Weber
Abstract:
Understanding and modelling real-world complex dynamic systems in biology, engineering and other fields is often made difficult by incomplete knowledge about the interactions between system states and by unknown disturbances to the system. In fact, most real-world dynamic networks are open systems receiving unknown inputs from their environment. To understand a system and to estimate the state dynamics, these inputs need to be reconstructed from output measurements. Reconstructing the input of a dynamic system from its measured outputs is an ill-posed problem if only a limited number of states is directly measurable. A first requirement for solving this problem is the invertibility of the input-output map. In our work, we exploit the fact that invertibility of a dynamic system is a structural property, which depends only on the network topology. Therefore, it is possible to check for invertibility using a structural invertibility algorithm which counts the number of node-disjoint paths linking inputs and outputs. The algorithm is efficient enough even for large networks of up to a million nodes. To understand the structural features influencing the invertibility of a complex dynamic network, we analyze synthetic and real networks using the structural invertibility algorithm. We find that invertibility largely depends on the degree distribution and that dense random networks are easier to invert than sparse inhomogeneous networks. We show that real networks are often very difficult to invert unless the sensor nodes are carefully chosen. To overcome this problem, we present a sensor node placement algorithm that achieves invertibility with a minimum set of measured states. This greedy algorithm is very fast and is guaranteed to find an optimal sensor node set if one exists. Our results provide a practical approach to experimental design for open dynamic systems. Since invertibility is a necessary condition for unknown input observers and data assimilation filters to work, it can be used as a preprocessing step to check whether these input reconstruction algorithms can be successful. If not, we can suggest additional measurements providing sufficient information for input reconstruction. Invertibility is also important for systems design and model building. Dynamic models are always incomplete, and synthetic systems act in an environment where they receive inputs or even attack signals from their exterior. Being able to monitor these inputs is an important design requirement, which can be achieved by our algorithms for invertibility analysis and sensor node placement.

Keywords: data-driven dynamic systems, inversion of dynamic systems, observability, experimental design, sensor node placement
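The structural check described above, counting node-disjoint paths from input nodes to sensor nodes, can be sketched with standard graph tooling. The example below uses networkx with a toy influence graph and the usual super-source/super-sink construction; it illustrates the counting idea only, and does not reproduce the paper's own algorithm or its greedy sensor placement.

```python
# Count node-disjoint input-to-output paths on a toy influence graph.
# Graph, inputs, and outputs are invented examples for illustration.
import networkx as nx

G = nx.DiGraph([
    ("u1", "a"), ("a", "b"), ("b", "y1"),
    ("u2", "c"), ("c", "b"), ("c", "d"), ("d", "y2"),
])
inputs, outputs = ["u1", "u2"], ["y1", "y2"]

# Super-source/sink trick so one computation covers all inputs and outputs
H = G.copy()
H.add_edges_from(("SRC", u) for u in inputs)
H.add_edges_from((y, "SNK") for y in outputs)

paths = list(nx.node_disjoint_paths(H, "SRC", "SNK"))
print(f"{len(paths)} node-disjoint input-output paths")
# A necessary condition for structural invertibility is that this count
# reaches the number of unknown inputs (here 2).
```

If the count falls short, adding sensor nodes (extra outputs) and re-running the check is the simplest way to explore placements, which is the problem the paper's greedy algorithm solves optimally.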
536 Tests for Zero Inflation in Count Data with Measurement Error in Covariates
Authors: Man-Yu Wong, Siyu Zhou, Zhiqiang Cao
Abstract:
In quality-of-life research, health service utilization is an important determinant of medical resource expenditures on colorectal cancer (CRC) care. A better understanding of the increased utilization of health services is essential for optimizing the allocation of healthcare resources and thus for enhancing service quality, especially in regions with high expenditure on CRC care such as Hong Kong. In assessing the association between health-related quality of life (HRQOL) and health service utilization in patients with colorectal neoplasm, count data models can be used which account for overdispersion or extra zero counts. In our data, the HRQOL evaluation is a self-reported measure obtained from a questionnaire completed by the patients, so misreports and variations in the data are inevitable. Besides, there are more zero counts in the observed number of clinical consultations (observed frequency of zero counts = 206) than expected from a Poisson distribution with mean equal to 1.33 (expected frequency of zero counts = 156). This suggests that an excess of zero counts may exist. Therefore, we study tests for detecting zero inflation in models with measurement error in covariates. Method: Under the classical measurement error model, the approximate likelihood function for the zero-inflated Poisson (ZIP) regression model can be obtained, and Approximate Maximum Likelihood Estimation (AMLE) can be derived accordingly, which is consistent and asymptotically normally distributed. By calculating the score function and Fisher information based on AMLE, a score test is proposed to detect the zero-inflation effect in the ZIP model with measurement error. The proposed test asymptotically follows a standard normal distribution under H₀, and it is consistent with the test proposed for the zero-inflation effect when there is no measurement error. Results: Simulation results show that the empirical power of our proposed test is the highest among existing tests for zero inflation in the ZIP model with measurement error. In the real data analysis, with or without considering measurement error in covariates, the existing tests and our proposed test all imply that H₀ should be rejected with a P-value less than 0.001; i.e., the zero-inflation effect is highly significant, and the ZIP model is superior to the Poisson model for analyzing these data. However, if measurement error in covariates is not considered, only one covariate is significant; if measurement error is considered, only another covariate is significant. Moreover, the direction of the coefficient estimates for these two covariates differs in the ZIP regression model with and without considering measurement error. Conclusion: In our study, the ZIP model should be chosen over the Poisson model when assessing the association between condition-specific HRQOL and health service utilization in patients with colorectal neoplasm, and models taking measurement error into account will give statistically more reliable and precise information.

Keywords: count data, measurement error, score test, zero inflation
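The zero-count comparison in the abstract (206 observed vs. 156 expected) can be turned into a formal statistic with the classical score test for zero inflation without covariates, due to van den Broek (1995), a simpler relative of the measurement-error-adjusted test the paper develops. In the sketch below, the sample size n is back-calculated from the reported expected zero frequency, 156 ≈ n·exp(−1.33); that back-calculation is an assumption, not a number reported by the authors.

```python
# Score test for zero inflation in a Poisson model (van den Broek, 1995),
# without covariates or measurement error; a simpler relative of the test
# the paper develops. n is assumed via 156 = n * exp(-1.33).
import math

n, n0, lam = 590, 206, 1.33
p0 = math.exp(-lam)        # Poisson probability of a zero
expected_zeros = n * p0    # ~156, vs. 206 observed

score = (n0 - n * p0) ** 2 / (n * p0 * (1 - p0) - n * lam * p0 ** 2)
print(f"expected zeros ~ {expected_zeros:.0f}, score statistic = {score:.1f}")
# Under H0 (no zero inflation) the statistic is chi-squared with 1 df;
# values far above 3.84 reject H0 at the 5% level.
```

With these inputs the statistic is roughly 42, far into the rejection region, which is consistent with the abstract's P < 0.001 conclusion.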
535 Influence of a High-Resolution Land Cover Classification on Air Quality Modelling
Authors: C. Silveira, A. Ascenso, J. Ferreira, A. I. Miranda, P. Tuccella, G. Curci
Abstract:
Poor air quality is one of the main environmental causes of premature deaths worldwide, mainly in cities, where the majority of the population lives. It is a consequence of successive land cover (LC) and land use changes resulting from the intensification of human activities. Knowing these landscape modifications in a comprehensive spatiotemporal dimension is, therefore, essential for understanding variations in air pollutant concentrations. In this sense, air quality models are very useful for simulating the physical and chemical processes that affect the dispersion and reaction of chemical species in the atmosphere. However, modelling performance should always be evaluated, since the resolution of the input datasets largely dictates the reliability of the air quality outcomes. Among these data, up-to-date LC is an important parameter to be considered in atmospheric models, since it takes into account changes to the Earth's surface due to natural and anthropic actions and regulates the exchanges of fluxes (emissions, heat, moisture, etc.) between the soil and the air. This work aims to evaluate the performance of the Weather Research and Forecasting model coupled with Chemistry (WRF-Chem) when different LC classifications are used as input. The influence of two LC classifications was tested: i) the 24-class USGS (United States Geological Survey) LC database included by default in the model, and ii) the CLC (Corine Land Cover) and specific high-resolution LC data for Portugal, reclassified according to the new USGS nomenclature (33 classes). Two distinct WRF-Chem simulations were carried out to assess the influence of LC on air quality over Europe and Portugal, as a case study, for the year 2015, using the nesting technique over three simulation domains (25 km², 5 km² and 1 km² horizontal resolution). In the 33-class LC approach, particular emphasis was placed on Portugal, given the detail and higher LC spatial resolution (100 m x 100 m) compared to the CLC data (5000 m x 5000 m). As regards air quality, only the LC impacts on tropospheric ozone concentrations were evaluated, because ozone pollution episodes typically occur in Portugal, in particular during spring/summer, and there are few research works relating this pollutant to LC changes. The WRF-Chem results were validated by season and station typology using background measurements from the Portuguese air quality monitoring network. As expected, better model performance was achieved at rural stations: moderate correlation (0.4 – 0.7), BIAS (10 – 21 µg·m⁻³) and RMSE (20 – 30 µg·m⁻³), where higher average ozone concentrations were also estimated. Comparing both simulations, small differences grounded in the Leaf Area Index and air temperature values were found, although the high-resolution LC approach shows a slight improvement in the model evaluation. This highlights the role of LC in the exchange of atmospheric fluxes and stresses the need to consider a high-resolution LC characterization combined with other detailed model inputs, such as the emission inventory, to improve air quality assessment.

Keywords: land use, spatial resolution, WRF-Chem, air quality assessment
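The station-level validation above rests on three standard metrics, and the sketch below shows how they are conventionally computed from paired model/observation series. The ozone values are placeholders invented for illustration, not the study's data.

```python
# BIAS, RMSE, and correlation for paired model/observation series, the
# standard metrics used in the station-level evaluation; placeholder data.
import numpy as np

def evaluate(model, obs):
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    bias = float(np.mean(model - obs))                  # ug/m^3
    rmse = float(np.sqrt(np.mean((model - obs) ** 2)))  # ug/m^3
    r = float(np.corrcoef(model, obs)[0, 1])
    return bias, rmse, r

obs = [62.0, 75.0, 88.0, 70.0, 95.0]      # hourly O3, ug/m^3 (invented)
model = [70.0, 80.0, 101.0, 85.0, 118.0]  # corresponding model values
print("BIAS=%.1f  RMSE=%.1f  r=%.2f" % evaluate(model, obs))
```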
534 Geospatial Analysis of Spatio-Temporal Dynamic and Environmental Impact of Informal Settlement: A Case of Adama City, Ethiopia
Authors: Zenebu Adere Tola
Abstract:
Informal settlements behave dynamically over space and time, and the number of people living in such housing areas is growing worldwide. In the cities of developing countries, especially in sub-Saharan Africa, poverty, unemployment, poor living conditions, lack of transparency and accountability, and lack of good governance are the major factors leading people to hold land informally and build houses for residential or other purposes. In most Ethiopian cities, informal settlement is most prevalent in peripheral areas, because people can easily obtain land for housing from local farmers, brokers and speculators without permission from the concerned bodies. In Adama, informal settlement has created risky living conditions and led to environmental problems in natural areas; the main reason for this was the lack of sufficient knowledge about informal settlement development. On the one hand, there is a strong need to transform informal into formal settlements and to gain more control over the actual spatial development of informal settlements; on the other hand, to tackle the issue it is essential to understand the scale of the problem. Understanding the scale of the problem requires up-to-date technology, and for this specific problem high-resolution imagery is well suited to detecting informal settlement in Adama city. The main objective of this study is to assess the spatiotemporal dynamics and environmental impacts of informal settlement using object-based image analysis (OBIA). Specifically, the objectives are to identify informal settlement in the study area, determine the change in the extent and pattern of informal settlement, and assess the environmental and social impacts of informal settlement in the study area. The method used to detect informal settlement is object-based image analysis. Reliable procedures for detecting the spatial behavior of informal settlements are required in order to react at an early stage to changing housing situations, and obtaining up-to-date spatial information about informal settlement areas is vital for any enhancement actions in urban or regional planning. This study uses aerial photography to analyze the growth and change of informal settlements in Adama city, and eCognition software to classify built-up and non-built-up areas.

Keywords: informal settlement, change detection, environmental impact, object based analysis
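The OBIA workflow named above, segmenting the image into objects and then classifying each object from its aggregate properties, can be sketched with open tooling. The study itself used eCognition on aerial photography; the scikit-image version below, with a synthetic image and an arbitrary brightness rule for "built-up", is only an illustration of the two-step idea.

```python
# Toy object-based classification: over-segment the image, then label each
# segment from its mean band values. Synthetic data and an arbitrary rule;
# not the study's eCognition rule set.
import numpy as np
from skimage.segmentation import slic

rng = np.random.default_rng(1)
img = rng.uniform(0.0, 1.0, (120, 120, 3))   # stand-in for an aerial tile

segments = slic(img, n_segments=150, compactness=10.0, start_label=0)

built_up = np.zeros(segments.shape, dtype=bool)
for seg_id in np.unique(segments):
    mask = segments == seg_id
    # Arbitrary rule: bright segments -> "built-up" (roofs, bare surfaces)
    if img[mask].mean() > 0.55:
        built_up[mask] = True

print(f"built-up share of tile: {built_up.mean() * 100.0:.1f}%")
```

Running the same classification on imagery from two dates and differencing the built-up masks gives a simple change-detection layer of the kind the study's objectives call for.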
533 Experimental Study of Impregnated Diamond Bit Wear During Sharpening
Authors: Rui Huang, Thomas Richard, Masood Mostofi
Abstract:
The lifetime of impregnated diamond bits and their drilling efficiency are in part governed by the bit wear conditions: not only the extent of the diamonds' wear but also their exposure or protrusion out of the bonding matrix. As the individual diamonds wear, the bonding matrix also wears, through two-body abrasion (direct matrix-rock contact) and three-body erosion (cuttings trapped in the space between rock and matrix). Although there is some work dedicated to the study of diamond bit wear, there is still a lack of understanding of how matrix erosion and diamond exposure relate to the bit drilling response and drilling efficiency, and no literature on the process that governs bit sharpening, a procedure commonly implemented by drillers when the extent of diamond polishing yields extremely low rates of penetration. The aim of this research is (i) to derive a correlation between the wear state of the bit and the drilling performance and (ii) to gain a better understanding of the process associated with tool sharpening. The research effort combines specific drilling experiments and precise mapping of the tool cutting face (impregnated diamond bits and segments). Bit wear is produced by drilling through a rock sample at a fixed rate of penetration for a given period of time. Before and after each wear test, the bit drilling response, and thus efficiency, is mapped out using a tailored experimental protocol. After each drilling test, the bit or segment cutting face is scanned with an optical microscope. The test results show that, at the fixed rate of penetration, diamond exposure increases with drilling distance, but at a decreasing rate, up to a threshold exposure that corresponds to the optimum drilling condition for this feed rate. The data further show that the threshold exposure scales with the rate of penetration, up to a point where exposure reaches a maximum beyond which no more matrix can be eroded under normal drilling conditions. The second phase of this research focuses on the wear process referred to as bit sharpening. Drillers rely on different approaches (increasing the feed rate or decreasing the flow rate) with the aim of tearing worn diamonds away from the bit matrix, wearing out some of the matrix, and thus exposing fresh, sharp diamonds and recovering a higher rate of penetration. Although a common procedure, there is no rigorous methodology to sharpen the bit while avoiding excessive wear or bit damage. This paper aims to gain some insight into the mechanisms that accompany bit sharpening by carefully tracking diamond fracturing, matrix wear, and erosion, and how they relate to the drilling parameters recorded while sharpening the tool. The results show that there exist optimal conditions (operating parameters and duration of the procedure) for sharpening that minimize overall bit wear, and that the extent of bit sharpening can be monitored in real time.

Keywords: bit sharpening, diamond exposure, drilling response, impregnated diamond bit, matrix erosion, wear rate
532 Seafloor and Sea Surface Modelling in the East Coast Region of North America
Authors: Magdalena Idzikowska, Katarzyna Pająk, Kamil Kowalczyk
Abstract:
Seafloor topography is a fundamental issue in geological, geophysical and oceanographic studies. Single-beam or multibeam sonars attached to the hulls of ships emit a hydroacoustic signal from transducers and reproduce the topography of the seabed. This solution provides relevant accuracy and spatial resolution. Bathymetric data from ship surveys are provided by the National Centers for Environmental Information of the National Oceanic and Atmospheric Administration. Unfortunately, most of the seabed remains unsurveyed, as there are still many gaps to be explored between ship survey tracks. Moreover, such measurements are very expensive and time-consuming. A solution is the raster bathymetric models shared by the General Bathymetric Chart of the Oceans. The offered products are a compilation of different sets of data, raw or processed. Measurements of gravity anomalies also serve as indirect data for the development of bathymetric models. Some forms of seafloor relief (e.g. seamounts) increase the force of the Earth's pull, leading to changes in the sea surface. Based on satellite altimetry data, Sea Surface Height and marine gravity anomalies can be estimated, and based on the anomalies, it is possible to infer the structure of the seabed. The main goal of the work is to create regional bathymetric models and models of the sea surface in the area of the east coast of North America, a region of seamounts and undulating seafloor. The research includes an analysis of the methods and techniques used, an evaluation of the interpolation algorithms, model densification, and the creation of grid models. The data used are raster bathymetric models in NetCDF format, survey data from multibeam soundings in MB-System format, and satellite altimetry data from the Copernicus Marine Environment Monitoring Service. The methodology includes data extraction, processing, mapping and spatial analysis. Visualization of the obtained results was carried out with Geographic Information System tools. The result is an extension of the state of knowledge on the quality and usefulness of the data used for seabed and sea surface modelling, and on the accuracy of the generated models. Sea level is averaged over time and space (excluding waves, tides, etc.); its changes, along with knowledge of the topography of the ocean floor, inform us indirectly about the volume of the entire ocean. The true shape of the ocean surface is further varied by such phenomena as tides, differences in atmospheric pressure, wind systems, thermal expansion of water, and phases of ocean circulation. Depending on the location of the point, the greater the depth, the lower the trend of sea level change. Studies show that combining datasets from different sources, with different accuracies, can affect the quality of sea surface and seafloor topography models.

Keywords: seafloor, sea surface height, bathymetry, satellite altimetry
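A typical first step in the workflow described above is loading a raster bathymetric grid from NetCDF and interpolating depths at arbitrary positions, for example along a survey track. The sketch below uses xarray; the file name is hypothetical, and the variable names ("lat", "lon", "elevation") follow common GEBCO conventions but are assumptions here.

```python
# Load a raster bathymetry grid (NetCDF) and sample depths along a track.
# File name and variable names are assumed, following GEBCO conventions.
import numpy as np
import xarray as xr

ds = xr.open_dataset("gebco_regional_subset.nc")   # hypothetical file

# Hypothetical ship track off the North American east coast
track_lat = xr.DataArray(np.linspace(35.0, 40.0, 50), dims="point")
track_lon = xr.DataArray(np.linspace(-70.0, -60.0, 50), dims="point")

# Pointwise (bilinear) interpolation of the grid onto the track
depth = ds["elevation"].interp(lat=track_lat, lon=track_lon)
print(float(depth.min()), float(depth.max()))  # m, negative = below sea level
```

Comparing such interpolated grid depths against co-located multibeam soundings is one simple way to evaluate the interpolation algorithms and model accuracy the study investigates.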
531 The Impact of Sensory Overload on Students on the Autism Spectrum in Italian Inclusive Classrooms: Teachers' Perspectives and Training Needs
Authors: Paola Molteni, Luigi d’Alonzo
Abstract:
Background: Sensory issues are now considered one of the key aspects in defining and diagnosing autism, changing perspectives on behavioural analysis and intervention in mainstream educational services. However, Italian teachers' training is not yet specific on the topic of autism and its sensory-related effects, and this research investigates teachers' capability in understanding students' needs and challenging behaviours in light of sensory perceptions. Objectives: The research aims to analyse mainstream school teachers' awareness of students' sensory perceptions and how this affects classroom inclusion and the learning process. The research questions are: i) Are teachers able to identify students' sensory issues? ii) Are trained teachers more able to identify sensory problems than untrained ones? iii) What is the impact of sensory issues on inclusion in mainstream classrooms? iv) What should teachers know about autistic sensory dimensions? Methods: This research was designed as a pilot study involving a multi-method approach, including action and collaborative research methodology. The research design allows the researcher to capture the complexity of a provincial school district (from kindergarten to high school) through a detailed analysis of selected aspects. The researcher explored the questions described above through 133 questionnaires and 6 focus groups. Results: Mainstream school teachers are not able to confidently recognise the sensory issues of children included in the classroom. The research underlines: how professionals with no specific training on autism are not able to recognise sensory problems in students on the spectrum; how hearing and sight issues have a higher impact on classroom inclusion and students' learning processes; and how a lack of understanding is often followed by misinterpretations of the impact of sensory issues and challenging behaviours. Conclusions: As this research has shown, promoting the understanding of sensory issues related to autism is fundamental to enabling mainstream school teachers to define educational and lifelong plans able to properly answer students' needs and support their real inclusion in the classroom. This study is a good example of how educational research can meet and help daily practice in working with people on the autism spectrum, and support the design of training for mainstream school teachers: the emerging need for dedicated preparation on sensory issues must be considered when planning school district in-service training programmes specifically tailored for inclusive services.

Keywords: autism spectrum condition, scholastic inclusion, sensory overload, teacher's training
530 A Comparative Study in Acute Pancreatitis to Find out the Effectiveness of Early Addition of Ulinastatin to Current Standard Care in Indian Subjects
Authors: Dr. Jenit Gandhi, Dr. Manojith SS, Dr. Nakul GV, Dr. Sharath Honnani, Dr. Shaurav Ghosh, Dr. Neel Shetty, Dr. Nagabhushan JS, Dr. Manish Joshi
Abstract:
Introduction: Acute pancreatitis is an inflammatory condition of the pancreas which begins in pancreatic acinar cells and triggers local inflammation that may progress to a systemic inflammatory response (SIRS), causing distant organ involvement and dysfunction and ending in multiple organ dysfunction syndrome (MODS). Aim: A comparative study in acute pancreatitis to find out the effectiveness of the early addition of Ulinastatin to current standard care in Indian subjects. Methodology: A prospective observational study was done over a study period of 1 year (Dec 2018 – Dec 2019) to evaluate the effect of the early addition of Ulinastatin to the current standard treatment and its efficacy in reducing early complications, analgesic requirement and duration of hospital stay in patients with acute pancreatitis. Results: In the control group, 25 were male and 5 were female; in the test group, 18 were male and 12 female. The majority were in the age group of 30–70 years, with >50% in the 30–50 years age group in both test and control groups. The VAS was a median of grade 3 in the control group compared to a median of grade 2 in the test group; pain lasted the initial 2 days in the test group compared to 4 days in the control group; and analgesics were required for longer in the control group (median 6 days) than in the test group (median 3 days). On follow-up after 5 days, over a period of 2 weeks, complications were fewer in the test group. In the control group, 8 patients developed pleural effusion, 4 developed pseudopancreatic cysts, 2 developed portal vein and splenic vein thrombosis, and 2 required ventilation for ARDS, all treated symptomatically; in the test group, 2 patients developed pleural effusions, 1 developed a pseudopancreatic cyst with splenic artery aneurysm, and 1 developed AKI with MODS, treated symptomatically. The duration of hospital stay was a median of 4 days (2–7 days) in the test group and 7 days (4–10 days) in the control group. Patients returned to normal work in an average of 5 days in the test group compared to 8 days in the control group; the difference was significant. Conclusion: The study concluded that the early addition of Ulinastatin to the current standard treatment of acute pancreatitis is effective in reducing pain, early complications and duration of hospital stay in Indian subjects.

Keywords: Ulinastatin, VAS (visual analogue score), AKI (acute kidney injury), ARDS (acute respiratory distress syndrome)
Procedia PDF Downloads 120529 Quantitative Analysis of the High-Value Bioactive Components of Pre-Germinated and Germinated Pigmented Rice (Oryza sativa L. Cv. Superjami and Superhongmi)
Authors: Lara Marie Pangan Lo, Soo Im Chung, Yao Cheng Zhang, Xingyue Jin, Mi Young Kang
Abstract:
Being the world's most consumed grain crop, rice (Oryza sativa L.) faces increasing demand, which has prompted the development of new rice cultivars with higher bio-functional properties than the commonly consumed white rice. Ordinary rice varieties are already known to be a potential source of a number of nutritional as well as bioactive compounds. To further enhance rice's nutritive value, germination is applied, which also makes the rice tastier and more palatable when cooked. Pigmented rice, on the other hand, has become increasingly popular in recent years for its greater antioxidant potential and other nutraceutical properties, which may help counter the increasing incidence of metabolic diseases. Combining these two parameters, this study sought to quantitatively determine, in both pre-germinated and germinated states, the major bioactive compounds of South Korea's newly developed purplish pigmented rice cultivar Superjami (SJ) and red pigmented rice cultivar Superhongmi (SH), and to compare them against a non-pigmented normal brown (NB) rice variety. Powdered rice grain samples were subjected to a 72-hour germination period, and the quantities of GABA, γ-oryzanol, ferulic acid, and tocopherol and tocotrienol homologues were compared against their pre-germinated condition using γ-aminobutyric acid (GABA) analysis and High Performance Liquid Chromatography (HPLC). Results revealed the effectiveness of germination in enhancing the bioactive components in all rice samples. GABA contents in germinated rice cultivars increased by more than 10-fold, in the order SJ > SH > NB. In addition, the purple rice variety (SJ) showed higher total γ-oryzanol and ferulic acid contents, which increased by >2-fold after germination, followed by the red cultivar SH and then the control, NB. Germinated varieties also possessed higher total tocotrienol content than their pre-germinated state. As for total tocopherol content, SJ had a higher quantity, but the red-pigmented SH (0.16 mg/kg) showed a lower total tocopherol content than the control rice NB (0.86 mg/kg). However, all tocopherol and tocotrienol homologues were present only in small amounts (<3.0 mg/kg) in all pre-germinated and germinated samples. In general, all of the analyzed pigmented rice cultivars were found to possess higher bioactive compound contents than the control NB rice variety. Also, regardless of strain, germinated rice samples had higher bioactive compound contents than their pre-germinated counterparts, demonstrating the effectiveness of germination in enhancing bioactive constituents. Overall, these results suggest the potential of the pigmented rice varieties as natural sources of nutraceuticals in bio-functional food development.Keywords: bioactive compounds, germinated rice, superhongmi, superjami
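As an aside on the arithmetic behind statements such as "increased by more than 10-fold", the fold change is simply the germinated concentration divided by the pre-germinated one. A minimal sketch with hypothetical placeholder concentrations (not the measured HPLC values):

```python
# Fold-change arithmetic for GABA content before/after germination.
# All concentrations below are illustrative placeholders (mg/100 g).
pre_germinated = {"SJ": 4.1, "SH": 3.6, "NB": 2.9}
germinated = {"SJ": 52.0, "SH": 41.0, "NB": 30.5}

for cultivar in pre_germinated:
    fold = germinated[cultivar] / pre_germinated[cultivar]
    print(f"{cultivar}: {fold:.1f}-fold increase in GABA")
```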
Procedia PDF Downloads 397528 Simulation of the Flow in a Circular Vertical Spillway Using a Numerical Model
Authors: Mohammad Zamani, Ramin Mansouri
Abstract:
Spillways are among the most important hydraulic structures of dams, ensuring the stability of the dam and downstream areas at the time of flood. A circular vertical spillway with various inlet forms is very effective when there is not enough space for other spillway types. Hydraulic flow in a vertical circular spillway falls into three regimes: free, orifice, and under pressure (submerged). In this research, the hydraulic flow characteristics of a circular vertical spillway are investigated with a CFD model. Two-dimensional unsteady RANS equations were solved numerically using the Finite Volume Method. The PISO scheme was applied for velocity-pressure coupling. The most widely used two-equation turbulence models, k-ε and k-ω, were chosen to model the Reynolds shear stress term. The power-law scheme was used for the discretization of the momentum, k, ε, and ω equations. The VOF method (geometric reconstruction algorithm) was adopted for interface simulation. In this study, three computational grids (coarse, intermediate, and fine) were used to discretize the simulation domain. To simulate the flow, the k-ε (Standard, RNG, Realizable) and k-ω (Standard and SST) models were used. Also, to find the best wall treatment, two types, the standard wall function and the non-equilibrium wall function, were investigated. The laminar model did not produce satisfactory flow depth and velocity along the morning-glory spillway. The results of the most commonly used two-equation turbulence models (k-ε and k-ω) were identical. Furthermore, the standard wall function produced better results than the non-equilibrium wall function. Thus, for the other simulations, the standard k-ε model with the standard wall function was preferred. The comparison criterion in this study was the trajectory profile of the water jet. The results show that the fine computational grid, a velocity condition at the flow inlet boundary, and a pressure condition at the boundaries in contact with air provide the best possible results. As the jet gets closer to the end of the basin, the difference between the computational and experimental results increases. The mesh with 10602 nodes, the standard k-ε turbulence model, and the standard wall function provide the best results for modeling the flow in a vertical circular spillway. There was good agreement between numerical and experimental results in the upper and lower nappe profiles. In the study of water level over the crest and discharge, at low water levels the numerical results agree well with the experimental ones, but with increasing water level the difference between the numerical and experimental discharge grows. In the study of the flow coefficient, as the P/R ratio decreases, the difference between the numerical and experimental results increases.Keywords: circular vertical, spillway, numerical model, boundary conditions
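For reference, the standard k-ε model selected above solves two transport equations, for the turbulent kinetic energy k and its dissipation rate ε. The sketch below gives them in the usual textbook form; the notation and the model constants are the standard ones, not values quoted from this paper:

```latex
% Standard k-epsilon transport equations (textbook form)
\frac{\partial(\rho k)}{\partial t}
  + \frac{\partial(\rho k u_j)}{\partial x_j}
  = \frac{\partial}{\partial x_j}\!\left[\left(\mu + \frac{\mu_t}{\sigma_k}\right)
      \frac{\partial k}{\partial x_j}\right] + P_k - \rho\varepsilon

\frac{\partial(\rho\varepsilon)}{\partial t}
  + \frac{\partial(\rho\varepsilon u_j)}{\partial x_j}
  = \frac{\partial}{\partial x_j}\!\left[\left(\mu + \frac{\mu_t}{\sigma_\varepsilon}\right)
      \frac{\partial\varepsilon}{\partial x_j}\right]
  + C_{1\varepsilon}\,\frac{\varepsilon}{k}\,P_k
  - C_{2\varepsilon}\,\rho\,\frac{\varepsilon^2}{k}

\mu_t = \rho\, C_\mu \frac{k^2}{\varepsilon},\qquad
C_\mu = 0.09,\; C_{1\varepsilon} = 1.44,\; C_{2\varepsilon} = 1.92,\;
\sigma_k = 1.0,\; \sigma_\varepsilon = 1.3
```

Here P_k is the production of turbulent kinetic energy by the mean shear, and μ_t is the eddy viscosity that closes the Reynolds stress term in the RANS equations.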
Procedia PDF Downloads 84527 Socio-Economic and Psychological Factors of Moscow Population Deviant Behavior: Sociological and Statistical Research
Authors: V. Bezverbny
Abstract:
The relevance of the project stems from the steady growth of deviant-behavior statistics among Moscow citizens. In recent years the socioeconomic health, wealth and life expectancy of Moscow residents have been rising steadily, but crime and drug addiction have also grown seriously. Another serious problem for Moscow has been the economic stratification of the population: the cost of otherwise identical residential areas differs by a factor of 2.5. The project is aimed at comprehensive research and the development of a methodology for evaluating the main factors and causes of growing deviant behavior in Moscow. The main project objective is to find the links between urban environment quality and the dynamics of citizens' deviant behavior at the regional and municipal levels, using statistical research methods and GIS modeling. The conducted research allowed: 1) to evaluate the dynamics of deviant behavior in Moscow's different administrative districts; 2) to describe the reasons for increasing crime, drug addiction, alcoholism and suicide tendencies among the city population; 3) to develop a classification of city districts based on crime rates; 4) to create a statistical database containing the main indicators of deviant behavior of the Moscow population in 2010-2015, including information on crime levels, alcoholism, drug addiction and suicides; 5) to present statistical indicators that characterize the dynamics of deviant behavior of the Moscow population under conditions of expansion of the city territory; 6) to analyze the main sociological theories and factors of deviant behavior in order to concretize the types of deviation; 7) to consider the main theoretical statements of urban sociology devoted to the reasons for deviant behavior in megalopolis conditions. To explore how the factors of deviant behavior differ across the city, a questionnaire was developed and a sociological survey involving more than 1,000 people from different districts of the city was conducted. The sociological survey made it possible to study the socio-economic and psychological factors of deviant behavior. It also included Moscow residents' open-ended answers regarding the most pressing problems in their districts and their reasons for wishing to leave. The results of the sociological survey lead to the conclusion that the main factors of deviant behavior in Moscow are a high level of social inequality, a large number of illegal migrants and homeless people, the proximity of large transport hubs and stations, ineffective police work, alcohol availability and drug accessibility, a low level of psychological comfort for Moscow citizens, and a large number of building projects.Keywords: deviant behavior, megapolis, Moscow, urban environment, social stratification
Procedia PDF Downloads 191526 An Approach on Intelligent Tolerancing of Car Body Parts Based on Historical Measurement Data
Authors: Kai Warsoenke, Maik Mackiewicz
Abstract:
To achieve high quality in assembled car body structures, tolerancing is used to ensure the geometric accuracy of the individual car body parts. There are two main techniques for determining the required tolerances. The first is tolerance analysis, which describes the influence of individually toleranced input values on a required target value. The second is tolerance synthesis, which determines the allocation of individual tolerances so as to achieve a target value. Both techniques are based on classical statistical methods, which assume certain probability distributions. To remain competitive in both saturated and dynamic markets, production processes in vehicle manufacturing must be flexible and efficient. The dimensional specifications selected for the individual body components and the resulting assemblies have a major influence on the quality of the process, for example in the manufacturing of forming tools as operating equipment or at the higher level of car body assembly. As part of metrological process monitoring, manufactured individual parts and assemblies are measured and the results are stored in databases. They serve as information for the temporary adjustment of the production processes and are interpreted by experts in order to derive suitable adjustment measures. In the production of forming tools, this means that time-consuming and costly changes to the tool surface have to be made, while in the body shop, uncertainties that are difficult to control result in cost-intensive rework. The stored measurement results are not currently used to intelligently design tolerances in future processes or to support temporary decisions based on real-world geometric data, yet they offer potential to extend tolerancing methods through data analysis and machine learning models. The purpose of this paper is to examine real-world measurement data from individual car body components, as well as assemblies, in order to develop an approach for using the data in short-term actions and future projects. To this end, the measurement data are first analyzed descriptively in order to characterize their behavior and to determine possible correlations. Subsequently, a database suitable for developing machine learning models is created. The objective is to determine intelligently the position and number of measurement points as well as the local tolerance range; for this purpose, a number of different model types are compared and evaluated, as sketched below. The best-performing models are used to optimize equally distributed measuring points on unknown car body part geometries and to assign tolerance ranges to them. This investigation is still in progress. However, there are areas of the car body parts that behave more sensitively than the part as a whole, indicating that intelligent tolerancing is useful here in order to design and control preceding and succeeding processes more efficiently.Keywords: automotive production, machine learning, process optimization, smart tolerancing
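As an illustration of the model-comparison step described above, a minimal sketch follows. It compares a linear baseline with a Random Forest for predicting a local deviation from part/process features; the feature set, the synthetic data generator and the target are illustrative assumptions, not the authors' measurement database or pipeline.

```python
# Compare model types for predicting a local deviation at a measurement
# point. Features and data are synthetic stand-ins for the stored
# measurement results described in the abstract.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 2000
# Hypothetical features: e.g., point position, sheet thickness, press force, batch
X = rng.normal(size=(n, 4))
# Synthetic "local deviation" target (mm) with a deliberate nonlinear term
y = 0.4 * X[:, 0] + 0.2 * X[:, 1] ** 2 + rng.normal(0.0, 0.1, n)

for name, model in [("linear regression", LinearRegression()),
                    ("random forest", RandomForestRegressor(random_state=1))]:
    score = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name}: mean CV R^2 = {score:.3f}")
```

On data with such nonlinear structure, the tree ensemble typically scores higher than the linear baseline, which is the kind of evidence a model-comparison step like the one above would surface.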
Procedia PDF Downloads 114525 Evaluation of Magnificent Event of India with Special Reference to Maha Kumbha Mela (Fair) 2013-A Congregation of Millions
Authors: Sharad Kumar Kulshreshtha
Abstract:
India is a great land of cultural and traditional diversity, whose spectrum creates a unique ambiance all over the country. In particular, fairs and festivals are an ancient phenomenon in Indian culture. In India, there are thousands of such religious, spiritual and cultural fairs organized on auspicious occasions, and these fairs reflect the effective and efficient role of social governance and the responsibility of Indian society. In this context, a mega event known as the 'Kumbha Mela' (literally 'Kumbha Fair') is organized every twelve years at Prayaag (Allahabad), an ancient city of India, now in the state of Uttar Pradesh. The Kumbha Mela is one of the largest human congregations on Earth, and Allahabad is considered the largest and holiest among the four cities where the Kumbha fair is organized. According to Hindu religious scripture, pilgrims take a dip at the holy confluence known as Triveni Sangam, the meeting point of three sacred rivers of India, i.e., the Ganges, Yamuna and Saraswati (mythical). During the Kumbha fair the River Ganges is believed to turn to nectar, bringing great blessing to everyone who bathes in it. Other activities include religious discussions, devotional singing and the mass feeding of pilgrims and the poor. The venue for the Kumbha Mela depends on the positions that the Sun, Moon and Jupiter hold in different zodiac signs during that period. More than 120 million (12 crore) people visited the Kumbha Fair 2013 in Allahabad. A temporary tented city was set up for the pilgrims over an area of 2 hectares of land along the river Ganges. As many as 5 power substations, temporary police stations, hospitals, bus terminals and stalls were set up to provide facilities to the visitors, and thousands of volunteers participated in assisting this event. The fair administration made every effort to provide facilities to visitors, such as security, sanitation, medical care and regular water and power supply. The efficient and timely arrangements at the Kumbha Mela attracted the attention of many governments and institutions; Harvard University (USA) conducted research to find out how they were made possible. This paper focuses on the effective and efficient planning and preparation of the Kumbha Fair, covering the facilitation process, the role of the various coordinating agencies, risk and crisis management strategies (the Prevention, Preparedness, Response and Recovery (PPRR) approach), the emergency response plan (ERP), safety and security issues, environmental aspects along with health hazards and hygiene, crowd management, evacuation, monitoring, control and evaluation.Keywords: event planning and facility arrangement, risk management, crowd management, India
Procedia PDF Downloads 304524 Online-Scaffolding-Learning Tools to Improve First-Year Undergraduate Engineering Students’ Self-Regulated Learning Abilities
Authors: Chen Wang, Gerard Rowe
Abstract:
The number of undergraduate engineering students enrolled in universities has been increasing rapidly in recent years, leading to challenges associated with increased student-instructor ratios and increased diversity in the academic preparedness of the entrants. An increased student-instructor ratio makes interaction between teachers and students more difficult, with the resulting student 'anonymity' known to be a risk to academic success. With increasing student numbers there is also increasing diversity in the academic preparedness of students at entry to university. The conceptual understanding of entrants has been quantified via diagnostic testing, with the results for the first-year course in electrical engineering showing significant conceptual misunderstandings among the entry cohort. The solution is clearly multi-faceted, but part of it likely involves placing greater demands on students to be masters of their own learning. In consequence, it is highly desirable that instructors help students to develop better self-regulated learning skills. A self-regulated learner is one who is capable of setting their own learning goals, monitoring their study processes, adopting and adjusting learning strategies, and reflecting on their own study achievements. The methods by which instructors might cultivate students' self-regulated learning abilities are receiving increasing attention from instructors and researchers. The aim of this study was to help students fully understand their self-regulated learning skill levels and to provide targeted instruction to help them improve particular learning abilities in order to meet the curriculum requirements. As a survey tool, this research applied the Motivated Strategies for Learning Questionnaire (MSLQ) to collect first-year engineering students' self-reported data on their cognitive abilities, motivational orientations and learning strategies. The MSLQ is a widely used questionnaire for assessing university students' self-regulated learning skills. The questionnaire was offered online as part of the online-scaffolding-learning tools developed to build students' understanding of self-regulated learning theories and learning strategies. The online tools, under development since 2015, are designed to help first-year students understand their self-regulated learning skill levels by providing prompt feedback after they complete the questionnaire. In addition, the online tools supply corresponding learning strategies to students who want to improve specific learning skills. A total of 866 first-year engineering students enrolled in the first-year electrical engineering course were invited to participate in this research project. By the end of the course 857 students had responded, and 738 of their questionnaires were considered valid. Analysis of these surveys showed that 66% of the students thought the online-scaffolding-learning tools helped significantly to improve their self-regulated learning abilities; it was particularly pleasing that 16.4% of the respondents thought the tools were extremely effective. A current thrust of our research is to investigate the relationships between students' self-regulated learning abilities and their academic performance. Our results are being used by the course instructors as they revise the curriculum and pedagogy for this fundamental first-year engineering course, but the general principles we have identified are applicable to most first-year STEM courses.Keywords: academic preparedness, online-scaffolding-learning tool, self-regulated learning, STEM education
Procedia PDF Downloads 108523 Evaluation of Different Cropping Systems under Organic, Inorganic and Integrated Production Systems
Authors: Sidramappa Gaddnakeri, Lokanath Malligawad
Abstract:
Research on the production technology of individual crops/commodities/breeds alone has not brought sustainability or stability to crop production; the sustainability of the system over the years depends on the maintenance of soil health. An organic production system, which includes the use of organic manures, biofertilizers and green manuring for nutrient supply and biopesticides for plant protection, helps to sustain productivity even under adverse climatic conditions. The study was initiated to evaluate the performance of different cropping systems under organic, inorganic and integrated production systems at the Institute of Organic Farming, University of Agricultural Sciences, Dharwad (Karnataka, India), under the ICAR Network Project on Organic Farming. The trial was conducted for four years (2013-14 to 2016-17) on a fixed site. Five cropping systems, viz., sequence cropping of cowpea–safflower, greengram–rabi sorghum and maize–bengalgram, sole cropping of pigeonpea, and intercropping of groundnut + cotton, were evaluated under six nutrient management practices. The nutrient management practices are NM1 (100% organic farming: organic manures equivalent to 100% N (cereals/cotton) or 100% P2O5 (legumes)), NM2 (75% organic farming: organic manures equivalent to 75% N (cereals/cotton) or 100% P2O5 (legumes) + cow urine and vermi-wash application), NM3 (integrated farming: 50% organic + 50% inorganic nutrients), NM4 (integrated farming: 75% organic + 25% inorganic nutrients), NM5 (100% inorganic farming: recommended dose of inorganic fertilizers) and NM6 (recommended dose of inorganic fertilizers + recommended rate of farmyard manure (FYM)). Among the cropping systems evaluated under the different production systems, the groundnut + hybrid cotton (2:1) intercropping system was found more remunerative than the sole pigeonpea cropping system, the greengram–sorghum sequence cropping system, the maize–chickpea sequence cropping system and the cowpea–safflower sequence cropping system, irrespective of the production system. Production practices involving the application of recommended rates of fertilizers + recommended rates of organic manures (farmyard manure) produced higher net monetary returns and a higher B:C ratio than the integrated production systems involving 50% organic + 50% inorganic or 75% organic + 25% inorganic nutrients and than the purely organic production systems. The two organic production systems, viz., 100% organic production (organic manures equivalent to 100% N (cereals/cotton) or 100% P2O5 (legumes)) and 75% organic production (organic manures equivalent to 75% N (cereals) or 100% P2O5 (legumes) + cow urine and vermi-wash application), were found to be on par. Further, the integrated production systems involving the application of organic manures and inorganic fertilizers were found more beneficial than the organic production systems.Keywords: cropping systems, production systems, cowpea, safflower, greengram, pigeonpea, groundnut, cotton
Procedia PDF Downloads 198522 Coastal Resources Spatial Planning and Potential Oil Risk Analysis: Case Study of Misratah’s Coastal Resources, Libya
Authors: Abduladim Maitieg, Kevin Lynch, Mark Johnson
Abstract:
The goal of the Libyan Environmental General Authority (EGA) and the National Oil Corporation (Department of Health, Safety & Environment) during the last 5 years has been to adopt a common approach to coastal and marine spatial planning. Protection and planning of the coastal zone are significant issues for Libya due to the length of its coast, the high rate of oil export, and the potential negative impacts of spills on coastal and marine habitats. Coastal resource scenarios constitute an important tool for exploring the long-term and short-term consequences of oil spill impacts and the available response options, providing an integrated perspective on mitigation. To investigate this, the paper reviews the Misratah coastal parameters to present the physical and human controls and attributes of coastal habitats, as a first step in understanding how they may be damaged by an oil spill. The paper also investigates coastal resources, providing a better understanding of the resources and factors that affect the integrity of the ecosystem. The study therefore describes the potential spatial distribution of oil spill risk and the value of coastal resources, and creates spatial maps of coastal resources and their vulnerability to oil spills along the coast. This study proposes an analysis of coastal resource condition at the local level in the Misratah region of the Mediterranean Sea, considering the implementation of coastal and marine spatial planning over time as an indication of the will to manage urban development. Analysis of oil spill contamination and its impact on coastal resources depends on (1) the oil spill sequence, (2) the oil spill location, and (3) oil spill movement near the coastal area. The resulting maps show the natural features, socio-economic activities and environmental resources along the coast, as well as oil spill locations. Moreover, the study provides a substantial geodatabase, which is required for coastal sensitivity index mapping and coastal management studies. The outcome of the study provides the information necessary to set an Environmental Sensitivity Index (ESI) for the Misratah shoreline, which can be used for the management of coastal resources and for setting boundaries for each coastal sensitivity sector, as well as to help planners measure the impact of oil spills on coastal resources. Geographic Information System (GIS) tools were used to store and illustrate the spatial convergence of existing socio-economic activities, such as fishing, tourism and the salt industry, and ecosystem components, such as sea turtle nesting areas, sabkha habitats and migratory bird feeding sites. These geodatabases help planners investigate the vulnerability of coastal resources to an oil spill.Keywords: coastal and marine spatial planning advancement training, GIS mapping, human uses, ecosystem components, Misratah coast, Libya, oil spill
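To make the GIS overlay idea concrete, here is a minimal sketch (not the authors' workflow) of how coastal resources falling inside an oil-spill risk zone can be flagged. The coordinates, the 10 km buffer distance and the attribute names are illustrative assumptions, not data from the Misratah geodatabase.

```python
# Flag coastal resource sites that fall inside a buffered spill-risk zone.
# All geometries and names below are illustrative placeholders.
import geopandas as gpd
from shapely.geometry import Point

# Hypothetical coastal resource sites near Misratah (lon, lat)
resources = gpd.GeoDataFrame(
    {"name": ["turtle_nesting", "salt_industry", "fishing_port"]},
    geometry=[Point(15.10, 32.40), Point(15.25, 32.35), Point(15.05, 32.45)],
    crs="EPSG:4326",
).to_crs(epsg=32633)  # project to UTM 33N so buffer distances are in metres

# Hypothetical spill location buffered to a 10 km risk zone
spill = gpd.GeoDataFrame(
    geometry=[Point(15.12, 32.42)], crs="EPSG:4326"
).to_crs(epsg=32633)
risk_zone = spill.buffer(10_000)

# Mark each resource site as inside/outside the risk zone
resources["at_risk"] = resources.within(risk_zone.iloc[0])
print(resources[["name", "at_risk"]])
```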
Procedia PDF Downloads 360521 Quantitative Evaluation of Efficiency of Surface Plasmon Excitation with Grating-Assisted Metallic Nanoantenna
Authors: Almaz R. Gazizov, Sergey S. Kharintsev, Myakzyum Kh. Salakhov
Abstract:
This work deals with background signal suppression in tip-enhanced near-field optical microscopy (TENOM). The background appears because an optical signal is detected not only from the subwavelength area beneath the tip but also from the wider diffraction-limited area of the laser's waist, which might contain another substance. The background can be reduced by using a tapered probe with a grating on its lateral surface, where external illumination causes surface plasmon excitation. Effective light coupling requires a grating whose parameters are perfectly matched to the given incident light. This work is devoted to an analysis of the light-grating coupling and to a search for grating parameters that enhance the near-field light beneath the tip apex. The aim of this work is to find the figure of merit of plasmon excitation as a function of the grating period and of the location of the grating with respect to the apex. In our treatment, the metallic grating on the lateral surface of the tapered plasmonic probe is illuminated by a plane wave whose electric field is perpendicular to the sample surface. The theoretical model of the efficiency of plasmon excitation and propagation toward the apex is tested by FDTD-based numerical simulation. The electric field of the incident light is enhanced at every single slit of the grating due to the lightning-rod effect. Hence, the grating causes amplitude and phase modulation of the incident field in various ways, depending on the geometry and material of the grating. The phase-modulating grating on the probe is a kind of metasurface that enables manipulation of the spatial frequencies of the incident field. The spatial-frequency-dependent electric field is found from the angular spectrum decomposition. If one of the components satisfies the phase-matching condition, then one can readily calculate the figure of merit of plasmon excitation, defined as the ratio of the intensities of the surface mode and the incident light. During propagation towards the apex, the surface wave undergoes losses in the probe material, radiation losses, and mode compression. There is an optimal location of the grating with respect to the apex; its value is found by matching the quadratic law of mode compression against the exponential law of light extinction. Finally, the theoretical analysis and numerical simulations of plasmon excitation performed here demonstrate that various surface waves can be effectively excited by using the overtones of the grating period or by phase modulation of the incident field. Gratings with such periods are easy to fabricate. A tapered probe with the grating effectively enhances and localizes the incident field at the sample.Keywords: angular spectrum decomposition, efficiency, grating, surface plasmon, taper nanoantenna
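For context, the phase-matching condition invoked above is the standard grating-coupling relation. The sketch below uses conventional notation (λ — wavelength, θ — angle of incidence, Λ — grating period, m — diffraction order, k_spp — surface plasmon wavevector), which is assumed here rather than quoted from the paper:

```latex
% Grating-assisted phase matching for surface plasmon excitation
k_{\mathrm{spp}} \;=\; \frac{2\pi}{\lambda}\,\sin\theta \;+\; m\,\frac{2\pi}{\Lambda},
\qquad m = \pm 1, \pm 2, \ldots

% Figure of merit as defined in the abstract: intensity ratio of the
% surface mode to the incident light
\eta \;=\; \frac{I_{\mathrm{spp}}}{I_{\mathrm{inc}}}
```

When some diffraction order m fulfils the first relation, the figure of merit η can be evaluated as the intensity ratio defined in the abstract; using higher orders ("overtones" of the grating period) is what allows longer, easier-to-fabricate periods to couple the same incident light.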
Procedia PDF Downloads 282520 Amorphous Aluminophosphates: An Insight to the Changes in Structural Properties and Catalytic Activity by the Incorporation of Transition Metals
Authors: A. Hamza, H. Kathyayini, N. Nagaraju
Abstract:
Aluminophosphates, both amorphous and crystalline, find applications as adsorbents, ceramics and pigments, and as catalysts/catalyst supports in organic fine chemical synthesis. Most of these applications depend on the type of metal incorporated and on the particle size, surface area, porosity and morphology of the aluminophosphate. The porous and surface properties of these materials are normally fine-tuned by adopting various preparation methodologies. Numerous crystalline microporous and mesoporous aluminophosphates and metal-aluminophosphates have been reported in the literature, in which the synthesis has been carried out using structure-directing organic molecules/surfactants. In the present work, amorphous aluminophosphate (AlP) and metal-aluminophosphates MAlP (M = Cu, Zn, Cr, Fe, Ce and Zr), and their mixed forms M1M2AlP, are prepared under typical precipitation conditions, i.e., at low temperature, in order to maintain the von Weimarn relative supersaturation of the precipitating medium and obtain small precipitate particles. These materials are prepared without using any surfactants. All materials are thoroughly characterised for surface and bulk properties by the N2 adsorption-desorption technique, XRD, FT-IR, TG and SEM. The materials are also analysed for the amount and strength of their surface acidic and basic sites by the NH3-TPD and CO2-TPD techniques, respectively. All the materials prepared in this work are investigated for their catalytic activity in the following applications: the synthesis of industrially important jasminaldehyde via aldol condensation of n-heptanal and benzaldehyde, and the synthesis of biologically important chalcones by Claisen-Schmidt condensation of benzaldehyde and substituted acetophenones. The effects of catalyst amount, reaction duration, reaction temperature and molar ratio of the reactants have been studied. The porosity of pure aluminophosphate is found to change significantly upon the incorporation of transition metals during its preparation. The pore size increases from microporous to mesoporous and finally to macroporous in the following order of metals: Cu = Zn < Cr < Ce < Fe = Zr. The change in surface area and porosity of the double metal-aluminophosphates depends on the concentration of both metals. The acidity of the aluminophosphate either increases or decreases depending on the type and valence of the metals loaded, while a good number of basic sites are created in the metal-aluminophosphates irrespective of the metals used. Maximum catalytic activity for the synthesis of both jasminaldehyde and chalcones is obtained with FeAlP as the catalyst; this material is characterized by decreased strength and concentration of acidic sites together with an optimum level of basic sites.Keywords: amorphous metal-aluminophosphates, surface properties, acidic-basic properties, aldol and Claisen-Schmidt condensation, jasminaldehyde, chalcone
Procedia PDF Downloads 303519 Machine Learning Techniques for Estimating Ground Motion Parameters
Authors: Farid Khosravikia, Patricia Clayton
Abstract:
The main objective of this study is to evaluate the advantages and disadvantages of various machine learning techniques for forecasting ground-motion intensity measures given source characteristics, source-to-site distance, and local site conditions. Intensity measures such as peak ground acceleration and velocity (PGA and PGV, respectively), as well as 5% damped elastic pseudospectral accelerations at different periods (PSA), are indicators of the strength of shaking at the ground surface. Estimating these variables for future earthquake events is a key step in seismic hazard assessment and, potentially, in subsequent risk assessment of different types of structures. Typically, linear regression-based models with pre-defined equations and coefficients are used in ground motion prediction. However, due to the restrictions of linear regression methods, such models may not capture the more complex nonlinear behaviors that exist in the data. Thus, this study comparatively investigates the potential benefits of employing other machine learning techniques in ground motion prediction, such as Artificial Neural Networks, Random Forests, and Support Vector Machines. The algorithms are adjusted to quantify the event-to-event and site-to-site variability of the ground motions by implementing these as random effects in the proposed models, thereby reducing the aleatory uncertainty. All the algorithms are trained using a selected database of 4,528 ground motions, including 376 seismic events with magnitudes 3 to 5.8, recorded over a hypocentral distance range of 4 to 500 km in Oklahoma, Kansas, and Texas since 2005. The choice of this database stems from the recent increase in the seismicity rate of these states, attributed to petroleum production and wastewater disposal activities, which necessitates further investigation of the ground motion models developed for these states. The accuracy of the models in predicting intensity measures, the generalization capability of the models for future data, and the usability of the models are discussed in the evaluation process. The results indicate that the algorithms satisfy some physically sound characteristics, such as magnitude scaling and distance dependency, without requiring pre-defined equations or coefficients. Moreover, it is shown that, when sufficient data are available, all the alternative algorithms tend to provide more accurate estimates than the conventional linear regression-based method, with Random Forest in particular outperforming the other algorithms. However, the conventional method is a better tool when limited data are available.Keywords: artificial neural network, ground-motion models, machine learning, random forest, support vector machine
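To make the workflow concrete, here is a minimal sketch (assumed, not the authors' code) of a Random Forest ground-motion model: ln(PGA) is predicted from magnitude, hypocentral distance, and a site parameter. The synthetic data generator below merely stands in for a real ground-motion database, and the toy attenuation form and its coefficients are illustrative, not fitted values.

```python
# Random Forest ground-motion model sketch: ln(PGA) from magnitude,
# hypocentral distance and a site parameter. Data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 4528  # matching the size of the database described above
mag = rng.uniform(3.0, 5.8, n)
r_hyp = rng.uniform(4.0, 500.0, n)   # hypocentral distance, km
vs30 = rng.uniform(200.0, 800.0, n)  # site stiffness proxy, m/s
# Toy attenuation relation plus noise, used only to create training data
ln_pga = (1.2 * mag - 1.5 * np.log(r_hyp)
          - 0.3 * np.log(vs30 / 760) + rng.normal(0.0, 0.5, n))

X = np.column_stack([mag, r_hyp, vs30])
X_tr, X_te, y_tr, y_te = train_test_split(X, ln_pga, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_tr, y_tr)
print("R^2 on held-out data:", round(model.score(X_te, y_te), 3))
```

Note that this sketch omits the random-effects treatment of event and site terms mentioned in the abstract; partitioning the residuals by event and station would be a natural next step.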
Procedia PDF Downloads 121