Search results for: computer applications
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8253

573 Contribution to the Study of Automatic Epileptiform Pattern Recognition in Long Term EEG Signals

Authors: Christine F. Boos, Fernando M. Azevedo

Abstract:

Electroencephalogram (EEG) is a record of the electrical activity of the brain that has many applications, such as monitoring alertness, coma and brain death; locating damaged areas of the brain after head injury, stroke and tumor; monitoring anesthesia depth; researching physiology and sleep disorders; and researching epilepsy and localizing the seizure focus. Epilepsy is a chronic condition, or a group of diseases of high prevalence, still poorly explained by science and whose diagnosis is still predominantly clinical. The EEG recording is considered an important test for epilepsy investigation, and its visual analysis is very often applied for clinical confirmation of epilepsy diagnosis. Moreover, this EEG analysis can also be used to help define the type of epileptic syndrome, determine the epileptogenic zone, assist in the planning of drug treatment and provide additional information about the feasibility of surgical intervention. In the context of diagnosis confirmation, the analysis is made using long-term EEG recordings that are at least 24 hours long and acquired with a minimum of 24 electrodes, in which the neurophysiologists perform a thorough visual evaluation of EEG screens in search of specific electrographic patterns called epileptiform discharges. Considering that the EEG screens usually display 10 seconds of the recording, the neurophysiologist has to evaluate 360 screens per hour of EEG, or a minimum of 8,640 screens per long-term EEG recording. Analyzing thousands of EEG screens in search of patterns that have a maximum duration of 200 ms is a very time-consuming, complex and exhaustive task. Because of this, over the years several studies have proposed automated methodologies that could facilitate the neurophysiologists' task of identifying epileptiform discharges, and a large number of these methodologies used neural networks for the pattern classification.
One of the differences among these methodologies is the type of input stimuli presented to the networks, i.e., how the EEG signal is introduced into the network. Five types of input stimuli have been commonly found in the literature: the raw EEG signal, morphological descriptors (i.e., parameters related to the signal's morphology), the Fast Fourier Transform (FFT) spectrum, Short-Time Fourier Transform (STFT) spectrograms and Wavelet Transform features. This study evaluates the application of these five types of input stimuli and compares the classification results of neural networks that were implemented using each of these inputs. The efficiency obtained using the raw signal varied between 43% and 84%. The results of the FFT spectrum and the STFT spectrograms were quite similar, with average efficiencies of 73% and 77%, respectively. The efficiency of Wavelet Transform features varied between 57% and 81%, while the morphological descriptors presented efficiency values between 62% and 93%. After the simulations, we observed that the best results were achieved when either morphological descriptors or Wavelet features were used as input stimuli.
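The screen-count workload stated in the abstract follows from simple arithmetic on the 10-second window and the 24-hour minimum recording length; a minimal sketch:

```python
# Workload of visual long-term EEG review, per the figures in the abstract.
SECONDS_PER_SCREEN = 10   # one EEG screen displays 10 s of signal
RECORDING_HOURS = 24      # minimum long-term recording length

def screens_per_hour(window_s: int = SECONDS_PER_SCREEN) -> int:
    """Number of screens a neurophysiologist must review per hour of EEG."""
    return 3600 // window_s

def screens_per_recording(hours: int = RECORDING_HOURS) -> int:
    """Minimum number of screens in one long-term recording."""
    return hours * screens_per_hour()

print(screens_per_hour())       # 360
print(screens_per_recording())  # 8640
```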

Keywords: Artificial neural network, electroencephalogram signal, pattern recognition, signal processing

Procedia PDF Downloads 520
572 Synthesis of Carbon Nanotubes from Coconut Oil and Fabrication of a Non-Enzymatic Cholesterol Biosensor

Authors: Mitali Saha, Soma Das

Abstract:

The fabrication of nanoscale materials for use in chemical sensing, biosensing and biological analyses has proven a promising avenue in the last few years. Cholesterol has aroused considerable interest in recent years on account of its being an important parameter in clinical diagnosis. There is a strong positive correlation between high serum cholesterol level and arteriosclerosis, hypertension, and myocardial infarction. Enzyme-based electrochemical biosensors have shown high selectivity and excellent sensitivity, but the enzyme is easily denatured during its immobilization procedure and its activity is also affected by temperature, pH, and toxic chemicals. Besides, the reproducibility of enzyme-based sensors is not very good, which further restricts the application of cholesterol biosensors. It has been demonstrated that carbon nanotubes can promote electron transfer with various redox-active proteins, ranging from cytochrome c to glucose oxidase with a deeply embedded redox center. In continuation of our earlier work on the synthesis and applications of carbon- and metal-based nanoparticles, we report here the synthesis of carbon nanotubes (CCNT) by burning coconut oil under insufficient flow of air using an oil lamp. The soot was collected from the top portion of the flame, where the temperature was around 650 °C, and was purified, functionalized and then characterized by SEM, p-XRD and Raman spectroscopy. The SEM micrographs showed the formation of the tubular structure of CCNT with diameters below 100 nm. The XRD pattern indicated the presence of two predominant peaks at 25.2° and 43.8° (2θ), which corresponded to the (002) and (100) planes of CCNT, respectively. The Raman spectrum (514 nm excitation) showed a band at 1600 cm⁻¹ (G band), related to the vibration of sp²-bonded carbon, and a band at 1350 cm⁻¹ (D band), attributed to the vibrations of sp³-bonded carbon.
A nonenzymatic cholesterol biosensor was then fabricated on an insulating Teflon material containing three silver wires at the surface, covered by CCNT obtained from coconut oil. Here, the CCNTs served as both the working and counter electrodes, whereas the reference electrode and electric contacts were made of silver. The dimensions of the electrode were 3.5 cm × 1.0 cm × 0.5 cm (length × width × height), and it is ideal for working with 50 µL volumes, like standard screen-printed electrodes. The voltammetric behavior of cholesterol at the CCNT electrode was investigated by cyclic voltammetry and differential pulse voltammetry using 0.001 M H2SO4 as electrolyte. The influence of experimental parameters on the peak currents of cholesterol, such as pH, accumulation time, and scan rate, was optimized. Under optimum conditions, the peak current was found to be linear in the cholesterol concentration range from 1 µM to 50 µM, with a sensitivity of ~15.31 µA µM⁻¹ cm⁻² with a lower detection limit of 0.017 µM and a response time of about 6 s. The long-term storage stability of the sensor was tested for 30 days, and the current response was found to be ~85% of its initial response after 30 days.
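Figures such as the sensitivity and detection limit reported above are conventionally derived from a linear calibration of peak current against concentration (sensitivity = slope/electrode area; detection limit = 3σ of the blank divided by the slope). A hedged sketch of that derivation; the calibration points, electrode area, and blank noise below are hypothetical illustrations, not data from the paper:

```python
def linear_fit(x, y):
    """Ordinary least-squares slope and intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
            sum((xi - mx) ** 2 for xi in x)
    return slope, my - slope * mx

# Hypothetical calibration: peak current (µA) vs cholesterol (µM),
# for an assumed electrode area of 0.5 cm^2.
conc = [1, 5, 10, 20, 50]                   # µM
current = [7.7, 38.3, 76.6, 153.1, 382.8]   # µA (illustrative values)

slope, _ = linear_fit(conc, current)        # µA/µM
area_cm2 = 0.5                              # assumed geometric area
sensitivity = slope / area_cm2              # µA µM^-1 cm^-2

sigma_blank = 0.04                          # hypothetical blank noise, µA
lod = 3 * sigma_blank / slope               # 3-sigma detection limit, µM
print(round(sensitivity, 2), round(lod, 3))
```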

Keywords: coconut oil, CCNT, cholesterol, biosensor

Procedia PDF Downloads 279
571 Rheological Characterization of Polysaccharide Extracted from Camelina Meal as a New Source of Thickening Agent

Authors: Mohammad Anvari, Helen S. Joyner (Melito)

Abstract:

Camelina sativa (L.) Crantz is an oilseed crop currently used for the production of biofuels. However, the low price of diesel and gasoline has made camelina an unprofitable crop for farmers, leading to declining camelina production in the US. Hence, the ability to utilize the camelina byproduct (defatted meal) after oil extraction would be a pivotal factor in promoting the economic value of the plant. Camelina defatted meal is rich in proteins and polysaccharides. The great diversity in polysaccharide structural features provides a unique opportunity for use in food formulations as thickeners, gelling agents, emulsifiers, and stabilizers. There is currently a great degree of interest in the study of novel plant polysaccharides, as they can be derived from readily accessible sources and have potential application in a wide range of food formulations. However, there are no published studies on the polysaccharide extracted from camelina meal, and its potential industrial applications remain largely underexploited. Rheological properties are a key functional feature of polysaccharides and are highly dependent on material composition and molecular structure. Therefore, the objective of this study was to evaluate the rheological properties of the polysaccharide extracted from camelina meal under different conditions to gain insight into the molecular characteristics of the polysaccharide. Flow and dynamic mechanical behaviors were determined at different temperatures (5-50°C) and concentrations (1-6% w/v). Additionally, the zeta potential of the polysaccharide dispersion was measured at different pHs (2-11) and a biopolymer concentration of 0.05% (w/v). Shear rate sweep data revealed that the camelina polysaccharide displayed shear-thinning (pseudoplastic) behavior, which is typical of polymer systems.
The polysaccharide dispersion (1% w/v) showed no significant changes in viscosity with temperature, which makes it a promising ingredient in products requiring texture stability over a range of temperatures. However, the viscosity increased significantly with increased concentration, indicating that camelina polysaccharide can be used in food products at different concentrations to produce a range of textures. Dynamic mechanical spectra showed similar trends. Temperature had little effect on the viscoelastic moduli. However, the moduli were strongly affected by concentration: samples exhibited concentrated-solution behavior at low concentrations (1-2% w/v) and weak-gel behavior at higher concentrations (4-6% w/v). These rheological properties can be used for the design and modeling of liquid and semisolid products. Zeta potential affects the intensity of molecular interactions and molecular conformation and can alter the solubility, stability, and, eventually, the functionality of materials as their environment changes. In this study, the zeta potential value significantly decreased from 0.0 to −62.5 mV as pH increased from 2 to 11, indicating that pH may affect the functional properties of the polysaccharide. The results obtained in the current study showed that camelina polysaccharide has significant potential for application in various food systems and can be introduced as a novel anionic thickening agent with unique properties.
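Shear-thinning behavior of this kind is commonly described by the power-law (Ostwald-de Waele) model, η = K·γ̇^(n−1) with flow behavior index n < 1, fitted as a straight line in log-log space. A minimal sketch of such a fit; the viscosity data below are hypothetical, not measurements from this study:

```python
import math

def fit_power_law(shear_rate, viscosity):
    """Fit eta = K * gamma_dot**(n - 1) by linear regression in log-log space."""
    x = [math.log(g) for g in shear_rate]
    y = [math.log(e) for e in viscosity]
    pts = len(x)
    mx, my = sum(x) / pts, sum(y) / pts
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
            sum((xi - mx) ** 2 for xi in x)
    n = slope + 1                  # n < 1 indicates shear thinning
    K = math.exp(my - slope * mx)  # consistency coefficient, Pa.s^n
    return n, K

# Hypothetical data generated with n = 0.4, K = 2.0 Pa.s^n
rates = [0.1, 1.0, 10.0, 100.0]               # shear rates, 1/s
visc = [2.0 * g ** (0.4 - 1) for g in rates]  # apparent viscosities, Pa.s
n, K = fit_power_law(rates, visc)
print(round(n, 2), round(K, 2))
```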

Keywords: Camelina meal, polysaccharide, rheology, zeta potential

Procedia PDF Downloads 240
570 Anti-lipidemic and Hematinic Potentials of Moringa Oleifera Leaves: A Clinical Trial on Type 2 Diabetic Subjects in a Rural Nigerian Community

Authors: Ifeoma C. Afiaenyi, Elizabeth K. Ngwu, Rufina N. B. Ayogu

Abstract:

Diabetes has crept into the rural areas of Nigeria, causing devastating effects on its sufferers, most of whom cannot afford diabetic medications. Moringa oleifera has been used extensively in animal models to demonstrate its antilipidemic and hematinic qualities; however, there is a scarcity of data on the effect of graded levels of Moringa oleifera leaves on the lipid profile and hematological parameters of human diabetic subjects. The study determined the effect of Moringa oleifera leaves on the lipid profile and hematological parameters of type 2 diabetic subjects in Ukehe, a rural Nigerian community. Twenty-four adult male and female diabetic subjects were purposively selected for the study and divided into four groups of six subjects each. The diets used in the study were isocaloric. A control group (diabetics, group 1) was fed diets without Moringa oleifera leaves. Experimental groups 2, 3 and 4 received 20 g, 40 g and 60 g of Moringa oleifera leaves daily, respectively, in addition to the diets. The subjects' lipid profile and hematological parameters were measured before and at the end of the feeding trial, which lasted fourteen days. The data obtained were analyzed using the computer program Statistical Product and Service Solutions (SPSS) for Windows, version 21. A paired-samples t-test was used to compare the means of values collected before and after the feeding trial within the groups, and significance was accepted at p < 0.05. There was a non-significant (p > 0.05) decrease in the mean total cholesterol of the subjects in groups 1, 2 and 3 after the feeding trial. There was a non-significant (p > 0.05) decrease in the mean triglyceride levels of the subjects in group 1 after the feeding trial. Groups 1 and 3 subjects had a non-significant (p > 0.05) decrease in their mean low-density lipoprotein (LDL) cholesterol after the feeding trial.
Groups 1, 2 and 4 had a significant (p < 0.05) increase in their mean high-density lipoprotein (HDL) cholesterol after the feeding trial. A significant (p < 0.05) decrease in the mean hemoglobin level was observed only in group 4 subjects. Similarly, there was a significant (p < 0.05) decrease in the mean packed cell volume of group 4 subjects. It was only in group 4 that a significant (p < 0.05) decrease in the mean white blood cell count of the subjects was also observed. The changes observed in the parameters assessed were not dose-dependent. Therefore, a similar study of longer duration and with a larger sample is needed to validate these results.
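The within-group comparisons rest on the paired-samples t statistic, t = d̄ / (s_d/√n), computed over each subject's before/after difference. A hedged sketch using only Python's standard library; the cholesterol values below are hypothetical, not the trial's data:

```python
import math
import statistics

def paired_t(before, after):
    """Paired-samples t statistic and degrees of freedom for before/after data."""
    diffs = [a - b for a, b in zip(after, before)]
    n = len(diffs)
    mean_d = statistics.mean(diffs)
    sd_d = statistics.stdev(diffs)  # sample standard deviation of differences
    return mean_d / (sd_d / math.sqrt(n)), n - 1

# Hypothetical total cholesterol (mg/dL) for one group of six subjects.
before = [220, 215, 230, 208, 225, 218]
after = [210, 212, 221, 205, 219, 214]
t, df = paired_t(before, after)
print(round(t, 2), df)  # negative t: mean cholesterol decreased
```

The resulting t is then compared against the t distribution with n − 1 degrees of freedom at the chosen significance level (p < 0.05 in the study).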

Keywords: anemia, diabetic subjects, lipid profile, moringa oleifera

Procedia PDF Downloads 191
569 The Development of an Innovator Teachers Training Curriculum for Creating Instructional Innovation According to the Active Learning Approach to Enhance Learning Achievement in Private Schools in Phayao Province

Authors: Palita Sooksamran, Katcharin Mahawong

Abstract:

This research aims to develop an innovator teachers training curriculum for creating instructional innovation according to the active learning approach to enhance learning achievement. The research and development process was carried out in three steps. Step 1, a study of the needs for developing the training curriculum: a questionnaire survey was conducted among 176 teachers in private schools in Phayao province providing basic education; the sample was defined using a table of random numbers and stratified sampling, with the school as the stratum. Step 2, training curriculum development: the tools used were the developed training curriculum and curriculum assessments, with nine experts checking the appropriateness of the draft curriculum; the statistics used in data analysis were the mean (x̄) and standard deviation (S.D.). Step 3, a study of the effectiveness of the training curriculum: a one-group pretest/posttest design was applied, with a sample of 35 teachers from private schools in Phayao province who volunteered to participate. The results showed that: 1. The items with the highest essential-needs indices, in descending order, were the selection and creation of multimedia, videos and applications for learning management; the development of multimedia, videos and applications for learning management; and the selection of innovative learning management techniques and problem-solving methods, respectively. 2. The components of the training curriculum include principles, aims, scope of content, training activities, learning materials and resources, and supervision and evaluation.
The scope of the curriculum consists of basic knowledge about learning management innovation, active learning, lesson plan design, learning materials and resources, learning measurement and evaluation, implementation of lesson plans in the classroom, and supervision and monitoring. The evaluation rated the quality of the draft training curriculum at the highest level; the experts suggested that the course objectives be worded to convey expected outcomes. 3. Regarding the effectiveness of the training curriculum: 1) the teachers' cognitive outcomes in creating innovative learning management were at a high level of relative gain score; 2) their learning management ability according to the active learning approach to enhance learning achievement, as assessed by two education supervisors, was overall very high; 3) of the teachers' instructional innovations based on the active learning approach, 7 were evaluated as outstanding works and 26 passed the standard; 4) the overall learning achievement of students taught by the 35 sample teachers was at a high level of relative gain score; and 5) the teachers' satisfaction with the training curriculum was at the highest level.

Keywords: training curriculum, innovator teachers, active learning approach, learning achievement

Procedia PDF Downloads 45
568 Factors Influencing Telehealth Services for Diabetes Care in Nepal: A Mixed Method Study

Authors: Sumitra Sharma, Christina Parker, Kathleen Finlayson, Clint Douglas, Niall Higgins

Abstract:

Background: Telehealth services have the potential to increase the accessibility, utilization, and effectiveness of healthcare services. As telehealth services are yet to be integrated within regular hospital services in Nepal, their use among adults with diabetes is scarce. Prior to the implementation of telehealth services for adults with diabetes, it is necessary to examine the factors influencing them. Objective: This study aimed to investigate factors influencing telehealth services for diabetes care in Nepal. Methods: This study used a mixed-method design which included a cross-sectional survey among adults with diabetes and semi-structured interviews with key healthcare professionals of Nepal. The study was conducted in the medical out-patient department of a tertiary hospital of Nepal. The survey adapted a previously validated questionnaire, while the semi-structured interview questions were developed from a literature review and expert consultation. All interviews were audio-recorded, and inductive content analysis was used to code transcripts and develop themes. For the survey, descriptive analysis, the chi-square test, and the Mann-Whitney U test were used to analyze the data. Results: One hundred adults with diabetes participated in the survey, and seven healthcare professionals were recruited for interviews. In the survey, just over half of the participants (53%) were male, and the others were female. Almost all participants (98%) owned a mobile phone, and 67% of them had a computer with internet access at home. The majority of participants had experience in using Facebook Messenger (95%), followed by Viber (60%) and Zoom (26%). Almost all of the participants (96%) were willing to use telehealth services. Female sex and living 10 km away from the hospital were significantly associated with willingness to use telehealth services.
There was a significant association between participants' self-perception of good health status and their willingness to use video-conference calls and phone calls for telehealth services. Seven themes were developed from the interview data, relating to the predisposing, reinforcing, and enabling factors influencing telehealth services for diabetes care in Nepal. Conclusion: In summary, several factors were found to influence the use of telehealth services for diabetes care in Nepal. For the effective implementation of sustainable telehealth services for adults with diabetes in Nepal, these factors need to be considered.

Keywords: contributing factors, diabetes mellitus, developing countries, telemedicine, telecare

Procedia PDF Downloads 67
567 Cyber Warfare and Cyber Terrorism: An Analysis of Global Cooperation and Cyber Security Counter Measures

Authors: Mastoor Qubra

Abstract:

Cyber-attacks have frequently disrupted the critical infrastructures of major global states, and cyber threats have now become one of the dire security risks for states across the globe. Recently, the ransomware cyber-attacks WannaCry and Petya affected hundreds of thousands of computer servers and individuals' private machines in more than one hundred countries across Europe, the Middle East, Asia, the United States and Australia. Although states are rapidly becoming aware of the destructive nature of this new security threat and countermeasures are being taken, states' isolated efforts would be inadequate to deal with this heinous security challenge; rather, global coordination and cooperation are inevitable in order to develop a credible cyber deterrence policy. Hence, the paper argues that a coordinated global approach is required to deter the posed cyber threat. This paper intends to analyze cyber security countermeasures in four dimensions, i.e., evaluation of prevalent strategies at the bilateral level; initiatives and limitations for cooperation at the global level; obstacles to combating cyber terrorism; and, finally, recommendations to deter the threat by applying the tools of deterrence theory. Firstly, it focuses on states' efforts to combat the cyber threat, and in this regard the US-Australia Cyber Security Dialogue is comprehensively illustrated and investigated. Secondly, global partnerships and the strategic and analytic role of multinational organizations, particularly the United Nations (UN), in dealing with the heinous threat are critically analyzed and flaws are highlighted, for instance, the lesser significance of cyber laws within international law as compared to other conflict-prone issues. In addition to this, there are certain obstacles and limitations at the national, regional and global levels to implementing counter-strategies against cyber terrorism, which are presented in the third section.
Lastly, by underlining the gaps and grey areas in the current cyber security counter measures, it aims to apply tools of deterrence theory, i.e. defense, attribution and retaliation, in the cyber realm to contribute towards formulating a credible cyber deterrence strategy at global level. Thus, this study is significant in understanding and determining the inevitable necessity of counter cyber terrorism strategies.

Keywords: attribution, critical infrastructure, cyber terrorism, global cooperation

Procedia PDF Downloads 263
566 Development of a Real-Time Simulink Based Robotic System to Study Force Feedback Mechanism during Instrument-Object Interaction

Authors: Jaydip M. Desai, Antonio Valdevit, Arthur Ritter

Abstract:

Robotic surgery is used to enhance minimally invasive surgical procedures. It provides a greater degree of freedom for surgical tools but lacks a haptic feedback system to provide a sense of touch to the surgeon. Surgical robots work on master-slave operation, where the user is the master and the robotic arms are the slaves. Currently, surgical robots provide precise control of the surgical tools but rely heavily on visual feedback, which sometimes causes damage to inner organs. The goal of this research was to design and develop a real-time Simulink-based robotic system to study the force feedback mechanism during instrument-object interaction. The setup includes three Velmex XSlide assemblies (XYZ stage) for three-dimensional movement, an end-effector assembly for forceps, an electronic circuit for four strain gages, two Novint Falcon 3D gaming controllers, a microcontroller board with linear actuators, and the MATLAB and Simulink toolboxes. The strain gages were calibrated using an Imada digital force gauge and tested with a hard-core wire to measure instrument-object interaction in the range of 0-35 N. The designed Simulink model successfully acquires 3D coordinates from the two Novint Falcon controllers and transfers the coordinates to the XYZ stage and forceps. The Simulink model also reads the strain gage signals through the 10-bit analog-to-digital converter of a microcontroller assembly in real time, converts voltage into force and feeds the output signals back to the Novint Falcon controllers for the force feedback mechanism. The experimental setup allows the user to change the forward kinematics algorithms to achieve the best desired movement of the XYZ stage and forceps. This project combines haptic technology with a surgical robot to provide a sense of touch to the user controlling the forceps through a machine-computer interface.
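The voltage-to-force step described above amounts to applying a linear calibration to the 10-bit ADC reading of the strain-gage bridge. A minimal sketch of that conversion; the reference voltage, calibration slope, and offset below are hypothetical placeholders, not the project's actual values:

```python
ADC_BITS = 10
V_REF = 5.0          # hypothetical ADC reference voltage (V)
SLOPE_N_PER_V = 7.0  # hypothetical calibration slope from the force gauge (N/V)
OFFSET_V = 0.0       # hypothetical zero-load bridge offset (V)

def adc_to_force(adc_counts: int) -> float:
    """Convert a raw 10-bit ADC reading of the strain-gage bridge to force (N)."""
    voltage = adc_counts * V_REF / (2 ** ADC_BITS - 1)
    force = SLOPE_N_PER_V * (voltage - OFFSET_V)
    return min(max(force, 0.0), 35.0)  # clamp to the calibrated 0-35 N range

print(adc_to_force(512))  # mid-scale reading
```

In the real system the calibration slope and offset would come from the Imada force gauge measurements rather than constants.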

Keywords: surgical robot, haptic feedback, MATLAB, strain gage, Simulink

Procedia PDF Downloads 527
565 The Appropriate Number of Test Items That a Classroom-Based Reading Assessment Should Include: A Generalizability Analysis

Authors: Jui-Teng Liao

Abstract:

The selected-response (SR) format has been commonly adopted to assess academic reading in both formal and informal testing (i.e., standardized assessment and classroom assessment) because of its strengths in content validity, construct validity, and scoring objectivity and efficiency. When developing a second language (L2) reading test, researchers indicate that the longer the test is (e.g., the more test items it has), the higher the reliability and validity it is likely to produce. However, previous studies have not provided specific guidelines regarding the optimal length of a test or the most suitable number of test items or reading passages. Additionally, reading tests often include different question types (e.g., factual, vocabulary, inferential) that require varying degrees of reading comprehension and cognitive processing. Therefore, it is important to investigate the impact of question types on the number of items in relation to the score reliability of L2 reading tests. Given the popularity of the SR question format and the impact of assessment results on teaching and learning, it is necessary to investigate the degree to which this question format can reliably measure learners' L2 reading comprehension. The present study therefore adopted generalizability (G) theory to investigate the score reliability of the SR format in L2 reading tests, focusing on how many test items a reading test should include. Specifically, this study aimed to investigate the interaction between question types and the number of items, providing insights into the appropriate item count for different types of questions. G theory is a comprehensive statistical framework for estimating the score reliability of tests and validating their results. Data were collected from 108 English as a second language students who completed an English reading test comprising factual, vocabulary, and inferential questions in the SR format.
The computer program mGENOVA was utilized to analyze the data using multivariate designs (i.e., scenarios). Based on the results of G theory analyses, the findings indicated that the number of test items had a critical impact on the score reliability of an L2 reading test. Furthermore, the findings revealed that different types of reading questions required varying numbers of test items for reliable assessment of learners’ L2 reading proficiency. Further implications for teaching practice and classroom-based assessments are discussed.
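In a persons-crossed-with-items (p × i) G-study design, the generalizability coefficient for a test of n_i items is Eρ² = σ²_p / (σ²_p + σ²_{pi,e}/n_i), which makes the dependence of score reliability on item count explicit. A sketch under assumed variance components; the numbers below are illustrative, not mGENOVA output from this study:

```python
def g_coefficient(var_person: float, var_residual: float, n_items: int) -> float:
    """Generalizability coefficient for a p x i design with n_items items."""
    return var_person / (var_person + var_residual / n_items)

# Illustrative variance components for one question type:
# person variance and residual (person-by-item plus error) variance.
var_p, var_pie = 0.04, 0.16

# Reliability rises with the number of items of that question type.
for n in (5, 10, 20, 40):
    print(n, round(g_coefficient(var_p, var_pie, n), 2))
```

A decision (D) study of this kind is how one chooses the smallest item count per question type that reaches an acceptable reliability threshold (e.g., 0.80).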

Keywords: second language reading assessment, validity and reliability, generalizability theory, academic reading, question format

Procedia PDF Downloads 77
564 The Effect of Emotional Intelligence on Physiological Stress of Managers

Authors: Mikko Salminen, Simo Järvelä, Niklas Ravaja

Abstract:

One of the central models of emotional intelligence (EI) is that of Mayer and Salovey, which includes the ability to monitor one's own feelings and emotions and those of others, the ability to discriminate between different emotions, and the ability to use this information to guide thinking and actions. There is a vast amount of previous research in which positive links between EI and, for example, leadership success, work outcomes, work wellbeing and organizational climate have been reported. EI also has a role in the effectiveness of work teams, and the effects of EI are especially prominent in jobs requiring emotional labor. Thus, the organizational context must also be taken into account when considering the effects of EI on work outcomes. Based on previous research, it is suggested that EI can also protect managers from the negative consequences of stress. Stress may have many detrimental effects on a manager's performance in essential work tasks. Previous studies have highlighted the effects of stress not only on health but also, for example, on cognitive tasks such as decision-making, which is important in managerial work. The motivation for the current study came from the notion that, unfortunately, many stressed individuals may not be aware of their condition; periods of stress-induced physiological arousal may be prolonged if there is not enough time for recovery. To tackle this problem, the physiological stress levels of managers were collected using recordings of heart rate variability (HRV). The goal was to use these data to provide the managers with feedback on their stress levels. The managers could access this feedback using a web-based learning environment. In the learning environment, in addition to the feedback on stress level and other collected data, developmental tasks were also provided. For example, those with high stress levels were sent instructions for mindfulness exercises.
The current study focuses on the relation between the measured physiological stress levels and the EI of the managers. In a pilot study, 33 managers from various fields wore Firstbeat Bodyguard HRV measurement devices for three consecutive days and nights. From the collected HRV data, periods (minutes) of stress and recovery were detected using dedicated software. The effects of EI on the HRV-calculated stress indexes were studied using the Linear Mixed Models procedure in SPSS. There was a statistically significant effect of total EI, defined as the average score on Schutte's emotional intelligence test, on the percentage of stress minutes during the whole measurement period (p = .025). More stress minutes were detected for those managers who had lower emotional intelligence. It is suggested that high EI provided managers with better tools to cope with stress. Managing one's own emotions helps the manager control possible negative emotions evoked by, e.g., critical feedback or increasing workload. High-EI managers may also be more competent in detecting the emotions of others, which would lead to smoother interactions and fewer conflicts. Given the recent trend toward quantified-self applications, it is suggested that monitoring of bio-signals would prove to be a fruitful direction for further developing new tools for managerial and leadership coaching.
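A common time-domain HRV index underlying such stress detection is RMSSD, the root mean square of successive differences between RR intervals (lower RMSSD reflects reduced parasympathetic activity, i.e., higher physiological stress). A minimal sketch; the RR series below is illustrative, and Firstbeat's software computes its own proprietary stress index rather than plain RMSSD:

```python
import math

def rmssd(rr_ms):
    """Root mean square of successive differences of RR intervals (ms)."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Illustrative RR intervals (ms) from a short beat-to-beat recording.
rr = [800, 810, 790, 805]
print(round(rmssd(rr), 2))
```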

Keywords: emotional intelligence, leadership, heart rate variability, personality, stress

Procedia PDF Downloads 219
563 The Evolution and Driving Forces Analysis of Urban Spatial Pattern in Tibet Based on Archetype Theory

Authors: Qiuyu Chen, Bin Long, Junxi Yang

Abstract:

Located in the southwest of the "roof of the world", Tibet is the origin center of Tibetan culture. Lhasa, Shigatse and Gyantse are three famous historical and cultural cities in Tibet. They have always been prominent political, economic and cultural cities and have accumulated the unique aesthetic orientation and value consciousness of Tibet's urban construction. "Archetype" usually refers to the theoretical origin of things, which is the precipitation of the collective unconscious. Archetype theory fundamentally explores the dialectical relationship between image expression, original form and behavior mode. By abstracting and describing the typical phenomena or imagery of the archetype object, one can observe the essence of objects and explore the ways in which object phenomena arise. Applying archetype theory to the field of urban planning helps in gaining insight into, evaluating, and restructuring the complex and ever-changing internal structural units of cities. According to existing field investigations, the Dzong, temple, Linka and traditional residential systems are important structural units that constitute the urban space of Lhasa, Shigatse and Gyantse. This article applies the thinking method of archetype theory: starting from the imagery expression of the urban spatial pattern, it uses technologies such as ArcGIS, Depthmap, and computer vision to descriptively identify the spatial representation and plane relationships of the three cities through remote sensing images and historical maps. Based on historical records, the spatial characteristics of the cities in different historical periods are interpreted in a hierarchical manner, attempting to clarify the origin of the formation and evolution of urban pattern imagery from the perspectives of the geopolitical environment, social structure, religious theory, etc., and to reveal the growth laws and key driving forces of the cities.
The research results can provide technical and material support for important behaviors such as urban restoration, spatial intervention, and promoting transformation in the region.

Keywords: archetype theory, urban spatial imagery, original form and pattern, behavioral driving force, Tibet

Procedia PDF Downloads 58
562 Highly Automated Trucks in Intermodal Logistics: Findings from a Field Test in Railport and Container Depot Operations in Germany

Authors: Dustin Schöder

Abstract:

The potential benefits of utilizing highly automated and autonomous trucks in logistics operations are of interest to the entire logistics industry. The benefits of these new technologies have been scientifically investigated and incorporated into roadmaps. So far, reliable data and experience from real-life use cases are still limited. A German research consortium of academics and industry partners developed a highly automated (SAE level 4) vehicle for yard operations at railports and container depots. After development and testing, a several-month field test was conducted at the DUSS Terminal in Ulm-Dornstadt (Germany) and the nearby DB Intermodal Services Container Depot in Ulm-Dornstadt. The truck was piloted in a shuttle service between both sites. In a holistic automation approach, the vehicle was integrated into a digital communication platform so that the truck could move autonomously, without a driver or manual interactions, among a wide variety of stakeholders. The main goal is to investigate the effects of highly automated trucks on the key processes of container loading, unloading and container relocation in holistic railport yard operation. The field test data were used to investigate changes in the efficiency of key railport and container yard processes. Moreover, effects on capacity utilization and potentials for smoothing peak workloads were analyzed. The results state that process efficiency in the piloted use case was significantly higher. The reason for this could be found in the digitalized data exchange and automated dispatch. However, the field test has shown that the effect varies greatly depending on the ratio of highly automated to manual trucks in the yard as well as on the congestion level in the loading area. Furthermore, the data confirmed that, under the right conditions, the capacity utilization of highly automated trucks could be increased.
With regard to the potential for smoothing peak workloads, no significant findings could be made given the limited requirements and regulations of railway operation in Germany. In addition, an empirical survey among railport managers, operational supervisors, innovation managers and strategists (n=15) within the logistics industry in Germany was conducted. The goal was to identify key characteristics of future railports and terminals as well as requirements that railports will have to meet in the future. Furthermore, the railport processes where automation and autonomization make the greatest impact, as well as the hurdles and challenges in introducing new technologies, were surveyed. Hence, further potential use cases of highly automated and autonomous applications could be identified, and expectations were mapped. As a result, a highly detailed and practice-based roadmap towards a 'terminal 4.0' was developed.

Keywords: highly automated driving, autonomous driving, SAE level 4, railport operations, container depot, intermodal logistics, potentials of autonomization

Procedia PDF Downloads 73
561 Earthquake Risk Assessment Using Out-of-Sequence Thrust Movement

Authors: Rajkumar Ghosh

Abstract:

Earthquakes are natural disasters that pose a significant risk to human life and infrastructure. Effective earthquake mitigation measures require a thorough understanding of the dynamics of seismic occurrences, including thrust movement. Traditionally, estimating thrust movement has relied on typical techniques that may not capture the full complexity of these events. Therefore, investigating alternative approaches, such as incorporating out-of-sequence thrust movement data, could enhance earthquake mitigation strategies. This review aims to provide an overview of the applications of out-of-sequence thrust movement in earthquake mitigation. By examining existing research and studies, the objective is to understand how precise estimation of thrust movement can contribute to improving structural design, analyzing infrastructure risk, and developing early warning systems. The study demonstrates how to estimate out-of-sequence thrust movement using multiple data sources, including GPS measurements, satellite imagery, and seismic recordings. By analyzing and synthesizing these diverse datasets, researchers can gain a more comprehensive understanding of thrust movement dynamics during seismic occurrences. The review identifies potential advantages of incorporating out-of-sequence data in earthquake mitigation techniques. These include improving the efficiency of structural design, enhancing infrastructure risk analysis, and developing more accurate early warning systems. By considering out-of-sequence thrust movement estimates, researchers and policymakers can make informed decisions to mitigate the impact of earthquakes. This study contributes to the field of seismic monitoring and earthquake risk assessment by highlighting the benefits of incorporating out-of-sequence thrust movement data. 
By broadening the scope of analysis beyond traditional techniques, researchers can enhance their knowledge of earthquake dynamics and improve the effectiveness of mitigation measures. The study collects data from various sources, including GPS measurements, satellite imagery, and seismic recordings. These datasets are then analyzed using appropriate statistical and computational techniques to estimate out-of-sequence thrust movement. The review integrates findings from multiple studies to provide a comprehensive assessment of the topic. The study concludes that incorporating out-of-sequence thrust movement data can significantly enhance earthquake mitigation measures. By utilizing diverse data sources, researchers and policymakers can gain a more comprehensive understanding of seismic dynamics and make informed decisions. However, challenges exist, such as data quality difficulties, modelling uncertainties, and computational complications. To address these obstacles and improve the accuracy of estimates, further research and advancements in methodology are recommended. Overall, this review serves as a valuable resource for researchers, engineers, and policymakers involved in earthquake mitigation, as it encourages the development of innovative strategies based on a better understanding of thrust movement dynamics.
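The multi-source synthesis described above can be illustrated with an inverse-variance weighted fusion, a standard way of combining independent estimates. The sketch below is purely illustrative and is not the authors' method; the source names, displacement values and uncertainties are made up for demonstration.

```python
# Hypothetical sketch: inverse-variance weighted fusion of thrust-displacement
# estimates from independent data sources (GPS, satellite imagery, seismic).
# All values and uncertainties below are illustrative, not real measurements.

def fuse_estimates(estimates):
    """Combine (value, sigma) pairs into a single weighted estimate.

    Each source is weighted by 1/sigma^2, so more precise
    measurements dominate the fused result.
    """
    weights = [1.0 / (sigma ** 2) for _, sigma in estimates]
    total = sum(weights)
    value = sum(w * v for w, (v, _) in zip(weights, estimates)) / total
    sigma = (1.0 / total) ** 0.5
    return value, sigma

# Illustrative displacement estimates in metres (value, 1-sigma uncertainty)
sources = [
    (2.10, 0.30),  # GPS baseline change
    (1.85, 0.20),  # satellite-image offset tracking
    (2.40, 0.50),  # seismic moment inversion
]

fused, uncertainty = fuse_estimates(sources)
print(f"fused displacement: {fused:.2f} m +/- {uncertainty:.2f} m")
```

The fused value sits closest to the most precise source, which is the intended behavior of this weighting scheme.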

Keywords: earthquake, out-of-sequence thrust, disaster, human life

Procedia PDF Downloads 70
560 Impact of Displacement Durations and Monetary Costs on the Labour Market within a City Consisting of Four Areas: A Theoretical Approach

Authors: Aboulkacem El Mehdi

Abstract:

We develop a theoretical model at the crossroads of labour and urban economics, used to explain the mechanism through which the duration of home-workplace trips and their monetary costs impact labour demand and supply in a spatially scattered labour market, and how these are impacted by a change in passenger transport infrastructures and services. The spatial disconnection between home and job opportunities is referred to as the spatial mismatch hypothesis (SMH). Its harmful impact on employment has been the subject of numerous theoretical propositions. However, all the theoretical models proposed so far are patterned on the American context, which is particular in that it is marked by racial discrimination against Blacks in the housing and labour markets. Therefore, it is only natural that most of these models are developed to reproduce a steady state characterized by agents carrying out their economic activities in a mono-centric city in which most unskilled jobs are created in the suburbs, far from the Blacks who dwell in the city centre, generating high unemployment rates for Blacks, while the White population resides in the suburbs and has a low unemployment rate. Our model does not rely on any racial discrimination and does not aim at reproducing a steady state in which these stylized facts are replicated; it takes the main principle of the SMH, the spatial disconnection between homes and workplaces, as a starting point. One of the innovative aspects of the model consists in dealing with an SMH-related issue at an aggregate level. We link the parameters of the passenger transport system to employment in the whole area of a city. We consider here a city that consists of four areas: two of them are residential areas with unemployed workers, while the other two host firms looking for labour force.
The workers compare the indirect utility of working in each area with the utility of unemployment and choose between submitting an application for the job that generates the highest indirect utility or not submitting. This arbitration takes account of the monetary and time expenditures generated by the trips between the residential areas and the working areas. Each of these expenditures is clearly and explicitly formulated so that the impact of each can be studied separately from that of the other. The first findings show that unemployed workers living in an area benefiting from good transport infrastructures and services have a better chance of preferring activity to unemployment and are more likely to supply a higher 'quantity' of labour than those who live in an area where the transport infrastructures and services are poorer. We also show that firms located in the most accessible area receive many more applications and are more likely to hire the workers who provide the highest quantity of labour than firms located in the less accessible area. Currently, we are working on the matching process between firms and job seekers and on how equilibrium between labour demand and supply occurs.
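The worker's arbitration described above can be sketched under assumed functional forms: the indirect utility of a job area is taken as the wage net of commuting fares, penalised by the monetary value of commute time. All numbers and the linear form itself are hypothetical illustrations, not the model's actual specification.

```python
# Illustrative sketch of the worker's arbitration: compare the indirect
# utility of each job area with the utility of unemployment. The linear
# utility form and all figures are assumptions for demonstration.

def indirect_utility(wage, fare, commute_hours, time_value):
    """Utility of working in an area: net wage minus the value of commute time."""
    return (wage - fare) - time_value * commute_hours

def best_choice(job_areas, unemployment_utility, time_value):
    """Return the chosen option: the best job area, or unemployment."""
    best_area, best_u = None, unemployment_utility
    for name, (wage, fare, hours) in job_areas.items():
        u = indirect_utility(wage, fare, hours, time_value)
        if u > best_u:
            best_area, best_u = name, u
    return best_area or "unemployment", best_u

# Two job areas seen from one residential area: (wage, daily fare, commute hours)
areas = {"area_accessible": (100.0, 5.0, 0.5), "area_remote": (110.0, 12.0, 2.0)}
choice, utility = best_choice(areas, unemployment_utility=70.0, time_value=15.0)
print(choice, utility)
```

Even though the remote area pays a higher wage, the combined time and fare costs make the accessible area the preferred application target, mirroring the accessibility effect the abstract reports.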

Keywords: labour market, passenger transport infrastructure, spatial mismatch hypothesis, urban economics

Procedia PDF Downloads 285
559 A Smart Sensor Network Approach Using Affordable River Water Level Sensors

Authors: Dian Zhang, Brendan Heery, Maria O’Neill, Ciprian Briciu-Burghina, Noel E. O’Connor, Fiona Regan

Abstract:

Recent developments in sensors, wireless data communication and cloud computing have brought the sensor web to a whole new generation. The introduction of the concept of the 'Internet of Things (IoT)' has brought sensor research to a new level, which involves developing long-lasting, low-cost, environmentally friendly and smart sensors; new wireless data communication technologies; big data analytics algorithms; and cloud-based solutions tailored to large-scale smart sensor networks. The next generation of smart sensor network consists of several layers: the physical layer, where all the smart sensors reside and data pre-processing occurs, either on the sensor itself or on the field gateway; the data transmission layer, where data and instruction exchanges happen; and the data processing layer, where meaningful information is extracted and organized from the pre-processed data stream. There are many definitions of a smart sensor; however, to summarize them all, a smart sensor must be intelligent and adaptable. In future large-scale sensor networks, collected data are far too large for traditional applications to send, store or process. The sensor unit must be intelligent enough to pre-process collected data locally on board (this process may occur on the field gateway, depending on the sensor network structure). In this case study, three smart sensing methods, corresponding to simple thresholding, a statistical model and the machine-learning-based MoPBAS method, are introduced, and their strengths and weaknesses are discussed as an introduction to the smart sensing concept. Data fusion, the integration of data and knowledge from multiple sources, is a key component of the next generation smart sensor network.
For example, in the water level monitoring system, a weather forecast can be retrieved from external sources and, if heavy rainfall is expected, the server can send instructions to the sensor nodes to, for instance, increase the sampling rate or, conversely, switch on sleep mode. In this paper, we describe the deployment of 11 affordable water level sensors in the Dublin catchment. The objective of this paper is to use the deployed river level sensor network in the Dodder catchment in Dublin, Ireland as a case study to give a vision of the next generation of smart sensor network for flood monitoring, to assist agencies in making decisions about deploying resources in the case of a severe flood event. Some of the deployed sensors are located alongside traditional water level sensors for validation purposes. Using the 11 deployed river level sensors in a network as a case study, a vision of the next generation of smart sensor network is proposed. Each key component of the smart sensor network is discussed, which will hopefully inspire researchers working in the sensor research domain.
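The simplest of the three smart-sensing methods mentioned above, on-board thresholding, can be sketched as follows. The rolling-baseline window, alert margin and readings are illustrative assumptions, not the deployed configuration.

```python
# Sketch of on-board thresholding: a node only flags a reading for
# transmission when the level exceeds a rolling baseline by a fixed margin.
# Window size, margin and readings are illustrative assumptions.

from collections import deque

class ThresholdSensor:
    """Flags readings that exceed the recent baseline by a fixed margin."""

    def __init__(self, window=5, margin_m=0.30):
        self.history = deque(maxlen=window)
        self.margin_m = margin_m

    def observe(self, level_m):
        """Return True if the reading should be transmitted as an alert."""
        baseline = sum(self.history) / len(self.history) if self.history else level_m
        alert = level_m - baseline > self.margin_m
        self.history.append(level_m)
        return alert

sensor = ThresholdSensor()
readings = [1.00, 1.02, 1.01, 1.03, 1.45, 1.50]  # metres; sudden rise at t=4
alerts = [sensor.observe(r) for r in readings]
print(alerts)
```

Only the two readings after the sudden rise trigger an alert, illustrating how local pre-processing reduces the volume of transmitted data.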

Keywords: smart sensing, internet of things, water level sensor, flooding

Procedia PDF Downloads 374
558 Utilizing Literature Review and Shared Decision-Making to Support a Patient Make the Decision: A Case Study of Virtual Reality for Postoperative Pain

Authors: Pei-Ru Yang, Yu-Chen Lin, Jia-Min Wu

Abstract:

Background: A 58-year-old man with a history of osteoporosis and diabetes presented with chronic pain in his left knee due to severe knee joint degeneration. Knee replacement surgery was recommended by the doctor, but the patient had a low pain tolerance and wondered whether virtual reality could relieve acute postoperative wound pain. Methods: We used the PICO (patient, intervention, comparison, and outcome) approach to generate indexed keywords and searched for systematic review articles from 2017 to 2021 in the Cochrane Library, PubMed, and ClinicalKey databases. Results: The initial literature search returned 38 articles, including 12 Cochrane Library articles and 26 PubMed articles. One article was selected for further analysis after removing duplicates and off-topic articles. The eight trials included in this article were published between 2013 and 2019 and recruited a total of 723 participants. The studies, conducted in India, Lebanon, Iran, South Korea, Spain, and China, included adults who underwent hemorrhoidectomy, dental surgery, craniotomy or spine surgery, episiotomy repair, and knee surgery, with mean ages ranging from 24.1 ± 4.1 to 73.3 ± 6.5 years. Virtual reality is an emerging non-drug postoperative analgesia method. The findings showed that pain scores were reduced by a mean of 1.48 points (95% CI: -2.02 to -0.95, p-value < 0.0001) in minor surgery and 0.32 points in major surgery (95% CI: -0.53 to -0.11, p-value < 0.03), and that overall postoperative satisfaction improved. Discussion: Postoperative pain is a common clinical problem in surgical patients. Research has confirmed that virtual reality can create an immersive interactive environment, communicate with patients, and effectively relieve postoperative pain. However, virtual reality requires the purchase of hardware, software and other related computer equipment, and its high cost is a disadvantage.
We selected the best literature based on the clinical questions to answer the patient's question and used shared decision making (SDM) to help the patient make decisions based on the clinical situation after knee replacement surgery, in order to improve the quality of patient-centered care.
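The effect measure reported in the results above, a mean difference in pain scores with a 95% confidence interval, can be illustrated with a small sketch. The scores below are invented for demonstration and do not come from the reviewed trials; a normal approximation is assumed for the interval.

```python
# Illustrative sketch: mean difference in pain scores between a VR group and
# a control group, with a normal-approximation 95% CI. Data are made up.

import math

def mean_diff_ci(group_a, group_b, z=1.96):
    """Mean difference (a - b) with a normal-approximation 95% CI."""
    def mean(xs):
        return sum(xs) / len(xs)
    def var(xs):
        m = mean(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    diff = mean(group_a) - mean(group_b)
    se = math.sqrt(var(group_a) / len(group_a) + var(group_b) / len(group_b))
    return diff, (diff - z * se, diff + z * se)

vr = [3.0, 4.0, 2.5, 3.5, 3.0, 2.0]        # hypothetical pain scores with VR
control = [5.0, 4.5, 5.5, 4.0, 6.0, 5.0]   # hypothetical scores without VR
diff, (lo, hi) = mean_diff_ci(vr, control)
print(f"mean difference {diff:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

A negative difference whose interval excludes zero, as in the pooled results the abstract cites, indicates lower pain in the intervention group.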

Keywords: knee replacement surgery, postoperative pain, shared decision making, virtual reality

Procedia PDF Downloads 58
557 Abilitest Battery: Presentation of Tests and Psychometric Properties

Authors: Sylwia Sumińska, Łukasz Kapica, Grzegorz Szczepański

Abstract:

Introduction: Cognitive skills are a crucial part of everyday functioning. They include perception, attention, language, memory, executive functions, and higher cognitive skills. With the aging of societies, there is an increasing percentage of people whose cognitive skills decline. Cognitive skills affect work performance. The appropriate diagnosis of a worker's cognitive skills reduces the risk of errors and accidents at work, which is also important for senior workers. The study aimed to prepare new cognitive tests for adults aged 20-60 and to assess the psychometric properties of the tests. The project responds to the need for reliable and accurate methods of assessing cognitive performance. Computer tests were developed to assess psychomotor performance, attention, and working memory. Method: Two hundred eighty people aged 20-60 will participate in the study in four age groups. Inclusion criteria for the study were: no subjective cognitive impairment and no history of severe head injuries, chronic diseases, or psychiatric and neurological diseases. The research will be conducted from February to June 2022. Cognitive tests: 1) Measurement of psychomotor performance: reaction time, reaction time with a selective attention component; 2) Measurement of sustained attention: visual search (dots), visual search (numbers); 3) Measurement of working memory: remembering words, remembering letters. To assess validity and reliability, subjects will perform the Vienna Test System, i.e., "Reaction Test" (reaction time), "Signal Detection" (sustained attention), and "Corsi Block-Tapping Test" (working memory), as well as the Perception and Attention Test (TUS), the Colour Trails Test (CTT), and Digit Span, a subtest from the Wechsler Adult Intelligence Scale. Eighty people will be invited to a session after three months aimed at assessing consistency over time. Results: Due to ongoing research, detailed results from the 280 participants will be shown at the conference separately for each age group.
The results of correlation analysis with the Vienna Test System will be demonstrated as well.
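A reaction-time measure like those in the battery is commonly scored by discarding anticipations and lapses before averaging. The sketch below is a generic illustration with assumed cut-offs and invented trial data, not the Abilitest scoring procedure.

```python
# Generic sketch of scoring a simple reaction-time test: discard anticipations
# (<150 ms) and lapses (>1000 ms), then report the mean and spread of valid
# trials. Cut-offs and trial data are illustrative assumptions.

import statistics

def score_trials(times_ms, low=150, high=1000):
    """Return (mean, stdev, n_valid) over trials inside the valid window."""
    valid = [t for t in times_ms if low <= t <= high]
    return statistics.mean(valid), statistics.stdev(valid), len(valid)

trials = [310, 295, 120, 330, 1250, 305, 290]  # one anticipation, one lapse
mean_rt, sd_rt, n = score_trials(trials)
print(f"mean {mean_rt:.0f} ms, sd {sd_rt:.1f} ms over {n} valid trials")
```

Trimming outlier trials in this way keeps a single lapse of attention from distorting the psychomotor performance estimate.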

Keywords: aging, attention, cognitive skills, cognitive tests, psychomotor performance, working memory

Procedia PDF Downloads 98
556 Hyperthyroidism in a Private Medical Services Center, Addis Ababa: A 5-Year Experience

Authors: Ersumo Tessema, Bogale Girmaye Tamrat, Mohammed Burka

Abstract:

Background: Hyperthyroidism is a common thyroid disorder, especially in women, characterized by increased thyroid hormone synthesis and secretion. The disorder manifests predominantly as Graves' disease in iodine-sufficient areas and has increasing prevalence in iodine-deficient countries in patients with nodular thyroid disease and following iodine fortification. In Ethiopia, the magnitude of the disorder is unknown and, in Africa, due to scarcity of resources, its management remains suboptimal. Objective: The aim of this study was to analyze the pattern and management of patients with hyperthyroidism at the United Vision Medical Services Center, Addis Ababa, between August 30, 2013, and February 1, 2018. Patients and methods: The study was a retrospective analysis of the medical records of all patients with hyperthyroidism at the United Vision Private Medical Services Center, Addis Ababa. A questionnaire was filled out; the collected data were entered into a computer and statistically analyzed using the SPSS package. The results were tabulated and discussed with a literature review. Results: A total of 589 patients were included in this study. The median age was 40 years, and the male to female ratio was 1.0:7.9. Most patients (93%) presented with goiter, and the associated features of toxic goiter other than weight loss, sweating and tachycardia were uncommon. The majority of patients presented more than two years after the onset of their presenting symptoms. The most common physical finding (91%), as well as diagnosis, was toxic nodular goiter. The most frequent (83%) derangement in the thyroid function tests was a low thyroid-stimulating hormone level, and the most commonly (94%) used antithyroid drug was propylthiouracil. The most common (96%) surgical procedure in 213 patients was near-total thyroidectomy, with a postoperative course without incident in 92% of all patients.
Conclusion: The incidence and prevalence of hyperthyroidism are apparently on the increase in Addis Ababa, which may be related to the existing severe iodine deficiency and/or the salt iodization program (iodine-induced hyperthyroidism). Hyperthyroidism predominantly affects women and, in surgical services, toxic nodular goiter is more common than diffuse goiter; the treatment of choice in experienced hands is near-total thyroidectomy.

Keywords: Ethiopia, Graves' disease, hyperthyroidism, toxic nodular goiter

Procedia PDF Downloads 170
555 Automated System: Managing the Production and Distribution of Radiopharmaceuticals

Authors: Shayma Mohammed, Adel Trabelsi

Abstract:

Radiopharmacy is the art of preparing high-quality, radioactive, medicinal products for use in diagnosis and therapy. Unlike normal medicines, radiopharmaceuticals have a dual aspect (radioactive and medicinal) that makes their management highly critical. One of the most convincing applications of modern technologies is the ability to delegate the execution of repetitive tasks to programming scripts. Automation has found its way into even the most skilled jobs, improving a company's overall performance by allowing human workers to focus on more important tasks than document filling. This project aims to contribute to implementing a comprehensive system to ensure rigorous management of radiopharmaceuticals through a platform that links the Nuclear Medicine Service Management System to the Nuclear Radiopharmacy Management System in accordance with the recommendations of the World Health Organization (WHO) and the International Atomic Energy Agency (IAEA). In this project we attempt to build a web application that targets radiopharmacies; the platform is built atop the inherently compatible web stack, which allows it to work in virtually any environment. Different technologies are used in this project (PHP, Symfony, MySQL Workbench, Bootstrap, Angular 7, Visual Studio Code and TypeScript). The operating principle of the platform is mainly based on two parts: a radiopharmaceutical back office for the radiopharmacist, who is responsible for the realization of radiopharmaceutical preparations and their delivery, and a medical back office for the doctor, who holds the authorization for the possession and use of radionuclides and is responsible for ordering radioactive products. The application consists of seven modules: Production, Quality Control/Quality Assurance, Release, General Management, References, Transport and Stock Management.
It allows eight classes of users: the Production Manager (PM), Quality Control Manager (QCM), Stock Manager (SM), General Manager (GM), Client (Doctor), Parking and Transport Manager (PTM), Qualified Person (QP), and the Technical and Production Staff. As a digital platform bringing together all players involved in the use of radiopharmaceuticals and integrating the stages of preparation, production and distribution, web technologies in particular promise to offer all the benefits of automation while requiring no more than a web browser to act as a user client, which is a strength because the web stack is by nature multi-platform. This platform will provide a traceability system for radiopharmaceutical products to ensure the safety and radioprotection of actors and patients. The new integrated platform is an alternative to writing all the boilerplate paperwork manually, which is a tedious and error-prone task. It would minimize manual human manipulation, which has proven to be the main source of error in nuclear medicine. A codified electronic transfer of information from radiopharmaceutical preparation to delivery will further reduce the risk of maladministration.
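The traceability idea from preparation to delivery can be sketched as a batch record that only permits ordered hand-overs, each of which is logged. The stage names and fields below are assumptions for illustration, not the platform's actual data model (which is implemented with PHP/Symfony and Angular).

```python
# Minimal sketch of radiopharmaceutical traceability: a batch record that only
# allows the ordered production -> QC -> release -> transport -> delivery
# transitions, logging who performed each hand-over. Stages and the batch id
# are illustrative assumptions, not the platform's real schema.

from dataclasses import dataclass, field

STAGES = ["production", "quality_control", "release", "transport", "delivered"]

@dataclass
class BatchRecord:
    batch_id: str
    stage: str = "production"
    log: list = field(default_factory=list)

    def advance(self, actor: str) -> None:
        """Move to the next stage, recording who performed the hand-over."""
        idx = STAGES.index(self.stage)
        if idx == len(STAGES) - 1:
            raise ValueError("batch already delivered")
        self.stage = STAGES[idx + 1]
        self.log.append((self.stage, actor))

batch = BatchRecord("TC99M-0421")
for actor in ["PM", "QCM", "QP", "PTM"]:  # user classes named in the abstract
    batch.advance(actor)
print(batch.stage, len(batch.log))
```

Because stages can only advance in order, every batch that reaches delivery necessarily carries a complete hand-over log, which is the essence of the codified electronic transfer described above.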

Keywords: automated system, management, radiopharmacy, technical papers

Procedia PDF Downloads 151
554 Understanding the Lithiation/Delithiation Mechanism of Si₁₋ₓGeₓ Alloys

Authors: Laura C. Loaiza, Elodie Salager, Nicolas Louvain, Athmane Boulaoued, Antonella Iadecola, Patrik Johansson, Lorenzo Stievano, Vincent Seznec, Laure Monconduit

Abstract:

Lithium-ion batteries (LIBs) have an important place among energy storage devices due to their high capacity and good cyclability. However, advancements in portable and transportation applications have extended the research towards new horizons, and today development is hampered, e.g., by the capacity of the electrodes employed. Silicon and germanium are among the modern anode materials considered, as they can undergo alloying reactions with lithium while delivering high capacities. It has been demonstrated that silicon in its highest lithiated state can deliver up to ten times more capacity than graphite (372 mAh/g): 4200 mAh/g for Li₂₂Si₅ and 3579 mAh/g for Li₁₅Si₄. On the other hand, germanium presents a capacity of 1384 mAh/g for Li₁₅Ge₄, and better electronic conductivity and Li-ion diffusivity compared to Si. Nonetheless, the commercialization potential of Ge is limited by its cost. The synergetic effect of Si₁₋ₓGeₓ alloys has been proven: the capacity is increased compared to Ge-rich electrodes and the capacity retention is improved compared to Si-rich electrodes, but the exact performance of this type of electrode will depend on factors like specific capacity, C-rates, cost, etc. There are several reports on various formulations of Si₁₋ₓGeₓ alloys with promising LIB anode performance, with most work performed on complex nanostructures resulting from synthesis efforts implying high cost. In the present work, we studied the electrochemical mechanism of the Si₀.₅Ge₀.₅ alloy in a realistic micron-sized electrode formulation using carboxymethyl cellulose (CMC) as the binder. A combination of a large set of in situ and operando techniques was employed to investigate the structural evolution of Si₀.₅Ge₀.₅ during the lithiation and delithiation processes: powder X-ray diffraction (XRD), X-ray absorption spectroscopy (XAS), Raman spectroscopy, and ⁷Li solid-state nuclear magnetic resonance spectroscopy (NMR).
The results present a complete view of the structural modifications induced by the lithiation/delithiation processes. Amorphization of Si₀.₅Ge₀.₅ was observed at the beginning of discharge. Further lithiation induces the formation of a-Liₓ(Si/Ge) intermediates and the crystallization of Li₁₅(Si₀.₅Ge₀.₅)₄ at the end of discharge. At very low voltages, a reversible process of overlithiation and formation of Li₁₅₊δ(Si₀.₅Ge₀.₅)₄ was identified and related to a structural evolution of Li₁₅(Si₀.₅Ge₀.₅)₄. Upon charge, the c-Li₁₅(Si₀.₅Ge₀.₅)₄ was transformed into a-Liₓ(Si/Ge) intermediates. At the end of the process, an amorphous phase assigned to a-SiₓGey was recovered. It was thereby demonstrated that Si and Ge are collectively active throughout the cycling process: upon discharge with the formation of a ternary Li₁₅(Si₀.₅Ge₀.₅)₄ phase (with an overlithiation step), and upon charge with the rebuilding of the a-Si-Ge phase. This process is undoubtedly behind the enhanced performance of Si₀.₅Ge₀.₅ compared to a physical mixture of Si and Ge.
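As a cross-check, the specific capacities quoted in the abstract (3579, 1384 and 372 mAh/g) follow from Faraday's law, C = xF/(3.6M), where x is the number of Li transferred per host atom, F the Faraday constant and M the host molar mass. A quick sketch:

```python
# Theoretical specific capacity from Faraday's law: C [mAh/g] = x*F/(3.6*M),
# with x = Li atoms transferred per host atom and M = host molar mass (g/mol).

F = 96485.0  # Faraday constant, C/mol

def specific_capacity(x_li_per_host, molar_mass):
    """Theoretical specific capacity in mAh per gram of host material."""
    return x_li_per_host * F / (3.6 * molar_mass)

# Li15Si4: 15/4 Li per Si; Li15Ge4: 15/4 Li per Ge; LiC6: 1/6 Li per C
c_si = specific_capacity(15 / 4, 28.0855)      # Si molar mass
c_ge = specific_capacity(15 / 4, 72.630)       # Ge molar mass
c_graphite = specific_capacity(1 / 6, 12.011)  # C molar mass
print(round(c_si), round(c_ge), round(c_graphite))
```

The computed values reproduce the 3579 mAh/g (Li₁₅Si₄), 1384 mAh/g (Li₁₅Ge₄) and 372 mAh/g (graphite, as LiC₆) figures cited above.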

Keywords: lithium ion battery, silicon germanium anode, in situ characterization, X-Ray diffraction

Procedia PDF Downloads 276
553 K-12 Students’ Digital Life: Activities and Attitudes

Authors: Meital Amzalag, Sharon Hardof-Jaffe

Abstract:

In the last few decades, children and youth have been immersed in digital technologies. Indeed, recent studies have explored the implications of technology use in their leisure and learning activities. Educators face an essential need to utilize technologies and implement them into the curriculum. To do that, educators need to understand how young people use digital technology. This study aims to explore K-12 students' digital lives from their point of view, to reveal their digital activities and the age and gender differences with respect to those activities, and to present the students' attitudes towards technologies in learning. The study approach is quantitative and includes 354 students aged 6-16 from three schools in Israel. The online questionnaire was based on self-reports and consists of four parts: digital activities: leisure time activities (such as social networks, gaming types), search activities (information types and platforms), and digital application use (e.g., calendar, notes); digital skills (requisite digital platform skills such as evaluation and creativity); social and emotional aspects of digital use (conducting digital activities alone and with friends, feelings and emotions during digital use such as happiness or bullying); and attitudes towards digital integration in learning. An academic ethics board approved the study. The main findings reveal the most popular K-12 digital activities: navigating social network sites, watching TV, playing mobile games, seeking information on the internet, and playing computer games. In addition, the findings reveal age differences in digital activities, such as significant differences in the use of social network sites. Moreover, the findings raise gender differences: girls use social network sites more, while boys play more digital games, which are characterized by high complexity and challenges. Additionally, we found positive attitudes towards technology integration in school.
Students perceive technology as enhancing creativity, promoting active learning, encouraging self-learning, and helping students with learning difficulties. The presentation will provide an up-to-date, accurate picture of the use of various digital technologies by K-12 students. In addition, it will discuss the learning potential of such use and how to implement digital technologies in the curriculum. Acknowledgments: This study is part of a broader study about K-12 digital life in Israel and is supported by Mofet, the Israel Institute for Teachers' Development.

Keywords: technology and learning, K-12, digital life, gender differences

Procedia PDF Downloads 124
552 Influence of Mandrel’s Surface on the Properties of Joints Produced by Magnetic Pulse Welding

Authors: Ines Oliveira, Ana Reis

Abstract:

Magnetic Pulse Welding (MPW) is a cold solid-state welding process, accomplished by the electromagnetically driven, high-speed and low-angle impact between two metallic surfaces. It has the same working principle as Explosive Welding (EXW), i.e., it is based on the collision of two parts at high impact speed, in this case propelled by electromagnetic force. Under proper conditions, i.e., flyer velocity and collision point angle, a permanent metallurgical bond can be achieved between widely dissimilar metals. MPW has been considered a promising alternative to conventional welding processes and advantageous compared to other impact processes. Nevertheless, current MPW applications are mostly academic. Despite the existing knowledge, the lack of consensus regarding several aspects of the process calls for further investigation. As a result, the mechanical resistance, morphology and structure of the weld interface in MPW of the dissimilar Al/Cu pair were investigated. The effect of process parameters, namely gap, standoff distance and energy, was studied. It was shown that welding only takes place if the process parameters are within an optimal range. Additionally, the formation of intermetallic phases cannot be completely avoided in the weld of the dissimilar Al/Cu pair by MPW. Depending on the process parameters, the intermetallic compounds can appear as a continuous layer or as small pockets. The thickness and composition of the intermetallic layer depend on the processing parameters. Different intermetallic phases can be identified, meaning that different temperature-time regimes can occur during the process. It is also found that lower pulse energies are preferable. The relationship between energy increase and melting is possibly related to multiple sources of heating. Higher values of pulse energy are associated with higher induced currents in the part, meaning that more Joule heating will be generated.
In addition, more energy means higher flyer velocity; the air in the gap between the parts to be welded is expelled, and the resulting aerodynamic drag (fluid friction), proportional to the square of the velocity, further contributes to the generation of heat. As the kinetic energy also increases with the square of the velocity, the dissipation of this energy through plastic work and jet generation will also contribute to an increase in temperature. To reduce intermetallic phases, porosity, and melt pockets, pulse energy should be minimized. The bond formation is affected not only by the gap, standoff distance, and energy but also by the mandrel's surface conditions. No clear correlation was identified between surface roughness/scratch orientation and joint strength. Nevertheless, the aspect of the interface (thickness of the intermetallic layer, porosity, presence of macro/microcracks) is clearly affected by the surface topography. Welding was not established on oil-contaminated surfaces, meaning that the jet action is not enough to completely clean the surface.
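The velocity-squared scalings described above can be made concrete in a short sketch (the drag coefficient, flyer area and mass below are illustrative assumptions, not data from the study):

```python
def drag_force(v, rho_air=1.2, cd=1.0, area=1e-4):
    """Aerodynamic drag F = 0.5 * rho * Cd * A * v**2 (fluid friction in the gap)."""
    return 0.5 * rho_air * cd * area * v ** 2

def kinetic_energy(v, mass=1e-3):
    """Flyer kinetic energy E = 0.5 * m * v**2, dissipated as plastic work and jetting."""
    return 0.5 * mass * v ** 2

# Both heat sources grow with v**2, so doubling the flyer velocity
# quadruples the drag force and the kinetic energy alike.
ratio_drag = drag_force(400.0) / drag_force(200.0)
ratio_ke = kinetic_energy(400.0) / kinetic_energy(200.0)
```

This is why minimizing pulse energy (and thus flyer velocity) suppresses both heating mechanisms at once.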

Keywords: bonding mechanisms, impact welding, intermetallic compounds, magnetic pulse welding, wave formation

Procedia PDF Downloads 204
551 42CrMo4 Steel Flow Behavior Characterization for High Temperature Closed Dies Hot Forging in Automotive Components Applications

Authors: O. Bilbao, I. Loizaga, F. A. Girot, A. Torregaray

Abstract:

The current energy situation and the high competitiveness in industrial sectors such as the automotive one have made the development of new manufacturing processes with lower energy and raw material consumption a real necessity. As a consequence, new forming processes related to high temperature hot forging in closed dies have emerged in recent years as solutions to expand the possibilities of hot forging and iron casting in the automotive industry. These technologies are mid-way between hot forging and semi-solid metal processes, working at temperatures higher than hot forging but below the solidus temperature or the semi-solid range, where no liquid phase is expected. This represents an advantage compared with semi-solid forming processes such as thixoforging, since such high temperatures need not be reached for high melting point alloys such as steels, reducing the manufacturing costs and the difficulties associated with their semi-solid processing. Compared with hot forging, these technologies allow the production of parts with as-forged properties and more complex, near-net shapes (thinner sidewalls), enhancing the possibility of designing lightweight components. From the process viewpoint, the forging forces are significantly decreased, and significant reductions in raw material, energy consumption, and forging steps have been demonstrated. Despite the mentioned advantages, from the material behavior point of view, the expansion of these technologies has shown the necessity of developing new material flow behavior models in the process working temperature range to make the simulation or prediction of these new forming processes feasible. Moreover, knowledge of the material flow behavior in the working temperature range also allows the design of the new closed-die concept required. 
In this work, the flow behavior of 42CrMo4 steel, widely used in commercial automotive components, has been characterized in the mentioned temperature range. For that, hot compression tests have been carried out in a thermomechanical tester over a temperature range that covers the material behavior from hot forging up to the Nil Ductility Temperature (NDT): 1250 ºC, 1275 ºC, 1300 ºC, 1325 ºC, 1350 ºC, and 1375 ºC. As for the strain rates, three different orders of magnitude have been considered (0.1 s⁻¹, 1 s⁻¹, and 10 s⁻¹). The results obtained from the hot compression tests have then been treated in order to adapt or re-write the Spittel model, widely used in commercial automotive software such as FORGE®, which restricts the existing models to temperatures up to 1250 ºC. Finally, the new flow behavior model has been validated by simulating the process for a commercial automotive component and comparing the simulation results with experimental tests already performed in a laboratory cell of the new technology. As a conclusion of the study, a new flow behavior model for 42CrMo4 steel in the new working temperature range has been achieved, together with the simulation of its application to commercial automotive components, and both will be shown.
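For context, the Spittel (Hensel-Spittel) law expresses flow stress as a product of temperature, strain and strain-rate terms; a minimal sketch follows, with purely illustrative coefficients (the study's fitted values are not reproduced here):

```python
import math

def hensel_spittel_stress(strain, strain_rate, temp_c,
                          A=3000.0, m1=-0.0030, m2=0.20, m3=0.12, m4=-0.05):
    """Flow stress (MPa) via the Hensel-Spittel form used by codes such as FORGE:
    sigma = A * exp(m1*T) * eps**m2 * epsdot**m3 * exp(m4/eps)."""
    return (A * math.exp(m1 * temp_c)
            * strain ** m2
            * strain_rate ** m3
            * math.exp(m4 / strain))

# Flow stress softens as temperature rises towards the NDT range:
s_1250 = hensel_spittel_stress(0.3, 1.0, 1250.0)
s_1350 = hensel_spittel_stress(0.3, 1.0, 1350.0)
```

Re-fitting the coefficients against hot compression data above 1250 ºC is what extends the model into the new working temperature range.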

Keywords: 42CrMo4 high temperature flow behavior, high temperature hot forging in closed dies, simulation of automotive commercial components, Spittel flow behavior model

Procedia PDF Downloads 120
550 Use of Pragmatic Cues for Word Learning in Bilingual and Monolingual Children

Authors: Isabelle Lorge, Napoleon Katsos

Abstract:

BACKGROUND: Children growing up in a multilingual environment face challenges related to the need to monitor the speaker’s linguistic abilities, more frequent communication failures, and having to acquire a large number of words in a limited amount of time compared to monolinguals. As a result, bilingual learners may develop different word learning strategies, rely more on some strategies than others, and engage cognitive resources such as theory of mind and attention skills in different ways. HYPOTHESIS: The goal of our study is to investigate whether multilingual exposure leads to improvements in the ability to use pragmatic inference for word learning, i.e., to use speaker cues to derive their referring intentions, often by overcoming lower-level salience effects. The speaker cues we identified as relevant are (a) use of a modifier with or without stress (‘the WET dax’ prompting the choice of the referent which has a dry counterpart), (b) referent extension (‘this is a kitten with a fep’ prompting the choice of the unique rather than the shared object), (c) referent novelty (choosing the novel action rather than the novel object which has already been manipulated), (d) teacher versus random sampling (assuming the choice of specific examples for a novel word to be relevant to the extension of that new category), and finally (e) emotional affect (‘look at the figoo’ uttered in a sad or happy voice). METHOD: To this end, we implemented on a touchscreen computer a task corresponding to each of the cues above, where the child had to pick the referent of a novel word. These word learning tasks (a)-(e) were adapted from previous word learning studies. 113 children have been tested (54 Reception and 59 Year 1, ranging from 4 to 6 years old) in a London primary school. 
Bilingual or monolingual status and other relevant information (age of onset, proficiency, and literacy for bilinguals) is ascertained through language questionnaires completed by parents (34 out of 113 received to date). While we do not yet have the data that will allow us to test for an effect of bilingualism, we can already see that performance is far from ceiling in any of the tasks. In some cases the children’s performance differs radically and qualitatively from adults’, which means that there is scope for quantitative and qualitative effects to arise between language groups. The findings should help explain the puzzling speed and efficiency that bilinguals demonstrate in acquiring competence in two languages.

Keywords: bilingualism, pragmatics, word learning, attention

Procedia PDF Downloads 130
549 Virtual Platform for Joint Amplitude Measurement Based on MEMS

Authors: Mauro Callejas-Cuervo, Andrea C. Alarcon-Aldana, Andres F. Ruiz-Olaya, Juan C. Alvarez

Abstract:

Motion capture (MC) is the construction of a precise and accurate digital representation of a real motion. MC systems have been used in recent years in a wide range of applications, from film special effects and animation, interactive entertainment, and medicine, to highly competitive sport, where maximum performance and low injury risk during training and competition are sought. This paper presents an inertial and magnetic sensor based technological platform, intended for joint amplitude monitoring and telerehabilitation processes, offering an efficient compromise between cost and technical performance. The platform offers high social impact possibilities by making telerehabilitation accessible to large population sectors in marginal socio-economic conditions, especially in underdeveloped countries where, in contrast to developed countries, specialists are scarce and high technology is unavailable or nonexistent. This platform integrates high-resolution, low-cost inertial and magnetic sensors with adequate user interfaces and communication protocols to deliver a diagnosis service over the web or other available communication networks. The amplitude information is generated by the sensors and then transferred to a computing device with adequate interfaces that make it accessible to inexperienced personnel, providing high social value. Amplitude measurements of the virtual platform presented a good fit to the respective reference system. Analyzing the robotic arm results (estimation errors RMSE 1 = 2.12° and RMSE 2 = 2.28°), it can be observed that during arm motion in either direction the estimation error is negligible; in fact, error appears only during direction reversal, which can easily be explained by the nature of inertial sensors and their relation to acceleration. Inertial sensors present a time-constant delay which acts as a first-order filter, attenuating signals at large acceleration values, as is the case for a change of direction in motion. 
A damped response of the virtual platform can also be seen in other images, where error analysis shows that at maximum amplitude an underestimation of amplitude is present, whereas at minimum amplitude an overestimation is observed. This work presents and describes the virtual platform as a motion capture system suitable for telerehabilitation, with the cost/quality and precision/accessibility relations optimized. These characteristics, achieved by efficiently using the state of the art of accessible generic technology in sensors and hardware, together with adequate software for capture, transmission, analysis and visualization, provide the capacity to offer good telerehabilitation services, reaching large, more or less marginalized populations where technologies and specialists are not available but basic communication networks are accessible.
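The RMSE figures quoted above are the standard accuracy metric for joint-angle tracking; a minimal sketch of the computation (the toy angle traces below are invented for illustration, not the study's data):

```python
import math

def rmse(estimated, reference):
    """Root-mean-square error between estimated and reference joint angles (degrees)."""
    diffs = [(e - r) ** 2 for e, r in zip(estimated, reference)]
    return math.sqrt(sum(diffs) / len(diffs))

# Toy trace: small tracking error during motion, larger near a reversal.
est = [0.0, 10.2, 19.5, 30.4]
ref = [0.0, 10.0, 20.0, 30.0]
error = rmse(est, ref)
```

Comparing such an error against the reference system is how values like RMSE = 2.12° are obtained.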

Keywords: inertial sensors, joint amplitude measurement, MEMS, telerehabilitation

Procedia PDF Downloads 254
548 Employing Remotely Sensed Soil and Vegetation Indices and Long Short-Term Memory Prediction for Irrigation Scheduling Analysis

Authors: Elham Koohikerade, Silvio Jose Gumiere

Abstract:

In this research, irrigation is highlighted as crucial for improving both the yield and quality of potatoes due to their high sensitivity to soil moisture changes. The study presents a hybrid Long Short-Term Memory (LSTM) model aimed at optimizing irrigation scheduling in potato fields in Quebec City, Canada. This model integrates model-based and satellite-derived datasets to simulate soil moisture content, addressing the limitations of field data. Developed under the guidance of the Food and Agriculture Organization (FAO), the simulation approach compensates for the lack of direct soil sensor data, enhancing the LSTM model's predictions. The model was calibrated using indices such as Surface Soil Moisture (SSM), Normalized Difference Vegetation Index (NDVI), Enhanced Vegetation Index (EVI), and Normalized Multi-band Drought Index (NMDI) to effectively forecast soil moisture reductions. Understanding soil moisture and plant development is crucial for assessing drought conditions and determining irrigation needs. This study validated the spectral characteristics of vegetation and soil using ECMWF Reanalysis v5 (ERA5) and Moderate Resolution Imaging Spectroradiometer (MODIS) data from 2019 to 2023, collected from agricultural areas in Dolbeau and Peribonka, Quebec. Parameters such as surface volumetric soil moisture (0-7 cm), NDVI, EVI, and NMDI were extracted from these images. A regional four-year dataset of soil and vegetation moisture was developed using a machine learning approach combining model-based and satellite-based datasets. The LSTM model predicts soil moisture dynamics hourly across different locations and times, with its accuracy verified through cross-validation and comparison with existing soil moisture datasets. The model effectively captures temporal dynamics, making it valuable for applications requiring soil moisture monitoring over time, such as anomaly detection and memory analysis. 
By identifying typical peak soil moisture values and observing distribution shapes, irrigation can be scheduled to maintain soil moisture within Volumetric Soil Moisture (VSM) values of 0.25 to 0.30 m³/m³, avoiding under- and over-watering. The strong correlations between parcels suggest that a uniform irrigation strategy might be effective across multiple parcels, with adjustments based on specific parcel characteristics and historical data trends. The application of the LSTM model to predict soil moisture and vegetation indices yielded mixed results. While the model effectively captures the central tendency and temporal dynamics of soil moisture, it struggles with accurately predicting EVI, NDVI, and NMDI.
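As a sketch of the recurrent core behind such a model, one LSTM cell step can be written out in NumPy (dimensions, random weights and the three-feature input standing in for NDVI/EVI/NMDI are illustrative assumptions, not the authors' architecture):

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One LSTM cell step: gates (input, forget, output) plus candidate state."""
    z = W @ x + U @ h + b              # stacked pre-activations, shape (4*n,)
    n = h.size
    i = 1.0 / (1.0 + np.exp(-z[:n]))        # input gate
    f = 1.0 / (1.0 + np.exp(-z[n:2*n]))     # forget gate
    o = 1.0 / (1.0 + np.exp(-z[2*n:3*n]))   # output gate
    g = np.tanh(z[3*n:])                    # candidate cell state
    c_new = f * c + i * g                   # memory carries temporal dynamics
    h_new = o * np.tanh(c_new)
    return h_new, c_new

# Toy run: 3 input features per hour, hidden size 2, unrolled over 5 steps.
rng = np.random.default_rng(0)
n_in, n_h = 3, 2
W = rng.standard_normal((4 * n_h, n_in)) * 0.1
U = rng.standard_normal((4 * n_h, n_h)) * 0.1
b = np.zeros(4 * n_h)
h, c = np.zeros(n_h), np.zeros(n_h)
for _ in range(5):
    x = rng.standard_normal(n_in)
    h, c = lstm_step(x, h, c, W, U, b)
```

The cell state `c` is what lets the model retain hourly soil moisture history, which is why LSTMs suit this kind of temporal monitoring.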

Keywords: irrigation scheduling, LSTM neural network, remotely sensed indices, soil and vegetation monitoring

Procedia PDF Downloads 35
547 Application of Shore Protective Structures in Optimum Land Use of Defense Sites Located in Coastal Cities

Authors: Mir Ahmad Lashteh Neshaei, Hamed Afsoos Biria, Ata Ghabraei, Mir Abdolhamid Mehrdad

Abstract:

Awareness of effective land use issues in coastal areas, including the protection of natural ecosystems and the coastal environment, is of great importance given the growing human presence along the coast. There are numerous valuable structures and heritage sites located in defense sites and waterfront areas. Marine structures such as groins, sea walls and detached breakwaters are constructed on the coast to improve coastal stability against bed erosion due to changing wave and climate patterns. Marine mechanisms and their interaction with shore protection structures need to be intensively studied. Groins are among the most prominent structures used in shore protection to create a safe environment for the coastal area by defending the land against progressive coastal erosion. The main structural function of a groin is to control the longshore current and littoral sediment transport. This structure can be submerged and provide the necessary beach protection without negative environmental impact. However, for submerged structures adopted for beach protection, the shoreline response to these structures is not well understood at present. Nowadays, modelling and computer simulation are used to assess beach morphology in the vicinity of marine structures to reduce their environmental impact. The objective of this study is to predict the beach morphology in the vicinity of submerged groins and compare it with that of non-submerged groins, with focus on a part of the coast located in Dahane Sar Sefidrood, Guilan province, Iran, where serious coastal erosion has occurred recently. The simulations were obtained using a one-line model, which can be used as a first approximation of shoreline prediction in the vicinity of groins. The results of the proposed model are compared with field measurements to determine the shape of the coast. 
Finally, the results of the present study show that submerged groins can control beach erosion efficiently without causing severe environmental impact to the coast. The outcomes of this study can be employed in the optimum design of defense sites in coastal cities to improve their efficiency in terms of re-using heritage lands.
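One-line models of the kind mentioned above reduce shoreline evolution to a diffusion equation for shoreline position y(x), dy/dt = eps * d2y/dx2, with a groin acting as a barrier to longshore transport. A minimal explicit finite-difference sketch, with purely illustrative diffusivity, grid and time-step values (not the study's calibration):

```python
def one_line_step(y, eps, dx, dt):
    """Explicit finite-difference update of shoreline position y(x)
    under dy/dt = eps * d2y/dx2 (one-line shoreline model)."""
    y_new = y[:]
    for i in range(1, len(y) - 1):
        y_new[i] = y[i] + eps * dt / dx ** 2 * (y[i + 1] - 2 * y[i] + y[i - 1])
    # Zero-longshore-transport (no-flux) boundaries, e.g. at a groin:
    y_new[0] = y_new[1]
    y_new[-1] = y_new[-2]
    return y_new

y = [0.0] * 10
y[5] = 1.0                          # initial shoreline perturbation (m)
eps, dx, dt = 0.002, 10.0, 3600.0   # diffusivity (m^2/s), grid (m), step (s)
for _ in range(100):                # eps*dt/dx**2 = 0.072 keeps the scheme stable
    y = one_line_step(y, eps, dx, dt)
```

The perturbation spreads and decays over time, which is the first-approximation behavior used to predict the coast shape near groins.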

Keywords: submerged structures, groin, shore protective structures, coastal cities

Procedia PDF Downloads 311
546 Geometric Optimisation of Piezoelectric Fan Arrays for Low Energy Cooling

Authors: Alastair Hales, Xi Jiang

Abstract:

Numerical methods are used to evaluate the operation of confined face-to-face piezoelectric fan arrays as pitch, P, between the blades is varied. Both in-phase and counter-phase oscillation are considered. A piezoelectric fan consists of a fan blade, which is clamped at one end, and an extremely low powered actuator. This drives the blade tip’s oscillation at its first natural frequency. Sufficient blade tip speed, created by the high oscillation frequency and amplitude, is required to induce vortices and downstream volume flow in the surrounding air. A single piezoelectric fan may provide the ideal solution for low powered hot spot cooling in an electronic device, but is unable to induce sufficient downstream airflow to replace a conventional air mover, such as a convection fan, in power electronics. Piezoelectric fan arrays, which are assemblies including multiple fan blades usually in face-to-face orientation, must be developed to widen the field of feasible applications for the technology. The potential energy saving is significant, with a 50% power demand reduction compared to convection fans even in an unoptimised state. A numerical model of a typical piezoelectric fan blade is derived and validated against experimental data. Numerical error is found to be 5.4% and 9.8% using two data comparison methods. The model is used to explore the variation of pitch as a function of amplitude, A, for a confined two-blade piezoelectric fan array in face-to-face orientation, with the blades oscillating both in-phase and counter-phase. It has been reported that in-phase oscillation is optimal for generating maximum downstream velocity and flow rate in unconfined conditions, due at least in part to the beneficial coupling between the adjacent blades that leads to an increased oscillation amplitude. The present model demonstrates that confinement has a significant detrimental effect on in-phase oscillation. 
Even at low pitch, counter-phase oscillation produces enhanced downstream air velocities and flow rates. Downstream air velocity from counter-phase oscillation can be maximally enhanced, relative to that generated from a single blade, by 17.7% at P = 8A. Flow rate enhancement at the same pitch is found to be 18.6%. By comparison, in-phase oscillation at the same pitch outputs 23.9% and 24.8% reductions in peak downstream air velocity and flow rate, relative to those generated from a single blade. This optimal pitch, equivalent to those reported in the literature, suggests that counter-phase oscillation is less affected by confinement. The optimal pitch for generating bulk airflow from counter-phase oscillation is large, P > 16A, due to the small but significant downstream velocity across the span between adjacent blades. However, when designing for a confined space, counter-phase pitch should be minimised to maximise the bulk airflow generated from a given cross-sectional area within a channel flow application. Quantitative values deviate to a small degree as other geometric and operational parameters are varied, but the established relationships are maintained.
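Since each blade is driven at its first natural frequency, that frequency follows from Euler-Bernoulli theory for a clamped-free (cantilever) beam; a sketch with illustrative blade properties (the values are assumptions, not the study's blade):

```python
import math

def cantilever_f1(E, I, rho_a, L):
    """First bending natural frequency (Hz) of a uniform cantilever:
    f1 = (lambda1**2 / (2*pi)) * sqrt(E*I / (rho_a * L**4)), with lambda1 = 1.875
    (E: Young's modulus, I: second moment of area, rho_a: mass per unit length)."""
    return (1.875 ** 2 / (2 * math.pi)) * math.sqrt(E * I / (rho_a * L ** 4))

# f1 scales with 1/L**2, so halving the blade length quadruples the frequency:
f_long = cantilever_f1(E=3.0e9, I=1.0e-14, rho_a=1.0e-3, L=0.06)
f_short = cantilever_f1(E=3.0e9, I=1.0e-14, rho_a=1.0e-3, L=0.03)
```

This sensitivity to geometry is why blade dimensions are fixed while pitch P is varied as the design parameter in the array study.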

Keywords: piezoelectric fans, low energy cooling, power electronics, computational fluid dynamics

Procedia PDF Downloads 215
545 Kinetic Evaluation of Sterically Hindered Amines under Partial Oxy-Combustion Conditions

Authors: Sara Camino, Fernando Vega, Mercedes Cano, Benito Navarrete, José A. Camino

Abstract:

Carbon capture and storage (CCS) technologies should play a relevant role in moving the European Union towards low-carbon systems by 2030. Partial oxy-combustion emerges as a promising CCS approach to mitigate anthropogenic CO₂ emissions. Its advantages with respect to other CCS technologies lie in the production of a flue gas with a higher CO₂ concentration than that provided by conventional air-firing processes. The presence of more CO₂ in the flue gas increases the driving force in the separation process and hence might lead to further reductions in the energy requirements of the overall CO₂ capture process. A more CO₂-concentrated flue gas should enhance CO₂ capture by chemical absorption in terms of solvent kinetics and CO₂ cyclic capacity. These factors affect the performance of the overall CO₂ absorption process by reducing the solvent flow rate required for a specific CO₂ removal efficiency. Lower solvent flow rates decrease the reboiler duty during the regeneration stage and also reduce the equipment size and pumping costs. Moreover, R&D activities in this field are focused on novel solvents and blends that provide lower CO₂ absorption enthalpies and therefore lower energy penalties associated with solvent regeneration. In this respect, sterically hindered amines are considered potential solvents for CO₂ capture. They provide a low energy requirement during the regeneration process due to their molecular structure. However, their absorption kinetics are slow and must be promoted by blending with faster solvents such as monoethanolamine (MEA) and piperazine (PZ). In this work, the kinetic behavior of two sterically hindered amines was studied under partial oxy-combustion conditions and compared with MEA. A lab-scale semi-batch reactor was used. The CO₂ composition of the synthetic flue gas varied from 15%v/v, typical of conventional coal combustion, to 60%v/v, the maximum CO₂ concentration allowable for optimal partial oxy-combustion operation. 
The first solvent, 2-amino-2-methyl-1-propanol (AMP), showed a hybrid behavior with fast kinetics and a low enthalpy of CO₂ absorption. The second solvent was isophorone diamine (IF), which has a steric hindrance on one of its amino groups. Its free amino group increases its cyclic capacity. In general, the presence of a higher CO₂ concentration in the flue gas accelerated the CO₂ absorption phenomena, producing higher CO₂ absorption rates. In addition, the evolution of the CO₂ loading also exhibited higher values in the experiments using the more CO₂-concentrated flue gas. The steric hindrance gives this solvent a hybrid behavior, between fast and slow kinetic solvents. The kinetic rates observed in all the experiments carried out using AMP were higher than those of MEA, but lower than those of IF. The kinetic enhancement experienced by AMP at high CO₂ concentration is slightly over 60%, compared with 70%-80% for IF. AMP also improved its CO₂ absorption capacity by 24.7% from 15%v/v to 60%v/v, almost double the improvement achieved by MEA. In the IF experiments, the CO₂ loading changed from 1.10 to 1.34 mol CO₂ per mol solvent from 15%v/v to 60%v/v CO₂, an increase of more than 20%. This hybrid kinetic behavior makes AMP and IF promising solvents for partial oxy-combustion applications.
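The reported loading gain for IF can be checked with simple arithmetic (the helper below is just for the percentage calculation, not part of the study's methodology):

```python
def loading_increase_pct(alpha_low, alpha_high):
    """Relative increase (%) in CO2 loading (mol CO2 per mol solvent)."""
    return 100.0 * (alpha_high - alpha_low) / alpha_low

# IF loading rises from 1.10 to 1.34 mol/mol between 15 and 60 %v/v CO2:
gain = loading_increase_pct(1.10, 1.34)   # about 21.8%, i.e. more than 20%
```

The same calculation applied to any solvent's loadings at the two flue-gas compositions gives its cyclic-capacity improvement.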

Keywords: absorption, carbon capture, partial oxy-combustion, solvent

Procedia PDF Downloads 182
544 Graphene Supported Nano Cerium Oxides Hybrid as an Electrocatalyst for Oxygen Reduction Reactions

Authors: Siba Soren, Purnendu Parhi

Abstract:

Today, the world is facing a severe challenge due to the depletion of traditional fossil fuels. Scientists across the globe are working towards a solution that involves a dramatic shift to practical and environmentally sustainable energy sources. High-capacity energy systems, such as metal-air batteries and fuel cells, are highly desirable to meet the urgent requirement for sustainable energy. Among fuel cells, direct methanol fuel cells (DMFCs) are recognized as an ideal power source for mobile applications and have received considerable attention in the recent past. In these advanced electrochemical energy conversion technologies, the Oxygen Reduction Reaction (ORR) is of utmost importance. However, the poor kinetics of the cathodic ORR in DMFCs significantly hampers their prospects of commercialization. Oxygen is reduced in alkaline medium at the cathode through either a 4-electron (equation i) or a 2-electron (equation ii) reduction pathway: (i) O₂ + 2H₂O + 4e⁻ → 4OH⁻; (ii) O₂ + H₂O + 2e⁻ → OH⁻ + HO₂⁻. Due to sluggish ORR kinetics, the ability to control the reduction of molecular oxygen electrocatalytically is still limited. The electrocatalytic ORR starts with the adsorption of O₂ on the electrode surface, followed by O-O bond activation/cleavage and oxide removal. The reaction further involves the transfer of 4 electrons and 4 protons. The sluggish kinetics of the ORR demand high loadings of precious-metal-containing catalysts (e.g., Pt), which unfavorably increase the cost of these electrochemical energy conversion devices. Therefore, the synthesis of active electrocatalysts with improved ORR performance is the need of the hour. In the recent literature, there are many reports on transition metal oxide (TMO) based ORR catalysts owing to their high activity. However, TMOs also have drawbacks such as low electrical conductivity, which seriously affects the electron transfer process during the ORR. 
The 2D graphene layer, with its high electrical conductivity, large surface area, and excellent chemical stability, appears to be an ultimate choice as a support material to enhance the catalytic performance of bare metal oxides. g-C₃N₄ is another candidate that has been used by researchers to improve the ORR performance of metal oxides; this material provides more active reaction sites than other N-containing carbon materials. A rare earth oxide like CeO₂ is also a good candidate for studying ORR activity, as the metal oxide possesses not only unique electronic properties but also catalytically active sites. Here we discuss the ORR performance (in alkaline medium) of N-rGO/C₃N₄-supported nano cerium oxide hybrids synthesized by a microwave-assisted solvothermal method. These materials exhibit electrochemical stability and methanol tolerance superior to those of commercial Pt/C.
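The number of electrons transferred per O₂, which distinguishes the 4-electron from the 2-electron pathway above, is commonly extracted from rotating-disk electrode data via the Koutecky-Levich slope; a hedged sketch follows (the SI-unit constants are generic values assumed for an alkaline electrolyte, not measurements from this work):

```python
def levich_B(n, F=96485.0, c_o2=1.2, D_o2=1.9e-9, nu=1.0e-6):
    """Levich constant B = 0.62 * n * F * C * D**(2/3) * nu**(-1/6) (SI units:
    C in mol/m^3, D and nu in m^2/s; rotation rate omega must be in rad/s)."""
    return 0.62 * n * F * c_o2 * D_o2 ** (2.0 / 3.0) * nu ** (-1.0 / 6.0)

def electron_number(kl_slope, **kwargs):
    """Invert a fitted Koutecky-Levich slope, d(1/j)/d(omega**-0.5) = 1/B, to n."""
    return 1.0 / (kl_slope * levich_B(1.0, **kwargs))

# Round trip: a synthetic slope for an ideal 4-electron pathway recovers n = 4.
slope_4e = 1.0 / levich_B(4.0)
n_est = electron_number(slope_4e)
```

An n close to 4 indicates the direct reduction to OH⁻, whereas n near 2 indicates the peroxide (HO₂⁻) route.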

Keywords: oxygen reduction reaction, electrocatalyst, cerium oxide, graphene

Procedia PDF Downloads 177