Search results for: artificial air storage reservoir
322 The Effects of Computer Game-Based Pedagogy on Graduate Students' Statistics Performance
Authors: Clement Yeboah, Eva Laryea
Abstract:
A pretest-posttest within-subjects experimental design was employed to examine the effects of a computerized basic statistics learning game on the achievement and statistics-related anxiety of students enrolled in an introductory graduate statistics course. Participants (N = 34) were graduate students in a variety of programs at a state-funded research university in the Southeast United States. We analyzed pretest-posttest differences using paired samples t-tests for achievement and for statistics anxiety. The results of the t-test for knowledge in statistics were statistically significant, indicating significant mean gains in statistical knowledge as a function of the game-based intervention. Likewise, the results of the t-test for statistics-related anxiety were also statistically significant, indicating a decrease in anxiety from pretest to posttest. The implications of the present study are significant for both teachers and students. For teachers, using computer games developed by the researchers can help to create a more dynamic and engaging classroom environment, as well as improve student learning outcomes. For students, playing these educational games can help to develop important skills such as problem solving, critical thinking, and collaboration. Students can develop an interest in the subject matter and spend quality time learning the course as they play the game, without realizing that they are learning material presumed to be difficult. The future directions of the present study are promising as technology continues to advance and become more widely available. Some potential future developments include the integration of virtual and augmented reality into educational games, the use of machine learning and artificial intelligence to create personalized learning experiences, and the development of new and innovative game-based assessment tools. It is also important to consider the ethical implications of computer game-based pedagogy, such as the potential for games to perpetuate harmful stereotypes and biases. As the field continues to evolve, it will be crucial to address these issues and work towards creating inclusive and equitable learning experiences for all students. This study has the potential to revolutionize the way graduate students learn basic statistics and offers exciting opportunities for future development and research. It is an important area of inquiry for educators, researchers, and policymakers and will continue to be a dynamic and rapidly evolving field for years to come.
Keywords: pretest-posttest within subjects, computer game-based learning, statistics achievement, statistics anxiety
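For readers who want to reproduce this kind of analysis, the sketch below runs a paired-samples t-test with SciPy. The score arrays are hypothetical placeholders for illustration only, not the study's data.

```python
# A minimal sketch of the paired-samples t-test analysis described above.
# The score arrays are hypothetical placeholders, not the study's data.
import numpy as np
from scipy import stats

# Hypothetical pretest/posttest achievement scores for the same participants
pre_achievement = np.array([55, 60, 48, 70, 62, 58, 65, 50])
post_achievement = np.array([68, 72, 59, 80, 70, 66, 75, 61])

t_stat, p_value = stats.ttest_rel(post_achievement, pre_achievement)
mean_gain = (post_achievement - pre_achievement).mean()
print(f"Achievement: t = {t_stat:.2f}, p = {p_value:.4f}, mean gain = {mean_gain:.1f}")

# The same procedure applies to the statistics-anxiety scores, where a
# significant drop from pretest to posttest indicates reduced anxiety.
```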
Procedia PDF Downloads 77
321 Exploring the Benefits of Hiring Individuals with Disabilities in the Workplace
Authors: Rosilyn Sanders
Abstract:
This qualitative study examined the impact of hiring people with intellectual disabilities (ID). The research questions were: What defines a disability? What accommodations are needed to ensure the success of a person with a disability? As a leader, what benefits do people with intellectual disabilities bring to the organization? What are the benefits of hiring people with intellectual disabilities in retail organizations? Moreover, how might people with intellectual disabilities contribute to the organizational culture of retail organizations? A narrative strength approach was used as a theoretical framework to guide the discussion and uncover the benefits of hiring individuals with intellectual disabilities in various retail organizations. Using qualitative interviews, the following themes emerged: diversity and inclusion, accommodations, organizational culture, motivation, and customer service. These findings put to rest some negative stereotypes and perceptions of persons with ID as being unemployable or unable to perform tasks when employed, showing instead that persons with ID can work efficiently when given the necessary work accommodations and support in an enabling organizational culture. All participants were recruited and selected through various forms of electronic communication via social media, email invitations, and phone, using a snowball sampling methodology; the following demographics were recorded: age, ethnicity, gender, number of years in retail, number of years in management, and number of direct reports. The sample population was employed in several retail organizations throughout Arkansas and Texas. The small sample size, typical for qualitative research, helped the researcher develop, build, and maintain close relationships that encouraged participants to be forthcoming and honest with information (Clow & James, 2014). Participants were screened to ensure that they met the study criteria and were over 18 years of age, and were asked whether they recruit, interview, hire, and supervise individuals with intellectual disabilities. Individuals were given consent forms via email to indicate their interest in participating in this study. Due to COVID-19, all interviews were conducted via teleconferencing (Zoom or Microsoft Teams); they lasted approximately 1 hour and were transcribed, coded for themes, and grouped based on similar responses. Further, the participants were not privy to the interview questions beforehand, and demographic questions were asked at the end, including questions concerning age, education level, and job status. Each participant was assigned a random number using an app called 'The Random Number Generator' to ensure that all personal or identifying information of participants was removed. Regarding data storage, all documentation, including consent forms, recordings, transcripts, and researcher notes, was stored on a password-protected external drive.
Keywords: diversity, positive psychology, organizational development, leadership
Procedia PDF Downloads 67
320 Port Miami in the Caribbean and Mesoamerica: Data, Spatial Networks and Trends
Authors: Richard Grant, Landolf Rhode-Barbarigos, Shouraseni Sen Roy, Lucas Brittan, Change Li, Aiden Rowe
Abstract:
Ports are critical for the US economy, connecting farmers, manufacturers, retailers, consumers, and an array of transport and storage operators. Port facilities vary widely in terms of their productivity, footprint, specializations, and governance. In this context, Port Miami is one of the busiest ports providing both cargo and cruise services, connecting the wider region of the Caribbean and Mesoamerica to global networks. It is regarded as the "Cruise Capital of the World and Global Gateway of the Americas" and the "leading container port in Florida." Furthermore, it has also been ranked as one of the top container ports in the world and the second most efficient port in North America. In this regard, Port Miami has made significant strategic and capital infrastructure investments of about US$1 billion, including increasing the channel depth and other onshore infrastructural enhancements. Therefore, this study involves a detailed analysis of Port Miami's network, using multiple years of publicly available data on marine vessel traffic, cargo, and connectivity and performance indices from 2015-2021. Through the analysis of cargo and cruise vessels to and from Port Miami and its relative performance at the global scale from 2015 to 2021, this study examines the port's long-term resilience and future growth potential. The main results of the analyses indicate that the top category for both inbound and outbound cargo is manufactured products and textiles. In addition, inbound cargo includes substantial fresh fruit, vegetables, and produce, while outbound cargo includes processed food. Furthermore, the top ten port connections for Port Miami are all located in the Caribbean region, the Gulf of Mexico, and the Southeast USA. About half of the inbound cargo comes from Savannah, Saint Thomas, and Puerto Plata, while outbound cargo goes chiefly to Puerto Corte, Freeport, and Kingston. Additionally, for cruise vessels, a significantly large number of vessels originate from Nassau, followed by Freeport. The number of passenger vessels pre-COVID was almost 1,000 per year, which dropped substantially in 2020 and 2021 to around 300 vessels. Finally, the resilience and competitiveness of Port Miami were also assessed in terms of its network connectivity by examining the inbound and outbound maritime vessel traffic. It is noteworthy that the most frequent port connections for Port Miami were Freeport and Savannah, followed by Kingston, Nassau, and New Orleans. However, several of these ports (Puerto Corte, Veracruz, Puerto Plata, and Santo Thomas) have low resilience and are highly vulnerable, which needs to be taken into consideration for the long-term resilience of Port Miami in the future.
Keywords: port, Miami, network, cargo, cruise
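As a rough illustration of the network-connectivity analysis described above, the sketch below builds a small directed port graph with networkx; the edge list and voyage counts are invented for demonstration and are not the study's data.

```python
# A minimal sketch of a port-connectivity network analysis of the kind
# described above, using networkx. The edge list is hypothetical.
import networkx as nx

# (origin, destination, number of voyages) -- illustrative values only
edges = [
    ("Freeport", "Miami", 120), ("Savannah", "Miami", 110),
    ("Miami", "Kingston", 95), ("Nassau", "Miami", 90),
    ("Miami", "New Orleans", 60),
]

G = nx.DiGraph()
G.add_weighted_edges_from(edges)

# Weighted degree (inbound + outbound voyages) as a simple connectivity measure
strength = dict(G.degree(weight="weight"))
print(sorted(strength.items(), key=lambda kv: -kv[1]))
```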
Procedia PDF Downloads 79
319 Evolution of Web Development Progress in Modern Information Technology
Authors: Abdul Basit Kiani
Abstract:
Web development, the art of creating and maintaining websites, has witnessed remarkable advancements. The aim is to provide an overview of some of the cutting-edge developments in the field. Firstly, the rise of responsive web design has revolutionized user experiences across devices. With the increasing prevalence of smartphones and tablets, web developers have adapted to ensure seamless browsing experiences, regardless of screen size. This progress has greatly enhanced accessibility and usability, catering to the diverse needs of users worldwide. Additionally, the evolution of web frameworks and libraries has significantly streamlined the development process. Tools such as React, Angular, and Vue.js have empowered developers to build dynamic and interactive web applications with ease. These frameworks not only enhance efficiency but also bolster scalability, allowing for the creation of complex and feature-rich web solutions. Furthermore, the emergence of progressive web applications (PWAs) has bridged the gap between native mobile apps and web development. PWAs leverage modern web technologies to deliver app-like experiences, including offline functionality, push notifications, and seamless installation. This innovation has transformed the way users interact with websites, blurring the boundaries between traditional web and mobile applications. Moreover, the integration of artificial intelligence (AI) and machine learning (ML) has opened new horizons in web development. Chatbots, intelligent recommendation systems, and personalization algorithms have become integral components of modern websites. These AI-powered features enhance user engagement, provide personalized experiences, and streamline customer support processes, revolutionizing the way businesses interact with their audiences. Lastly, the emphasis on web security and privacy has been a pivotal area of progress. With the increasing incidents of cyber threats, web developers have implemented robust security measures to safeguard user data and ensure secure transactions. Innovations such as HTTPS protocol, two-factor authentication, and advanced encryption techniques have bolstered the overall security of web applications, fostering trust and confidence among users. Hence, recent progress in web development has propelled the industry forward, enabling developers to craft innovative and immersive digital experiences. From responsive design to AI integration and enhanced security, the landscape of web development continues to evolve, promising a future filled with endless possibilities.
Keywords: progressive web applications (PWAs), web security, machine learning (ML), web frameworks, advancement, responsive web design
Procedia PDF Downloads 54
318 Multidisciplinary Approach to Mio-Plio-Quaternary Aquifer Study in the Zarzis Region (Southeastern Tunisia)
Authors: Ghada Ben Brahim, Aicha El Rabia, Mohamed Hedi Inoubli
Abstract:
Climate change has exacerbated disparities in the distribution of water resources in Tunisia, resulting in significant degradation in quantity and quality over the past five decades. The Mio-Plio-Quaternary aquifer, the primary water source in the Zarzis region, is subject to climatic, geographical, and geological challenges, as well as human stress. The region is experiencing uneven distribution and growing threats from groundwater salinity and saltwater intrusion. Addressing this challenge is critical for the arid region's socioeconomic development, and effective water resource management is required to combat climate change and reduce water deficits. This study uses a multidisciplinary approach to determine the groundwater potential of this aquifer, combining geophysical and hydrogeological data analysis. We used advanced techniques such as 3D Euler deconvolution and power spectrum analysis to generate detailed anomaly maps and estimate the depths of density sources, identifying significant Bouguer anomalies trending E-W, NW-SE, and NE-SW. Various techniques, such as wavelength filtering, upward continuation, and horizontal and vertical derivatives, were used to improve the gravity data, yielding consistent results for anomaly shapes and amplitudes. The Euler deconvolution method revealed two prominent surface faults, trending NE-SW and NW-SE, that have a significant impact on the distribution of sedimentary facies and water quality within the Mio-Plio-Quaternary aquifer. Additionally, depth maxima greater than 1400 m to the north indicate the presence of a Cretaceous paleo-fault. Geoelectrical models and resistivity pseudo-sections were used to interpret the distribution of electrical facies in the Mio-Plio-Quaternary aquifer, highlighting lateral variation and the type of depositional environment. AI optimises the analysis and interpretation of exploration data, which is important for long-term management and water security. Machine learning algorithms and deep learning models analyse large datasets to provide precise interpretations of subsurface conditions, such as aquifer salinisation. However, AI has limitations, such as the requirement for large datasets, the risk of overfitting, and integration issues with traditional geological methods.
Keywords: Mio-Plio-Quaternary aquifer, southeastern Tunisia, geophysical methods, hydrogeological analysis, artificial intelligence
Procedia PDF Downloads 14
317 Evolution of Microstructure through Phase Separation via Spinodal Decomposition in Spinel Ferrite Thin Films
Authors: Nipa Debnath, Harinarayan Das, Takahiko Kawaguchi, Naonori Sakamoto, Kazuo Shinozaki, Hisao Suzuki, Naoki Wakiya
Abstract:
Nowadays, spinel ferrite magnetic thin films have drawn considerable attention due to their interesting magnetic and electrical properties combined with enhanced chemical and thermal stability. Spinel ferrite magnetic films can be implemented in magnetic data storage, sensors, and spin filters or microwave devices. It is well established that the structural, magnetic, and transport properties of magnetic thin films depend on microstructure. Spinodal decomposition (SD) is a phase separation process whereby a material system spontaneously separates into two phases with distinct compositions. The periodic microstructure is the characteristic feature of SD. Thus, SD can be exploited to control the microstructure at the nanoscale level. In bulk spinel ferrites having the general formula MₓFe₃₋ₓO₄ (M = Co, Mn, Ni, Zn), phase separation via SD has been reported only for cobalt ferrite (CFO); however, long post-annealing is required for spinodal decomposition to occur. We have found that SD occurs in CFO thin films without any post-deposition annealing process if a magnetic field is applied during thin film growth. Dynamic Aurora pulsed laser deposition (PLD) is a specially designed PLD system through which an in-situ magnetic field (up to 2000 G) can be applied during thin film growth. The in-situ magnetic field suppresses the recombination of ions in the plume. In addition, the intensity of the ion peaks in the spectra of the plume also increases when the magnetic field is applied. As a result, ions with high kinetic energy strike the substrate. Thus, ion impingement occurs under the magnetic field during thin film growth. The driving force of SD is this ion impingement towards the substrate, induced by the in-situ magnetic field. In this study, we report the occurrence of phase separation through SD and the evolution of microstructure after phase separation in spinel ferrite thin films. The surface morphology of the phase-separated films shows a checkerboard-like domain structure. The cross-sectional microstructure of the phase-separated films reveals columnar-type phase separation. Herein, the decomposition wave propagates in the lateral direction, which has been confirmed from the lateral composition modulations in spinodally decomposed films. Large magnetic anisotropy has been found in spinodally decomposed nickel ferrite (NFO) thin films. This approach confirms that the magnetic field is also an important thermodynamic parameter for inducing phase separation through the enhancement of uphill diffusion in thin films. This thin film deposition technique could be an efficient alternative for the fabrication of self-organized phase-separated thin films and could be employed to control the microstructure at the nanoscale level.
Keywords: Dynamic Aurora PLD, magnetic anisotropy, spinodal decomposition, spinel ferrite thin film
Procedia PDF Downloads 366
316 Performance Assessment of an Existing Multi-Effect Desalination System Driven by Solar Energy
Authors: B. Shahzamanian, S. Varga, D. C. Alarcón-Padilla
Abstract:
Desalination is considered the primary alternative to increase the water supply for domestic, agricultural, and industrial use. Sustainable desalination is only possible in places where renewable energy resources are available. Solar energy is the most relevant type of renewable energy for driving desalination systems, since most of the areas suffering from water scarcity are characterized by a high amount of available solar radiation during the year. Multi-Effect Desalination (MED) technology integrated with solar thermal concentrators is a suitable combination for heat-driven desalination. It can also be coupled with thermal vapour compressors or absorption heat pumps to boost overall system performance. The most interesting advantage of MED is its suitability for use with a transient source of energy like solar. An experimental study was carried out to assess the performance of the most important full-scale multi-effect desalination plant driven by solar energy, located at the Plataforma Solar de Almería (PSA). The MED plant is used as a reference in many studies on multi-effect distillation. The system consists of a 14-effect MED plant coupled with a double-effect absorption heat pump. The thermal energy required to run the desalination system is supplied by hot water generated by 60 static flat-plate solar collectors with a total aperture area of 606 m². In order to compensate for solar energy variation, a thermal storage system with two interconnected tanks and an overall volume of 40 m³ is coupled to the MED unit. The multi-effect distillation unit is built in a forward-feed configuration, and the last effect is connected to a double-effect LiBr-H₂O absorption heat pump. The heat pump requires steam at 180 ºC (10 bar a) that is supplied by a small-aperture parabolic trough solar field with a total aperture area of 230 m². When solar energy is not available, a gas boiler is used as an auxiliary heat source for operating the heat pump and the MED plant. A set of experiments was carried out to evaluate the impact of the heating water temperature (Th), top brine temperature (TBT), and temperature difference between effects (ΔT) on the performance ratio of the MED plant. The considered ranges of variation of Th, TBT, and ΔT were 60-70°C, 54-63°C, and 1.1-1.6°C, respectively. The performance ratio (PR), defined as the kg of distillate produced for every 2326 kJ of thermal energy supplied to the MED system, was almost independent of the applied variables, with a variation of less than 5% in all cases. The maximum recorded PR was 12.4. The results indicated that the system is robust over the whole range of operating conditions considered. Gratitude is expressed to the PSA for providing access to its installations, for the support of its scientific and technical staff, and for the financial support of the SFERA-III project (Grant Agreement No 823802). Special thanks to the access provider staff members who ensured the access support.
Keywords: multi-effect distillation, performance ratio, robustness, solar energy
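The performance ratio defined above lends itself to a one-line calculation; the sketch below encodes it, with illustrative inputs chosen only to reproduce a PR near the reported maximum of 12.4.

```python
# A minimal sketch of the performance ratio (PR) calculation as defined above:
# kg of distillate produced for every 2326 kJ of thermal energy supplied.
def performance_ratio(distillate_kg: float, heat_supplied_kJ: float) -> float:
    """PR = distillate mass / (thermal energy supplied / 2326 kJ)."""
    return distillate_kg * 2326.0 / heat_supplied_kJ

# Illustrative values only (not measurements from the PSA plant):
print(performance_ratio(distillate_kg=1000.0, heat_supplied_kJ=187_580.0))  # ~12.4
```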
Procedia PDF Downloads 188
315 Cultural Innovation in Uruena: A Path Against Depopulation
Authors: S. Sansone-Casaburi
Abstract:
The pandemic that the world is going through is causing important changes in the daily life of all cities, which can translate into opportunities to address pending issues, among others the town-city relationship and sustainability. On the one hand, the city continues to be the center of attention, and the countryside is assumed to be the supplier of food. However, the temporary closure of cities highlighted the importance of the rural environment, and many people are reassessing this context as an alternative for life. Furthermore, the countryside is not simply the home and the center of activity of the people who inhabit it; rather, it belongs to all citizens, both rural and urban. On the other hand, the pandemic is an opportunity to meet sustainable development goals. Sustainable development is understood as the capital to be transferred to future generations, made up of three types of wealth: natural capital (the environment), human capital (people, relationships, culture), and artificial or built capital, made up of buildings and infrastructure, or of cities and towns. The 'new normal' can mean going back to the countryside, but not to a merely agricultural place; rather, to a sustainable, affordable, and healthy place which, with the appropriate infrastructure, allows working from a distance, a new post-COVID-19 modality. The contribution of this research is towards the recovery of traditional villages, from the perspective of populations that have managed to maintain their vitality with innovative solutions. It is assumed that innovation is a path for the recovery of traditional villages, so we ask: what conditions are necessary for innovation to be successful and sustainable? In the research, several variables were found, among which is culture, so the objective of this article is to understand Uruena, a town in the province of Valladolid which, with only 182 inhabitants, houses five museums and twelve bookstores that make up the first Villa del Libro in Spain. The methodology used is mixed, inductive and deductive, and the results were specified in determining the formula of villages innovative in culture: PIc = Pt + C [E(Aec) + S(pp) + A(T + s + t + enc)], where villages innovative in culture (PIc) are the result of traditional villages (Pt) that, from a cultural innovation (C), integrate, in the economic sphere, economic and cultural activities E(Aec); in the social sphere, the public and private actors S(pp); and in the environmental sphere (A), territory (T), services (s), technology (t), and natural and built spaces (enc). The results of this analysis will focus on determining what makes the structure of innovative villages sustainable and understanding what variables make up that structure, to verify whether they can be applied in other contexts and revitalize abandoned places, providing a solution for people who migrate to this context. That is, to learn from what has been done in order to replicate it in similar cases.
Keywords: culture as innovation, depopulation, sustainability, traditional villages
Procedia PDF Downloads 88
314 Fuel Cells Not Only for Cars: Technological Development in Railways
Authors: Marita Pigłowska, Beata Kurc, Paweł Daszkiewicz
Abstract:
Railway vehicles are divided into two groups: traction (powered) vehicles and wagons. The traction vehicles include locomotives (line and shunting), railcars (sometimes referred to as railbuses), and multiple units (electric and diesel) consisting of several or a dozen carriages. In vehicles with diesel traction, fuel energy (petrol, diesel, or compressed gas) is converted into mechanical energy directly in the internal combustion engine or via electricity. In the latter case, the combustion engine generator produces electricity that is then used to drive the vehicle (diesel-electric drive or electric transmission). In Poland, such a solution dominates in both heavy line and shunting locomotives. The classic diesel drive is found in the lightest shunting locomotives, railcars, and passenger diesel multiple units. Vehicles with electric traction do not have their own source of energy; they use pantographs to obtain electricity from the traction network. To determine the competitiveness of the hydrogen propulsion system, it is essential to understand how it works. The basic elements of a railway vehicle drive system that uses hydrogen as a source of traction force are fuel cells, batteries, fuel tanks, traction motors, and main and auxiliary converters. The compressed hydrogen is stored in tanks usually located on the roof of the vehicle. This resource is replenished using specialized infrastructure while the vehicle is stationary. Hydrogen is supplied to the fuel cell, where it oxidizes. The products of this chemical reaction are electricity and water (in two forms: liquid and water vapor). Electricity is stored in batteries (so far, lithium-ion batteries are used). Electricity stored in this way is used to drive the traction motors and supply onboard equipment. The current generated by the fuel cell passes through the main converter, whose task is to adjust it to the values required by the consumers, i.e., the batteries and the traction motor. This work will attempt to construct a fuel cell with unique electrodes. This research is part of a trend that connects industry with science. The first goal will be to obtain hydrogen on a large scale in tube furnaces, to thoroughly analyze the obtained structures (IR), and to apply the method in fuel cells. The second goal is to create a low-energy storage and distribution station for hydrogen and electric vehicles. The scope of the research includes obtaining a carbon variety and oxide systems on a large scale using a tubular furnace and then supplying vehicles. Acknowledgments: This work is supported by the Polish Ministry of Science and Education, project "The best of the best! 4.0", number 0911/MNSW/4968 – M.P. and grant 0911/SBAD/2102 – B.K.
Keywords: railway, hydrogen, fuel cells, hybrid vehicles
Procedia PDF Downloads 189
313 Missed Opportunities for Immunization of Under-Five Children in Calabar South County, Cross River State, Nigeria: The Way Forward
Authors: Celestine Odigwe, Epoke Lincoln, Rhoda-Dara Ephraim
Abstract:
Background: Immunization against the childhood killer diseases is the cardinal strategy for the prevention of these diseases in under-five children all over the world. These diseases include tuberculosis, measles, polio, tetanus, diphtheria, pertussis, yellow fever, hepatitis B, and Haemophilus influenzae type B. Worldwide, 6.9 million children die before their fifth birthday; 80% of deaths in children under 5 years occur in 25 countries, most in Africa and Asia, and 2 million children could be saved each year with routine immunization. Therefore, failure to achieve total immunization coverage puts many children at risk. Aim: The aim of the study was to ascertain the prevalence of missed opportunities, to investigate the various reasons why many under-five children in a suburb of Calabar Municipal County fail to get the required immunizations as and when due, and to identify the possible consequences, so that efforts can be redirected towards solving the problems identified. Methods: The study was a community-based cross-sectional study. The respondents were the mothers/guardians of the sampled children, who were all aged 0-59 months. To be eligible for recruitment into the study, the parent or guardian was required to give informed consent and reside within Calabar South County with his/her children aged 0-59 months. We calculated our sample size using the Leslie Kish formula and used a two-stage sampling method, first balloting for the wards to be involved and then selecting four of the most populated areas in the chosen wards. Data were collected through an interviewer-administered structured questionnaire (Appendix I), then entered and analyzed using the Statistical Package for the Social Sciences (SPSS) Version 20. Percentages were calculated and presented using charts and tables. Results: The number of children sampled was 159. We found that 150 were fully immunized and 9 were not; the prevalence of missed opportunity from the study was 32%. The reasons for missed opportunities varied, ranging from false contraindications and logistical problems, including very poor access roads to health facilities, to poor organization of health centers and negative health worker attitudes. Some of the consequences of these missed opportunities were increased susceptibility to vaccine-preventable diseases, resurgence of the above diseases, and increased morbidity and mortality of children aged less than 5 years. Conclusion: We found that ignorance on the part of both parents/guardians and health care staff, together with infrastructural inadequacies in the county, such as poor roads and poor electric power supply for the storage of vaccines, was hugely responsible for most missed opportunities for immunization. The details of these, suggestions for improvement, and the way forward are discussed.
Keywords: missed opportunity, immunization, under five, Calabar South
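The Leslie Kish sample-size formula mentioned in the methods is the standard expression n = z²p(1−p)/d² for estimating a proportion; the sketch below is a generic illustration with assumed inputs, not the study's actual parameters.

```python
# A minimal sketch of the Leslie Kish sample-size formula mentioned above:
# n = z^2 * p * (1 - p) / d^2, for estimating a proportion p with margin d.
# The inputs below are illustrative assumptions, not the study's parameters.
import math

def kish_sample_size(p: float, d: float = 0.05, z: float = 1.96) -> int:
    return math.ceil(z**2 * p * (1 - p) / d**2)

# e.g., assuming an expected prevalence of 50% and a 5% margin of error:
print(kish_sample_size(p=0.5))  # 385, before any finite-population correction
```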
Procedia PDF Downloads 324
312 Using Chatbots to Create Situational Content for Coursework
Authors: B. Bricklin Zeff
Abstract:
This research explores the development and application of a specialized chatbot tailored for a nursing English course, with a primary objective of augmenting student engagement through situational content and responsiveness to key expressions and vocabulary. Introducing the chatbot, elucidating its purpose, and outlining its functionality are crucial initial steps in the research study, as they provide a comprehensive foundation for understanding the design and objectives of the specialized chatbot developed for the nursing English course. These elements establish the context for subsequent evaluations and analyses, enabling a nuanced exploration of the chatbot's impact on student engagement and language learning within the nursing education domain. The subsequent exploration of the intricate language model development process underscores the fusion of scientific methodologies and artistic considerations in this application of artificial intelligence (AI). Practical principles extending beyond AI and education, tailored for educators and curriculum developers in nursing, are considered. Some insights into leveraging technology for enhanced language learning in specialized fields are addressed, with potential applications of similar chatbots in other professional English courses. The overarching vision is to illuminate how AI can transform language learning, rendering it more interactive and contextually relevant. The presented chatbot is a tangible example, equipping educators with a practical tool to enhance their teaching practices. Methodologies employed in this research encompass surveys and discussions to gather feedback on the chatbot's usability, effectiveness, and potential improvements. The chatbot system was integrated into a nursing English course, facilitating the collection of valuable feedback from participants. Significant findings from the study underscore the chatbot's effectiveness in encouraging more verbal practice of target expressions and vocabulary necessary for performance in role-play assessment strategies. This outcome emphasizes the practical implications of integrating AI into language education in specialized fields. This research holds significance for educators and curriculum developers in the nursing field, offering insights into integrating technology for enhanced English language learning. The study's major findings contribute valuable perspectives on the practical impact of the chatbot on student interaction and verbal practice. Ultimately, the research sheds light on the transformative potential of AI in making language learning more interactive and contextually relevant, particularly within specialized domains like nursing.
Keywords: chatbot, nursing, pragmatics, role-play, AI
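As a minimal sketch of the kind of key-expression responsiveness described above (the actual chatbot's language model is far richer), a rule-based matcher might look as follows; the target phrases and replies are invented examples, not the course's actual content.

```python
# A minimal sketch of a keyword-triggered chatbot responding to target
# expressions from a nursing English course. The phrases and replies below
# are invented examples, not the actual course content.
TARGET_RESPONSES = {
    "how are you feeling": "Good use of the target phrase! Now ask the patient "
                           "to rate the pain on a scale of 1 to 10.",
    "take your medication": "Well done. Try adding a time, e.g. 'after meals'.",
    "blood pressure": "Nice. Practice: 'I'm going to take your blood pressure now.'",
}

def reply(user_utterance: str) -> str:
    text = user_utterance.lower()
    for key_phrase, response in TARGET_RESPONSES.items():
        if key_phrase in text:        # respond when a target expression appears
            return response
    return "Try using one of the target expressions from this week's role-play."

print(reply("Hello Mrs. Lee, how are you feeling today?"))
```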
Procedia PDF Downloads 65
311 Acoustic Energy Harvesting Using Polyvinylidene Fluoride (PVDF) and PVDF-ZnO Piezoelectric Polymer
Authors: S. M. Giripunje, Mohit Kumar
Abstract:
Acoustic energy, which exists in our everyday life and environment, has been overlooked as a green energy that can be extracted, generated, and consumed without any significant negative impact on the environment. The harvested energy can be used to enable new technology like wireless sensor networks. Technological developments in the realization of truly autonomous MEMS devices and energy storage systems have made acoustic energy harvesting (AEH) an increasingly viable technology. AEH is the process of converting high-amplitude and continuous acoustic waves from the environment into electrical energy by using an acoustic transducer or resonator. AEH is not as popular as other types of energy harvesting methods, since sound waves have lower energy density and such energy can only be harvested in very noisy environments. However, the energy requirements of certain applications are also correspondingly low, and there is in any case a need to monitor noise in order to reduce noise pollution. So the ability to reclaim acoustic energy and store it in a usable electrical form enables a novel means of supplying power to relatively low-power devices. A quarter-wavelength straight-tube acoustic resonator is introduced as an acoustic energy harvester, with piezoelectric cantilever beams of polyvinylidene fluoride (PVDF) and of PVDF doped with ZnO nanoparticles placed inside the resonator. When the resonator is excited by an incident acoustic wave at its first acoustic eigenfrequency, an amplified acoustic resonant standing wave develops inside the resonator. The acoustic pressure gradient of the amplified standing wave then drives the vibration motion of the PVDF piezoelectric beams, generating electricity due to the direct piezoelectric effect. In order to maximize the amount of harvested energy, each PVDF and PVDF-ZnO piezoelectric beam has been designed to have the same structural eigenfrequency as the acoustic eigenfrequency of the resonator. With a single PVDF beam placed inside the resonator, the harvested voltage and power reach their maximum near the open inlet of the resonator tube, where the largest acoustic pressure gradient vibrates the PVDF beam. As the beam is moved toward the closed end of the resonator tube, the voltage and power gradually decrease due to the decreased acoustic pressure gradient. Multiple PVDF and PVDF-ZnO piezoelectric beams have been placed inside the resonator in two different configurations: aligned and zigzag. With the zigzag configuration, which offers a more open path for acoustic air particle motion, significant increases in the harvested voltage and power have been observed. Due to the interruption of acoustic air particle motion caused by the beams, it was found that placing PVDF beams near the closed tube end is not beneficial. The total output voltage of the piezoelectric beams increases linearly as the incident sound pressure increases. This study therefore reveals that the proposed technique for harvesting sound wave energy has great potential for converting free energy into useful energy.
Keywords: acoustic energy, acoustic resonator, energy harvester, eigenfrequency, polyvinylidene fluoride (PVDF)
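The design rule described above, matching the beams' structural eigenfrequency to the resonator's first acoustic eigenfrequency, rests on the standard quarter-wavelength relation f1 = c/(4L) for a tube closed at one end; the sketch below illustrates it with an assumed tube length.

```python
# A minimal sketch of the quarter-wavelength resonance condition used above:
# the first acoustic eigenfrequency of a tube open at one end and closed at
# the other is f1 = c / (4 * L). The tube length is an illustrative assumption.
def quarter_wave_eigenfrequency(length_m: float, speed_of_sound: float = 343.0) -> float:
    return speed_of_sound / (4.0 * length_m)

# e.g., a hypothetical 0.5 m tube resonates near 171.5 Hz; the PVDF beams
# would then be designed so their structural eigenfrequency matches this value.
print(quarter_wave_eigenfrequency(0.5))
```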
Procedia PDF Downloads 385
310 Study of the Impact of Quality Management System on Chinese Baby Dairy Product Industries
Authors: Qingxin Chen, Liben Jiang, Andrew Smith, Karim Hadjri
Abstract:
Since 2007, the Chinese food industry has suffered serious food contamination incidents in the baby dairy sector, especially milk powder contamination. One milk powder product was found to contain melamine, and a significant number (294,000) of babies were affected by kidney stones. Due to growing concerns among consumers about food safety and protection, and high pressure from central government, companies must take radical action to ensure food quality through the use of an appropriate quality management system. Previously, though researchers have investigated the health and safety aspects of food industries and products, quality issues concerning food products in China have been largely overlooked. Issues associated with baby dairy products and their quality have not been discussed in depth. This paper investigates the impact of quality management systems on the Chinese baby dairy product industry. A literature review was carried out to analyse the use of quality management systems within the Chinese milk powder market. Moreover, quality concepts, relevant standards, laws, regulations, and special issues (such as melamine and Aflatoxin M1 contamination) have been analysed in detail. A qualitative research approach is employed, whereby preliminary analysis was conducted by interview, and data analysis was based on interview responses from four selected Chinese baby dairy product companies. Through the literature review and analysis of the findings, it has been revealed that many theories, models, conceptualisations, and systems for quality management have been designed by practitioners. These standards and procedures should be followed in order to provide quality products to consumers, but implementation is lacking in the Chinese baby dairy industry. Quality management systems have been applied by the selected companies, but their implementation still needs improvement. For instance, the companies have to take measures to improve their processes and procedures in line with relevant standards. The government needs to intervene more and take a greater supervisory role in the production process. In general, this research presents implications for regulatory bodies, the Chinese government, and dairy food companies. Food safety laws are in place in China, but they have not been widely followed by companies. Regulatory bodies must take a greater role in ensuring compliance with laws and regulations. The Chinese government must also play a special role in urging companies to implement relevant quality control processes. The baby dairy companies not only have to accept interventions from the regulatory bodies and government, but also need to ensure that production, storage, distribution, and other processes follow the relevant rules and standards.
Keywords: baby dairy product, food quality, milk powder contamination, quality management system
Procedia PDF Downloads 473
309 Thermal and Visual Comfort Assessment in Office Buildings in Relation to Space Depth
Authors: Elham Soltani Dehnavi
Abstract:
In today's compact cities, bringing daylighting and fresh air to buildings is a significant challenge, but it also presents opportunities to reduce energy consumption by reducing the need for artificial lighting and mechanical systems. Simple adjustments to building form can contribute to efficiency. This paper examines how the relationship between the width and depth of rooms in office buildings affects visual and thermal comfort, and consequently energy savings. Based on these evaluations, we can determine the best location for sedentary areas in a room. We can also propose improvements to occupant experience and minimize the difference between the predicted and measured performance of buildings by changing other design parameters, such as natural ventilation strategies, glazing properties, and shading. This study investigates spatial daylighting and thermal comfort conditions for a range of room configurations using computer simulations, then suggests the best depth for optimizing both daylighting and thermal comfort, and consequently energy performance, for each room type. The Window-to-Wall Ratio (WWR) is 40%, with a 0.8 m window sill and a 0.4 m window head. Other parameters are fixed according to building codes and standards, and the simulations are done for Seattle, USA. The simulation results are presented as evaluation grids using thresholds for different metrics: Daylight Autonomy (DA), spatial Daylight Autonomy (sDA), Annual Sunlight Exposure (ASE), and Daylight Glare Probability (DGP) for visual comfort, and Predicted Mean Vote (PMV), Predicted Percentage of Dissatisfied (PPD), occupied Thermal Comfort Percentage (occTCP), over-heated percent, under-heated percent, and Standard Effective Temperature (SET) for thermal comfort, all extracted from Grasshopper scripts. The simulation tools are Grasshopper plugins such as Ladybug, Honeybee, and EnergyPlus. According to the results, some metrics do not change much along the room depth and some change significantly, so we can overlap these grids in order to determine the comfort zone. The overlapped grids contain 8 metrics, and the pixels that meet all 8 metrics' thresholds define the comfort zone. With these overlapped maps, we can determine the comfort zones inside rooms and locate sedentary areas there. Other parts of the room can be used for tasks that are not performed permanently or that need lower or higher amounts of daylight and for which thermal comfort is less critical to user experience. The results can be compiled in a table to be used as a guideline by designers in the early stages of the design process.
Keywords: occupant experience, office buildings, space depth, thermal comfort, visual comfort
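The grid-overlap step described above reduces to a logical AND across per-metric threshold masks; the sketch below illustrates it with three of the eight metrics on synthetic grids (the arrays and thresholds are placeholders, not the study's simulation output).

```python
# A minimal sketch of the grid-overlap step described above: the comfort zone
# is the set of grid cells whose values satisfy all metric thresholds at once.
# The arrays and thresholds are illustrative, not the study's simulation output.
import numpy as np

rng = np.random.default_rng(0)
shape = (10, 20)                      # hypothetical analysis grid (rows x columns)
da = rng.uniform(0, 100, shape)       # Daylight Autonomy, % of occupied hours
ase = rng.uniform(0, 500, shape)      # Annual Sunlight Exposure, hours
ppd = rng.uniform(0, 40, shape)       # Predicted Percentage of Dissatisfied, %

# One boolean mask per metric; the study overlaps 8 such masks.
masks = [da >= 50, ase <= 250, ppd <= 10]
comfort_zone = np.logical_and.reduce(masks)
print(f"{comfort_zone.mean():.0%} of cells fall in the comfort zone")
```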
Procedia PDF Downloads 183
308 Evaluation of Batch Splitting in the Context of Load Scattering
Authors: S. Wesebaum, S. Willeke
Abstract:
Production companies are faced with an increasingly turbulent business environment, which demands very high flexibility in production volumes and delivery dates. If decoupling by storage stages is not possible (e.g., at a contract manufacturing company) or is undesirable from a logistical point of view, load scattering affects the production processes. 'Load' characterizes the timing and quantity of production orders (e.g., in hours of work content) arriving at workstations in production, which results in specific capacity requirements. Insufficient coordination between load (capacity demand) and capacity supply results in heavy load scattering, which can be described by deviations and uncertainties in the input behavior of a capacity unit. In order to respond to fluctuating loads, companies try to implement consistent and realizable input behavior using the available capacity supply. For example, a uniform and high level of equipment capacity utilization keeps production costs down. In contrast, strong load scattering at workstations leads to performance losses or disproportionately fluctuating WIP, negatively affecting the logistics objectives. Options for reducing load scattering include shifting the start and end dates of orders, batch splitting, outsourcing of operations, or shifting work to other workstations. This leads to an adjustment of load to capacity supply, and thus to a reduction of load scattering. If the adaptation of load to capacity cannot be achieved completely, flexible capacity may have to be used to ensure that the performance of a workstation does not decrease for a given load. Whereas the use of flexible capacities normally raises costs, an adjustment of load to capacity supply reduces load scattering and, in consequence, costs. The literature mostly offers qualitative statements describing load scattering; quantitative evaluation methods that describe load mathematically are rare. In this article, the authors discuss existing approaches for calculating load scattering and their various disadvantages, such as the lack of normalization. These approaches are the basis for the development of our mathematical quantification approach for describing load scattering, which compensates for the disadvantages of the current approaches. After presenting our mathematical quantification approach, the method of batch splitting is described and then explicitly analyzed in the context of the logistic curve theory by Nyhuis, using the stretch factor α1, in order to evaluate the impact of batch splitting on load scattering and on logistic curves. The conclusion of this article shows how the methods and approaches presented can help companies in a turbulent environment to quantify the occurring work load scattering accurately and to apply an efficient method for adjusting work load to capacity supply. In this way, the achievement of the logistical objectives is increased without causing additional costs.
Keywords: batch splitting, production logistics, production planning and control, quantification, load scattering
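By way of illustration only (the authors' own quantification approach is developed in the article itself), one simple way to put a number on load scattering is the coefficient of variation of the work content arriving per period, as sketched below with hypothetical values.

```python
# A generic illustration of quantifying load scattering -- NOT the authors'
# quantification approach -- using the coefficient of variation of the work
# content arriving at a workstation per period. Input values are hypothetical.
import statistics

load_hours = [38, 52, 31, 60, 45, 28, 55]   # work content per period, in hours

cv = statistics.stdev(load_hours) / statistics.mean(load_hours)
print(f"Coefficient of variation of load: {cv:.2f}")

# Batch splitting spreads large orders over several periods, which lowers the
# period-to-period deviation of incoming load and hence such a scattering measure.
```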
Procedia PDF Downloads 399
307 Modeling Diel Trends of Dissolved Oxygen for Estimating the Metabolism in Pristine Streams in the Brazilian Cerrado
Authors: Wesley A. Saltarelli, Nicolas R. Finkler, Adriana C. P. Miwa, Maria C. Calijuri, Davi G. F. Cunha
Abstract:
Stream metabolism is an indicator of ecosystem disturbance due to the influences of the catchment on the structure of water bodies. The study of respiration and photosynthesis allows the estimation of energy fluxes through food webs and the analysis of autotrophic and heterotrophic processes. We aimed at evaluating the metabolism of streams located in the Brazilian savannah, the Cerrado (Sao Carlos, SP), by determining and modeling the daily changes of dissolved oxygen (DO) in the water during one year. Three water bodies with minimal anthropogenic interference in their surroundings were selected: Espraiado (ES), Broa (BR) and Canchim (CA). Every two months, water temperature, pH and conductivity are measured with a multiparameter probe. Nitrogen and phosphorus forms are determined according to standard methods. Also, canopy cover percentages are estimated in situ with a spherical densitometer. Stream flows are quantified through the conservative tracer (NaCl) method. For the metabolism study, DO (PME-MiniDOT) and light (Odyssey Photosynthetic Active Radiation) sensors log data every ten minutes for at least three consecutive days. The reaeration coefficient (k2) is estimated through the tracer gas (SF₆) method. Finally, we model the variations in DO concentrations and calculate the rates of gross and net primary production (GPP and NPP) and respiration (R) based on the one-station method described in the literature. Three sampling campaigns were carried out in October and December 2015 and February 2016 (the next will be in April, June and August 2016). The results from the first two periods are already available. The mean water temperatures in the streams were 20.0 ± 0.8 °C (Oct) and 20.7 ± 0.5 °C (Dec). In general, electrical conductivity values were low (ES: 20.5 ± 3.5 µS/cm; BR: 5.5 ± 0.7 µS/cm; CA: 33 ± 1.4 µS/cm). The mean pH values were 5.0 (BR), 5.7 (ES) and 6.4 (CA). The mean concentrations of total phosphorus were 8.0 µg/L (BR), 66.6 µg/L (ES) and 51.5 µg/L (CA), whereas soluble reactive phosphorus concentrations were always below 21.0 µg/L. The BR stream had the lowest concentration of total nitrogen (0.55 mg/L) as compared to CA (0.77 mg/L) and ES (1.57 mg/L). The average discharges were 8.8 ± 6 L/s (ES), 11.4 ± 3 L/s (BR) and 2.4 ± 0.5 L/s (CA). The average percentages of canopy cover were 72% (ES), 75% (BR) and 79% (CA). Significant daily changes were observed in the DO concentrations, reflecting predominantly heterotrophic conditions (respiration exceeded gross primary production, with negative net primary production). The GPP varied from 0-0.4 g/m²·d (in Oct and Dec), and R varied from 0.9-22.7 g/m²·d (Oct) and from 0.9-7 g/m²·d (Dec). The predominance of heterotrophic conditions suggests increased vulnerability of the ecosystems to artificial inputs of organic matter that would demand oxygen. The investigation of metabolism in pristine streams can help define the natural reference conditions of trophic state.
Keywords: low-order streams, metabolism, net primary production, trophic state
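As a sketch of the one-station modeling idea, the diel DO balance can be written dO/dt = GPP(t) − R + k2(Osat − O) and integrated over a day; all parameter values below are illustrative assumptions, not the study's estimates.

```python
# A minimal sketch of a one-station diel DO model of the kind used above:
# dO/dt = GPP(t) - R + k2 * (O_sat - O), integrated with a forward Euler step.
# All parameter values are illustrative assumptions, not the study's estimates.
import math

def simulate_do(o0=7.0, o_sat=8.0, gpp_daily=0.3, r_daily=2.0, k2_daily=20.0,
                dt_days=10 / (60 * 24)):  # 10-minute logging interval
    o, series = o0, []
    for i in range(int(1 / dt_days)):
        t = i * dt_days                                       # time of day, in days
        light = max(0.0, math.sin(2 * math.pi * (t - 0.25)))  # daylight ~06h-18h
        gpp = gpp_daily * light * math.pi    # scaled so the daily integral = gpp_daily
        dodt = gpp - r_daily + k2_daily * (o_sat - o)
        o += dodt * dt_days
        series.append(o)
    return series

do = simulate_do()
print(f"min DO = {min(do):.2f} mg/L, max DO = {max(do):.2f} mg/L")
```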
Procedia PDF Downloads 258
306 Utilizing Temporal and Frequency Features in Fault Detection of Electric Motor Bearings with Advanced Methods
Authors: Mohammad Arabi
Abstract:
The development of advanced technologies in the field of signal processing and vibration analysis has enabled more accurate analysis and fault detection in electrical systems. This research investigates the application of temporal and frequency features in detecting faults in electric motor bearings, aiming to enhance fault detection accuracy and prevent unexpected failures. The use of methods such as deep learning algorithms and neural networks in this process can yield better results. The main objective of this research is to evaluate the efficiency and accuracy of methods based on temporal and frequency features in identifying faults in electric motor bearings to prevent sudden breakdowns and operational issues. Additionally, the feasibility of using techniques such as machine learning and optimization algorithms to improve the fault detection process is also considered. This research employed an experimental method and random sampling. Vibration signals were collected from electric motors under normal and faulty conditions. After standardizing the data, temporal and frequency features were extracted. These features were then analyzed using statistical methods such as analysis of variance (ANOVA) and t-tests, as well as machine learning algorithms like artificial neural networks and support vector machines (SVM). The results showed that using temporal and frequency features significantly improves the accuracy of fault detection in electric motor bearings. ANOVA indicated significant differences between normal and faulty signals. Additionally, t-tests confirmed statistically significant differences between the features extracted from normal and faulty signals. Machine learning algorithms such as neural networks and SVM also significantly increased detection accuracy, demonstrating high effectiveness in timely and accurate fault detection. This study demonstrates that using temporal and frequency features combined with machine learning algorithms can serve as an effective tool for detecting faults in electric motor bearings. This approach not only enhances fault detection accuracy but also simplifies and streamlines the detection process. However, challenges such as data standardization and the cost of implementing advanced monitoring systems must also be considered. Utilizing temporal and frequency features in fault detection of electric motor bearings, along with advanced machine learning methods, offers an effective solution for preventing failures and ensuring the operational health of electric motors. Given the promising results of this research, it is recommended that this technology be more widely adopted in industrial maintenance processes.
Keywords: electric motor, fault detection, frequency features, temporal features
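A minimal sketch of such a pipeline, temporal and frequency features feeding an SVM, is shown below; the vibration signals are synthetic stand-ins rather than measured bearing data.

```python
# A minimal sketch of the temporal/frequency feature extraction and SVM
# classification pipeline described above. The vibration signals are synthetic
# stand-ins; a real study would use measured bearing data.
import numpy as np
from scipy.stats import kurtosis
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
fs, n = 10_000, 4096  # sampling rate (Hz) and samples per window

def make_signal(faulty: bool) -> np.ndarray:
    t = np.arange(n) / fs
    base = np.sin(2 * np.pi * 50 * t) + 0.3 * rng.standard_normal(n)
    if faulty:  # add an impulsive component typical of a bearing defect
        base += (rng.random(n) < 0.01) * rng.standard_normal(n) * 4.0
    return base

def features(x: np.ndarray) -> list:
    spectrum = np.abs(np.fft.rfft(x))
    return [x.std(),               # temporal: RMS-like spread
            kurtosis(x),           # temporal: impulsiveness
            spectrum[:200].sum(),  # frequency: low-band energy
            spectrum[200:].sum()]  # frequency: high-band energy

X = np.array([features(make_signal(faulty=i % 2 == 1)) for i in range(200)])
y = np.array([i % 2 for i in range(200)])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = SVC().fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```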
Procedia PDF Downloads 46
305 Reactors with Effective Mixing as a Solution for Micro-Biogas Plants
Authors: M. Zielinski, M. Debowski, P. Rusanowska, A. Glowacka-Gil, M. Zielinska, A. Cydzik-Kwiatkowska, J. Kazimierowicz
Abstract:
Technologies for micro-biogas plants with heating and mixing systems are presented as part of the Research Coordination for a Low-Cost Biomethane Production at Small and Medium Scale Applications (Record Biomap) project. The main objective of the Record Biomap project is to build a network of operators and scientific institutions interested in cooperation and the development of promising technologies in the sector of small and medium-sized biogas plants. The activities carried out in the project will bridge the gap between research and market and reduce the time to implementation of new, efficient technological and technical solutions. The reactor with simultaneous mixing and heating is a concrete tank with a rectangular cross-section. In this reactor, heating is integrated with the mixing of substrate and anaerobic sludge. It is a solution dedicated to substrates with high solids content, which cannot be introduced into the reactor with pumps, even positive displacement pumps. Substrates are poured into the reactor and then mixed with anaerobic sludge by a screw pump. The pumped sludge, flowing through the screw pump, is simultaneously heated by a heat exchanger. The level of the fermentation sludge inside the reactor chamber is above the bottom edge of the cover. The cover of the reactor carries the screw pump drive; inside the reactor, an electric motor is installed that drives the screw pump. The heated sludge circulates in the digester. The post-fermented sludge is collected through a drain well, whose inlet is below the level of the sludge in the digester. The biogas is discharged from the reactor through a biogas intake valve located on the cover. This technology is very useful for the fermentation of lignocellulosic biomass and substrates with a high dry mass content (organic wastes). The other technology is a reactor for a micro-biogas plant with a pressure mixing system. The reactor takes the form of a plastic or concrete tank with a circular cross-section. Effective mixing of the sludge is ensured by the tank bottom, which is profiled at 90°. Substrates for fermentation are supplied through an inlet well equipped with a cover that eliminates odour release. The introduction of a new portion of substrates is preceded by pumping digestate to the disposal well. Optionally, the digestate can flow by gravity to a digestate storage tank. The biogas obtained is discharged into the separator. A valve supplies biogas to the blower, which pressurizes the biogas from the fermentation chamber in such a way as to facilitate the introduction of a new portion of substrates. Biogas is discharged from the reactor through a valve that enables biogas removal but prevents suction from outside the reactor.
Keywords: biogas, digestion, heating system, mixing system
Procedia PDF Downloads 154
304 The Effects of Computer Game-Based Pedagogy on Graduate Students' Statistics Performance
Authors: Eva Laryea, Clement Yeboah
Abstract:
A pretest-posttest within-subjects experimental design was employed to examine the effects of a computerized basic statistics learning game on the achievement and statistics-related anxiety of students enrolled in an introductory graduate statistics course. Participants (N = 34) were graduate students in a variety of programs at a state-funded research university in the Southeast United States. We analyzed pretest-posttest differences using paired samples t-tests for achievement and for statistics anxiety. The results of the t-test for knowledge in statistics were statistically significant, indicating significant mean gains in statistical knowledge as a function of the game-based intervention. Likewise, the results of the t-test for statistics-related anxiety were also statistically significant, indicating a decrease in anxiety from pretest to posttest. The implications of the present study are significant for both teachers and students. For teachers, using computer games developed by the researchers can help to create a more dynamic and engaging classroom environment, as well as improve student learning outcomes. For students, playing these educational games can help to develop important skills such as problem solving, critical thinking, and collaboration. Students can develop an interest in the subject matter and spend quality time learning the course as they play the game, without realizing that they are learning material presumed to be difficult. The future directions of the present study are promising, as technology continues to advance and become more widely available. Some potential future developments include the integration of virtual and augmented reality into educational games, the use of machine learning and artificial intelligence to create personalized learning experiences, and the development of new and innovative game-based assessment tools. It is also important to consider the ethical implications of computer game-based pedagogy, such as the potential for games to perpetuate harmful stereotypes and biases. As the field continues to evolve, it will be crucial to address these issues and work towards creating inclusive and equitable learning experiences for all students. This study has the potential to revolutionize the way graduate students learn basic statistics and offers exciting opportunities for future development and research. It is an important area of inquiry for educators, researchers, and policymakers, and will continue to be a dynamic and rapidly evolving field for years to come.
Keywords: pretest-posttest within subjects, experimental design, achievement, statistics-related anxiety
303 New Insulation Material for Solar Thermal Collectors
Authors: Nabila Ihaddadene, Razika Ihaddadene, Abdelwahaab Betka
Abstract:
The 1973 energy crisis (rising oil prices) pushed the world to consider alternative energy resources to the existing conventional energies, which consist predominantly of hydrocarbons. Renewable energies such as solar, wind, and geothermal have received renewed interest, especially as a way to preserve nature in the face of growing global environmental problems. Solar energy, as an available, cheap, and environmentally friendly alternative source, has various applications such as heating, cooling, drying, power generation, etc. In short, there would be no life on Earth without this enormous nuclear reactor called the Sun. Among available solar collector designs, the flat plate collector (FPC) is suited to low-temperature applications (heating water, space heating, etc.) due to its simple design and ease of manufacturing. Flat plate collectors are permanently fixed in position and do not track the sun (non-concentrating collectors). They operate by converting solar radiation into heat and transferring that heat to a working fluid (usually air, water, or water with an antifreeze additive) flowing through them. An FPC generally consists of the following main components: glazing, an absorber plate of high absorptivity, fluid tubes welded to (or integral with) the absorber plate, insulation, and a container or casing for the above-mentioned components. Insulation is of prime importance in thermal applications. There are three main families of insulation: mineral, vegetal, and synthetic organic. The old houses of the inhabitants of North Africa were built of bricks made of a composite material of clay and straw. These homes are characterized by their thermal comfort; i.e., the air inside them is cool in summer and warm in winter. The clay and straw composite therefore acts as a thermal insulator. In this research, the polystyrene used as insulation in the ET200 flat plate solar collector is replaced by this far cheaper natural material, clay and straw. Trials were carried out on a solar energy demonstration system (ET 200). This system contains a solar collector, a water storage tank, a high-power lamp simulating solar energy, and a control and command cabinet. In the experimental device, the polystyrene is placed under the absorber plate and at the edges of the casing containing the components of the solar collector. In this work, we replaced the polystyrene at the edges with the composite material. Using clay and straw as insulation instead of polystyrene increases the temperature difference (T2-T1) between the inlet and the outlet of the absorber by 0.9°C, and thus increases the useful power transmitted to the water in the solar collector. The tank water is heated well when clay and straw are used as insulation, but less so with polystyrene. The clay and straw material also improves the performance of the solar collector by 5.77%. It is therefore recommended to use this cheap, non-polluting material instead of synthetic insulation to improve the performance of the solar collector.Keywords: clay, insulation material, polystyrene, solar collector, straw
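To see how the reported 0.9°C gain in (T2-T1) translates into useful power, the sketch below applies the standard collector relation Q_u = m_dot x c_p x (T2 - T1). Only the 0.9°C figure comes from the abstract; the mass flow rate is an assumed value for a small demonstration rig, since the abstract does not state it.

```python
# Useful power gained from the reported 0.9 degC rise in (T2 - T1).
# Only the 0.9 degC value comes from the abstract; the flow rate is assumed.

CP_WATER = 4186.0  # J/(kg*K), specific heat of water

def useful_power_w(mass_flow_kg_s: float, delta_t_k: float) -> float:
    """Q_u = m_dot * c_p * (T2 - T1) for a flat plate collector."""
    return mass_flow_kg_s * CP_WATER * delta_t_k

m_dot = 0.02  # kg/s, assumed small demonstration-rig flow rate
extra = useful_power_w(m_dot, 0.9)
print(f"Extra useful power from clay-straw insulation: {extra:.0f} W")  # ~75 W
```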
Procedia PDF Downloads 461302 The Potential for Maritime Tourism: An African Perspective
Authors: Lynn C. Jonas
Abstract:
The African continent is rich in coastal history, heritage, and culture, presenting immense potential for the development of maritime tourism. Shipping and its related components are generally associated with the maritime industry, while tourism's link is through the various forms of nautical tourism. Activities may include cruising, yachting, visits to lighthouses, ports, and harbors, and excursions to related sites of cultural, historical, or ecological significance. Hundreds of years of explorers pursuing trade routes between Europe, Africa, and the Far East have left a string of shipwrecks along the continent's coastal areas. These shipwrecks present diving opportunities on artificial reefs and marine heritage to be explored in various ways in the maritime cultural zones. Along the South African coast, for example, six Portuguese shipwrecks highlight the Bartolomeu Dias legacy of exploration, and there are a number of warships in Tanzanian waters. Furthermore, decades under colonial rule have left the continent with an intricate cultural heritage, enmeshed in European language and architecture and interlinked, in many instances, with the hard-fought independence of the littoral states. There is potential for coastal trails to be developed to follow these historical events: at one point in history, France had colonized 35 African states, and subsequently 32 African states were colonized by Britain. Countries such as Cameroon still carry the Francophone-versus-Anglophone legacy of this shift in colonizers. Further to the colonial history of the African continent, there is the uncomfortable heritage of the slave trade. To a certain extent, these coastal slave trade posts are considered attractive to a niche tourism audience; however, there is potential for education and interpretive measures to grow this as a tourism product. Notwithstanding these potential opportunities, there are numerous challenges to consider, such as poor maritime infrastructure and maritime security concerns, including piracy and transnational crimes such as weapons and migrant smuggling and drug and human trafficking. These and related maritime issues contribute to concerns over the porous nature of African ocean gateways, adding to safety concerns for tourists. This theoretical paper will consider these trends and how they may contribute to the growth and development of maritime tourism on the African continent. African perspectives on the growth potential of tourism in coastal and marine spaces are needed, particularly ones that embrace the continent's tumultuous past as part of its heritage. This has the potential to contribute to a sense of ownership of the opportunities.Keywords: coastal trade routes, maritime tourism, shipwrecks, slave trade routes
Procedia PDF Downloads 19301 Enhancement of Hardness Related Properties of Grey Cast Iron Powder Reinforced AA7075 Metal Matrix Composites Through T6 and T8 Heat Treatments
Authors: S. S. Sharma, P. R. Prabhu, K. Jagannath, Achutha Kini U., Gowri Shankar M. C.
Abstract:
In the present global scenario, aluminum alloys are gaining the attention of many innovators as competing structural materials for automotive and space applications. Compared to other candidate alloys, the 7xxx series aluminum alloys in particular have been studied seriously because of benefits such as moderate strength, good deforming characteristics, excellent resistance to chemical attack, and affordable cost. 7075 Al alloys have been used in the transportation industry for the fabrication of several types of automobile parts, such as wheel covers, panels, and structures. It is expected that substituting such aluminum alloys for steels will result in great improvements in energy economy, durability, and recyclability. However, it is necessary to improve the strength and formability levels at low temperatures in aluminum alloys for still better applications. Aluminum-zinc-magnesium alloys, with or without other alloying agents, denoted the 7xxx series, are medium-strength heat-treatable alloys. Cu, Mn, and Si are the other solute elements that contribute to the improvement in mechanical properties achievable by selecting and tailoring a suitable heat treatment process. By subjecting the alloys to suitable treatments such as age hardening, or to cold-deformation-assisted heat treatments known as low-temperature thermomechanical treatments (LTMT), the desired properties can be incorporated. T6 is the age hardening (precipitation hardening) process with an artificial aging cycle, whereas T8 comprises an LTMT treatment aged artificially with X% cold deformation. When cold deformation is applied after solution treatment, hardness-related properties such as wear resistance, yield and ultimate strength, and toughness increase at the expense of ductility. During precipitation hardening, both the hardness and the strength of the samples increase. A decreasing peak hardness with increasing aging temperature is the well-known behavior of age-hardenable alloys. The peak hardness increases further when room-temperature deformation is combined with age hardening, a combination known as thermomechanical treatment. Considering these aspects, it was intended to perform the heat treatments and evaluate hardness, tensile strength, wear resistance, and the distribution pattern of the reinforcement in the matrix. A 2- to 2.5-fold and a 3- to 3.5-fold increase in hardness is reported for the age hardening and LTMT treatments, respectively, compared to the as-cast composite. Better distribution of the reinforcements in the matrix, a nearly two-fold increase in strength levels, and up to a 5-fold increase in wear resistance are also observed in the present study.Keywords: reinforcement, precipitation, thermomechanical, dislocation, strain hardening
Procedia PDF Downloads 311300 High Throughput LC-MS/MS Studies on Sperm Proteome of Malnad Gidda (Bos Indicus) Cattle
Authors: Kerekoppa Puttaiah Bhatta Ramesha, Uday Kannegundla, Praseeda Mol, Lathika Gopalakrishnan, Jagish Kour Reen, Gourav Dey, Manish Kumar, Sakthivel Jeyakumar, Arumugam Kumaresan, Kiran Kumar M., Thottethodi Subrahmanya Keshava Prasad
Abstract:
Spermatozoa are highly specialized, transcriptionally and translationally inactive haploid male gametes. An understanding of the sperm proteome is indispensable for exploring the mechanisms of sperm motility and fertility. Although there are a large number of human sperm proteomic studies, in-depth proteomic information on Bos indicus spermatozoa is not yet well established. Therefore, we profiled the sperm proteome of the indigenous cattle breed Malnad Gidda (Bos indicus) using high-resolution mass spectrometry. In the current study, two semen ejaculates from 3 breeding bulls were collected employing the artificial vagina method. Spermatozoa were isolated using 45% Percoll purification. Protein was extracted using a lysis buffer containing 2% sodium dodecyl sulphate (SDS), and the protein concentration was estimated. Fifty micrograms of protein from each individual were pooled for further downstream processing. The pooled sample was fractionated using SDS-polyacrylamide gel electrophoresis, followed by in-gel digestion. The peptides were subjected to C18 StageTip clean-up and analyzed on an Orbitrap Fusion Tribrid mass spectrometer interfaced with a Proxeon Easy-nano LC II system (Thermo Scientific, Bremen, Germany). We identified a total of 6773 peptides with 28426 peptide spectral matches, belonging to 1081 proteins. Gene ontology analysis was carried out to determine the biological processes, molecular functions, and cellular components associated with the sperm proteins. The biological processes chiefly represented in our data were the oxidation-reduction process (5%), spermatogenesis (2.5%), and spermatid development (1.4%). The highlighted molecular functions were ATP and GTP binding (14%), and the prominent cellular components most observed in our data were the nuclear membrane (1.5%), acrosomal vesicle (1.4%), and motile cilium (1.3%). Seventeen percent of the sperm proteins identified in this study were involved in metabolic pathways. To the best of our knowledge, these data represent the first total sperm proteome from the indigenous cattle breed Malnad Gidda. We believe that our preliminary findings provide a strong base for the future understanding of bovine sperm proteomics.Keywords: Bos indicus, Malnad Gidda, mass spectrometry, spermatozoa
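As a small illustration of the Gene Ontology summary step, the sketch below computes category percentages from a protein-to-GO-term table. The input format and example accessions are hypothetical (and simplified to one term per protein); only the 1081-protein total comes from the abstract.

```python
# Sketch of the GO category summary: given one GO biological-process term
# per protein (hypothetical, simplified input), compute each term's share
# among the 1081 identified proteins.
from collections import Counter

TOTAL_PROTEINS = 1081  # identified in the study

# Hypothetical mapping of protein accessions to GO terms.
annotations = {
    "P001": "oxidation-reduction process",
    "P002": "spermatogenesis",
    "P003": "oxidation-reduction process",
    # ... one entry per annotated protein
}

counts = Counter(annotations.values())
for term, n in counts.most_common():
    print(f"{term}: {100 * n / TOTAL_PROTEINS:.1f}%")
```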
Procedia PDF Downloads 196299 Dual-use UAVs in Armed Conflicts: Opportunities and Risks for Cyber and Electronic Warfare
Authors: Piret Pernik
Abstract:
Based on strategic, operational, and technical analysis of the ongoing armed conflict in Ukraine, this paper examines the opportunities and risks of using small commercial drones (dual-use unmanned aerial vehicles, UAVs) for military purposes. The paper discusses the opportunities and risks in the information domain, encompassing both cyber and electromagnetic interference and attacks, and draws conclusions on the possible strategic impact of the widespread use of dual-use UAVs on battlefield outcomes in modern armed conflicts. The article contributes to filling a gap in the literature by examining cyberattacks and electromagnetic interference on the basis of empirical data. Today, more than one hundred states and non-state actors possess UAVs, ranging from low-cost commodity models (widely dual-use, available, and affordable to anyone) to high-cost combat UAVs (UCAVs) with lethal kinetic strike capabilities, which can be enhanced with Artificial Intelligence (AI) and Machine Learning (ML). Dual-use UAVs have been used by various actors for intelligence, reconnaissance, surveillance, situational awareness, geolocation, and kinetic targeting. They thus function as force multipliers enabling kinetic and electronic warfare attacks, and provide comparative and asymmetric operational and tactical advantages. Some go as far as to argue that automated (or semi-automated) systems can change the character of warfare, while others observe that the use of small drones has not changed the balance of power or battlefield outcomes. UAVs give commanders considerable opportunities; for example, they can be operated without GPS navigation, which makes them less vulnerable to, and less dependent on, satellite communications. They can be, and have been, used to conduct cyberattacks, electromagnetic interference, and kinetic attacks; however, they are themselves highly vulnerable to those same attacks. So far, strategic studies, the literature, and expert commentary have overlooked the cybersecurity and electronic interference dimensions of the use of dual-use UAVs, and studies that link technical analysis of opportunities and risks with strategic battlefield outcomes are missing. Dual-use commercial UAV proliferation in armed and hybrid conflicts is expected to continue and accelerate in the future. Therefore, it is important to understand the specific opportunities and risks related to the crowdsourced use of dual-use UAVs, which can have kinetic effects. Technical countermeasures to protect UAVs differ depending on the type of UAV (small, midsize, large, stealth combat), and this paper offers a unique analysis of small UAVs, covering both the opportunities and the risks for commanders and other actors in armed conflict.Keywords: dual-use technology, cyber attacks, electromagnetic warfare, case studies of cyberattacks in armed conflicts
Procedia PDF Downloads 102298 Unification of Lactic Acid Bacteria and Aloe Vera for Healthy Gut
Authors: Pavitra Sharma, Anuradha Singh, Nupur Mathur
Abstract:
More than 100 trillion bacteria exist in the digestive system of human beings; such bacteria are referred to as the gut microbiota. The gut microbiota accounts for around 75% of our immune system. The bacteria that comprise the gut microbiota are unique to every individual, and their composition keeps changing with time owing to factors such as the host's age, diet, genes, environment, and external medication. Of these factors, the variable easiest to control is one's diet. By modulating one's diet, one can ensure an optimal composition of the gut microbiota, yielding several health benefits. Prebiotics and probiotics are two compounds that have been considered viable options for modulating the host's diet. Prebiotics are essentially plant products that support the growth of good bacteria in the host's gut; examples include garden asparagus, aloe vera, etc. Probiotics are living microorganisms that exist in our intestines and play an integral role in promoting digestive health and supporting our immune system in general; examples include yogurt, kimchi, kombucha, etc. In the context of modulating the host's diet, the key attribute of prebiotics is that they support the growth of probiotics. By developing the right combination of prebiotics and probiotics, food products or supplements can be created to enhance the host's health. An effective combination of prebiotics and probiotics that yields health benefits to the host is referred to as a synbiotic. Synbiotics comprise an optimal proportion of prebiotics and probiotics, and their application benefits the host's health more than prebiotics and probiotics used in isolation. When applied to food supplements, synbiotics preserve the beneficial probiotic bacteria during the storage period and during the bacteria's passage through the intestinal tract. When applied to the gastrointestinal tract, the composition of the synbiotic assumes paramount importance. The reason is that, for a synbiotic to be effective in the gastrointestinal tract, the chosen probiotic must be able to survive the stomach's acidic environment and show tolerance towards bile and pancreatic secretions. Further, not every prebiotic stimulates the growth of a particular probiotic. The prebiotic chosen should be one that not only maintains balance in the host's digestive system but also provides the required nutrition to the probiotics. Hence, in each application of synbiotics, the prebiotic-probiotic combination needs to be carefully selected. Once the combination is finalized, the exact proportion of prebiotics and probiotics to be used needs to be considered: only that amount of prebiotic should be used which activates the metabolism of the required number of probiotics. It was observed that while probiotics are active in both the small and large intestine, the effect of prebiotics is observed primarily in the large intestine. Hence, in the host's small intestine, synbiotics are likely to have the maximum efficacy: there, prebiotics not only assist in the growth of probiotics but also enable probiotics to exhibit a higher tolerance to pH levels, oxygenation, and intestinal temperature.Keywords: microbiota, probiotics, prebiotics, synbiotics
Procedia PDF Downloads 135297 Model-Driven and Data-Driven Approaches for Crop Yield Prediction: Analysis and Comparison
Authors: Xiangtuo Chen, Paul-Henry Cournéde
Abstract:
Crop yield prediction is a paramount issue in agriculture. The main idea of this paper is to find an efficient way to predict the yield of corn based on meteorological records. The prediction models used in this paper can be classified into model-driven approaches and data-driven approaches, according to their modeling methodologies. The model-driven approaches are based on crop mechanistic modeling: they describe crop growth in interaction with the environment as dynamical systems. However, the calibration of such dynamical systems is difficult, because it turns out to be a multidimensional non-convex optimization problem. An original contribution of this paper is to propose a statistical methodology, Multi-Scenarios Parameters Estimation (MSPE), for the parametrization of potentially complex mechanistic models from a new type of dataset (climatic data and final yield in many situations). It is tested with CORNFLO, a crop model for maize growth. On the other hand, the data-driven approach to yield prediction is free of the complex biophysical process but imposes strict requirements on the dataset. A second contribution of the paper is the comparison of the model-driven method with classical data-driven methods. For this purpose, we consider two classes of regression methods: methods derived from linear regression (Ridge and Lasso regression, Principal Components Regression, and Partial Least Squares Regression) and machine learning methods (Random Forest, k-Nearest Neighbor, Artificial Neural Network, and SVM regression). The dataset consists of 720 records of corn yield at county scale provided by the United States Department of Agriculture (USDA) and the associated climatic data. A 5-fold cross-validation process and two accuracy metrics, the root mean square error of prediction (RMSEP) and the mean absolute error of prediction (MAEP), were used to evaluate crop prediction capacity. The results show that among the data-driven approaches, Random Forest is the most robust and generally achieves the best prediction error (MAEP 4.27%); it also outperforms our model-driven approach (MAEP 6.11%). However, the method of calibrating the mechanistic model from easily accessible datasets offers several side perspectives: the mechanistic model can potentially help to underline the stresses suffered by the crop or to identify biological parameters of interest for breeding purposes. For this reason, an interesting perspective is to combine these two types of approaches.Keywords: crop yield prediction, crop model, sensitivity analysis, parameter estimation, particle swarm optimization, random forest
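As a minimal sketch of the data-driven comparison, the snippet below runs 5-fold cross-validation with a mean-absolute-error score for Ridge regression against a Random Forest. Since the USDA county dataset is not reproduced here, synthetic regression data stands in for the 720 records, and all hyperparameters are illustrative assumptions.

```python
# Minimal sketch: 5-fold CV with MAE for Ridge vs Random Forest.
# Synthetic data stands in for the 720 USDA county yield records.
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold, cross_val_score

X, y = make_regression(n_samples=720, n_features=20, noise=10.0,
                       random_state=0)  # stand-in for climate features

cv = KFold(n_splits=5, shuffle=True, random_state=0)
for name, model in [("Ridge", Ridge(alpha=1.0)),
                    ("Random Forest", RandomForestRegressor(
                        n_estimators=300, random_state=0))]:
    mae = -cross_val_score(model, X, y, cv=cv,
                           scoring="neg_mean_absolute_error").mean()
    print(f"{name}: MAE = {mae:.2f}")
```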
Procedia PDF Downloads 231296 Enhancing Large Language Models' Data Analysis Capability with Planning-and-Execution and Code Generation Agents: A Use Case for Southeast Asia Real Estate Market Analytics
Authors: Kien Vu, Jien Min Soh, Mohamed Jahangir Abubacker, Piyawut Pattamanon, Soojin Lee, Suvro Banerjee
Abstract:
Recent advances in Generative Artificial Intelligence (GenAI), in particular Large Language Models (LLMs), have shown promise to disrupt multiple industries at scale. However, LLMs also present unique challenges, notably the so-called "hallucinations": the generation of outputs that are not grounded in the input data, which hinders their adoption in production. A common practice to mitigate the hallucination problem is to utilize a Retrieval Augmented Generation (RAG) system to ground the LLM's responses in ground truth. RAG converts the grounding documents into embeddings, retrieves the relevant parts via vector similarity between the user's query and the documents, and then generates a response based not only on the model's pre-trained knowledge but also on the specific information from the retrieved documents. However, the RAG system is not suitable for tabular data and subsequent data analysis tasks, for reasons such as information loss, data format, and the retrieval mechanism. In this study, we explored a novel methodology that combines planning-and-execution and code generation agents to enhance LLMs' data analysis capabilities. The approach enables LLMs to autonomously dissect a complex analytical task into simpler sub-tasks and requirements, and then convert them into executable segments of code. In the final step, it generates the complete response from the output of the executed code. When deployed as a beta version on DataSense, the property insight tool of PropertyGuru, the approach yielded promising results: it was able to serve market insight and data visualization needs with high accuracy and extensive coverage, abstracting the complexities away from real-estate agents and developers from non-programming backgrounds. In essence, the methodology not only refines the analytical process but also serves as a strategic tool for real estate professionals, aiding market understanding and enhancement without the need for programming skills. The implications extend beyond immediate analytics, paving the way for a new era in the real estate industry characterized by efficiency and advanced data utilization.Keywords: large language model, reasoning, planning and execution, code generation, natural language processing, prompt engineering, data analysis, real estate, data sense, PropertyGuru
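The skeleton below sketches the plan-then-generate-then-execute loop the abstract describes. The `call_llm` function, the prompts, and the helper names are placeholders invented for illustration; they are not PropertyGuru's implementation.

```python
# Skeleton of the planning-and-execution pattern described above.
# `call_llm` is a placeholder for any chat-completion client; prompts
# and helper names are assumptions, not the authors' implementation.
import contextlib
import io

def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire up your LLM provider here")

def answer_analytics_question(question: str, table_path: str) -> str:
    # 1. Planning agent: dissect the task into ordered sub-tasks.
    plan = call_llm(
        f"Break this analytics question into numbered sub-tasks over the "
        f"CSV at {table_path}:\n{question}"
    )
    # 2. Code-generation agent: turn the plan into executable pandas code
    #    that prints its findings.
    code = call_llm(
        f"Write self-contained Python (pandas) implementing this plan. "
        f"Print all intermediate results.\nPlan:\n{plan}"
    )
    # 3. Execute the generated code and capture its stdout.
    buffer = io.StringIO()
    with contextlib.redirect_stdout(buffer):
        exec(code, {"__name__": "__main__"})  # sandbox this in production
    # 4. Final step: compose the answer from the executed output.
    return call_llm(
        f"Question: {question}\nCode output:\n{buffer.getvalue()}\n"
        f"Write the final answer grounded only in this output."
    )
```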
Procedia PDF Downloads 87295 Measurement of Fatty Acid Changes in Post-Mortem Belowground Carcass (Sus-scrofa) Decomposition: A Semi-Quantitative Methodology for Determining the Post-Mortem Interval
Authors: Nada R. Abuknesha, John P. Morgan, Andrew J. Searle
Abstract:
Information regarding the post-mortem interval (PMI) is vital in criminal investigations to establish a time frame when reconstructing events. PMI is defined as the time period that has elapsed between the occurrence of death and the discovery of the corpse. Adipocere, commonly referred to as 'grave wax', is formed when post-mortem adipose tissue is converted into a solid material largely composed of fatty acids. Adipocere is of interest to forensic anthropologists, as its formation is able to slow down the decomposition process. Therefore, analysing the changes in the patterns of fatty acids during early decomposition may allow estimation of the period of burial, and hence the PMI. The current study investigated the fatty acid composition and patterns in buried pig fat tissue, in an attempt to determine whether particular patterns of fatty acid composition are associated with the duration of burial and hence may be used to estimate PMI. Adipose tissue from the abdominal region of domestic pigs (Sus scrofa) was used to model the human decomposition process. A 17 x 20 cm piece of pork belly was buried in a shallow artificial grave, and weekly samples (n=3) of the buried pig fat tissue were collected over an 11-week period. The marker fatty acids palmitic (C16:0), oleic (C18:1n-9), and linoleic (C18:2n-6) acid were extracted from the buried pig fat tissue and analysed as fatty acid methyl esters using a gas chromatography system, with levels quantified from their respective standards. The concentrations of C16:0 (69.2 mg/mL) and C18:1n-9 (44.3 mg/mL) measured at time zero exhibited significant fluctuations during the burial period. Levels rose (to 116 and 60.2 mg/mL, respectively) and then fell from the second week onwards to reach 19.3 and 18.3 mg/mL, respectively, at week 6. Levels showed another increase at week 9 (66.3 and 44.1 mg/mL, respectively) followed by a gradual decrease at week 10 (20.4 and 18.5 mg/mL, respectively), and a sharp increase was observed in the final week (131.2 and 61.1 mg/mL, respectively). Conversely, the levels of C18:2n-6 remained more or less constant throughout the study. In addition to fluctuations in the concentrations, several new fatty acids appeared in the later weeks, while other fatty acids that were detectable in the time-zero sample were lost. There are several probable opportunities to utilise fatty acid analysis as a basic technique for approximating PMI: the quantification of marker fatty acids and the detection of selected fatty acids that either disappear or appear during the burial period. This pilot study indicates that this may be a potential semi-quantitative methodology for determining the PMI. Ideally, the analysis of particular fatty acid patterns in the early stages of decomposition could serve as an additional tool alongside already available techniques, improving the overall process of estimating the PMI of a corpse.Keywords: adipocere, fatty acids, gas chromatography, post-mortem interval
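A toy illustration of the semi-quantitative idea: encode the reported weekly marker concentrations and match an unknown sample to the nearest sampled week. The concentrations are those reported above; the nearest-profile matching rule is an illustrative assumption, not the authors' validated method, and the non-monotonic profile means some weeks (e.g., 6 and 10) are nearly indistinguishable on these two markers alone.

```python
# Semi-quantitative sketch: match an unknown sample's marker levels to
# the closest sampled burial week. Concentrations (mg/mL) are the C16:0
# and C18:1n-9 values reported in the abstract; the matching rule itself
# is an illustrative assumption.
reference = {           # week: (C16:0, C18:1n-9)
    0: (69.2, 44.3),
    2: (116.0, 60.2),
    6: (19.3, 18.3),
    9: (66.3, 44.1),
    10: (20.4, 18.5),
    11: (131.2, 61.1),
}

def estimate_burial_week(c16: float, c18_1: float) -> int:
    """Return the reference week with the smallest squared distance."""
    return min(reference,
               key=lambda w: (reference[w][0] - c16) ** 2
                             + (reference[w][1] - c18_1) ** 2)

print(estimate_burial_week(21.0, 18.0))  # -> 10 (closest profile)
```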
Procedia PDF Downloads 131294 Servitization in Machine and Plant Engineering: Leveraging Generative AI for Effective Product Portfolio Management Amidst Disruptive Innovations
Authors: Till Gramberg
Abstract:
In the dynamic world of machine and plant engineering, stagnation in the growth of new product sales compels companies to reconsider their business models. The increasing shift toward service orientation, known as "servitization," along with challenges posed by digitalization and sustainability, necessitates an adaptation of product portfolio management (PPM). Against this backdrop, this study investigates the current challenges and requirements of PPM in this industrial context and develops a framework for the application of generative artificial intelligence (AI) to enhance agility and efficiency in PPM processes. The research approach of this study is based on a mixed-method design. Initially, qualitative interviews with industry experts were conducted to gain a deep understanding of the specific challenges and requirements in PPM. These interviews were analyzed using the Gioia method, painting a detailed picture of the existing issues and needs within the sector. This was complemented by a quantitative online survey. The combination of qualitative and quantitative research enabled a comprehensive understanding of the current challenges in the practical application of machine and plant engineering PPM. Based on these insights, a specific framework for the application of generative AI in PPM was developed. This framework aims to assist companies in implementing faster and more agile processes, systematically integrating dynamic requirements from trends such as digitalization and sustainability into their PPM process. Utilizing generative AI technologies, companies can more quickly identify and respond to trends and market changes, allowing for a more efficient and targeted adaptation of the product portfolio. The study emphasizes the importance of an agile and reactive approach to PPM in a rapidly changing environment. It demonstrates how generative AI can serve as a powerful tool to manage the complexity of a diversified and continually evolving product portfolio. The developed framework offers practical guidelines and strategies for companies to improve their PPM processes by leveraging the latest technological advancements while maintaining ecological and social responsibility. This paper significantly contributes to deepening the understanding of the application of generative AI in PPM and provides a framework for companies to manage their product portfolios more effectively and adapt to changing market conditions. The findings underscore the relevance of continuous adaptation and innovation in PPM strategies and demonstrate the potential of generative AI for proactive and future-oriented business management.Keywords: servitization, product portfolio management, generative AI, disruptive innovation, machine and plant engineering
Procedia PDF Downloads 82293 Prediction of Live Birth in a Matched Cohort of Elective Single Embryo Transfers
Authors: Mohsen Bahrami, Banafsheh Nikmehr, Yueqiang Song, Anuradha Koduru, Ayse K. Vuruskan, Hongkun Lu, Tamer M. Yalcinkaya
Abstract:
In recent years, we have witnessed an explosion of studies aimed at using a combination of artificial intelligence (AI) and time-lapse imaging data on embryos to improve IVF outcomes. However, despite promising results, no study has used a matched cohort of transferred embryos that differ only in pregnancy outcome, i.e., embryos from a single clinic that are similar in parameters such as morphokinetic condition, patient age, and overall clinic and lab performance. Here, we used time-lapse data on embryos with known pregnancy outcomes to see if the rich spatiotemporal information embedded in these data would allow prediction of the pregnancy outcome regardless of such critical parameters. Methodology: We performed a retrospective analysis of time-lapse data from our IVF clinic, which uses the Embryoscope for 100% of embryo culture to the blastocyst stage, with known clinical outcomes of live birth vs nonpregnant (embryos with spontaneous abortion outcomes were excluded). We used time-lapse data from 200 elective single-transfer embryos randomly selected from January 2019 to June 2021. Our sample included 100 embryos in each group, with no significant difference in patient age (P=0.9550) or morphokinetic scores (P=0.4032). Data from all patients were combined into a 4th-order tensor, and feature extraction was subsequently carried out by a tensor decomposition methodology. The features were then used in a machine learning classifier to classify the two groups. Major findings: The performance of the model was evaluated using 100 random subsampling cross-validations (80% train, 20% test). The prediction accuracy, averaged across the 100 permutations, exceeded 80%. We also performed a random grouping analysis, in which labels (live birth, nonpregnant) were randomly assigned to embryos, which yielded 50% accuracy. Conclusion: The high accuracy in the main analysis and the low accuracy in the random grouping analysis suggest a consistent spatiotemporal pattern associated with pregnancy outcomes, regardless of patient age and embryo morphokinetic condition, and beyond already known parameters such as early cleavage or early blastulation. Despite the small sample size, this ongoing analysis is the first to show the potential of AI methods in capturing the complex morphokinetic changes embedded in embryo time-lapse data that contribute to successful pregnancy outcomes, regardless of already known parameters. Results on a larger sample, with complementary analysis on the prediction of other key outcomes such as embryo euploidy and aneuploidy, will be presented at the meeting.Keywords: IVF, embryo, machine learning, time-lapse imaging data
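The pipeline the abstract outlines (4th-order tensor, decomposition-based features, repeated 80/20 subsampling) might look like the following sketch. The tensor shape, CP rank, and logistic regression classifier are assumptions, since the abstract does not specify them, and random data stands in for the imaging tensor (so the toy accuracy will hover near chance, unlike the >80% reported on real data).

```python
# Sketch: CP (PARAFAC) decomposition of a 4th-order embryo tensor,
# subject-mode factors as features, then 100 repeated 80/20 splits.
# Shapes, rank, and classifier are assumptions; random data is a stand-in.
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import ShuffleSplit, cross_val_score

rng = np.random.default_rng(0)
X = tl.tensor(rng.random((200, 16, 32, 32)))  # embryos x time x H x W (toy)
y = np.repeat([0, 1], 100)                    # nonpregnant vs live birth

weights, factors = parafac(X, rank=8, n_iter_max=100,
                           init="random", random_state=0)
features = tl.to_numpy(factors[0])            # one feature row per embryo

cv = ShuffleSplit(n_splits=100, test_size=0.2, random_state=0)
acc = cross_val_score(LogisticRegression(max_iter=1000),
                      features, y, cv=cv).mean()
print(f"Mean accuracy over 100 splits: {acc:.2f}")
```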
Procedia PDF Downloads 92