Search results for: advanced reactor fuel reprocessing
79 Advancements in Arthroscopic Surgery Techniques for Anterior Cruciate Ligament (ACL) Reconstruction
Authors: Islam Sherif, Ahmed Ashour, Ahmed Hassan, Hatem Osman
Abstract:
Anterior Cruciate Ligament (ACL) injuries are common among athletes and individuals participating in sports with sudden stops, pivots, and changes in direction. Arthroscopic surgery is the gold standard for ACL reconstruction, aiming to restore knee stability and function. Recent years have witnessed significant advancements in arthroscopic surgery techniques, graft materials, and technological innovations, revolutionizing the field of ACL reconstruction. This presentation delves into the latest advancements in arthroscopic surgery techniques for ACL reconstruction and their potential impact on patient outcomes. Traditionally, autografts from the patellar tendon, hamstring tendon, or quadriceps tendon have been commonly used for ACL reconstruction. However, recent studies have explored the use of allografts, synthetic scaffolds, and tissue-engineered grafts as viable alternatives. This abstract evaluates the benefits and potential drawbacks of each graft type, considering factors such as graft incorporation, strength, and risk of graft failure. Moreover, the application of augmented reality (AR) and virtual reality (VR) technologies in surgical planning and intraoperative navigation has gained traction. AR and VR platforms provide surgeons with detailed 3D anatomical reconstructions of the knee joint, enhancing preoperative visualization and aiding in graft tunnel placement during surgery. We discuss the integration of AR and VR in arthroscopic ACL reconstruction procedures, evaluating their accuracy, cost-effectiveness, and overall impact on surgical outcomes. Beyond graft selection and surgical navigation, patient-specific planning has gained attention in recent research. Advanced imaging techniques, such as MRI-based personalized planning, enable surgeons to tailor ACL reconstruction procedures to each patient's unique anatomy. 
By accounting for individual variations in the femoral and tibial insertion sites, this personalized approach aims to optimize graft placement and potentially improve postoperative knee kinematics and stability. Furthermore, rehabilitation and postoperative care play a crucial role in the success of ACL reconstruction. This abstract explores novel rehabilitation protocols, emphasizing early mobilization, neuromuscular training, and accelerated recovery strategies. Integrating technology, such as wearable sensors and mobile applications, into postoperative care can facilitate remote monitoring and timely intervention, contributing to enhanced rehabilitation outcomes. In conclusion, this presentation provides an overview of the cutting-edge advancements in arthroscopic surgery techniques for ACL reconstruction. By embracing innovative graft materials, augmented reality, patient-specific planning, and technology-driven rehabilitation, orthopedic surgeons and sports medicine specialists can achieve superior outcomes in ACL injury management. These developments hold great promise for improving the functional outcomes and long-term success rates of ACL reconstruction, benefitting athletes and patients alike.
Keywords: arthroscopic surgery, ACL, autograft, allograft, graft materials, ACL reconstruction, synthetic scaffolds, tissue-engineered graft, virtual reality, augmented reality, surgical planning, intra-operative navigation
Procedia PDF Downloads 92
78 Thermally Stable Crystalline Triazine-Based Organic Polymeric Nanodendrites for Mercury(2+) Ion Sensing
Authors: Dimitra Das, Anuradha Mitra, Kalyan Kumar Chattopadhyay
Abstract:
Organic polymers, constructed from light elements like carbon, hydrogen, nitrogen, oxygen, sulphur, and boron atoms, are an emergent class of non-toxic, metal-free, environmentally benign advanced materials. Covalent triazine-based polymers with a functional triazine group are a significant class of organic materials due to their remarkable stability arising out of strong covalent bonds. They can conventionally form hydrogen bonds, favour π–π contacts, and they were recently revealed to be involved in interesting anion–π interactions. The present work mainly focuses upon the development of a single-crystalline, highly cross-linked triazine-based nitrogen-rich organic polymer with nanodendritic morphology and significant thermal stability. The polymer has been synthesized through hydrothermal treatment of melamine and ethylene glycol, resulting in cross-polymerization via a condensation-polymerization reaction. The crystal structure of the polymer has been evaluated by employing the Rietveld whole-profile fitting method. The polymer has been found to be composed of monoclinic melamine having space group P21/a. A detailed insight into the chemical structure of the as-synthesized polymer has been elucidated by Fourier Transform Infrared Spectroscopy (FTIR) and Raman spectroscopic analysis. X-Ray Photoelectron Spectroscopic (XPS) analysis has also been carried out for further understanding of the different types of linkages required to create the backbone of the polymer. The unique rod-like morphology of the triazine-based polymer has been revealed from the images obtained from Field Emission Scanning Electron Microscopy (FESEM) and Transmission Electron Microscopy (TEM). Interestingly, this polymer has been found to selectively detect mercury (Hg²⁺) ions at an extremely low concentration through fluorescence quenching, with a detection limit as low as 0.03 ppb.
The high toxicity of mercury ions (Hg²⁺) arises from their strong affinity towards the sulphur atoms of biological building blocks. Even a trace quantity of this metal is dangerous for human health. Furthermore, owing to its small ionic radius and high solvation energy, Hg²⁺ ions remain encapsulated by water molecules, making their detection a challenging task. There are some existing reports on fluorescence-based heavy metal ion sensors using covalent organic frameworks (COFs), but reports on mercury sensing using triazine-based polymers are rather scarce. Thus, ultra-trace detection of Hg²⁺ ions with a high level of selectivity and sensitivity has contemporary significance. A plausible sensing mechanism for the polymer has been proposed to understand the applicability of the material as a potential sensor. The impressive sensitivity of the polymer sample towards Hg²⁺ represents the first report of a highly crystalline triazine-based polymer (without the introduction of any sulphur groups or functionalization) detecting mercury ions through the photoluminescence quenching technique. This crystalline metal-free organic polymer, being cheap, non-toxic, and scalable, has current relevance and could be a promising candidate for Hg²⁺ ion sensing at the commercial level.
Keywords: fluorescence quenching, mercury ion sensing, single-crystalline, triazine-based polymer
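As context for detection-limit figures like the 0.03 ppb quoted above, the common 3σ/slope rule for a quenching calibration can be sketched briefly. The calibration points, blank readings, and resulting numbers below are invented for illustration and are not data from this study.

```python
# Hypothetical sketch: estimating a limit of detection (LOD) from a
# fluorescence-quenching calibration via the common LOD = 3*sigma_blank/slope rule.
# All data below are invented for illustration only.

def linear_fit(x, y):
    """Ordinary least-squares fit y = a + b*x; returns (intercept, slope)."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    return my - b * mx, b

def stdev(values):
    """Sample standard deviation (used here for repeated blank responses)."""
    n = len(values)
    m = sum(values) / n
    return (sum((v - m) ** 2 for v in values) / (n - 1)) ** 0.5

# Invented calibration: quenching response (I0/I - 1) vs Hg2+ concentration in ppb
conc = [0.0, 0.1, 0.2, 0.4, 0.8]
response = [0.00, 0.21, 0.39, 0.82, 1.60]
blank_runs = [0.001, -0.002, 0.003, 0.000, -0.001]  # repeated blank responses

_, slope = linear_fit(conc, response)
lod = 3 * stdev(blank_runs) / slope  # LOD = 3 * sigma_blank / slope
print(f"slope = {slope:.3f} per ppb, LOD ~ {lod:.4f} ppb")
```

With realistic blank noise, such a calibration readily yields sub-ppb detection limits, which is the regime the abstract reports.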
Procedia PDF Downloads 136
77 Chatbots vs. Websites: A Comparative Analysis Measuring User Experience and Emotions in Mobile Commerce
Authors: Stephan Boehm, Julia Engel, Judith Eisser
Abstract:
During the last decade, communication on the Internet transformed from a broadcast to a conversational model by supporting more interactive features, enabling user-generated content, and introducing social media networks. Another important trend with a significant impact on electronic commerce is a massive usage shift from desktop to mobile devices. However, a presentation of product- or service-related information accumulated on websites, micro pages, or portals often remains the pivot and focal point of a customer journey. A more recent change in user behavior, especially in younger user groups and in Asia, goes along with the increasing adoption of messaging applications supporting almost real-time but asynchronous communication on mobile devices. Mobile apps of this type can not only provide an alternative to traditional one-to-one communication on mobile devices such as voice calls or the short messaging service. Moreover, they can be used in mobile commerce as a new marketing and sales channel, e.g., for product promotions and direct marketing activities. This requires a new way of customer interaction compared to traditional mobile commerce activities and functionalities provided on mobile websites. One option better aligned with the customer interaction in messaging apps is the so-called chatbot. Chatbots are conversational programs or dialog systems simulating a text- or voice-based human interaction. They can be introduced in mobile messaging and social media apps using rule- or artificial intelligence-based implementations. In this context, a comparative analysis is conducted to examine the impact of using traditional websites or chatbots for promoting a product in an impulse purchase situation. The aim of this study is to measure the impact on the customers’ user experience and emotions. The study is based on a random sample of about 60 smartphone users aged 20 to 30.
Participants are randomly assigned into two groups and participate in a traditional website or innovative chatbot-based mobile commerce scenario. The chatbot-based scenario is implemented by using a Wizard-of-Oz experimental approach for reasons of simplicity and to allow for more flexibility when simulating simple rule-based and more advanced artificial intelligence-based chatbot setups. A specific set of metrics is defined to measure and compare the user experience in both scenarios. It can be assumed that users get more emotionally involved when interacting with a system simulating human communication behavior instead of browsing a mobile commerce website. For this reason, innovative face-tracking and analysis technology is used to derive feedback on the emotional status of the study participants while interacting with the website or the chatbot. This study is a work in progress. The results will provide first insights on the effects of chatbot usage on user experiences and emotions in mobile commerce environments. Based on the study findings, basic requirements for a user-centered design and implementation of chatbot solutions for mobile commerce can be derived. Moreover, first indications on situations where chatbots might be favorable in comparison to the usage of traditional website-based mobile commerce can be identified.
Keywords: chatbots, emotions, mobile commerce, user experience, Wizard-of-Oz prototyping
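As a rough illustration of the simple rule-based chatbot setups the abstract mentions, a keyword-matching dialog function can be sketched in a few lines. The product, keywords, and replies below are invented; they do not reproduce the study's Wizard-of-Oz implementation, in which a human operator would stand in for `reply()`.

```python
# Minimal sketch of a rule-based chatbot for a product-promotion scenario.
# All rules and wording are invented for illustration only.

RULES = [
    (("price", "cost", "how much"), "The promoted sneaker costs 79 EUR with free shipping."),
    (("size", "sizes"), "Sizes 36-46 are in stock. Which size do you need?"),
    (("buy", "order", "checkout"), "Great! I can reserve a pair for you - just confirm your size."),
]
FALLBACK = "Sorry, I didn't get that. You can ask about price, sizes, or ordering."

def reply(message: str) -> str:
    """Return the answer of the first rule whose keyword appears in the message."""
    text = message.lower()
    for keywords, answer in RULES:
        if any(k in text for k in keywords):
            return answer
    return FALLBACK
```

An AI-based setup would replace the keyword lookup with intent classification, but the request/response loop seen by the participant stays the same.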
Procedia PDF Downloads 458
76 Broad Host Range Bacteriophage Cocktail for Reduction of Staphylococcus aureus as Potential Therapy for Atopic Dermatitis
Authors: Tamar Lin, Nufar Buchshtab, Yifat Elharar, Julian Nicenboim, Rotem Edgar, Iddo Weiner, Lior Zelcbuch, Ariel Cohen, Sharon Kredo-Russo, Inbar Gahali-Sass, Naomi Zak, Sailaja Puttagunta, Merav Bassan
Abstract:
Background: Atopic dermatitis (AD) is a chronic, relapsing inflammatory skin disorder that is characterized by dry skin and flares of eczematous lesions and intense pruritus. Multiple lines of evidence suggest that AD is associated with increased colonization by Staphylococcus aureus, which contributes to disease pathogenesis through the release of virulence factors that affect both keratinocytes and immune cells, leading to disruption of the skin barrier and immune cell dysfunction. The aim of the current study is to develop a bacteriophage-based product that specifically targets S. aureus. Methods: For phage discovery, environmental samples were screened on 118 S. aureus strains isolated from skin samples, followed by multiple enrichment steps. Natural phages were isolated, subjected to Next-Generation Sequencing (NGS), and analyzed using proprietary bioinformatics tools for undesirable genes (toxins, antibiotic resistance genes, lysogeny potential), taxonomic classification, and purity. Phage host range was determined by an efficiency of plating (EOP) value above 0.1 and the ability of the cocktail to completely lyse liquid bacterial culture under different growth conditions (e.g., temperature, bacterial stage). Results: Sequencing analysis demonstrated that the 118 S. aureus clinical strains were distributed across the phylogenetic tree of all available RefSeq S. aureus strains (~10,750 strains). Screening environmental samples on the S. aureus isolates resulted in the isolation of 50 lytic phages from different genera, including Silviavirus, Kayvirus, Podoviridae, and a novel unidentified phage. NGS sequencing confirmed the absence of toxic elements in the phages’ genomes. The host range of the individual phages, as measured by the efficiency of plating (EOP), ranged from 41% (48/118) to 79% (93/118). Host range studies in liquid culture revealed that a subset of the phages can infect a broad range of S. 
aureus strains in different metabolic states, including the stationary state. Combining the single-phage EOP results of selected phages resulted in a broad host range cocktail which infected 92% (109/118) of the strains. When tested in vitro in a liquid infection assay, clearance was achieved in 87% (103/118) of the strains, with no evidence of phage resistance throughout the study (24 hours). An S. aureus host was identified that can be used for the production of all the phages in the cocktail at high titers suitable for large-scale manufacturing. This host was validated for the absence of contaminating prophages using advanced NGS methods combined with multiple production cycles. The phages are produced under optimized scale-up conditions and are being used for the development of a topical formulation (BX005) that may be administered to subjects with atopic dermatitis. Conclusions: A cocktail of natural phages targeting S. aureus was effective in reducing bacterial burden across multiple assays. Phage products may offer safe and effective steroid-sparing options for atopic dermatitis.
Keywords: atopic dermatitis, bacteriophage cocktail, host range, Staphylococcus aureus
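The host-range bookkeeping described above (an EOP threshold of 0.1, and cocktail coverage as the fraction of strains infected by at least one phage) can be sketched as follows. The strain names, phage names, and titers below are invented for illustration; they are not the study's 118-strain panel.

```python
# Sketch of efficiency-of-plating (EOP) and cocktail-coverage arithmetic,
# using an invented two-phage, three-strain example.

EOP_THRESHOLD = 0.1

def eop(titer_test: float, titer_reference: float) -> float:
    """Efficiency of plating: plaque titer on the test strain divided by
    the titer on the phage's reference host."""
    return titer_test / titer_reference

def cocktail_coverage(eop_table) -> float:
    """Fraction of strains with EOP above threshold for at least one phage.
    eop_table maps phage name -> {strain name: EOP value}."""
    strains = {s for per_phage in eop_table.values() for s in per_phage}
    covered = {
        s
        for per_phage in eop_table.values()
        for s, value in per_phage.items()
        if value > EOP_THRESHOLD
    }
    return len(covered) / len(strains)

# Invented EOP values for two phages on three strains
table = {
    "phageA": {"SA01": 0.9, "SA02": 0.02, "SA03": 0.5},
    "phageB": {"SA01": 0.0, "SA02": 0.7, "SA03": 0.01},
}
print(f"cocktail coverage: {cocktail_coverage(table):.0%}")
```

This shows how individual phages with partial host ranges can combine into a cocktail covering strains that no single phage infects.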
Procedia PDF Downloads 153
75 Design Thinking and Project-Based Learning: Opportunities, Challenges, and Possibilities
Authors: Shoba Rathilal
Abstract:
High unemployment rates and a shortage of experienced and qualified employees appear to be a paradox that currently plagues most countries worldwide. In a developing country like South Africa, the rate of unemployment is reported to be approximately 35%, the highest recorded globally. At the same time, a countrywide deficit in experienced and qualified potential employees is reported in South Africa, which is causing fierce rivalry among firms. Employers have reported that graduates are very rarely able to meet the demands of the job, as there are gaps in their knowledge, conceptual understanding, and other 21st-century competencies, attributes, and dispositions required to successfully negotiate the multiple responsibilities of employees in organizations. In addition, the rates of unemployment and suitability of graduates appear to be skewed by race and social class, reflecting the continued effects of a legacy of inequitable educational access. Higher education in the current technologically advanced and dynamic world needs to serve as an agent of transformation, aspiring to develop graduates who are creative, flexible, critical, and with entrepreneurial acumen. This requires a re-envisioning of higher education curricula and pedagogy: of the selection, sequencing, and pacing of learning, teaching, and assessment. At a particular higher education institution in South Africa, Design Thinking (DT) and Project-Based Learning (PBL) are being adopted as two approaches that aim to enhance the student experience through the provision of a “distinctive education” that brings together disciplinary knowledge, professional engagement, technology, innovation, and entrepreneurship. Using these methodologies requires students to solve real-world applied problems using various forms of knowledge and to find innovative solutions that can result in new products and services. 
The intention is to promote the development of skills for self-directed learning, facilitate the development of self-awareness, and contribute to students being active partners in the application and production of knowledge. These approaches emphasize active and collaborative learning, teamwork, conflict resolution, and problem-solving through effective integration of theory and practice. In principle, both these approaches are extremely impactful. However, at the institution in this study, the implementation of PBL and DT was not as “smooth” as anticipated. This presentation reports on the analysis of the implementation of these two approaches within higher education curricula at a particular university in South Africa. The study adopts a qualitative case study design. Data were generated through the use of surveys, evaluation feedback at workshops, and content analysis of project reports. Data were analyzed using document, content, and thematic analysis. Initial analysis shows that the forces constraining the implementation of PBL and DT range from the capacity of both staff and students to engage with DT and PBL to the contextual realities of higher education institutions, administrative processes, and resources. At the same time, the implementation of DT and PBL was enabled through the allocation of strategic funding and capacity development workshops. These factors, however, could not achieve maximum impact. In addition, the presentation will include recommendations on how DT and PBL could be adapted for differing contexts.
Keywords: design thinking, project based learning, innovative higher education pedagogy, student and staff capacity development
Procedia PDF Downloads 77
74 High Pressure Thermophysical Properties of Complex Mixtures Relevant to Liquefied Natural Gas (LNG) Processing
Authors: Saif Al Ghafri, Thomas Hughes, Armand Karimi, Kumarini Seneviratne, Jordan Oakley, Michael Johns, Eric F. May
Abstract:
Knowledge of the thermophysical properties of complex mixtures at extreme conditions of pressure and temperature has always been essential to the Liquefied Natural Gas (LNG) industry’s evolution because of the tremendous technical challenges present at all stages in the supply chain from production to liquefaction to transport. Each stage is designed using predictions of the mixture’s properties, such as density, viscosity, surface tension, heat capacity and phase behaviour as a function of temperature, pressure, and composition. Unfortunately, currently available models lead to equipment over-designs of 15% or more. To achieve better designs that work more effectively and/or over a wider range of conditions, new fundamental property data are essential, both to resolve discrepancies in our current predictive capabilities and to extend them to the higher-pressure conditions characteristic of many new gas fields. Furthermore, innovative experimental techniques are required to measure different thermophysical properties at high pressures and over a wide range of temperatures, including near the mixture’s critical points where gas and liquid become indistinguishable and most existing predictive fluid property models break down. In this work, we present a wide range of experimental measurements made for different binary and ternary mixtures relevant to LNG processing, with a particular focus on viscosity, surface tension, heat capacity, bubble-points and density. For this purpose, customized and specialized apparatus were designed and validated over the temperature range (200 to 423) K at pressures to 35 MPa. The mixtures studied were (CH4 + C3H8), (CH4 + C3H8 + CO2) and (CH4 + C3H8 + C7H16); in the last of these the heptane content was up to 10 mol %. 
Viscosity was measured using a vibrating wire apparatus, while mixture densities were obtained by means of a high-pressure magnetic-suspension densimeter and an isochoric cell apparatus; the latter was also used to determine bubble-points. Surface tensions were measured using the capillary rise method in a visual cell, which also enabled the location of the mixture critical point to be determined from observations of critical opalescence. Mixture heat capacities were measured using a customised high-pressure differential scanning calorimeter (DSC). The combined standard relative uncertainties were less than 0.3% for density, 2% for viscosity, 3% for heat capacity and 3% for surface tension. The extensive experimental data gathered in this work were compared with a variety of different advanced engineering models frequently used for predicting thermophysical properties of mixtures relevant to LNG processing. In many cases the discrepancies between the predictions of different engineering models for these mixtures were large, and the high-quality data allowed erroneous but often widely used models to be identified. The data enable the development of new or improved models, to be implemented in process simulation software, so that the fluid properties needed for equipment and process design can be predicted reliably. This in turn will enable reduced capital and operational expenditure by the LNG industry. The current work also aided the community of scientists working to advance theoretical descriptions of fluid properties by helping to identify deficiencies in theoretical descriptions and calculations.
Keywords: LNG, thermophysical, viscosity, density, surface tension, heat capacity, bubble points, models
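Model-versus-measurement comparisons of the kind described above are often summarized as an average absolute relative deviation (AAD) per property. The sketch below uses invented density values and two hypothetical "models", not the study's data or any named equation of state.

```python
# Illustrative sketch: average absolute relative deviation (AAD, in percent)
# between measured densities and two hypothetical model predictions.
# All numbers are invented for illustration only.

measured = [410.2, 398.7, 385.1]   # density / (kg m^-3), invented measurements
model_a = [411.0, 399.5, 386.0]    # hypothetical model A predictions
model_b = [415.3, 404.1, 391.2]    # hypothetical model B predictions

def avg_abs_relative_deviation(pred, meas):
    """Average of |prediction - measurement| / measurement, in percent."""
    return 100 * sum(abs(p - m) / m for p, m in zip(pred, meas)) / len(meas)

aad_a = avg_abs_relative_deviation(model_a, measured)
aad_b = avg_abs_relative_deviation(model_b, measured)
print(f"model A AAD: {aad_a:.2f}%  model B AAD: {aad_b:.2f}%")
```

Comparing each model's AAD against the combined experimental uncertainty (here 0.3% for density) is what allows an erroneous model to be flagged: a model whose AAD greatly exceeds the measurement uncertainty cannot be reconciled with the data.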
Procedia PDF Downloads 274
73 Understanding Different Facets of Chromosome Abnormalities: A 17-year Cytogenetic Study and Indian Perspectives
Authors: Lakshmi Rao Kandukuri, Mamata Deenadayal, Suma Prasad, Bipin Sethi, Srinadh Buragadda, Lalji Singh
Abstract:
Worldwide, at least 7.6 million children are born annually with severe genetic or congenital malformations, and 90% of these are born in mid- and low-income countries. Precise prevalence data are difficult to collect, especially in developing countries, owing to the great diversity of conditions and also because many cases remain undiagnosed. Genetic and congenital disorders are the second most common cause of infant and childhood mortality and occur with a prevalence of 25-60 per 1000 births. The higher prevalence of genetic diseases in a particular community may, however, be due to some social or cultural factors. Such factors include the tradition of consanguineous marriage, which results in a higher rate of autosomal recessive conditions, including congenital malformations, stillbirths, or mental retardation. Genetic diseases can vary in severity, from being fatal before birth to requiring continuous management; their onset covers all life stages from infancy to old age. Those presenting at birth are particularly burdensome and may cause early death or life-long chronic morbidity. Genetic testing for several genetic diseases identifies changes in chromosomes, genes, or proteins. The results of a genetic test can confirm or rule out a suspected genetic condition or help determine a person's chance of developing or passing on a genetic disorder. Several hundred genetic tests are currently in use, and more are being developed. Chromosomal abnormalities are a major cause of human suffering and are implicated in mental retardation, congenital malformations, dysmorphic features, primary and secondary amenorrhea, reproductive wastage, infertility, and neoplastic diseases. Cytogenetic evaluation of patients is helpful in the counselling and management of affected individuals and families. We present here especially chromosomal abnormalities, which form a major part of the genetic disease burden in India. 
Different programmes on chromosome research and human reproductive genetics primarily relate to infertility, since this is a major public health problem in our country, affecting 10-15 percent of couples. Prenatal diagnosis of chromosomal abnormalities in high-risk pregnancies helps in detecting chromosomally abnormal foetuses. Such couples are counselled regarding the continuation of pregnancy. In addition to the basic research, the team is providing chromosome diagnostic services that include conventional and advanced techniques for identifying various genetic defects. Beyond routine chromosome diagnosis for infertility, referrals also include patients with short stature, hypogonadism, undescended testis, microcephaly, delayed developmental milestones, familial and isolated mental retardation, and cerebral palsy. Thus, chromosome diagnostics has found its applicability not only in disease prevention and management but also in guiding clinicians in certain aspects of treatment. It would be appropriate to affirm that chromosomes are the images of life and that they unequivocally mirror the states of human health. The importance of genetic counseling is increasing with the advancement in the field of genetics. Genetic counseling can help families to cope with the emotional, psychological, and medical consequences of genetic diseases.
Keywords: India, chromosome abnormalities, genetic disorders, cytogenetic study
Procedia PDF Downloads 315
72 Internet of Things, Edge and Cloud Computing in Rock Mechanical Investigation for Underground Surveys
Authors: Esmael Makarian, Ayub Elyasi, Fatemeh Saberi, Olusegun Stanley Tomomewo
Abstract:
Rock mechanical investigation is one of the most crucial activities in underground operations, especially in surveys related to hydrocarbon exploration and production, geothermal reservoirs, energy storage, mining, and geotechnics. There is a wide range of traditional methods for deriving, collecting, and analyzing rock mechanics data. However, these approaches may not be suitable or work well in some situations, such as fractured zones. Cutting-edge technologies have been introduced to address these issues. Internet of Things (IoT), Edge Computing, and Cloud Computing technologies (ECt and CCt, respectively) are among the newest and most widely used artificial intelligence methods employed for geomechanical studies. IoT devices act as sensors and cameras for real-time monitoring and mechanical-geological data collection of rocks, capturing temperature, movement, pressure, or stress levels. Other benefits of IoT technologies include structural integrity assessment, especially for cap rocks within hydrocarbon systems, and rock mass behavior assessment, which support further activities such as enhanced oil recovery (EOR) and underground gas storage (UGS) and improve safety risk management (SRM) and potential hazard identification (PHI). ECt can process, aggregate, and analyze data collected by IoT immediately, on a real-time scale, providing detailed insights into the behavior of rocks in various situations (e.g., stress, temperature, and pressure), establishing patterns quickly, and detecting trends. Therefore, this state-of-the-art technology can support autonomous systems in rock mechanical surveys, such as drilling and production (in hydrocarbon wells) or excavation (in the mining and geotechnics industries). In addition, ECt allows all rock-related operations to be controlled remotely and enables operators to apply changes or make adjustments; this feature is very important for environmental goals. 
More often than not, rock mechanical studies draw on different kinds of data, such as laboratory tests, field operations, and indirect information like seismic or well-logging data. CCt provides a useful platform for storing and managing large volumes of heterogeneous information, which can be very useful in fractured zones. Additionally, CCt supplies powerful tools for predicting, modeling, and simulating rock mechanical information, especially in fractured zones within vast areas. Also, it is a suitable source for sharing extensive information on rock mechanics, such as the direction and size of fractures in a large oil field or mine. The comprehensive review findings demonstrate that digital transformation through integrated IoT, Edge, and Cloud solutions is revolutionizing traditional rock mechanical investigation. These advanced technologies have empowered real-time monitoring, predictive analysis, and data-driven decision-making, culminating in noteworthy enhancements in safety, efficiency, and sustainability. Therefore, by employing IoT, CCt, and ECt, underground operations have experienced a significant boost, allowing for timely and informed actions using real-time data insights. The successful implementation of IoT, CCt, and ECt has led to safer operations, optimized processes, and environmentally conscious approaches in underground geological endeavors.
Keywords: rock mechanical studies, internet of things, edge computing, cloud computing, underground surveys, geological operations
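One way to picture the edge-side processing described above is a rolling-window monitor that flags anomalous sensor readings locally, without a round trip to the cloud. The class, window size, threshold, and readings below are illustrative assumptions, not part of the review.

```python
# Sketch of edge-side anomaly detection on a stream of IoT stress readings:
# keep a rolling window and flag a reading that deviates sharply from the
# recent mean. Window size, threshold, and data are invented for illustration.

from collections import deque

class EdgeStressMonitor:
    def __init__(self, window: int = 5, threshold: float = 0.2):
        self.readings = deque(maxlen=window)  # rolling window of recent readings
        self.threshold = threshold            # relative-deviation alarm level

    def ingest(self, value: float) -> bool:
        """Add a sensor reading; return True if it should raise a local alert."""
        alert = False
        if len(self.readings) == self.readings.maxlen:
            mean = sum(self.readings) / len(self.readings)
            if abs(value - mean) / mean > self.threshold:
                alert = True
        self.readings.append(value)
        return alert

monitor = EdgeStressMonitor()
stream = [10.0, 10.1, 9.9, 10.2, 10.0, 10.1, 13.5]  # sudden stress jump at end
alerts = [monitor.ingest(v) for v in stream]
```

Only the alert (and perhaps aggregated summaries) would need to be forwarded to the cloud tier, which is the bandwidth and latency advantage the abstract attributes to edge computing.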
Procedia PDF Downloads 62
71 Theoretical Modelling of Molecular Mechanisms in Stimuli-Responsive Polymers
Authors: Catherine Vasnetsov, Victor Vasnetsov
Abstract:
Context: Thermo-responsive polymers are materials that undergo significant changes in their physical properties in response to temperature changes. These polymers have gained significant attention in research due to their potential applications in various industries and medicine. However, the molecular mechanisms underlying their behavior are not well understood, particularly in relation to cosolvency, which is crucial for practical applications. Research Aim: This study aimed to theoretically investigate the phenomenon of cosolvency in long-chain polymers using the Flory-Huggins statistical-mechanical framework. The main objective was to understand the interactions between the polymer, solvent, and cosolvent under different conditions. Methodology: The research employed a combination of Monte Carlo computer simulations and advanced machine-learning methods. The Flory-Huggins mean field theory was used as the basis for the simulations. Spinodal graphs and ternary plots were utilized to develop an initial computer model for predicting polymer behavior. Molecular dynamic simulations were conducted to mimic real-life polymer systems. Machine learning techniques were incorporated to enhance the accuracy and reliability of the simulations. Findings: The simulations revealed that the addition of very low or very high volumes of cosolvent molecules resulted in smaller radii of gyration for the polymer, indicating poor miscibility. However, intermediate volume fractions of cosolvent led to higher radii of gyration, suggesting improved miscibility. These findings provide a possible microscopic explanation for the cosolvency phenomenon in polymer systems. Theoretical Importance: This research contributes to a better understanding of the behavior of thermo-responsive polymers and the role of cosolvency. The findings provide insights into the molecular mechanisms underlying cosolvency and offer specific predictions for future experimental investigations. 
The study also presents a more rigorous analysis of the Flory-Huggins free energy theory in the context of polymer systems. Data Collection and Analysis Procedures: The data for this study was collected through Monte Carlo computer simulations and molecular dynamic simulations. The interactions between the polymer, solvent, and cosolvent were analyzed using the Flory-Huggins mean field theory. Machine learning techniques were employed to enhance the accuracy of the simulations. The collected data was then analyzed to determine the impact of cosolvent volume fractions on the radii of gyration of the polymer. Question Addressed: The research addressed the question of how cosolvency affects the behavior of long-chain polymers. Specifically, the study aimed to investigate the interactions between the polymer, solvent, and cosolvent under different volume fractions and understand the resulting changes in the radii of gyration. Conclusion: In conclusion, this study utilized theoretical modeling and computer simulations to investigate the phenomenon of cosolvency in long-chain polymers. The findings suggest that moderate cosolvent volume fractions can lead to improved miscibility, as indicated by higher radii of gyration. These insights contribute to a better understanding of the molecular mechanisms underlying cosolvency in polymer systems and provide predictions for future experimental studies. The research also enhances the theoretical analysis of the Flory-Huggins free energy theory.
Keywords: molecular modelling, flory-huggins, cosolvency, stimuli-responsive polymers
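For reference, the Flory-Huggins mean-field free energy that underpins such simulations can be written down for the binary (polymer plus solvent) case; the ternary cosolvent system studied here adds analogous terms for the third component. The function names and parameter values below are our own illustrative choices, not the study's code.

```python
# Sketch of the binary Flory-Huggins free energy of mixing (per lattice site,
# in units of kT) and the spinodal condition derived from it.
# phi: polymer volume fraction; n_polymer: chain length N; chi: interaction parameter.

import math

def fh_free_energy(phi: float, n_polymer: int, chi: float) -> float:
    """f(phi) = (phi/N) ln(phi) + (1-phi) ln(1-phi) + chi * phi * (1-phi)."""
    return (
        (phi / n_polymer) * math.log(phi)
        + (1 - phi) * math.log(1 - phi)
        + chi * phi * (1 - phi)
    )

def chi_spinodal(phi: float, n_polymer: int) -> float:
    """Setting d2f/dphi2 = 1/(N*phi) + 1/(1-phi) - 2*chi to zero gives the
    spinodal value chi_s = 0.5 * (1/(N*phi) + 1/(1-phi))."""
    return 0.5 * (1 / (n_polymer * phi) + 1 / (1 - phi))

# Sanity check: for N = 1 (a regular solution), the spinodal at phi = 0.5
# reduces to the textbook value chi = 2.
```

Spinodal curves computed this way are what the abstract's "spinodal graphs and ternary plots" visualize; cosolvency shows up when the added third component shifts the miscible region.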
70 Comparing Practices of Swimming in the Netherlands against a Global Model for Integrated Development of Mass and High Performance Sport: Perceptions of Coaches
Authors: Melissa de Zeeuw, Peter Smolianov, Arnold Bohl
Abstract:
This study was designed to help improve international performance as well as increase swimming participation in the Netherlands. Over 200 sources of literature on sport delivery systems from 28 Australasian, North and South American, Western and Eastern European countries were analyzed to construct a globally applicable model of high performance swimming integrated with mass participation, comprising the following seven elements and three levels: Micro level (operations, processes, and methodologies for development of individual athletes): 1. Talent search and development, 2. Advanced athlete support. Meso level (infrastructures, personnel, and services enabling sport programs): 3. Training centers, 4. Competition systems, 5. Intellectual services. Macro level (socio-economic, cultural, legislative, and organizational): 6. Partnerships with supporting agencies, 7. Balanced and integrated funding and structures of mass and elite sport. This model emerged from the integration of instruments that have been used to analyse and compare national sport systems. The model has received scholarly validation and has been shown to be a framework for program analysis that is not culturally bound. It has recently been accepted as a model for further understanding North American sport systems, including (in chronological order of publications) US rugby, tennis, soccer, swimming and volleyball. The above model was used to design a questionnaire of 42 statements reflecting desired practices. The statements were validated by 12 international experts, including executives from sport governing bodies, academics who have published on high performance and sport development, and swimming coaches and administrators. This study used both highly structured and open-ended qualitative analysis tools, including a survey of swim coaches in which open responses accompanied structured questions.
After collection of the surveys, semi-structured discussions with Federation coaches were conducted to add triangulation to the findings. Lastly, a content analysis of Dutch Swimming’s website and organizational documentation was conducted. A representative sample of 1,600 Dutch swim coaches and administrators was contacted via email addresses from the Royal Dutch Swimming Federation's database. Fully completed questionnaires were returned by 122 coaches from all of the country's key regions, for a response rate of 7.63% - higher than the response rate of the previously mentioned US studies which used the same model and method. Results suggest possible enhancements at the macro level (e.g., greater public and corporate support to prepare and hire more coaches and to address the lack of facilities, funding and publicity at the mass participation level in order to make swimming affordable for all), at the meso level (e.g., comprehensive education for all coaches and a full spectrum of swimming pools, particularly 50-meter pools), and at the micro level (e.g., better preparation of athletes for a future outside swimming and better use of swimmers to stimulate swimming development). Best Dutch swimming management practices (e.g., comprehensive support to the most talented swimmers who win Olympic medals) as well as relevant international practices available for transfer to the Netherlands (e.g., high school competitions) are discussed. Keywords: sport development, high performance, mass participation, swimming
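The reported response rate follows directly from the figures above (122 completed questionnaires out of 1,600 contacted coaches and administrators); a one-line check:

```python
def response_rate(returned, contacted):
    """Survey response rate in percent."""
    return 100.0 * returned / contacted

rate = response_rate(122, 1600)
print(rate)  # 7.625, reported in the abstract (rounded) as 7.63%
```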
69 Simulation and Analysis of Mems-Based Flexible Capacitive Pressure Sensors with COMSOL
Authors: Ding Liangxiao
Abstract:
The technological advancements in Micro-Electro-Mechanical Systems (MEMS) have significantly contributed to the development of new, flexible capacitive pressure sensors, which are pivotal in transforming wearable and medical device technologies. This study employs the sophisticated simulation tools available in COMSOL Multiphysics® to develop and analyze a MEMS-based sensor with a tri-layered design. This sensor comprises top and bottom electrodes made from gold (Au), noted for their excellent conductivity, a middle dielectric layer made from a composite of Silver Nanowires (AgNWs) embedded in Thermoplastic Polyurethane (TPU), and a flexible, durable substrate of Polydimethylsiloxane (PDMS). This research was directed towards understanding how changes in the physical characteristics of the AgNWs/TPU dielectric layer, specifically its thickness and surface area, impact the sensor's operational efficacy. We assessed several key electrical properties: capacitance, electric potential, and membrane displacement under varied pressure conditions. These investigations are crucial for enhancing the sensor's sensitivity and ensuring its adaptability across diverse applications, including health monitoring systems and dynamic user interface technologies. To ensure the reliability of our simulations, we applied the Effective Medium Theory to calculate the dielectric constant of the AgNWs/TPU composite accurately. This approach is essential for predicting how the composite material will perform under different environmental and operational stresses, thus facilitating the optimization of the sensor design for enhanced performance and longevity. Moreover, we explored the potential benefits of innovative three-dimensional structures for the dielectric layer compared to traditional flat designs.
Our hypothesis was that 3D configurations might improve the stress distribution and optimize the electrical field interactions within the sensor, thereby boosting its sensitivity and accuracy. Our simulation protocol includes comprehensive performance testing under simulated environmental conditions, such as temperature fluctuations and mechanical pressures, which mirror the actual operational conditions. These tests are crucial for assessing the sensor's robustness and its ability to function reliably over extended periods, ensuring high reliability and accuracy in complex real-world environments. In our current research, although a full dynamic simulation analysis of the three-dimensional structures has not yet been conducted, preliminary explorations through three-dimensional modeling have indicated the potential for mechanical and electrical performance improvements over traditional planar designs. These initial observations emphasize the potential advantages and importance of incorporating advanced three-dimensional modeling techniques in the development of Micro-Electro-Mechanical Systems (MEMS) sensors, offering new directions for the design and functional optimization of future sensors. Overall, this study not only highlights the powerful capabilities of COMSOL Multiphysics® for modeling sophisticated electronic devices but also underscores the potential of innovative MEMS technology in advancing the development of more effective, reliable, and adaptable sensor solutions for a broad spectrum of technological applications. Keywords: MEMS, flexible sensors, COMSOL Multiphysics, AgNWs/TPU, PDMS, 3D modeling, sensor durability
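The capacitance calculation behind such a sensor can be sketched with a simple effective-medium estimate. The abstract does not state which effective-medium rule was applied; the Maxwell-Garnett form below and all material parameters (permittivities, filler loading, electrode area, dielectric gap) are illustrative assumptions only, not the study's values.

```python
def maxwell_garnett(eps_m, eps_i, f_i):
    """Maxwell-Garnett effective permittivity for spherical inclusions of
    permittivity eps_i at volume fraction f_i in a matrix of eps_m."""
    num = eps_i + 2 * eps_m + 2 * f_i * (eps_i - eps_m)
    den = eps_i + 2 * eps_m - f_i * (eps_i - eps_m)
    return eps_m * num / den

def plate_capacitance(eps_r, area_m2, gap_m):
    """Parallel-plate capacitance C = eps0 * eps_r * A / d."""
    EPS0 = 8.854e-12  # vacuum permittivity, F/m
    return EPS0 * eps_r * area_m2 / gap_m

# Hypothetical numbers: TPU matrix (eps ~ 4), conductive AgNW filler treated as a
# large finite permittivity, 5% loading; 1 mm^2 electrodes, 10 um dielectric gap.
eps_eff = maxwell_garnett(eps_m=4.0, eps_i=1e4, f_i=0.05)
c_rest = plate_capacitance(eps_eff, area_m2=1e-6, gap_m=10e-6)
# Applied pressure thins the gap, so capacitance rises as the membrane deflects.
c_pressed = plate_capacitance(eps_eff, area_m2=1e-6, gap_m=8e-6)
print(eps_eff, c_rest, c_pressed)
```

The treatment of the highly conductive nanowires as a large-but-finite permittivity is a common simplification; a percolation-aware mixing rule would be needed near the conductive threshold.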
68 An Integrated Real-Time Hydrodynamic and Coastal Risk Assessment Model
Authors: M. Reza Hashemi, Chris Small, Scott Hayward
Abstract:
The Northeast Coast of the US faces damaging effects of coastal flooding and winds due to Atlantic tropical and extratropical storms each year. Historically, several large storm events have produced substantial levels of damage to the region, the most notable of which were the Great Atlantic Hurricane of 1938, Hurricane Carol, Hurricane Bob, and recently Hurricane Sandy (2012). The objective of this study was to develop an integrated modeling system that could be used as a forecasting/hindcasting tool to evaluate and communicate the risk coastal communities face from these coastal storms. This modeling system utilizes the ADvanced CIRCulation (ADCIRC) model for storm surge predictions and the Simulating Waves Nearshore (SWAN) model for the wave environment. These models were coupled, passing information to each other and computing over the same unstructured domain, allowing for the most accurate representation of the physical storm processes. The coupled SWAN-ADCIRC model was validated and has been set up to perform real-time forecast simulations (as well as hindcasts). Modeled storm parameters were then passed to a coastal risk assessment tool. This tool, which is generic and universally applicable, generates spatial structural damage estimate maps on an individual-structure basis for an area of interest. The required inputs for the coastal risk model included detailed information about the individual structures, inundation levels, and wave heights for the selected region. Additionally, calculation of wind damage to structures was incorporated. The integrated coastal risk assessment system was then tested and applied to Charlestown, a small vulnerable coastal town along the southern shore of Rhode Island. The modeling system was applied to Hurricane Sandy and a synthetic storm. In both storm cases, the effect of natural dunes on coastal risk was investigated.
The resulting damage maps for the area (Charlestown) clearly showed that the dune-eroded scenarios affected more structures and increased the estimated damage. The system was also tested in forecast mode for a large Nor'easter, Stella (March 2017). The results showed a good performance of the coupled model in forecast mode when compared to observations. Finally, the nearshore model XBeach was nested within this regional grid (ADCIRC-SWAN) to simulate nearshore sediment transport processes and coastal erosion. Hurricane Irene (2011) was used to validate XBeach, on the basis of a unique beach profile dataset for the region. XBeach showed a relatively good performance, being able to estimate eroded volumes along the beach transects with a mean error of 16%. The validated model was then used to analyze the effectiveness of several erosion mitigation methods that were recommended in a recent study of coastal erosion in New England: beach nourishment, coastal banks (engineered core), and submerged breakwaters as well as artificial surfing reefs. It was shown that beach nourishment and coastal banks perform better in mitigating shoreline retreat and coastal erosion. Keywords: ADCIRC, coastal flooding, storm surge, coastal risk assessment, living shorelines
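A mean error over beach transects, such as the 16% reported for the XBeach validation above, is typically computed as a mean absolute percentage error. A minimal sketch of that metric follows; the transect volumes are invented for illustration and are not the Rhode Island data:

```python
def mean_percent_error(modeled, observed):
    """Mean absolute percentage error of modeled vs. observed values."""
    errors = [abs(m - o) / o * 100.0 for m, o in zip(modeled, observed)]
    return sum(errors) / len(errors)

# Illustrative eroded volumes (m^3 per metre of beach) along four transects.
observed = [12.0, 8.5, 15.2, 10.0]
modeled = [10.1, 9.6, 13.0, 11.9]
print(f"mean error: {mean_percent_error(modeled, observed):.1f}%")
```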
67 Modeling and Simulation of the Structural, Electronic and Magnetic Properties of Fe-Ni Based Nanoalloys
Authors: Ece A. Irmak, Amdulla O. Mekhrabov, M. Vedat Akdeniz
Abstract:
There is a growing interest in the modeling and simulation of magnetic nanoalloys by various computational methods. Magnetic crystalline/amorphous nanoparticles (NP) are interesting materials from both the applied and fundamental points of view, as their properties differ from those of bulk materials and are essential for advanced applications such as high-performance permanent magnets, high-density magnetic recording media, drug carriers, sensors in biomedical technology, etc. As an important magnetic material, Fe-Ni based nanoalloys have promising applications in the chemical industry (catalysis, batteries), the aerospace and stealth industry (radar-absorbing materials, jet engine alloys), magnetic biomedical applications (drug delivery, magnetic resonance imaging, biosensors) and the computer hardware industry (data storage). The physical and chemical properties of the nanoalloys depend not only on the particle or crystallite size but also on composition and atomic ordering. Therefore, computer modeling is an essential tool to predict structural, electronic, magnetic and optical behavior at the atomistic level and consequently reduce the time for designing and developing new materials with novel/enhanced properties. Although first-principles quantum mechanical methods provide the most accurate results, they require huge computational effort to solve the Schrödinger equation for only a few tens of atoms. On the other hand, the molecular dynamics method with appropriate empirical or semi-empirical interatomic potentials can give accurate results for the static and dynamic properties of larger systems in a short span of time. In this study, the structural evolution, magnetic and electronic properties of Fe-Ni based nanoalloys have been studied by using the molecular dynamics (MD) method in the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) and Density Functional Theory (DFT) in the Vienna Ab initio Simulation Package (VASP).
The effects of particle size (in the 2-10 nm particle size range) and temperature (300-1500 K) on the stability and structural evolution of amorphous and crystalline Fe-Ni bulk/nanoalloys have been investigated by combining the molecular dynamics (MD) simulation method with the Embedded Atom Model (EAM). EAM is applicable to Fe-Ni based bimetallic systems because it considers both the pairwise interatomic interaction potentials and the electron densities. The structural evolution of Fe-Ni bulk and nanoparticles (NPs) has been studied by calculation of radial distribution functions (RDF), interatomic distances, coordination numbers, and core-to-surface concentration profiles, as well as Voronoi analysis and the dependence of surface energy on temperature and particle size. Moreover, spin-polarized DFT calculations were performed by using a plane-wave basis set with generalized gradient approximation (GGA) exchange and correlation effects in the VASP-MedeA package to predict the magnetic and electronic properties of the Fe-Ni based alloys in bulk and nanostructured phases. The results of theoretical modeling and simulations for the structural evolution, magnetic and electronic properties of Fe-Ni based nanostructured alloys were compared with experimental and other theoretical results published in the literature. Keywords: density functional theory, embedded atom model, Fe-Ni systems, molecular dynamics, nanoalloys
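Of the structural measures listed above, the radial distribution function is the most common. A minimal, self-contained sketch for points in a cubic periodic box follows; this is a brute-force O(n²) single-configuration version for illustration, not the LAMMPS implementation used in the study:

```python
import math

def radial_distribution(positions, box, dr, r_max):
    """Radial distribution function g(r) for one configuration of points
    in a cubic periodic box of side `box`, binned with width `dr`."""
    n = len(positions)
    nbins = int(r_max / dr)
    counts = [0] * nbins
    for i in range(n):
        for j in range(i + 1, n):
            d2 = 0.0
            for a, b in zip(positions[i], positions[j]):
                delta = b - a
                delta -= box * round(delta / box)  # minimum-image convention
                d2 += delta * delta
            r = math.sqrt(d2)
            if r < r_max:
                counts[int(r / dr)] += 2  # each pair counts for both atoms
    rho = n / box ** 3
    g = []
    for k, count in enumerate(counts):
        # Ideal-gas normalization by the spherical shell volume of each bin.
        shell = 4.0 / 3.0 * math.pi * (((k + 1) * dr) ** 3 - (k * dr) ** 3)
        g.append(count / (n * rho * shell))
    return g

# Two atoms 1.0 apart in a 10 x 10 x 10 box: g(r) spikes in the bin holding r = 1.0.
g = radial_distribution([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]], box=10.0, dr=0.5, r_max=2.0)
print(g)
```

For realistic Fe-Ni trajectories one would average g(r) over many MD frames; the function above handles a single configuration.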
66 The Use of Artificial Intelligence in the Context of a Space Traffic Management System: Legal Aspects
Authors: George Kyriakopoulos, Photini Pazartzis, Anthi Koskina, Crystalie Bourcha
Abstract:
The need to secure safe access to and return from outer space, as well as to ensure the viability of outer space operations, keeps alive the debate over organizing space traffic through a Space Traffic Management System (STM). The proliferation of outer space activities in recent years, together with the dynamic emergence of the private sector, has gradually resulted in a diverse universe of actors operating in outer space. These developments have had an increasingly adverse impact on outer space sustainability, as the growing number of space debris clearly demonstrates. This landscape poses considerable threats to the outer space environment and its operators that need to be addressed by a combination of scientific-technological measures and regulatory interventions. In this context, recourse to recent technological advancements and, in particular, to Artificial Intelligence (AI) and machine learning systems could yield substantial gains in space traffic management with respect to collision avoidance as well as launch and re-entry procedures/phases. New technologies can support the prospects of a successful space traffic management system at an international scale by enabling, inter alia, timely, accurate and analytical processing of large data sets and rapid decision-making, more precise space debris identification and tracking, and the overall minimization of collision risks and reduction of operational costs. What is more, a significant part of space activities (i.e., the launch and/or re-entry phase) takes place in airspace rather than in outer space, hence the overall discussion also involves the highly developed, both technically and legally, international (and national) Air Traffic Management System (ATM). Nonetheless, from a regulatory perspective, the use of AI for the purposes of space traffic management raises implications that merit particular attention.
Key issues in this regard include the delimitation of AI-based activities as space activities, the designation of the applicable legal regime (international space or air law, national law), the assessment of the nature and extent of international legal obligations regarding space traffic coordination, as well as the appropriate liability regime applicable to AI-based technologies when operating for space traffic coordination, taking into particular consideration the dense regulatory developments at EU level. In addition, the prospects of institutionalizing international cooperation and promoting an international governance system, together with the challenges of establishing a comprehensive international STM regime, are revisited in the light of the intervention of AI technologies. This paper aims at examining the regulatory implications advanced by the use of AI technology in the context of space traffic management operations and its key correlating concepts (SSA, space debris mitigation), drawing in particular on international and regional considerations in the field of STM (e.g., UNCOPUOS, the International Academy of Astronautics, and the European Space Agency, among other actors), the promising advancements of the EU approach to AI regulation and, last but not least, national approaches regarding the use of AI in the context of space traffic management. Acknowledgment: The present work was co-funded by the European Union and Greek national funds through the Operational Program "Human Resources Development, Education and Lifelong Learning" (NSRF 2014-2020), under the call "Supporting Researchers with an Emphasis on Young Researchers – Cycle B" (MIS: 5048145). Keywords: artificial intelligence, space traffic management, space situational awareness, space debris
65 Poly(Trimethylene Carbonate)/Poly(ε-Caprolactone) Phase-Separated Triblock Copolymers with Advanced Properties
Authors: Nikola Toshikj, Michel Ramonda, Sylvain Catrouillet, Jean-Jacques Robin, Sebastien Blanquer
Abstract:
Biodegradable and biocompatible block copolymers have emerged as leading materials in both medical and environmental applications. Moreover, if their architecture can be controlled, more advanced applications can be foreseen. Meanwhile, organocatalytic ROP has been promoted as a more rapid and cleaner route, compared to traditional organometallic catalysis, towards the efficient synthesis of block copolymer architectures. Therefore, herein we report a novel organocatalytic pathway with the guanidine catalyst TBD for the synthesis of trimethylene carbonate blocks initiated by poly(caprolactone) as a pre-polymer. A pristine PTMC-b-PCL-b-PTMC block copolymer structure, without any residual products and with the clear desired block proportions, was achieved in under 1.5 hours at room temperature and verified by NMR spectroscopy and size-exclusion chromatography. Moreover, when elaborating block copolymer films, further stability and improved mechanical properties can be achieved via an additional crosslinking step of previously methacrylated block copolymers. Subsequently, motivated by the scarcity of studies on the phase-separation/crystallinity relationship in these semi-crystalline block copolymer systems, their intrinsic thermal and morphological properties were investigated by differential scanning calorimetry and atomic force microscopy. Firstly, in the DSC measurements, the block copolymers with χABN values greater than 20 presented two distinct glass transition temperatures, close to those of the respective homopolymers, providing an initial indication of a phase-separated system. Meanwhile, the existence of a crystalline phase was supported by the presence of a melting temperature. As expected, the crystallinity-driven phase-separated morphology predominated in the AFM analysis of the block copolymers. Crosslinking in the melt state, and hence the creation of a dense polymer network, did not disturb the crystallinity.
However, the latter proved sensitive to rapid liquid-nitrogen quenching directly from the melt. Accordingly, AFM analysis of liquid-nitrogen-quenched and crosslinked block copolymer films demonstrated a thermodynamically driven phase separation clearly predominating over the originally crystalline one. These films remained stable, with their morphology unchanged, even after 4 months at room temperature. However, as demonstrated by DSC analysis, once the temperature was raised above the melting temperature of the PCL block, neither the crosslinking nor the liquid-nitrogen quenching shattered the semi-crystalline network, while access to thermodynamically phase-separated structures was possible at temperatures below the poly(caprolactone) melting point. It is precisely this coexistence of dual crosslinked/crystalline networks in the same copolymer structure that allowed us to establish, for the first time, shape-memory properties in such materials, as verified by thermomechanical analysis. Moreover, the temperature at which the material recovers its original shape depended on the block arrangement, i.e., whether PTMC or PCL forms the end-blocks. It has therefore been possible to reach a block copolymer with a transition temperature around 40°C, thus opening potential real-life medical applications. In conclusion, the initial study of the phase-separation/crystallinity relationship in PTMC-b-PCL-b-PTMC block copolymers led to the discovery of novel shape-memory materials with superior properties, widely demanded in modern-life applications. Keywords: biodegradable block copolymers, organocatalytic ROP, self-assembly, shape-memory
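The segregation criterion used above (χABN greater than 20 indicating a phase-separated system with two distinct glass transitions) is easy to evaluate once χ is known. A small sketch follows, using the conventional empirical form χ(T) = a + b/T; the constants a and b are hypothetical placeholders, not fitted PTMC/PCL values:

```python
def chi_from_temperature(a, b, T):
    """Common empirical form chi(T) = a + b / T (a, b are fitted constants)."""
    return a + b / T

def is_phase_separated(chi, N, threshold=20.0):
    """Qualitative segregation check as used in the abstract: chi * N above
    the threshold suggests a microphase-separated melt (two distinct Tg's)."""
    return chi * N > threshold

# Hypothetical constants for a PTMC/PCL pair at room temperature.
chi = chi_from_temperature(a=0.02, b=25.0, T=298.0)
print(chi, chi * 300, is_phase_separated(chi, 300))
```

For reference, mean-field theory places the order-disorder transition of a symmetric diblock near χN of about 10.5, so a threshold of 20 corresponds to comfortably strong segregation.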
64 Enhancing the Implementation Strategy of Simultaneous Operations (SIMOPS) for the Major Turnaround at Pertamina Plaju Refinery
Authors: Fahrur Rozi, Daniswara Krisna Prabatha, Latief Zulfikar Chusaini
Abstract:
Amidst the backdrop of Pertamina Plaju Refinery, which stands as the oldest and, historically, the least technologically advanced among Pertamina's refineries, lies a unique challenge. Originally integrating facilities established by Shell in 1904 and Stanvac (originally Standard Oil) in 1926, the primary challenge at Plaju Refinery does not solely revolve around complexity; instead, it lies in ensuring reliability, considering its operational history of over a century. In more than a century of existence, Plaju Refinery has never undergone a comprehensive major turnaround encompassing all its units. The usual practice involves partial turnarounds that are sequentially conducted across its primary, secondary, and tertiary units (utilities and offsite). However, a significant shift is on the horizon. In Q4 2023, the refinery embarks on its first-ever major turnaround since its establishment. This decision was driven by the alignment of maintenance timelines across various units. Plaju Refinery's major turnaround was scheduled for October-November 2023, spanning 45 calendar days, with the objective of enhancing the operational reliability of all refinery units. The extensive job list for this turnaround encompasses 1583 tasks across 18 units/areas, involving approximately 9000 contracted workers. In this context, the Simultaneous Operations (SIMOPS) execution strategy emerges as a pivotal tool to optimize time efficiency and ensure safety. A Hazard Effect Management Process (HEMP) has been employed to assess the risk ratings of each task within the turnaround. Of the tasks assessed, 22 are deemed high-risk and necessitate mitigation. The SIMOPS approach serves as a preventive measure against potential incidents. It is noteworthy that every turnaround period at Pertamina Plaju Refinery involves SIMOPS-related tasks. In this context, enhancing the implementation strategy of Simultaneous Operations (SIMOPS) becomes imperative to minimize the occurrence of incidents.
At least four improvements have been introduced in the enhancement process for the major turnaround at Plaju Refinery. The first improvement involves conducting systematic risk assessment and potential-hazard mitigation studies for SIMOPS tasks before task execution, as opposed to the previous on-site approach. The second improvement includes the completion of SIMOPS Job Mitigation and Work Matrices Sheets, which were often neglected in the past. The third improvement emphasizes raising comprehensive awareness among workers/contractors of potential hazards and mitigation strategies for SIMOPS tasks before and during the major turnaround. The final improvement is the introduction of a daily program for inspecting and observing work in progress on SIMOPS tasks. Prior to these improvements, there was no established program for monitoring ongoing activities related to SIMOPS tasks during the turnaround. This study elucidates the steps taken to enhance SIMOPS within Pertamina, drawing from the experience of Plaju Refinery as a guide. A real case study from our experience in the operational unit is provided. In conclusion, these efforts are essential for the success of the first-ever major turnaround at Plaju Refinery, with the SIMOPS strategy serving as a central component. Based on these experiences, enhancements have been made to Pertamina's official Internal Guidelines for Executing SIMOPS Risk Mitigation, benefiting all Pertamina units. Keywords: process safety management, turnaround, oil refinery, risk assessment
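A HEMP-style screening of a turnaround task list can be sketched as a likelihood x severity risk matrix that flags the tasks needing SIMOPS mitigation before execution. The 5x5 scale, the high-risk threshold, and the task entries below are illustrative assumptions, not Pertamina's actual rating scheme:

```python
def risk_rating(likelihood, severity):
    """Score on a simple 5x5 risk matrix (1-25)."""
    if not (1 <= likelihood <= 5 and 1 <= severity <= 5):
        raise ValueError("likelihood and severity must be integers from 1 to 5")
    return likelihood * severity

def needs_simops_mitigation(task, high_threshold=15):
    """Flag a task whose matrix score reaches the assumed high-risk band."""
    return risk_rating(task["likelihood"], task["severity"]) >= high_threshold

tasks = [
    {"id": "T-0001", "likelihood": 2, "severity": 3},  # low risk: proceed
    {"id": "T-0002", "likelihood": 4, "severity": 4},  # high risk: mitigate first
    {"id": "T-0003", "likelihood": 3, "severity": 5},  # high risk: mitigate first
]
high_risk = [t["id"] for t in tasks if needs_simops_mitigation(t)]
print(high_risk)  # ['T-0002', 'T-0003']
```

Applied to the full 1583-task job list, a screen of this kind is how a small high-risk subset (22 tasks in the study) is isolated for dedicated mitigation sheets.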
63 Phytochemical Analysis and in vitro Biological Activities of an Ethyl Acetate Extract from the Peel of Punica granatum L. var. Dente di Cavallo
Authors: Silvia Di Giacomo, Marcello Locatelli, Simone Carradori, Francesco Cacciagrano, Chiara Toniolo, Gabriela Mazzanti, Luisa Mannina, Stefania Cesa, Antonella Di Sotto
Abstract:
Hyperglycemia represents the main pathogenic factor in the development of diabetes complications and has been found to be associated with mitochondrial dysfunction and oxidative stress, which in turn increase cell dysfunction. Therefore, counteracting oxidative species appears to be a suitable strategy for preventing hyperglycemia-induced cell damage and supporting the pharmacotherapy of diabetes and metabolic diseases. The antidiabetic potential of many food sources has been linked to the presence of polyphenolic metabolites, particularly flavonoids such as quercetin and its glycosylated form rutin. In line with this evidence, in the present study, we assayed the potential anti-hyperglycemic activity of an ethyl acetate extract from the peel of Punica granatum L. var. Dente di Cavallo (PGE), a fruit well known to traditional medicine for the beneficial properties of its edible juice. The effect of the extract on glucidic metabolism was evaluated by assessing its ability to inhibit α-amylase and α-glucosidase, two digestive enzymes responsible for the hydrolysis of dietary carbohydrates: their inhibition can delay carbohydrate digestion and reduce glucose absorption, thus representing an important strategy for the management of hyperglycemia. The ability of PGE to block the formation of advanced glycation end-products (AGEs), whose accumulation is known to be responsible for diabetic vascular complications, was also studied. The iron-reducing and iron-chelating activities, which are the primary mechanisms by which AGE inhibitors stop their metal-catalyzed formation, were evaluated as possible antioxidant mechanisms. Lastly, the phenolic content of PGE was characterized by chromatographic and spectrophotometric methods. Our results showed the ability of PGE to inhibit the α-amylase enzyme with a potency similar to that of the positive control: the IC₅₀ values were 52.2 (CL 27.7 - 101.2) µg/ml and 35.6 (CL 22.8 - 55.5) µg/ml for acarbose and PGE, respectively.
PGE also inhibited the α-glucosidase enzyme with about 25-fold higher potency than the positive controls acarbose and quercetin. Furthermore, the extract exhibited ferrous and ferric ion chelating ability, with maximum effects of 82.1% and 80.6%, respectively, at a concentration of 250 µg/ml, and reducing properties, reaching a maximum effect of 80.5% at a concentration of 10 µg/ml. Lastly, PGE was found to inhibit AGE production (maximum inhibition of 82.2% at a concentration of 1000 µg/ml), although with lower potency with respect to the positive control rutin. The phytochemical analysis of PGE displayed the presence of high levels of total polyphenols, tannins, and flavonoids, among which ellagic acid, gallic acid and catechin were identified. Altogether, these data highlight the ability of PGE to control carbohydrate metabolism at different levels, both by inhibiting the metabolic enzymes and by affecting AGE formation, likely through chelating mechanisms. It is also noteworthy that pomegranate peel, although a waste product of juice production, can be regarded as a nutraceutical source. In conclusion, the present results suggest a possible role of PGE as a remedy for preventing hyperglycemia complications and encourage further in vivo studies. Keywords: anti-hyperglycemic activity, antioxidant properties, nutraceuticals, polyphenols, pomegranate
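The potency comparisons above rest on IC50 values, the concentration giving 50% enzyme inhibition. A minimal sketch using a simple one-site inhibition model follows; only the two α-amylase IC50 values come from the abstract, and the Hill slope of 1 is an assumption for illustration:

```python
def percent_inhibition(conc, ic50, hill=1.0):
    """One-site inhibition model: %I = 100 * c^h / (c^h + IC50^h)."""
    return 100.0 * conc ** hill / (conc ** hill + ic50 ** hill)

# IC50 values (ug/ml) reported in the abstract for alpha-amylase inhibition.
ic50_acarbose, ic50_pge = 52.2, 35.6
# A lower IC50 means less extract is needed for 50% inhibition, i.e. higher potency.
potency_ratio = ic50_acarbose / ic50_pge
print(percent_inhibition(35.6, ic50_pge), round(potency_ratio, 2))
```

At its own IC50 concentration the model returns 50% inhibition by construction, which is a convenient sanity check on any fitted dose-response curve.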
62 Essential Oils of Polygonum L. Plants Growing in Kazakhstan and Their Antibacterial and Antifungal Activity
Authors: Dmitry Yu. Korulkin, Raissa A. Muzychkina
Abstract:
Bioactive substances of plant origin can be an advanced means of combined therapy for inflammation. The main advantages of medicinal plants are the mildness and breadth of their therapeutic effect on the organism, the absence of side effects and complications even if used continuously, and high tolerability by patients. Moreover, medicinal plants are often the only and/or most cost-effective sources of natural biologically active substances and medicines. Along with other biologically active groups of chemical compounds, essential oils with a wide range of pharmacological effects have become well established in medical practice. Essential oil was obtained by hydrodistillation of the air-dried aerial parts of Polygonum L. plants using a Clevenger apparatus. The qualitative composition of the essential oils was analyzed by chromatography-mass spectrometry using an Agilent 6890N apparatus. The qualitative analysis is based on the comparison of retention times and full mass spectra with the respective data on components of reference oils and pure compounds, if there were any, and with the data of the mass-spectral libraries Wiley 7th edition and NIST 02. The main components of the essential oils are, for: Polygonum amphibium L. - γ-terpinene, borneol, piperitol, 1,8-cineole, α-pinene, linalool, terpinolene and sabinene; Polygonum minus Huds. Fl. Angl. – linalool, terpinolene, camphene, borneol, 1,8-cineole, α-pinene, 4-terpineol and 1-octen-3-ol; Polygonum alpinum All. – camphene, sabinene, 1-octen-3-ol, 4-carene, p- and o-cymol, γ-terpinene, borneol, α-terpineol; Polygonum persicaria L. - α-pinene, sabinene, γ-terpinene, 4-carene, 1,8-cineole, borneol, 4-terpineol. Antibacterial activity was assessed against strains of the gram-positive bacteria Staphylococcus aureus, Bacillus subtilis and Streptococcus agalactiae, against the gram-negative strain Escherichia coli, and against the yeast fungus Candida albicans using the agar diffusion method.
The reference drugs were gentamicin for bacteria and nystatin for the yeast fungus Candida albicans. It was shown that Polygonum L. essential oils have a moderate antibacterial effect against gram-positive microorganisms and weak antifungal activity against the yeast fungus Candida albicans. In the second stage of our research, the wound-healing properties of an ointment containing 3% essential oil were studied on a flat dermal wound model. The speed of wound healing in rats of the different groups was judged by periodically assessing the wound area. During the wound-healing study, no disturbances were observed in either group in the general condition and behavior of the animals, food intake, or excretion. The wound-healing action of the 3% ointment based on Polygonum L. essential oil and polyethylene glycol is comparable with that of the reference substances. Since more favorable healing dynamics were observed in the experimental group than in the control group, the tested ointment can be deemed promising for further detailed study as a wound-healing agent. Keywords: antibacterial, antifungal, bioactive substances, essential oils, isolation, Polygonum L.
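The wound-area assessment described above is usually summarized as percent closure relative to the initial wound. A minimal sketch follows; the area measurements are invented for illustration, not data from the rat study:

```python
def percent_closure(initial_area, current_area):
    """Wound closure relative to the initial wound area, in percent."""
    return 100.0 * (initial_area - current_area) / initial_area

# Illustrative wound areas (mm^2) measured on successive days for one animal.
areas = {0: 100.0, 3: 78.0, 7: 41.0, 14: 6.0}
for day, area in areas.items():
    print(f"day {day:2d}: {percent_closure(areas[0], area):5.1f}% closed")
```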
Procedia PDF Downloads 532
61 Holistic Urban Development: Incorporating Both Global and Local Optimization
Authors: Christoph Opperer
Abstract:
The rapid urbanization of modern societies and the need for sustainable urban development demand innovative solutions that meet both individual and collective needs while addressing environmental concerns. To address these challenges, this paper explores the potential of spatial and energetic/ecological optimization to enhance the performance of urban settlements, at both the architectural and the urban scale. The study focuses on the application of biological principles and self-organization processes in urban planning and design, aiming to achieve a balance between ecological performance, architectural quality, and individual living conditions. The research adopts a case study approach, focusing on a 10-hectare brownfield site in the south of Vienna. The site is surrounded by a small-scale built environment, making it an appropriate starting point for the research and design process. However, the selected urban form is not a prerequisite for the proposed design methodology, as the findings can be applied to various urban forms and densities. The methodology involves dividing the overall building mass and program into individual small housing units. A computational model has been developed to optimize the distribution of these units, considering factors such as solar exposure/radiation, views, privacy, proximity to sources of disturbance (such as noise), and minimal internal circulation areas. The model also ensures that existing vegetation and buildings on the site are preserved and incorporated into the optimization and design process. The model allows for simultaneous optimization at two scales, architectural and urban design, which have traditionally been addressed sequentially. This holistic design approach leads to individual and collective benefits, resulting in urban environments that foster a balance between ecology and architectural quality. 
The results of the optimization process demonstrate a seemingly random distribution of housing units that is, in fact, a densified hybrid between traditional garden settlements and allotment settlements. This urban typology was selected for its compatibility with the surrounding urban context, although the presented methodology can be extended to other forms of urban development and density levels. The benefits of this approach are threefold. First, it allows for the determination of an ideal housing distribution that optimizes solar radiation for each building density level, essentially extending the concept of sustainable building to the urban scale. Second, the method enhances living quality by considering the orientation and positioning of individual functions within each housing unit, achieving optimal views and privacy. Third, the algorithm's flexibility and robustness facilitate the efficient implementation of urban development with various stakeholders, architects, and construction companies without compromising its performance. The core of the research is the application of global and local optimization strategies to create efficient design solutions. By considering both the performance of individual units and the collective performance of the urban aggregation, we ensure an optimal balance between private and communal benefits. By promoting a holistic understanding of urban ecology and integrating advanced optimization strategies, our methodology offers a sustainable and efficient solution to the challenges of modern urbanization.
Keywords: sustainable development, self-organization, ecological performance, solar radiation and exposure, daylight, visibility, accessibility, spatial distribution, local and global optimization
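The kind of unit-distribution optimization described above can be illustrated with a minimal sketch. The grid size, the number of units, the solar-scoring rule and the privacy weight below are all assumptions for demonstration, not the authors' actual model, and a greedy heuristic stands in for their more elaborate optimization:

```python
import random

# Hypothetical illustration: place housing units on a site grid so that a
# combined score of solar exposure (south-facing rows score higher in this toy
# model) and privacy (penalty for adjacent occupied cells) is maximized.
# All weights and the scoring model are assumptions, not the authors' method.

GRID = 10          # 10 x 10 site grid
N_UNITS = 20       # number of housing units to place

def solar_score(row):
    # the south edge (higher row index) gets more exposure in this toy model
    return row / (GRID - 1)

def privacy_penalty(cell, occupied):
    r, c = cell
    neighbors = [(r + dr, c + dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                 if (dr, dc) != (0, 0)]
    return 0.3 * sum(1 for n in neighbors if n in occupied)

def score(cell, occupied):
    return solar_score(cell[0]) - privacy_penalty(cell, occupied)

def greedy_layout(seed=0):
    random.seed(seed)
    cells = [(r, c) for r in range(GRID) for c in range(GRID)]
    occupied = set()
    for _ in range(N_UNITS):
        free = [c for c in cells if c not in occupied]
        occupied.add(max(free, key=lambda c: score(c, occupied)))
    return occupied

layout = greedy_layout()
total = sum(score(c, layout - {c}) for c in layout)
print(len(layout), round(total, 2))
```

A global optimizer (evolutionary or annealing-based) would replace the greedy loop, but the structure of the objective, individual unit scores aggregated into a collective site score, is the same idea the abstract describes.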
60 Hydrogen Production Using an Anion-Exchange Membrane Water Electrolyzer: Mathematical and Bond Graph Modeling
Authors: Hugo Daneluzzo, Christelle Rabbat, Alan Jean-Marie
Abstract:
Water electrolysis is one of the most advanced technologies for producing hydrogen and can easily be combined with electricity from different sources. Under the influence of electric current, water molecules are split into oxygen and hydrogen. The production of hydrogen by water electrolysis favors the integration of renewable energy sources into the energy mix by compensating for their intermittency: energy is stored when production exceeds demand and released during off-peak production periods. Among the various electrolysis technologies, anion exchange membrane (AEM) electrolyzer cells are emerging as a reliable technology for water electrolysis. Modeling and simulation are effective tools to save time, money, and effort during the optimization of operating conditions and the investigation of the design. They become even more important when dealing with multiphysics dynamic systems. One such system is the AEM electrolysis cell, which involves complex physico-chemical reactions. Once developed, models may be used to understand the underlying mechanisms and to control the system and detect its flaws. Several modeling methods have been proposed; they can be separated into two main approaches, namely equation-based modeling and graph-based modeling. The former approach is less user-friendly and difficult to update, as it represents the system by ordinary or partial differential equations. The latter approach is more user-friendly and allows a clear representation of physical phenomena: the system is depicted by connecting subsystems, so-called blocks, through ports based on their physical interactions, making it suitable for multiphysics systems. Among graphical modeling methods, the bond graph is receiving increasing attention for being domain-independent and relying on the energy exchange between the components of the system. 
At present, few studies have investigated the modeling of AEM systems. A mathematical model and a bond graph model were used in previous studies to model electrolysis cell performance. In this study, experimental data from the literature were simulated in OpenModelica using both bond graph and equation-based approaches. The polarization curves at different operating conditions obtained by both approaches were compared with experimental ones. Both models predicted the polarization curves satisfactorily, with error margins lower than 2% for the equation-based model and lower than 5% for the bond graph model. The activation polarization of the hydrogen evolution reaction (HER) and the oxygen evolution reaction (OER) was behind the voltage loss in the AEM electrolyzer, whereas ion conduction through the membrane resulted in the ohmic loss. Therefore, highly active electro-catalysts are required for both HER and OER, while high-conductivity AEMs are needed to effectively lower the ohmic losses. The bond graph simulation of the polarization curve at various operating temperatures illustrated that voltage increases with temperature owing to the technology of the membrane. Simulating the polarization curve allows designs to be tested virtually, reducing the cost and time of experimental testing and improving design optimization. Further improvements can be made by implementing the bond graph model in a real power-to-gas-to-power scenario.
Keywords: hydrogen production, anion-exchange membrane, electrolyzer, mathematical modeling, multiphysics modeling
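The voltage-loss decomposition described above (activation overpotentials for HER/OER plus an ohmic term) can be sketched as a minimal equation-based polarization model. All parameter values below are illustrative assumptions, not the fitted parameters from the study:

```python
import math

# Minimal sketch of an equation-based polarization model for an AEM
# electrolysis cell: cell voltage = reversible voltage + activation
# overpotentials (Tafel form, for HER and OER) + ohmic loss through the
# membrane. Parameter values are illustrative assumptions, not fitted data.

E_REV = 1.23                   # V, reversible voltage near 25 degC
B_OER, I0_OER = 0.060, 1e-7    # Tafel coefficient (V), exchange current (A/cm^2)
B_HER, I0_HER = 0.040, 1e-4
R_OHM = 0.15                   # ohm*cm^2, area-specific membrane + contact resistance

def cell_voltage(i):
    """Cell voltage (V) at current density i (A/cm^2)."""
    act = B_OER * math.log(i / I0_OER) + B_HER * math.log(i / I0_HER)
    return E_REV + act + i * R_OHM

# sweep the current density to trace a polarization curve
for step in range(1, 11):
    i = step / 10
    print(f"{i:.1f} A/cm^2 -> {cell_voltage(i):.3f} V")
```

The same structure underlies both modeling approaches compared in the study; the bond graph formulation expresses these loss terms as energy-exchanging elements rather than explicit equations.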
59 Risk Factors Associated to Low Back Pain among Active Adults: Cross-Sectional Study among Workers in Tunisian Public Hospital
Authors: Lamia Bouzgarrou, Irtyah Merchaoui, Amira Omrane, Salma Kammoun, Amine Daafa, Neila Chaari
Abstract:
Background: Currently, low back pain (LBP) is one of the most prevalent public health problems, causing severe morbidity in a large portion of the adult population. It is also associated with heavy direct and indirect costs, in particular related to absenteeism and early retirement. Health care workers are one of the occupational groups most affected by LBP, especially because of biomechanical and psycho-organizational risk factors. The current study aims to investigate risk factors associated with chronic low back pain among Tunisian caregivers in university hospitals. Methods: Cross-sectional study conducted over a period of 14 months, with a representative sample of caregivers, matched according to age, sex and work department, in two university hospitals in Tunisia. Data collection included items related to socio-professional characteristics, the work ability index (WAI), occupational stress (Karasek job strain questionnaire), quality of life (SF-12), the Nordic musculoskeletal disorders questionnaire, and an examination of spine flexibility (finger-to-ground distance, sit-to-stand maneuver and equilibrium test). Results: In total, 293 caregivers were included, with a mean age of 42.64 ± 11.65 years. A body mass index (BMI) exceeding 30 was noted in 20.82% of cases. Moreover, no regular physical activity was practiced in 51.9% of cases. In contrast, domestic activity equal to or exceeding 20 hours per week was reported by 38.22%. Job strain was noted in 19.79% of cases, and work capacity was 'low' to 'average' in 27.64% of subjects. During the 12 months preceding the investigation, 65% of caregivers complained of LBP, with pain rated as 'severe' or 'extremely severe' in 54.4% of cases and with a frequency of discomfort exceeding one episode per week in 58.52% of cases. On physical examination, the mean finger-to-ground distance was 7.10 ± 7.5 cm. 
Caregivers assigned to 'high workload' services had the highest prevalence of LBP (77.4%) compared with other categories of hospital services, though the relationship was not statistically significant (P = 0.125). LBP prevalence was statistically correlated with female gender (p = 0.01) and impaired work capacity (p < 10⁻³). Moreover, an increased finger-to-ground distance was statistically associated with LBP (p = 0.05), advanced age (p < 10⁻³), professional seniority (p < 10⁻³) and a BMI ≥ 25 (p = 0.001). Furthermore, the other physical tests of spine flexibility were performed worse by workers suffering from LBP, with a statistically significant difference (sit-to-stand maneuver (p = 0.03); equilibrium test (p = 0.01)). In the multivariate analysis, only domestic activity exceeding 20 h/week, a degraded physical quality of life, and the presence of neck pain were significantly correlated with LBP. The final model explains 36.7% of the variability of this complaint. Conclusion: Our results highlight the high prevalence of LBP among caregivers in Tunisian public hospitals and identify both professional and individual predisposing factors. The preliminary analysis supports the necessity of a multidimensional approach to prevent this critical occupational and public health problem. The preventive strategy should be based both on the improvement of working conditions and on lifestyle modifications and the reinforcement of healthy behaviors in these active populations.
Keywords: health care workers, low back pain, prevention, risk factor
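The multivariate step above can be sketched with a toy logistic model of the same shape: LBP status regressed on the three retained predictors, with a pseudo-R² playing the role of the reported "explained variability". The data, prevalences and effect sizes below are synthetic assumptions for illustration only, not the study's data:

```python
import math, random

# Illustrative sketch (synthetic data, assumed effect sizes): a multivariate
# logistic model of the kind used in the study, where LBP status is regressed
# on domestic workload, physical quality of life and neck pain. McFadden's
# pseudo-R^2 stands in for the reported "explained variability".

random.seed(1)

def simulate(n=500):
    rows = []
    for _ in range(n):
        domestic = random.random() < 0.4     # >= 20 h/week domestic activity
        low_pql = random.random() < 0.3      # degraded physical quality of life
        neck = random.random() < 0.35        # neck pain
        logit = -0.8 + 1.1 * domestic + 0.9 * low_pql + 1.3 * neck
        p = 1 / (1 + math.exp(-logit))
        rows.append(([1.0, float(domestic), float(low_pql), float(neck)],
                     1.0 if random.random() < p else 0.0))
    return rows

def fit(rows, lr=0.05, epochs=300):
    w = [0.0] * 4                            # intercept + 3 predictors
    for _ in range(epochs):
        for x, y in rows:
            p = 1 / (1 + math.exp(-sum(wi * xi for wi, xi in zip(w, x))))
            for i in range(4):
                w[i] += lr * (y - p) * x[i]  # stochastic gradient ascent
    return w

def loglik(rows, w):
    ll = 0.0
    for x, y in rows:
        p = 1 / (1 + math.exp(-sum(wi * xi for wi, xi in zip(w, x))))
        ll += y * math.log(p) + (1 - y) * math.log(1 - p)
    return ll

rows = simulate()
w = fit(rows)
ll_full = loglik(rows, w)
ybar = sum(y for _, y in rows) / len(rows)
ll_null = sum(y * math.log(ybar) + (1 - y) * math.log(1 - ybar) for _, y in rows)
print("McFadden pseudo-R^2:", round(1 - ll_full / ll_null, 3))
```

In practice such a model would be fitted with a statistics package; the sketch only shows how the retained predictors and the explained-variability figure relate.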
58 A Tool to Provide Advanced Secure Exchange of Electronic Documents through Europe
Authors: Jesus Carretero, Mario Vasile, Javier Garcia-Blas, Felix Garcia-Carballeira
Abstract:
Supporting cross-border secure and reliable exchange of data and documents and promoting data interoperability are critical for Europe to enhance sectors such as eFinance, eJustice and eHealth. This work presents the status and results of the European project MADE, a research project funded by the Connecting Europe Facility Programme, to provide secure e-invoicing and e-document exchange systems among European countries in compliance with the eIDAS Regulation (Regulation EU 910/2014 on electronic identification and trust services). The main goal of MADE is to develop six new AS4 Access Points and SMPs in Europe to provide secure document exchange using the eDelivery DSI (Digital Service Infrastructure) between both private and public entities. Moreover, the project demonstrates the feasibility and value of the solution through several months of interoperability among the providers of the six partners in different EU countries. To achieve these goals, we have followed a methodology that first sets a common background for the requirements in the partner countries and the European regulations. Then, the partners implemented access points in each country, including their service metadata publisher (SMP), to give their clients access to the pan-European network. Finally, we set up interoperability tests with the other access points of the consortium. The tests include the use of each entity's production-ready information systems that process the data, to confirm all steps of the data exchange. For the access points, we chose AS4 over other existing alternatives because it supports multiple payloads, native web services, pulling facilities, lightweight client implementations, modern crypto algorithms, and more authentication types, such as username-password, X.509 and SAML authentication. 
The main contribution of the MADE project is to open the path for European companies to use eDelivery services with cross-border exchange of electronic documents following PEPPOL (Pan-European Public Procurement Online), based on the e-SENS AS4 profile. It also includes the development and integration of new components, the integration of new and existing logging and traceability solutions, and maintenance tool support for PKI. Moreover, we have found that most companies are still not ready to support these profiles; further efforts will be needed to promote adoption of this technology among companies. The consortium includes nine partners. Of these, two are research institutions: University Carlos III of Madrid (coordinator) and Universidad Politecnica de Valencia. The other seven (EDICOM, BIZbrains, Officient, Aksesspunkt Norge, eConnect, LMT group, Unimaze) are private entities specialized in secure delivery of electronic documents and information integration brokerage in their respective countries. To achieve cross-border operability, they will include AS4 and SMP services in their platforms according to the EU Core Service Platform. The MADE project is instrumental in testing the feasibility of cross-border document eDelivery in Europe. If successful, not only e-invoices but many other types of documents will be securely exchanged through Europe, forming the base to extend the network to the whole of Europe. This project has been funded under the Connecting Europe Facility Agreement number INEA/CEF/ICT/A2016/1278042, Action No. 2016-EU-IA-0063.
Keywords: security, e-delivery, e-invoicing, e-document exchange, trust
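One concrete piece of the PEPPOL/eDelivery machinery mentioned above is SMP discovery: an access point derives the DNS name of a participant's Service Metadata Publisher from the participant identifier. The "B-" plus MD5 naming rule follows the PEPPOL SML specification; the example identifier and the SML zone below are assumptions for illustration, not values from the MADE deployment:

```python
import hashlib

# Hedged sketch of the PEPPOL/eDelivery SMP discovery step: the access point
# hashes the lowercased participant identifier with MD5 and builds the DNS
# name under which that participant's SMP is registered in the SML zone.

def smp_hostname(participant_id: str,
                 scheme: str = "iso6523-actorid-upis",
                 sml_zone: str = "edelivery.tech.ec.europa.eu") -> str:
    digest = hashlib.md5(participant_id.lower().encode("utf-8")).hexdigest()
    return f"B-{digest}.{scheme}.{sml_zone}"

# Example GLN-based participant identifier (hypothetical value):
print(smp_hostname("0088:5798000000001"))
```

A DNS lookup of the returned name then yields the SMP endpoint, where the sender retrieves the receiver's AS4 access point URL and certificate before transmitting the document.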
57 Dynamic Facades: A Literature Review on Double-Skin Façade with Lightweight Materials
Authors: Victor Mantilla, Romeu Vicente, António Figueiredo, Victor Ferreira, Sandra Sorte
Abstract:
Integrating dynamic facades into contemporary building design is shaping a new era of energy efficiency and user comfort. These innovative facades, often built with lightweight construction systems and materials, offer responsiveness and adaptability to the dynamic behavior of the outdoor climate. In regions characterized by high fluctuations in daily temperature, the ability to adapt to environmental changes is therefore of paramount importance and a challenge. This paper presents a thorough review of the state of the art on double-skin facades (DSF), focusing on lightweight solutions for the external envelope. Dynamic facades featuring elements like movable shading devices, phase change materials, and advanced control systems have revolutionized the built environment. They offer a promising path toward reducing energy consumption while enhancing occupant well-being. Lightweight construction systems are increasingly becoming the choice for these facade solutions, offering benefits such as reduced structural loads and reduced construction waste, improving overall sustainability. However, the performance of dynamic facades based on low-thermal-inertia solutions in climatic contexts with high thermal amplitude still needs research, since their ability to adapt translates into variability and manipulation of the thermal transmittance coefficient (U-value). Emerging technologies can enable such dynamic thermal behavior through innovative materials, changes in geometry, and control to optimize facade performance. These innovations allow a facade system to respond to shifting outdoor temperature, relative humidity, wind, and solar radiation conditions, ensuring that energy efficiency and occupant comfort are met together. 
This review addresses the potential configurations of double-skin facades, particularly concerning their responsiveness to seasonal variations in temperature, with a specific focus on the challenges posed by winter and summer conditions. Notably, the design of a dynamic facade is significantly shaped by several pivotal factors, including the choice of materials, geometric considerations, and the implementation of effective monitoring systems. Within the realm of double-skin facades, various configurations are explored, encompassing exhaust-air, supply-air, and thermal buffering mechanisms. The review places specific emphasis on the thermal dynamics at play, closely examining the impact of factors such as the color of the facade, the slat angle, and the positioning and type of shading devices employed in these innovative architectural structures. This paper synthesizes current research trends in the field, presenting case studies and technological innovations to give a comprehensive understanding of the cutting-edge solutions propelling the evolution of building envelopes in the face of climate change, namely double-skin lightweight solutions that create sustainable, adaptable, and responsive building envelopes. As indicated in the review, flexible and lightweight systems have broad applicability across all building sectors, and there is a growing recognition that retrofitting existing buildings may emerge as the predominant approach.
Keywords: adaptive, control systems, dynamic facades, energy efficiency, responsive, thermal comfort, thermal transmittance
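The U-value that a dynamic facade manipulates follows the standard layered-resistance calculation, U = 1 / (R_si + Σ dᵢ/λᵢ + R_se). The build-up below is a minimal sketch: the layer thicknesses and conductivities are illustrative assumptions for a lightweight double-skin wall, not values from the reviewed studies, and "venting the cavity" is modeled crudely as a thinner effective insulating layer:

```python
# Minimal sketch of the thermal transmittance (U-value) calculation behind
# an adaptive facade: U = 1 / (R_si + sum(d_i / lambda_i) + R_se).
# Surface resistances follow EN ISO 6946; layer data are assumed values.

R_SI, R_SE = 0.13, 0.04   # internal/external surface resistances, m^2K/W

def u_value(layers):
    """layers: list of (thickness_m, conductivity_W_per_mK) tuples."""
    r_total = R_SI + R_SE + sum(d / lam for d, lam in layers)
    return 1.0 / r_total

closed = [(0.012, 0.21), (0.10, 0.035), (0.012, 0.21)]  # board / insulation / board
vented = [(0.012, 0.21), (0.04, 0.035), (0.012, 0.21)]  # cavity partly bypassed

print(f"closed cavity U = {u_value(closed):.2f} W/m^2K")
print(f"vented cavity U = {u_value(vented):.2f} W/m^2K")
```

Switching between the two states, insulating in winter, heat-rejecting in summer, is exactly the U-value variability that the review identifies as the key adaptive capability of low-inertia double-skin envelopes.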
56 Ecotoxicological Test-Battery for Efficiency Assessment of TiO2 Assisted Photodegradation of Emerging Micropolluants
Authors: Ildiko Fekete-Kertesz, Jade Chaker, Sylvain Berthelot, Viktoria Feigl, Monika Molnar, Lidia Favier
Abstract:
There has been growing concern about emerging micropollutants in recent years because of the possible environmental and health risks posed by these substances, which are released into the environment as a consequence of anthropogenic activities. Among them, pharmaceuticals are currently not considered under water quality regulations; however, concern about their potential environmental effects has grown in recent years. Since these compounds can be detected in natural water matrices, the currently applied water treatment processes are evidently not efficient enough for their effective elimination. To date, advanced oxidation processes (AOPs) are considered highly competitive water treatment technologies for the removal of those organic micropollutants not treatable by conventional techniques due to their high chemical stability and/or low biodegradability. AOPs such as (photo)chemical oxidation and heterogeneous photocatalysis have proven their potential in degrading harmful organic compounds in aqueous matrices. However, some of these technologies generate reaction by-products, which can be even more toxic to aquatic organisms than the parent compounds. Thus, target compound removal does not necessarily result in the removal of toxicity. Therefore, to evaluate process efficiency, determining the toxicity and ecotoxicity of the reaction intermediates is crucial for estimating the environmental risk of such techniques. In this context, the present study investigates the effectiveness of TiO2-assisted photodegradation for the removal of emerging water contaminants. Two drugs, losartan (used in high blood pressure medication) and levetiracetam (used to treat epilepsy), were considered in this work. The photocatalytic reactions were carried out with a commercial catalyst usually employed in photocatalysis. 
Moreover, the toxicity of the by-products generated during the process was assessed with various ecotoxicological methods applying aquatic test organisms from different trophic levels. A series of experiments was performed to evaluate the toxicity of untreated and treated solutions applying the Aliivibrio fischeri bioluminescence inhibition test, the Tetrahymena pyriformis proliferation inhibition test, the Daphnia magna lethality and immobilization tests, and the Lemna minor growth inhibition test. The applied ecotoxicological methodology sensitively indicated the toxic effects of the treated and untreated water samples; hence, the applied test battery is suitable for the ecotoxicological characterization of TiO2-based photocatalytic water treatment technologies and for indicating the formation of toxic by-products from the parent chemical compounds. The results clearly showed that TiO2-assisted photodegradation was more efficient in the elimination of losartan than of levetiracetam. It was also observed that the treated levetiracetam solutions had a more severe effect on the applied test organisms. A possible explanation would be the production of levetiracetam by-products that are more toxic than the parent compound. The increased toxicity and the risk of formation of toxic metabolites represent one possible limitation to the implementation of photocatalytic treatment using TiO2 for the removal of losartan and levetiracetam. Our results proved that the battery of ecotoxicity tests used in this work can be a promising investigation tool for the environmental risk assessment of photocatalytic processes.
Keywords: aquatic micropollutants, ecotoxicology, nano titanium dioxide, photocatalysis, water treatment
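The endpoints in such test batteries are commonly quantified as percent inhibition relative to an untreated control, for example the growth rate in the Lemna minor test or luminescence in the Aliivibrio fischeri test. The sketch below shows that calculation; the numeric values are invented for illustration and are not the study's measurements:

```python
# Hedged sketch of a standard ecotoxicological endpoint: percent inhibition
# of a measured response (growth rate, luminescence, mobility) in a treated
# sample relative to the untreated control. Values are assumed, not measured.

def percent_inhibition(control, treated):
    """Inhibition (%) relative to the control response."""
    return 100.0 * (control - treated) / control

control_growth = 0.35   # e.g. Lemna minor specific growth rate, 1/day (assumed)
treated_growth = 0.21   # growth rate after exposure to treated solution (assumed)

print(f"inhibition: {percent_inhibition(control_growth, treated_growth):.1f} %")
```

Comparing this figure between the untreated drug solution and the photocatalytically treated one is what reveals whether by-products add toxicity rather than remove it, as observed here for levetiracetam.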
55 Traditional Wisdom of Indigenous Vernacular Architecture as Tool for Climate Resilience Among PVTG Indigenous Communities in Jharkhand, India
Authors: Ankush, Harshit Sosan Lakra, Rachita Kuthial
Abstract:
Climate change poses significant challenges to vulnerable communities, particularly indigenous populations in ecologically sensitive regions. Jharkhand, located in the heart of India, is home to several indigenous communities, including the Particularly Vulnerable Tribal Groups (PVTGs). The indigenous architecture of the region functions as a significant reservoir of climate adaptation wisdom. The study explores an architectural analysis encompassing construction materials, construction techniques, design principles, climate responsiveness, cultural relevance, adaptation, and integration with the environment, together with traditional wisdom that has evolved through generations. Rooted in cultural and socioeconomic traditions, this wisdom has allowed these communities to thrive in a variety of climatic zones, including hot and dry, humid, and hilly terrains, and to withstand the test of time. Despite their historical resilience to adverse climatic conditions, PVTG tribal communities face new and amplified challenges due to the accelerating pace of climate change. A significant research void exists in assimilating their traditional practices and local wisdom into contemporary climate resilience initiatives. Most studies place emphasis on technologically advanced solutions, often ignoring the invaluable indigenous local knowledge that can complement and enhance these efforts. This research gap highlights the need to bridge the disconnect between indigenous knowledge and contemporary climate adaptation strategies. The study aims to explore and leverage indigenous knowledge of vernacular architecture as a strategic tool for enhancing climatic resilience among the PVTGs of the region. The first objective is to understand the traditional wisdom of vernacular architecture by analyzing and documenting the distinct architectural practices and cultural significance of PVTG communities, emphasizing construction techniques, materials and spatial planning. 
The second objective is to develop culturally sensitive climatic resilience strategies based on the findings on vernacular architecture, employing a multidisciplinary research approach that encompasses ethnographic fieldwork and climate data assessment considering multiple variables such as temperature variations, precipitation patterns, extreme weather events and climate change reports. This will be a tailor-made solution integrating indigenous knowledge with modern technology and sustainable practices. By involving indigenous communities in the process, the research aims to ensure that the developed strategies are practical, culturally appropriate, and accepted. To foster long-term resilience against the global issue of climate change, traditional wisdom can bridge the gap between present needs and future aspirations, offering sustainable solutions that will empower PVTG communities. Moreover, the study emphasizes the significance of preserving and reviving traditional architectural wisdom for enhancing climatic resilience. It also highlights the need for cooperative endeavors of communities, stakeholders, policymakers, and researchers to encourage the integration of traditional knowledge into modern sustainable design methods. Through these efforts, this research will contribute not only to the well-being of PVTG communities but also to the broader global effort to build a more resilient and sustainable future. In this way, indigenous communities such as the PVTGs of Jharkhand can achieve climatic resilience while respecting and safeguarding the cultural heritage and distinctive characteristics of the native population.
Keywords: vernacular architecture, climate change, resilience, PVTGs, Jharkhand, indigenous people, India
54 Transport Hubs as Loci of Multi-Layer Ecosystems of Innovation: Case Study of Airports
Authors: Carolyn Hatch, Laurent Simon
Abstract:
Urban mobility and the transportation industry are undergoing a transformation, shifting from an auto production-consumption model that has dominated since the early 20th century towards new forms of personal and shared multi-modality [1]. This is shaped by key forces such as climate change, which has induced a shift in production and consumption patterns and efforts to decarbonize and improve transport services through, for instance, the integration of vehicle automation, electrification and mobility sharing [2]. Advanced innovation practices and platforms for experimentation and validation of new mobility products and services that are increasingly complex and multi-stakeholder-oriented are shaping this new world of mobility. Transportation hubs – such as airports - are emblematic of these disruptive forces playing out in the mobility industry. Airports are emerging as the core of innovation ecosystems on and around contemporary mobility issues, and increasingly recognized as complex public/private nodes operating in many societal dimensions [3,4]. These include urban development, sustainability transitions, digital experimentation, customer experience, infrastructure development and data exploitation (for instance, airports generate massive and often untapped data flows, with significant potential for use, commercialization and social benefit). Yet airport innovation practices have not been well documented in the innovation literature. This paper addresses this gap by proposing a model of airport innovation that aims to equip airport stakeholders to respond to these new and complex innovation needs in practice. 
The methodology involves: 1 – a literature review bringing together key research and theory on airport innovation management, open innovation and innovation ecosystems in order to evaluate airport practices through an innovation lens; 2 – an international benchmarking of leading airports and their innovation practices, including such examples as Aéroports de Paris, Schiphol in Amsterdam, Changi in Singapore, and others; and 3 – semi-structured interviews with airport managers on key aspects of organizational practice, facilitated through a close partnership with the Airports Council International (ACI), a major stakeholder in this research project. Preliminary results find that the most successful airports are those that have shifted to a multi-stakeholder, platform ecosystem model of innovation. The recent entrance of new actors into airports (Google, Amazon, Accor, Vinci, Airbnb and others) has forced the opening of organizational boundaries to share and exchange knowledge with a broader set of ecosystem players. This has also led to new forms of governance and intermediation by airport actors to connect complex, highly distributed knowledge, along with new kinds of inter-organizational collaboration, co-creation and collective ideation processes. Leading airports in the case study have demonstrated a unique capacity to force traditionally siloed activities to "think together", "explore together" and "act together", to share data, contribute expertise and pioneer new governance approaches and collaborative practices. In so doing, they have successfully integrated these many disruptive change pathways and steered their implementation and coordination towards innovative mobility outcomes, with positive societal, environmental and economic impacts. 
This research has implications for: 1 - innovation theory, 2 - urban and transport policy, and 3 - organizational practice - within the mobility industry and across the economy.
Keywords: airport management, ecosystem, innovation, mobility, platform, transport hubs
53 Black-Box-Optimization Approach for High Precision Multi-Axes Forward-Feed Design
Authors: Sebastian Kehne, Alexander Epple, Werner Herfs
Abstract:
A new method for the optimal selection of components for multi-axes forward-feed drive systems is proposed, in which the choice of motors, gear boxes and ball screw drives is optimized. Essential here is the synchronization of the electrical and mechanical frequency behavior of all axes, because even advanced controls (like H∞ controls) can only control a small part of the mechanical modes – namely only those of observable and controllable states whose value can be derived from the positions of external linear length measurement systems and/or rotary encoders on the motor or gear box shafts. A further problem is the unknown processing forces, such as cutting forces in machine tools during normal operation, which make estimation and control via an observer even more difficult. To start with, the open-source Modelica Feed Drive Library, which was developed at the Laboratory for Machine Tools and Production Engineering (WZL), is extended from a single-axis design to a multi-axes design. It is capable of simulating the mechanical, electrical and thermal behavior of permanent magnet synchronous machines with inverters, different gear boxes and ball screw drives in a mechanical system. To keep the calculation time down, analytical equations are used for the field- and torque-producing equivalent circuit, heat dissipation and mechanical torque at the shaft. As a first step, a small machine tool with a working area of 635 x 315 x 420 mm is taken apart, and the mechanical transfer behavior is measured with an impulse hammer and acceleration sensors. From the frequency transfer functions, a mechanical finite element model is built up, which is reduced with substructure coupling to a mass-damper system that models the most important modes of the axes. The model is implemented with the Modelica Feed Drive Library and validated by further relative measurements between machine table and spindle holder with a piezo actuator and acceleration sensors. 
In the next step, the choice of possible components from motor catalogues is limited by derived analytical formulas based on well-known metrics for the effective power and torque of the components. The Modelica simulation is run with different permanent magnet synchronous motors, gearboxes, and ball screw drives from different suppliers. To speed up the optimization, different black-box optimization methods (surrogate-based, gradient-based, and evolutionary) are tested on the case. The chosen objective is to minimize the integral of the position deviations when a step input is applied to the position controllers of the different axes; small values indicate highly dynamic axes. In each iteration (the evaluation of one set of components), the control variables are adjusted automatically to keep the overshoot below 1%. It is found that the order of the components in the optimization problem has a strong impact on the speed of the black-box optimization. An approach for efficient black-box optimization of multi-axis designs is presented in the last part. The authors would like to thank the German Research Foundation (DFG) for financial support of the project “Optimierung des mechatronischen Entwurfs von mehrachsigen Antriebssystemen (HE 5386/14-1 | 6954/4-1)” (English: Optimization of the Mechatronic Design of Multi-Axis Drive Systems).
Keywords: ball screw drive design, discrete optimization, forward feed drives, gear box design, linear drives, machine tools, motor design, multi-axes design
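A minimal sketch of the component-selection loop described above, under strong simplifying assumptions: each axis is reduced to a single second-order position loop, the catalogue values are invented, the search is exhaustive rather than surrogate- or gradient-based, and the gain tuning uses the closed-form second-order overshoot formula in place of the paper's automatic controller adjustment.

```python
import itertools
import numpy as np

# Hypothetical catalogue entries (inertias in kg*m^2); not real supplier data.
MOTORS = {"M1": 0.002, "M2": 0.004, "M3": 0.008}
GEARS = {"G1": 0.001, "G2": 0.002}

def max_gain_under_overshoot(J, c, limit=0.01):
    """Largest Kp of the loop Kp/(J s^2 + c s + Kp) whose damping ratio
    zeta = c / (2 sqrt(Kp J)) keeps the step overshoot
    exp(-pi zeta / sqrt(1 - zeta^2)) at or below `limit`."""
    log_l = -np.log(limit)
    zeta_min = log_l / np.sqrt(np.pi**2 + log_l**2)
    return c**2 / (4.0 * zeta_min**2 * J)

def step_iae(J, c, kp, t_end=2.0, dt=1e-3):
    """Integral of |1 - x(t)| for a unit position step, via symplectic Euler."""
    x, v, iae = 0.0, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        v += (kp * (1.0 - x) - c * v) / J * dt
        x += v * dt
        iae += abs(1.0 - x) * dt
    return iae

def best_combination(c=0.05):
    """Evaluate each motor/gearbox pair: tune Kp to respect the 1%
    overshoot constraint, then rank pairs by step-response IAE."""
    scores = {}
    for (m, Jm), (g, Jg) in itertools.product(MOTORS.items(), GEARS.items()):
        kp = max_gain_under_overshoot(Jm + Jg, c)
        scores[(m, g)] = step_iae(Jm + Jg, c, kp)
    best = min(scores, key=scores.get)
    return best, scores[best]
```

In this toy setting the lowest total inertia always wins; in the paper's full Modelica model, the electrical, thermal, and multi-mode mechanical couplings make the landscape far less trivial, which is why black-box methods are needed.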
Procedia PDF Downloads 286
52 Italian Speech Vowels Landmark Detection through the Legacy Tool 'xkl' with Integration of Combined CNNs and RNNs
Authors: Kaleem Kashif, Tayyaba Anam, Yizhi Wu
Abstract:
This paper introduces a methodology for advancing Italian speech vowel landmark detection within the domain of distinctive feature-based speech recognition. The study presents a comprehensive enhancement of the legacy tool 'xkl' by integrating combined convolutional neural networks (CNNs) and recurrent neural networks (RNNs). The integration incorporates reassigned spectrogram methods, enabling detailed acoustic analysis and, in particular, more precise vowel formant estimation. Built on these representations, the proposed combined CNN-RNN model achieves high precision and robustness in landmark detection, a substantial performance improvement over conventional methods, and stands as a state-of-the-art solution for distinctive feature-based speech recognition systems. The model is equipped with specialized temporal embeddings, self-attention mechanisms, and positional embeddings, allowing it to capture intricate dependencies within Italian speech vowels and making it highly adaptable within the distinctive feature domain. Furthermore, an advanced temporal modeling approach employs Bayesian temporal encoding to refine the measurement of inter-landmark intervals. Comparative analysis against state-of-the-art models reveals a substantial improvement in accuracy, highlighting the robustness and efficacy of the proposed methodology. 
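The attention machinery mentioned above can be sketched in a few lines. The NumPy snippet below shows generic sinusoidal positional embeddings and single-head scaled dot-product self-attention over a sequence of frame features; it illustrates the standard mechanisms only, not the authors' architecture, whose exact embedding scheme and layer sizes are not given in the abstract.

```python
import numpy as np

def positional_encoding(seq_len, d_model):
    """Standard sinusoidal positional embeddings (sin on even dimensions,
    cos on odd); the paper's own embedding scheme is not specified here."""
    pos = np.arange(seq_len)[:, None]
    i = np.arange(d_model)[None, :]
    angle = pos / np.power(10000.0, (2 * (i // 2)) / d_model)
    return np.where(i % 2 == 0, np.sin(angle), np.cos(angle))

def self_attention(x, wq, wk, wv):
    """Single-head scaled dot-product self-attention over frame features x
    of shape (T, d). Returns the attended features and the weight matrix."""
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(k.shape[-1])
    # Numerically stable row-wise softmax.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v, weights
```

In a CNN-RNN pipeline like the one described, `x` would be the CNN's per-frame feature map (with positional encodings added) and the attended output would feed the recurrent layers.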
Upon rigorous testing on a database (LaMIT) of speech recorded in a silent room by four native Italian speakers, the landmark detector demonstrates strong performance, achieving a 95% true detection rate and a 10% false detection rate. Most missed landmarks occurred in proximity to reduced vowels. These promising results underscore the robust identifiability of landmarks within the speech waveform and establish the feasibility of employing a landmark detector as the front end of a speech recognition system. The integration of reassigned spectrograms, CNNs, RNNs, and Bayesian temporal encoding marks a significant advance in Italian speech vowel landmark detection, offering accuracy, adaptability, and sophistication at the intersection of deep learning and distinctive feature-based speech recognition. This work contributes to the broader scientific community a methodologically rigorous framework for improving landmark detection accuracy in Italian speech vowels, and it establishes a foundation for future advances in speech signal processing, with potential for practical application across domains requiring robust speech recognition systems.
Keywords: landmark detection, acoustic analysis, convolutional neural network, recurrent neural network
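Reporting true and false detection rates presupposes a matching rule between detected and reference landmark times. A minimal sketch of such an evaluation is shown below; the greedy one-to-one matching and the 20 ms tolerance are assumptions for illustration, not the paper's actual scoring protocol.

```python
def score_landmarks(reference_ms, detected_ms, tol_ms=20.0):
    """Greedily match detected landmark times to reference times within
    +/- tol_ms (each detection used at most once). Returns the
    (true detection rate, false detection rate) pair. The tolerance
    value is an assumed default, not taken from the paper."""
    ref = sorted(reference_ms)
    det = sorted(detected_ms)
    used = [False] * len(det)
    hits = 0
    for r in ref:
        for j, d in enumerate(det):
            if not used[j] and abs(d - r) <= tol_ms:
                used[j] = True
                hits += 1
                break
    tdr = hits / len(ref) if ref else 0.0          # matched references
    fdr = (len(det) - hits) / len(det) if det else 0.0  # unmatched detections
    return tdr, fdr
```

Under this rule, a 95% true detection rate with a 10% false detection rate means 95% of reference landmarks found a detection within tolerance while 10% of detections matched nothing.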
Procedia PDF Downloads 63
51 A Fresh Approach to Learn Evidence-Based Practice, a Prospective Interventional Study
Authors: Ebtehal Qulisy, Geoffrey Dougherty, Kholoud Hothan, Mylene Dandavino
Abstract:
Background: For more than 200 years, journal clubs (JCs) have been used to teach the fundamentals of critical appraisal and evidence-based practice (EBP). However, JC curricula face important challenges, including poor sustainability, insufficient time to prepare for and conduct the activities, and trainees’ lack of skill and self-efficacy with critical appraisal. Andragogy principles and modern technology could help EBP be taught in more relevant, modern, and interactive ways. Method: We propose a fresh educational activity to teach EBP. The sessions are designed to encourage collaborative and experiential learning and require no advance preparation by participants. Each session lasts 60 minutes and is adaptable to in-person, virtual, or hybrid contexts. Sessions are structured around a worksheet with three educational objectives: “1. Identify a Clinical Conundrum”, “2. Compare and Contrast Current Guidelines”, and “3. Choose a Recent Journal Article”. Sessions begin with a facilitator’s short presentation of a clinical scenario highlighting a “grey zone” in pediatrics. Trainees are placed in groups of two to four (depending on the number of participants) of varied training levels. The first task requires identifying a clinical conundrum (a situation with no clear answer, only a reasonable solution) related to the scenario. For the second task, trainees must identify two or three clinical guidelines. The last task requires trainees to find a journal article published in the last year that reports an update on the scenario’s topic. Participants may use their electronic devices throughout the session; our university’s full-text access to major journals facilitated this exercise. Results: Participants were a convenience sample of trainees in the inpatient services at the Montréal Children’s Hospital, McGill University. 
Sessions were conducted as part of an existing weekly academic activity and facilitated by pediatricians with experience in critical appraisal. There were 28 participants across 4 sessions held during Spring 2022. Time was allocated at the end of each session to collect participants’ feedback via a self-administered online survey. Of the 22 responses, 41% (n=9) were from pediatric residents, 22.7% (n=5) from family medicine residents, 31.8% (n=7) from medical students, and 4.5% (n=1) from a nurse practitioner. Four respondents participated in more than one session. “Satisfied” rates were 94.7% for session format, 100% for topic selection, 89.5% for time allocation, and 84.3% for worksheet structure. 60% of participants felt that including the sessions during the clinical ward rotation was “Feasible.” Regarding self-efficacy, participants reported being “Confident” in the tasks as follows: 89.5% for identifying a relevant conundrum, 94.8% for the compare-and-contrast task, and 84.2% for identifying a published update. All participants “Agreed” that the sessions were effective for learning EBP, and all would recommend them for further teaching. Conclusion: We developed a modern approach to teaching EBP that was enjoyed by participants of all levels, who also found it a useful learning experience. Our approach addresses known JC challenges by being relevant to clinical care, fostering active engagement without requiring preparation, using available technology, and being adaptable to hybrid contexts.
Keywords: medical education, journal clubs, post-graduate teaching, andragogy, experiential learning, evidence-based practice
Procedia PDF Downloads 115
50 Well Inventory Data Entry: Utilization of Developed Technologies to Progress the Integrated Asset Plan
Authors: Danah Al-Selahi, Sulaiman Al-Ghunaim, Bashayer Sadiq, Fatma Al-Otaibi, Ali Ameen
Abstract:
In light of recent changes affecting the Oil & Gas Industry, optimization measures have become imperative for all companies globally, including Kuwait Oil Company (KOC). To keep abreast of the dynamic market, a detailed Integrated Asset Plan (IAP) was developed to drive optimization across the organization, facilitated through the in-house developed software “Well Inventory Data Entry” (WIDE). This comprehensive, integrated approach centralized all planned asset components for better well planning and performance enhancement and enabled continuous improvement through performance tracking and midterm forecasting; this was previously hard to achieve with the various legacy methods in use. This paper briefly describes the methods successfully adopted to meet the company’s objective. IAPs were initially designed using computerized spreadsheets. However, as the captured data became more complex and the number of stakeholders requiring and updating this information grew, the need to automate the conventional spreadsheets became apparent. WIDE, already in use elsewhere in the company (namely, in the Workover Optimization project), was adapted to meet the dynamic requirements of the IAP cycle. With the addition of extensive features to enhance the planning process, the tool evolved into a centralized data hub from which all asset groups and technical support functions analyze and draw inferences, and WIDE became the company’s reference two-year operational plan. To achieve WIDE’s goal of operational efficiency, asset groups continuously enter their parameters in a series of predefined workflows, creating a structured process that flags risk factors and helps mitigate them. The tool assigns responsibilities to all stakeholders in a way that enables continuous updates of daily performance measures and supports operational use. 
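The predefined-workflow idea, where entries pass through a fixed sequence of checks and a failed check flags a risk rather than blocking the update, can be sketched as follows. All field names and checks are hypothetical; WIDE's actual data model and validation rules are internal to KOC.

```python
from dataclasses import dataclass, field

@dataclass
class PlanEntry:
    """Hypothetical planning record; WIDE's real schema is not public."""
    well_id: str
    asset_group: str
    rig_assigned: bool
    budget_approved: bool
    risks: list = field(default_factory=list)

# Predefined checks run in sequence; each flags a risk instead of
# rejecting the entry, so stakeholders can plan mitigation.
CHECKS = [
    ("rig not yet assigned", lambda e: not e.rig_assigned),
    ("budget approval pending", lambda e: not e.budget_approved),
]

def run_workflow(entries):
    """Apply every check to every entry; return entries needing mitigation."""
    for entry in entries:
        for label, failed in CHECKS:
            if failed(entry):
                entry.risks.append(label)
    return [e for e in entries if e.risks]
```

The flagged list is what a planning tool of this kind would surface to asset groups for follow-up, while clean entries flow straight into the consolidated plan.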
The reliable availability of WIDE, combined with its user-friendliness and easy accessibility, created a cross-functional platform on which all asset groups and technical support groups update their respective planning parameters. The home-grown tool was implemented across the entire company and tailored to feed into the internal processes of several stakeholders. Furthermore, change management and root cause analysis techniques captured the shortcomings of previous plans, which in turn improved the planning mechanisms already existing within the IAP. Detailed elucidation of the two-year plan flagged upcoming risks and foreseeable shortfalls. All results were translated into a series of developments that extended the tool’s capabilities beyond planning and into operations (such as asset production forecasts, KPI setting, and estimation of operational needs). This process exemplifies how advanced development techniques can seamlessly integrate the planning parameters of various asset and technical support groups. These techniques enhance the integration of planning-data workflows and ultimately lay the foundation for greater accuracy and reliability. Benchmarks based on a set of standard goals are established to ensure constant improvement in the efficiency of the entire planning and operational structure.
Keywords: automation, integration, value, communication
Procedia PDF Downloads 146