Search results for: multi layer
425 MCD-017: Potential Candidate from the Class of Nitroimidazoles to Treat Tuberculosis
Authors: Gurleen Kour, Mowkshi Khullar, B. K. Chandan, Parvinder Pal Singh, Kushalava Reddy Yumpalla, Gurunadham Munagala, Ram A. Vishwakarma, Zabeer Ahmed
Abstract:
New chemotherapeutic compounds against multidrug-resistant Mycobacterium tuberculosis (Mtb) are urgently needed to combat drug resistance in tuberculosis (TB). Apart from in-vitro potency against the target, physicochemical and pharmacokinetic properties play an imperative role in the drug discovery process. We have identified novel nitroimidazole derivatives with potential activity against Mycobacterium tuberculosis. One lead candidate, MCD-017, showed potent activity against the H37Rv strain (MIC = 0.5 µg/ml) and was evaluated further in the drug development process. Methods: Basic physicochemical parameters such as solubility and lipophilicity (LogP) were evaluated. Thermodynamic solubility was determined in PBS buffer (pH 7.4) using LC-MS/MS. The partition coefficient (Log P) of the compound was determined between octanol and phosphate-buffered saline (PBS at pH 7.4) at 25°C by the microscale shake-flask method. The compound followed Lipinski’s rule of five, which is predictive of good oral bioavailability, and was further evaluated for metabolic stability. In-vitro metabolic stability was determined in rat liver microsomes. The hepatotoxicity of the compound was also determined in the HepG2 cell line. The in-vivo pharmacokinetic profile of the compound after oral dosing was obtained using BALB/c mice. Results: The compound exhibited favorable solubility and lipophilicity. These physical and chemical properties served as the first assessment of its drug-like character. The compound obeyed Lipinski’s rule of five, with molecular weight < 500, number of hydrogen bond donors (HBD) < 5, and number of hydrogen bond acceptors (HBA) not more than 10. The Log P of the compound was less than 5; the compound is therefore predicted to exhibit good absorption and permeation. Pooled rat liver microsomes were prepared from rat liver homogenate for measuring metabolic stability.
The compound was metabolically stable, with 99% remaining intact. The compound did not exhibit cytotoxicity in HepG2 cells up to 40 µg/ml. The compound showed a good pharmacokinetic profile at an oral dose of 5 mg/kg, with a half-life (t1/2) of 1.15 hours, Cmax of 642 ng/ml, clearance of 4.84 ml/min/kg, and a volume of distribution of 8.05 L/kg. Conclusion: The emergence of multidrug-resistant (MDR) and extensively drug-resistant (XDR) tuberculosis emphasizes the need for novel drugs active against tuberculosis. Evaluating physicochemical and pharmacokinetic properties in the early stages of drug discovery is therefore required to reduce the attrition associated with poor drug exposure. In summary, MCD-017 may be considered a good candidate for further preclinical and clinical evaluations. Keywords: Mycobacterium tuberculosis, pharmacokinetics, physicochemical properties, hepatotoxicity
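The pharmacokinetic values quoted above (t1/2, Cmax, clearance) are typically derived from a plasma concentration-time curve by non-compartmental analysis; a minimal sketch of that calculation, using hypothetical concentration data rather than the study's own, might look like:

```python
import math

def pk_parameters(times_h, conc_ng_ml, dose_mg_kg):
    """Basic non-compartmental PK estimates from a plasma profile.

    times_h:     sampling times (hours)
    conc_ng_ml:  plasma concentrations (ng/ml)
    dose_mg_kg:  oral dose (mg/kg)
    All inputs here are illustrative, not the study's data.
    """
    # Cmax: highest observed concentration
    cmax = max(conc_ng_ml)
    # AUC(0-t) by the linear trapezoidal rule (ng*h/ml)
    auc = sum((t2 - t1) * (c1 + c2) / 2
              for (t1, c1), (t2, c2) in zip(zip(times_h, conc_ng_ml),
                                            zip(times_h[1:], conc_ng_ml[1:])))
    # Terminal elimination rate constant from the last two points (1/h)
    ke = (math.log(conc_ng_ml[-2]) - math.log(conc_ng_ml[-1])) \
         / (times_h[-1] - times_h[-2])
    t_half = math.log(2) / ke
    # Apparent oral clearance: dose / AUC, converted to ml/min/kg
    cl_ml_min_kg = (dose_mg_kg * 1e6 / auc) / 60
    return cmax, auc, t_half, cl_ml_min_kg
```

A richer sampling schedule and a regression over the full terminal phase would be used in practice; the two-point terminal slope here is only the simplest possible estimate.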
Procedia PDF Downloads 457
424 Radar on Bike: Coarse Classification Based on Multi-Level Clustering for Cyclist Safety Enhancement
Authors: Asma Omri, Noureddine Benothman, Sofiane Sayahi, Fethi Tlili, Hichem Besbes
Abstract:
Cycling, a popular mode of transportation, can also be perilous due to cyclists' vulnerability to collisions with vehicles and obstacles. This paper presents an innovative cyclist safety system based on radar technology, designed to offer real-time collision risk warnings to cyclists. The system incorporates a low-power radar sensor affixed to the bicycle and connected to a microcontroller. It leverages radar point cloud detections, a clustering algorithm, and a supervised classifier. These algorithms are optimized for efficiency to run on TI's AWR1843BOOST radar, using a coarse classification approach that distinguishes between cars, trucks, two-wheeled vehicles, and other objects. To enhance the performance of the clustering stage, we propose a 2-level clustering approach that builds on the state-of-the-art density-based spatial clustering of applications with noise (DBSCAN) algorithm. The idea is to first cluster objects based on their velocity and then refine the analysis by clustering based on position. The first level identifies groups of objects with similar velocities and movement patterns; the second level refines the analysis by considering the spatial distribution of these objects, taking the clusters from the first level as its input. Our proposed technique surpasses the classical DBSCAN algorithm in terms of clustering quality metrics, including homogeneity, completeness, and V-measure. Relevant cluster features are extracted and used to classify objects with an SVM classifier. Potential obstacles are identified based on their velocity and proximity to the cyclist. To optimize the system, we used the View of Delft dataset for hyperparameter selection and SVM classifier training. The system's performance was assessed on our own dataset of radar point clouds synchronized with a camera on an Nvidia Jetson Nano board.
The radar-based cyclist safety system is a practical solution that can be easily installed on any bicycle and connected to smartphones or other devices, offering real-time feedback and navigation assistance to cyclists. We conducted experiments to validate the system's feasibility, achieving 85% accuracy in the classification task. This system has the potential to significantly reduce the number of accidents involving cyclists and enhance their safety on the road. Keywords: 2-level clustering, coarse classification, cyclist safety, warning system based on radar technology
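The 2-level clustering described above (velocity first, then position) can be sketched with off-the-shelf DBSCAN; the parameter values and array layout below are illustrative placeholders, not the paper's tuned settings:

```python
# Two-level clustering sketch: cluster radar detections first by radial
# velocity, then refine each velocity group by spatial position.
# eps/min_samples values are hypothetical, not the paper's.
import numpy as np
from sklearn.cluster import DBSCAN

def two_level_dbscan(points, eps_vel=0.5, eps_pos=1.0, min_samples=3):
    """points: array of shape (N, 3) with columns (x, y, radial_velocity).

    Returns one label per detection; -1 marks noise.
    """
    labels = -np.ones(len(points), dtype=int)
    next_label = 0
    # Level 1: group detections with similar radial velocities
    vel_labels = DBSCAN(eps=eps_vel, min_samples=min_samples).fit_predict(
        points[:, 2:3])
    for v in set(vel_labels) - {-1}:
        idx = np.flatnonzero(vel_labels == v)
        # Level 2: split each velocity group by spatial proximity
        pos_labels = DBSCAN(eps=eps_pos, min_samples=min_samples).fit_predict(
            points[idx, :2])
        for p in set(pos_labels) - {-1}:
            labels[idx[pos_labels == p]] = next_label
            next_label += 1
    return labels
```

Two objects moving at the same speed but in different locations end up in one level-1 cluster and are then separated at level 2, which is the behavior the paper's refinement step targets.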
Procedia PDF Downloads 83
423 Exploring the Impact of Input Sequence Lengths on Long Short-Term Memory-Based Streamflow Prediction in Flashy Catchments
Authors: Farzad Hosseini Hossein Abadi, Cristina Prieto Sierra, Cesar Álvarez Díaz
Abstract:
Predicting streamflow accurately in flashy catchments prone to floods is a major research and operational challenge in hydrological modeling. Recent advancements in deep learning, particularly Long Short-Term Memory (LSTM) networks, have shown promise in achieving accurate hydrological predictions at daily and hourly time scales. In this work, a multi-timescale LSTM (MTS-LSTM) network was applied to regional hydrological prediction at an hourly time scale in flashy catchments. The case study includes 40 catchments located in the Basque Country, in the north of Spain. We explore the impact of hyperparameters on the performance of streamflow predictions given by regional deep learning models through systematic hyperparameter tuning, identifying optimal regional values for different catchments. The results show that predictions are highly accurate, with Nash-Sutcliffe efficiency (NSE) and Kling-Gupta efficiency (KGE) values as high as 0.98 and 0.97, respectively. A principal component analysis reveals that the hyperparameter controlling the length of the input sequence contributes most significantly to the prediction performance. The findings suggest that input sequence length has a crucial impact on model prediction performance. Moreover, catchment-scale analysis reveals distinct sequence lengths for individual basins, highlighting the necessity of customizing this hyperparameter to each catchment’s characteristics. This aligns with the well-known “uniqueness of place” paradigm. In prior research, tuning the length of the LSTM input sequence has received limited attention in the field of streamflow prediction. Initially, it was set to 365 days to capture a full annual water cycle. Later, limited systematic hyperparameter tuning using grid search suggested reducing it to 270 days.
However, despite the significance of this hyperparameter for hydrological predictions, studies have usually overlooked its tuning and fixed it at 365 days. This study, employing a simultaneous systematic hyperparameter tuning approach, emphasizes the critical role of input sequence length as an influential hyperparameter in configuring LSTMs for regional streamflow prediction. Proper tuning of this hyperparameter is essential for achieving accurate hourly predictions using deep learning models. Keywords: LSTMs, streamflow, hyperparameters, hydrology
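The input-sequence-length hyperparameter discussed above enters the pipeline when the forcing time series is cut into training windows; a minimal sketch of such a windowing helper (hypothetical, not the authors' code) shows why changing it reshapes the whole training set:

```python
import numpy as np

def make_sequences(series, targets, seq_len):
    """Build (X, y) training pairs for an LSTM.

    Each sample X[i] is the previous `seq_len` time steps of the forcing
    series, and y[i] is the streamflow at the step that follows the window.
    `seq_len` is the hyperparameter the abstract argues must be tuned
    per catchment (e.g. 270 vs 365 days).
    """
    X = np.stack([series[i:i + seq_len] for i in range(len(series) - seq_len)])
    y = targets[seq_len:]
    return X, y
```

Tuning then amounts to rebuilding (X, y) for each candidate length, e.g. `for seq_len in (270, 365): X, y = make_sequences(forcing, flow, seq_len)`, and comparing validation NSE/KGE across candidates.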
Procedia PDF Downloads 72
422 EverPro as the Missing Piece in the Plant Protein Portfolio to Aid the Transformation to Sustainable Food Systems
Authors: Aylin W Sahin, Alice Jaeger, Laura Nyhan, Gregory Belt, Steffen Münch, Elke K. Arendt
Abstract:
Our current food systems contribute to malnutrition, with more people becoming overweight or obese in the Western world. Additionally, our natural resources are under enormous pressure, and greenhouse gas emissions increase yearly, with a significant contribution to climate change. Hence, transforming our food systems is of the highest priority. Plant-based food products have a lower environmental impact than their animal-based counterparts, representing a more sustainable protein source. However, most plant-based protein ingredients, such as soy and pea, lack indispensable amino acids and are extremely limited in their functionality and, thus, in their food application potential. They are known to have low solubility in water and to change their properties during processing. The low solubility presents the biggest challenge in the development of milk alternatives, leading to inferior protein content and protein quality in the dairy alternatives on the market. Moreover, plant-based protein ingredients often possess an off-flavour, which makes them less attractive to consumers. EverPro, a plant-protein isolate originating from brewers' spent grain, the most abundant by-product of the brewing industry, represents the missing piece in the plant protein portfolio. With a protein content of >85%, it is of high nutritional value and includes all indispensable amino acids, closing the protein quality gap of plant proteins. Moreover, it possesses strong techno-functional properties: it is fully soluble in water (101.7 ± 2.9%), has a high fat absorption capacity (182.4 ± 1.9%), and has a foaming capacity superior to that of soy or pea protein. This makes EverPro suitable for a vast range of food applications. Furthermore, it does not change the viscosity of dispersions, such as beverages, during heating and cooling.
Besides its outstanding nutritional and functional characteristics, the production of EverPro has a much lower environmental impact than dairy or other plant protein ingredients. Life cycle assessment showed that EverPro has the lowest global warming impact compared to soy protein isolate, pea protein isolate, whey protein isolate, and egg white powder. It also contributes significantly less to freshwater eutrophication, marine eutrophication, and land use than the protein sources mentioned above. EverPro is a prime example of a sustainable ingredient and the type of plant protein the food industry has been waiting for: nutritious, multi-functional, and environmentally friendly. Keywords: plant-based protein, upcycled, brewers' spent grain, low environmental impact, highly functional ingredient
Procedia PDF Downloads 80
421 Disability Management and Occupational Health Enhancement Program in Hong Kong Hospital Settings
Authors: K. C. M. Wong, C. P. Y. Cheng, K. Y. Chan, G. S. C. Fung, T. F. O. Lau, K. F. C. Leung, J. P. C. Fok
Abstract:
Hospital Authority (HA) is the statutory body managing all public hospitals in Hong Kong. The Occupational Care Medicine Service (OMCS) is an in-house multi-disciplinary team responsible for injury management in HA. Hospital administrative services (AS) provide essential support in daily hospital operation to facilitate the provision of quality healthcare services. An occupational health enhancement program in the Tai Po Hospital (TPH) domestic service supporting unit (DSSU) was piloted in 2013 with a satisfactory outcome; the keys to success were staff engagement and management support. Riding on this success, the program was rolled out to five more AS departments of Alice Ho Miu Ling Nethersole Hospital (AHNH) and TPH in 2015. This paper highlights the indispensable components of a disability management and occupational health enhancement program in hospital settings. Objectives: 1) facilitate the workplace in supporting staff with health problems affecting their work; 2) enhance staff occupational health. Methodology: The hospital Occupational Safety and Health (OSH) team and the AS departments (catering, linen services, and DSSU) of AHNH and TPH worked closely with OMCS. Focus group meetings and worksite visits were conducted with frontline staff engagement. OSH hazards were identified and corresponding OSH improvement measures introduced, e.g., the invention of a high-dusting device to minimize working at height, a tailor-made linen cart to minimize back bending at work, etc. Specific MHO training was offered to each AS department. A disability management workshop was provided to supervisors to enhance their knowledge and skills in return-to-work (RTW) facilitation. Based on each injured staff member's health condition, OMCS provided work recommendations, and an RTW plan was formulated with the engagement of staff and their supervisors. Genuine communication among stakeholders, with expectation management, paved the way for realistic goal setting and the success of our program.
Outcome: After implementation of the program, a significant 26% drop in musculoskeletal disorder-related sickness absence days was noted in 2016 compared with the 2013-2015 average. The improvement was attributed to innovative OSH improvement measures, teamwork, staff engagement, and management support. Staff and supervisors' feedback was very encouraging, with 90% of respondents rating the program as very satisfactory in the evaluation. This program exemplified good work sharing among departments to support staff in need. Keywords: disability management, occupational health, return to work, occupational medicine
Procedia PDF Downloads 213
420 Machine Learning Approach for Automating Electronic Component Error Classification and Detection
Authors: Monica Racha, Siva Chandrasekaran, Alex Stojcevski
Abstract:
Engineering programs focus on promoting students' personal and professional development by ensuring that students acquire technical and professional competencies during their four-year studies. The traditional engineering laboratory provides an opportunity for students to "practice by doing," and laboratory facilities aid them in gaining insight into and understanding of their discipline. Due to rapid technological advancements and the COVID-19 outbreak, traditional labs have been transforming into virtual learning environments. Aim: To address the limitations of the physical laboratory, this research study uses a Machine Learning (ML) algorithm that interfaces with the HoloLens augmented reality headset and analyzes the captured images to classify and detect electronic components. The system automatically detects and classifies the position of all components on a breadboard using the ML algorithm. This research will assist first-year undergraduate engineering students in conducting laboratory practices without supervision. With the help of the HoloLens and the ML algorithm, students will reduce component placement errors on a breadboard and increase the efficiency of simple laboratory practices performed virtually. Method: Images of breadboards, resistors, capacitors, transistors, and other electrical components will be collected using the HoloLens 2 and stored in a database. The collected image dataset will then be used to train a machine learning model. The raw images will be cleaned, processed, and labeled to facilitate further analysis for component error classification and detection. For instance, when students conduct laboratory experiments, the HoloLens captures images of students placing different components on a breadboard. The images are forwarded to the server for detection in the background.
A hybrid Convolutional Neural Network (CNN) and Support Vector Machine (SVM) algorithm will be used to train the dataset for object recognition and classification. The convolution layers extract image features, which are then classified using an SVM. With adequately labeled training data, the model will predict and categorize component placements and assess whether students have placed components correctly. The data acquired through the HoloLens thus includes images of students assembling electronic components, and the system constantly checks whether students position components appropriately on the breadboard and connect them so that the circuit functions. When students misplace any component, the HoloLens predicts the error before the user places the component in the incorrect position and prompts students to correct their mistakes. This hybrid CNN-SVM approach to automating electronic component error classification and detection eliminates component connection problems and minimizes the risk of component damage. Conclusion: These augmented reality smart glasses powered by machine learning provide a wide range of benefits to supervisors, professionals, and students. They help customize the learning experience, which is particularly beneficial in large classes with limited time. The study determines the accuracy with which machine learning algorithms can forecast whether students are making the correct decisions and completing their laboratory tasks. Keywords: augmented reality, machine learning, object recognition, virtual laboratories
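The hybrid CNN-SVM idea (CNN extracts features, SVM classifies them) can be sketched end to end if a simple pooling function stands in for the trained CNN feature extractor; everything below (data, function names, image layout) is synthetic and illustrative, not the study's pipeline:

```python
# Hybrid feature-extractor -> SVM sketch. A real system would extract
# features with a trained CNN; here coarse average pooling plays that
# role so the SVM stage can be shown end to end on synthetic images.
import numpy as np
from sklearn.svm import SVC

def pooled_features(img, pool=4):
    """Stand-in for a CNN feature extractor: average-pool the image
    into a coarse grid and flatten it into a feature vector."""
    h, w = img.shape
    img = img[:h - h % pool, :w - w % pool]
    return img.reshape(h // pool, pool, w // pool, pool).mean(axis=(1, 3)).ravel()

rng = np.random.default_rng(0)

def sample(cls):
    """Synthetic 16x16 'breadboard' image: a bright blob whose location
    encodes the (hypothetical) component placement class."""
    img = rng.normal(0, 0.1, (16, 16))
    if cls == 0:
        img[:8, :8] += 1.0   # component placed top-left
    else:
        img[8:, 8:] += 1.0   # component placed bottom-right
    return img

# Build a labeled training set and fit the SVM classification stage
X = np.array([pooled_features(sample(c)) for c in [0, 1] * 50])
y = np.array([0, 1] * 50)
clf = SVC(kernel="rbf").fit(X, y)
```

In the described system, misplacement detection would then be `clf.predict(pooled_features(new_image)[None])` on each frame forwarded from the headset, with the CNN supplying far richer features than this pooling stand-in.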
Procedia PDF Downloads 137
419 Identification of Tangible and Intangible Heritage and Preparation of Conservation Proposal for the Historic City of Karanja Laad
Authors: Prachi Buche Marathe
Abstract:
Karanja Laad is a city located in the Vidarbha region of the state of Maharashtra, India. It has a wealth of tangible and intangible heritage in the form of monuments, precincts, groups of structures, festivals, and procession routes, which is neglected and being lost over time. Three religions (Hinduism, Islam, and Jainism), the town's association as the birthplace of Swami Nrusinha Saraswati, an exponent of the Datta Sampradaya sect, and the British colonial layer have shaped the culture and society of the place over the period. The architecture of Karanja Laad, combining all these historic layers, has enhanced its unique historic and cultural value. Karanja Laad is also a traditional historic trading town with a unique hybrid architectural style, and it has good potential to be developed as a tourist destination alongside its present image as a pilgrimage destination of the Datta Sampradaya. The aim of the research is to prepare a conservation proposal for the historic town along with a management framework. The objectives of the research are to study the evolution of Karanja, to identify the cultural resources and the issues of the historic core of the city, and to understand the Datta Sampradaya, the contribution of Saint Nrusinha Saraswati to the religious sect, and his association with Karanja as an important personality. The methodology of the research comprises site visits to Karanja, field surveys for documentation, and discussions and questionnaires with residents to establish the heritage and identify the potential and issues within the historic core, thereby establishing a case for conservation. Field surveys were conducted for a town-level study of land use, open spaces, occupancy, ownership, traditional commodities and communities, infrastructure, streetscapes, and precinct activities during festival and non-festival periods.
The building-level study includes establishing various typologies, such as residential, institutional, commercial, and religious buildings, and traditional infrastructure with mythological references, such as water bodies (kunds), lakes, and wells. One of the main issues is the loss of the traditional footprint and traditional open spaces due to new illegal encroachments and the lack of guidelines for new additions that would conserve the original fabric of the structures. Traditional commodities are being lost since skills such as pottery and painting are not promoted. Lavish bungalows like the Kannava mansion and the main temple wada (birthplace of the saint) have huge potential to be developed as museums through adaptive re-use, which would in turn attract many visitors during festivals and boost the economy. Festival procession routes can be identified and a heritage walk developed to highlight the traditional features of the town. The overall study resulted in a heritage map identifying 137 potential heritage structures. The conservation proposal is worked out at the town, precinct, and building levels, with interventions such as developing construction guidelines for further development and establishing a heritage cell consisting of architects and engineers for the upliftment of the existing rich heritage of Karanja. Keywords: built heritage, conservation, Datta Sampradaya, Karanja Laad, Swami Nrusinha Saraswati, procession route
Procedia PDF Downloads 161
418 Spatial Analysis in the Impact of Aquifer Capacity Reduction on Land Subsidence Rate in Semarang City between 2014-2017
Authors: Yudo Prasetyo, Hana Sugiastu Firdaus, Diyanah Diyanah
Abstract:
The lack of clean water supply in several big cities in Indonesia is a major problem in the development of urban areas. Moreover, in the city of Semarang, population density and the growth of physical development are very high. Continuous, large-scale extraction of groundwater (aquifer water) can result in a drastic year-by-year decline in the aquifer supply, especially given the intensity of aquifer use for household needs and industrial activities. This is worsened by the land subsidence phenomenon in some areas of Semarang. Therefore, dedicated research is needed to establish the spatial correlation between the decreasing aquifer capacity and the land subsidence phenomenon. This is necessary to confirm that land subsidence can be caused by a loss of pressure balance below the land surface. One method to observe the correlation pattern between the two phenomena is the application of remote sensing technology based on radar and optical satellites. Implementation of the Differential Interferometric Synthetic Aperture Radar (DInSAR) or Small Baseline Subset (SBAS) method on SENTINEL-1A satellite images acquired in the 2014-2017 period yields a proper pattern of land subsidence. These results are spatially correlated with the aquifer-decline pattern over the same time period. Survey results from 8 monitoring wells deeper than 100 m are used to observe the multi-temporal pattern of change in aquifer capacity. In addition, the aquifer capacity pattern is validated against two groundwater maps from the Ministry of Energy and Mineral Resources (ESDM) for the city of Semarang. Spatial correlation studies are conducted on the land subsidence and aquifer capacity patterns using overlay and statistical methods.
The results of this correlation show the extent to which the decrease in groundwater capacity influences the distribution and intensity of land subsidence in Semarang. In addition, the results are analyzed with respect to geological aspects, including hydrogeological parameters, soil types, aquifer types, and geological structures. The outcome of this study is a map correlating aquifer capacity with land subsidence in the city of Semarang over the 2014-2017 period. The results are expected to help the authorities in the future spatial planning of Semarang. Keywords: aquifer, differential interferometric synthetic aperture radar (DInSAR), land subsidence, small baseline subset (SBAS)
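Once the subsidence raster (from DInSAR/SBAS) and the aquifer-decline raster are co-registered on a common grid, the statistical part of the overlay can be as simple as a masked Pearson correlation; a minimal sketch (hypothetical helper, not the study's GIS workflow):

```python
import numpy as np

def spatial_correlation(subsidence, aquifer_decline):
    """Pearson correlation between two co-registered raster maps,
    ignoring cells where either layer has no data (NaN).

    subsidence:      2-D array, e.g. mm/yr of land subsidence per cell
    aquifer_decline: 2-D array of the same shape, e.g. m/yr of head decline
    """
    a = subsidence.ravel()
    b = aquifer_decline.ravel()
    mask = ~(np.isnan(a) | np.isnan(b))   # keep only cells valid in both layers
    return np.corrcoef(a[mask], b[mask])[0, 1]
```

A value near +1 would indicate that cells with stronger aquifer decline also subside faster; in practice the correlation would be examined per geological unit rather than over the whole city at once.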
Procedia PDF Downloads 183
417 Enhanced Dielectric and Ferroelectric Properties in Holmium Substituted Stoichiometric and Non-Stoichiometric SBT Ferroelectric Ceramics
Authors: Sugandha Gupta, Arun Kumar Jha
Abstract:
A large number of ferroelectric materials have been intensely investigated for applications in non-volatile ferroelectric random access memories (FeRAMs), piezoelectric transducers, actuators, pyroelectric sensors, high-dielectric-constant capacitors, etc. Bismuth-layered ferroelectric materials such as strontium bismuth tantalate (SBT) have attracted a lot of attention due to low leakage current, high remnant polarization, and high fatigue endurance up to 10^12 switching cycles. However, pure SBT suffers from several major limitations, such as high dielectric loss, low remnant polarization, high processing temperature, bismuth volatilization, etc. Significant efforts have been made to improve the dielectric and ferroelectric properties of this compound. It has been reported that the electrical properties vary with the Sr/Bi content ratio in the SrBi2Ta2O9 composition, i.e., non-stoichiometric compositions with Sr-deficient/Bi-excess content have higher remnant polarization values than stoichiometric SBT compositions. With the objective of improving the structural, dielectric, ferroelectric, and piezoelectric properties of the SBT compound, the rare earth holmium (Ho3+) was chosen as a donor cation for substitution onto the Bi2O2 layer. Moreover, hardly any report on holmium substitution in the stoichiometric SrBi2Ta2O9 and non-stoichiometric Sr0.8Bi2.2Ta2O9 compositions was available in the literature. The holmium-substituted SrBi2-xHoxTa2O9 (x = 0.00-2.0) and Sr0.8Bi2.2Ta2O9 (x = 0.0 and 0.01) compositions were synthesized by the solid-state reaction method. The synthesized specimens were characterized for their structural and electrical properties. X-ray diffractograms reveal single-phase layered perovskite structure formation for holmium content up to x ≤ 0.1 in the stoichiometric SBT samples. The granular morphology of the samples was investigated using a scanning electron microscope (Hitachi S-3700N).
Dielectric measurements were carried out using a precision LCR meter (Agilent 4284A) operating at an oscillation amplitude of 1 V. The variation of the dielectric constant with temperature shows that the Curie temperature (Tc) decreases with increasing holmium content. The specimen with x = 2.0, i.e., the bismuth-free specimen, has a very low dielectric constant and does not show any appreciable variation with temperature. The dielectric loss is reduced significantly by holmium substitution. Polarization-electric field (P-E) hysteresis loops were recorded using a P-E loop tracer based on a Sawyer-Tower circuit. The ferroelectric properties are observed to improve with Ho substitution: the holmium-substituted specimen exhibits an enhanced remnant polarization (Pr = 9.22 μC/cm²) compared with the holmium-free specimen (Pr = 2.55 μC/cm²). The piezoelectric coefficient (d33) was measured using a piezo meter system (Piezo Test PM300); holmium substitution is observed to enhance the piezoelectric coefficient. Further, the optimized holmium content (x = 0.01) of the stoichiometric SrBi2-xHoxTa2O9 composition was substituted into the non-stoichiometric Sr0.8Bi2.2Ta2O9 composition to obtain further enhanced structural and electrical characteristics. It is expected that a new class of ferroelectric materials, i.e., rare earth layered structured ferroelectrics (RLSF) derived from bismuth layered structured ferroelectrics (BLSF), will emerge, which can be used to replace static (SRAM) and dynamic (DRAM) random access memories with ferroelectric random access memories (FeRAMs). Keywords: dielectrics, ferroelectrics, piezoelectrics, strontium bismuth tantalate
Procedia PDF Downloads 210
416 Developing and Shake Table Testing of Semi-Active Hydraulic Damper as Active Interaction Control Device
Authors: Ming-Hsiang Shih, Wen-Pei Sung, Shih-Heng Tung
Abstract:
Semi-active control systems for structures under earthquake excitation are adaptable and require little energy. A DSHD (Displacement Semi-Active Hydraulic Damper) was previously developed by our research team, and shake table tests of this DSHD installed in a full-scale test structure demonstrated that the device brought its energy-dissipating performance into full play under earthquake excitation. The objective of this research is to develop a new AIC (Active Interaction Control device) and to evaluate its energy-dissipation capability by shake table testing. The proposed AIC converts an improved DSHD into an active interaction control device by the addition of an accumulator. The main concept of this energy-dissipating AIC is to use the interaction between an affiliated structure (sub-structure) and the protected structure (main structure) to transfer the input seismic force into the sub-structure, reducing the structural deformation of the main structure. This concept is tested using a full-scale multi-degree-of-freedom test structure, installed with the proposed AIC and subjected to external forces of various magnitudes, to examine the shock absorption influence of predictive control, sub-structure stiffness, synchronous control, non-synchronous control, and insufficient control position. The test results confirm: (1) the developed device diminishes the structural displacement and acceleration responses effectively; (2) even low-precision semi-active control achieved twice the seismic-proofing efficacy of the passive control method; (3) the active control method may avoid the negative influence of amplifying the acceleration response of the structure; (4) this AIC exhibits a time-delay problem, the same problem found in ordinary active control methods.
The proposed predictive control method can overcome this defect; (5) condition switching is an important characteristic of the control type. The test results show that synchronous control is easy to implement and avoids exciting a high-frequency response. These laboratory results confirm that the device developed in this research can use the mutual interaction between the subordinate structure and the main structure to be protected to transfer the quake energy applied to the main structure into the subordinate structure, so that the objective of minimizing the deformation of the main structure can be achieved. Keywords: DSHD (Displacement Semi-Active Hydraulic Damper), AIC (Active Interaction Control device), shake table test, full-scale structure test, sub-structure, main structure
Procedia PDF Downloads 519
415 Effect of Velocity-Slip in Nanoscale Electroosmotic Flows: Molecular and Continuum Transport Perspectives
Authors: Alper T. Celebi, Ali Beskok
Abstract:
Electroosmotic (EO) slip flows in nanochannels are investigated using non-equilibrium molecular dynamics (MD) simulations, and the results are compared with the analytical solution of the Poisson-Boltzmann and Stokes (PB-S) equations with slip contribution. The ultimate objective of this study is to show that the well-known continuum flow model can accurately predict the EO velocity profiles in nanochannels using the slip lengths and apparent viscosities obtained from force-driven flow simulations performed at various liquid-wall interaction strengths. EO flows of aqueous NaCl solution in silicon nanochannels are simulated under realistic electrochemical conditions within the validity region of the Poisson-Boltzmann theory. A physical surface charge density is determined for the nanochannels based on the dissociation of silanol functional groups on the channel surfaces at known salt concentration, temperature, and local pH. First, we present density profiles and ion distributions from equilibrium MD simulations, ensuring that the desired thermodynamic state and ionic conditions are satisfied. Next, force-driven nanochannel flow simulations are performed to predict the apparent viscosity of the ionic solution between charged surfaces and the slip lengths. Parabolic velocity profiles obtained from the force-driven flow simulations are fitted to a second-order polynomial, and the viscosity and slip lengths are quantified by comparing the coefficients of the fitted equation with the continuum flow model. The presence of a charged surface increases the viscosity of the ionic solution while decreasing the velocity-slip at the wall. Afterwards, EO flow simulations are carried out under a uniform electric field for different liquid-wall interaction strengths. The velocity profiles present finite slip near the walls, followed by a conventional viscous flow profile in the electrical double layer that reaches a bulk flow region in the center of the channel.
The EO flow is enhanced by increased slip at the walls, which depends on the wall-liquid interaction strength and the surface charge. MD velocity profiles are compared with the predictions of the slip-modified PB-S equation, where the slip length and apparent viscosity values are obtained from force-driven flow simulations in charged silicon nanochannels. Our MD results show good agreement with the analytical solutions at various slip conditions, verifying the validity of the PB-S equation in nanochannels as small as 3.5 nm. In addition, the continuum model normalizes the slip length with the Debye length instead of the channel height, which implies that the enhancement in EO flow is independent of the channel height. Further MD simulations performed at different channel heights also show that the flow enhancement due to slip is independent of the channel height. This is important because slip-enhanced EO flow is observable even in microchannel experiments by using a hydrophobic channel with large slip and high-conductivity solutions with a small Debye length. The present study provides an advanced understanding of EO flows in nanochannels. Correct characterization of nanoscale EO slip flow is crucial to discover the extent of well-known continuum models, which is required for various applications spanning from ion separation to drug delivery and bio-fluidic analysis.
Keywords: electroosmotic flow, molecular dynamics, slip length, velocity-slip
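The coefficient-matching step described above — fitting the force-driven velocity profile to a parabola, then reading off the apparent viscosity and slip length from the continuum (Poiseuille plus Navier slip) solution — can be sketched numerically. All numbers below are illustrative reduced units, not values from the study:

```python
import numpy as np

# Sketch: extract apparent viscosity and slip length from a force-driven
# nanochannel velocity profile by fitting a second-order polynomial.
h = 3.5          # channel height (reduced units, illustrative)
f = 1.0          # body force per unit volume
mu_true = 0.7    # "true" apparent viscosity
Ls_true = 0.5    # "true" slip length

y = np.linspace(-h / 2, h / 2, 51)
u_slip = Ls_true * f * h / (2 * mu_true)      # Navier slip: u_s = Ls * wall shear rate
u = f / (2 * mu_true) * ((h / 2) ** 2 - y ** 2) + u_slip

A, B, C = np.polyfit(y, u, 2)                 # u ≈ A y² + B y + C
mu_fit = -f / (2 * A)                         # viscosity from the quadratic coefficient
u_wall = A * (h / 2) ** 2 + B * (h / 2) + C   # fitted velocity at the wall
Ls_fit = u_wall * mu_fit / (f * h / 2)        # slip length from the wall shear rate
```

On a clean parabolic profile the recovered `mu_fit` and `Ls_fit` match the input values; with MD data the same comparison is made against the noisy fitted coefficients.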
414 Simo-syl: A Computer-Based Tool to Identify Language Fragilities in Italian Pre-Schoolers
Authors: Marinella Majorano, Rachele Ferrari, Tamara Bastianello
Abstract:
Recent technological advances allow innovative, multimedia screen-based assessment tools to be applied to test children's language and early literacy skills, monitor their growth over the preschool years, and test their readiness for primary school. A computer-based assessment tool offers several advantages over paper-based tools. Firstly, computer-based tools that use games, videos, and audio may be more motivating and engaging for children, especially for those with language difficulties. Secondly, computer-based assessments are generally less time-consuming than traditional paper-based assessments: this makes them less demanding for children and provides clinicians and researchers, but also teachers, with the opportunity to test children multiple times over the same school year and, thus, to monitor their language growth more systematically. Finally, while paper-based tools require offline coding, computer-based tools sometimes produce automatically calculated scores, yielding less subjective evaluations of the assessed skills and providing immediate feedback. Nonetheless, using computer-based assessment tools to test meta-phonological and language skills in children is not yet common practice in Italy. The present contribution aims to estimate the internal consistency of a computer-based assessment (i.e., the Simo-syl assessment). Sixty-three Italian pre-schoolers aged between 4;10 and 5;9 years were tested at the beginning of the last year of preschool through paper-based standardised tools on their lexical (Peabody Picture Vocabulary Test), morpho-syntactical (Grammar Repetition Test for Children), meta-phonological (Meta-Phonological skills Evaluation test), and phono-articulatory skills (non-word repetition). The same children were tested through the Simo-syl assessment on their phonological and meta-phonological skills (e.g., recognising syllables and vowels and reading syllables and words).
The internal consistency of the computer-based tool was acceptable (Cronbach's alpha = .799). Children's scores obtained in the paper-based assessment and scores obtained in each task of the computer-based assessment were correlated. Significant and positive correlations emerged between all the tasks of the computer-based assessment and the scores obtained in the CMF (r = .287 - .311, p < .05) and in the correct sentences in the RCGB (r = .360 - .481, p < .01); the non-word repetition standardised test correlated significantly with the reading tasks only (r = .329 - .350, p < .05). Further tasks should be included in the current version of Simo-syl to allow a comprehensive and multi-dimensional approach when assessing children. However, such a tool represents a good opportunity for teachers to identify language-related problems early, even in the school environment.
Keywords: assessment, computer-based, early identification, language-related skills
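For reference, Cronbach's alpha — the internal-consistency statistic reported above — is computed from an item-score matrix as k/(k−1)·(1 − Σ item variances / total-score variance). The scores below are a toy illustration, not the Simo-syl data:

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_subjects, n_items) matrix of task scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    sum_item_var = items.var(axis=0, ddof=1).sum()  # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)       # variance of subjects' total scores
    return k / (k - 1) * (1 - sum_item_var / total_var)

# toy data: 5 children, 3 tasks with broadly consistent scores
scores = np.array([[10,  9, 11],
                   [ 4,  5,  4],
                   [ 7,  8,  7],
                   [12, 11, 12],
                   [ 5,  6,  6]])
alpha = cronbach_alpha(scores)
```

With strongly co-varying items like these, alpha approaches 1; the .799 reported above sits in the conventionally "acceptable" band.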
413 Estimating Age in Deceased Persons from the North Indian Population Using Ossification of the Sternoclavicular Joint
Authors: Balaji Devanathan, Gokul G., Raveena Divya, Abhishek Yadav, Sudhir K. Gupta
Abstract:
Background: Age estimation is a common problem in administrative settings, medico-legal cases, and among athletes competing in different sports. Age estimation also arises in medico-legal problems in hospitals, in cases of criminal abortion, consent to surgery or a general physical examination, infanticide, impotence, sterility, etc. Progress in medical imaging has benefited forensic anthropology in various ways, most notably in the area of determining bone age. Multi-slice computed tomography is an efficient method for studying the epiphyseal union and other differences in the body's bones and joints. No significant database is available for Indians, so the authors performed this original study to obtain an Indian-based database. Methodology: The appearance and fusion of the ossification centres of the sternoclavicular joint were evaluated, and grades were assigned accordingly. Using MSCT scans, we examined the relationship between the age of the deceased and alterations in the sternoclavicular joint during appearance and union in 500 cases, 327 males and 173 females, in the age range of 0 to 25 years. Results: According to our research, the ossification centre for the medial end of the clavicle first appeared at mean ages of 18.5 years in males and 17.1 years in females, respectively. The mean ages of partial union were 20.4 and 20.2 years. The earliest age of complete fusion was 23 years for males and 22 years for females. For fusion of the sternebrae into one, the age range was 11–24 years for females and 17–24 years for males. The fusion of the third and fourth sternebrae was completed by 11 years. The fusions of the first and second and of the second and third sternebrae occur by the age of 17 years. Furthermore, correlation and reliability analyses were carried out, which yielded significant results.
Conclusion: With numerous exceptions, the projected values are consistent with many of the previously developed age charts. These variations may be caused by ethnic or regional heterogeneity in the ossification pattern of the population under study. The pattern of bone maturation did not significantly differ between the sexes, according to the study. The study's age range was 0 to 25 years, and for obvious reasons, the majority of the cases fell in the last five years of that range, i.e., between 20 and 25 years of age. This resulted in a comparatively smaller study population for the 12–18 age group, where age estimation is crucial because of current legal requirements. Specialized PMCT research in this age range will be required to produce population-standard charts for age estimation. The medial end of the clavicle is one of several ossification foci being thoroughly investigated, since they are challenging to assess with a traditional X-ray examination. Combining the two has been shown to give valid results when determining whether an individual is above eighteen years of age.
Keywords: age estimation, sternoclavicular joint, medial clavicle, computed tomography
412 Role of Platelet Volume Indices in Diabetes Related Vascular Angiopathies
Authors: Mitakshara Sharma, S. K. Nema, Sanjeev Narang
Abstract:
Diabetes mellitus (DM) is a group of metabolic disorders characterized by metabolic abnormalities, chronic hyperglycaemia and long-term macrovascular and microvascular complications. Vascular complications are due to platelet hyperactivity and dysfunction, increased inflammation, altered coagulation and endothelial dysfunction. A large proportion of patients with Type II DM suffer from preventable vascular angiopathies, and there is a need to develop risk-factor modifications and interventions to reduce the impact of complications. These complications are attributed to platelet activation, recognised by an increase in Platelet Volume Indices (PVI), including Mean Platelet Volume (MPV) and Platelet Distribution Width (PDW). The current study is a prospective analytical study conducted over 2 years. Out of 1100 individuals, 930 fulfilled the inclusion criteria and were segregated into three groups on the basis of glycosylated haemoglobin (HbA1c): (a) diabetic, (b) non-diabetic and (c) subjects with impaired fasting glucose (IFG), with 300 individuals in the IFG and non-diabetic groups and 330 individuals in the diabetic group. Further, the diabetic group was divided into two groups on the basis of the presence or absence of known diabetes-related vascular complications. Samples for HbA1c and PVI were collected using ethylenediaminetetraacetic acid (EDTA) as anticoagulant and processed on a SYSMEX X-800i autoanalyser. The study revealed a gradual increase in PVI from non-diabetics to IFG to diabetics. PVI were markedly increased in diabetic patients. The MPV and PDW of diabetics, IFG subjects and non-diabetics were (17.60 ± 2.04) fl, (11.76 ± 0.73) fl, (9.93 ± 0.64) fl and (19.17 ± 1.48) fl, (15.49 ± 0.67) fl, (10.59 ± 0.67) fl respectively, with a significant p-value (0.00) and a significant positive correlation (MPV-HbA1c r = 0.951; PDW-HbA1c r = 0.875).
The MPV and PDW of subjects with diabetes-related complications were higher than those of subjects without them, at (17.51 ± 0.39) fl & (15.14 ± 1.04) fl and (20.09 ± 0.98) fl & (18.96 ± 0.83) fl respectively, with a significant p-value (0.00). There was a significant positive correlation between PVI and duration of diabetes across the groups (MPV-HbA1c r = 0.951; PDW-HbA1c r = 0.875). However, a significant negative correlation was found between glycaemic levels and total platelet count (PC-HbA1c r = -0.164). This is a multi-parameter and comprehensive study with an adequately powered design. It can be concluded from our study that PVI are extremely useful and important indicators of impending vascular complications in all patients with deranged glycaemic control. The introduction of automated cell counters has made PVI available as routine parameters. PVI are a useful means of identifying larger, more active platelets, which play an important role in the development of micro- and macroangiopathic complications of diabetes leading to mortality and morbidity. PVI can be used as cost-effective markers to predict and prevent impending vascular events in patients with diabetes mellitus, especially in developing countries like India. PVI, if incorporated into protocols for the management of diabetes, could revolutionize care and curtail the ever-increasing cost of patient management.
Keywords: diabetes, IFG, HbA1C, MPV, PDW, PVI
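The correlation figures quoted above (e.g. MPV-HbA1c r = 0.951) are plain Pearson coefficients between paired measurements. As a hedged illustration with invented values (not the study's data):

```python
import numpy as np

# hypothetical paired measurements, NOT the study's data
hba1c = np.array([5.2, 5.8, 6.4, 7.1, 8.0, 8.9, 9.5, 10.2])        # %
mpv = np.array([9.8, 10.1, 11.0, 12.4, 14.0, 15.6, 16.5, 17.3])    # fl

# Pearson correlation coefficient between glycaemic control and MPV
r = np.corrcoef(hba1c, mpv)[0, 1]
```

A strongly monotone, nearly linear pairing like this yields r close to 1, which is the pattern the study reports between PVI and HbA1c.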
411 Household Perspectives and Resistance to Preventive Relocation in Flood Prone Areas: A Case Study in the Polwatta River Basin, Southern Sri Lanka
Authors: Ishara Madusanka, So Morikawa
Abstract:
Natural disasters, particularly floods, pose severe challenges globally, affecting both developed and developing countries. In many regions, especially Asia, riverine floods are prevalent and devastating. Integrated flood management incorporates structural and non-structural measures, with preventive relocation emerging as a cost-effective and proactive strategy for areas repeatedly impacted by severe flooding. However, preventive relocation is often hindered by economic, psychological, social, and institutional barriers. This study investigates the factors influencing resistance to preventive relocation and evaluates the role of flood risk information in shaping relocation decisions through risk perception. A conceptual model was developed, incorporating variables such as Flood Risk Information (FRI), Place Attachment (PA), Good Living Conditions (GLC), and Adaptation to Flooding (ATF), with Flood Risk Perception (FRP) serving as a mediating variable. The research was conducted in Welipitiya in the Polwatta river basin, Matara district, Sri Lanka, a region experiencing recurrent flood damage. For this study, an experimental design involving a structured questionnaire survey was utilized, with 185 households participating. The treatment group received flood risk information, including flood risk maps and historical data, while the control group did not. Data were collected in 2023 and analyzed using independent sample t-tests and Partial Least Squares Structural Equation Modeling (PLS-SEM). PLS-SEM was chosen for its ability to model latent variables, handle complex relationships, and suitability for exploratory research. Multi-group Analysis (MGA) assessed variations across different flood risk areas. Findings indicate that flood risk information had a limited impact on flood risk perception and relocation decisions, though its effect was significant in specific high-risk areas. 
Place attachment was a significant factor influencing relocation decisions across the sample. One potential reason for the limited impact of flood risk information on relocation decisions could be the lack of specificity in the information provided. The results suggest that while flood risk information alone may not significantly influence relocation decisions, it is crucial in specific contexts. Future studies and practitioners should focus on providing more detailed risk information and addressing psychological factors like place attachment to enhance preventive relocation efforts.
Keywords: flood risk communication, flood risk perception, place attachment, preventive relocation, structural equation modeling
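The treatment/control comparison described above rests on an independent-samples t-test. A minimal sketch with invented perception scores (not the survey data; Welch's variant is one reasonable choice when group variances may differ):

```python
import numpy as np
from scipy import stats

# hypothetical flood-risk-perception scores (e.g. Likert sums), NOT the survey data
treatment = np.array([18, 21, 19, 23, 20, 22, 24, 19, 21, 23])  # received risk info
control = np.array([15, 14, 17, 16, 13, 18, 15, 16, 14, 17])    # no risk info

# Welch's independent-samples t-test (does not assume equal variances)
t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)
```

A significant p-value here would indicate a group difference in perception; the study found such effects only in specific high-risk areas.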
410 Hydrological Challenges and Solutions in the Nashik Region: A Multi Tracer and Geochemistry Approach to Groundwater Management
Authors: Gokul Prasad, Pennan Chinnasamy
Abstract:
The degradation of groundwater resources, attributed to factors such as excessive abstraction and contamination, has emerged as a global concern. This study examines the stable isotopes of water (δ18O and δ2H) in a hard-rock aquifer situated in the Upper Godavari watershed, an agriculturally rich region in India underlain by basalt. The high groundwater draft (> 90%) poses significant risks; comprehending groundwater sources, flow patterns, and their environmental impacts is pivotal for researchers and water managers. The region has faced five droughts in the past 20 years, four of which are categorized as medium. Recharge rates are variable and contribute very little to groundwater. The rainfall pattern shows vast variability, with the region receiving seasonal monsoon rainfall for just four months and minimal rainfall for the rest of the year. This research closely monitored monsoon precipitation inputs and examined spatial and temporal fluctuations in δ18O and δ2H in both groundwater and precipitation. By discerning individual recharge events during monsoons, it became possible to identify periods when evaporation led to groundwater quality deterioration, characterized by elevated salinity and stable isotope values in the return flow. The locally derived meteoric water line (LMWL) (δ2H = 6.72 × δ18O + 1.53, r² = 0.6) provided valuable insights into the groundwater system. The leftward shift of the Nashik LMWL relative to the GMWL indicated groundwater evaporation (-33 ‰), supported by spatial variations in electrical conductivity (EC) data. Groundwater in the eastern and northern watershed areas exhibited higher salinity (> 3000 µS/cm), covering > 40% of the area, compared to the western and southern regions, owing to geological disparities (alluvium vs basalt). The findings emphasize meteoric precipitation as the primary groundwater source in the watershed.
However, spatial variations in isotope values and chemical constituents indicate other contributing factors, including evaporation, groundwater source type, and natural or anthropogenic (specifically agricultural and industrial) contaminants. Therefore, the study recommends focused hydrogeochemistry and isotope analysis in areas with strong agricultural and industrial influence for the development of holistic groundwater management plans to protect the quantity and quality of the groundwater aquifers.
Keywords: groundwater quality, stable isotopes, salinity, groundwater management, hard-rock aquifer
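The LMWL quoted above (δ2H = 6.72·δ18O + 1.53, r² = 0.6) is an ordinary least-squares fit of δ2H on δ18O. A sketch with synthetic isotope pairs generated from that reported line plus scatter (illustrative only, not the Nashik samples):

```python
import numpy as np

rng = np.random.default_rng(0)
d18O = rng.uniform(-9.0, -2.0, 40)                  # ‰, synthetic precipitation samples
d2H = 6.72 * d18O + 1.53 + rng.normal(0, 3, 40)     # reported LMWL + random scatter

# least-squares fit of the local meteoric water line
slope, intercept = np.polyfit(d18O, d2H, 1)
pred = slope * d18O + intercept
r2 = 1 - ((d2H - pred) ** 2).sum() / ((d2H - d2H.mean()) ** 2).sum()
```

A slope well below the global meteoric water line's ~8, as found here, is the classic signature of evaporative enrichment.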
409 Multifield Problems in 3D Structural Analysis of Advanced Composite Plates and Shells
Authors: Salvatore Brischetto, Domenico Cesare
Abstract:
Major improvements in future aircraft and spacecraft could depend on an increasing use of conventional and unconventional multilayered structures embedding composite materials, functionally graded materials, piezoelectric or piezomagnetic materials, and soft foam or honeycomb cores. Layers made of such materials can be combined in different ways to obtain structures that are able to fulfill several structural requirements. The next generation of aircraft and spacecraft will be manufactured as multilayered structures under the action of a combination of two or more physical fields. In multifield problems for multilayered structures, several physical fields (thermal, hygroscopic, electric and magnetic ones) interact with each other with different levels of influence and importance. An exact 3D shell model is proposed here for these types of analyses. This model is based on a coupled system including the 3D equilibrium equations, the 3D Fourier heat conduction equation, the 3D Fick diffusion equation and the electric and magnetic divergence equations. The set of partial differential equations, of second order in z, is written in a mixed curvilinear orthogonal reference system valid for spherical and cylindrical shell panels, cylinders and plates. The order of the partial differential equations is reduced to first order by doubling the number of variables. The solution in the thickness direction z is obtained by means of the exponential matrix method and the correct imposition of interlaminar continuity conditions in terms of displacements, transverse stresses, electric and magnetic potentials, temperature, moisture content and transverse normal multifield fluxes. The investigated structures have simply supported sides in order to obtain a closed-form solution in the in-plane directions. Moreover, a layerwise approach is proposed which allows a correct 3D description of multilayered anisotropic structures subjected to field loads.
Several results will be proposed in tabular and graphical form to evaluate displacements, stresses and strains when mechanical loads, temperature gradients, moisture content gradients, electric potentials and magnetic potentials are applied at the external surfaces of the structures in steady-state conditions. When piezoelectric and piezomagnetic layers are included in the multilayered structures, so-called smart structures are obtained. In this case, a free vibration analysis in open- and closed-circuit configurations and a static analysis for sensor and actuator applications will be proposed. The proposed results will be useful to better understand the physical and structural behaviour of multilayered advanced composite structures in the case of multifield interactions. Moreover, these analytical results could be used as reference solutions by scientists interested in the development of 3D and 2D numerical shell/plate models based, for example, on the finite element approach or on the differential quadrature methodology. The correct imposition of geometrical and load boundary conditions and interlaminar continuity conditions, and the description of the zigzag behaviour due to transverse anisotropy, will also be discussed and verified.
Keywords: composite structures, 3D shell model, stress analysis, multifield loads, exponential matrix method, layerwise approach
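The exponential matrix method named above can be illustrated on a toy second-order equation in the thickness coordinate z: rewrite it as a first-order system by doubling the variables, then propagate through the thickness with e^{Az}. This is only a schematic of the idea, not the paper's coupled multifield system:

```python
import numpy as np
from scipy.linalg import expm

# Toy equation u'' = -k^2 u, rewritten as X' = A X with doubled state X = [u, u'].
k = 3.0
A = np.array([[0.0, 1.0],
              [-k ** 2, 0.0]])
X0 = np.array([1.0, 0.0])        # initial conditions: u(0) = 1, u'(0) = 0

z = 0.4
Xz = expm(A * z) @ X0            # propagate through the "thickness": X(z) = e^{Az} X(0)
# analytic solution for comparison: u(z) = cos(k z), u'(z) = -k sin(k z)
```

In the layerwise shell model the same transfer-matrix step is applied layer by layer, with continuity conditions chaining the state vectors at each interface.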
408 Application of Flow Cytometry for Detection of Influence of Abiotic Stress on Plants
Authors: Dace Grauda, Inta Belogrudova, Alexei Katashev, Linda Lancere, Isaak Rashal
Abstract:
The goal of this study was to elaborate an easily applicable flow cytometry method for detecting the influence of abiotic stress factors on plants, which could be useful for detecting environmental stresses in urban areas. The lime tree Tilia vulgaris H. is a popular tree species used for urban landscaping in Europe and is one of the main species of street greenery in Riga, Latvia. Tree decline and low vitality have been observed in the central part of Riga. For this reason, lime trees were selected as a model object for the investigation. Between the end of June and the beginning of July, 12 samples from different urban locations as well as plant material from a greenhouse were collected. A BD FACSJazz® cell sorter (BD Biosciences, USA) with a flow cytometer function was used to test the viability of plant cells. The method was based on changes in the relative fluorescence intensity of cells under a blue laser (488 nm) after the influence of stress factors. Sphero™ rainbow calibration particles (3.0–3.4 μm, BD Biosciences, USA) in phosphate-buffered saline (PBS) were used for calibration of the flow cytometer. BD Pharmingen™ PBS (BD Biosciences, USA) was used for the flow cytometry assays. The mean fluorescence intensity from the purified cell suspension samples was recorded. Preliminarily, multiple gate sizes and shapes were tested to find one with the lowest CV. It was found that a low CV can be obtained if only the densest part of the plant cells' forward scatter/side scatter profile is analysed, because in this case the plant cells are most similar in size and shape. Young pollen cells at the one-nucleus stage were found to be best for detecting the influence of abiotic stress. Only fresh plant material was used for the experiments: buds of Tilia vulgaris with a diameter of 2 mm. For the establishment of the cell suspension (in vitro culture), a modified microspore culture protocol was applied. The cells were suspended in MS (Murashige and Skoog) medium.
To imitate urban dust, SiO2 nanoparticles at a concentration of 0.001 g/ml were suspended in distilled water. Into 10 ml of cell suspension, 1 ml of the SiO2 nanoparticle suspension was added; the cells were then incubated under rapid shaking for 1 and 3 hours. As a further stress factor, irradiation of the cells for 20 min by UV was used (Hamamatsu light source L9566-02A, L10852 lamp, A10014-50-0110), with maximum relative intensity (100%) at 365 nm and ~310 nm (75%). Before UV irradiation, the cell suspension was placed in a thin layer on a filter paper disk (diameter 45 mm) in a Petri dish with solid MS medium. Cells without treatment were used as a control. Experiments were performed at room temperature (23-25 °C). Using the flow cytometer's BD FACS software, a cell plot was created to determine the densest part, which was then gated using an oval-shaped gate. The gate included from 95 to 99% of all cells. To determine the relative fluorescence of cells, a logarithmic fluorescence scale in arbitrary fluorescence units was used. 3×10³ gated cells were analysed from each sample. Significant differences were found in the relative fluorescence of cells from different trees after treatment with SiO2 nanoparticles and UV irradiation in comparison with the control.
Keywords: flow cytometry, fluorescence, SiO2 nanoparticles, UV irradiation
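As a hedged sketch of the gating-and-readout logic described above — keep the densest region of the forward-scatter/side-scatter plot, then summarise the gated cells' fluorescence on a log scale. The events are synthetic and the density gate is a crude stand-in for the software's oval gate:

```python
import numpy as np

rng = np.random.default_rng(1)
fsc = rng.normal(100, 10, 5000)        # forward scatter, arbitrary units (synthetic)
ssc = rng.normal(80, 8, 5000)          # side scatter (synthetic)
fluo = rng.lognormal(3.0, 0.5, 5000)   # relative fluorescence, a.u. (synthetic)

# crude density gate: keep events near the centre of the FSC/SSC cluster
d = ((fsc - np.median(fsc)) / fsc.std()) ** 2 + ((ssc - np.median(ssc)) / ssc.std()) ** 2
gate = d < np.quantile(d, 0.95)        # retain ~95% of events, cf. the 95-99% gate

# mean relative fluorescence of gated cells on a logarithmic scale
mean_log_fluo = np.log10(fluo[gate]).mean()
```

Comparing `mean_log_fluo` between treated and control samples mirrors the study's readout of stress-induced fluorescence shifts.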
407 The Impact of Sensory Overload on Students on the Autism Spectrum in Italian Inclusive Classrooms: Teachers' Perspectives and Training Needs
Authors: Paola Molteni, Luigi d’Alonzo
Abstract:
Background: Sensory issues are now considered one of the key aspects in defining and diagnosing autism, changing perspectives on behavioural analysis and intervention in mainstream educational services. However, Italian teachers' training is not yet specific on the topic of autism and its sensory-related effects, and this research investigates teachers' capability to understand students' needs and challenging behaviours in the light of sensory perception. Objectives: The research aims to analyse mainstream school teachers' awareness of students' sensory perceptions and how this affects classroom inclusion and the learning process. The research questions are: i) Are teachers able to identify students' sensory issues?; ii) Are trained teachers more able to identify sensory problems than untrained ones?; iii) What is the impact of sensory issues on inclusion in mainstream classrooms?; iv) What should teachers know about autistic sensory dimensions? Methods: This research was designed as a pilot study involving a multi-methods approach, including action and collaborative research methodology. The design allowed the researcher to capture the complexity of a province school district (from kindergarten to high school) through a detailed analysis of selected aspects. The researcher explored the questions described above through 133 questionnaires and 6 focus groups. The qualitative and quantitative data collected during the research were analysed using Interpretative Phenomenological Analysis (IPA). Results: Mainstream school teachers are not able to confidently recognise the sensory issues of children included in the classroom.
The research underlines: how professionals with no specific training on autism are not able to recognise sensory problems in students on the spectrum; how hearing and sight issues have a higher impact on classroom inclusion and the student's learning process; and how a lack of understanding is often followed by misinterpretations of the impact of sensory issues and challenging behaviours. Conclusions: As this research has shown, promoting and enhancing the understanding of sensory issues related to autism is fundamental to enable mainstream school teachers to define educational and life-long plans that properly answer the student's needs and support his/her real inclusion in the classroom. This study is a good example of how educational research can meet and help daily practice in working with people on the autism spectrum and support the design of training for mainstream school teachers: the emerging need for dedicated preparation on sensory issues must be considered when planning school district in-service training programmes, specifically tailored to inclusive services.
Keywords: autism spectrum condition, scholastic inclusion, sensory overload, teacher's training
406 Role of Artificial Intelligence in Nano Proteomics
Authors: Mehrnaz Mostafavi
Abstract:
Recent advances in single-molecule protein identification (ID) and quantification techniques are poised to revolutionize proteomics, enabling researchers to delve into single-cell proteomics and identify low-abundance proteins crucial for biomedical and clinical research. This paper introduces a different approach to single-molecule protein ID and quantification using tri-color amino acid tags and a plasmonic nanopore device. A comprehensive simulator incorporating various physical phenomena was designed to predict and model the device's behavior under diverse experimental conditions, providing insights into its feasibility and limitations. The study employs a whole-proteome single-molecule identification algorithm based on convolutional neural networks, achieving high accuracies (>90%), particularly in challenging conditions (95–97%). To address potential challenges in clinical samples, where post-translational modifications may affect labeling efficiency, the paper evaluates protein identification accuracy under partial labeling conditions. Solid-state nanopores, capable of processing tens of individual proteins per second, are explored as a platform for this method. Unlike techniques relying solely on ion-current measurements, this approach enables parallel readout using high-density nanopore arrays and multi-pixel single-photon sensors. Convolutional neural networks contribute to the method's versatility and robustness, simplifying calibration procedures and potentially allowing protein ID based on partial reads. The study also discusses the efficacy of the approach under real experimental conditions, resolving functionally similar proteins. The theoretical analysis, protein labeler program, finite-difference time-domain calculation of plasmonic fields, and simulation of nanopore-based optical sensing are detailed in the methods section.
The study anticipates further exploration of the temporal distributions of protein translocation dwell-times and their impact on convolutional neural network identification accuracy. Overall, the research presents a promising avenue for advancing single-molecule protein identification and quantification, with broad applications in proteomics research. The contributions made in methodology, accuracy, robustness, and technological exploration collectively position this work at the forefront of transformative developments in the field.
Keywords: nano proteomics, nanopore-based optical sensing, deep learning, artificial intelligence
405 A Comparison of the Microbiology Profile for Periprosthetic Joint Infection (PJI) of Knee Arthroplasty and Lower Limb Endoprostheses in Tumour Surgery
Authors: Amirul Adlan, Robert A McCulloch, Neil Jenkins, MIchael Parry, Jonathan Stevenson, Lee Jeys
Abstract:
Background and Objectives: Current antibiotic prophylaxis for oncological patients is based upon evidence from primary arthroplasty, despite significant differences in both patient group and procedure. The aim of this study was to compare the microorganisms responsible for PJI in patients who underwent two-stage revision for infected primary knee replacement with those of infected oncological endoprostheses of the lower limb in a single institution. This will subsequently guide decision-making regarding antibiotic prophylaxis at primary implantation for oncological procedures and empirical antibiotics for infected revision procedures (where the infecting organism(s) are unknown). Patients and Methods: 118 patients were treated with two-stage revision surgery for infected knee arthroplasty and lower limb endoprostheses between 1999 and 2019. 74 patients had two-stage revision for PJI of knee arthroplasty, and 44 had two-stage revision of lower limb endoprostheses. There were 68 males and 50 females. The mean ages of the knee arthroplasty cohort and the lower limb endoprostheses cohort were 70.2 years (50-89) and 36.1 years (12-78), respectively (p<0.01). Patient host and extremity criteria were categorised according to the MSIS Host and Extremity Staging System. Microbiological cultures, the incidence of polymicrobial infection and multi-drug resistance (MDR) were analysed and recorded. Results: Polymicrobial infection was reported in 16% (12 patients) of knee arthroplasty PJI and 14.5% (8 patients) of endoprosthesis PJI (p=0.783). There was a significantly higher incidence of MDR in endoprosthesis PJI, isolated in 36.4% of cultures, compared to knee arthroplasty PJI (17.2%) (p=0.01). Gram-positive organisms were isolated in more than 80% of cultures from both cohorts. Coagulase-negative Staphylococcus (CoNS) was the commonest Gram-positive organism, and Escherichia coli was the commonest Gram-negative organism in both groups.
According to the MSIS staging system, the host and extremity grades of the knee arthroplasty PJI cohort were significantly better than those of the endoprosthesis PJI cohort (p<0.05). Conclusion: Empirical antibiotic management of PJI in orthopaedic oncology is based upon PJI in arthroplasty, despite differences in both host and microbiology. Our results show a significant increase in MDR pathogens within the oncological group, despite CoNS being the most common infective organism in both groups. Endoprosthetic patients presented with poorer host and extremity criteria. These factors should be considered when managing this complex patient group, emphasising the importance of broad-spectrum antibiotic prophylaxis and preoperative sampling to ensure appropriate perioperative antibiotic cover.
Keywords: microbiology, periprosthetic joint infection, knee arthroplasty, endoprostheses
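The MDR comparison quoted above (36.4% vs 17.2%, p=0.01) is a test of two proportions, typically done as a chi-squared test on a 2×2 contingency table. The counts below are illustrative approximations chosen to roughly match those percentages, not the study's exact table:

```python
from scipy.stats import chi2_contingency

# rows: endoprosthesis PJI (n=44), knee arthroplasty PJI (n=74)
# cols: MDR cultures, non-MDR cultures (illustrative counts, ~36% vs ~18%)
table = [[16, 28],
         [13, 61]]

# chi-squared test of independence (Yates' continuity correction by default)
chi2, p, dof, expected = chi2_contingency(table)
```

With these counts the test rejects independence at the 5% level, matching the direction of the reported result; Fisher's exact test would be the usual fallback for smaller cells.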
Procedia PDF Downloads 118
404 No-Par Shares Working in European LLCs
Authors: Agnieszka P. Regiec
Abstract:
Capital companies are based on monetary capital. In the traditional model, the capital is the sum of the nominal values of all shares issued. For several years, the regulations on limited liability companies (LLCs) in European countries have been leaning towards liberalization of the capital structure in order to provide a higher degree of autonomy in intra-corporate governance. The reforms were based primarily on the legal system of the USA, where the tradition of no-par shares is well established; the American legal system is therefore chosen as the point of reference. The regulations of Germany, Great Britain, France, the Netherlands, Finland, Poland and the USA are taken into consideration. The analysis of share capital is important for the development of legal science not only because the capital structure of the corporation has a significant impact on shareholders' rights, but also because it shapes the relationship between the creditors of the company and the company itself. A multi-level comparative approach to the problem allows a wide range of possible outcomes of the amendments to be presented. The dogmatic method was applied. The analysis was based on statutes, secondary sources and judicial decisions. Both the substantive and the procedural aspects of the capital structure were considered. In Germany, as a result of the regulatory competition typical for the EU, the structure of LLCs was reshaped. A new LLC, the Unternehmergesellschaft, which does not require a minimum share capital, was introduced, and the minimum share capital for the Gesellschaft mit beschränkter Haftung was lowered from 25,000 to 10,000 euro. In France, the capital structure of corporations was also altered. In 2003, the minimum share capital of the société à responsabilité limitée (S.A.R.L.) was repealed. In 2009, the minimum share capital of the société par actions simplifiée was also abolished: no minimum share capital is required by statute. The company still has to indicate a share capital, but without the legislator imposing a minimum value on it. In the Netherlands, the reform of the Besloten Vennootschap met beperkte aansprakelijkheid (B.V.) was planned with the following change: repeal of the minimum share capital as the answer to the need for a higher degree of shareholder autonomy. It nevertheless preserved shares with nominal value. In Finland, the amendment of the yksityinen osakeyhtiö took place in 2006, and as a result no-par shares were introduced. Despite the fact that the statute allows shares without face value, it still requires a minimum share capital of 2,500 euro. In Poland, a proposal for restructuring the capital structure of the LLC has been introduced. The proposal provides, among other things, for a reduction of the minimum capital to 1 PLN or the complete abolition of the minimum share capital, allowing no-par shares to be issued. In conclusion: the American solutions, in particular the balance sheet test and the solvency test, provide better protection for creditors; European no-par shares are not the same as their American counterparts; and the existence of share capital in Poland remains crucial.
Keywords: balance sheet test, limited liability company, nominal value of shares, no-par shares, share capital, solvency test
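The two American creditor-protection tests named in the conclusion can be illustrated schematically. The sketch below is a deliberate simplification with hypothetical figures and function names; the actual statutory definitions (for example, in the US Model Business Corporation Act) are considerably more precise about valuation and timing:

```python
def balance_sheet_test(total_assets, total_liabilities, distribution):
    """After the distribution, assets must still cover liabilities."""
    return total_assets - distribution >= total_liabilities

def solvency_test(cash_available, debts_due_next_period, distribution):
    """After the distribution, the company must still be able to pay
    its debts as they fall due."""
    return cash_available - distribution >= debts_due_next_period

def distribution_permitted(assets, liabilities, cash, debts_due, payout):
    # A payout to shareholders is allowed only if BOTH tests pass.
    return (balance_sheet_test(assets, liabilities, payout)
            and solvency_test(cash, debts_due, payout))

# Hypothetical company: a 40,000 payout passes both tests,
# a 60,000 payout fails the solvency test.
print(distribution_permitted(1_000_000, 600_000, 150_000, 100_000, 40_000))
print(distribution_permitted(1_000_000, 600_000, 150_000, 100_000, 60_000))
```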
Procedia PDF Downloads 185
403 Enhance Concurrent Design Approach through a Design Methodology Based on an Artificial Intelligence Framework: Guiding Group Decision Making to Balanced Preliminary Design Solution
Authors: Loris Franchi, Daniele Calvi, Sabrina Corpino
Abstract:
This paper presents a design methodology in which stakeholders are assisted in the exploration of a so-called negotiation space, aiming at the maximization of both group social welfare and each stakeholder's perceived utility. The outcome is fewer design iterations needed for design convergence, together with higher solution effectiveness. During the early stage of a space project, not only the knowledge about the system but also the outcomes of decisions are often unknown. The scenario is exacerbated by the fact that decisions taken at this stage carry delayed costs. Hence, it is necessary to have a clear definition of the problem under analysis, especially in its initial definition. This can be obtained through a robust generation and exploration of design alternatives. This process must account for the fact that design usually involves various individuals who take decisions affecting one another, so effective coordination among these decision-makers is critical. Finding a mutually agreed solution reduces the iterations involved in the design process. To handle this scenario, the paper proposes a design methodology that aims to speed up the maturation of the mission concept. This speed-up is obtained through a guided exploration of the negotiation space, in which trade opportunities among stakeholders are autonomously explored and optimized via artificial intelligence algorithms. The negotiation space is generated via a multidisciplinary collaborative optimization method, infused with game theory and multi-attribute utility theory. In particular, game theory is able to model the negotiation process to reach the equilibria among stakeholder needs. Because of the huge dimension of the negotiation space, a collaborative optimization framework with an evolutionary algorithm has been integrated in order to guide the game process in searching efficiently and rapidly for the Pareto equilibria among stakeholders. 
Finally, the concept of utility constitutes the mechanism that bridges the language barrier between experts of different backgrounds and differing needs, using the elicited and modeled needs to evaluate a multitude of alternatives. To highlight the benefits of the proposed methodology, the paper presents the design of a CubeSat mission for the observation of the lunar radiation environment. The derived solution is able to balance all stakeholders' needs and guarantees the effectiveness of the selected mission concept thanks to its robustness to change. The benefits provided by the proposed design methodology are highlighted, and further developments are proposed.
Keywords: concurrent engineering, artificial intelligence, negotiation in engineering design, multidisciplinary optimization
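The core mechanics described above, keeping only Pareto-nondominated designs while an evolutionary loop proposes new candidates, can be sketched minimally. The utility functions and the one-dimensional design variable below are purely hypothetical stand-ins for the paper's multidisciplinary negotiation space:

```python
import random

def dominates(u, v):
    """u Pareto-dominates v if it is at least as good on every stakeholder
    utility and strictly better on at least one (maximisation)."""
    return all(a >= b for a, b in zip(u, v)) and any(a > b for a, b in zip(u, v))

def utilities(x):
    # Hypothetical utilities of two stakeholders whose preferred designs
    # conflict (x = 0.2 vs. x = 0.8); the trade-off region lies between them.
    return (-(x - 0.2) ** 2, -(x - 0.8) ** 2)

def pareto_front(points):
    """Keep only the designs not dominated by any other design."""
    return [p for p in points
            if not any(dominates(utilities(q), utilities(p))
                       for q in points if q != p)]

random.seed(0)
pop = [random.random() for _ in range(40)]      # initial random designs
for _ in range(30):                             # crude evolutionary loop
    parent = random.choice(pop)
    child = min(max(parent + random.gauss(0, 0.1), 0.0), 1.0)  # mutate
    pop.append(child)
    pop = pareto_front(pop)                     # non-dominated survivors

print(sorted(round(x, 2) for x in pop))         # candidate trade-off designs
```

A real implementation (e.g., NSGA-II inside a collaborative optimization framework) adds crossover, diversity preservation, and many coupled disciplines, but the dominance filter above is the same building block.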
Procedia PDF Downloads 137
402 Initializing E-Classroom in a Multigrade School in the Philippines
Authors: Karl Erickson I. Ebora
Abstract:
Science and technology are two inseparable terms which bring wonders to all aspects of life, such as education, medicine, food production and even the environment. In education, technology has become an integral part, as it brings many benefits to the teaching-learning process. However, in the Philippines, a developing country, resources are scarce and not all schools enjoy the fruits brought by technology. Multigrade schools bear much of this burden: they are often the last priority in resource allocation since they have a limited number of students. In fact, it is not surprising that these schools do not have even a single computer unit, much less a computer laboratory. This paper sought to present a plan on how public schools would receive their e-classrooms. Specifically, it sought to determine the level of the school's readiness in terms of facilities and equipment; the attitude of the respondents towards the use of the e-classroom; the level of the teachers' familiarity with different e-classroom software; and the interventions undertaken by the school to make it e-classroom ready. After gathering and analysing the necessary data, this paper came to the following conclusions. In terms of facilities and equipment, Guisguis Talon Elementary School (Main), though a multigrade school, is ready to receive an e-classroom. The respondents show a positive disposition towards technology use in teaching: they strongly agree that technology plays an essential role in the teaching-learning process, that technology is a good motivator, that it makes teaching and learning more interesting and effective, that it makes teaching easy, and that technology enhances students' learning. Additionally, teacher-respondents in Guisguis Talon Elementary School (Main) show familiarity with software. They are very familiar with MS Word, MS Excel, MS PowerPoint, and the internet and email. 
Moreover, they are very familiar with basic e-classroom computer operations and basic application software. They are very familiar with MS Office and can do simple editing and formatting; access and save information from CDs/DVDs, external hard drives, USB drives and the like; and browse different search engines and educational sites effectively, downloading and uploading files. Likewise, respondents strongly agree with the interventions undertaken by the school to make it e-classroom ready. They strongly agree that funding and support are needed by the school; that stakeholders should be encouraged to consider donating equipment; that the school and community should try to mobilize their resources in order to help the school; that teachers should be provided with trainings in order for them to be technologically competent; and that principals and administrators should motivate their teachers to undergo continuous professional development.
Keywords: e-classroom, multi-grade school, DCP, classroom computers
Procedia PDF Downloads 202
401 Pixel Façade: An Idea for Programmable Building Skin
Authors: H. Jamili, S. Shakiba
Abstract:
Today, one of the main concerns of human beings is facing the unpleasant changes in the environment. Buildings are responsible for a significant amount of natural resource consumption and carbon emissions. In such a situation, the thought arises of turning each building into a phenomenon that benefits the environment: a change whereby each building functions as an element that supports the environment, so that construction, in addition to answering human needs, is encouraged the way planting a tree is, and is no longer seen as a threat to living beings and the planet. Prospect: Today, various ideas for developing materials that can function smartly are being realized. For instance, programmable materials can respond appropriately to different conditions and offer modification of shape, size and physical properties, as well as restoration and repair. Studies are progressing with the purpose of designing these materials so that they are easily available and, to meet this aim, without the need for expensive materials and high technologies. In these cases, the physical attributes of materials take on the role of sensors, wires and actuators, and the materials themselves become robots; in effect, we experience robotics without robots. In recent decades, advances in AI and technology have dramatically improved the performance of materials. These achievements are a combination of software optimizations and physical production techniques such as multi-material 3D printing. These capabilities enable us to program materials to change shape, appearance and physical properties in order to interact with different situations. It is expected that further achievements, such as memory materials and self-learning materials, will be added to the smart materials family, affordable, available, and of use for a variety of applications and industries. 
From the architectural standpoint, the building skin is the focus of this research, given the considerable surface area building skins occupy in urban space. The purpose of this research is to find a way to use programmable materials in the building skin with the aim of achieving an effective and positive interaction. A pixel façade would be a solution for programming a building skin. The pixel façade consists of components whose attributes help the building meet its needs according to environmental criteria. A PIXEL combines smart materials and digital controllers. It not only exploits its physical properties, such as controlling the amount of sunlight and heat, but also enhances building performance by providing a set of features that depend on the situation. The features vary with location and serve different functions during the day and across the seasons. The primary role of a PIXEL FAÇADE can be defined as filtering pollution (both inside and outside the building) and providing clean energy, as well as interacting with other PIXEL FAÇADES to determine better responses.
Keywords: building skin, environmental crisis, pixel facade, programmable materials, smart materials
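The situation-dependent behaviour of a single PIXEL (controlling sunlight and heat, filtering pollution, varying by time of day and season) can be imagined as a rule-based controller. The sketch below is purely conceptual; the function name, sensor inputs, modes and thresholds are all hypothetical illustrations, not part of the proposal above:

```python
def pixel_mode(lux, pollution_index, season):
    """Toy rule-based controller for one facade PIXEL: pick an operating
    mode from local sensor readings (all thresholds are illustrative)."""
    if pollution_index > 150:
        return "filter"          # prioritise air filtering
    if lux > 50_000:
        return "shade+harvest"   # block heat gain, harvest solar energy
    if season == "winter" and lux > 10_000:
        return "transmit"        # admit light and heat in cold weather
    return "idle"

print(pixel_mode(lux=60_000, pollution_index=40, season="summer"))   # shade+harvest
print(pixel_mode(lux=20_000, pollution_index=200, season="winter"))  # filter
```

In the proposed façade, such decisions would additionally be negotiated with neighbouring PIXELs rather than taken in isolation.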
Procedia PDF Downloads 89
400 Variations in Spatial Learning and Memory across Natural Populations of Zebrafish, Danio rerio
Authors: Tamal Roy, Anuradha Bhat
Abstract:
Cognitive abilities aid fishes in foraging, avoiding predators and locating mates. Factors like predation pressure and habitat complexity govern learning and memory in fishes. This study aims to compare spatial learning and memory across four natural populations of zebrafish. The zebrafish, a small cyprinid, inhabits a diverse range of freshwater habitats, which makes it amenable to studies investigating the role of the native environment in spatial cognitive abilities. Four populations were collected across India from water bodies with contrasting ecological conditions. The habitat complexity of the water bodies was evaluated as a combination of channel substrate diversity and diversity of vegetation. Experiments were conducted on the populations under controlled laboratory conditions. A square-shaped spatial testing arena (maze) was constructed for testing the performance of adult zebrafish. The square tank consisted of an inner square-shaped layer whose edges were connected to the diagonal ends of the tank walls, thereby forming four separate chambers. Each of the four chambers had a main door in the centre and three sections separated by two windows. A removable coloured window pane (red, yellow, green or blue) identified each main door. A food reward associated with an artificial plant was always placed inside the left-hand section of the red-door chamber, and the position of the food reward and plant within that chamber was fixed. A test fish had to explore the maze by taking turns and locate the food inside the left-hand section of the red-door chamber. Fish were sorted from each population stock and kept individually in separate containers for identification. One at a time, a test fish was released into the arena and allowed 20 minutes to explore in order to find the food reward. In this way, individual fish were trained through the maze to locate the food reward for eight consecutive days. 
The position of the red door, with the plant and the reward, was shuffled every day. Following training, an intermission of four days was given, during which the fish were not subjected to trials. Post-intermission, the fish were re-tested on the 13th day, following the same protocol, for their ability to remember the learnt task. The exploratory tendencies and latency of individuals to explore on the first day of training, performance time across trials, and the number of mistakes made each day were recorded. Additionally, the mechanism used by individuals to solve the maze each day was analyzed across populations; fish could be expected to use an algorithm (a sequence of turns) or associative cues to locate the food reward. Individuals of the populations did not differ significantly in latencies and tendencies to explore, and no relationship was found between exploration and learning across populations. Populations from highly complex habitats had higher rates of learning and stronger memory, while populations from less complex habitats had lower rates of learning and much reduced abilities to remember. Populations from highly complex habitats used associative cues more than the algorithm for learning and remembering, while populations from less complex habitats used both equally. The study therefore helped in understanding the role of natural ecology in explaining variations in spatial learning abilities across populations.
Keywords: algorithm, associative cue, habitat complexity, population, spatial learning
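Habitat complexity above is scored as a combination of substrate diversity and vegetation diversity. A common way to quantify such diversity is the Shannon index; the sketch below assumes hypothetical quadrat counts and simply sums the two indices, which is one plausible "combination" (the study may well have combined them differently):

```python
import math

def shannon_index(counts):
    """Shannon diversity H' = -sum(p_i * ln p_i) over category proportions."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

# Hypothetical survey data for one site: quadrat counts per category.
substrate = [12, 9, 7, 2]   # e.g. sand, gravel, cobble, boulder
vegetation = [15, 10, 5]    # e.g. submerged, emergent, floating

complexity = shannon_index(substrate) + shannon_index(vegetation)
print(round(complexity, 3))
```

A site dominated by a single substrate and a single vegetation type would score near zero, while evenly mixed sites score highest.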
Procedia PDF Downloads 290
399 Keeping Education Non-Confessional While Teaching Children about Religion
Authors: Tünde Puskás, Anita Andersson
Abstract:
This study is part of a research project on whether religion is considered part of Swedish cultural heritage in Swedish preschools. Our aim in this paper is to explore how teachers in a Swedish preschool with a religious profile balance between keeping education non-confessional and at the same time teaching children about a particular tradition with religious roots, Easter. The point of departure for the theoretical frame of our study is that practical considerations in pedagogical situations are inherently dilemmatic. The dilemmas that are of interest for our study evolve around formalized, intellectual ideologies, such as multiculturalism and secularism, that have an impact on everyday practice. Educational dilemmas may also arise at the intersection of the formalized ideology of non-confessionalism, prescribed in policy documents, and common-sense understandings of what is included in Swedish cultural heritage. In this paper, religion is treated as a human worldview that, similarly to secular ideologies, can be understood as a system of thought. We make use of Ninian Smart's theoretical framework, according to which, in the modern Western world, religious and secular ideologies, as human worldviews, can be studied within the same analytical framework. In order to study the distinctive character of human worldviews, Smart introduced a multi-dimensional model within which the different dimensions interact with each other in various ways and to different degrees. The data for this paper are drawn from fieldwork carried out in 2015-2016 in the form of video ethnography. The empirical material chosen consists of a video recording of a specific activity during which the preschool group took part in an Easter play performed in the local church. 
The analysis shows that the policy of non-confessionalism, together with the idea that teaching covering religious issues must be purely informational, leads in everyday practice to dilemmas about what is considered religious. At the same time, what the adults actually do with religion fulfills six of the seven dimensions common to religious traditions as outlined by Smart. We can also conclude from the analysis that whether it is religion or a cultural tradition that is taught through the performance the children watched in the church depends on how the concept of religion is defined. The analysis shows that the characters of the performance themselves understood religion as the doctrine of Jesus' resurrection from the dead. This narrow understanding of religion enabled them, indirectly, to teach about the traditions and narratives surrounding Easter while avoiding teaching religion as a belief system.
Keywords: non-confessional education, preschool, religion, tradition
Procedia PDF Downloads 159
398 High Purity Germanium Detector Characterization by Means of Monte Carlo Simulation through Application of Geant4 Toolkit
Authors: Milos Travar, Jovana Nikolov, Andrej Vranicar, Natasa Todorovic
Abstract:
Over the years, High Purity Germanium (HPGe) detectors have proved to be an excellent practical tool and, as such, have established their wide use today in low-background γ-spectrometry. One of the advantages of gamma-ray spectrometry is its easy sample preparation, as chemical processing and separation of the studied subject are not required. Thus, with a single measurement, one can simultaneously perform both qualitative and quantitative analysis. One of the most prominent features of HPGe detectors, besides their excellent efficiency, is their superior resolution. This feature allows a researcher to perform a thorough analysis by discriminating photons of similar energies in the studied spectra where they would otherwise superimpose within a single energy peak and could thus compromise the analysis and produce wrongly assessed results. Naturally, this feature is of great importance when identifying radionuclides and determining their activity concentrations, where high precision is a necessity. In measurements of this nature, in order to reproduce good and trustworthy results, one first has to perform an adequate full-energy-peak (FEP) efficiency calibration of the equipment used. However, experimental determination of the response, i.e., the efficiency curves for a given detector-sample configuration and geometry, is not always easy and requires a certain set of reference calibration sources in order to account for and cover the broader energy ranges of interest. With the goal of overcoming these difficulties, many researchers have turned towards the application of software toolkits that implement the Monte Carlo method (e.g., MCNP, FLUKA, PENELOPE, Geant4, etc.), which has proven time and time again to be a very powerful tool. In the process of creating a reliable model, one has to have well-established and well-described specifications of the detector. 
Unfortunately, the documentation that manufacturers provide alongside the equipment is rarely sufficient for this purpose. Furthermore, certain parameters tend to evolve and change over time, especially in older equipment. Deterioration of these parameters decreases the active volume of the crystal and can thus affect the efficiencies by a large margin if it is not properly taken into account. This study describes the optimisation of models of two HPGe detectors through the implementation of the Geant4 toolkit developed at CERN, with the goal of further improving simulation accuracy in calculations of FEP efficiencies by investigating the influence of certain detector variables (e.g., crystal-to-window distance, dead-layer thicknesses, dimensions of the inner crystal void, etc.). The detectors on which the optimisation procedures were carried out were a standard traditional coaxial extended-range detector (XtRa HPGe, CANBERRA) and a broad-energy-range planar detector (BEGe, CANBERRA). The optimised models were verified through comparison with experimentally obtained data from measurements of a set of point-like radioactive sources. The results for both detectors displayed good agreement with the experimental data, within an average statistical uncertainty of ∼4.6% for the XtRa and ∼1.8% for the BEGe detector in the energy ranges 59.4–1836.1 keV and 59.4–1212.9 keV, respectively.
Keywords: HPGe detector, γ-spectrometry, efficiency, Geant4 simulation, Monte Carlo method
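The Monte Carlo idea behind such efficiency calculations can be shown in miniature. The toy sketch below estimates only the geometric efficiency of a bare disk-shaped detector face seen from an on-axis point source, with hypothetical dimensions; a real Geant4 model additionally transports photons through dead layers, the crystal and surrounding materials, which is precisely where the optimised parameters above matter:

```python
import math
import random

def mc_geometric_efficiency(d, r, n=200_000, seed=1):
    """Toy Monte Carlo: fraction of isotropically emitted photons from an
    on-axis point source at distance d that head into the cone subtended
    by a disk of radius r. For an isotropic source, cos(theta) is uniform
    on [-1, 1], so we just count draws above cos(theta_max)."""
    rng = random.Random(seed)
    cos_max = d / math.hypot(d, r)   # cosine of the disk's half-angle
    hits = sum(1 for _ in range(n) if rng.uniform(-1.0, 1.0) > cos_max)
    return hits / n

d, r = 10.0, 5.0                               # cm, hypothetical geometry
analytic = 0.5 * (1 - d / math.hypot(d, r))    # solid angle / (4*pi)
estimate = mc_geometric_efficiency(d, r)
print(f"MC {estimate:.4f} vs analytic {analytic:.4f}")
```

The statistical agreement between the estimate and the closed-form solid-angle fraction illustrates why, with full physics included, Monte Carlo codes can replace some reference-source measurements.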
Procedia PDF Downloads 121
397 Staphylococcus Aureus Septic Arthritis and Necrotizing Fasciitis in a Patient With Undiagnosed Diabetes Mellitus
Authors: Pedro Batista, André Vinha, Filipe Castelo, Bárbara Costa, Ricardo Sousa, Raquel Ricardo, André Pinto
Abstract:
Background: Septic arthritis is a diagnosis that must be considered in any patient presenting with acute joint swelling and fever. Among the several risk factors for septic arthritis, such as age, rheumatoid arthritis, recent surgery, or skin infection, diabetes mellitus can sometimes be the main risk factor. Staphylococcus aureus is the most common pathogen isolated in septic arthritis; however, it is uncommon in monomicrobial necrotizing fasciitis. Objectives: A case report of concomitant septic arthritis and necrotizing fasciitis in a patient with diabetes undiagnosed at presentation. Study Design and Methods: We report the case of a previously healthy 58-year-old Portuguese man who presented to the emergency department with fever and left knee swelling and pain of two days' duration. Blood work revealed ketonemia of 6.7 mmol/L and glycemia of 496 mg/dL. The vital signs were significant for a temperature of 38.5 ºC and a heart rate of 123 bpm. The left knee showed edema and inflammatory signs. Computed tomography of the left knee showed diffuse edema of the subcutaneous cellular tissue and soft tissue air bubbles. A diagnosis of septic arthritis and necrotizing fasciitis was made, and he was taken to the operating room for surgical debridement. The samples collected intraoperatively were sent for microbiological analysis, revealing infection by multi-sensitive Staphylococcus aureus. Given this result, empiric flucloxacillin (500 mg IV) and clindamycin (1000 mg IV) were maintained for 3 weeks. On the seventh day of hospitalization, there was significant improvement in the subcutaneous and musculoskeletal tissues. After two weeks of hospitalization, there was no purulent content, and partial closure of the wounds was possible. After 3 weeks, he was switched to oral antibiotics (flucloxacillin 500 mg). A week later, a urinary infection by Pseudomonas aeruginosa was diagnosed, and ciprofloxacin 500 mg was administered for 7 days without complications. 
After 30 days of hospital admission, the patient had recovered and was discharged home. Results: The final diagnosis of concomitant septic arthritis and necrotizing fasciitis was made based on the imaging findings, surgical exploration and microbiological test results. Conclusions: Early antibiotic administration and surgical debridement are key in the management of septic arthritis and necrotizing fasciitis. Furthermore, risk factor control (maintaining euglycemic blood glucose levels) must always be taken into account, given its crucial role in the patient's recovery.
Keywords: septic arthritis, necrotizing fasciitis, diabetes, Staphylococcus aureus
Procedia PDF Downloads 316
396 Transmedia and Platformized Political Discourse in a Growing Democracy: A Study of Nigeria’s 2023 General Elections
Authors: Tunde Ope-Davies
Abstract:
Transmediality and platformization, as online content-sharing protocols, have continued to accentuate the growing impact of the unprecedented digital revolution across the world. The rapid transformation across all sectors as a result of this revolution has continued to spotlight the increasing importance of new media technologies in redefining and reshaping the rhythm and dynamics of our private and public discursive practices. Equally, social and political activities are impacted daily through the creation and transmission of political discourse content via multi-channel platforms such as mobile telephone communication, social media networks and the internet. It has been observed that digital platforms have become central to the production, processing and distribution of multimodal social data and cultural content. The platformization paradigm thus underpins our understanding of how digital platforms enhance the production and heterogeneous distribution of media and cultural content, and of how this process facilitates socioeconomic and political activities. The use of multiple digital platforms to share and transmit political discourse material synchronously and asynchronously has gained exciting momentum in the last few years. Nigeria’s 2023 general elections amplified the use of social media and other online platforms as tools for electioneering campaigns, socio-political mobilization and civic engagement. The study therefore focuses on transmedia and platformed political discourse as a new strategy to promote political candidates and their manifestos in order to mobilize support and woo voters. This innovative transmedia digital discourse model involves a constellation of online texts and images transmitted through different online platforms almost simultaneously. 
The data for the study were extracted from the 2023 general election campaigns in Nigeria between January and March 2023 through media monitoring, manual downloads and the use of software to harvest online electioneering campaign material. I adopted a discursive-analytic qualitative technique with toolkits drawn from a computer-mediated multimodal discourse paradigm. The study maps the progressive development of digital political discourse in this young democracy. The findings also demonstrate the inevitable transformation of modern democratic practice through platform-dependent and transmedia political discourse. Political actors and media practitioners now deploy layers of social media network platforms to convey messages and mobilize supporters in order to aggregate and maximize the impact of their media campaigns and audience reach.
Keywords: social media, digital humanities, political discourse, platformized discourse, multimodal discourse
Procedia PDF Downloads 88