Search results for: comparison of algorithms
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6957

1257 Concurrent Validity of Synchronous Tele-Audiology Hearing Screening

Authors: Thidilweli Denga, Bessie Malila, Lucretia Petersen

Abstract:

The Coronavirus Disease 2019 (COVID-19) pandemic should be taken as a wake-up call on the importance of hearing health care, considering, among other things, the electronic methods of communication used. The World Health Organization (WHO) estimates that by 2050 there will be more than 2.5 billion people living with hearing loss. These numbers show that more people will need rehabilitation services. Studies have shown that most people living with hearing loss reside in low- and middle-income countries (LMIC). Innovative technological solutions, such as digital health interventions that can deliver hearing health services to remote areas, now exist. Tele-audiology implementation can potentially enable the delivery of hearing loss services to rural and remote areas. This study aimed to establish the concurrent validity of tele-audiology practice in school-based hearing screening. The study employed a cross-sectional design with a within-group comparison. The portable KUDUwave audiometer was used to conduct hearing screening on 50 participants (n=50). In phase I of the study, the audiologist conducted on-site hearing screening, while synchronous remote hearing screening (tele-audiology) over a 5G network was done in phase II. On-site hearing screening results were obtained first for the first 25 participants (aged 5-6 years). The second half started with the synchronous tele-audiology model to avoid order effects. Paired-sample t-tests compared the threshold results obtained in the left and right ears for on-site and remote screening. There was good correspondence between the two methods, with threshold averages within ±5 dB (decibels). The synchronous tele-audiology model has the potential to reduce audiologists' case overload while reaching populations that lack access due to distance and the shortage of hearing professionals in their areas.
With reliable and broadband connectivity, tele-audiology delivers the same service quality as the conventional method while reducing the travel costs of audiologists.
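The paired (repeated-measures) comparison of on-site and remote thresholds can be sketched as follows; the threshold values below are illustrative, not the study's data.

```python
import math

def paired_t(onsite, remote):
    """Paired t-statistic for two sets of hearing thresholds (dB HL)."""
    diffs = [a - b for a, b in zip(onsite, remote)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    t = mean / math.sqrt(var / n)
    return mean, t

# Illustrative thresholds for five ears (dB HL), not the study's data
onsite = [20, 25, 15, 20, 25]
remote = [25, 25, 20, 20, 20]
mean_diff, t_stat = paired_t(onsite, remote)
print(mean_diff)  # mean difference within +/-5 dB indicates agreement
```

A small t-statistic (large p-value) on such paired data is what the abstract's "good correspondence" refers to.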

Keywords: hearing screening, low-resource communities, portable audiometer, tele-audiology

Procedia PDF Downloads 116
1256 Applying Computer Simulation Methods to a Molecular Understanding of Flaviviruses Proteins towards Differential Serological Diagnostics and Therapeutic Intervention

Authors: Sergio Alejandro Cuevas, Catherine Etchebest, Fernando Luis Barroso Da Silva

Abstract:

The flavivirus genus includes several organisms responsible for various diseases in humans. Especially in Brazil, the Zika (ZIKV), Dengue (DENV), and Yellow Fever (YFV) viruses have raised great health concerns due to the high number of cases affecting the area in recent years. Diagnosis is still a difficult issue since the clinical symptoms are highly similar. Understanding their common structural/dynamical and biomolecular interaction features and differences might suggest alternative strategies towards differential serological diagnostics and therapeutic intervention. Due to its immunogenicity, the primary focus of this study was the ZIKV, DENV, and YFV non-structural protein 1 (NS1). By means of computational studies, we calculated the main physical-chemical properties of this protein, from different strains, that are directly responsible for the biomolecular interactions and, therefore, can be related to the differential infectivity of the strains. We also mapped the electrostatic differences, at both the sequence and structural levels, for strains from Uganda to Brazil, which could suggest possible molecular mechanisms for the increased virulence of ZIKV. It is interesting to note that, despite the small changes in the protein sequence due to the high sequence identity among the studied strains, the electrostatic properties are strongly affected by pH, which also impacts their biomolecular interactions with partners and, consequently, the molecular viral biology. African and Asian strains are distinguishable. Exploring the interfaces used by NS1 to self-associate in different oligomeric states and to interact with membranes and the antibody, we could map the strategy used by the ZIKV during its evolutionary process. This indicates possible molecular mechanisms that can explain the different immunological responses.
By comparison with the known antibody structure available for the West Nile virus, we demonstrated that the antibody would have difficulty neutralizing the NS1 from the Brazilian strain. The present study also opens up perspectives for computationally designing high-specificity antibodies.
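The pH dependence of a protein's electrostatics described above is, at its simplest, a Henderson-Hasselbalch sum over titratable residues. The sketch below uses illustrative model pKa values and residue counts, not NS1's actual composition, and ignores the electrostatic coupling effects that the full computational treatment captures.

```python
def net_charge(pH, acidic, basic):
    """Approximate net protein charge from Henderson-Hasselbalch.

    acidic/basic: dicts mapping model pKa -> residue count.
    """
    q = 0.0
    for pka, n in acidic.items():          # Asp-like, Glu-like groups
        q -= n / (1.0 + 10 ** (pka - pH))  # deprotonated (charged) fraction
    for pka, n in basic.items():           # Lys-like, Arg-like, His-like groups
        q += n / (1.0 + 10 ** (pH - pka))  # protonated (charged) fraction
    return q

# Illustrative residue counts (not NS1's real composition)
acidic = {3.9: 20, 4.1: 25}
basic = {10.5: 22, 12.5: 15, 6.0: 8}

print(round(net_charge(5.0, acidic, basic), 2))   # net positive at acidic pH
print(round(net_charge(7.4, acidic, basic), 2))   # net negative at physiological pH
```

The sign flip between acidic and physiological pH is the kind of behavior that changes how such a protein interacts with membranes and antibodies.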

Keywords: zika, biomolecular interactions, electrostatic interactions, molecular mechanisms

Procedia PDF Downloads 132
1255 The Artificial Intelligence Driven Social Work

Authors: Avi Shrivastava

Abstract:

Our world continues to grapple with many social issues. Economic growth and scientific advancements have not completely eradicated poverty, homelessness, discrimination and bias, gender inequality, health issues, mental illness, addiction, and other social problems. So, how do we improve the human condition in a world driven by advanced technology? The answer is simple: we will have to leverage technology to address some of the most important social challenges of the day. AI, or artificial intelligence, has emerged as a critical tool in the battle against issues that deprive marginalized and disadvantaged groups of the right to enjoy the benefits that a society offers. Social work professionals can transform lives by harnessing it. The lack of reliable data is one of the reasons why many social work projects fail. Social work professionals continue to rely on expensive and time-consuming primary data collection methods, such as observation, surveys, questionnaires, and interviews, instead of tapping into AI-based technology to generate useful, real-time data and necessary insights. By leveraging AI's data-mining ability, we can gain a deeper understanding of how to solve complex social problems and change people's lives. We can do the right work for the right people at the right time. For example, AI can enable social work professionals to focus their humanitarian efforts on some of the world's poorest regions. An interdisciplinary team of Stanford scientists (Marshall Burke, Stefano Ermon, David Lobell, Michael Xie, and Neal Jean) used AI to spot global poverty zones; identifying such zones is a key step in the fight against poverty. The scientists combined daytime and nighttime satellite imagery with machine learning algorithms to predict poverty in Nigeria, Uganda, Tanzania, Rwanda, and Malawi.
In an article published by Stanford News, "Stanford researchers use dark of night and machine learning," Ermon explained that they provided the machine-learning system, an application of AI, with the high-resolution satellite images and asked it to predict poverty in the African region. "The system essentially learned how to solve the problem by comparing those two sets of images [daytime and nighttime]." This is one example of how AI can be used by social work professionals to reach the regions that need their aid the most. It can also help identify sources of inequality and conflict, which could reduce inequalities, according to a Nature study titled "The role of artificial intelligence in achieving the Sustainable Development Goals," published in 2020. The report also notes that AI can help achieve 79 percent of the United Nations' (UN) Sustainable Development Goals (SDG). AI is impacting our everyday lives in multiple amazing ways, yet some people do not know much about it. If someone is not familiar with this technology, they may be reluctant to use it to solve social issues. So, before we talk more about the use of AI to accomplish social work objectives, let's put the spotlight on how AI and social work can complement each other.

Keywords: social work, artificial intelligence, AI based social work, machine learning, technology

Procedia PDF Downloads 101
1254 Discerning Divergent Nodes in Social Networks

Authors: Mehran Asadi, Afrand Agah

Abstract:

In data mining, partitioning is used as a fundamental tool for classification. With the help of partitioning, we study the structure of data, which allows us to envision decision rules that can be applied to classification trees. In this research, we used an online social network dataset and all of its attributes (e.g., node features, labels, etc.) to determine what constitutes an above-average chance of being a divergent node. We used the R statistical computing language to conduct the analyses in this report. The data were found in the UC Irvine Machine Learning Repository. This research introduces the basic concepts of classification in online social networks. In this work, we address overfitting and describe different approaches for the evaluation and performance comparison of different classification methods. In classification, the main objective is to categorize different items and assign them to different groups based on their properties and similarities. In data mining, recursive partitioning is utilized to probe the structure of a data set, which allows us to envision decision rules and apply them to classify data into several groups. Estimating densities is hard, especially in high dimensions with limited data. Of course, we do not know the densities, but we can estimate them using classical techniques. First, we calculated the correlation matrix of the dataset to see if any predictors are highly correlated with one another. By calculating the correlation coefficients for the predictor variables, we see that density is strongly correlated with transitivity. We initialized a data frame to easily compare the quality of the resulting classification methods, and the method applied to this dataset was decision trees, with k-fold cross-validation to prune the tree.
A decision tree is a non-parametric classification method that uses a set of rules to assign each observation the most commonly occurring class label of the training data in its partition. Our method aggregates many decision trees to create an optimized model that is not susceptible to overfitting. When using a decision tree, however, it is important to use cross-validation to prune the tree in order to narrow it down to the most important variables.
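Recursive partitioning picks, at each tree node, the split that most reduces class impurity. The study itself used R; a minimal Gini-based split finder in Python, on toy data rather than the UCI dataset, might look like:

```python
def gini(labels):
    """Gini impurity of a list of class labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def best_split(xs, ys):
    """Threshold on a single feature that minimises weighted Gini impurity."""
    best = (None, float("inf"))
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(ys)
        if score < best[1]:
            best = (t, score)
    return best

# Toy data: the feature cleanly separates the two classes at x <= 2
xs = [1, 2, 3, 4]
ys = ["divergent", "divergent", "normal", "normal"]
print(best_split(xs, ys))  # → (2, 0.0)
```

A full tree applies this search recursively to each resulting partition, which is exactly what cross-validated pruning then trims back.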

Keywords: online social networks, data mining, social cloud computing, interaction and collaboration

Procedia PDF Downloads 154
1253 Simulation of the FDA Centrifugal Blood Pump Using High Performance Computing

Authors: Mehdi Behbahani, Sebastian Rible, Charles Moulinec, Yvan Fournier, Mike Nicolai, Paolo Crosetto

Abstract:

Computational Fluid Dynamics (CFD) blood-flow simulations are increasingly used to develop and validate blood-contacting medical devices. This study shows that numerical simulations can provide additional and accurate estimates of relevant hemodynamic indicators (e.g., recirculation zones or wall shear stresses), which may be difficult and expensive to obtain from in-vivo or in-vitro experiments. The most recent FDA (Food and Drug Administration) benchmark consisted of a simplified centrifugal blood pump model that contains fluid flow features as they are commonly found in these devices, with a clear focus on highly turbulent phenomena. The FDA centrifugal blood pump study is composed of six test cases with different volumetric flow rates, ranging from 2.5 to 7.0 liters per minute, pump speeds, and Reynolds numbers ranging from 210,000 to 293,000. Within the framework of this study, different turbulence models were tested, including RANS models (e.g., k-omega, k-epsilon, and a Reynolds stress model, RSM) and LES. The partitioners Hilbert, METIS, ParMETIS, and SCOTCH were used to create an unstructured mesh of 76 million elements and were compared in their efficiency. Computations were performed on the JUQUEEN BG/Q architecture using the highly parallel flow solver Code_Saturne, typically on 32,768 or more processors in parallel. Visualizations were performed by means of ParaView. All six flow situations could be successfully analyzed with the different turbulence models and validated against analytical considerations and through comparison with other databases. It was shown that an RSM represents an appropriate choice for modeling high-Reynolds-number flow cases. In particular, the Rij-SSG (Speziale, Sarkar, Gatski) variant turned out to be a good approach. Visualizations of complex flow features could be obtained, and the flow situation inside the pump could be characterized.
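Reynolds numbers of the order quoted above follow from a rotational definition Re = ρωD²/μ. The blood properties and impeller diameter below are assumed illustrative values, not taken from the FDA benchmark specification, so the results only roughly bracket the quoted 210,000 to 293,000 range.

```python
import math

def impeller_reynolds(rpm, diameter_m, density, viscosity):
    """Rotational Reynolds number Re = rho * omega * D^2 / mu."""
    omega = rpm * 2.0 * math.pi / 60.0  # angular speed in rad/s
    return density * omega * diameter_m ** 2 / viscosity

# Assumed values: Newtonian blood approximation and an illustrative rotor size
rho = 1056.0   # blood density, kg/m^3
mu = 3.5e-3    # blood dynamic viscosity, Pa*s
d = 0.052      # rotor diameter in m (assumed, not the benchmark's exact geometry)

for rpm in (2500, 3500):
    print(round(impeller_reynolds(rpm, d, rho, mu)))
```

With these assumptions the two pump speeds land near the lower and upper ends of the benchmark's Reynolds-number range.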

Keywords: blood flow, centrifugal blood pump, high performance computing, scalability, turbulence

Procedia PDF Downloads 381
1252 Analysis of Electric Mobility in the European Union: Forecasting 2035

Authors: Domenico Carmelo Mongelli

Abstract:

The context is one of great uncertainty in the 27 countries belonging to the European Union, which has adopted an epochal measure: the elimination of internal combustion engines for the traction of road vehicles starting from 2035, with complete replacement by electric vehicles. If, on the one hand, there is great concern at various levels about unpreparedness for this change, on the other, the scientific community has not yet produced comprehensive studies of the problem: the literature deals with single aspects of the issue and, moreover, addresses it at the level of individual countries, losing sight of its global implications for the entire EU. The aim of the research is to fill these gaps: the technological, plant engineering, environmental, economic, and employment aspects of the energy transition in question are addressed and connected to each other, comparing the current situation with the different scenarios that could exist in 2035 and in the following years, until the total disposal of the internal-combustion-engine vehicle fleet, for the entire EU. The methodologies adopted by the research consist of analyzing the entire life cycle of electric vehicles and batteries, through the use of specific databases, and of dynamically simulating, using specific calculation codes, the application of the results of this analysis to the entire EU electric vehicle fleet from 2035 onwards.
Energy balances will be drawn up (to evaluate the net energy saved), along with plant balances (to determine the surplus demand for power and electrical energy and the sizing of the new renewable-source plants required to cover electricity needs), economic balances (to determine the investment costs of this transition, the savings during the operation phase, and the payback times of the initial investments), environmental balances (for the different energy-mix scenarios foreseen for 2035, determining the reductions in CO2eq and the environmental effects resulting from the increase in lithium production for batteries), and employment balances (estimating how many jobs will be lost and recovered in the reconversion of the automotive industry, related industries, and the refining, distribution, and sale of petroleum products, and how many will be created by technological innovation, the increased demand for electricity, and the construction and management of public charging points). New algorithms for forecast optimization are developed, tested, and validated. Compared to other published material, the research adds an overall picture of the energy transition, capturing the advantages and disadvantages of its different aspects and evaluating the issues and improvement solutions in an organic overall view of the topic. The results achieved allow us to identify the strengths and weaknesses of the energy transition, to determine possible solutions to mitigate these weaknesses, and to simulate and then evaluate their effects, establishing the most suitable solutions to make this transition feasible.
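In its simplest form, the economic balance above reduces to a payback-time calculation on the extra purchase cost versus the yearly operating savings. The per-vehicle figures below are illustrative placeholders, not the study's results.

```python
def payback_years(extra_investment, annual_savings):
    """Simple (undiscounted) payback time of the transition investment."""
    return extra_investment / annual_savings

# Illustrative per-vehicle figures (EUR), not the study's results
extra_cost = 8000.0    # assumed EV price premium over a comparable ICE vehicle
fuel_saving = 900.0    # assumed yearly fuel-vs-electricity saving
maint_saving = 300.0   # assumed yearly maintenance saving

print(round(payback_years(extra_cost, fuel_saving + maint_saving), 2))  # → 6.67
```

The study's actual balances are far richer (discounting, energy-mix scenarios, fleet dynamics), but this is the quantity its "payback times of the initial investments" refers to.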

Keywords: engines, Europe, mobility, transition

Procedia PDF Downloads 60
1251 Internationalization Using Strategic Alliances: A Comparative Study between Family and Non-Family Businesses

Authors: Guadalupe Fuentes-Lombardo, Manuel Carlos Vallejo-Martos, Rubén Fernández-Ortiz, Miriam Cano-Rubio

Abstract:

The different ways in which companies enter foreign markets, whether by exporting their products, by direct investment, or through strategic alliances, are influenced by a series of peculiarities specific to family businesses. In these companies, different systems, such as the family, property, and business, overlap, giving them unique and specific characteristics which on occasion can enhance the development of cooperation agreements and in other situations can hinder them. Previous research has shown that these companies are more likely to enter into strategic alliances with certain specific features, and are more reluctant to take part in others in which some of the advantages of the family business are put at risk, such as the family's control of ownership and decision-making over the company, among others. These arguments show that there is a wide range of interesting aspects and peculiarities in the internationalization process of the family business, although the research objectives of this paper focus on three in particular. Our first objective is to discover why family businesses decide whether or not to establish strategic alliances in their internationalization processes, in comparison with companies that are not family owned. Secondly, we identify the idiosyncratic aspects of family businesses that favor or hinder the use of strategic alliances as a means of entering foreign markets. Our third and final objective is to define the types of strategic alliance most commonly used by family businesses and the reasons why they choose these particular forms of alliance rather than others. We chose these research objectives for three main reasons. Firstly, because research on this subject shows that alliances are the best way to begin the international expansion process, among other reasons because they provide the partners with different kinds of resources and capacities, thereby increasing the probability of successful internationalization.
Secondly, because family and non-family businesses are often equipped with different types of resources, and strategic alliances offer them the chance to acquire resources less frequently found in family businesses. Thirdly, because the strengths and weaknesses of these companies could affect their decisions on whether or not to use strategic alliances in their international expansion process and the success achieved in these alliances. As a result, these companies prefer to enter into cooperation agreements under conditions that do not put their specific status as family companies at risk.

Keywords: family business, internationalization, strategic alliances, olive-oil and wine industry

Procedia PDF Downloads 449
1250 Seafloor and Sea Surface Modelling in the East Coast Region of North America

Authors: Magdalena Idzikowska, Katarzyna Pająk, Kamil Kowalczyk

Abstract:

Seafloor topography is a fundamental issue in geological, geophysical, and oceanographic studies. Single-beam or multibeam sonars attached to the hulls of ships are used to emit a hydroacoustic signal from transducers and reproduce the topography of the seabed. This solution provides relevant accuracy and spatial resolution. Bathymetric data from ship surveys are provided by the National Centers for Environmental Information of the National Oceanic and Atmospheric Administration. Unfortunately, most of the seabed is still unmapped, as there are many gaps to be explored between ship survey tracks. Moreover, such measurements are very expensive and time-consuming. One solution is the raster bathymetric models shared by the General Bathymetric Chart of the Oceans. The offered products are a compilation of different sets of data, raw or processed. Measurements of gravity anomalies also serve as indirect data for the development of bathymetric models. Some forms of seafloor relief (e.g., seamounts) increase the force of the Earth's pull, leading to changes in the sea surface. Based on satellite altimetry data, sea surface height and marine gravity anomalies can be estimated, and from the anomalies, it is possible to infer the structure of the seabed. The main goal of the work is to create regional bathymetric models and models of the sea surface in the area of the east coast of North America, a region of seamounts and undulating seafloor. The research includes an analysis of the methods and techniques used, an evaluation of the interpolation algorithms applied, model densification, and the creation of grid models. The input data are raster bathymetric models in NetCDF format, survey data from multibeam soundings in MB-System format, and satellite altimetry data from the Copernicus Marine Environment Monitoring Service. The methodology includes data extraction, processing, mapping, and spatial analysis.
Visualization of the obtained results was carried out with Geographic Information System tools. The result is an extension of the state of knowledge of the quality and usefulness of the data used for seabed and sea surface modeling, and of the accuracy of the generated models. Sea level is averaged over time and space (excluding waves, tides, etc.). Its changes, along with knowledge of the topography of the ocean floor, inform us indirectly about the volume of the entire ocean. The true shape of the ocean surface is further varied by such phenomena as tides, differences in atmospheric pressure, wind systems, thermal expansion of water, and phases of ocean circulation. In general, the greater the depth at a given location, the smaller the trend of sea level change. Studies show that combining data sets from different sources, with different accuracies, can affect the quality of sea surface and seafloor topography models.
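Gridding scattered depth soundings onto a regular raster is the core interpolation step in such bathymetric modeling. A minimal inverse-distance-weighting (IDW) sketch follows; IDW is just one of several algorithms that could be evaluated here, and the soundings are illustrative, not real survey data.

```python
def idw(x, y, samples, power=2.0):
    """Inverse-distance-weighted depth estimate at (x, y).

    samples: list of (xs, ys, depth) soundings.
    """
    num = den = 0.0
    for xs, ys, depth in samples:
        d2 = (x - xs) ** 2 + (y - ys) ** 2
        if d2 == 0.0:
            return depth              # grid node coincides with a sounding
        w = d2 ** (-power / 2.0)      # weight = 1 / distance^power
        num += w * depth
        den += w
    return num / den

# Illustrative soundings (x, y, depth in m), not real survey data
soundings = [(0, 0, -3000.0), (1, 0, -3200.0), (0, 1, -2800.0)]
print(round(idw(0.5, 0.5, soundings), 1))  # → -3000.0 (all three equidistant)
```

Evaluating alternatives such as splines or kriging against held-out soundings is how the quality of the interpolation algorithms can be compared.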

Keywords: seafloor, sea surface height, bathymetry, satellite altimetry

Procedia PDF Downloads 78
1249 Row Detection and Graph-Based Localization in Tree Nurseries Using a 3D LiDAR

Authors: Ionut Vintu, Stefan Laible, Ruth Schulz

Abstract:

Agricultural robotics has been developing steadily over recent years, with the goals of reducing and even eliminating the pesticides used on crops and of increasing productivity by taking over human labor. The majority of crops are arranged in rows. The first step towards autonomous robots capable of driving in fields and performing crop-handling tasks is for the robots to robustly detect the rows of plants. Recent work on autonomous driving between plant rows offers large robotic platforms equipped with various expensive sensors as a solution to this problem. These platforms need to be driven over the rows of plants. This approach lacks flexibility and scalability when it comes to the height of plants or the distance between rows. This paper instead proposes an algorithm that makes use of cheaper sensors and offers greater flexibility. The main application is in tree nurseries. Here, plant height can range from a few centimeters to a few meters. Moreover, trees are often removed, leading to gaps within the plant rows. The core idea is to combine row detection algorithms with graph-based localization methods as they are used in SLAM. Nodes in the graph represent the estimated poses of the robot, and the edges embed constraints between these poses or between the robot and certain landmarks. This setup aims to improve individual plant detection and to deal with exception handling, such as row gaps, which are falsely detected as the end of a row. Four methods were developed for detecting row structures in the fields, all using a point cloud acquired with a 3D LiDAR as input. Comparing field coverage and the number of damaged plants, the method that uses a local map around the robot proved to perform best, with 68% of rows covered and 25% damaged plants. This method is further used and combined with a graph-based localization algorithm, which uses the local map features to estimate the robot's position within the greater field.
Testing the upgraded algorithm in a variety of simulated fields shows that the additional information obtained from localization provides a boost in performance over methods that rely purely on perception to navigate. The final algorithm achieved a row coverage of 80% with 27% damaged plants. Future work will focus on achieving a perfect score of 100% covered rows and 0% damaged plants. The main challenges that the algorithm needs to overcome are fields where the plants are too small to be detected and fields where it is hard to distinguish between individual plants when they overlap. The method was also tested on a real robot in a small field with artificial plants. The tests were performed using a small robot platform equipped with wheel encoders, an IMU, and an FX10 3D LiDAR. Over ten runs, the system achieved 100% coverage and 0% damaged plants. The framework built within the scope of this work can be further used to integrate data from additional sensors, with the goal of achieving even better results.
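At its core, row detection reduces to fitting line models to plant points projected from the 3D point cloud onto the ground plane. A minimal least-squares fit of one row is sketched below with illustrative coordinates, not the paper's LiDAR data; the paper's four methods add clustering, gap handling, and the graph-based pose constraints on top of this kind of model.

```python
def fit_row(points):
    """Least-squares line y = a*x + b through projected plant points."""
    n = len(points)
    sx = sum(p[0] for p in points)
    sy = sum(p[1] for p in points)
    sxx = sum(p[0] ** 2 for p in points)
    sxy = sum(p[0] * p[1] for p in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx ** 2)  # slope
    b = (sy - a * sx) / n                          # offset
    return a, b

# Plants spaced along a straight row at y = 0.5 (illustrative ground-plane coords)
row = [(0.0, 0.5), (1.0, 0.5), (2.0, 0.5), (3.0, 0.5)]
a, b = fit_row(row)
print(a, b)  # slope ~0, offset ~0.5
```

A missing plant simply removes one point from the fit, which is why the graph-based localization is needed to tell a genuine row end from a gap.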

Keywords: 3D LiDAR, agricultural robots, graph-based localization, row detection

Procedia PDF Downloads 139
1248 Adsorptive Removal of Methylene Blue Dye from Aqueous Solutions by Leaf and Stem Biochar Derived from Lantana camara: Adsorption Kinetics, Equilibrium, Thermodynamics and Possible Mechanism

Authors: Deepa Kundu, Prabhakar Sharma, Sayan Bhattacharya, Jianying Shang

Abstract:

The discharge of dye-containing effluents into water bodies has raised concern due to the potential hazards related to their toxicity in the environment. Various treatment technologies are available for the removal of dyes from wastewater. The use of biosorbents to remove dyes from wastewater is one of the effective and inexpensive techniques. In this study, the adsorption of the phenothiazine dye methylene blue onto biosorbents prepared from Lantana camara L. was studied in aqueous solutions. Batch adsorption experiments were conducted, and the effects of various parameters such as pH (3-12), contact time, adsorbent dose (100-400 mg/L), initial dye concentration (5-20 mg/L), and temperature (303, 313, and 323 K) were investigated. The prepared leaf (BCL600) and stem (BCS600) biochars of Lantana were characterized using FTIR, SEM, elemental analysis, and zeta potential (pH~7). A comparison between the adsorption potentials of the two biosorbents was also made. The results indicated that the amount of methylene blue dye (mg/g) adsorbed onto the surface of the biochar was highly dependent on the pH of the dye solution, as it increased with an increase in pH from 3 to 12. It was observed that dye treated with BCS600 and BCL600 attained equilibrium within 60 and 100 minutes, respectively. The rate of the adsorption process was determined using the Lagergren pseudo-first-order and pseudo-second-order kinetic models. It was found that dye treated with both BCS600 and BCL600 followed pseudo-second-order kinetics, implying a multi-step adsorption process involving external adsorption and diffusion of dye molecules into the interior of the adsorbents. The data obtained from batch experiments fit well with the Langmuir and Freundlich isotherms (R² > 0.98), indicating multilayer adsorption of the dye over the biochar surfaces.
The thermodynamic studies revealed that the adsorption process is favourable, spontaneous, and endothermic in nature. Based on the results, the inexpensive and easily available Lantana camara biomass can be used to remove methylene blue dye from wastewater. It can also help in managing the growth of the notorious weed in the environment.
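The pseudo-second-order model mentioned above is usually fitted in its linearised form, t/qt = 1/(k2*qe^2) + t/qe, which is a straight line in (t, t/qt). The sketch below recovers qe and k2 from synthetic data generated with known parameters, not from the study's measurements.

```python
def pso_fit(times, qt):
    """Fit pseudo-second-order kinetics via the linearised form.

    t/qt = 1/(k2*qe^2) + t/qe  ->  slope = 1/qe, intercept = 1/(k2*qe^2).
    Returns (qe, k2).
    """
    ys = [t / q for t, q in zip(times, qt)]
    n = len(times)
    sx, sy = sum(times), sum(ys)
    sxx = sum(t * t for t in times)
    sxy = sum(t * y for t, y in zip(times, ys))
    slope = (n * sxy - sx * sy) / (n * sxx - sx ** 2)
    intercept = (sy - slope * sx) / n
    qe = 1.0 / slope
    k2 = slope ** 2 / intercept   # since intercept = 1/(k2*qe^2)
    return qe, k2

# Synthetic data from qe = 10 mg/g, k2 = 0.05 g/(mg*min):
# qt = k2*qe^2*t / (1 + k2*qe*t) = 5t / (1 + 0.5t)
times = [5.0, 15.0, 30.0, 60.0, 100.0]
qt = [5.0 * t / (1.0 + 0.5 * t) for t in times]
qe, k2 = pso_fit(times, qt)
print(round(qe, 2), round(k2, 3))  # recovers ~10 and ~0.05
```

With real batch data the fit is not exact, and the R² of this line is what supports the pseudo-second-order conclusion.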

Keywords: adsorption kinetics, biochar, Lantana camara, methylene blue dye, possible mechanism, thermodynamics

Procedia PDF Downloads 134
1247 Utilization of Antenatal Care Services by Domestic Workers in Delhi

Authors: Meenakshi

Abstract:

Background: Complications during pregnancy are a major cause of morbidity and death among women in the reproductive age group. Childbearing is the most important phase in women's lives and occurs mainly in the adolescent and adult years. Maternal health is thus an important issue, as this phase is also a productive time for women as they strive to fulfill their capabilities as individuals, mothers, family members, and citizens. The objective of the study is to document the coverage of antenatal care (ANC) and its determinants among domestic workers. Method: A survey of 300 domestic workers was carried out in Delhi. Only respondents in the age group 15-49 whose most recent birth was within the 5 years preceding the survey were included. Socio-demographic data and information on maternal health were collected from these respondents; information on ANC was collected from all 300 respondents. A standard of living index was composed based on household assets, and similarly, an autonomy index was computed based on women's decision-making power in the household, taking certain key variables. Cross-tabulations were performed to obtain frequencies and percentages. Potential socio-economic determinants of the utilization of ANC among domestic workers were examined using binary logistic regressions. Results: Of the 300 domestic workers surveyed, only 70.7 per cent received ANC. Domestic workers who married at age 18 or above were 4 times more likely to utilize antenatal services during their last birth (***p<0.01). Compared to domestic workers with two or fewer living children, domestic workers with more than two living children were less likely to utilize antenatal care services (**p<0.05). Domestic workers belonging to Other Backward Castes were more likely to utilize antenatal care services than domestic workers belonging to Scheduled Tribes (**p<0.05).
Conclusion: The level of utilization of maternal health services among domestic workers is low, as they spend most of their time at their employers' households. Although the demonstration effect does have an impact on their lifestyles, utilization of maternal health services remains poor. Strategies and action are needed to improve the utilization of maternal health services among this section of workers, who are vulnerable because of the absence of proper labour legislation.
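The binary logistic regression used above models the log-odds of ANC utilization as a linear function of covariates such as age at marriage. A minimal sketch, fitted by gradient ascent on synthetic data rather than the survey's, is:

```python
import math

def fit_logistic(xs, ys, lr=0.1, steps=5000):
    """Fit P(y=1) = sigmoid(b0 + b1*x) by gradient ascent on the log-likelihood."""
    b0 = b1 = 0.0
    for _ in range(steps):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += y - p          # gradient w.r.t. intercept
            g1 += (y - p) * x    # gradient w.r.t. slope
        b0 += lr * g0 / len(xs)
        b1 += lr * g1 / len(xs)
    return b0, b1

# Synthetic data: x = 1 if married at >= 18 years, y = 1 if ANC received
xs = [1, 1, 1, 1, 0, 0, 0, 0]
ys = [1, 1, 1, 0, 1, 0, 0, 0]
b0, b1 = fit_logistic(xs, ys)
print(math.exp(b1))  # odds ratio for marrying at >= 18; > 1 means more likely
```

The exponentiated coefficient is the odds ratio that statements such as "4 times more likely" in the Results are based on (here, for this toy sample, the empirical odds ratio is 9).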

Keywords: antenatal care, domestic workers, health services, maternal health, women’s health

Procedia PDF Downloads 197
1246 Assessing Moisture Adequacy over Semi-arid and Arid Indian Agricultural Farms using High-Resolution Thermography

Authors: Devansh Desai, Rahul Nigam

Abstract:

Crop water stress (W) at a given growth stage starts to set in as moisture availability (M) to roots falls below 75% of maximum. It has been found that the ratio of crop evapotranspiration (ET) to reference evapotranspiration (ET0) is an indicator of moisture adequacy and is strongly correlated with 'M' and 'W'. The spatial variability of ET0 over an agricultural farm of 1-5 ha is generally less than that of ET, since ET depends on both surface and atmospheric conditions, while ET0 depends only on atmospheric conditions. Solutions from surface energy balance (SEB) modeling and thermal infrared (TIR) remote sensing are now known to estimate the latent heat flux of ET. In the present study, ET and the moisture adequacy index (MAI) (=ET/ET0) have been estimated over two contrasting western India agricultural farms: a rice-wheat system in a semi-arid climate and an arid grassland system limited by moisture availability. High-resolution multi-band TIR observations at 65 m from the ECOSTRESS (ECOsystem Spaceborne Thermal Radiometer Experiment on Space Station) instrument on board the International Space Station (ISS) were used in an analytical SEB model, STIC (Surface Temperature Initiated Closure), to estimate ET and MAI. The ancillary variables used in the ET modeling and MAI estimation were land surface albedo and NDVI from close-by LANDSAT data at 30 m spatial resolution, the ET0 product at 4 km spatial resolution from INSAT 3D, and meteorological forcing variables (air temperature and relative humidity) from short-range weather forecasts of an NWP model. Farm-scale ET estimates at 65 m spatial resolution were found to show a low RMSE of 16.6% to 17.5%, with R² > 0.8 over 18 datasets, as compared to the reported errors (25-30%) of coarser-scale ET at 1 to 8 km spatial resolution when compared to in situ measurements from eddy covariance systems. The MAI showed lower (<0.25) and higher (>0.5) magnitudes in the two contrasting agricultural farms.
The study showed the potential need of high-resolution high-repeat spaceborne multi-band TIR payloads alongwith optical payload in estimating farm-scale ET and MAI for estimating consumptive water use and water stress. A set of future high-resolution multi-band TIR sensors are planned on-board Indo-French TRISHNA, ESA’s LSTM, NASA’s SBG space-borne missions to address sustainable irrigation water management at farm-scale to improve crop water productivity. These will provide precise and fundamental variables of surface energy balance such as LST (Land Surface Temperature), surface emissivity, albedo and NDVI. A synchronization among these missions is needed in terms of observations, algorithms, product definitions, calibration-validation experiments and downstream applications to maximize the potential benefits.
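The moisture adequacy index used above is simply the ratio of actual to reference evapotranspiration. A minimal sketch, with illustrative values and a hypothetical stress threshold rather than the study's data:

```python
# Moisture adequacy index (MAI) as defined in the abstract: MAI = ET / ET0.
def moisture_adequacy_index(et: float, et0: float) -> float:
    """Ratio of actual crop evapotranspiration to reference evapotranspiration."""
    if et0 <= 0:
        raise ValueError("reference evapotranspiration must be positive")
    return et / et0


def water_stress_flag(mai: float, threshold: float = 0.75) -> bool:
    # Stress is taken to set in as moisture availability falls below 75% of
    # maximum; using MAI directly against that threshold is an illustration only.
    return mai < threshold


# Illustrative values only (not from the study):
print(moisture_adequacy_index(3.0, 6.0))  # 0.5
```

The study reports MAI magnitudes below 0.25 and above 0.5 for the two contrasting farms, so even this simple ratio separates the moisture regimes clearly.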

Keywords: thermal remote sensing, land surface temperature, crop water stress, evapotranspiration

Procedia PDF Downloads 69
1245 O2 Saturation Comparison Between Breast Milk Feeding and Tube Feeding in Very Low Birth Weight Neonates

Authors: Ashraf Mohammadzadeh, Ahmad Shah Farhat, Azin Vaezi, Aradokht Vaezi

Abstract:

Background & Aim: Preterm infants born at less than 34 weeks postconceptional age are not as neurologically mature as their term counterparts and thus have difficulty coordinating sucking, swallowing, and breathing. As a result, they are traditionally gavage fed until they are able to feed orally. The aim of the study was to compare the effect of orogastric and breast feeding on oxygen saturation in very low birth weight infants (<1500 g). Patients and Methods: In this clinical trial, all babies admitted to the Neonatal Research Center of Imamreza Hospital, Mashhad during a 4-month period were selected. Criteria for entrance to the study included birth weight ≤ 1500 grams, exclusive breastfeeding, having no special problems after 48 hours, receiving only routine care, and a milk intake of 100 cc/kg/day. Each neonate received two rounds of orogastric and breast feeding, in the morning and in the afternoon, during which mean oxygen saturation was measured by pulse oximetry. During the study, the heart rate and temperature of the neonates were monitored, and in case of hypothermia, bradycardia (less than 100 per minute), or apnea, the feeding was discontinued and the study was repeated the following day. Data analysis was carried out using SPSS. Results: Fifty neonates were studied. The average birth weight was 1267.20±165.42 grams, the average gestational age was 31.81±1.92 weeks, and the female/male ratio was 1.2. There was no statistically significant difference in arterial oxygen saturation between orogastric and breast feeding in the morning or in the afternoon (p=0.16 in the morning and p=0.6 in the afternoon). There were no complications of apnea, hypothermia, or bradycardia. Conclusion: There was no statistically significant difference between the two methods in arterial oxygen saturation.
It seems that oral feeding (which is a natural route) and skin contact between the mother and neonate create a strong emotional bond between the two and bring about better social adaptation for the neonate. A shorter hospital stay is also preferable, and breast feeding should be started at the earliest possible time after birth.
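The within-group comparison above relies on a repeated-samples t-test. A minimal sketch of the underlying t statistic; the SpO2 readings below are hypothetical, not the study's data, and the actual significance testing was done in SPSS:

```python
import math
from statistics import mean, stdev


def paired_t_statistic(a, b):
    """t statistic for a repeated-measures (paired) comparison:
    t = mean(d) / (sd(d) / sqrt(n)), where d are the pairwise differences."""
    if len(a) != len(b) or len(a) < 2:
        raise ValueError("need two equal-length samples with n >= 2")
    d = [x - y for x, y in zip(a, b)]
    return mean(d) / (stdev(d) / math.sqrt(len(d)))


# Hypothetical SpO2 readings (%) for the same neonates under two feeding methods:
orogastric = [95.1, 96.0, 94.8, 95.5, 96.2]
breast = [95.0, 95.8, 95.1, 95.4, 96.0]
t = paired_t_statistic(orogastric, breast)
```

A p-value would then be read from the t distribution with n-1 degrees of freedom, which is where a statistics package takes over.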

Keywords: very low birth weight (VLBW), O2 saturation, breast feeding, tube feeding

Procedia PDF Downloads 84
1244 A Preliminary Kinematic Comparison of Vive and Vicon Systems for the Accurate Tracking of Lumbar Motion

Authors: Yaghoubi N., Moore Z., Van Der Veen S. M., Pidcoe P. E., Thomas J. S., Dexheimer B.

Abstract:

Optoelectronic 3D motion capture systems, such as the Vicon kinematic system, are widely utilized in biomedical research to track joint motion. These systems are considered powerful and accurate measurement tools with <2 mm average error. However, these systems are costly and may be difficult to implement and utilize in a clinical setting. 3D virtual reality (VR) is gaining popularity as an affordable and accessible tool to investigate motor control and perception in a controlled, immersive environment. The HTC Vive VR system includes puck-style trackers that seamlessly integrate into its VR environments. These affordable, wireless, lightweight trackers may be more feasible for clinical kinematic data collection. However, the accuracy of HTC Vive Trackers (3.0), when compared to optoelectronic 3D motion capture systems, remains unclear. In this preliminary study, we compared the HTC Vive Tracker system to a Vicon kinematic system in a simulated lumbar flexion task. A 6-DOF robot arm (SCORBOT ER VII, Eshed Robotec/RoboGroup, Rosh Ha’Ayin, Israel) completed various reaching movements to mimic increasing levels of hip flexion (15°, 30°, 45°). Light reflective markers, along with one HTC Vive Tracker (3.0), were placed on the rigid segment separating the elbow and shoulder of the robot. We compared position measures simultaneously collected from both systems. Our preliminary analysis shows no significant differences between the Vicon motion capture system and the HTC Vive tracker in the Z axis, regardless of hip flexion. In the X axis, we found no significant differences between the two systems at 15 degrees of hip flexion but minimal differences at 30 and 45 degrees, ranging from 0.047 cm ± 0.02 SE (p = 0.03) at 30 degrees of hip flexion to 0.194 cm ± 0.024 SE (p < 0.0001) at 45 degrees of hip flexion. In the Y axis, we found a minimal difference for 15 degrees of hip flexion only (0.743 cm ± 0.275 SE; p = 0.007).
This preliminary analysis shows that the HTC Vive Tracker may be an appropriate, affordable option for gross motor motion capture when the Vicon system is not available, such as in clinical settings. Further research is needed to compare these two motion capture systems in different body poses and for different body segments.
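The per-axis comparison of the two systems reduces to the bias and RMSE between simultaneously sampled position traces. A minimal sketch with hypothetical readings, not the study's data:

```python
import math


def per_axis_error(ref, test):
    """Mean signed difference (bias) and RMSE between simultaneously
    sampled 1-D position traces from a reference and a test system."""
    if len(ref) != len(test):
        raise ValueError("traces must be sampled at the same instants")
    diffs = [t - r for r, t in zip(ref, test)]
    bias = sum(diffs) / len(diffs)
    rmse = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return bias, rmse


# Hypothetical X-axis positions (cm) from the reference (Vicon) and test (Vive):
vicon = [10.00, 10.50, 11.00, 11.50]
vive = [10.05, 10.54, 11.05, 11.55]
bias, rmse = per_axis_error(vicon, vive)
```

In practice the two systems also need spatial and temporal alignment (registration of coordinate frames and synchronization of sampling), which this sketch assumes has already been done.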

Keywords: lumbar, Vive tracker, Vicon system, 3D motion, ROM

Procedia PDF Downloads 100
1243 Initial Palaeotsunami and Historical Tsunami in the Makran Subduction Zone of the Northwest Indian Ocean

Authors: Mohammad Mokhtari, Mehdi Masoodi, Parvaneh Faridi

Abstract:

The history of tsunami-generating earthquakes along the Makran Subduction Zone (MSZ) provides evidence of the potential tsunami hazard for the whole coastal area. In comparison with other subduction zones in the world, the Makran region of southern Pakistan and southeastern Iran shows low seismicity, and it is one of the least studied areas of the northwest Indian Ocean with regard to tsunami research. We present a review of studies dealing with historical and ongoing palaeotsunami investigations supported by the IGCP of UNESCO in the Makran Subduction Zone. The historical record presented here includes about nine tsunamis in the Makran Subduction Zone, of which over seven occurred in the eastern Makran. Tsunamis are not as common in the western Makran as in the eastern Makran, where a database of historical events exists. The best-documented historical event is the 1945 earthquake, with a moment magnitude of 8.1, and its tsunami in the western and eastern Makran. There are no details as to whether a tsunami was generated by a seismic event before 1945 off western Makran, but several potentially large tsunamigenic events occurred in the MSZ before 1945, in 325 B.C., 1008, 1483, 1524, 1765, 1851, 1864, and 1897. Here we present new findings from a historical point of view and emphasize that the area needs further research investigation. A palaeotsunami (geological evidence) study is now being planned, and we present its first-phase results. From a risk point of view, preliminary modelling shows that within 20 minutes the waves reach the Iranian, Pakistani, and Omani coastal zones as highly destructive tsunami waves with significant inundation potential. It is important to note that the coastal areas of all states surrounding the MSZ are being developed very rapidly, so any event would have a devastating effect on this region.
Although several papers on modelling, seismology, and tsunami deposits have been published in recent decades, the Makran remains a neglected subduction zone, and more data, such as the main crustal structure, fault locations, and their related parameters, are required.

Keywords: historical tsunami, Indian ocean, makran subduction zone, palaeotsunami

Procedia PDF Downloads 129
1242 Near-Miss Deep Learning Approach for Neuro-Fuzzy Risk Assessment in Pipelines

Authors: Alexander Guzman Urbina, Atsushi Aoyama

Abstract:

The sustainability of traditional technologies employed in energy and chemical infrastructure poses a major challenge for society. In decisions related to the safety of industrial infrastructure, accidental risk values are becoming relevant points for discussion. However, the challenge is the reliability of the models employed to obtain the risk data: such models usually involve a large number of variables and large amounts of uncertainty. The most efficient techniques to overcome these problems are built using Artificial Intelligence (AI), and more specifically hybrid systems such as neuro-fuzzy algorithms. Therefore, this paper introduces a hybrid algorithm for risk assessment trained on near-miss accident data. As mentioned above, the sustainability of traditional technologies related to energy and chemical infrastructure constitutes one of the major challenges that today’s societies and firms are facing. Besides that, the adaptation of those technologies to the effects of climate change in sensitive environments represents a critical concern for safety and risk management. In this regard, it can be argued that the social consequences of catastrophic risks are increasing rapidly, due mainly to the concentration of people and energy infrastructure in hazard-prone areas, aggravated by a lack of knowledge about the risks. In addition to these social consequences, and considering the industrial sector as critical infrastructure because of the large impact of a failure on the economy, industrial safety has become a critical issue for society. Regarding this safety concern, pipeline operators and regulators have been performing risk assessments in attempts to evaluate accurately the probabilities of failure of the infrastructure and the consequences associated with those failures.
However, estimating accidental risks in critical infrastructure involves substantial effort and cost due to the number of variables involved, the complexity, and the lack of information. Therefore, this paper introduces a well-trained algorithm for risk assessment using deep learning, capable of dealing efficiently with this complexity and uncertainty. The advantage of deep learning on near-miss accident data is that it can be employed in risk assessment as an efficient engineering tool to treat the uncertainty of risk values in complex environments. The basic idea of the Near-Miss Deep Learning Approach for Neuro-Fuzzy Risk Assessment in Pipelines is to improve the validity of the risk values by learning from near-miss accidents and imitating human expertise in scoring risks and setting tolerance levels. In summary, the method involves a regression analysis called the group method of data handling (GMDH), which determines the optimal configuration of the risk assessment model and its parameters using polynomial theory.
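As a rough illustration of the GMDH idea mentioned above — fitting simple partial descriptions to feature pairs and keeping the one with the lowest error on held-out data — here is a minimal single-layer sketch. It uses linear partial descriptions for brevity (classical GMDH uses quadratic polynomials), and all names and data are illustrative, not the paper's method:

```python
from itertools import combinations


def solve(a, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(b)
    m = [row[:] + [bv] for row, bv in zip(a, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x


def fit_pair(x1, x2, y):
    """Least-squares fit of a linear partial description y ≈ a + b*x1 + c*x2
    via the normal equations (full GMDH typically uses quadratic descriptions)."""
    n = len(y)
    cols = [[1.0] * n, x1, x2]
    ata = [[sum(u * v for u, v in zip(cols[i], cols[j])) for j in range(3)]
           for i in range(3)]
    atb = [sum(u * yi for u, yi in zip(cols[i], y)) for i in range(3)]
    return solve(ata, atb)


def gmdh_layer(features, y_train, features_val, y_val):
    """Score every feature pair on held-out data; return (error, pair, coefficients)."""
    best = None
    for i, j in combinations(range(len(features)), 2):
        a, b, c = fit_pair(features[i], features[j], y_train)
        err = sum((a + b * u + c * v - t) ** 2
                  for u, v, t in zip(features_val[i], features_val[j], y_val))
        if best is None or err < best[0]:
            best = (err, (i, j), (a, b, c))
    return best


# Demo: the target depends linearly on the first feature only.
err, pair, coef = gmdh_layer(
    [[1, 2, 3, 4], [1, 0, 2, 1], [2, 2, 1, 0]], [3, 5, 7, 9],
    [[5, 6], [0, 3], [1, 1]], [11, 13])
```

A full GMDH stacks such layers, feeding the surviving partial descriptions forward as new features until the external (validation) error stops improving.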

Keywords: deep learning, risk assessment, neuro fuzzy, pipelines

Procedia PDF Downloads 291
1241 Therapeutic Drug Monitoring by Dried Blood Spot and LC-MS/MS: Novel Application to Carbamazepine and Its Metabolite in Paediatric Population

Authors: Giancarlo La Marca, Engy Shokry, Fabio Villanelli

Abstract:

Epilepsy is one of the most common neurological disorders, with an estimated prevalence of 50 million people worldwide. Twenty-five percent of the epilepsy population consists of children under the age of 15 years. For antiepileptic drugs (AED), there is a poor correlation between plasma concentration and dose, especially in children; this has been attributed to greater pharmacokinetic variability than in adults. Hence, therapeutic drug monitoring (TDM) is recommended to control toxicity while drug exposure is maintained. Carbamazepine (CBZ) is a first-line AED and the drug of first choice in trigeminal neuralgia. CBZ is metabolised in the liver into carbamazepine-10,11-epoxide (CBZE), its major metabolite, which is equipotent. This creates the need for an assay able to monitor the levels of both CBZ and CBZE. The aim of the present study was to develop and validate an LC-MS/MS method for simultaneous quantification of CBZ and CBZE in dried blood spots (DBS). The DBS technique overcomes many logistical problems, ethical issues, and technical challenges faced by classical plasma sampling. LC-MS/MS has been regarded as superior to immunoassays and HPLC/UV methods owing to its better specificity and sensitivity and its lack of interference or matrix effects. Our method combines the advantages of the DBS technique and LC-MS/MS in clinical practice. The extraction was done using methanol-water-formic acid (80:20:0.1, v/v/v). Chromatographic elution was achieved using a linear gradient with a mobile phase consisting of acetonitrile-water-0.1% formic acid at a flow rate of 0.50 mL/min. The method was linear over the range 1-40 mg/L and 0.25-20 mg/L for CBZ and CBZE, respectively. The limit of quantification was 1.00 mg/L and 0.25 mg/L for CBZ and CBZE, respectively. Intra-day and inter-day assay precisions were found to be less than 6.5% and 11.8%.
An evaluation of the DBS technique was performed, including the effect of the extraction solvent, spot homogeneity, and stability in DBS. Results from a comparison with the plasma assay are also presented. The novelty of the present work lies in being the first to quantify CBZ and its metabolite from a single 3.2 mm DBS disc from a finger-prick sample (3.3-3.4 µl blood) by LC-MS/MS in a 10-minute chromatographic run.
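The linearity ranges and LOQs reported above imply a standard calibration-curve workflow: fit a least-squares line through the calibration standards, then back-calculate unknown concentrations from their responses, reporting nothing below the LOQ. A minimal sketch; all numbers are hypothetical, not the validated method's data:

```python
def linear_calibration(concs, responses):
    """Ordinary least-squares line: response = slope * conc + intercept."""
    n = len(concs)
    mx = sum(concs) / n
    my = sum(responses) / n
    sxx = sum((x - mx) ** 2 for x in concs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(concs, responses))
    slope = sxy / sxx
    return slope, my - slope * mx


def quantify(response, slope, intercept, loq):
    """Back-calculate concentration; report below-LOQ results as None."""
    conc = (response - intercept) / slope
    return conc if conc >= loq else None


# Hypothetical CBZ standards (mg/L vs. peak-area ratio) spanning 1-40 mg/L:
slope, intercept = linear_calibration([1, 5, 10, 20, 40],
                                      [0.11, 0.52, 1.05, 2.08, 4.15])
```

In practice LC-MS/MS quantification uses the analyte-to-internal-standard area ratio as the response, and weighted regression is common at the low end of the range.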

Keywords: carbamazepine, carbamazepine-10,11-epoxide, dried blood spots, LC-MS/MS, therapeutic drug monitoring

Procedia PDF Downloads 415
1240 An Efficient Hardware/Software Workflow for Multi-Cores Simulink Applications

Authors: Asma Rebaya, Kaouther Gasmi, Imen Amari, Salem Hasnaoui

Abstract:

Over recent years, applications such as telecommunications, signal processing, and digital communication with advanced features (multi-antenna, equalization, ...) have witnessed a rapid evolution, accompanied by increasing user requirements in terms of latency, computational power, and so on. To satisfy these requirements, the use of hardware/software systems is a common solution, where the hardware is composed of multiple cores and the software is represented by models of computation, for instance the synchronous data flow (SDF) graph. Moreover, most embedded system designers use Simulink for modeling. The issue is how to simplify C code generation, for a multi-core platform, of an application modeled in Simulink. To overcome this problem, we propose a workflow that automatically transforms the Simulink model into an SDF graph and provides an efficient schedule that optimizes the number of cores and minimizes latency. This workflow starts from a Simulink application and a hardware architecture described in the IP-XACT language. Based on the synchronous and hierarchical behavior of both models, the Simulink block diagram is automatically transformed into an SDF graph. Once this process is successfully achieved, the scheduler calculates the optimal number of cores needed by minimizing the maximum density of the whole application. Then, a core is chosen to execute a specific graph task in a specific order and, subsequently, compatible C code is generated. To realize this proposal, we extend Preesm, a rapid prototyping tool, to take the Simulink model as input and to support the optimal schedule. Afterward, we compared our results to those of this tool using a simple illustrative application. The comparison shows that our results strictly dominate the Preesm results in terms of number of cores and latency: if Preesm needs m processors and latency L, our workflow needs fewer processors and a latency L' < L.
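Any scheduler for an SDF graph first needs the repetition vector, obtained from the balance equations q[src]·prod = q[dst]·cons on every edge. A minimal sketch of that step (the workflow's actual scheduler, which also minimizes cores and latency, is not shown, and the graph below is a toy example):

```python
from fractions import Fraction
from math import lcm


def repetition_vector(actors, edges):
    """Solve the SDF balance equations q[src]*prod = q[dst]*cons by propagation.

    edges: list of (src, dst, prod, cons) tuples.
    Assumes a connected, consistent graph (real tools also verify consistency)."""
    q = {actors[0]: Fraction(1)}
    changed = True
    while changed:
        changed = False
        for src, dst, prod, cons in edges:
            if src in q and dst not in q:
                q[dst] = q[src] * prod / cons
                changed = True
            elif dst in q and src not in q:
                q[src] = q[dst] * cons / prod
                changed = True
    # Scale to the smallest integer solution.
    scale = lcm(*(f.denominator for f in q.values()))
    return {a: int(f * scale) for a, f in q.items()}


# Toy graph: A produces 2 tokens consumed 3-at-a-time by B; B feeds C at 1:2.
print(repetition_vector(["A", "B", "C"], [("A", "B", 2, 3), ("B", "C", 1, 2)]))
```

One iteration of the resulting schedule fires A three times, B twice, and C once, after which every buffer returns to its initial token count.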

Keywords: hardware/software system, latency, modeling, multi-cores platform, scheduler, SDF graph, Simulink model, workflow

Procedia PDF Downloads 266
1239 Comparison of Methods for the Detection of Biofilm Formation in Yeast and Lactic Acid Bacteria Species Isolated from Dairy Products

Authors: Goksen Arik, Mihriban Korukluoglu

Abstract:

Lactic acid bacteria (LAB) and some yeast species are common microorganisms found in dairy products, and most of them are responsible for the fermentation of foods. Such cultures are isolated and used as starter cultures in the food industry because they standardise the final product during food processing. The choice of starter culture is the most important step in the production of fermented food. Isolated LAB and yeast cultures which have the ability to create a biofilm layer can be preferred as starters in the food industry, since biofilm formation could extend the usage period of microorganisms as a starter. On the other hand, it is an undesirable property in pathogens, since the biofilm structure allows a microorganism to become more resistant to stress conditions such as the presence of antibiotics. It is thought that this resistance mechanism could be turned into an advantage by promoting the effective microorganisms which are used in the food industry as starter cultures and which also have the potential to stimulate the gastrointestinal system. Development of a biofilm layer is observed in some LAB and yeast strains. This resistance could make LAB and yeast strains the dominant microflora in the human gastrointestinal system; thus, competition against pathogenic microorganisms can be achieved more easily. On this basis, 10 LAB and 10 yeast strains were isolated in this study from various dairy products, such as cheese, yoghurt, kefir, and cream. Samples were obtained from farmer markets and bazaars in Bursa, Turkey. All isolated strains were identified, and their biofilm-forming ability was detected with two different methods and compared. The first goal of this research was to determine whether the isolates have the potential for biofilm production, and the second was to compare the validity of the two methods, known as the “tube method” and the “96-well plate-based method”.
This study may offer insight into biofilm formation and its beneficial properties in LAB and yeast cultures used as starters in the food industry.
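For the 96-well plate-based method, a common way to score biofilm formation is to compare crystal-violet optical densities against a cut-off derived from negative controls. The sketch below uses the widely cited mean + 3·SD cut-off and doubling categories, which may differ from the exact criteria applied in this study; all values are illustrative:

```python
from statistics import mean, stdev


def biofilm_category(sample_ods, control_ods):
    """Classify biofilm formation from crystal-violet ODs (96-well plate method).

    Cut-off ODc = mean of negative controls + 3 * their standard deviation;
    categories follow the common none/weak/moderate/strong doubling scheme."""
    odc = mean(control_ods) + 3 * stdev(control_ods)
    od = mean(sample_ods)
    if od <= odc:
        return "none"
    if od <= 2 * odc:
        return "weak"
    if od <= 4 * odc:
        return "moderate"
    return "strong"


# Illustrative triplicate readings for one isolate and the sterile-broth controls:
print(biofilm_category([0.41, 0.39, 0.44], [0.09, 0.10, 0.11]))
```

The tube method, by contrast, is scored visually (presence of a stained film lining the tube wall), which is why comparing the two methods' validity is worthwhile.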

Keywords: biofilm, dairy products, lactic acid bacteria, yeast

Procedia PDF Downloads 261
1238 Predictions for the Anisotropy in Thermal Conductivity in Polymers Subjected to Model Flows by Combination of the eXtended Pom-Pom Model and the Stress-Thermal Rule

Authors: David Nieto Simavilla, Wilco M. H. Verbeeten

Abstract:

The viscoelastic behavior of polymeric flows under isothermal conditions has been extensively researched. However, most processing of polymeric materials occurs under non-isothermal conditions, and understanding the linkage between the thermo-physical properties and the process state variables remains a challenge. Furthermore, the cost and energy required to manufacture, recycle, and dispose of polymers are strongly affected by the thermo-physical properties and their dependence on state variables such as temperature and stress. Experiments show that thermal conductivity in flowing polymers is anisotropic (i.e. direction-dependent). This phenomenon has previously been omitted in the study and simulation of industrially relevant flows. Our work combines experimental evidence of a universal relationship between the thermal conductivity and stress tensors (i.e. the stress-thermal rule) with differential constitutive equations for the viscoelastic behavior of polymers to provide predictions for the anisotropy in thermal conductivity in uniaxial, planar, equibiaxial, and shear flow in commercial polymers. A particular focus is placed on the eXtended Pom-Pom model, which is able to capture the non-linear behavior in both shear and elongational flows. The predictions provided by this approach are amenable to implementation in finite element packages, since viscoelastic and thermal behavior can be described by a single equation. Our results include predictions for flow-induced anisotropy in thermal conductivity for low- and high-density polyethylene, as well as confirmation of our method through comparison with a number of thermoplastic systems for which measurements of anisotropy in thermal conductivity are available. Remarkably, this approach allows for universal predictions of anisotropy in thermal conductivity that can be used in simulations of complex flows in which only the most fundamental rheological behavior of the material has previously been characterized (i.e. there is no need for additional adjusting parameters other than those in the constitutive model). Accounting for polymer anisotropy in thermal conductivity in industrially relevant flows benefits the optimization of manufacturing processes as well as the mechanical and thermal performance of finished plastic products during use.
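The stress-thermal rule referenced above states that the anisotropic (deviatoric) part of the thermal conductivity tensor is proportional to the deviatoric stress. A minimal sketch of that construction, with illustrative, non-material-specific numbers (the coefficient name and values are assumptions, not data from the study):

```python
def conductivity_tensor(k_eq, ct, stress):
    """Stress-thermal rule sketch: k = k_eq * I + ct * dev(stress),
    i.e. the deviatoric part of the conductivity tensor is taken
    proportional to the deviatoric stress.

    stress: 3x3 nested list; k_eq (equilibrium conductivity) and
    ct (stress-thermal coefficient) are scalars."""
    tr3 = sum(stress[i][i] for i in range(3)) / 3.0
    return [[(k_eq if i == j else 0.0)
             + ct * (stress[i][j] - (tr3 if i == j else 0.0))
             for j in range(3)] for i in range(3)]


# Uniaxial tension along x (illustrative numbers, not material data):
sigma = [[3.0, 0.0, 0.0], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]
k = conductivity_tensor(0.2, 0.1, sigma)
# Conductivity is enhanced along the stretch direction and reduced transverse to it.
```

Coupling this rule to a constitutive model such as the eXtended Pom-Pom model then reduces the anisotropy prediction to computing the stress tensor for the flow of interest.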

Keywords: anisotropy, differential constitutive models, flow simulations in polymers, thermal conductivity

Procedia PDF Downloads 180
1237 Application of MALDI-MS to Differentiate SARS-CoV-2 and Non-SARS-CoV-2 Symptomatic Infections in the Early and Late Phases of the Pandemic

Authors: Dmitriy Babenko, Sergey Yegorov, Ilya Korshukov, Aidana Sultanbekova, Valentina Barkhanskaya, Tatiana Bashirova, Yerzhan Zhunusov, Yevgeniya Li, Viktoriya Parakhina, Svetlana Kolesnichenko, Yeldar Baiken, Aruzhan Pralieva, Zhibek Zhumadilova, Matthew S. Miller, Gonzalo H. Hortelano, Anar Turmuhambetova, Antonella E. Chesca, Irina Kadyrova

Abstract:

Introduction: The rapidly evolving COVID-19 pandemic, along with the re-emergence of pathogens causing acute respiratory infections (ARI), has necessitated the development of novel diagnostic tools to differentiate various causes of ARI. MALDI-MS, due to its wide usage and affordability, has been proposed as a potential instrument for diagnosing SARS-CoV-2 versus non-SARS-CoV-2 ARI. The aim of this study was to investigate the potential of MALDI-MS in conjunction with a machine learning model to accurately distinguish between symptomatic infections caused by SARS-CoV-2 and non-SARS-CoV-2 during both the early and later phases of the pandemic. Furthermore, this study aimed to analyze mass spectrometry (MS) data obtained from nasal swabs of healthy individuals. Methods: We gathered mass spectra from 252 samples, comprising 108 SARS-CoV-2-positive samples obtained in 2020 (Covid 2020), 7 SARS-CoV-2-positive samples obtained in 2023 (Covid 2023), 71 samples from symptomatic individuals without SARS-CoV-2 (Control non-Covid ARVI), and 66 samples from healthy individuals (Control healthy). All the samples were subjected to RT-PCR testing. For data analysis, we employed the caret R package to train and test seven machine-learning algorithms: C5.0, KNN, NB, RF, SVM-L, SVM-R, and XGBoost. We conducted a training process using a five-fold (outer) nested repeated (five times) ten-fold (inner) cross-validation with a randomized stratified splitting approach. Results: In this study, we utilized the Covid 2020 dataset as a case group and the non-Covid ARVI dataset as a control group to train and test various machine learning (ML) models. Among these models, XGBoost and SVM-R demonstrated the highest performance, with accuracy values of 0.97 [0.93; 0.97] and 0.95 [0.95; 0.97], specificity values of 0.86 [0.71; 0.93] and 0.86 [0.79; 0.87], and sensitivity values of 0.984 [0.984; 1.000] and 1.000 [0.968; 1.000], respectively.
When examining the Covid 2023 dataset, the Naive Bayes model achieved the highest classification accuracy of 43%, while XGBoost and SVM-R achieved accuracies of 14%. For the healthy control dataset, the accuracy of the models ranged from 0.27 [0.24; 0.32] for k-nearest neighbors to 0.44 [0.41; 0.45] for the Support Vector Machine with a radial basis function kernel. Conclusion: Therefore, ML models trained on MALDI MS of nasopharyngeal swabs obtained from patients with Covid during the initial phase of the pandemic, as well as symptomatic non-Covid individuals, showed excellent classification performance, which aligns with the results of previous studies. However, when applied to swabs from healthy individuals and a limited sample of patients with Covid in the late phase of the pandemic, ML models exhibited lower classification accuracy.
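The accuracy, sensitivity, and specificity values reported above all derive from confusion-matrix counts over the case (Covid) and control (non-Covid) groups. A minimal sketch; the counts below are illustrative, not the study's:

```python
def classification_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity (recall on cases), and specificity (recall on
    controls) from confusion-matrix counts of a binary classifier."""
    total = tp + fp + tn + fn
    return {
        "accuracy": (tp + tn) / total,
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }


# Illustrative confusion-matrix counts for a Covid-vs-ARVI test fold:
m = classification_metrics(tp=106, fp=10, tn=61, fn=2)
```

In nested cross-validation, these metrics are computed on each outer test fold, and the intervals quoted in the abstract summarize their spread across folds.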

Keywords: SARS-CoV-2, MALDI-TOF MS, ML models, nasopharyngeal swabs, classification

Procedia PDF Downloads 106
1236 Development and Validation of Cylindrical Linear Oscillating Generator

Authors: Sungin Jeong

Abstract:

This paper presents a linear oscillating generator of cylindrical type for hybrid electric vehicle application. The focus of the study is the suggestion of an optimal model and design rule for the cylindrical linear oscillating generator with permanent magnets in the back-iron translator. The cylindrical topology is initially modeled using an equivalent magnetic circuit that considers leakage elements. This topology with permanent magnets in the back-iron translator is characterized by the number of phases and the stroke displacement. For a more accurate analysis of an oscillating machine, the thrust of the single-phase and three-phase systems is compared while moving one pole pitch forward and backward. Through this analysis and comparison, a single-phase system of cylindrical topology is selected as the optimal topology. Finally, the detailed design of the optimal topology takes magnetic saturation effects into account through finite element analysis. In addition, the losses are examined to obtain more accurate results: copper loss in the conductors of the machine windings, eddy-current loss in the permanent magnets, and iron loss in the specific electrical steel used. Considerations of thermal performance and mechanical robustness are essential, because the high temperatures generated in each region of the generator affect the overall efficiency and the machine insulation. Besides, an electric machine with linear oscillating movement requires a support system that can resist dynamic forces and mechanical masses. Accordingly, the fatigue analysis of the shaft is carried out using the kinetic equations, and the thermal characteristics are analyzed at the operating frequency in each region. The results of this study give an important design rule for linear oscillating machines, enabling more accurate machine design and more accurate prediction of machine performance.
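The equivalent-magnetic-circuit step used for initial modeling treats each flux path as a reluctance driven by the winding MMF. A minimal series-circuit sketch that, unlike the paper's model, omits the leakage elements; all dimensions are illustrative, not the machine's:

```python
import math


def reluctance(length, mu_r, area, mu0=4e-7 * math.pi):
    """Magnetic reluctance R = l / (mu0 * mu_r * A) of one path segment."""
    return length / (mu0 * mu_r * area)


def airgap_flux(mmf, reluctances):
    """Series equivalent magnetic circuit: flux = MMF / sum of reluctances
    (the magnetic analogue of Ohm's law)."""
    return mmf / sum(reluctances)


# Illustrative segment data (not the paper's machine dimensions):
r_gap = reluctance(length=1e-3, mu_r=1.0, area=4e-4)     # 1 mm air gap
r_core = reluctance(length=0.2, mu_r=2000.0, area=4e-4)  # back-iron path
flux = airgap_flux(mmf=400.0, reluctances=[r_gap, r_core])
```

In such circuits the air gap usually dominates the total reluctance, which is why the initial sizing can be done before the finite element analysis refines the design for saturation.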

Keywords: equivalent magnetic circuit, finite element analysis, hybrid electric vehicle, linear oscillating generator

Procedia PDF Downloads 194
1235 The Impact of Total Parenteral Nutrition on Pediatric Stem Cell Transplantation and Its Complications

Authors: R. Alramyan, S. Alsalamah, R. Alrashed, R. Alakel, F. Altheyeb, M. Alessa

Abstract:

Background: Nutritional support with total parenteral nutrition (TPN) is usually commenced in hematopoietic stem cell transplantation (HSCT) patients. However, it has both benefits and risks: complications related to the central venous catheter, such as infections, and metabolic disturbances, including abnormal liver function, are of particular concern in such patients. Methods: A retrospective chart review of all pediatric patients who underwent HSCT between 2015 and 2018 in a tertiary hospital in Riyadh, Saudi Arabia. Patient demographics, types of conditioning, type of nutrition, and patient outcomes were collected. Statistical analysis was conducted using SPSS version 22. Frequencies and percentages were used to describe categorical variables, and mean and standard deviation for continuous variables. A P value of less than 0.05 was considered statistically significant. Results: A total of 162 HSCTs were identified during the period mentioned. Indications for allogenic transplant included hemoglobinopathy in 50 patients (31%) and acute lymphoblastic leukemia in 21 patients (13%). TPN was used in 96 patients (59.30%) for a median of 14 days, nasogastric tube (NGT) feeding in 16 patients (9.90%) for a median of 11 days, and 71 patients (43.80%) were able to tolerate oral feeding. Of the 96 patients (59.30%) who were dependent on TPN, 64 (66.7%) had severe mucositis, in comparison to 17 patients (25.8%) who were either on NGT or tolerated oral intake (P-value = 0.00). Sinusoidal obstruction syndrome (SOS) was seen in 14 patients (14.6%) who were receiving TPN compared to none of the non-TPN patients (P-value = 0.001). Moreover, the majority of patients who had SOS received myeloablative conditioning therapy for non-malignant disease (hemoglobinopathy). However, there were no statistically significant differences in graft-versus-host disease (both acute and chronic), bacteremia, or patient outcome between the two groups.
Conclusions: Nutritional support using TPN is used in the majority of patients, especially after myeloablative conditioning associated with severe mucositis. TPN was associated with SOS, especially in hemoglobinopathy patients who received myeloablative therapy. This emphasises the use of preventative measures such as fluid restriction, use of diuretics, or defibrotide in high-risk patients.

Keywords: hematopoeitic stem cell transplant, HSCT, stem cell transplant, sinusoidal obstruction syndrome, total parenteral nutrition

Procedia PDF Downloads 156
1234 Management of Acute Appendicitis with Preference on Delayed Primary Suturing of Surgical Incision

Authors: N. A. D. P. Niwunhella, W. G. R. C. K. Sirisena

Abstract:

Appendicitis is one of the most commonly encountered abdominal emergencies worldwide. Prompt clinical diagnosis and appendicectomy with minimal postoperative complications are therefore priorities. The aim of this study was to assess the overall management of acute appendicitis in Sri Lanka, with special reference to delayed primary suturing of the surgical site, comparing the outcomes with other local and international studies. Data were collected prospectively from 155 patients who underwent appendicectomy following clinical and radiological diagnosis with ultrasonography. Histological assessment was done for all specimens. All perforated appendices were managed with delayed primary closure. Patients were followed up for 28 days to assess complications. The mean age at presentation was 27 years; the mean pre-operative waiting time following admission was 24 hours; the average hospital stay was 72 hours; the accuracy of clinical diagnosis of appendicitis, as confirmed by histology, was 87.1%; the postoperative wound infection rate was 8.3%, and among those patients, 5% had perforated appendices; 4 patients had postoperative complications managed without re-opening. No fistula formation or mortality was reported. The current study was compared with previously published data on the management of acute appendicitis in Sri Lanka and the United Kingdom (UK). Diagnostic accuracy was comparable, but postoperative complications were significantly reduced (current study 9.6%, compared with 16.4% in the earlier Sri Lankan study and 14.1% in the UK study). In recent years there has been an exponential rise in the use of Computerised Tomography (CT) imaging in the assessment of patients with suspected acute appendicitis. Yet the diagnostic accuracy achieved without CT, and the treatment outcomes of acute appendicitis in this study, match other local studies as well as the UK data; CT usage therefore does not appear to have increased the diagnostic accuracy of acute appendicitis significantly.
In particular, delayed primary closure may have reduced the postoperative wound infection rate for ruptured appendices, and this approach is suggested for further evaluation as a safe and effective practice in other hospitals worldwide.
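The kind of between-study comparison of complication rates reported above can be checked with a two-proportion z-test. The sketch below is illustrative only: the current study's denominator (n=155) is given in the abstract, but the earlier comparison study's sample size is not, so an assumed placeholder value is used.

```python
import math

def two_proportion_z(p1, n1, p2, n2):
    """Two-proportion z statistic using the pooled proportion."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Current study: 9.6% complications of n=155.
# The earlier Sri Lankan study's n=200 is an assumed placeholder.
z = two_proportion_z(0.096, 155, 0.164, 200)  # negative z favours the current study
```

Whether such a difference reaches significance depends heavily on the comparison study's actual sample size, which is why the abstract's own significance claim should be read from the original publications.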

Keywords: acute appendicitis, computerised tomography, diagnostic accuracy, delayed primary closure

Procedia PDF Downloads 164
1233 Comparison of Different Reanalysis Products for Predicting Extreme Precipitation in the Southern Coast of the Caspian Sea

Authors: Parvin Ghafarian, Mohammadreza Mohammadpur Panchah, Mehri Fallahi

Abstract:

Synoptic patterns from the surface up to the tropopause are very important for forecasting the weather and atmospheric conditions, and there are many tools to prepare and analyze these maps. Reanalysis data, the outputs of numerical weather prediction models, satellite images, meteorological radar, and weather station data are used in world forecasting centers to predict the weather. Forecasting extreme precipitation on the southern coast of the Caspian Sea (CS) is a major challenge due to the complex topography, and there are different types of climate in these areas. In this research, we used two reanalysis datasets, the ECMWF Reanalysis 5th Generation (ERA5) and the National Centers for Environmental Prediction / National Center for Atmospheric Research (NCEP/NCAR) reanalysis, for verification of the numerical model. ERA5 is the latest ECMWF reanalysis; its temporal resolution is hourly, while that of NCEP/NCAR is six-hourly. Atmospheric parameters such as mean sea level pressure, geopotential height, relative humidity, wind speed and direction, and sea surface temperature were selected and analyzed, and different types of precipitation (rain and snow) were considered. The results showed that NCEP/NCAR better captures the intensity of the atmospheric systems, whereas ERA5 is suitable for extracting parameter values at a specific point. ERA5 is also appropriate for analyzing snowfall events over the CS (snow cover and snow depth). Sea surface temperature plays the main role in generating instability over the CS, especially when cold air passes over it; the NCEP/NCAR sea surface temperature product has low resolution near the coast. Both datasets were able to detect the meteorological synoptic patterns that led to heavy rainfall over the CS; however, due to their time lag, they are not suitable for operational forecast centers, and their application lies in research and in the verification of meteorological models.
Finally, ERA5 has a better resolution than the NCEP/NCAR reanalysis, but NCEP/NCAR data are available from 1948 and are appropriate for long-term research.
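The practical consequence of the hourly versus six-hourly resolution difference noted above can be illustrated with a toy series (synthetic values, not real ERA5 or NCEP/NCAR data): aggregating hourly precipitation to a six-hour step smooths out the short-lived extremes that matter most for heavy-rainfall studies.

```python
# Synthetic hourly "precipitation" series in mm/h, covering two 6-hour windows.
hourly = [0, 0, 1, 12, 3, 0, 0, 2, 2, 1, 0, 0]

# Mean over each 6-hour window, mimicking the coarser NCEP/NCAR time step.
six_hourly = [sum(hourly[i:i + 6]) / 6 for i in range(0, len(hourly), 6)]

peak_hourly = max(hourly)          # the 12 mm/h extreme is visible at hourly resolution
peak_six_hourly = max(six_hourly)  # the same event is smoothed to under 3 mm/h
```

This is one reason the hourly ERA5 product is better suited to extracting point values for extreme events, even where the coarser product represents the synoptic-scale system well.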

Keywords: synoptic patterns, heavy precipitation, reanalysis data, snow

Procedia PDF Downloads 122
1232 A Six-Year Case Study Evaluating the Stakeholders’ Requirements and Satisfaction in Higher Educational Establishments

Authors: Ioannis I. Angeli

Abstract:

Worldwide, and mainly in the European Union, many standards, regulations, models and systems exist for the evaluation and identification of stakeholders’ requirements of individual universities and of higher education (HE) in general. All these systems aim to measure or evaluate the universities’ quality assurance systems and the services offered to the recipients of HE, mainly the students. Numerous surveys have been conducted in the past, either by individual universities or by organized bodies, to identify students’ satisfaction or to evaluate to what extent these requirements are fulfilled. In this paper, the main results of an ongoing six-year joint research project are presented briefly. This research involves an in-depth investigation of students’ satisfaction and students’ personal requirements, a gap analysis between these two parameters, and a comparison of different universities. Through this research, an attempt is made to address four important questions in higher education establishments (HEE): (1) Are there any common requirements, parameters, good practices or questions that apply to a large number of universities and would assure that students’ requirements are fulfilled? (2) To what extent do the individual programs of HEE fulfil the requirements of the stakeholders? (3) Are there any similarities in specific programs among European HEE? (4) To what extent is the knowledge acquired in a specific course program utilized in a specific country? For the execution of the research, internationally accepted questionnaires were used to evaluate to what extent the students’ requirements and satisfaction were fulfilled in 2012 and five years later (2017). Samples of students and universities were taken from many European universities.
The questionnaires used, the sampling method and methodology adopted, as well as the comparison tables and results, will be very valuable to any university willing to follow the same route and methodology or to compare the results with its own HEE. Apart from the unique methodology, valuable results emerge from the four case studies. There is a great difference between the students’ expectations, or the importance they assign, and what they actually receive from their universities (in all parameters they receive less). When there is a crisis or budget cut in HEE, there is a direct impact on students. There are many differences in the subjects taught across European universities.
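The gap analysis described above reduces to a simple computation per questionnaire parameter: the difference between the importance students assign and the satisfaction they report. A minimal sketch, with hypothetical Likert-scale scores and parameter names (not the study's actual questionnaire items):

```python
# Hypothetical mean importance vs. satisfaction scores on a 1-5 Likert scale.
importance   = {"teaching quality": 4.8, "facilities": 4.2, "career support": 4.5}
satisfaction = {"teaching quality": 4.1, "facilities": 3.6, "career support": 3.3}

# Gap = importance - satisfaction; a positive gap means students receive
# less than they expect, as the study reports for all parameters.
gaps = {k: round(importance[k] - satisfaction[k], 2) for k in importance}
worst = max(gaps, key=gaps.get)  # parameter with the largest unmet expectation
```

Ranking parameters by gap size, rather than by raw satisfaction, is what lets an institution prioritize where expectations are most unmet.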

Keywords: quality in higher education, students' requirements, education standards, student's survey, stakeholder's requirements, mechanical engineering courses

Procedia PDF Downloads 155
1231 Quality Assurance in Translation Crowdsourcing: The TED Open Translation Project

Authors: Ya-Mei Chen

Abstract:

The participatory culture enabled by Web 2.0 technologies has led to the emergence of online translation crowdsourcing, which relies mainly on the collective intelligence of volunteer translators. Because many volunteer translators have no formal translator training, concerns have been raised about the quality of crowdsourced translations. Some empirical research has examined the translation quality of for-profit crowdsourcing initiatives, but quality assurance in non-profit translation crowdsourcing has rarely been explored in detail. Using the TED Open Translation Project as a case study, this paper investigates how the translation-review-approval method adopted by TED can (1) direct the volunteer translators’ use of translation strategies as well as the reviewers’ adoption of revising strategies and (2) shape the final translation products. To examine the actual effect of TED’s translation-review-approval method closely, this paper will focus on its two major quality assurance mechanisms: TED’s style guidelines and quality review. Based on an anonymous questionnaire, this research will first explore whether the volunteer translators and reviewers are aware of the style guidelines and whether their use of translation strategies is similar to that advised in the guidelines. The questionnaire, which will be posted online, will consist of two parts, demographic information and translation strategies, and invitations to complete it will be distributed through TED Translator Facebook groups. To investigate whether the style guidelines have any substantial impact on actual subtitling practices, a comparison will be made between the original English subtitles of 20 TED talks (each around 5 to 7 minutes) and their Chinese subtitle translations to identify regularly adopted strategies.
Concerning the function of the reviewing stage, a comparative study will be conducted between the drafts of the Chinese subtitles for 10 short English talks and the revised versions of these drafts, so as to examine the actual revising strategies and their effect on translation quality. Based on the results obtained from the questionnaire and the textual comparisons, this paper will provide an in-depth analysis of quality assurance in the TED Open Translation Project. It is hoped that this research, through a detailed investigation of non-profit translation crowdsourcing, will enable translation researchers and practitioners to better understand quality control in translation crowdsourcing in the digital age.
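A draft-versus-revision comparison of the kind described can be bootstrapped mechanically before manual strategy coding, for example by aligning the two subtitle files line by line. A minimal sketch using Python's standard difflib (hypothetical subtitle lines, shown in English for readability, not actual TED data):

```python
import difflib

# Hypothetical draft and reviewer-revised subtitle lines.
draft   = ["We need to rethink education", "It is very very important"]
revised = ["We need to rethink education", "It is crucial"]

# 'equal' lines passed review unchanged; 'replace' lines mark revisions
# that can then be coded manually for the revising strategy involved.
sm = difflib.SequenceMatcher(a=draft, b=revised)
ops = [tag for tag, *_ in sm.get_opcodes()]
```

Such automatic alignment only locates where reviewers intervened; classifying each intervention as a particular revising strategy still requires the manual analysis the abstract describes.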

Keywords: quality assurance, TED, translation crowdsourcing, volunteer translators

Procedia PDF Downloads 229
1230 Statistical Characteristics of Code Formula for Design of Concrete Structures

Authors: Inyeol Paik, Ah-Ryang Kim

Abstract:

In this research, a statistical analysis is carried out to examine the statistical properties of the formulas given in design codes for concrete structures. The design formulas of the Korea highway bridge design code - the limit state design method (KHBDC), which is the current national bridge design code, and of the design code for concrete structures by the Korea Concrete Institute (KCI) are used in the analysis. The safety levels provided by the strength formulas of the design codes are defined on the basis of probabilistic and statistical theory. KHBDC is a reliability-based design code, and its load and resistance factors were calibrated to attain a target reliability index; defining the statistical properties of the design formulas is essential to this calibration process. In general, the statistical characteristics of a member strength arise from three factors: first, the difference between the material strength of the actual construction and that used in the design calculation; second, the difference between the actual dimensions of the constructed sections and those used in the design calculation; and third, the difference between the strength of the actual member and the formula simplified for the design calculation. In this paper, the statistical study focuses on the third difference. The formulas for calculating the shear strength of concrete members are presented in different ways in KHBDC and KCI. In this study, the statistical properties of the design formulas were obtained through comparison with a database comprising experimental results from the reference publications; the test specimens were either reinforced with shear stirrups or not. For the applied database, the bias factor was about 1.12 and the coefficient of variation was about 0.18.
By applying the statistical properties of the design formula in the reliability analysis, it is shown that the resistance factors of the current design codes satisfy the target reliability indexes of both codes. In addition, the minimum resistance factors of the KHBDC, which is written in the material resistance factor format, and of the KCI code, which is written in the member resistance factor format, are obtained and presented. Further research is underway to calibrate the resistance factors for the high-strength and high-performance concrete design guide.
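The bias factor and coefficient of variation quoted above are, in the usual calibration convention, the mean and normalized scatter of the ratio of measured to predicted strength over the test database. A minimal sketch with hypothetical test-to-prediction ratios (the real values would come from the experimental database):

```python
import statistics

# Hypothetical ratios of experimental shear strength to the code-formula
# prediction (V_test / V_code), one per database specimen.
bias = [1.05, 1.22, 0.98, 1.31, 1.10, 1.07, 1.19, 1.04]

bias_factor = statistics.mean(bias)          # mean test/prediction ratio
cov = statistics.stdev(bias) / bias_factor   # coefficient of variation
```

A bias factor above 1.0 means the code formula is conservative on average; the coefficient of variation feeds directly into the reliability analysis used to check the target reliability index.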

Keywords: concrete design code, reliability analysis, resistance factor, shear strength, statistical property

Procedia PDF Downloads 319
1229 Evaluation of Non-Pharmacological Method-Transcervical Foley Catheter and Misoprostol to Intravaginal Misoprostol for Preinduction Cervical Ripening

Authors: Krishna Dahiya, Esha Charaya

Abstract:

Induction of labour is a common obstetric intervention; around 1 in every 4 patients undergoes induction of labour for various indications. Purpose: To study the efficacy of the combination of a Foley bulb and vaginal misoprostol in comparison with vaginal misoprostol alone for cervical ripening and induction of labour. Methods: A prospective randomised study was conducted on 150 patients with term singleton pregnancy admitted for induction of labour. Seventy-five patients were induced with both the Foley bulb and vaginal misoprostol, and another 75 were given vaginal misoprostol alone. The two groups were then compared with respect to change in Bishop score, induction to active phase of labour interval, induction-delivery interval, duration of labour, maternal complications and neonatal outcomes. Data were analysed using the statistical software SPSS version 11.5; tests with P < .05 were considered significant. Results: The two groups were comparable with respect to maternal age, parity, gestational age, indication for induction, and initial Bishop scores. Both groups had a significant change in Bishop score (2.99 ± 1.72 and 2.17 ± 1.48 respectively), with a statistically significant difference between them (p=0.001, 95% C.I. -0.1978 to 0.8378). The mean induction-delivery interval was significantly lower in the combination group (11.76 ± 5.89 hours) than in the misoprostol group (14.54 ± 7.32 hours), a difference of 2.78 hours (p=0.018, 95% CI -5.1042 to -0.4558). The induction-delivery interval was also significantly lower in nulliparous women of the combination group (13.64 ± 5.75 hours) than of the misoprostol group (18.4 ± 7.09 hours), a difference of 4.76 hours (p=0.002, 95% CI 1.0465 to 14.7335). There was no difference between the groups in mode of delivery, infant weight, Apgar score or intrapartum complications.
Conclusion: The present study concludes that the addition of a Foley catheter to vaginal misoprostol has a synergistic effect, resulting in earlier cervical ripening and delivery. These results suggest that the combination may be used to achieve timely and safe delivery in the presence of an unfavorable cervix: the combination of the Foley bulb and vaginal misoprostol resulted in a shorter induction-to-delivery time than vaginal misoprostol alone, without increasing labour complications.
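A between-group comparison of the reported induction-delivery intervals can be reconstructed from the summary statistics alone. The sketch below uses Welch's t statistic as one plausible way to compare the two group means; the abstract does not state which exact test variant was run, so this is an illustration rather than the study's own computation.

```python
import math

def welch_t(m1, s1, n1, m2, s2, n2):
    """Welch's t statistic from group means, standard deviations and sizes."""
    return (m1 - m2) / math.sqrt(s1 ** 2 / n1 + s2 ** 2 / n2)

# Induction-to-delivery times reported in the abstract (hours):
# combination group 11.76 ± 5.89 (n=75) vs. misoprostol 14.54 ± 7.32 (n=75).
t = welch_t(11.76, 5.89, 75, 14.54, 7.32, 75)  # negative t favours the combination
```

With roughly 148 degrees of freedom, a t statistic of this magnitude is consistent with the significant p-value (p=0.018) the abstract reports.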

Keywords: Bishop score, Foley catheter, induction of labor, misoprostol

Procedia PDF Downloads 305
1228 Formula Student Car: Design, Analysis and Lap Time Simulation

Authors: Rachit Ahuja, Ayush Chugh

Abstract:

Aerodynamic forces and moments, as well as tire-road forces, largely affect the maneuverability of a vehicle. Car manufacturers are fascinated and influenced by the various aerodynamic improvements made in formula cars, and there is a constant effort to apply these improvements to road vehicles. In motor racing, the key differentiating factor in a high-performance car is its ability to maintain the highest possible acceleration in the appropriate direction. One of the main areas of concern in motor racing is the balance of aerodynamic forces and the streamlining of the airflow across the body of the vehicle. At present, formula racing cars are regulated by stringent FIA norms, with constraints on the dimensions of the vehicle, engine capacity, etc., so one of the fields with large scope for improvement is the aerodynamics of the vehicle. In this project work, an attempt has been made to design a formula student (FS) car, improve its aerodynamic characteristics through steady-state CFD simulations, and simultaneously calculate its lap time. Initially, a CAD model of the formula student car was made using SOLIDWORKS as per the given dimensions, and a steady-state external airflow simulation was performed on the baseline model, without any add-on devices, to evaluate and analyze the airflow pattern around the car and the aerodynamic forces using the FLUENT solver. A detailed survey of the different add-on devices used in racing applications, such as the front wing, diffuser, shark fin and T-wing, was made, and geometric models of these devices were created and assembled with the baseline model. Steady-state CFD simulations were then run on the modified car to evaluate the aerodynamic effects of these add-on devices. Finally, lap time simulations of the formula student car with and without the add-on devices were compared with the help of MATLAB.
Aerodynamic performance measures, namely lift, drag and their coefficients, were evaluated for different configurations and designs of the add-on devices at different vehicle speeds. The parametric CFD simulations on the formula student car fitted with add-on devices show a considerable reduction in drag and lift forces, besides streamlining the airflow across the car. The best configuration of these add-on devices was obtained from these simulations, and their use showed an improvement in the performance of the car, as confirmed by the lap time simulations.
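The lift and drag coefficients evaluated above follow from the standard definition C = 2F / (rho * v^2 * A). A minimal sketch with illustrative numbers (air density, speed, frontal area and force are placeholders, not results from the study's FLUENT runs):

```python
def aero_coefficient(force_n, rho=1.225, v=20.0, area=1.1):
    """Dimensionless lift or drag coefficient: C = 2F / (rho * v^2 * A).

    rho  - air density in kg/m^3 (sea-level standard assumed)
    v    - freestream speed in m/s
    area - reference (frontal) area in m^2
    """
    return 2 * force_n / (rho * v ** 2 * area)

# Illustrative: a 250 N drag force at 20 m/s on a 1.1 m^2 frontal area.
cd = aero_coefficient(250.0)
```

Because the coefficient is nondimensional, it lets configurations with different add-on devices be compared across the range of vehicle speeds simulated, which is exactly what the parametric CFD study exploits.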

Keywords: aerodynamic performance, front wing, laptime simulation, t-wing

Procedia PDF Downloads 196