Search results for: computed laminography

232 Evaluation of Weather Risk Insurance for Agricultural Products Using a 3-Factor Pricing Model

Authors: O. Benabdeljelil, A. Karioun, S. Amami, R. Rouger, M. Hamidine

Abstract:

A model for preventing the risks related to climate conditions in the agricultural sector is presented. It determines the yearly optimum premium a producer should pay in order to reach his required turnover. The model is based on both climatic stability and the 'soft' response of commonly grown species to average climate variations at the same place, inside a safety ball that can be determined from past meteorological data. This allows a linear regression expression to be used for the dependence of production on the driving meteorological parameters, the main ones being daily average sunlight, rainfall, and temperature. By a simple best-parameter fit against an expert table drawn up with professionals, an optimal representation of yearly production is determined from records of previous years, and the yearly payback is evaluated from the minimum yearly turnover produced. The model also requires accurate pricing of the commodity at year N+1. Therefore, a pricing model is developed using three state variables, namely the spot price, the difference between the mean-term and the long-term forward price, and the long-term structure of the model. Historical data are used to calibrate the parameters of the state variables and to price the commodity. Application to beet sugar underlines the pricer's precision: the agreement between the computed result and real-world data is 99.5%. The optimal premium is then deduced and gives the producer a useful bound for negotiating an offer from insurance companies to effectively protect the harvest. The application to beet production in the French Oise department illustrates the reliability of the present model, with as little as 6% difference between predicted and real data. The model can be adapted to almost any agricultural field by changing the state parameters and calibrating their associated coefficients.
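
As a rough illustration of the regression step described above, the sketch below fits yearly production to daily average sunlight, rainfall, and temperature by least squares and derives a crude shortfall measure against a required turnover. All values, variable names, and the shortfall rule are illustrative assumptions, not data or formulas from the paper.

```python
# Minimal sketch of the production regression step: yearly output fitted as a linear
# function of the driving meteorological parameters. Values are illustrative placeholders.
import numpy as np

# Hypothetical yearly records: average sunlight (h/day), rainfall (mm), temperature (degC)
X = np.array([
    [5.1, 620.0, 11.2],
    [4.8, 700.0, 10.9],
    [5.4, 580.0, 11.8],
    [5.0, 650.0, 11.0],
    [4.6, 730.0, 10.5],
])
y = np.array([88.0, 84.0, 93.0, 87.0, 80.0])    # yearly production (t/ha), illustrative

# Least-squares fit of y = b0 + b1*sunlight + b2*rainfall + b3*temperature
A = np.column_stack([np.ones(len(y)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
predicted = A @ coef

# Crude premium proxy: expected shortfall below the producer's required production level
required = 85.0                                  # minimum acceptable production, illustrative
shortfall = np.clip(required - predicted, 0.0, None)
print("coefficients:", np.round(coef, 3))
print("mean shortfall used for premium sizing:", round(shortfall.mean(), 2))
```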

Keywords: agriculture, production model, optimal price, meteorological factors, 3-factor model, parameter calibration, forward price

Procedia PDF Downloads 377
231 Evaluating the Impact of Urban Green Spaces on Urban Microclimate of Lahore: A Rapidly Urbanizing Metropolis of the Punjab-Pakistan

Authors: Muhammad Nasar-U-Minallah, Dagmar Haase, Salman Qureshi, Safdar Ali Shirazi

Abstract:

Urban green spaces (UGS) play a key role in the urban ecology of an area since they provide significant ecological services that compensate for natural environmental functions damaged by rapid urbanization. The transformation of urban green spaces into impervious landscapes has been recognized as a key factor driving the distinctive urban heat and associated microclimatic changes. There is no doubt that urban green spaces offer a range of ecosystem services that can help to mitigate the ill effects of urbanization, heat anomalies, and climate change. The present study attempts to appraise the impact of urban green spaces on the urban thermal environment and thus on the microclimatic conditions of Lahore, Pakistan. The influence of urban heat has been studied using Landsat 8 data. The land surface temperature (LST) of Lahore was computed through the radiative transfer method (RTM). The spatial variation of land surface temperature was retrieved to describe its local heat effect on the urban microclimate. The associations between LST, the normalized difference vegetation index, and the normalized difference built-up index were investigated to explore the impact of urban green spaces and impervious surfaces on the urban microclimate. The results of this study show significant land-use change within the study area (an 18% increase in impervious surface). The conversion of natural green cover to commercial and residential uses considerably increases LST. Furthermore, the results show that green spaces were the major heat sinks while impervious landscapes were the major heat sources in the study area. Urban green spaces show 1 to 3℃ lower LST compared with their surrounding built-up area. This study indicates that urban green spaces help to moderate the urban microclimate, which is significant for a sustainable urban environment and for improving the quality of life of urban inhabitants.
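
The index computations underlying the analysis can be sketched as follows. The Landsat 8 band arrays here are synthetic placeholders (in practice they would come from the surface reflectance product, with NIR in band 5, red in band 4, and SWIR1 in band 6), and the correlation step simply mirrors the NDVI/NDBI-versus-LST comparison described above.

```python
# Sketch of NDVI/NDBI computation and their pixel-wise correlation with LST.
# All arrays below are synthetic placeholders, not actual Landsat 8 data.
import numpy as np

rng = np.random.default_rng(0)
red, nir, swir1 = (rng.uniform(0.05, 0.4, (100, 100)) for _ in range(3))
lst = rng.uniform(25.0, 45.0, (100, 100))       # placeholder LST in degC

ndvi = (nir - red) / (nir + red)                # vegetation index
ndbi = (swir1 - nir) / (swir1 + nir)            # built-up index

# Pixel-wise Pearson correlation of each index with LST
for name, idx in (("NDVI", ndvi), ("NDBI", ndbi)):
    r = np.corrcoef(idx.ravel(), lst.ravel())[0, 1]
    print(f"corr({name}, LST) = {r:+.3f}")
```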

Keywords: thermal environment, urban green space, cooling effect, microclimate, Lahore

Procedia PDF Downloads 106
230 Comparison of Receiver Operating Characteristic Curve Smoothing Methods

Authors: D. Sigirli

Abstract:

The Receiver Operating Characteristic (ROC) curve is a commonly used statistical tool for evaluating the diagnostic performance of screening and diagnostic tests with continuous or ordinal results, which aim to predict the probability of the presence or absence of a condition, usually a disease. When the test results are measured as numeric values, sensitivity and specificity can be computed across all possible threshold values that discriminate the subjects as diseased or non-diseased. There are infinitely many possible decision thresholds along the continuum of the test results. The ROC curve presents the trade-off between sensitivity and 1-specificity as the threshold changes. The empirical ROC curve, which is a non-parametric estimator of the ROC curve, is robust and represents the data accurately. However, especially for small sample sizes, it suffers from variability, and, being a step function, it can yield different false-positive rates for a single true-positive rate and vice versa. Moreover, because the estimated ROC curve is jagged while the true ROC curve is smooth, it underestimates the true ROC curve. Since the true ROC curve is assumed to be smooth, several smoothing methods have been explored: using kernel estimates, using log-concave densities, fitting the parameters of a specified density function to the data by maximum-likelihood fitting of univariate distributions, or creating a probability distribution by fitting a specified distribution to the data and using smooth versions of the empirical distribution functions. In the present paper, we propose a smooth ROC curve estimate based on a boundary-corrected kernel function and compare the performance of ROC curve smoothing methods for diagnostic test results coming from different distributions and different sample sizes. We performed a simulation study with 1000 repetitions to compare the performance of the methods in different scenarios. The performance of the proposed method was typically better than that of the empirical ROC curve and only slightly worse than the binormal model when the underlying samples were in fact generated from the normal distribution.
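
A minimal sketch of kernel smoothing of an ROC curve is given below, assuming a plain Gaussian kernel with Silverman's bandwidth rather than the boundary-corrected kernel proposed in the paper; the simulated scores follow the binormal scenario mentioned above.

```python
# Kernel-smoothed ROC sketch: smooth the distribution functions of the two groups,
# then trace sensitivity against 1-specificity over a threshold grid.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
non_diseased = rng.normal(0.0, 1.0, 60)
diseased = rng.normal(1.2, 1.0, 60)

def silverman(x):
    # Rule-of-thumb bandwidth for a Gaussian kernel
    return 1.06 * x.std(ddof=1) * len(x) ** (-1 / 5)

def smooth_cdf(grid, sample, h):
    # Kernel-smoothed distribution function: mean of Gaussian CDFs centred at the data
    return norm.cdf((grid[:, None] - sample[None, :]) / h).mean(axis=1)

thresholds = np.linspace(-4, 6, 400)
fpr = 1 - smooth_cdf(thresholds, non_diseased, silverman(non_diseased))
tpr = 1 - smooth_cdf(thresholds, diseased, silverman(diseased))

# Trapezoidal area under the smoothed curve (sorted by increasing FPR)
order = np.argsort(fpr)
auc = np.sum(np.diff(fpr[order]) * (tpr[order][:-1] + tpr[order][1:]) / 2)
print(f"smoothed AUC ~ {auc:.3f}")
```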

Keywords: empirical estimator, kernel function, smoothing, receiver operating characteristic curve

Procedia PDF Downloads 152
229 Experimental Modeling of Spray and Water Sheet Formation Due to Wave Interactions with Vertical and Slant Bow-Shaped Model

Authors: Armin Bodaghkhani, Bruce Colbourne, Yuri S. Muzychka

Abstract:

The process of spray-cloud formation and the flow kinematics produced by breaking-wave impact on vertical and slant lab-scale bow-shaped models were experimentally investigated. Bubble Image Velocimetry (BIV) and Image Processing (IP) techniques were applied to study the various types of wave-model impacts. Different wave characteristics were generated in a tow tank to investigate the effects of wave characteristics, such as wave phase velocity and wave steepness, on droplet velocities and on the behavior of the spray-cloud formation process. The phase ensemble-averaged vertical velocity and turbulent intensity were computed. A high-speed camera and diffused LED backlights were used to capture images for post-processing. Pressure sensors and capacitive wave probes were used to measure the wave impact pressure and the free-surface profile at different locations on the model and in the wave tank, respectively. Droplet sizes and velocities were measured using the BIV and IP techniques, tracing bubbles and droplets and correlating the texture in the images. The impact pressure and droplet size distributions were compared with several previous experimental models, and satisfactory agreement was achieved. The distribution of droplets in front of both models is presented. Because spray formation is a highly transient process, the drag coefficient for several stages of this transient displacement was calculated for various droplet size ranges and Reynolds numbers based on the ensemble-average method. The experimental results show that the slant model produces less spray than the vertical model, and the droplet velocities generated by the wave impact with the slant model are lower than those for the vertical model.

Keywords: spray characteristics, droplet size and velocity, wave-body interactions, bubble image velocimetry, image processing

Procedia PDF Downloads 300
228 Field Prognostic Factors on Discharge Prediction of Traumatic Brain Injuries

Authors: Mohammad Javad Behzadnia, Amir Bahador Boroumand

Abstract:

Introduction: Limited-facility situations require allocating the most available resources to the greatest number of casualties. Traumatic Brain Injury (TBI) is a condition that may require transporting the patient as soon as possible, and in a mass casualty event with restricted facilities this decision is hard to make. The Extended Glasgow Outcome Score (GOSE) has been introduced to assess the global outcome after brain injuries. We therefore aimed to evaluate the prognostic factors associated with GOSE. Materials and Methods: A multicenter cross-sectional study was conducted on 144 patients with TBI admitted to trauma emergency centers. All patients with isolated TBI who were mentally and physically healthy before the trauma entered the study. The patients' information was evaluated, including demographic characteristics, duration of hospital stay, mechanical ventilation, on-admission laboratory measurements, and on-admission vital signs. We recorded the patients' TBI-related symptoms and brain computed tomography (CT) scan findings. Results: GOSE assessments showed an increasing trend across the on-discharge (7.47 ± 1.30), one-month (7.51 ± 1.30), and three-month (7.58 ± 1.21) evaluations (P < 0.001). On discharge, GOSE was positively correlated with the Glasgow Coma Scale (GCS) (r = 0.729, P < 0.001) and motor GCS (r = 0.812, P < 0.001), and inversely correlated with age (r = −0.261, P = 0.002), hospitalization period (r = −0.678, P < 0.001), pulse rate (r = −0.256, P = 0.002), and white blood cell (WBC) count. Among imaging signs and trauma-related symptoms, intracranial hemorrhage (ICH), intraventricular hemorrhage (IVH) (P = 0.006), subarachnoid hemorrhage (SAH) (P = 0.06; marginal at P < 0.1), subdural hemorrhage (SDH) (P = 0.032), and epidural hemorrhage (EDH) (P = 0.037) were significantly associated with GOSE at discharge in the multivariable analysis. Conclusion: Our study identified predictive factors that could help decide which casualty should be transported to a trauma center first. According to the current findings, GCS, pulse rate, WBC, and, among imaging signs and trauma-related symptoms, ICH, IVH, SAH, SDH, and EDH are significant independent predictors of GOSE at discharge in TBI patients.

Keywords: field, Glasgow outcome score, prediction, traumatic brain injury

Procedia PDF Downloads 76
227 Study of Three-Dimensional Computed Tomography of Frontoethmoidal Cells Using International Frontal Sinus Anatomy Classification

Authors: Prabesh Karki, Shyam Thapa Chettri, Bajarang Prasad Sah, Manoj Bhattarai, Sudeep Mishra

Abstract:

Introduction: The frontal sinus is frequently described as the most difficult sinus to access surgically due to its proximity to the cribriform plate, orbit, and anterior ethmoid artery. Frontal sinus surgery requires a detailed understanding of the cellular structure and FSDP unique to each patient, making high-resolution CT scans an indispensable tool for assessing the difficulty of planned sinus surgery. The International Frontal Sinus Anatomy Classification (IFAC) was developed to provide a more precise nomenclature for cells in the frontal recess, classifying cells based on their anatomic origin. Objectives: To assess the proportion of frontal cell variants defined by IFAC and their variation with respect to age and gender. Methods: 54 cases were enrolled after a detailed clinical history, thorough general and physical examinations, and a CT examination reported on film. The presence of frontal cells was assessed and tabulated according to the IFAC and then analyzed. The prevalence of each cell type was calculated; data were entered in MS Excel and analyzed using the Statistical Package for the Social Sciences (SPSS). Descriptive statistics and frequencies were defined for categorical and numerical variables, and frequency, percentage, mean, and standard deviation were calculated. Result: Among the 54 patients, 30 (55.6%) were male and 24 (44.4%) were female. The patients enrolled ranged from 18 to 78 years of age, and the largest group, 33.3% (n=18), was in the age group of >50 years. According to IFAC, agger nasi cells (92.6%) were the most common, whereas supraorbital ethmoidal cells were the least common (n=16, 29.6%). The prevalence of the other frontoethmoidal cells among the 54 cases was SAC 57.4%, SAFC 38.9%, SBC 74.1%, SBFC 33.3%, and FSC 38.9%. Conclusion: IFAC is an international consensus document that describes an anatomically precise nomenclature for classifying frontoethmoidal cell anatomy. This study has defined the prevalence, symmetry, and reliability of frontoethmoidal cells as established by the IFAC system, as in other parts of the world.

Keywords: frontal sinus, frontoethmoidal cells, international frontal sinus anatomy classification

Procedia PDF Downloads 100
226 Quality Assurances for an On-Board Imaging System of a Linear Accelerator: Five Months Data Analysis

Authors: Liyun Chang, Cheng-Hsiang Tsai

Abstract:

To ensure that radiation is delivered precisely to the target in cancer patients, linear accelerators are equipped with a pretreatment on-board imaging system through which the patient setup is verified before each daily treatment. New-generation radiotherapy using beam-intensity modulation, usually associated with steep dose gradients, is claimed to achieve both a higher degree of dose conformation in the target and a further reduction of toxicity in normal tissues. However, this benefit is lost if the beam is delivered imprecisely. To avoid irradiating critical organs or normal tissues instead of the target, it is very important to carry out quality assurance (QA) of this on-board imaging system. The QA of the On-Board Imager® (OBI) system of one Varian Clinac-iX linear accelerator was performed through procedures modified from a relevant report and AAPM TG-142. Two image modalities of the OBI system, 2D radiography and 3D cone-beam computed tomography (CBCT), were examined. Daily and monthly QA was performed for five months in the categories of safety, geometric accuracy, and image quality. A marker phantom and a blade calibration plate were used for the QA of geometric accuracy, while the Leeds phantom and the Catphan 504 phantom were used for the QA of radiographic and CBCT image quality, respectively. The reference images were generated with a GE LightSpeed CT simulator and an ADAC Pinnacle treatment planning system. Finally, the image quality was analyzed with the OsiriX medical imaging system. For the geometric accuracy tests, the average deviations of the OBI isocenter in each direction were less than 0.6 mm with uncertainties less than 0.2 mm, while all the other items showed displacements of less than 1 mm. For radiographic image quality, the spatial resolution was 1.6 lp/cm with contrast less than 2.2%. The spatial resolution, low contrast, and HU homogeneity of CBCT were greater than 6 lp/cm, less than 1%, and within 20 HU, respectively. All tests were within the criteria, except that the HU value of Teflon measured in full-fan mode exceeded the suggested value; this could be due to its inherently high HU value and needs to be rechecked. The OBI system in our facility was thus demonstrated to be reliable, with stable image quality. QA of the OBI system is necessary to achieve the best treatment for each patient.

Keywords: CBCT, image quality, quality assurance, OBI

Procedia PDF Downloads 300
225 Milling Simulations with a 3-DOF Flexible Planar Robot

Authors: Hoai Nam Huynh, Edouard Rivière-Lorphèvre, Olivier Verlinden

Abstract:

Manufacturing technologies are becoming continuously more diversified. The increasing use of robots for applications such as assembly, painting, and welding has also affected the field of machining. Machining robots can deal with larger workspaces than conventional machine tools at a lower cost and thus represent a very promising alternative for machining applications. Furthermore, their inherent structure gives them great flexibility of motion to reach any location on the workpiece with the desired orientation. Nevertheless, machining robots suffer from a lack of stiffness at their joints, restricting their use to applications involving low cutting forces, especially finishing operations. Vibratory instabilities may also occur during machining and deteriorate the precision, leading to scrap parts. Some researchers are therefore concerned with identifying optimal parameters in robotic machining. This paper continues the development of a virtual robotic machining simulator intended to find optimized cutting parameters, for example in terms of depth of cut or feed per tooth. The simulation environment combines an in-house milling routine (DyStaMill), which computes the cutting forces and material removal, with an in-house multibody library (EasyDyn), which is used to build a dynamic model of a 3-DOF planar robot with flexible links. The position of the robot end-effector subjected to milling forces is controlled through an inverse kinematics scheme, while the positions of its joints are controlled separately. Each joint is actuated by a servomotor whose transfer function has been computed in order to tune the corresponding controller. The output results show the evolution of the cutting forces with and without a deformable robot structure, together with the tracking errors of the end-effector. Illustrations of the resulting machined surfaces are also presented. Taking the flexibility of the links into account highlights an increase in the cutting force magnitude. This proof of concept aims to enrich the database of results in robotic machining for potential improvements in production.

Keywords: control, milling, multibody, robotic, simulation

Procedia PDF Downloads 249
224 Designing Stochastic Non-Invasively Applied DC Pulses to Suppress Tremors in Multiple Sclerosis by Computational Modeling

Authors: Aamna Lawrence, Ashutosh Mishra

Abstract:

Tremors occur in 60% of the patients who have Multiple Sclerosis (MS), the most common demyelinating disease that affects the central and peripheral nervous system, and are the primary cause of disability in young adults. While pharmacological agents provide minimal benefits, surgical interventions like Deep Brain Stimulation and Thalamotomy are riddled with dangerous complications which make non-invasive electrical stimulation an appealing treatment of choice for dealing with tremors. Hence, we hypothesized that if the non-invasive electrical stimulation parameters (mainly frequency) can be computed by mathematically modeling the nerve fibre to take into consideration the minutest details of the axon morphologies, tremors due to demyelination can be optimally alleviated. In this computational study, we have modeled the random demyelination pattern in a nerve fibre that typically manifests in MS using the High-Density Hodgkin-Huxley model with suitable modifications to account for the myelin. The internode of the nerve fibre in our model could have up to ten demyelinated regions each having random length and myelin thickness. The arrival time of action potentials traveling the demyelinated and the normally myelinated nerve fibre between two fixed points in space was noted, and its relationship with the nerve fibre radius ranging from 5µm to 12µm was analyzed. It was interesting to note that there were no overlaps between the arrival time for action potentials traversing the demyelinated and normally myelinated nerve fibres even when a single internode of the nerve fibre was demyelinated. The study gave us an opportunity to design DC pulses whose frequency of application would be a function of the random demyelination pattern to block only the delayed tremor-causing action potentials. The DC pulses could be delivered to the peripheral nervous system non-invasively by an electrode bracelet that would suppress any shakiness beyond it thus paving the way for wearable neuro-rehabilitative technologies.

Keywords: demyelination, Hodgkin-Huxley model, non-invasive electrical stimulation, tremor

Procedia PDF Downloads 130
223 An Inquiry on Imaging of Soft Tissues in Micro-Computed Tomography

Authors: Matej Patzelt, Jana Mrzilkova, Jan Dudak, Frantisek Krejci, Jan Zemlicka, Zdenek Wurst, Petr Zach, Vladimir Musil

Abstract:

Introduction: Micro-CT is widely used for the examination of bone structures and teeth; visualization of soft tissues, on the other hand, is still limited. The goal of our study was to develop a methodology for imaging soft-tissue samples in micro-CT. Methodology: We used organs of rats and mice. We either prepared the organs and fixed them in a contrast solution, or cannulated the blood vessels and injected them for imaging of the vascular system. First, we scanned native specimens; then we created corrosive specimens using resins. In the next step, we injected the vascular system with either AuroVist or Exitron contrast agent. We then focused on increasing soft-tissue contrast: we scanned samples fixed in Lugol solution, in pure ethanol, and in formaldehyde solution. All methods used were then compared. Results: Native specimens did not provide sufficient tissue contrast in any of the organs. Corrosive samples of the bloodstream provided great contrast and detail; on the other hand, it was necessary to destroy the organ. A further possibility examined was injection of AuroVist contrast, which leads to great bloodstream contrast. Injection of Exitron contrast agent did not provide as much contrast as AuroVist. The soft tissues (kidney, heart, lungs, brain, and liver) were best visualized after fixation in ethanol; this type of fixation showed the best results in all studied tissues. Lugol solution gave excellent results in muscle tissue. Fixation in formaldehyde solution showed tissue contrast of similar quality to ethanol. Conclusion: Before imaging, we first need to determine which structures of the soft tissues we want to visualize. In the case of the bloodstream, AuroVist and corrosive specimens were best; muscle tissue is best visualized with Lugol solution; and for organs containing cavities, such as kidneys or the brain, ethanol fixation was the best approach.

Keywords: experimental imaging, fixation, micro-CT, soft tissues

Procedia PDF Downloads 326
222 The Role of Group Dynamics in Creativity: A Study Case from Italy

Authors: Sofya Komarova, Frashia Ndungu, Alessia Gavazzoli, Roberta Mineo

Abstract:

Modern society requires people to be flexible and to develop innovative solutions to unexpected situations. Creativity refers to the “interaction among aptitude, process, and the environment by which an individual or group produces a perceptible product that is both novel and useful as defined within a social context”. It allows humans to produce novel ideas, generate new solutions, and express themselves uniquely. Only a few scientific studies have examined group dynamics' influence on individuals' creativity. There exist some gaps in the research on creative thinking, such as the fact that collaborative effort frequently results in the enhanced production of new information and knowledge. Therefore, it is critical to evaluate creativity via social settings. The study aimed at exploring the group dynamics of young adults in small group settings and the influence of these dynamics on their creativity. The study included 30 participants aged 20 to 25 who were attending university after completing a bachelor's degree. The participants were divided into groups of three, in gender homogenous and heterogeneous groups. The groups’ creative task was tied to the Lego mosaic created for the Scintillae laboratory at the Reggio Children Foundation. Group dynamics were operationalized into patterns of behaviors classified into three major categories: 1) Social Interactions, 2) Play, and 3) Distraction. Data were collected through audio and video recording and observation. The qualitative data were converted into quantitative data using the observational coding system; then, they were analyzed, revealing correlations between behaviors using median points and averages. For each participant and group, the percentages of represented behavior signals were computed. The findings revealed a link between social interaction, creative thinking, and creative activities. Other findings revealed that the more intense the social interaction, the lower the amount of creativity demonstrated. This study bridges the research gap between group dynamics and creativity. The approach calls for further research on the relationship between creativity and social interaction.

Keywords: group dynamics, creative thinking, creative action, social interactions, group play

Procedia PDF Downloads 128
221 Delineating Floodplain along the Nasia River in Northern Ghana Using HAND Contour

Authors: Benjamin K. Ghansah, Richard K. Appoh, Iliya Nababa, Eric K. Forkuo

Abstract:

The Nasia River is an important source of water for domestic and agricultural purposes for the inhabitants of its catchment. Major farming activities take place within the floodplain of the river and its network of tributaries. The actual inundation extent of the river system is, however, unknown. Reasons for this lack of information include financial constraints and inadequate human resources, as flood modelling is becoming increasingly complex. Knowledge of the inundation extent will help in assessing the risk posed by the annual flooding of the river and in planning flood-recession agricultural activities. This study used a simple terrain-based algorithm, Height Above Nearest Drainage (HAND), to delineate the floodplain of the Nasia River and its tributaries. The HAND model is a drainage-normalized digital elevation model whose height reference is the local drainage network rather than mean sea level (AMSL). The underlying principle of the HAND model is that hillslope flow paths behave differently when the reference gradient is towards the local drainage network rather than towards the sea. The new terrain model of the catchment was created using NASA's 30 m SRTM Digital Elevation Model (DEM) as the only data input, and contours (HAND contours) were then generated from the normalized DEM. Based on a field flood-inundation survey, historical information on flooding of the area, and satellite images, a HAND contour of 2 m was found to correlate best with the flood inundation extent of the river and its tributaries. An accuracy of 75% was obtained when the surface area enclosed by the 2 m contour was compared with the floodplain area computed from a satellite image captured during the peak flooding season in September 2016. It was estimated that the flooding of the Nasia River and its tributaries creates a floodplain area of 1011 km².
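
The HAND thresholding step can be sketched as follows, assuming the elevation of the nearest drainage cell has already been derived for every DEM cell from a flow-direction analysis (here replaced by a synthetic placeholder); the 2 m HAND contour then reduces to a simple raster threshold.

```python
# Simplified HAND sketch: HAND = terrain elevation minus elevation of the nearest
# drainage cell along the flow path, then threshold at 2 m to map the floodplain.
import numpy as np

rng = np.random.default_rng(2)
dem = rng.uniform(100.0, 140.0, (200, 200))                       # placeholder terrain (m AMSL)
nearest_drainage_elev = dem - rng.uniform(0.0, 10.0, dem.shape)   # placeholder drainage elevations

hand = dem - nearest_drainage_elev            # height above nearest drainage (m)
floodplain = hand <= 2.0                      # cells inside the 2 m HAND contour

cell_area_km2 = (30.0 * 30.0) / 1e6           # 30 m SRTM cells
print(f"floodplain area ~ {floodplain.sum() * cell_area_km2:.1f} km²")
```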

Keywords: digital elevation model, floodplain, HAND contour, inundation extent, Nasia River

Procedia PDF Downloads 457
220 Relationship between the Level of Perceived Self-Efficacy of Children with Learning Disability and Their Mother’s Perception about the Efficacy of Their Child, and Children’s Academic Achievement

Authors: Payal Maheshwari, Maheaswari Brindavan

Abstract:

The present study aimed to assess the level of perceived self-efficacy of children with learning disability, their mothers' perception of their child's efficacy, and the relationship between the two. The study further aimed to examine the relationship of the children's perceived self-efficacy and the mothers' perceived efficacy of the child with the children's academic achievement. The sample comprised 80 respondents (40 children with learning disability and their mothers). Children with learning disability as their primary condition, belonging to the middle or upper middle class, living with both parents, and residing in Mumbai, together with their mothers, were selected. Purposive (judgmental) and snowball sampling techniques were used to select the sample. Proformas in the form of questionnaires were used to obtain background information on the children with learning disability and their mothers. A self-constructed Mother's Perceived Efficacy of their Child Assessment Scale was used to measure the mothers' perceived level of efficacy of their child with learning disability, and a self-constructed Child's Perceived Self-Efficacy Assessment Scale was used to measure the level of the child's perceived self-efficacy. Academic scores of the children were collected from their parents or teachers and converted into percentages. The data were analyzed quantitatively using frequencies, means, and standard deviations, and correlations were computed to ascertain the relationships between the variables. The findings revealed that the majority of mothers perceived their child with learning disability as having above-average efficacy, and the majority of the children likewise perceived themselves as having an above-average level of self-efficacy. However, in the domains of self-regulated learning and emotional self-efficacy, the majority of mothers perceived their child as having average or below-average efficacy, and 50% of the children also rated their self-efficacy in these two domains at an average or below-average level. A significant weak correlation (Spearman's rho, r=.322, p < .05) was found between the mother's perceived efficacy of the child and the child's perceived self-efficacy, and a significant weak correlation (Pearson, r=.377, p < .01) was found between the mother's perceived efficacy of the child and the child's academic achievement. A significant weak positive correlation was also found between the child's perceived self-efficacy and academic achievement (r=.332, p < .05). Based on the findings, the study discusses the need for an intervention program for children targeting non-academic skills such as self-regulation and emotional competence.

Keywords: learning disability, perceived self efficacy, academic achievement, mothers, children

Procedia PDF Downloads 321
219 Two-Level Graph Causality to Detect and Predict Random Cyber-Attacks

Authors: Van Trieu, Shouhuai Xu, Yusheng Feng

Abstract:

Tracking attack trajectories can be difficult when little is known about the nature of the attack. It is even more difficult when the attack information is collected by Intrusion Detection Systems (IDSs), because current IDSs have limitations in identifying malicious and anomalous traffic. Moreover, IDSs only point out suspicious events; they do not show how the events relate to each other or which event may have caused another event to happen. It is therefore important to investigate new methods capable of tracking attack trajectories quickly, with less attack information and less dependency on IDSs, in order to prioritize actions during incident response. This paper proposes a two-level graph causality framework for tracking attack trajectories in internet networks by leveraging observable malicious behaviors to detect the most probable attack events that can cause another event to occur in the system. Technically, given a time series of malicious events, the framework extracts events with useful features, such as attack time and port number, and applies conditional independence tests to detect the relationships between attack events. Using academic datasets collected by IDSs, experimental results show that the framework can quickly detect causal pairs that offer meaningful insights into the nature of the internet network, given only reasonable restrictions on network size and structure. Without the framework's guidance, these insights could not be discovered by existing tools such as IDSs, and would cost expert human analysts significant time, if they could be found at all. The computational results from the proposed two-level graph network model reveal clear patterns and trends: more than 85% of causal pairs have an average time difference between the causal and effect events, in both computed and observed data, within 5 minutes. This result can be used as a preventive measure against future attacks. Although the forecast window is short, from 0.24 seconds to 5 minutes, it is long enough to design a prevention protocol to block those attacks.
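
The pairwise dependence test at the core of the framework can be illustrated, in simplified form, as below. The paper applies conditional independence tests to event features such as attack time and port number; this sketch instead runs a plain chi-square independence test on a synthetic "scan precedes exploit" co-occurrence table, so it is only an approximation of the actual method.

```python
# Simplified sketch: does one alert type tend to follow another within a time window?
# Synthetic IDS-style events; a chi-square test stands in for the paper's conditional
# independence tests.
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(3)
n_bins = 288                                   # 5-minute bins over one day
scan = rng.random(n_bins) < 0.2                # "port scan" alert present in bin
# exploit alerts tend to follow a scan in the previous bin (synthetic causal link)
exploit = np.roll(scan, 1) & (rng.random(n_bins) < 0.6)
exploit |= rng.random(n_bins) < 0.05           # background noise

prev_scan = np.roll(scan, 1)
table = np.array([
    [np.sum(prev_scan & exploit), np.sum(prev_scan & ~exploit)],
    [np.sum(~prev_scan & exploit), np.sum(~prev_scan & ~exploit)],
])
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, p = {p:.2e} -> dependence of 'exploit' on a preceding 'scan'")
```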

Keywords: causality, multilevel graph, cyber-attacks, prediction

Procedia PDF Downloads 157
218 Covid-19 Pandemic: Another Lesson Learned by a Military Hospital

Authors: Mariana Floria, Elena-Diana Năfureanu, Diana-Mihaela Gălăţanu, Anca-Ecaterina Grumeza, Cristina Gorea-Bocîncă, Diana-Elena Iov, Aurelian-Corneliu Moraru, Dragoș-Marian Popescu

Abstract:

SARS-CoV-2 is the deadliest and most devastating virus of the last one hundred years, being more contagious than Ebola, HIV, swine influenza, Severe Acute Respiratory Syndrome, or Middle East Respiratory Syndrome. After two years of pandemic, planning and budgeting for the use of healthcare resources and services is very important. The aim of this study was to analyze the costs of hospital stay for patients with predominantly moderate forms of COVID-19 in a support military hospital located in the north-east of Romania. Inpatient COVID-19 hospitalization costs, regardless of ICD-10 procedure codes (DRG payment), in a COVID-19 support military hospital were analyzed. From August 2020 through June 2021, 241 patients were hospitalized. Our national protocol for the treatment of COVID-19 infection was applied. The main COVID-19 manifestations were respiratory in 69% (18% with severe pneumonia, 2.9% with pulmonary embolism diagnosed by computed tomography angiography), cardiac in 3.3%, digestive in 28%, and psychiatric (most commonly anxiety) in 33%. According to COVID-19 severity, most of the patients had moderate (104 patients, 43%) or severe (50 patients, 21%) forms. Seven patients with severe forms died because of multiple comorbidities, and 30 patients were transferred to hospitals with COVID-19 intensive care units. Only two patients had procalcitonin >10 ng/mL (high probability of severe sepsis or septic shock), and one patient had moderate risk for septic shock (0.5-2 ng/mL). The average estimated cost was about 3000€ per patient, without significant differences by disease severity. Equipment costs were 2 times higher than drug costs and 4 times higher than laboratory test costs. In a COVID-19 support military hospital that cared predominantly for moderate forms of COVID-19, the costs for equipment were much higher than those for treatment. Therefore, new criteria for the hospitalization of these forms of COVID-19 deserve to be analyzed to avoid unnecessary costs.

Keywords: Covid-19, costs, hospital stay, military hospital

Procedia PDF Downloads 178
217 Effect of Halo Protection Device on the Aerodynamic Performance of Formula Racecar

Authors: Mark Lin, Periklis Papadopoulos

Abstract:

This paper explores the aerodynamics of a formula racecar when a 'halo' driver-protection device is added to the chassis. The halo protection device was introduced at the start of the 2018 racing season as a safety measure against foreign-object impacts that a driver may encounter when driving an open-wheel racecar. In the year since its introduction, the device has received wide acclaim for protecting the driver on two separate occasions. The benefit of such a safety device certainly cannot be disputed. However, adding the halo device to a car changes the airflow around the vehicle, most notably at the engine air intake and the rear wing. These negative effects on the air supply to the engine and on the downforce created by the rear wing are studied in this paper using numerical techniques, and the resulting CFD outputs are presented and discussed. Comparing racecar designs before and after the introduction of the halo device shows that the design of the air intake and the rear wing has not been adapted since the addition of the halo device. The reduction of engine intake mass flow due to the halo device is computed and presented for various car speeds. Because of the location of the halo device relative to the air intake, airflow is directed away from the engine, making the engine perform less than optimally. The reduction is quantified to show the corresponding loss in engine output compared to a similar car without the halo device. This paper shows, through aerodynamic arguments, that the engine of a halo car will not receive the unobstructed, clean airflow that a non-halo car does. Another negative effect is on the downforce created by the rear wing. Because the amount of downforce created by the rear wing is influenced by every component upstream of it, adding a halo device upstream of the rear wing obstructs the airflow, and less is available for generating downforce. This reduction in downforce becomes especially dramatic as speed increases. The paper presents the downforce over a range of speeds for a car with and without the halo device. Although driver safety is paramount, the negative effects of this safety device on car performance should be well understood so that any possible redesign to mitigate them can be taken into account in next year's rules.

Keywords: automotive aerodynamics, halo device, downforce, engine intake

Procedia PDF Downloads 110
216 Factors Contributing to Farmers’ Attitude Towards Climate Adaptation Farming Practices: A Farm Level Study in Bangladesh

Authors: Md Rezaul Karim, Farha Taznin

Abstract:

The purpose of this study was to assess and describe the individual and household characteristics of farmers, to measure farmers' attitudes towards climate adaptation farming practices, and to explore the individual and household factors contributing to the prediction of those attitudes. Data were collected through personal interviews using a pre-tested interview schedule at Biral Upazila in Dinajpur district, Bangladesh, from 1 November to 15 December 2018. Besides descriptive statistical parameters, Pearson's product-moment correlation coefficient (r), multiple regression, and step-wise multiple regression analysis were used for the statistical analysis. Findings indicated that the highest proportion (77.6 percent) of the farmers had moderately favorable attitudes, followed by 11.2 percent with highly favorable attitudes and 11.2 percent with slightly favorable attitudes towards climate adaptation farming practices. According to the computed correlation coefficients (r), five of the 10 selected factors, namely education of the household head, farm size, annual household income, organizational participation, and access to information through extension services, had a significant relationship with the farmers' attitudes towards climate-smart practices. The step-wise multiple regression results showed that two characteristics, education of the household head and access to information through extension services, contributed 26.2% and 5.1%, respectively, to predicting farmers' attitudes towards climate adaptation farming practices. In addition, more than two-thirds of farmers ranked 'the price of vermi species is high and it is not easily available' as the first problem, followed by 'lack of information on innovative climate-smart technologies'. This study suggests that policy measures are needed to promote extension education and information services and to overcome the obstacles to climate adaptation farming practices. It further recommends that similar research be conducted in diverse national and global contexts.

Keywords: factors, attitude, climate adaptation, farming practices, Bangladesh

Procedia PDF Downloads 88
215 Geospatial Analysis of Hydrological Response to Forest Fires in Small Mediterranean Catchments

Authors: Bojana Horvat, Barbara Karleusa, Goran Volf, Nevenka Ozanic, Ivica Kisic

Abstract:

Forest fire is a major threat in many regions of Croatia, especially in coastal areas. Although fires are sometimes caused by natural processes, the most common cause is the human factor, intentional or unintentional. Forest fires drastically transform landscapes and influence natural processes. The main goal of the presented research is to analyse and quantify the impact of forest fire on hydrological processes and to propose the model that best describes the changes in hydrological patterns in the analysed catchments. Keeping in mind the spatial component of these processes, geospatial analysis is performed to gain better insight into the spatial variability of the hydrological response to disastrous events. To that end, two catchments that experienced a severe forest fire were delineated, and various hydrological and meteorological data, both attribute and spatial, were collected. The major drawback is the lack of hydrological data, which is common in small torrential karstic streams; hence modelling results should be validated with data collected in a catchment that has similar characteristics and established hydrological monitoring. The event chosen for the modelling is the forest fire that occurred in July 2019 and burned nearly 10% of the analysed area. Surface (land use/land cover) conditions before and after the event were derived from two Sentinel-2 images. The mapping of the burnt area is based on a comparison of the Normalized Burn Ratio (NBR) computed from both images. To estimate and compare hydrological behaviour before and after the event, curve number (CN) values are assigned to the land use/land cover classes derived from the satellite images. Hydrological modelling then yields surface runoff and hence a prediction of the catchments' hydrological response to a forest fire event. The research was supported by the Croatian Science Foundation through the project 'Influence of Open Fires on Water and Soil Quality' (IP-2018-01-1645).
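
The two computational steps described above can be sketched as follows: burnt-area mapping from the change in NBR between the pre- and post-fire scenes, and an SCS curve-number runoff estimate to compare the hydrological response. Band values, the dNBR threshold, and the CN values are illustrative assumptions, not the study's calibrated figures.

```python
# Sketch of burnt-area mapping (dNBR) and SCS-CN runoff before/after a fire.
import numpy as np

rng = np.random.default_rng(4)
nir_pre, swir_pre = rng.uniform(0.2, 0.5, (50, 50)), rng.uniform(0.1, 0.3, (50, 50))
nir_post, swir_post = nir_pre * 0.7, swir_pre * 1.3           # crude synthetic "burn" effect

nbr_pre = (nir_pre - swir_pre) / (nir_pre + swir_pre)
nbr_post = (nir_post - swir_post) / (nir_post + swir_post)
burnt = (nbr_pre - nbr_post) > 0.27                           # illustrative dNBR threshold

def scs_runoff(p_mm, cn):
    """SCS-CN direct runoff Q (mm) for a storm depth P (mm)."""
    s = 25400.0 / cn - 254.0                                  # potential maximum retention
    ia = 0.2 * s                                              # initial abstraction
    return 0.0 if p_mm <= ia else (p_mm - ia) ** 2 / (p_mm + 0.8 * s)

storm = 60.0                                                  # mm, illustrative event
print(f"burnt fraction: {burnt.mean():.1%}")
print(f"runoff before fire (CN=70): {scs_runoff(storm, 70):.1f} mm")
print(f"runoff after fire  (CN=85): {scs_runoff(storm, 85):.1f} mm")
```

The post-fire increase in CN is what drives the higher simulated runoff for the same storm depth.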

Keywords: Croatia, forest fire, geospatial analysis, hydrological response

Procedia PDF Downloads 137
214 Relationship between Left Ventricle Position and Hemodynamic Parameters during Cardiopulmonary Resuscitation in a Pig Model

Authors: Hyun Chang Kim, Yong Hun Jung, Kyung Woon Jeung

Abstract:

Background: From the viewpoint of cardiac pump theory, the area of the left ventricle (LV) subjected to compression increases as the LV lies closer to the sternum, possibly resulting in higher blood flow in patients with LV closer to the sternum. However, no study has evaluated LV position during cardiac arrest or its relationship with hemodynamic parameters during cardiopulmonary resuscitation (CPR). The objectives of this study were to determine whether the position of the LV relative to the anterior-posterior axis representing the direction of chest compression shifts during cardiac arrest and to examine the relationship between LV position and hemodynamic parameters during CPR. Methods: Subcostal view echocardiograms were obtained from 15 pigs with the transducer parallel to the long axis of the sternum before inducing ventricular fibrillation (VF) and during cardiac arrest. Computed tomography was performed in three pigs to objectively observe LV position during cardiac arrest. LV position parameters including the shortest distance between the anterior-posterior axis and the mid-point of the LV chamber (DAP-MidLV), the shortest distance between the anterior-posterior axis and the LV apex (DAP-Apex), and the area fraction of the LV located on the right side of the anterior-posterior axis (LVARight/LVATotal) were measured. Results: DAP-MidLV, DAP-Apex, and LVARight/LVATotal decreased progressively during untreated VF and basic life support (BLS), and then increased during advanced cardiovascular life support (ACLS). A repeated measures analysis of variance revealed significant time effects for these parameters. During BLS, the end-tidal carbon dioxide and systolic right atrial pressure were significantly correlated with the LV position parameters. During ACLS, systolic arterial pressure and systolic right atrial pressure were significantly correlated with DAP-MidLV and DAP-Apex. Conclusions: LV position changed significantly during cardiac arrest compared to the pre-arrest baseline. LV position during CPR had significant correlations with hemodynamic parameters.

Keywords: heart arrest, cardiopulmonary resuscitation, heart ventricle, hemodynamics

Procedia PDF Downloads 190
213 Reliability Levels of Reinforced Concrete Bridges Obtained by Mixing Approaches

Authors: Adrián D. García-Soto, Alejandro Hernández-Martínez, Jesús G. Valdés-Vázquez, Reyna A. Vizguerra-Alvarez

Abstract:

Reinforced concrete bridges designed by code are intended to achieve target reliability levels adequate for the geographical environment where the code is applicable. Several methods can be used to estimate such reliability levels. Many of them require the establishment of an explicit limit state function (LSF). When such an LSF is not available as a closed-form expression, simulation techniques are often employed, but these are computationally intensive and time-consuming. Note that if the reliability of real bridges designed by code is of interest, numerical schemes, the finite element method (FEM), or computational mechanics may be required. In these cases, it can be quite difficult (or impossible) to establish a closed form of the LSF, and simulation techniques may be necessary to compute reliability levels. To overcome the need for a large number of simulations when no explicit LSF is available, the point estimate method (PEM) can be considered as an alternative. It has the advantage that only the probabilistic moments of the random variables are required. However, in the PEM, the resulting moments of the LSF must be fitted to a probability density function (PDF). In the present study, a very simple alternative is employed that allows the assessment of reliability levels when no explicit LSF is available and without the need for extensive simulations. The alternative includes the use of the PEM, and its applicability is shown by assessing the reliability levels of reinforced concrete bridges in Mexico when a numerical scheme is required. Comparisons with results obtained using the Monte Carlo simulation (MCS) technique are included. To overcome the problem of mapping the probabilistic moments from the PEM to a PDF, a well-known distribution is employed. The approach mixes the PEM with another classic reliability method (the first-order reliability method, FORM). The results of the present study are in good agreement with those computed by MCS. Therefore, mixing the reliability methods is a very valuable option for determining reliability levels when no closed form of the LSF is available, or when numerical schemes, the FEM, or computational mechanics are employed.
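
A minimal sketch of the mixed approach is shown below for an illustrative two-variable limit state (not the bridge FEM): Rosenblueth's two-point estimates give the first two moments of the LSF, a normal PDF is fitted to them, and the reliability index follows as in FORM, with a crude Monte Carlo check for comparison.

```python
# Point estimate method (Rosenblueth two-point estimates) mixed with a normal-PDF fit,
# for an illustrative limit state g = R - 1.5*Q; failure when g < 0.
import itertools
import numpy as np
from scipy.stats import norm

def g(r, q):
    return r - 1.5 * q

means, stds = np.array([12.0, 4.0]), np.array([1.2, 0.8])    # R and Q, illustrative

# Evaluate g at every (mean +/- std) combination; equal weights for uncorrelated variables
points = [g(*(means + np.array(s) * stds)) for s in itertools.product((-1, 1), repeat=2)]
mu_g, sigma_g = np.mean(points), np.std(points)

beta = mu_g / sigma_g                                        # reliability index (normal fit)
print(f"PEM:  beta = {beta:.2f},  Pf = {norm.cdf(-beta):.2e}")

# Crude Monte Carlo check with the same (assumed normal) input variables
rng = np.random.default_rng(5)
samples = g(rng.normal(means[0], stds[0], 1_000_000), rng.normal(means[1], stds[1], 1_000_000))
print(f"MCS:  Pf = {(samples < 0).mean():.2e}")
```

For a linear limit state such as this one, the two-point estimates reproduce the moments exactly, which is why the PEM and MCS failure probabilities agree closely.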

Keywords: structural reliability, reinforced concrete bridges, combined approach, point estimate method, Monte Carlo simulation

Procedia PDF Downloads 346
212 Effects of Effort and Water Quality on Productivity (CPUE) of Hampal (Hampala macrolepidota) Resources in Jatiluhur Dam, West Java

Authors: Ririn Marinasari, S. Pi

Abstract:

Hampal (Hampala macrolepidota) is one of the indigenous fishes of the Citarum River that can still be found in the Jatiluhur dam. In 2013 the IUCN placed hampal in a Red List category, and the species has become rare in the Jatiluhur dam. The species keeps declining because of changes in habitat characteristics, such as water quality, and because of fishing effort. This study aims to determine and identify the influence of fishing effort and water quality on the productivity of hampal (Hampala macrolepidota) resources in Jatiluhur. The study was conducted from October to November 2013. The research zones include the lacustrine, transition, and riverine zones. Hampal productivity was computed from CPUE values. The results showed that the hampal MSY obtained from the Schaefer surplus production model is 0.2045 tons per quarter. Overfishing occurred in 2011-2012, while in 2013 the stock was still under-fished; total catches exceeded the MSY of 0.2045 tons during 2011 and the third quarter of 2012. The utilization rate of hampal resources equals 80% of MSY, i.e., the Total Allowable Catch for hampal in Jatiluhur based on Schaefer surplus production theory. In the quarterly time series for 2011-2013, fishing effort and the water quality parameters DO, turbidity, and sulfide (as H2S) were negatively correlated with hampal productivity (CPUE), while temperature and pH were positively correlated, showing that productivity decreases with higher fishing effort, DO, turbidity, and H2S, and with lower temperature and pH. The only variable that significantly affected hampal productivity was H2S, with a beta coefficient of -0.834, indicating a negative effect. This may be because H2S levels are toxic and already exceed the quality standard, while the other water quality parameters are still below the maximum standards allowed in these waters. The results of the study can serve as a reference for fishing regulation aimed at hampal conservation in the Jatiluhur dam.
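
The Schaefer surplus-production calculation behind the quoted MSY can be sketched as follows; the quarterly effort and catch values are illustrative, not the Jatiluhur data.

```python
# Schaefer surplus-production sketch: fit CPUE = a + b*E (b expected negative),
# then MSY = a^2 / (4*|b|) at effort E_MSY = a / (2*|b|).
import numpy as np

effort = np.array([120, 150, 180, 200, 230, 260, 300, 340], dtype=float)  # trips/quarter
catch_ = np.array([55.0, 64.0, 70.0, 73.0, 76.0, 77.0, 75.0, 70.0])       # tons/quarter, illustrative

cpue = catch_ / effort                        # catch per unit effort
b, a = np.polyfit(effort, cpue, 1)            # slope b and intercept a

msy = -a**2 / (4 * b)                         # maximum sustainable yield (b < 0)
e_msy = -a / (2 * b)                          # effort producing MSY
print(f"a = {a:.4f}, b = {b:.6f}")
print(f"MSY ~ {msy:.2f} tons/quarter at effort ~ {e_msy:.0f} trips/quarter")
```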

Keywords: effort, hampal, productivity, water quality

Procedia PDF Downloads 299
211 Utilization of Antenatal Care Services by Domestic Workers in Delhi

Authors: Meenakshi

Abstract:

Background: Complications during pregnancy are a major cause of morbidity and death among women in the reproductive age group. Childbearing is the most important phase in women's lives and occurs mainly in the adolescent and adult years. Maternal health is thus an important issue, as this phase is also a productive time for women as they strive to fulfil their capabilities as individuals, mothers, family members, and citizens. The objective of the study is to document the coverage of antenatal care (ANC) and its determinants among domestic workers. Method: A survey of 300 domestic workers was carried out in Delhi. Only respondents in the age group 15-49 whose most recent birth occurred within the 5 years preceding the survey were included. Socio-demographic data and information on maternal health were collected from these respondents; information on ANC was collected from all 300 respondents. A standard of living index was constructed based on household assets, and similarly, an autonomy index was computed based on women's decision-making power in the household using certain key variables. Cross tabulations were performed to obtain frequencies and percentages. Potential socio-economic determinants of ANC utilization among domestic workers were examined using binary logistic regression. Results: Out of the 300 domestic workers surveyed, only 70.7 per cent received ANC. Domestic workers who married at age 18 years or above were 4 times more likely to have utilized antenatal services during their last birth (***p < 0.01). Compared to domestic workers with two or fewer living children, those with more than two living children were less likely to utilize antenatal care services (**p < 0.05). Domestic workers belonging to Other Backward Castes were more likely to utilize antenatal care services than domestic workers belonging to Scheduled Tribes (**p < 0.05). Conclusion: The level of utilization of maternal health services among domestic workers is low, as they spend most of their time at their employers' households. Though the demonstration effect does influence their lifestyles, utilization of maternal health services remains poor. Strategies and action are needed to improve the utilization of maternal health services among this section of workers, who are vulnerable because of the absence of proper labour legislation.

Keywords: antenatal care, domestic workers, health services, maternal health, women’s health

Procedia PDF Downloads 198
210 Comparative Evaluation of a Dynamic Navigation System Versus a Three-Dimensional Microscope in Retrieving Separated Endodontic Files: An in Vitro Study

Authors: Mohammed H. Karim, Bestoon M. Faraj

Abstract:

Introduction: Instrument separation is a common challenge in the endodontic field. Various techniques and technologies have been developed to improve the retrieval success rate. This study aimed to compare the effectiveness of a Dynamic Navigation System (DNS) and a three-dimensional microscope in retrieving broken rotary NiTi files when using trepan burs and the extractor system. Materials and Methods: Thirty maxillary first bicuspids with sixty separate roots were split into two comparable groups based on a comprehensive Cone-Beam Computed Tomography (CBCT) analysis of root length and curvature. After standardised access opening, glide paths, and patency attainment with the K-file (sizes 10 and 15), the teeth were arranged on 3D models (three per quadrant, six per model). Subsequently, controlled-memory heat-treated NiTi rotary files (#25/0.04) were notched 4 mm from the tips and fractured at the apical third of the roots. The C-FR1 Endo file removal system was employed under both types of guidance to retrieve the fragments, and the success rate, canal aberration, treatment time, and volumetric changes were measured. The statistical analysis was performed using IBM SPSS software at a significance level of 0.05. Results: The microscope-guided group had a higher success rate than the DNS-guided group, but the difference was not significant (p > 0.05). In addition, the microscope-guided drills resulted in a substantially lower proportion of canal aberration, required less time to retrieve the fragments, and caused a smaller change in root canal volume (p < 0.05). Conclusion: Although dynamically guided trephining with the extractor can retrieve separated instruments, it is inferior to three-dimensional microscope guidance in terms of treatment time, procedural errors, and volume change.

Keywords: dynamic navigation system, separated instruments retrieval, trephine burs and extractor system, three-dimensional video microscope

Procedia PDF Downloads 99
209 Comparative Evaluation of Root Uptake Models for Developing Moisture Uptake Based Irrigation Schedules for Crops

Authors: Vijay Shankar

Abstract:

In an era of water scarcity, effective use of water via irrigation requires good methods for determining crop water needs. Implementation of irrigation scheduling programs requires an accurate estimate of water use by the crop, and moisture depletion from the root zone represents the consequent crop evapotranspiration (ET). A numerical model for simulating soil water depletion in the root zone has been developed that takes into consideration soil physical properties, crop parameters, and climatic parameters. The governing differential equation for unsaturated flow of water in the soil is solved numerically using the fully implicit finite difference technique, and the water uptake by plants is simulated using three different sink functions. The non-linear model predictions are in good agreement with field data, so irrigations can be scheduled more effectively. The present paper describes irrigation scheduling based on moisture depletion from the different layers of the root zone, obtained using the different sink functions, for three cash, oil, and forage crops: cotton, safflower, and barley, respectively. The soil is assumed to be at field capacity prior to planting. Two soil moisture regimes are then imposed for the irrigated treatment: one in which irrigation is applied whenever the soil moisture content is reduced to 50% of the available soil water, and another in which irrigation is applied whenever the soil moisture content is reduced to 75% of the available soil water. For both soil moisture regimes, the model incorporating a non-linear sink function, which gives the best agreement between computed root-zone moisture depletion and field data, is the most effective in scheduling irrigations. Simulation runs with this moisture uptake function save 27.3 to 45.5% and 18.7 to 37.5% of irrigation water for cotton, 12.5 to 25% and 16.7 to 33.3% for safflower, and 16.7 to 33.3% and 20 to 40% for barley under the 50% and 75% moisture depletion regimes, respectively, compared with the other moisture uptake functions considered in the study. The simulation developed can be used for optimized irrigation planning for different crops, choosing a suitable soil moisture regime depending on irrigation water availability and crop requirements.
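
A schematic root-zone water balance illustrating the two irrigation trigger regimes (irrigation at 50% or 75% depletion of available soil water) is given below. The soil constants and daily ET values are illustrative, and the simple bucket model stands in for the paper's full unsaturated-flow solution with a non-linear sink term.

```python
# Bucket-model sketch of depletion-triggered irrigation scheduling.
import numpy as np

field_capacity, wilting_point, root_depth = 0.30, 0.12, 0.9                # m3/m3, m3/m3, m
available_water = (field_capacity - wilting_point) * root_depth * 1000.0   # mm

rng = np.random.default_rng(6)
daily_et = rng.uniform(3.0, 7.0, 120)                                      # mm/day, illustrative

def schedule(depletion_fraction):
    depletion, events = 0.0, []
    for day, et in enumerate(daily_et):
        depletion += et                                   # root-zone moisture depletion by ET
        if depletion >= depletion_fraction * available_water:
            events.append(day)                            # irrigate back to field capacity
            depletion = 0.0
    return events

for frac in (0.50, 0.75):
    ev = schedule(frac)
    print(f"{int(frac*100)}% depletion regime: {len(ev)} irrigations on days {ev}")
```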

Keywords: irrigation water, evapotranspiration, root uptake models, water scarcity

Procedia PDF Downloads 332
208 High Heating Value Bio-Chars from a Bio-Oil Upgrading Process

Authors: Julius K. Gane, Mohamad N. Nahil, Paul T. Williams

Abstract:

In today's world of rapid population growth and a changing climate, one way to mitigate various negative effects is via renewable energy solutions. Energy and power are basic requirements in almost all human endeavours, yet their production is also a major driver of climate change and its impacts. It is therefore crucial to develop innovative and environmentally friendly energy options. Upgrading of fast pyrolysis bio-oil via hydrotreatment offers such an opportunity, as quality renewable liquid transportation fuels can be produced; the process, however, is typically accompanied by bio-char formation as a by-product. The goal of this work was to study the yield and some properties of bio-chars formed in a hydrotreatment process, with the overall aim of promoting the valuable utilization of wastes or by-products from renewable energy technologies. Bio-chars with energy contents comparable to those of coals are assumed to be more desirable as solid energy materials on account of their renewability and environmental friendliness; the analytical work in this study therefore focused mainly on determining the higher heating value (HHV) of the chars. The method involved the reaction of bio-oil in an autoclave supplied by the Parr Instrument Company, IL, USA, and two main parameters (temperature and residence time) were investigated. The chars were characterized using a Thermo EA2000 CHNS analyser, and oxygen contents and HHVs were then computed based on the literature. The results show that these bio-chars can readily serve as feedstocks for the production of renewable solid fuels. Their HHVs ranged between 29.26 and 39.18 MJ/kg and were affected by the different temperatures and residence times. There was an inverse relationship between the oxygen content and the HHV of the chars. It can therefore be concluded that it is possible to improve the overall efficiency of the hydrotreatment process through the production of renewable energy materials from the 'waste' char by-products. Future work should consider developing a suitable balance between the primary objective of bio-oil upgrading (improving the quality of the liquid fuels) and the conversion of its solid wastes into value-added products such as smokeless briquettes.
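As an illustration of how an HHV can be estimated from CHNS data with oxygen taken by difference (the abstract computes HHVs "based on the literature" without naming a specific correlation, so a Dulong-type formula and a hypothetical char composition are assumed here):

```python
# Illustrative only: a Dulong-type correlation is assumed; the study may have used a
# different literature correlation. Oxygen is taken by difference from the CHNS results.
def hhv_dulong(c_wt, h_wt, n_wt, s_wt, ash_wt=0.0):
    """Estimate higher heating value (MJ/kg) from elemental composition in wt%."""
    o_wt = 100.0 - (c_wt + h_wt + n_wt + s_wt + ash_wt)   # oxygen by difference
    return 0.3383 * c_wt + 1.443 * (h_wt - o_wt / 8.0) + 0.0942 * s_wt

# Hypothetical char composition (wt%), not data from the study:
print(round(hhv_dulong(c_wt=78.0, h_wt=4.5, n_wt=0.5, s_wt=0.1), 2))   # ~29.8 MJ/kg
```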

Keywords: bio-char, renewable solid biofuels, valorisation, waste-to-energy

Procedia PDF Downloads 128
207 The Use of Ultrasound as a Safe and Cost-Efficient Technique to Assess Visceral Fat in Children with Obesity

Authors: Bassma A. Abdel Haleem, Ehab K. Emam, George E. Yacoub, Ashraf M. Salem

Abstract:

Background: Obesity is an increasingly common problem in childhood, and childhood obesity is considered the main risk factor for the development of metabolic syndrome (MetS) (type 2 diabetes, dyslipidemia, and hypertension). Recent studies estimate that 30-60% of children with obesity will develop MetS. Visceral fat thickness is a valuable predictor of the development of MetS. Computed tomography and dual-energy X-ray absorptiometry are the main techniques used to assess visceral fat; however, they carry the risk of radiation exposure and are expensive, so they are seldom used to assess visceral fat in children. Some studies have explored the potential of ultrasound as a substitute for assessing visceral fat in the elderly and found promising results. Given the vulnerability of children to radiation exposure, we sought to evaluate ultrasound as a safer and more cost-efficient alternative for measuring visceral fat in obese children. Additionally, we assessed the correlation between visceral fat and obesity indicators such as insulin resistance. Methods: A cross-sectional study was conducted on 46 children with obesity (aged 6-16 years) whose visceral fat was evaluated by ultrasound. Subcutaneous fat thickness (SFT), i.e., the measurement from the skin-fat interface to the linea alba, and visceral fat thickness (VFT), i.e., the thickness from the linea alba to the aorta, were measured and correlated with anthropometric measures, fasting lipid profile, homeostatic model assessment for insulin resistance (HOMA-IR) and liver enzymes (ALT). Results: VFT assessed via ultrasound correlated strongly with BMI and HOMA-IR; as a predictor of insulin resistance, VFT yielded an AUC of 0.858 with a cut-off point of >2.98. VFT also correlated positively with serum triglycerides and serum ALT, and negatively with HDL. Conclusions: Ultrasound, a safe and cost-efficient technique, could be a useful tool for measuring abdominal fat thickness in children with obesity. Ultrasound-measured VFT could be an appropriate prognostic factor for insulin resistance, hypertriglyceridemia, and elevated liver enzymes in obese children.
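For context, a cut-off of this kind is typically derived from an ROC analysis; the sketch below uses synthetic data and assumes Youden's J index as the cut-off criterion, since the abstract does not state how the threshold was chosen:

```python
# Sketch only: synthetic VFT values and labels, not the study's data; Youden's J is an
# assumed cut-off criterion for illustrating how an AUC and threshold are obtained.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
vft = np.concatenate([rng.normal(2.5, 0.6, 60), rng.normal(3.5, 0.6, 40)])  # hypothetical VFT values
insulin_resistant = np.concatenate([np.zeros(60), np.ones(40)])             # HOMA-IR-based labels

fpr, tpr, thresholds = roc_curve(insulin_resistant, vft)
best = np.argmax(tpr - fpr)                       # Youden's J = sensitivity + specificity - 1
print("AUC:", round(roc_auc_score(insulin_resistant, vft), 3))
print("Cut-off:", round(thresholds[best], 2))
```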

Keywords: metabolic syndrome, pediatric obesity, sonography, visceral fat

Procedia PDF Downloads 120
206 Prevention of Heart Failure Progression in Patients with Post-Infarction Cardiosclerosis After Coronavirus Infection

Authors: Sujayeva V. A., Karpova I. S., Koslataya O. V., Kolyadko M. G., Russkikh I. I., Vankovich E. A.

Abstract:

Objective: The goal of this study is to develop a method for preventing the progression of heart failure (HF) in patients with post-infarction cardiosclerosis who have had a coronavirus infection. Methods: 135 patients with post-infarction cardiosclerosis were divided into two groups: Group I, 85 patients who had had COVID-19, and Group II, 50 patients who had not. Patients of Group I were further divided, according to the level of the N-terminal fragment of the natriuretic peptide (NT-proBNP), into subgroup A (with HF, 40 patients) and subgroup B (without HF, 45 patients). All patients underwent clinical examination, echocardiography, 60-lead electrocardiotopography, computed tomography angiography of the coronary arteries, cardiac magnetic resonance imaging, and NT-proBNP measurement. Results: In the post-COVID period, patients with post-infarction cardiosclerosis showed remodeling of the left ventricle and right heart, deterioration of the systolic and diastolic function of both ventricles, increased pulmonary artery pressure, progression of coronary artery atherosclerosis, and an increase in the extent of myocardial fibrosis; the consequence of these changes was the progression of heart failure. The developed method of medical prevention improved the clinical course of coronary artery disease and prevented the progression of chronic heart failure in these patients. Conclusions: In patients with post-infarction cardiosclerosis who initially had HF, laboratory and instrumental data after 1 year revealed a slight decrease in HF severity. In patients with post-infarction cardiosclerosis who did not have HF before COVID-19, HF developed 1 year after the coronavirus disease, which may be due to the identified process of myocardial fibrosis and dictates the need to prevent the development of HF in patients with post-infarction cardiosclerosis, even those who did not initially have HF. The proposed method of medical prevention, which includes spironolactone, loop diuretics, empagliflozin, and sacubitril/valsartan, improved the clinical course of coronary artery disease in patients with post-infarction cardiosclerosis after COVID-19, both in those with and without HF at inclusion, and helped prevent the progression of HF.

Keywords: elderly, myocardial infarction, COVID-19, prevention

Procedia PDF Downloads 25
205 A Normalized Non-Stationary Wavelet Based Analysis Approach for a Computer Assisted Classification of Laryngoscopic High-Speed Video Recordings

Authors: Mona K. Fehling, Jakob Unger, Dietmar J. Hecker, Bernhard Schick, Joerg Lohscheller

Abstract:

Voice disorders originate from disturbances of the vibration patterns of the two vocal folds located within the human larynx. Consequently, the visual examination of vocal fold vibrations is an integral part of the clinical diagnostic process. For an objective analysis of the vocal fold vibration patterns, the two-dimensional vocal fold dynamics are captured during sustained phonation using an endoscopic high-speed camera. In this work, we present an approach that allows a fully automatic analysis of the high-speed video data, including a computerized classification of healthy and pathological voices. The approach is based on a wavelet analysis of so-called phonovibrograms (PVG), which are extracted from the high-speed videos and comprise the entire two-dimensional vibration pattern of each vocal fold individually. Using a principal component analysis (PCA) strategy, a low-dimensional feature set is computed from each phonovibrogram. From the PCA space, clinically relevant measures can be derived that objectively quantify vibration abnormalities. In the first part of the work, it is shown that, using a machine learning approach, the derived measures are suitable for distinguishing automatically between healthy and pathological voices. Within this approach, the formation of the PCA space, and consequently the extracted quantitative measures, depends on the clinical data used to compute the principal components. Therefore, in the second part of the work, we propose a strategy to normalize the PCA space by registering it to a coordinate system defined by a set of synthetically generated vibration patterns. The results show that, owing to the normalization step, potential ambiguity of the parameter space can be eliminated. The normalization further allows a direct comparison of research results based on PCA spaces obtained from different clinical subjects.
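A minimal sketch of the PCA-based feature extraction and classification step is given below; it uses synthetic phonovibrogram-like arrays and omits the wavelet stage and the registration-based normalization described in the abstract, so it should be read as an illustration rather than the authors' pipeline:

```python
# Sketch only: synthetic phonovibrograms are flattened, projected into a low-dimensional
# PCA space, and a simple classifier separates healthy from pathological recordings.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
pvg = rng.random((80, 64 * 256))        # 80 synthetic phonovibrograms, 64x256 pixels each
labels = rng.integers(0, 2, size=80)    # 0 = healthy, 1 = pathological (synthetic labels)

features = PCA(n_components=10).fit_transform(pvg)   # low-dimensional PVG feature set
print(cross_val_score(SVC(), features, labels, cv=5).mean())
```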

Keywords: wavelet-based analysis, multiscale product, normalization, computer assisted classification, high-speed laryngoscopy, vocal fold analysis, phonovibrogram

Procedia PDF Downloads 266
204 Physics-Based Earthquake Source Models for Seismic Engineering: Analysis and Validation for Dip-Slip Faults

Authors: Percy Galvez, Anatoly Petukhin, Paul Somerville, Ken Miyakoshi, Kojiro Irikura, Daniel Peter

Abstract:

Physics-based dynamic rupture modelling is necessary for estimating parameters such as rupture velocity and slip rate function that are important for ground motion simulation but poorly resolved by observations, e.g. by seismic source inversion. In order to generate a large number of physically self-consistent rupture models whose rupture process is consistent with the spatio-temporal heterogeneity of past earthquakes, we use multicycle simulations under a heterogeneous rate-and-state (RS) friction law for a 45° dip-slip fault. We performed a parametrization study by fully dynamic rupture modeling, and a set of spontaneous source models was then generated over a large magnitude range (Mw > 7.0). To validate the rupture models, we compare the scaling of the modeled rupture area S, average slip Dave, and slip asperity area Sa against seismic moment Mo with similar scaling relations obtained from source inversions. Ground motions were also computed from our models; their peak ground velocities (PGV) agree well with ground motion prediction equation (GMPE) values, and the permanent surface offsets agree well with empirical relations. From the heterogeneous rupture models, we analyzed parameters that are critical for ground motion simulations, i.e. the distributions of slip, slip rate, rupture initiation points, rupture velocities, and source time functions. We studied cross-correlations between them and with the friction weakening distance Dc, the only initial heterogeneity parameter in our modeling. The main findings are: (1) high slip-rate areas coincide with or are located on the outer edge of the large-slip areas, (2) ruptures tend to initiate in small-Dc areas, and (3) high slip-rate areas correlate with areas of small Dc, large rupture velocity and short rise time.
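The moment-area scaling check mentioned above can be illustrated with a short sketch (synthetic numbers, not the study's models; the rigidity value, the self-similar slip assumption, and the Hanks-Kanamori magnitude conversion are assumptions made for the example):

```python
# Illustrative sketch only: seismic moment Mo = mu * Dave * S is computed for each
# synthetic model and the moment-area scaling is checked against the ~2/3 slope
# expected for self-similar ruptures.
import numpy as np

mu = 3.0e10                                  # assumed crustal rigidity (Pa)
rng = np.random.default_rng(2)
S = rng.uniform(1e9, 2e10, size=50)          # rupture areas (m^2), synthetic
Dave = 4e-5 * np.sqrt(S)                     # average slip (m), self-similar assumption
Mo = mu * Dave * S                           # seismic moment (N*m)
Mw = (np.log10(Mo) - 9.1) / 1.5              # moment magnitude (Hanks-Kanamori)

slope, _ = np.polyfit(np.log10(Mo), np.log10(S), 1)
print(f"Mw range: {Mw.min():.1f}-{Mw.max():.1f}, d log10(S) / d log10(Mo) = {slope:.2f}")
```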

Keywords: earthquake dynamics, strong ground motion prediction, seismic engineering, source characterization

Procedia PDF Downloads 144
203 A Relational Approach to Adverb Use in Interactions

Authors: Guillaume P. Fernandez

Abstract:

Individual language use is a matter of choice in particular interactions. The paper proposes a conceptual and theoretical framework, together with methodological considerations, for situating the language produced in dyadic relations within the larger social configuration in which the interaction is embedded. An integrated and comprehensive view is taken: social interactions are expected to be governed by a normative context defined by the chain of interdependences that structures the personal network. In this approach, the determinants of discursive practices are neither confined to the moment of production nor isolated from broader influences. Instead, the position that the individual and the dyad occupy in the personal network influences discursive practices in a twofold manner: on the one hand, the network limits access to the linguistic resources available within it, and, on the other hand, the structure of the network shapes the agency of the individual through the social control inherent to particular network characteristics. Concretely, we investigate how, and to what extent, ego is consistent from one interaction to another in his or her use of adverbs. To do so, social network analysis (SNA) methods are mobilized. Participants (N=130) are college students recruited in the French-speaking part of Switzerland. The personal network of significant others of each individual is constructed using name generators and edge interpreters, with a focus on social support and conflict. For the linguistic part, respondents were asked to record themselves with five of their close relations. From the recordings, we computed an average similarity score based on the adverbs used across interactions. Two analyses are envisaged. First, OLS regressions including network-level measures, such as density and reciprocity, and individual-level measures, such as centralities, are performed to understand the determinants of linguistic similarity from one interaction to another. The second analysis considers each social tie as nested within ego networks; multilevel models are performed to investigate how different types of ties may influence the likelihood of using adverbs, controlling for structural properties of the personal network. Primary results suggest that the more cohesive the network, the less likely the individual is to change his or her manner of speaking, and that social support increases the use of adverbs in interactions. While promising results emerge, further research should consider a longitudinal approach to enable claims of causality.
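As an illustration of the per-ego similarity score (the abstract does not specify the metric, so cosine similarity between adverb frequency vectors is assumed here, with hypothetical adverbs):

```python
# Sketch only: each ego's score is taken as the mean pairwise cosine similarity of
# adverb use across his or her recorded interactions; the metric is an assumption.
from collections import Counter
from itertools import combinations
import math

def cosine(c1, c2):
    shared = set(c1) & set(c2)
    dot = sum(c1[w] * c2[w] for w in shared)
    norm = math.sqrt(sum(v * v for v in c1.values())) * math.sqrt(sum(v * v for v in c2.values()))
    return dot / norm if norm else 0.0

def ego_similarity(interactions):
    """interactions: list of adverb lists, one per recorded conversation."""
    counts = [Counter(adverbs) for adverbs in interactions]
    pairs = list(combinations(counts, 2))
    return sum(cosine(a, b) for a, b in pairs) / len(pairs)

# Hypothetical adverbs extracted from three of an ego's conversations:
print(round(ego_similarity([["vraiment", "souvent", "bien"],
                            ["vraiment", "bien", "bien"],
                            ["souvent", "toujours"]]), 2))
```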

Keywords: personal network, adverbs, interactions, social influence

Procedia PDF Downloads 68