Search results for: detection and estimation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5221


601 Forensic Medical Capacities of Research of Saliva Stains on Physical Evidence after Washing

Authors: Saule Mussabekova

Abstract:

Recent advances in genetics have sharply increased the capacity to produce reliable evidence in forensic examinations. Traces of biological origin are thus important sources of information about a crime. Worldwide, sexual offenses have increased, among them cases in which the criminals use various detergents to remove traces of their crime. A feature of modern synthetic detergents is the presence of biological additives, namely enzymes, which purposefully destroy stains of biological origin. To study the nature and extent of the impact of modern washing powders on saliva stains on physical evidence, specially prepared test specimens of different types of fabric to which saliva had been applied were examined. Materials and Methods: Washing machines from well-known manufacturers of household appliances, with different production characteristics, and advertised brands of washing powder were used for test washing. Over 3,500 experimental samples were tested. After washing, the traces of saliva were identified using modern forensic research methods. Results: The dependence of saliva-trace detection and stain identification on the washing program, type of washing machine, and washing powder used was revealed. The results of experimental and practical expert studies have shown that in most cases it is not possible to draw conclusions when identifying saliva traces on physical evidence after washing, a consequence of the effect of biological additives and other factors on saliva traces during washing. Conclusions: On the basis of these results, the feasibility of studying saliva stains on physical evidence after washing is established.
The use of modern molecular genetic methods makes it possible to partially solve the problems arising in the study of laundered evidence. Additional study of physical evidence after washing facilitates the detection and investigation of sexual offenses against women and children.

Keywords: saliva research, modern synthetic detergents, laundry detergents, forensic medicine

Procedia PDF Downloads 216
600 Tests for Zero Inflation in Count Data with Measurement Error in Covariates

Authors: Man-Yu Wong, Siyu Zhou, Zhiqiang Cao

Abstract:

In quality-of-life research, health service utilization is an important determinant of medical resource expenditures on colorectal cancer (CRC) care. A better understanding of increased utilization of health services is essential for optimizing the allocation of healthcare resources and thus enhancing service quality, especially in regions with high expenditure on CRC care such as Hong Kong. In assessing the association between health-related quality of life (HRQOL) and health service utilization in patients with colorectal neoplasm, count data models that account for overdispersion or extra zero counts can be used. In our data, the HRQOL evaluation is a self-reported measure obtained from a questionnaire completed by the patients, so misreports and variations in the data are inevitable. Besides, there are more zero counts in the observed number of clinical consultations (observed frequency of zero counts = 206) than expected from a Poisson distribution with mean equal to 1.33 (expected frequency of zero counts = 156), suggesting that an excess of zero counts may exist. Therefore, we study tests for detecting zero inflation in models with measurement error in covariates. Method: Under the classical measurement error model, the approximate likelihood function for the zero-inflated Poisson (ZIP) regression model can be obtained, and the Approximate Maximum Likelihood Estimation (AMLE) can be derived accordingly; it is consistent and asymptotically normally distributed. By calculating the score function and Fisher information based on the AMLE, a score test is proposed to detect a zero-inflation effect in the ZIP model with measurement error. The proposed test asymptotically follows the standard normal distribution under H0, and it is consistent with the test proposed for the zero-inflation effect when there is no measurement error.
Results: Simulation results show that the empirical power of our proposed test is the highest among existing tests for zero inflation in the ZIP model with measurement error. In real data analysis, with or without considering measurement error in covariates, existing tests and our proposed test all imply that H0 should be rejected with a p-value less than 0.001; i.e., the zero-inflation effect is very significant, and the ZIP model is superior to the Poisson model for analyzing these data. However, if measurement error in covariates is not considered, only one covariate is significant; if it is considered, only another covariate is significant. Moreover, the direction of the coefficient estimates for these two covariates differs between the ZIP regression models with and without measurement error. Conclusion: In our study, compared to the Poisson model, the ZIP model should be chosen when assessing the association between condition-specific HRQOL and health service utilization in patients with colorectal neoplasm, and models taking measurement error into account yield statistically more reliable and precise information.
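
The zero-count comparison that motivates the test can be reproduced directly: under a Poisson(λ) model, the probability of a zero count is e^(−λ). A minimal sketch, assuming a sample size of about 590 (back-solved from the abstract's expected frequency of 156 zeros; the actual n is not stated):

```python
import math

lam = 1.33               # sample mean number of clinical consultations
p_zero = math.exp(-lam)  # Poisson probability of observing a zero count

# Sample size is not stated in the abstract; n = 590 is back-solved from
# the reported expected frequency of 156 zeros (hypothetical assumption).
n = 590
expected_zeros = n * p_zero
observed_zeros = 206

print(round(expected_zeros))  # ~156 expected vs. 206 observed: excess zeros
```

The gap between 206 observed and roughly 156 expected zeros is exactly the kind of discrepancy the proposed score test formalizes.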

Keywords: count data, measurement error, score test, zero inflation

Procedia PDF Downloads 288
599 Highly Responsive p-NiO/n-rGO Heterojunction Based Self-Powered UV Photodetectors

Authors: P. Joshna, Souvik Kundu

Abstract:

Detection of ultraviolet (UV) radiation is very important, as UV has a profound influence on humankind and other life, as well as on military equipment. In this work, a self-powered UV photodetector based on oxide heterojunctions is reported. Thin films of p-type nickel oxide (NiO) and n-type reduced graphene oxide (rGO) were used to form the p-n heterojunction. Low-cost, low-temperature chemical synthesis was utilized to prepare the oxides, and the spin coating technique was employed to deposit them onto indium-doped tin oxide (ITO) coated glass substrates. The top platinum electrode was deposited by physical vapor evaporation. NiO offers strong UV absorption with high hole mobility, and rGO suppresses recombination by extracting electrons from the photogenerated carriers. Structural characterizations such as X-ray diffraction, atomic force microscopy, and scanning electron microscopy were used to study the materials' crystallinity, microstructure, and surface roughness. The oxides were found to be polycrystalline in nature, with no secondary phases present; surface roughness was low, with no pit holes, indicating the formation of high-quality oxide thin films. X-ray photoelectron spectroscopy was employed to study the chemical composition and oxidation states. Electrical characterizations, namely current-voltage and photocurrent response measurements, were performed on the device to determine the responsivity, detectivity, and external quantum efficiency under dark and UV illumination. The p-n heterojunction device offered fast photoresponse and a high on-off ratio under 365 nm UV illumination at zero bias.
The device based on the proposed architecture demonstrates the efficacy of the oxide heterojunction for efficient UV photodetection at zero bias, which opens a new path towards the development of self-powered photodetectors for the environment and health monitoring sectors.
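
The figures of merit named in the abstract follow from standard definitions: responsivity R = I_ph/P and external quantum efficiency EQE = R·hc/(qλ). A sketch with hypothetical photocurrent and optical-power values (the abstract does not report measured numbers):

```python
# Standard photodetector figures of merit. All numerical inputs below are
# hypothetical placeholders; the abstract does not report measured values.
PLANCK = 6.626e-34      # Planck constant, J*s
LIGHT_SPEED = 3.0e8     # speed of light, m/s
CHARGE = 1.602e-19      # elementary charge, C

wavelength = 365e-9     # UV illumination wavelength from the abstract, m
photocurrent = 1e-6     # hypothetical photocurrent, A
incident_power = 10e-6  # hypothetical optical power on the device, W

responsivity = photocurrent / incident_power                       # A/W
eqe = responsivity * PLANCK * LIGHT_SPEED / (CHARGE * wavelength)  # fraction

print(f"R = {responsivity:.2f} A/W, EQE = {eqe * 100:.1f}%")
```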

Keywords: chemical synthesis, oxides, photodetectors, spin coating

Procedia PDF Downloads 123
598 Generation of Roof Design Spectra Directly from Uniform Hazard Spectra

Authors: Amin Asgarian, Ghyslaine McClure

Abstract:

Proper seismic evaluation of non-structural components (NSCs) requires an accurate estimation of floor seismic demands (i.e., acceleration and displacement demands). Most current international codes incorporate empirical equations to calculate the equivalent static seismic force for which NSCs and their anchorage systems must be designed. These equations are, in general, functions of the component mass and the peak seismic acceleration to which NSCs are subjected during the earthquake. However, recent studies have shown that these recommendations suffer from several shortcomings, such as neglecting higher-mode effects, tuning effects, and NSC damping effects, which cause underestimation of the component seismic acceleration demand. This work aims to circumvent these shortcomings of the code provisions by proposing a simplified, practical, and yet accurate approach to generate acceleration Floor Design Spectra (FDS) directly from the corresponding Uniform Hazard Spectra (UHS) (i.e., design spectra for structural components). A database of 27 reinforced concrete (RC) buildings in which ambient vibration measurements (AVM) were conducted is used. The database comprises 12 low-rise, 10 medium-rise, and 5 high-rise buildings, all located in Montréal, Canada, and designated as post-disaster buildings or emergency shelters. The buildings are subjected to a set of 20 compatible seismic records, and Floor Response Spectra (FRS) in terms of pseudo-acceleration are derived using the proposed approach for every floor of each building, in both horizontal directions, considering four different NSC damping ratios (2, 5, 10, and 20% viscous damping). Several parameters affecting NSC response are evaluated statistically: the NSC damping ratio, tuning of the NSC natural period with one of the natural periods of the supporting structure, higher modes of the supporting structure, and the location of the NSC.
The spectral region is divided into three distinct segments: the short-period, fundamental-period, and long-period regions. The derived roof floor response spectra for NSCs with 5% damping are compared with the 5% damped UHS, and procedures are proposed to generate roof FDS for NSCs with 5% damping directly from the 5% damped UHS in each spectral region. The generated FDS is a powerful, practical, and accurate tool for the seismic design and assessment of acceleration-sensitive NSCs, particularly in existing post-disaster buildings, which must remain functional even after an earthquake and cannot tolerate damage to NSCs.

Keywords: earthquake engineering, operational and functional components (OFCs), operational modal analysis (OMA), seismic assessment and design

Procedia PDF Downloads 236
597 Coupling Static Multiple Light Scattering Technique With the Hansen Approach to Optimize Dispersibility and Stability of Particle Dispersions

Authors: Guillaume Lemahieu, Matthias Sentis, Giovanni Brambilla, Gérard Meunier

Abstract:

Static Multiple Light Scattering (SMLS) has been shown to be a straightforward technique for the characterization of colloidal dispersions without dilution, as multiply scattered light in backscattered and transmitted modes is directly related to the concentration and size of the scatterers present in the sample. Accordingly, the use of SMLS for stability measurement of various dispersion types has already been widely described in the literature: starting from a homogeneous dispersion, the variation of backscattered or transmitted light can be attributed to destabilization phenomena, such as migration (sedimentation, creaming) or particle size variation (flocculation, aggregation). To further investigate the dispersibility of colloidal suspensions, an experimental set-up for "at-line" SMLS experiments has been developed to understand the impact of formulation parameters on particle size and dispersibility. The SMLS experiment is performed at a high acquisition rate (up to 10 measurements per second), without dilution, and under direct agitation. Using such an experimental device, SMLS detection can be combined with the Hansen approach to optimize the dispersing and stabilizing properties of TiO₂ particle formulations. The dispersibility and stability spheres generated are clearly separated, showing that lower stability is not necessarily a consequence of poor dispersibility. Beyond this clarification, the combined SMLS-Hansen approach is a major step toward the optimization of the dispersibility and stability of colloidal formulations, by finding solvents that offer the best compromise between dispersing and stabilizing properties.
Such studies can help identify better dispersion media and greener, cheaper solvents to optimize particle suspensions, reduce the content of costly stabilizing additives, or satisfy evolving product regulatory requirements in the many industrial fields using suspensions (paints and inks, coatings, cosmetics, energy).
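
The Hansen approach places each solvent in (δD, δP, δH) space and scores it by its distance Ra to the center of the solute's interaction sphere; RED = Ra/R0 < 1 indicates a favourable solvent. A minimal sketch with illustrative parameter values (the fitted TiO₂ sphere coordinates are not given in the abstract):

```python
import math

def hansen_distance(s1, s2):
    """Hansen distance Ra between two (dD, dP, dH) sets, in MPa^0.5."""
    dd1, dp1, dh1 = s1
    dd2, dp2, dh2 = s2
    return math.sqrt(4 * (dd1 - dd2) ** 2 + (dp1 - dp2) ** 2 + (dh1 - dh2) ** 2)

# Illustrative sphere center and radius for the particle surface, and one
# candidate solvent; these are hypothetical values, not the paper's data.
center = (18.0, 9.0, 8.0)
r0 = 7.0
solvent = (16.0, 7.0, 6.0)

ra = hansen_distance(center, solvent)
red = ra / r0  # RED < 1: solvent lies inside the sphere (favourable)
print(f"Ra = {ra:.2f}, RED = {red:.2f}")
```

Running the same scoring against separate dispersibility and stability spheres is what lets the two properties be optimized independently, as the abstract describes.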

Keywords: dispersibility, stability, Hansen parameters, particles, solvents

Procedia PDF Downloads 110
596 Resting-State Functional Connectivity Analysis Using an Independent Component Approach

Authors: Eric Jacob Bacon, Chaoyang Jin, Dianning He, Shuaishuai Hu, Lanbo Wang, Han Li, Shouliang Qi

Abstract:

Objective: Refractory epilepsy is a complicated type of epilepsy that can be difficult to diagnose. Recent technological advancements have made resting-state functional magnetic resonance imaging (rsfMRI) a vital technique for studying brain activity, but there is still much to learn about it. Investigating rsfMRI connectivity may aid in the detection of abnormal activity. In this paper, we propose studying the functional connectivity of rsfMRI candidates to diagnose epilepsy. Methods: 45 rsfMRI candidates, comprising 26 patients with refractory epilepsy and 19 healthy controls, were enrolled in this study. A data-driven approach known as independent component analysis (ICA) was used. First, rsfMRI data from both patients and healthy controls were analyzed using group ICA. The components obtained were then spatially sorted to find and select meaningful ones. A two-sample t-test was used to identify abnormal networks in patients relative to healthy controls. Finally, based on the fractional amplitude of low-frequency fluctuations (fALFF), a chi-square test was used to distinguish the network properties of the patient and healthy control groups. Results: The two-sample t-test revealed abnormalities in the default mode network, including the left superior temporal lobe and the left supramarginal gyrus. The right precuneus was found to be abnormal in the dorsal attention network. In addition, the frontal cortex showed an abnormal cluster in the medial temporal gyrus, while the temporal cortex showed abnormal clusters in the right middle temporal gyrus and the right fronto-opercular gyrus. Finally, the chi-square test was significant, with a p-value of 0.001. Conclusion: This study offers evidence that investigating rsfMRI connectivity provides an excellent diagnostic option for refractory epilepsy.

Keywords: ICA, RSN, refractory epilepsy, rsfMRI

Procedia PDF Downloads 76
595 Attitude and Knowledge of Primary Health Care Physicians and Local Inhabitants about Leishmaniasis and Sandfly in West Alexandria, Egypt

Authors: Randa M. Ali, Naguiba F. Loutfy, Osama M. Awad

Abstract:

Background: Leishmaniasis is a worldwide disease affecting 88 countries; it is estimated that about 350 million people are at risk of leishmaniasis. Overall prevalence is 12 million people, with annual mortality of about 60,000. Annual incidence is 1,500,000 cases of cutaneous leishmaniasis (CL) worldwide and half a million cases of visceral leishmaniasis (VL). Objectives: The objective of this study was to assess primary health care physicians' (PHP) knowledge of and attitudes toward leishmaniasis, and to assess the awareness of local inhabitants about the disease and its vector, in four areas of west Alexandria, Egypt. Methods: This study was a cross-sectional survey conducted in four PHC units in west Alexandria. All physicians working in these units during the study period were invited to participate; only 20 PHPs completed the questionnaire. Sixty local inhabitants were selected randomly from the four study areas, 15 from each; data were collected through two specially designed questionnaires. Results: Eleven (55%) of the physicians had satisfactory knowledge, answering more than 9 (60%) of the 14 questions about leishmaniasis and the sandfly. The second part of the questionnaire concerned the attitudes of the primary health care physicians toward leishmaniasis: 17 (85%) had a good attitude and 3 (15%) a poor attitude. The second questionnaire showed that the awareness of local inhabitants about leishmaniasis and the sandfly as a vector of the disease is poor and needs to be corrected. Most respondents (90%) had not heard about leishmaniasis, and only 3 (5%) of the interviewed inhabitants said they knew the sandfly and its role in the transmission of leishmaniasis. Conclusions: The knowledge and attitudes of the physicians are acceptable. However, there is room for improvement, which could be achieved through formal training courses and the distribution of guidelines,
in addition to raising the awareness of primary health care physicians about the importance of early detection and notification of leishmaniasis cases. Moreover, health education to raise public awareness of the vector and the disease is necessary, because related studies have demonstrated that if inhabitants do not perceive mosquitoes to be responsible for diseases such as malaria, they do not take sufficient measures to protect themselves against the vector.

Keywords: leishmaniasis, PHP, knowledge, attitude, local inhabitants

Procedia PDF Downloads 449
594 Controlled Growth of Au Hierarchically Ordered Crystals Architectures for Electrochemical Detection of Traces of Molecules

Authors: P. Bauer, K. Mougin, V. Vignal, A. Buch, P. Ponthiaux, D. Faye

Abstract:

Nowadays, noble metallic nanostructures with unique morphologies are widely used as sensors due to their fascinating optical, electronic, and catalytic properties. Among the various shapes, dendritic nanostructures have attracted much attention because of their large surface-to-volume ratio, high sensitivity, and special texture with sharp tips and nanoscale junctions. Several methods have been developed to fabricate such structures, including electrodeposition, photochemical routes, seed-mediated growth, and wet chemical methods. The present study deals with a novel approach for the controlled, pattern-directed growth of Au flower-like crystals (NFs) deposited onto stainless steel plates to achieve large-scale functional surfaces. The technique consists of depositing a soft nanoporous template on which Au NFs are grown by electroplating and seed-mediated methods. Size, morphology, and inter-structure distance are controlled by a site-selective nucleation process. Dendritic Au nanostructures are excellent Raman-active candidates, because the very sharp tips of the multi-branched Au nanoparticles lead to a large local field enhancement and good SERS sensitivity. In addition, these structures have been used as electrochemical sensors to detect traces of molecules present in solution. Correlating the number of active sites on the surface with the current charge, by both a colorimetric method and cyclic voltammetry of the gold structures, allowed a calibration of the system. This device represents a first step toward the fabrication of a MEMS platform that could ultimately be integrated into a lab-on-chip system. It also opens pathways to several large-scale nanomaterial fabrication technologies, such as hierarchically ordered crystal architectures for sensor applications.

Keywords: dendritic, electroplating, gold, template

Procedia PDF Downloads 186
593 Vehicles Analysis, Assessment and Redesign Related to Ergonomics and Human Factors

Authors: Susana Aragoneses Garrido

Abstract:

Every day, roads are the scene of numerous accidents involving vehicles, producing thousands of deaths and serious injuries all over the world. Investigations have revealed that human factors (HF) are one of the main causes of road accidents in modern societies. Distracted driving (whether from aspects external or internal to the vehicle), which is considered a human factor, is a serious and emerging risk to road safety. Consequently, further analysis of this issue is essential due to its importance in today's society. The objectives of this investigation are the detection and assessment of HF in order to provide solutions (including better vehicle design) that might mitigate road accidents. The methodology of the project is divided into phases. First, a statistical analysis of public databases from Spain and the UK is provided. Second, the data are classified in order to analyse the major causes involved in road accidents. Third, a simulation of different paths and vehicles is presented, and the causes related to HF are assessed by Failure Mode and Effects Analysis (FMEA). Fourth, different car models are evaluated using Rapid Upper Limb Assessment (RULA). Additionally, the Jack Siemens PLM tool is used to evaluate the human factor causes and guide the redesign of the vehicles. Finally, improvements in car design are proposed with the intention of reducing the implication of HF in traffic accidents. The results of the statistical analysis, the simulations, and the evaluations confirm that accidents are an important issue in today's society, especially accidents caused by HF such as distractions. The results explore the reduction of external and internal HF through a global analysis of vehicle accident risk.
Moreover, the evaluation of the different car models using the RULA method and the Jack Siemens PLM tool proves the importance of properly adjusting the driver's seat in order to avoid harmful postures and, therefore, distractions. For this reason, a car redesign is proposed so that the driver can acquire the optimum position, consequently reducing human factors in road accidents.
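
FMEA, used in the third phase of the methodology, ranks each failure mode by a Risk Priority Number, RPN = severity × occurrence × detection, each typically scored on a 1-10 scale. A sketch with hypothetical distraction-related failure modes and scores (the study's actual FMEA entries are not given):

```python
# FMEA risk ranking: RPN = severity x occurrence x detection, each 1-10.
# The failure modes and scores below are illustrative, not the study's data.
failure_modes = [
    ("mobile phone use",       9, 7, 4),
    ("adjusting infotainment", 6, 6, 5),
    ("poor seat position",     5, 8, 6),
]

ranked = sorted(failure_modes, key=lambda m: m[1] * m[2] * m[3], reverse=True)
for name, s, o, d in ranked:
    print(f"{name}: RPN = {s * o * d}")
```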

Keywords: vehicle analysis, assessment, ergonomics, car redesign

Procedia PDF Downloads 335
592 Development and Validation of a Carbon Dioxide TDLAS Sensor for Studies on Fermented Dairy Products

Authors: Lorenzo Cocola, Massimo Fedel, Dragiša Savić, Bojana Danilović, Luca Poletto

Abstract:

An instrument for the detection and evaluation of gaseous carbon dioxide in the headspace of closed containers has been developed in the context of the Packsensor Italian-Serbian joint project. The device is based on Tunable Diode Laser Absorption Spectroscopy (TDLAS) with a Wavelength Modulation Spectroscopy (WMS) technique, in order to accomplish a non-invasive measurement inside closed containers of fermented dairy products (yogurts and fermented cheese in cups and bottles). The purpose of this instrument is the continuous monitoring of carbon dioxide concentration during incubation and storage, over the whole shelf life of the product, in the presence of different microorganisms. The instrument's optical front end has been designed to be integrated into a thermally stabilized incubator. An embedded computer provides processing of spectral artifacts and storage of an arbitrary set of calibration data, allowing properly calibrated measurements on many samples (cups and bottles) of the different shapes and sizes commonly found in retail distribution. A calibration protocol has been developed in order to calibrate the instrument in the field, including on containers that are notoriously difficult to seal properly. This calibration protocol is described and evaluated against reference measurements obtained with an industry-standard (sampling) carbon dioxide metering technique. Several sets of validation measurements on different containers are reported, and two recordings of carbon dioxide concentration evolution are shown as examples of instrument operation. The first demonstrates the ability to monitor rapid yeast growth in a contaminated sample through the increase of headspace carbon dioxide; the second shows the dissolution transient of a carbon dioxide-rich headspace atmosphere into a non-saturated liquid medium.
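
At its core, a TDLAS measurement relates transmitted laser intensity to gas concentration through the Beer-Lambert law, I_t = I_0·exp(−σNL), which can be inverted to retrieve the CO₂ number density; WMS adds modulation on top of this to improve sensitivity. A minimal direct-absorption sketch with illustrative values (the cross-section, path length, and densities are hypothetical, not the instrument's calibration data):

```python
import math

I0 = 1.0         # incident laser intensity (arbitrary units)
SIGMA = 1.0e-22  # hypothetical CO2 absorption cross-section, cm^2
PATH = 5.0       # hypothetical optical path through the headspace, cm

def transmitted(n_density):
    """Transmitted intensity for a molecular number density in cm^-3."""
    return I0 * math.exp(-SIGMA * n_density * PATH)

def retrieve_density(i_t):
    """Invert Beer-Lambert to recover the number density from i_t."""
    return -math.log(i_t / I0) / (SIGMA * PATH)

n_true = 2.0e19  # hypothetical CO2 number density in the headspace
print(retrieve_density(transmitted(n_true)))  # recovers ~2e19
```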

Keywords: TDLAS, carbon dioxide, cups, headspace, measurement

Procedia PDF Downloads 324
591 Evaluation of Antidiabetic Activity of a Combination Extract of Nigella Sativa & Cinnamomum Cassia in Streptozotocin Induced Type-I Diabetic Rats

Authors: Ginpreet Kaur, Mohammad Yasir Usmani, Mohammed Kamil Khan

Abstract:

Diabetes mellitus is a disease with a high global burden and results in significant morbidity and mortality. In India, the number of people suffering from diabetes is expected to rise from 19 million to 57 million by 2025. At present, interest in herbal remedies is growing, in order to reduce the side effects associated with conventional dosage forms, such as oral hypoglycemic agents and insulin, in the treatment of diabetes mellitus. Our aim was to investigate the antidiabetic activity of a combination extract of N. sativa and C. cassia in streptozotocin (STZ)-induced type-I diabetic rats. The present study was therefore undertaken to screen postprandial glucose excursion potential through α-glucosidase inhibitory activity (in vitro) and the effect of the combination extract of N. sativa and C. cassia in STZ-induced type-I diabetic rats (in vivo). In addition, changes in body weight, plasma glucose, lipid profile, and kidney profile were determined. The IC50 values for the extract and acarbose were calculated by the extrapolation method. The combination extract of N. sativa and C. cassia at different dosages (100 and 200 mg/kg orally) and metformin (50 mg/kg orally) as the standard drug were administered for 28 days, after which biochemical estimation, body weights, and the oral glucose tolerance test (OGTT) were determined. Histopathological studies were also performed on kidney and pancreatic tissue. In vitro, the combination extract showed a much greater inhibitory effect than the individual extracts. The results reveal that the combination extract of N. sativa and C. cassia produced a significant decrease in plasma glucose (p<0.0001), total cholesterol, and LDL levels compared with the STZ group. The decreasing levels of BUN and creatinine revealed the protection afforded by the N. sativa and C. cassia extracts against the nephropathy associated with diabetes.
The combination of N. sativa and C. cassia significantly improved glucose tolerance to exogenously administered glucose (2 g/kg) at the 60, 90, and 120 min intervals of the OGTT in high-dose STZ-induced diabetic rats, compared with the untreated control group. Histopathological studies showed that treatment with N. sativa and C. cassia extracts, alone and in combination, restored pancreatic tissue integrity and was able to regenerate the STZ-damaged pancreatic β cells. Thus, the present study reveals that the combination of N. sativa and C. cassia extracts has significant α-glucosidase inhibitory activity and great potential as a new source for diabetes treatment.

Keywords: lipid levels, OGTT, diabetes, herbs, glucosidase

Procedia PDF Downloads 431
590 Approach to Freight Trip Attraction Areas Classification, in Developing Countries

Authors: Adrián Esteban Ortiz-Valera, Angélica Lozano

Abstract:

In developing countries, informal trade is relevant, but it has been little studied in the urban freight transport (UFT) context, although it poses a challenge due to the unaccounted-for demand it produces and the operational limitations it imposes. Hence, UFT operational improvements (initiatives) and freight attraction models must consider informal trade in developing countries. A four-phase approach for characterizing commercial areas in developing countries (considering both formal and informal establishments) is proposed and applied to ten areas of Mexico City. This characterization is required to calculate the real freight trip attraction and then select and/or adapt suitable initiatives. Phase 1 delimits the study area; the following information is obtained for each establishment of a potential area: location (geographic coordinates), industrial sector, industrial subsector, and number of employees. Phase 2 characterizes the study area and proposes a set of indicators, allowing a broad view of the operations and constraints of UFT in the study area. Phase 3 classifies the study area according to seven indicators; each indicator represents a level of conflict in the area due to the presence of formal (registered) and informal establishments on the sidewalks and streets, affecting urban freight transport (and other activities). Phase 4 determines preliminary initiatives that could be implemented in the study area to improve the operation of UFT. The relation between indicators and initiatives allows a preliminary selection of initiatives.
This relation requires knowledge of: a) the problems in the area (congested streets, lack of parking space for freight vehicles, etc.); b) the factors limiting initiatives due to informal establishments (streets with reduced access for freight vehicles; inability to move or park during certain periods; among others); c) the problems in the area due to its physical characteristics; and d) the factors limiting initiatives due to the area's regulations. Several differences among the study areas were observed. As the indicators increase, the areas tend to be less ordered, and the limitations on initiatives become greater, leaving a smaller number of applicable initiatives. In ordered areas (similar to the commercial areas of developed countries), the current techniques for estimating freight trip attraction (FTA) can be directly applied; however, in areas where the level of order is lower due to the presence of informal trade, this is not recommended, because the real FTA would not be estimated. Therefore, a technique that considers the characteristics of areas in developing countries, both to obtain data and to estimate FTA, is required. This estimation can be the basis for proposing feasible initiatives for such zones. The proposed approach provides a broad view of the needs of the commercial areas of developing countries. Knowledge of these needs would allow UFT operations to be improved and their negative impacts minimized.

Keywords: freight initiatives, freight trip attraction, informal trade, urban freight transport

Procedia PDF Downloads 141
589 Estimation of the Dynamic Fragility of Padre Jacinto Zamora Bridge Due to Traffic Loads

Authors: Kimuel Suyat, Francis Aldrine Uy, John Paul Carreon

Abstract:

The Philippines, composed of many islands, is connected with approximately 8030 bridges. Continuous evaluation of the structural condition of these bridges is needed to safeguard the safety of the general public. With most bridges reaching its design life, retrofitting and replacement may be needed. Concerned government agencies allocate huge costs for periodic monitoring and maintenance of these structures. The rising volume of traffic and aging of these infrastructures is challenging structural engineers to give rise for structural health monitoring techniques. Numerous techniques are already proposed and some are now being employed in other countries. Vibration Analysis is one way. The natural frequency and vibration of a bridge are design criteria in ensuring the stability, safety and economy of the structure. Its natural frequency must not be so high so as not to cause discomfort and not so low that the structure is so stiff causing it to be both costly and heavy. It is well known that the stiffer the member is, the more load it attracts. The frequency must not also match the vibration caused by the traffic loads. If this happens, a resonance occurs. Vibration that matches a systems frequency will generate excitation and when this exceeds the member’s limit, a structural failure will happen. This study presents a method for calculating dynamic fragility through the use of vibration-based monitoring system. Dynamic fragility is the probability that a structural system exceeds a limit state when subjected to dynamic loads. The bridge is modeled in SAP2000 based from the available construction drawings provided by the Department of Public Works and Highways. It was verified and adjusted based from the actual condition of the bridge. The bridge design specifications are also checked using nondestructive tests. The approach used in this method properly accounts the uncertainty of observed values and code-based structural assumptions. 
The vibration response of the structure due to actual loads is monitored using sensors installed on the bridge. From these dynamic characteristics of the system, threshold criteria can be established and fragility curves can be estimated. This study, conducted under the research project between the Department of Science and Technology, Mapúa Institute of Technology, and the Department of Public Works and Highways, also known as the Mapúa-DOST Smart Bridge Project, deploys structural health monitoring sensors at Zamora Bridge. The bridge was selected in coordination with the Department of Public Works and Highways. The structural plans for the bridge are also readily available.
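A fragility curve of the kind described is conventionally modelled as a lognormal cumulative distribution function of the demand. A minimal sketch, with hypothetical median capacity and dispersion values (not taken from the study):

```python
import math

def fragility(x, median, beta):
    """Lognormal fragility curve: probability of exceeding a limit state
    given a demand value x (e.g., peak story drift)."""
    return 0.5 * (1.0 + math.erf(math.log(x / median) / (beta * math.sqrt(2.0))))

# Hypothetical parameters: median drift capacity 0.02 rad, dispersion 0.4.
median_drift, beta = 0.02, 0.4
p_low = fragility(0.01, median_drift, beta)
p_med = fragility(0.02, median_drift, beta)
p_high = fragility(0.04, median_drift, beta)
```

At the median capacity the exceedance probability is exactly 0.5, and it rises monotonically with demand; fitting `median` and `beta` to monitored responses is the estimation step the abstract describes.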

Keywords: structural health monitoring, dynamic characteristic, threshold criteria, traffic loads

Procedia PDF Downloads 270
588 Intersection of Racial and Gender Microaggressions: Social Support as a Coping Strategy among Indigenous LGBTQ People in Taiwan

Authors: Ciwang Teyra, A. H. Y. Lai

Abstract:

Introduction: Indigenous LGBTQ individuals face significant life stress such as racial and gender discrimination and microaggressions, which may lead to negative impacts on their mental health. Although studies relevant to Taiwanese indigenous LGBTQ people have gradually increased, most of them are primarily conceptual or qualitative in nature. This research aims to fill the gap by offering empirical quantitative evidence, especially investigating the impact of racial and gender microaggressions on mental health among Taiwanese indigenous LGBTQ individuals with an intersectional perspective, as well as examining whether social support can help them to cope with microaggressions. Methods: Participants were Taiwanese indigenous LGBTQ individuals (n=200; mean age=29.51; female=31%, male=61%, others=8%). A cross-sectional quantitative design was implemented using data collected in the year 2020. Standardised measurements were used, including the Racial Microaggression Scale (10 items), the Gender Microaggression Scale (9 items), the Social Support Questionnaire-SF (6 items), the Patient Health Questionnaire (9 items), and the Generalised Anxiety Disorder scale (7 items). Covariates were age, gender, and perceived economic hardships. Structural equation modelling (SEM) was employed using Mplus 8.0 with the latent variables of depression and anxiety as outcomes. A main-effect SEM model was first established (Model 1). To test the moderation effects of perceived social support, an interaction-effect model (Model 2) was created with interaction terms entered into Model 1. Numerical integration was used with maximum likelihood estimation to estimate the interaction model. Results: Model fit statistics of Model 1: χ²(df)=1308.1 (795), p<.05; CFI/TLI=0.92/0.91; RMSEA=0.06; SRMR=0.06. The AIC and BIC values of Model 2 changed only slightly compared to Model 1 (AIC=15631 (Model 1) vs. 15629 (Model 2); BIC=16098 (Model 1) vs. 16103 (Model 2)). Model 2 was adopted as the final model.
In the main-effect Model 1, racial microaggression and perceived social support were associated with depression and anxiety, but sexual orientation microaggression was not (indigenous microaggression: b=0.27 for depression, b=0.38 for anxiety; social support: b=-0.37 for depression, b=-0.34 for anxiety). Thus, an interaction term between social support and indigenous microaggression was added in Model 2. In the final Model 2, indigenous microaggression and perceived social support continued to be statistically significant predictors of both depression and anxiety. Social support moderated the effect of indigenous microaggression on depression (b=-0.22), but not anxiety. All covariates were not statistically significant. Implications: Results indicated that racial microaggressions have a significant impact on indigenous LGBTQ people's mental health. Social support plays a crucial role in buffering the negative impact of racial microaggression. To promote indigenous LGBTQ people's wellbeing, it is important to consider how to support them in developing social support network systems.
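The moderation logic of Model 2 can be illustrated outside Mplus with a simple interaction-term regression on simulated data; every value below is hypothetical, constructed only so that social support buffers the microaggression effect, and none of it reproduces the study's estimates.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200  # same size as the study sample; the data themselves are simulated

micro = rng.normal(size=n)     # microaggression exposure (standardised)
support = rng.normal(size=n)   # perceived social support (standardised)
# Outcome generated so that support buffers (moderates) the microaggression effect.
depression = (0.3 * micro - 0.35 * support - 0.2 * micro * support
              + rng.normal(scale=0.5, size=n))

# Regression with an interaction term, mirroring the Model 2 specification.
X = np.column_stack([np.ones(n), micro, support, micro * support])
coef, *_ = np.linalg.lstsq(X, depression, rcond=None)
b_micro, b_support, b_interaction = coef[1], coef[2], coef[3]
```

A negative interaction coefficient is exactly the buffering pattern the abstract reports for depression.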

Keywords: microaggressions, intersectionality, indigenous population, mental health, social support

Procedia PDF Downloads 146
587 Off-Line Text-Independent Arabic Writer Identification Using Optimum Codebooks

Authors: Ahmed Abdullah Ahmed

Abstract:

The task of recognizing the writer of a handwritten text has been an attractive research problem in the document analysis and recognition community, with applications in handwriting forensics, paleography, document examination and handwriting recognition. This research presents an automatic method for writer recognition from digitized images of unconstrained writings. Although a great effort has been made by previous studies to come up with various methods, their performances, especially in terms of accuracy, fall short, and room for improvement is still wide open. The proposed technique employs optimal codebook-based writer characterization, where each writing sample is represented by a set of features computed from two codebooks, beginning and ending. Unlike most of the classical codebook-based approaches, which segment the writing into graphemes, this study is based on fragmenting particular areas of writing, namely the beginning and ending strokes. The proposed method starts with contour detection to extract significant information from the handwriting; curve fragmentation is then employed to divide the beginning and ending zones of the handwriting into small fragments. Similar fragments of beginning strokes are grouped together to create the beginning cluster, and similarly, the ending strokes are grouped to create the ending cluster. These two clusters lead to the development of two codebooks (beginning and ending) by choosing the center of every group of similar fragments. Writings under study are then represented by computing the probability of occurrence of codebook patterns. The probability distribution is used to characterize each writer. Two writings are then compared by computing distances between their respective probability distributions. The evaluations were carried out on the standard ICFHR dataset of 206 writers using the beginning and ending codebooks separately.
Finally, the ending codebook achieved the highest identification rate of 98.23%, which is the best result so far on the ICFHR dataset.
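The representation and comparison steps can be sketched as follows, with hypothetical fragment patterns standing in for the codebook entries (the actual codebooks are built from clustered stroke fragments, not named shapes):

```python
from collections import Counter

def codebook_profile(fragments, codebook):
    """Probability of occurrence of each codebook pattern in a writing sample."""
    counts = Counter(fragments)
    total = sum(counts[p] for p in codebook) or 1
    return [counts[p] / total for p in codebook]

def manhattan(p, q):
    """One simple distance between two probability distributions."""
    return sum(abs(x - y) for x, y in zip(p, q))

codebook = ["hook", "loop", "cusp"]          # hypothetical fragment patterns
writer_a = ["hook", "hook", "loop", "cusp"]  # known sample, writer A
writer_b = ["loop", "loop", "loop", "cusp"]  # known sample, writer B
query = ["hook", "loop", "hook", "cusp"]     # questioned writing

q_profile = codebook_profile(query, codebook)
d_a = manhattan(q_profile, codebook_profile(writer_a, codebook))
d_b = manhattan(q_profile, codebook_profile(writer_b, codebook))
```

The questioned writing is attributed to the writer whose profile lies at the smaller distance (here, writer A).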

Keywords: off-line text-independent writer identification, feature extraction, codebook, fragments

Procedia PDF Downloads 512
586 Elevated Creatinine Clearance and Normal Glomerular Filtration Rate in Patients with Systemic Lupus Erythematosus

Authors: Stoyanka Vladeva, Elena Kirilova, Nikola Kirilov

Abstract:

Background: The creatinine clearance is a widely used value to estimate the GFR. Increased creatinine clearance is often called hyperfiltration and is usually seen during pregnancy and in patients with diabetes mellitus preceding diabetic nephropathy. It may also occur with large dietary protein intake or with plasma volume expansion. Renal injury in lupus nephritis is known to affect the glomerular, tubulointerstitial, and vascular compartments. However, high creatinine clearance has not been reported in patients with SLE. Target: Follow-up of creatinine clearance values in patients with systemic lupus erythematosus without a history of kidney injury. Material and methods: We observed the creatinine, creatinine clearance, GFR and dipstick protein values of 7 women (with a mean age of 42.71 years) with systemic lupus erythematosus. Patients with active lupus were tested monthly over a period of 13 months. Creatinine clearance was estimated by the Cockcroft-Gault equation in ml/sec. GFR was estimated by the MDRD (Modification of Diet in Renal Disease) formula in ml/min/1.73 m2. Proteinuria was defined as present when dipstick protein > 1+. Results: In all patients without a history of kidney injury, we found elevated creatinine clearance levels, but GFR remained within the reference range. Two of the patients were in remission, while the other five patients had clinically and immunologically active lupus. Three of the patients had a permanent presence of high creatinine clearance levels and proteinuria. Two of the patients had periodically elevated creatinine clearance without proteinuria. These results show that kidney disturbances may be caused by the vascular changes typical for SLE. Glomerular hyperfiltration can be a result of focal segmental glomerulosclerosis caused by a reduction in renal mass. Probably lupus nephropathy is preceded not only by glomerular vascular changes, but also by tubular vascular changes.
Using only the GFR is not a sufficient method to detect these primary functional disturbances. Conclusion: For early detection of kidney injury in patients with SLE, we determined that the follow-up of creatinine clearance values could be helpful.
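For reference, the two estimating equations named in the methods can be sketched directly (Cockcroft-Gault in mL/min, divided by 60 for the mL/sec used in the study; 4-variable MDRD in mL/min/1.73 m²). The patient values below are hypothetical, chosen only to resemble the cohort:

```python
def cockcroft_gault(age, weight_kg, scr_mg_dl, female):
    """Creatinine clearance in mL/min; divide by 60 for mL/sec as in the study."""
    crcl = (140 - age) * weight_kg / (72 * scr_mg_dl)
    return crcl * 0.85 if female else crcl

def mdrd_gfr(age, scr_mg_dl, female, black=False):
    """4-variable MDRD study equation; eGFR in mL/min/1.73 m^2."""
    gfr = 175.0 * scr_mg_dl ** -1.154 * age ** -0.203
    if female:
        gfr *= 0.742
    if black:
        gfr *= 1.212
    return gfr

# Hypothetical patient resembling the cohort (woman, ~43 years, low-normal creatinine).
crcl = cockcroft_gault(age=43, weight_kg=70, scr_mg_dl=0.6, female=True)
gfr = mdrd_gfr(age=43, scr_mg_dl=0.6, female=True)
```

For this hypothetical patient the Cockcroft-Gault clearance is elevated while the MDRD eGFR stays within the reference range, mirroring the dissociation the abstract reports.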

Keywords: systemic lupus erythematosus, kidney injury, elevated creatinine clearance level, normal glomerular filtration rate

Procedia PDF Downloads 270
585 Corporate Performance and Balance Sheet Indicators: Evidence from Indian Manufacturing Companies

Authors: Hussain Bohra, Pradyuman Sharma

Abstract:

This study highlights the significance of balance sheet indicators for corporate performance in the case of Indian manufacturing companies. Balance sheet indicators show the actual financial health of a company; they help external investors to choose the right company for their investment and also help external financing agencies to extend finance to manufacturing companies more readily. The period of study is 2000 to 2014, covering 813 manufacturing companies for which continuous data are available throughout the study period. The data are collected from the PROWESS database maintained by the Centre for Monitoring Indian Economy Pvt. Ltd. Panel data methods, such as fixed effect and random effect methods, are used for the analysis. The Likelihood Ratio test, Lagrange Multiplier test and Hausman test results support the suitability of the fixed effect model for the estimation. Return on assets (ROA) is used as the proxy to measure corporate performance. ROA is a suitable proxy to measure corporate performance as it has been used by most of the authors who have worked on corporate performance; it shows the return on the long-term investment projects of firms. Different ratios, such as the current ratio, debt-equity ratio, receivable turnover ratio and solvency ratio, have been used as proxies for the balance sheet indicators. Other firm-specific variables, such as firm size and sales, serve as control variables in the model. From the empirical analysis, it was found that all selected financial ratios have a significant and positive impact on corporate performance. Firm sales and firm size were also found to have a significant and positive impact on corporate performance.
To check the robustness of the results, the sample was divided on the basis of the selected ratios: firms having a high versus a low debt-equity ratio, firms having a high versus a low current ratio, firms having high versus low receivable turnover, and firms having a high versus a low solvency ratio. We find that the results are robust across all types of companies with different forms of the selected balance sheet indicator ratios. The results for the other variables are also in the same line as for the whole sample. These findings confirm that balance sheet indicators play a significant role in corporate performance in India. The findings of this study have implications for corporate managers, who should focus on these ratios to maintain the minimum expected level of performance. Apart from that, they should also maintain adequate sales and total assets to improve corporate performance.
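The fixed-effects (within) estimator the Hausman test selects can be sketched on simulated panel data; every number below is hypothetical and only illustrates why demeaning by firm removes the bias that unobserved firm heterogeneity induces in pooled OLS.

```python
import numpy as np

rng = np.random.default_rng(1)
n_firms, n_years = 50, 15
firm_effect = rng.normal(size=n_firms)              # unobserved firm heterogeneity

# Hypothetical panel: ROA depends on one balance-sheet ratio plus the firm effect.
ratio = rng.normal(size=(n_firms, n_years)) + firm_effect[:, None]
roa = 0.5 * ratio + firm_effect[:, None] + rng.normal(scale=0.3, size=(n_firms, n_years))

# Fixed-effects (within) estimator: demean each firm's series, then pool OLS.
x = (ratio - ratio.mean(axis=1, keepdims=True)).ravel()
y = (roa - roa.mean(axis=1, keepdims=True)).ravel()
beta_fe = (x @ y) / (x @ x)

# Naive pooled OLS is biased upward because the firm effect enters both sides.
xp = ratio.ravel() - ratio.mean()
yp = roa.ravel() - roa.mean()
beta_pooled = (xp @ yp) / (xp @ xp)
```

The within estimator recovers the true slope (0.5 here), while the pooled estimate absorbs the firm effect and overstates it.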

Keywords: balance sheet, corporate performance, current ratio, panel data method

Procedia PDF Downloads 264
584 Analytical Study of the Structural Response to Near-Field Earthquakes

Authors: Isidro Perez, Maryam Nazari

Abstract:

Numerous earthquakes, which have taken place across the world, have led to catastrophic damage and collapse of structures (e.g., the 1971 San Fernando, 1995 Kobe (Japan), and 2010 Chile earthquakes). Engineers are constantly studying methods to moderate the effect this phenomenon has on structures to further reduce damage and costs, and ultimately to provide life safety to occupants. However, there are regions where structures, cities, or water reservoirs are built near fault lines. When an earthquake occurs near the fault lines, it can be categorized as a near-field earthquake. In contrast, a far-field earthquake occurs when the region is farther away from the seismic source. A near-field earthquake generally has a higher initial peak, resulting in a larger seismic response when compared to a far-field earthquake ground motion. These larger responses may result in serious structural damage, which can pose a high risk to the public's safety. Unfortunately, the response of structures subjected to near-field records is not properly reflected in the current building design specifications. For example, in ASCE 7-10, the design response spectrum is mostly based on far-field design-level earthquakes. This may result in catastrophic damage to structures that are not properly designed for near-field earthquakes. This research investigates the effect that near-field earthquakes have on the response of structures. To fully examine this topic, a structure was designed following the current seismic building design specifications, e.g., ASCE 7-10 and ACI 318-14, and analytically modeled utilizing the SAP2000 software. Next, utilizing the FEMA P695 report, several near-field and far-field earthquakes were selected, and the near-field earthquake records were scaled to represent the design-level ground motions. Upon doing this, the prototype structural model, created using SAP2000, was subjected to the scaled ground motions.
A linear time history analysis and a pushover analysis were conducted in SAP2000 for evaluation of the structural seismic responses. On average, the structure experienced an 8% and 1% increase in story drift and absolute acceleration, respectively, when subjected to the near-field earthquake ground motions. The pushover analysis was run to aid in properly defining the hinge formation in the structure when conducting the nonlinear time history analysis. A near-field ground motion is characterized by a high-energy pulse, making it unique among earthquake ground motions. Therefore, pulse extraction methods were used in this research to estimate the maximum response of structures subjected to near-field motions. The results will be utilized in the generation of a design spectrum for the estimation of design forces for buildings subjected to near-field ground motions.
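The linear time-history step can be illustrated with a single-degree-of-freedom oscillator integrated by the Newmark average-acceleration method, driven by a hypothetical long-period pulse of the kind that characterises near-field records (the record and periods below are illustrative, not the study's):

```python
import math

def sdof_peak_disp(accel, dt, period, damping=0.05):
    """Peak displacement of a damped single-degree-of-freedom oscillator under
    base acceleration, integrated with the Newmark average-acceleration method."""
    w = 2 * math.pi / period
    m, c, k = 1.0, 2 * damping * w, w * w           # unit mass
    keff = k + 2 * c / dt + 4 * m / dt ** 2
    u = v = peak = 0.0
    a = -accel[0]                                   # equilibrium at t = 0
    for ag in accel[1:]:
        rhs = -m * ag + m * (4 * u / dt ** 2 + 4 * v / dt + a) + c * (2 * u / dt + v)
        u_new = rhs / keff
        v_new = 2 * (u_new - u) / dt - v
        a = 4 * (u_new - u) / dt ** 2 - 4 * v / dt - a
        u, v = u_new, v_new
        peak = max(peak, abs(u))
    return peak

dt, n_steps = 0.01, 1500
# Hypothetical near-field-like record: one long-period (1 s) acceleration pulse.
accel = [math.sin(2 * math.pi * i * dt) if i * dt < 1.0 else 0.0 for i in range(n_steps)]

peak_flexible = sdof_peak_disp(accel, dt, period=1.0)   # period matches the pulse
peak_stiff = sdof_peak_disp(accel, dt, period=0.2)      # quasi-static response
```

A flexible structure whose period matches the pulse responds far more strongly than a stiff one, which is the amplification effect near-field pulses produce.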

Keywords: near-field, pulse, pushover, time-history

Procedia PDF Downloads 146
583 Restricted Boltzmann Machines and Deep Belief Nets for Market Basket Analysis: Statistical Performance and Managerial Implications

Authors: H. Hruschka

Abstract:

This paper presents the first comparison of the performance of the restricted Boltzmann machine and the deep belief net on binary market basket data relative to binary factor analysis and the two best-known topic models, namely latent Dirichlet allocation and the correlated topic model. This comparison shows that the restricted Boltzmann machine and the deep belief net are superior to both binary factor analysis and topic models. Managerial implications that differ between the investigated models are treated as well. The restricted Boltzmann machine is defined as a joint Boltzmann distribution of hidden variables and observed variables (purchases). It comprises one layer of observed variables and one layer of hidden variables. Note that variables of the same layer are not connected. The comparison also includes deep belief nets with three layers. The first layer is a restricted Boltzmann machine based on category purchases. Hidden variables of the first layer are used as input variables by the second-layer restricted Boltzmann machine, which then generates second-layer hidden variables. Finally, in the third layer, hidden variables are related to purchases. A public data set is analyzed which contains one month of real-world point-of-sale transactions in a typical local grocery outlet. It consists of 9,835 market baskets referring to 169 product categories. This data set is randomly split into two halves. One half is used for estimation, the other serves as holdout data. Each model is evaluated by the log likelihood for the holdout data. Performance of the topic models is disappointing, as the holdout log likelihood of the correlated topic model (which is better than latent Dirichlet allocation) is lower by more than 25,000 compared to the best binary factor analysis model. On the other hand, binary factor analysis on its own is clearly surpassed by both the restricted Boltzmann machine and the deep belief net, whose holdout log likelihoods are higher by more than 23,000.
Overall, the deep belief net performs best. We also interpret hidden variables discovered by binary factor analysis, the restricted Boltzmann machine and the deep belief net. Hidden variables characterized by the product categories to which they are related differ strongly between these three models. To derive managerial implications we assess the effect of promoting each category on total basket size, i.e., the number of purchased product categories, due to each category's interdependence with all the other categories. The investigated models lead to very different implications as they disagree about which categories are associated with higher basket size increases due to a promotion. Of course, recommendations based on better performing models should be preferred. The impressive performance advantages of the restricted Boltzmann machine and the deep belief net suggest continuing research by appropriate extensions. To include predictors, especially marketing variables such as price, seems to be an obvious next step. It might also be feasible to take a more detailed perspective by considering purchases of brands instead of purchases of product categories.
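A minimal sketch of a Bernoulli restricted Boltzmann machine trained with one-step contrastive divergence (CD-1) on toy binary baskets; the data, network size, and hyperparameters are illustrative only and much smaller than the paper's setting.

```python
import numpy as np

rng = np.random.default_rng(42)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy binary market baskets over 4 categories: {0,1} and {2,3} co-occur.
data = np.array([[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 1, 1], [0, 0, 1, 1]], dtype=float)

n_visible, n_hidden, lr = 4, 2, 0.1
W = 0.01 * rng.normal(size=(n_visible, n_hidden))   # visible-hidden weights
b_v, b_h = np.zeros(n_visible), np.zeros(n_hidden)  # biases

for _ in range(1000):                               # CD-1 training loop
    ph = sigmoid(data @ W + b_h)                    # hidden probabilities | data
    h = (rng.random(ph.shape) < ph).astype(float)   # sample hidden layer
    pv = sigmoid(h @ W.T + b_v)                     # reconstruct visibles
    ph2 = sigmoid(pv @ W + b_h)                     # hidden | reconstruction
    W += lr * (data.T @ ph - pv.T @ ph2) / len(data)
    b_v += lr * (data - pv).mean(axis=0)
    b_h += lr * (ph - ph2).mean(axis=0)

# Mean-field reconstruction of the training baskets after learning.
recon = sigmoid(sigmoid(data @ W + b_h) @ W.T + b_v)
```

Stacking such machines, with each layer's hidden probabilities feeding the next, gives the three-layer deep belief net architecture described above.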

Keywords: binary factor analysis, deep belief net, market basket analysis, restricted Boltzmann machine, topic models

Procedia PDF Downloads 199
582 Magnitude of Meconium Stained Amniotic Fluid and Associated Factors among Women Who Gave Birth in North Shoa Zone Hospitals, Amhara Region, Ethiopia, 2022

Authors: Mitiku Tefera

Abstract:

Background: Meconium-stained amniotic fluid is one of the primary causes of birth asphyxia. Each year, over five million neonatal deaths occur worldwide due to meconium-stained amniotic fluid, with 90% of these deaths due to birth asphyxia. In Ethiopia, meconium-stained amniotic fluid is under-investigated, specifically in the North Shoa Zone of the Amhara region. Objective: The aim of this study was to assess the magnitude of meconium-stained amniotic fluid and associated factors among women who gave birth in the North Shoa Zone Hospitals, Amhara Region, Ethiopia, in 2022. Methods: An institution-based, cross-sectional study was conducted among 628 women who gave birth at North Shoa Zone Hospitals, Amhara, Ethiopia. The study was conducted from 08 June to 08 August 2022. Two-stage cluster sampling was used to recruit study participants. The data were collected using a structured interviewer-administered questionnaire and chart review. The collected data were entered into Epi-Data Version 4.6 and exported to SPSS Version 25. Logistic regression was employed, and a p-value <0.05 was considered significant. Result: The magnitude of meconium-stained amniotic fluid was 30.3%. Women presenting with a normal hematocrit level were 83% less likely to develop meconium-stained amniotic fluid. A mid-upper arm circumference value of less than 22.9 cm (AOR=1.9; 95% CI: 1.18-3.20), obstructed labor (AOR=3.6; 95% CI: 1.48-8.83), prolonged labor ≥ 15 hr (AOR=7.5; 95% CI: 7.68-13.3), premature rupture of the membranes (AOR=1.7; 95% CI: 3.22-7.40), fetal tachycardia (AOR=6.2; 95% CI: 2.41-16.3) and bradycardia (AOR=3.1; 95% CI: 1.93-5.28) showed a significant association with meconium-stained amniotic fluid. Conclusion: The magnitude of meconium-stained amniotic fluid was high. In this study, a MUAC value <22.9 cm, obstructed and prolonged labor, PROM, bradycardia, and tachycardia were factors associated with meconium-stained amniotic fluid.
A follow-up study and pooling of similar articles are recommended for better evidence, along with enhancing intrapartum services and strengthening early detection of meconium-stained amniotic fluid for the health of the mother and baby.
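The adjusted odds ratios above come from logistic regression; for a single factor, the crude odds ratio and its Wald 95% confidence interval can be sketched from a 2x2 table. The counts below are hypothetical, not the study's.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table: a = exposed cases,
    b = exposed non-cases, c = unexposed cases, d = unexposed non-cases."""
    odds = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(odds) - z * se)
    hi = math.exp(math.log(odds) + z * se)
    return odds, lo, hi

# Hypothetical counts for one factor (e.g., obstructed labor) vs the outcome.
odds, lo, hi = odds_ratio_ci(30, 20, 160, 418)
```

A confidence interval lying entirely above 1 indicates a statistically significant association, the criterion applied to each factor in the Results.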

Keywords: women, meconium-stained amniotic fluid, magnitude, Ethiopia

Procedia PDF Downloads 128
581 Bioaccumulation and Forensic Relevance of Gunshot Residue in Forensically Relevant Blowflies

Authors: Michaela Storen, Michelle Harvey, Xavier Conlan

Abstract:

Gun violence internationally is increasing at an unprecedented level, becoming a favoured means of executing violence against another individual. Not only is this putting a strain on forensic scientists who attempt to determine the cause of death in circumstances where firearms have been involved in the death of an individual, but it also highlights the need for an alternative technique for identification of a gunshot wound when other established techniques have been exhausted. A corpse may be colonized by necrophagous insects following death, and this close association between the time of death and insect colonization makes entomological samples valuable evidence when remains become decomposed beyond toxicological utility. Entomotoxicology provides the potential for the identification of toxins in a decomposing corpse, with recent research uncovering the capabilities of entomotoxicology to detect gunshot residue (GSR) in a corpse. However, shortcomings of the limited literature available on this topic have not been addressed, with the bioaccumulation, detection limits, and sensitivity to gunshots not considered thus far, leaving questions as to the applicability of this new technique in the forensic context. Larvae were placed on meat contaminated with GSR at different concentrations and compared to a control meat sample to establish the uptake of GSR by the larvae, with bioaccumulation established by placing the larvae on fresh, uncontaminated meat for a period of time before analysis using ICP-MS. The findings of Pb, Ba, and Sb at each stage of the lifecycle, and bioaccumulation in the larvae, will be presented. In addition, throughout these previously mentioned experiments, larvae were washed once, twice and three times to evaluate the effectiveness of existing entomological practices in removing external toxins from specimens prior to entomotoxicological analysis. Analysis of these larval washes will be presented.
By addressing these points, this research extends the utility of entomotoxicology in cause-of-death investigations and provides an additional source of evidence for forensic scientists in circumstances involving a gunshot wound on a corpse, in addition to advising on the effectiveness of current entomology collection protocols.

Keywords: bioaccumulation, chemistry, entomology, gunshot residue, toxicology

Procedia PDF Downloads 81
580 A Comprehensive Methodology for Voice Segmentation of Large Sets of Speech Files Recorded in Naturalistic Environments

Authors: Ana Londral, Burcu Demiray, Marcus Cheetham

Abstract:

Speech recording is a methodology used in many different studies related to cognitive and behaviour research. Modern advances in digital equipment brought the possibility of continuously recording hours of speech in naturalistic environments and building rich sets of sound files. Speech analysis can then extract from these files multiple features for different scopes of research in language and communication. However, tools for analysing a large set of sound files and automatically extracting relevant features from these files are often inaccessible to researchers who are not familiar with programming languages. Manual analysis is a common alternative, with a high cost in time and efficiency. In the analysis of long sound files, the first step is voice segmentation, i.e. to detect and label segments containing speech. We present a comprehensive methodology aiming to support researchers in voice segmentation, as the first step of data analysis for a big set of sound files. Praat, an open source software, is suggested as a tool to run a voice detection algorithm, label segments and files, and extract other quantitative features on a structure of folders containing a large number of sound files. We present the validation of our methodology with a set of 5000 sound files that were collected in the daily life of a group of voluntary participants aged over 65. A smartphone device was used to collect sound using the Electronically Activated Recorder (EAR): an app programmed to record 30-second sound samples that were randomly distributed throughout the day. Results demonstrated that automatic segmentation and labelling of files containing speech segments was 74% faster when compared to a manual analysis performed with two independent coders. Furthermore, the methodology presented allows manual adjustments of voiced segments with visualisation of the sound signal and the automatic extraction of quantitative information on speech.
In conclusion, we propose a comprehensive methodology for voice segmentation, to be used by researchers who have to work with large sets of sound files and are not familiar with programming tools.
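The voice-detection step that Praat performs can be sketched with a simple intensity-based detector; the frame-energy threshold and the synthetic signal below are illustrative assumptions, not the study's algorithm.

```python
import math

def energy_vad(samples, frame_len=160, threshold_ratio=0.5):
    """Mark a frame as voiced when its RMS energy exceeds a fraction of the
    whole signal's RMS; a crude stand-in for an intensity-based detector."""
    frames = [samples[i:i + frame_len]
              for i in range(0, len(samples) - frame_len + 1, frame_len)]
    overall = math.sqrt(sum(s * s for s in samples) / len(samples))
    return [math.sqrt(sum(s * s for s in f) / frame_len) > threshold_ratio * overall
            for f in frames]

# Synthetic 1 s signal at 8 kHz: silence, then a 200 Hz tone, then silence.
sr = 8000
signal = ([0.0] * 2000
          + [math.sin(2 * math.pi * 200 * i / sr) for i in range(4000)]
          + [0.0] * 2000)
labels = energy_vad(signal)
```

Runs of consecutive voiced frames correspond to the labelled speech segments that a coder would otherwise mark by hand.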

Keywords: automatic speech analysis, behavior analysis, naturalistic environments, voice segmentation

Procedia PDF Downloads 281
579 The Effect of Primary Treatment on Histopathological Patterns and Choice of Neck Dissection in Regional Failure of Nasopharyngeal Carcinoma Patients

Authors: Ralene Sim, Stefan Mueller, N. Gopalakrishna Iyer, Ngian Chye Tan, Khee Chee Soo, R. Shetty Mahalakshmi, Hiang Khoon Tan

Abstract:

Background: Regional failure in nasopharyngeal carcinoma (NPC) is managed by salvage treatment in the form of neck dissection. Radical neck dissection (RND) is preferred over modified radical neck dissection (MRND) since it is traditionally believed to offer better long-term disease control. However, with the advent of more advanced imaging modalities like high-resolution Magnetic Resonance Imaging, Computed Tomography, and Positron Emission Tomography-CT scans, earlier detection is achieved. Additionally, concurrent chemotherapy also contributes to reduced tumour burden. Hence, there may be less need for an RND and a greater role for MRND. With this retrospective study, the primary aim is to ascertain whether MRND, as opposed to RND, has similar outcomes and hence, whether there would be more grounds to offer a less aggressive procedure to achieve lower patient morbidity. Methods: This is a retrospective study of 66 NPC patients treated at Singapore General Hospital between 1994 and 2016 for histologically proven regional recurrence, of which 41 patients underwent RND and 25 underwent MRND, based on surgeon preference. The type of ND performed, primary treatment mode, adjuvant treatment, and pattern of recurrence were reviewed. Overall survival (OS) was calculated using the Kaplan-Meier estimate and compared. Results: Overall, disease parameters such as nodal involvement and extranodal extension were comparable between the two groups. Comparing MRND and RND, the median (IQR) OS was 1.76 (0.58 to 3.49) and 2.41 (0.78 to 4.11), respectively. However, the p-value found was 0.5301 and hence not statistically significant. Conclusion: RND is more aggressive and has been associated with greater morbidity. Hence, with similar outcomes, MRND could be an alternative salvage procedure for regional failure in selected NPC patients, allowing similar salvage rates with lower mortality and morbidity.
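The Kaplan-Meier estimate used for overall survival can be sketched directly; the follow-up times below are hypothetical and only illustrate how censored observations enter the calculation.

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival curve; events[i] is 1 for death, 0 for censoring.
    Assumes distinct event times for brevity."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk, surv, curve = len(times), 1.0, []
    for i in order:
        if events[i]:
            surv *= (at_risk - 1) / at_risk
            curve.append((times[i], surv))
        at_risk -= 1
    return curve

# Hypothetical follow-up times; 0 marks a censored observation.
times = [0.5, 1.0, 1.5, 2.0, 3.0, 4.0]
events = [1, 1, 0, 1, 0, 1]
curve = kaplan_meier(times, events)
```

Censored patients leave the risk set without stepping the curve down, which is why the estimate differs from a naive proportion of survivors.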

Keywords: nasopharyngeal carcinoma, neck dissection, modified neck dissection, radical neck dissection

Procedia PDF Downloads 170
578 Scenario of Some Minerals and Impact of Promoter Hypermethylation of DAP-K Gene in Gastric Carcinoma Patients of Kashmir Valley

Authors: Showkat Ahmad Bhat, Iqra Reyaz, Falaque ul Afshan, Ahmad Arif Reshi, Muneeb U. Rehman, Manzoor R. Mir, Sabhiya Majid, Sonallah, Sheikh Bilal, Ishraq Hussain

Abstract:

Background: Gastric cancer is the fourth most common cancer and the second leading cause of cancer-related deaths worldwide, with a wide variation in incidence rates across different geographical areas. The current view of cancer is that a malignancy arises from a transformation of the genetic material of a normal cell, followed by successive mutations and a chain of alterations in genes such as DNA repair genes, oncogenes, and tumor suppressor genes. Minerals are necessary for the functioning of several transcriptional factors, proteins that recognize certain DNA sequences, and have been found to play a role in gastric cancer. Materials and Methods: The present work was a case-control study, and its aim was to ascertain the role of minerals and promoter hypermethylation of CpG islands of the DAP-K gene in gastric cancer patients among the Kashmiri population. Serum was extracted from all the samples, and mineral estimation was done from serum by AAS; DNA was also extracted and modified using a bisulphite modification kit. Methylation-specific PCR was used for the analysis of the promoter hypermethylation status of the DAP-K gene. The epigenetic analysis revealed that, unlike other high-risk regions, the Kashmiri population has a different promoter hypermethylation profile of the DAP-K gene and a different mineral profile. Results: In our study, mean serum copper levels were significantly different for the two genders (p<0.05), while no significant differences were observed for iron and zinc levels. In methylation-specific PCR, the methylation status of the promoter region of the DAP-K gene was as follows: 67.50% (27/40) of the gastric cancer tissues showed a methylated DAP-K promoter, while 32.50% (13/40) of the cases showed an unmethylated DAP-K promoter. Almost all, 85% (17/20), of the histopathologically confirmed normal tissues showed an unmethylated DAP-K promoter, except for only 3 cases where the DAP-K promoter was found to be methylated.
The association of promoter hypermethylation with gastric cancer was evaluated by the χ2 (chi-square) test and was found to be significant (P=0.0006). The occurrence of DAP-K methylation was found to be unequally distributed between males and females, with a higher frequency in males than in females, but the difference was not statistically significant (P=0.7635, odds ratio=1.368, 95% C.I.=0.4197 to 4.456). When the frequency of DAP-K promoter methylation was compared with the clinical staging of the disease, DAP-K promoter methylation was found to be considerably higher in Stage III/IV (85.71%) compared to Stage I/II (57.69%), but the difference was not statistically significant (P=0.0673). These results suggest that aberrant DAP-K promoter hypermethylation in the Kashmiri population contributes to the process of carcinogenesis in gastric cancer and is reportedly one of the commonest epigenetic changes in the development of gastric cancer.
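The χ² test of association can be reproduced from the reported counts (27/40 methylated tumours vs 3/20 methylated normals). The sketch below uses the uncorrected Pearson statistic, so its p-value differs somewhat from the abstract's P=0.0006, which presumably reflects a continuity-corrected or exact test.

```python
import math

def chi_square_2x2(a, b, c, d):
    """Pearson chi-square (df=1, no continuity correction) for a 2x2 table,
    with the p-value from the chi-square survival function."""
    n = a + b + c + d
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    p = math.erfc(math.sqrt(chi2 / 2.0))
    return chi2, p

# Counts reported in the abstract: methylated/unmethylated in tumours vs normals.
chi2, p = chi_square_2x2(27, 13, 3, 17)
```

Either way the association is highly significant, consistent with the reported conclusion.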

Keywords: gastric cancer, minerals, AAS, hypermethylation, CpG islands, DAP-K gene

Procedia PDF Downloads 517
577 Principal Well-Being at Hong Kong: A Quantitative Investigation

Authors: Junjun Chen, Yingxiu Li

Abstract:

The occupational well-being of school principals plays a vital role in the pursuit of individual and school wellness and success. However, principals' well-being worldwide is under increasing threat because of the challenging and complex nature of their work and growing demands for school standardisation and accountability. Pressure is particularly acute in the post-pandemic future as principals attempt to deal with the impact of the pandemic on top of more regular demands. This is particularly true in Hong Kong, as school principals are increasingly wedged between unparalleled political, social, and academic responsibilities. Recognizing the semantic breadth of well-being, scholars have not determined a single, mutually agreeable definition but agree that the concept of well-being has multiple dimensions across various disciplines. The multidimensional approach promises more precise assessments of the relationships between well-being and other concepts than the 'affect-only' approach or other single domains for capturing the essence of principal well-being. This multiple-dimension concept of well-being is adopted in this study. This study aimed to understand the situation of principal well-being and its influential drivers with a sample of 670 principals from Hong Kong and Mainland China. An online survey was sent to the participants by the researchers after the outbreak of COVID-19. All participants were well informed about the purposes and procedure of the project and the confidentiality of the data prior to filling in the questionnaire. Confirmatory factor analysis and structural equation modelling, performed with Mplus, were employed to analyse the dataset. The data analysis procedure involved the following three steps. First, descriptive statistics (e.g., mean and standard deviation) were calculated.
Second, confirmatory factor analysis (CFA) with maximum likelihood estimation was used to trim the principal well-being measurement. Third, structural equation modelling (SEM) was employed to test the influential factors of principal well-being. The results indicated that overall principal well-being was above the average mean score. The highest rating given by the principals was to their psychological and social well-being (M = 5.21). This was followed by spiritual (M = 5.14; SD = .77), cognitive (M = 5.14; SD = .77), emotional (M = 4.96; SD = .79), and physical well-being (M = 3.15; SD = .73). Participants ranked their physical well-being the lowest. Moreover, professional autonomy, supervisor and collegial support, school physical conditions, professional networking, and social media showed a significant impact on principal well-being. The findings of this study will potentially enhance not only principal well-being but also the functioning of an individual principal and a school, without sacrificing principal well-being for quality education in the process. This will eventually move one step forward towards a new future: a wellness society, as advocated by the OECD. Importantly, well-being is an inside job that begins with choosing wellness, whilst supports to become a wellness principal are also imperative.
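The first analysis step above (descriptive statistics and ranking of the well-being dimensions) can be sketched in a few lines. This is an illustration only: the dimension names come from the abstract, but the response data and the 7-point scale are hypothetical, and the actual study used Mplus rather than Python.

```python
from statistics import mean, stdev

# Hypothetical Likert-scale responses for two of the abstract's dimensions;
# the real dataset covers 670 principals and five dimensions.
responses = {
    "psychological": [5, 6, 5, 4, 6, 5],
    "physical":      [3, 3, 4, 2, 3, 4],
}

def describe(scores):
    """Return the mean and standard deviation reported in step one."""
    return round(mean(scores), 2), round(stdev(scores), 2)

stats = {dim: describe(s) for dim, s in responses.items()}

# Rank dimensions from highest to lowest mean, as the abstract does.
ranking = sorted(stats, key=lambda d: stats[d][0], reverse=True)
```

With these illustrative data, `ranking` places psychological well-being above physical well-being, mirroring the ordering reported in the study.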

Keywords: well-being, school principals, quantitative, influential factors

Procedia PDF Downloads 83
576 Predicting Factors for Occurrence of Cardiac Arrest in Critical, Emergency and Urgency Patients in an Emergency Department

Authors: Angkrit Phitchayangkoon, Ar-Aishah Dadeh

Abstract:

Background: A key aim of triage is to identify patients at high risk of cardiac arrest because they require intensive monitoring, resuscitation facilities, and early intervention. We aimed to identify predicting factors, such as initial vital signs, serum pH, serum lactate level, initial capillary blood glucose, and the Modified Early Warning Score (MEWS), which affect the occurrence of cardiac arrest in an emergency department (ED). Methods: We conducted a retrospective review of ED patient data from 1 August 2014 to 31 July 2016. Significant variables in univariate analysis were used to create a multivariate analysis. The primary outcome was the differentiation of predicting factors for the occurrence of cardiac arrest between cardiac arrest and non-cardiac arrest patients in the ED. Results: The data of 527 non-trauma patients with Emergency Severity Index (ESI) 1-3 were collected. The factors found to have a significant association (P < 0.05) in the non-cardiac arrest group versus the cardiac arrest group at the ED were systolic BP (mean [IQR] 135 [114,158] vs 120 [90,140] mmHg), oxygen saturation (mean [IQR] 97 [89,98] vs 82.5 [78,95]%), GCS (mean [IQR] 15 [15,15] vs 11.5 [8.8,15]), normal sinus rhythm (mean 59.8 vs 30%), sinus tachycardia (mean 46.7 vs 21.7%), pH (mean [IQR] 7.4 [7.3,7.4] vs 7.2 [7,7.3]), serum lactate (mean [IQR] 2 [1.1,4.2] vs 7 [5,10.8]), and MEWS (mean [IQR] 3 [2,5] vs 5 [3,6]). A multivariate analysis was then performed. After adjusting for multiple factors, ESI level 2 patients were more likely to have cardiac arrest in the ER compared with ESI 1 (odds ratio [OR], 1.66; P < 0.001).
Furthermore, ESI 2 patients were more likely than ESI 1 patients to have cardiovascular disease (OR, 1.89; P = 0.01), heart rate < 55 (OR, 6.83; P = 0.18), SBP < 90 (OR, 3.41; P = 0.006), SpO2 < 94 (OR, 4.76; P = 0.012), sinus tachycardia (OR, 4.32; P = 0.002), lactate > 4 (OR, 10.66; P < 0.001), and MEWS > 4 (OR, 4.86; P = 0.028). These factors remained predictive of cardiac arrest at the ED. Conclusion: The factors related to cardiac arrest in the ED are ESI 1, ESI 2, a diagnosis of cardiovascular disease, SpO2 < 94, lactate > 4, and MEWS > 4. These factors can be used as markers in the event of the simultaneous arrival of many patients and can help identify patients with a tendency to develop cardiac arrest. The hemodynamic status and vital signs of these patients should be closely monitored. Early detection of potentially critical conditions is mandatory so that timely medical intervention is possible.
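The MEWS cut-off of > 4 used above can be made concrete with a scoring sketch. The thresholds below follow one widely used published formulation of MEWS; the abstract does not state which local variant the hospital used, so treat this as illustrative rather than the study's exact scoring table.

```python
def mews(sbp, hr, rr, temp, avpu):
    """Modified Early Warning Score (one common formulation; local
    threshold tables vary between institutions)."""
    score = 0
    # Systolic blood pressure (mmHg)
    if sbp <= 70: score += 3
    elif sbp <= 80: score += 2
    elif sbp <= 100: score += 1
    elif sbp >= 200: score += 2
    # Heart rate (beats/min)
    if hr < 40: score += 2
    elif hr <= 50: score += 1
    elif hr <= 100: pass
    elif hr <= 110: score += 1
    elif hr <= 129: score += 2
    else: score += 3
    # Respiratory rate (breaths/min)
    if rr < 9: score += 2
    elif rr <= 14: pass
    elif rr <= 20: score += 1
    elif rr <= 29: score += 2
    else: score += 3
    # Temperature (degrees C)
    if temp < 35 or temp >= 38.5: score += 2
    # AVPU level of consciousness
    score += {"alert": 0, "voice": 1, "pain": 2, "unresponsive": 3}[avpu]
    return score
```

Under this table, a patient with SBP 88, heart rate 120, respiratory rate 24, normal temperature, and alert consciousness scores 5, i.e. above the study's MEWS > 4 risk threshold.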

Keywords: cardiac arrest, predicting factor, emergency department, emergency patient

Procedia PDF Downloads 159
575 Optimization of Headspace Solid Phase Microextraction (SPME) Technique Coupled with GC-MS for Identification of Volatile Organic Compounds Released by Trogoderma variabile

Authors: Thamer Alshuwaili, Yonglin Ren, Bob Du, Manjree Agarwal

Abstract:

The warehouse beetle, Trogoderma variabile Ballion (Coleoptera: Dermestidae), is a major pest of packaged and processed stored products. The common name 'warehouse beetle' was given by Okumura (1972). This pest has been reported to infest 119 different commodities, and it is distributed throughout the tropical and subtropical parts of the world. It is also difficult to control because of the insect's ability to survive without food for long periods; it can live for years under dry conditions on low-moisture food, and it has developed resistance to many insecticides. The young larvae of this insect can damage seeds, but older larvae prefer to feed on whole grains. The damage caused by these insects ranges between 30% and 70% in storage. T. variabile is the species most responsible for causing significant damage in grain stores worldwide. Trogoderma spp. are a serious problem for cereal grains, and many countries, such as the USA, Australia, China, Kenya, Uganda, and Tanzania, have specific quarantine regulations against possible importation. Grain stocks can also be almost completely destroyed by the massive populations the insect may develop. The purpose of the current research was to optimize conditions for collecting volatile organic compounds from Trogoderma variabile at different life stages using headspace solid phase microextraction (SPME) coupled with gas chromatography-mass spectrometry (GC-MS) and flame ionization detection (FID). Using the SPME technique to extract volatiles from insects is an efficient, straightforward, and nondestructive method. The results of the study show that 15 insects was the optimal number for both larvae and adults. Selection of the number of insects depended on the peak area and the number of peaks. Sixteen hours was the optimal extraction time for larvae, and 8 hours was the optimal extraction time for adults.
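The selection rule described in the abstract, i.e. choosing the condition that maximises the number of peaks and the peak area, can be sketched as a simple ranking. The trial values below are hypothetical placeholders; the abstract reports only the chosen optima (15 insects, 16 h for larvae, 8 h for adults), not the underlying chromatogram numbers.

```python
# Hypothetical GC-MS results: insect count -> (total peak area, number of peaks).
trial_results = {
    5:  (1.2e6, 8),
    10: (2.9e6, 12),
    15: (4.1e6, 15),
    20: (4.0e6, 14),
}

def best_condition(results):
    """Rank candidate conditions first by number of detected peaks,
    then by total peak area, as the abstract's selection criteria suggest."""
    return max(results, key=lambda k: (results[k][1], results[k][0]))

optimal_insects = best_condition(trial_results)
```

The same ranking function could be reused over candidate extraction times (e.g. 4, 8, 16 h) to pick the optimal headspace exposure for each life stage.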

Keywords: Trogoderma variabile, warehouse beetle, GC-MS, solid phase microextraction

Procedia PDF Downloads 129
574 A Mixed Method Approach for Modeling Entry Capacity at Rotary Intersections

Authors: Antonio Pratelli, Lorenzo Brocchini, Reginald Roy Souleyrette

Abstract:

A rotary is a traffic circle intersection where vehicles entering from branches give priority to circulating flow. Vehicles entering the intersection from converging roads move around the central island and weave out of the circle into their desired exiting branch. This creates merging and diverging conflicts between any entry and its successive exit, i.e., a section. Therefore, rotary capacity models are usually based on the weaving of the different movements in any section of the circle, and the maximum rate of flow is then related to each weaving section of the rotary. Nevertheless, the single-section capacity value does not lead to the typical performance characteristics of the intersection, such as the entry average delay, which is directly linked to its level of service. From another point of view, modern roundabout capacity models are based on the limitation of the flow entering from a single entrance due to the amount of flow circulating in front of the entrance itself. Modern roundabout capacity models generally also lead to a performance evaluation. This paper aims to incorporate a modern roundabout capacity model into an old rotary capacity method in order to obtain from the latter the single-entry capacity and ultimately the related performance indicators. Put simply, the main objective is to calculate the average delay of each single roundabout entrance in order to apply the most common Highway Capacity Manual (HCM) criteria. The paper is organized as follows: first, the rotary and roundabout capacity models are sketched, and a brief introduction to the model combination technique is given with some practical instances. The next section summarizes the old TRRL rotary capacity model and the most recent HCM 7th Edition modern roundabout capacity model.
Then, the two models are combined through an iteration-based algorithm, specially set up and linked to the concept of roundabout total capacity, i.e., the value reached under a traffic flow pattern leading to the simultaneous congestion of all roundabout entrances. The solution is the average delay for each entrance of the rotary, from which its respective level of service is estimated. In view of further experimental applications, at this research stage, a collection of existing rotary intersections operating with the priority-to-circle rule has already begun, both in the US and in Italy. The rotaries have been selected by direct inspection of aerial photos through a map viewer, namely Google Earth. Each instance has been recorded by location, urban or rural setting, and its main geometrical patterns. Finally, concluding remarks are drawn, and some further research developments are discussed.
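Once the iteration yields an entry capacity, the average entry delay follows from the HCM-style control delay equation. The sketch below reproduces the published HCM roundabout delay form under the assumption that the entry capacity c has already been obtained from the combined model; coefficients should be checked against the HCM edition actually used.

```python
from math import sqrt

def control_delay(v, c, T=0.25):
    """Average control delay (s/veh) for a roundabout entry.

    v : entry demand flow rate (veh/h)
    c : entry capacity (veh/h), here assumed to come from the combined
        TRRL/HCM iteration described in the paper
    T : analysis period in hours (0.25 = 15 min, the usual HCM default)
    """
    x = v / c  # volume-to-capacity ratio
    return (3600 / c
            + 900 * T * ((x - 1)
                         + sqrt((x - 1) ** 2 + (3600 / c) * x / (450 * T)))
            + 5 * min(x, 1))
```

The resulting delay maps onto a level of service via the HCM thresholds for unsignalized intersections (roughly LOS A at or below 10 s/veh up to LOS F above 50 s/veh), which is exactly the performance indicator the paper seeks for each rotary entrance.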

Keywords: mixed methods, old rotary and modern roundabout capacity models, total capacity algorithm, level of service estimation

Procedia PDF Downloads 86
573 Trade in Value Added: The Case of the Central and Eastern European Countries

Authors: Łukasz Ambroziak

Abstract:

Although the impact of production fragmentation on trade flows has been examined many times since the 1990s, the research was not comprehensive because of the limitations of traditional trade statistics. In the early 2010s, complex databases containing world input-output tables (or indicators calculated on their basis) became available. This increased the possibilities for examining production sharing in the world. Trade statistics in value-added terms enable us to better estimate trade changes resulting from internationalisation and globalisation, as well as the benefits countries derive from international trade. There are many research studies on this topic in the literature. Unfortunately, trade in value added of the Central and Eastern European Countries (CEECs) has so far been insufficiently studied. Thus, the aim of the paper is to present changes in the value added trade of the CEECs (Bulgaria, the Czech Republic, Estonia, Hungary, Latvia, Lithuania, Poland, Romania, Slovakia, and Slovenia) in the period 1995-2011. The concept 'trade in value added' or 'value added trade' is defined as the value added of a country which is directly and indirectly embodied in the final consumption of another country. The typical question would be: 'How much value added is created in a country due to final consumption in other countries?' The data will be downloaded from the World Input-Output Database (WIOD). The structure of this paper is as follows. First, theoretical and methodological aspects related to the application of input-output tables in trade analysis will be studied. Second, a brief survey of the empirical literature on this topic will be presented. Third, changes in the value added exports and imports of the CEECs will be analysed. Special attention will be paid to the differences in bilateral trade balances between traditional trade statistics (in gross terms) on the one side and value added statistics on the other.
Next, in order to identify the factors influencing the value added exports and imports of the CEECs, a generalised gravity model based on panel data will be used. The dependent variables will be value added exports and imports. The independent variables will be, among others, the GDP of the trading partners, their GDP per capita, the differences in GDP per capita, the FDI inward stock, the geographical distance, the existence (or non-existence) of a common border, and membership (or not) in preferential trade agreements or in the EU. For comparison, an estimation will also be made based on exports and imports in gross terms. The initial research results show that the gravity model better explains the determinants of trade in value added than of gross trade (R2 is higher in the former). The independent variables had the same direction of impact on both value added exports/imports and gross exports/imports; only the values of the coefficients differ. The largest difference concerned geographical distance, which had a smaller impact on trade in value added than on gross trade.
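The core of the gravity estimation above is a log-linear regression of trade flows on partners' GDPs and distance. The sketch below fits such a model by ordinary least squares on synthetic data; all numbers and coefficient values are hypothetical, and the paper itself uses panel data from WIOD with additional regressors (FDI stock, common border, EU membership).

```python
import numpy as np

# Synthetic gravity data: value added exports as a log-linear function of
# the partners' GDPs and bilateral distance (hypothetical coefficients).
rng = np.random.default_rng(0)
n = 200
log_gdp_i = rng.uniform(24, 28, n)   # exporter GDP (log)
log_gdp_j = rng.uniform(24, 28, n)   # importer GDP (log)
log_dist = rng.uniform(5, 9, n)      # bilateral distance (log km)
log_vax = (1.0 + 0.8 * log_gdp_i + 0.7 * log_gdp_j - 0.9 * log_dist
           + rng.normal(0, 0.1, n))  # value added exports (log), with noise

# OLS on the log-linear gravity equation.
X = np.column_stack([np.ones(n), log_gdp_i, log_gdp_j, log_dist])
beta, *_ = np.linalg.lstsq(X, log_vax, rcond=None)
```

The recovered `beta` shows the expected gravity pattern: positive GDP elasticities and a negative distance coefficient. Comparing the size of the distance coefficient across value added and gross trade regressions is exactly the comparison the abstract reports.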

Keywords: central and eastern European countries, gravity model, input-output tables, trade in value added

Procedia PDF Downloads 239
572 Bacteriophage Lysis of Physiologically Stressed Listeria monocytogenes in a Simulated Seafood Processing Environment

Authors: Geevika J. Ganegama Arachchi, Steve H. Flint, Lynn McIntyre, Cristina D. Cruz, Beatrice M. Dias-Wanigasekera, Craig Billington, J. Andrew Hudson, Anthony N. Mutukumira

Abstract:

In seafood processing plants, Listeria monocytogenes (L. monocytogenes) likely exists in a metabolically stressed state due to the nutrient-deficient environment; processing treatments such as heating, curing, drying, and freezing; and exposure to detergents and disinfectants. Stressed L. monocytogenes cells have been shown to be as pathogenic as unstressed cells. This study investigated the lytic efficacy of three phages (LiMN4L, LiMN4p, and LiMN17), previously characterized as virulent, against physiologically stressed cells of three seafood-borne L. monocytogenes strains (19CO9, 19DO3, and 19EO3). Physiologically compromised cells of the L. monocytogenes strains were prepared by ageing cultures in Trypticase Soy Broth at 15±1°C for 72 h; heat-injuring cultures at 54±1 - 55±1°C for 40 - 60 min; starving cultures in Milli-Q water at 25±1°C in darkness for three weeks; and salt-stressing cultures in 9% (w/v) NaCl at 15±1°C for 72 h. Low concentrations of the physiologically compromised cells of the three L. monocytogenes strains were challenged in vitro with high titres of the three phages in separate experiments using Fish Broth medium (aqueous fish extract) at 15°C in order to mimic the environment of a seafood processing plant. Each phage, when present at ≈9 log10 PFU/ml, reduced late exponential phase cells of L. monocytogenes suspended in fish protein broth at ≈2-3 log10 CFU/ml to a non-detectable level (< 10 CFU/ml). Each phage, when present at ≈8.5 log10 PFU/ml, reduced both heat-injured cells present at 2.5-3.6 log10 CFU/ml and starved cells, which showed a coccoid shape, present at ≈2-3 log10 CFU/ml to < 10 CFU/ml after 30 min. The phages also reduced salt-stressed cells present at ≈3 log10 CFU/ml by > 2 log10. L. monocytogenes (≈8 log10 CFU/ml) was reduced to below the detection limit (1 CFU/ml) by three successive phage infections over 16 h, indicating that the emergence of spontaneous phage resistance was infrequent.
The three virulent phages showed high decontamination potential for physiologically stressed L. monocytogenes strains from seafood processing environments.
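The log10 reductions quoted above follow a standard calculation, which can be sketched as below. The function and its parameter names are illustrative, not from the study; the handling of counts below the detection limit mirrors the "< 10 CFU/ml" reporting in the abstract.

```python
from math import log10

def log_reduction(initial_cfu_per_ml, final_cfu_per_ml, detection_limit=10):
    """Log10 reduction in viable counts after phage treatment.

    When the surviving count falls below the detection limit, the
    reduction can only be reported as a '>' bound, as in the abstract.
    """
    if final_cfu_per_ml < detection_limit:
        return ">{:.1f}".format(log10(initial_cfu_per_ml / detection_limit))
    return round(log10(initial_cfu_per_ml / final_cfu_per_ml), 1)
```

For example, reducing ≈3 log10 CFU/ml (1,000 CFU/ml) to below 10 CFU/ml can only be reported as a > 2 log10 reduction, which is how the salt-stressed cell results are expressed.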

Keywords: physiologically stressed L. monocytogenes, heat injured, seafood processing environment, virulent phage

Procedia PDF Downloads 135