Search results for: horizontal and vertical deviation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2600

320 Efficacy of Single-Dose Azithromycin Therapy for the Treatment of Chlamydia trachomatis in Patients Evaluated for Child Sexual Abuse in an Urban Health Center, 2006-2016

Authors: Trenton Hubbard, Kenneth Soyemi, Emily Siffermann

Abstract:

Introduction: According to the American Academy of Pediatrics (AAP), there are different weight-based recommendations for the treatment of Chlamydia trachomatis (CT) in patients who are being evaluated for sexual assault. Current AAP Red Book guidelines recommend that uncomplicated C. trachomatis anogenital infection in prepubertal patients weighing ≤45 kg be treated with oral erythromycin 50 mg/kg/day QID for 14 days, with no alternative therapies, and that patients weighing ≥45 kg receive azithromycin 1 g PO once. Our study objective was to determine the efficacy of single-dose azithromycin therapy for the treatment of Chlamydia trachomatis in patients weighing less than 50 kg who were evaluated for child sexual abuse in an urban setting. Methods: We conducted a retrospective chart review of historical medical records (paper and electronic) of patients weighing less than 50 kg who were evaluated for child sexual abuse, subsequently treated for C. trachomatis infection with azithromycin (20 mg/kg PO once, up to a maximum of 1 g), and received a Test of Cure (TOC) between 2006 and 2016. Qualitative variables were expressed as percentages. Quantitative variables were expressed as mean values (+/- standard deviation [SD]) if they followed a normal distribution or as median values (interquartile range [IQR]) if they did not. The Wilcoxon two-sample test was used to compare the azithromycin dose (mg/kg) and TOC timing between treatment responders and non-responders. Results: We reviewed the records of 34 patients; the average age (SD) was 5.4 (2.0) years, 33 (97%) were treated for CT and 1 (3%) for both GC and CT, and 25 (74%) were female. Urine PCR was the most commonly used test both at evaluation and as TOC, with 13 (38%) patients completing both tests. The average (SD) azithromycin dose at treatment was 470 (136) mg, and the average (SD) weight-based dose was 20 (1.9) mg/kg for all patients. The median (IQR) timing for TOC testing was 19 (14-26) days.
Of the 33 patients with complete data, 25 (74%) had a negative TOC. Compared with treatment non-responders (TOC failures), treatment responders received higher doses (average (SD) dose 495 (139) mg vs. 401 (110) mg, P = 0.06), similar average (SD) weight-based dosing (20.8 (2.0) vs. 19.7 (1.5) mg/kg, P = 0.15), and earlier average (SD) TOC test timing (18.8 (5.6) vs. 32 (28.6) days, P = 0.02). Conclusion: Azithromycin dosing appears to be efficacious in the treatment of CT after sexual assault, as the majority of patients responded. Although treatment responders and non-responders received similar weight-based doses, additional studies are needed to understand the variances and predictors of response.
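The responder vs. non-responder comparison above uses a Wilcoxon two-sample (rank-sum) test. A minimal sketch with SciPy, using invented weight-based doses; none of the numbers below come from the study:

```python
# Hypothetical illustration of a Wilcoxon rank-sum comparison of
# weight-based doses (mg/kg) between responders and non-responders.
from scipy.stats import ranksums

responders = [21.0, 20.5, 22.1, 19.8, 21.4, 20.9, 20.2]   # synthetic
non_responders = [19.5, 18.9, 20.1, 19.2, 18.7]           # synthetic

stat, p_value = ranksums(responders, non_responders)
print(f"rank-sum statistic = {stat:.3f}, p = {p_value:.4f}")
```

A positive statistic here indicates the responder sample sits higher in the pooled ranking, mirroring the direction of the reported dose difference.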

Keywords: child sexual abuse, chlamydia trachomatis infection, single-dose azithromycin, weight less than or equal to 45 kilograms

Procedia PDF Downloads 278
319 Assessment of Hypersaline Outfalls via Computational Fluid Dynamics Simulations: A Case Study of the Gold Coast Desalination Plant Offshore Multiport Brine Diffuser

Authors: Mitchell J. Baum, Badin Gibbes, Greg Collecutt

Abstract:

This study details a three-dimensional field-scale numerical investigation conducted for the Gold Coast Desalination Plant (GCDP) offshore multiport brine diffuser. Quantitative assessment of diffuser performance with regard to trajectory, dilution and mapping of seafloor concentration distributions was conducted for 100% plant operation. The quasi-steady Computational Fluid Dynamics (CFD) simulations were performed using the Reynolds-averaged Navier-Stokes equations with a k-ω shear stress transport turbulence closure scheme. The study complements a field investigation, which measured brine plume characteristics under similar conditions. CFD models used an iterative mesh in a domain with dimensions 400 m long, 200 m wide and an average depth of 24.2 m. Acoustic Doppler current profiler measurements conducted in the companion field study exhibited considerable variability over the water column. The effect of this vertical variability on simulated discharge outcomes was examined. Seafloor slope was also incorporated into the model. Ambient currents varied predominantly in the longshore direction, perpendicular to the diffuser structure. Under these conditions, the alternating port orientation of the GCDP diffuser resulted in simultaneous subjection to co-propagating and counter-propagating ambient regimes. Results from quiescent ambient simulations suggest broad agreement with the empirical scaling arguments traditionally employed in design and regulatory assessments. Simulated dynamic ambient regimes showed that the influence of ambient crossflow upon jet trajectory, dilution and seafloor concentration is significant. The effect of ambient flow structure and the subsequent influence on jet dynamics is discussed, along with the implications of using these different simulation approaches to inform regulatory decisions.

Keywords: computational fluid dynamics, desalination, field-scale simulation, multiport brine diffuser, negatively buoyant jet

Procedia PDF Downloads 201
318 Applying Big Data Analysis to Efficiently Exploit the Vast Unconventional Tight Oil Reserves

Authors: Shengnan Chen, Shuhua Wang

Abstract:

Successful production of hydrocarbon from unconventional tight oil reserves has changed the energy landscape in North America. The oil contained within these reservoirs typically will not flow to the wellbore at economic rates without assistance from advanced horizontal wells and multi-stage hydraulic fracturing. Efficient and economic development of these reserves is a priority of society, government, and industry, especially under the current low oil prices. Meanwhile, society needs technological and process innovations to enhance oil recovery while concurrently reducing environmental impacts. Recently, big data analysis and artificial intelligence have become very popular, developing data-driven insights for better designs and decisions in various engineering disciplines. However, the application of data mining in petroleum engineering is still in its infancy. The objective of this research is to apply intelligent data analysis and data-driven models to exploit unconventional oil reserves both efficiently and economically. More specifically, a comprehensive database, including the reservoir geological data, reservoir geophysical data, well completion data and production data for thousands of wells, is first established to discover the valuable insights and knowledge related to tight oil reserves development. Several data analysis methods are introduced to analyze such a huge dataset. For example, K-means clustering is used to partition all observations into clusters; principal component analysis is applied to emphasize the variation and bring out strong patterns in the dataset, making the big data easy to explore and visualize; exploratory factor analysis (EFA) is used to identify the complex interrelationships between well completion data and well production data.
Different data mining techniques, such as artificial neural networks, fuzzy logic, and machine learning techniques, are then summarized, and appropriate ones are selected to analyze the database based on prediction accuracy, model robustness, and reproducibility. Advanced knowledge and patterns are finally recognized and integrated into a modified self-adaptive differential evolution optimization workflow to enhance oil recovery and maximize the net present value (NPV) of the unconventional oil resources. This research will advance knowledge in the development of unconventional oil reserves and bridge the gap between big data and performance optimization in these formations. The newly developed data-driven optimization workflow is a powerful approach to guide field operations, leading to better designs, higher oil recovery and greater economic return for future wells in unconventional oil reserves.
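The clustering and dimension-reduction steps named above (K-means partitioning followed by principal component analysis for visualisation) can be sketched as a small scikit-learn pipeline. The well features and counts below are invented placeholders, not the paper's database:

```python
# Sketch of K-means clustering + PCA projection on synthetic "well" data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# 200 hypothetical wells x 6 illustrative features
# (e.g., porosity, stage count, lateral length, proppant mass, ...)
X = rng.normal(size=(200, 6))

X_scaled = StandardScaler().fit_transform(X)        # put features on one scale
clusters = KMeans(n_clusters=3, n_init=10,
                  random_state=0).fit_predict(X_scaled)
X_2d = PCA(n_components=2).fit_transform(X_scaled)  # 2-D view for plotting

print(clusters.shape, X_2d.shape)
```

Standardising before both steps matters because K-means and PCA are scale-sensitive; a raw lateral length in metres would otherwise dominate a porosity fraction.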

Keywords: big data, artificial intelligence, enhance oil recovery, unconventional oil reserves

Procedia PDF Downloads 273
317 Effects of Group Cognitive Restructuring and Rational Emotive Behavioral Therapy on Psychological Distress of Awaiting-Trial Inmates in Correctional Centers in North-West, Nigeria

Authors: Muhammad Shafi’U Adamu

Abstract:

This study examined the effects of two Cognitive Behavioral Therapy (CBT) approaches, Cognitive Restructuring (CR) and Rational Emotive Behavioral Therapy (REBT), on the psychological distress of awaiting-trial inmates in correctional centers in North-West Nigeria. The study had four specific objectives, four research questions, and four null hypotheses. The study used a quasi-experimental design that involved a pre-test and post-test. The population comprised all 7,962 awaiting-trial inmates in correctional centers in North-West Nigeria. 131 awaiting-trial inmates from three intact correctional centers were randomly selected using the census technique. The respondents were sampled and randomly assigned to three groups (CR, REBT, and control). The Kessler Psychological Distress Scale (K10) was adapted for data collection in the study. The instrument was validated by experts and subjected to a pilot study, yielding a Cronbach's alpha reliability coefficient of 0.772. Each group received treatment for 8 consecutive weeks (60 minutes/week). Data collected from the field were subjected to descriptive statistics (mean, standard deviation and mean difference) to answer the research questions. Inferential statistics (ANOVA and the independent-samples t-test) were used to test the null hypotheses at the P ≤ 0.05 level of significance. Results in the study revealed that there was no significant difference among the pre-treatment mean scores of the experimental and control groups. Statistical evidence also showed a significant difference among the mean scores of the three groups, and the results of the post hoc multiple-comparison test indicated a post-treatment reduction of psychological distress in the awaiting-trial inmates. Documented output also showed a significant difference between the post-treatment psychological distress mean scores of male and female awaiting-trial inmates, but there was no difference in those exposed to REBT.
The study recommends that a standardized, structured CBT counseling treatment be designed for correctional centers across Nigeria and that CBT counseling techniques be used in the treatment of psychological distress in both correctional and clinical settings.
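The three-group comparison described above can be sketched as a one-way ANOVA on post-treatment distress scores. All scores below are invented for illustration and are not the study's K10 data:

```python
# One-way ANOVA across three treatment groups (synthetic K10-style scores).
from scipy.stats import f_oneway

cr   = [18, 20, 17, 22, 19, 21, 16]  # synthetic post-treatment scores
rebt = [17, 19, 18, 20, 16, 18, 17]  # synthetic
ctrl = [28, 30, 27, 32, 29, 31, 26]  # synthetic (no treatment)

f_stat, p = f_oneway(cr, rebt, ctrl)
print(f"F = {f_stat:.2f}, p = {p:.4f}")
```

A significant F here would, as in the study, be followed by a post hoc multiple-comparison test to locate which group pairs differ.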

Keywords: awaiting-trial inmates, cognitive restructuring, correctional centers, rational emotive behavioral therapy

Procedia PDF Downloads 61
316 Influence of Farnesol on Growth and Development of Dysdercus koenigii

Authors: Shailendra Kumar, Kamal Kumar Gupta

Abstract:

Dysdercus koenigii is an economically important pest of cotton worldwide. The pest damages the crop by sucking sap, staining lint, reducing the oil content of the seeds, and deteriorating the quality of cotton. Plants possess a plethora of secondary metabolites that are used as a defense mechanism against herbivores. One important category of such chemicals is insect growth regulators and the intermediates in their biosynthesis. Farnesol is a sesquiterpenoid and an intermediate in the juvenile hormone biosynthetic pathway of insects; it has been widely reported in a variety of plants. This chemical can disrupt normal metabolic function and therefore affects various life processes of insects. The present study tested the efficacy of farnesol against Dysdercus koenigii. 2 μl of 5% (100 µg) or 10% (200 µg) farnesol was applied topically on the dorsum of the thoracic region of newly emerged fifth-instar nymphs of Dysdercus. The treated insects were observed daily for survival, weight gain, and developmental anomalies over a period of ten days. The results indicated that treatment with 200 µg farnesol decreased survival of the insects to 70% after 24 h of exposure. At the lower dose, no significant decrease in survival was observed. However, the surviving nymphs showed alterations in growth, development, and metamorphosis. Weight gain in the treated nymphs deviated from the control. The treated nymphs showed increased mortality during subsequent days and an increase in nymphal duration. The number of nymphs undergoing metamorphosis decreased to 46% and 88% with doses of 200 µg and 100 µg, respectively. Severe developmental anomalies were also observed in the treated nymphs, which moulted into supernumerary nymphs, adultoids, adults with attached exuviae, and adults with wing deformities.
On treatment with 200 µg, 26% adultoids, 4% adults with attached exuviae, and 12% adults with deformed wings were produced. Treatment with 100 µg resulted in 34% adultoids, 26% adults with deformed wings, and 4% adults with attached exuviae. Many of the treated nymphs did not metamorphose into adults; they remained in the nymphal stage and died. Our results indicate the potential application of plant-derived secondary metabolites such as farnesol in the management of Dysdercus populations.

Keywords: development, Dysdercus koenigii, farnesol, survival

Procedia PDF Downloads 345
315 Comparison of Iodine Density Quantification through Three Material Decomposition between Philips iQon Dual Layer Spectral CT Scanner and Siemens Somatom Force Dual Source Dual Energy CT Scanner: An in vitro Study

Authors: Jitendra Pratap, Jonathan Sivyer

Abstract:

Introduction: Dual-energy/spectral CT scanning permits simultaneous acquisition of two x-ray spectral datasets and can complement radiological diagnosis by allowing tissue characterisation (e.g., uric acid vs. non-uric acid renal stones), enhancing structures (e.g., boosting the iodine signal to improve contrast resolution), and quantifying substances (e.g., iodine density). However, the latter has shown inconsistent results between the two main modes of dual-energy scanning (i.e., dual source vs. dual layer). Therefore, the present study aimed to determine which technology is more accurate in quantifying iodine density. Methods: Twenty vials with known concentrations of iodine solutions were made using Optiray 350 contrast media diluted in sterile water. The concentrations of iodine ranged from 0.1 mg/ml to 1.0 mg/ml in 0.1 mg/ml increments and from 1.5 mg/ml to 4.5 mg/ml in 0.5 mg/ml increments, followed by further concentrations of 5.0 mg/ml, 7 mg/ml, 10 mg/ml and 15 mg/ml. The vials were scanned using the dual-energy scan mode on a Siemens Somatom Force at 80 kV/Sn150 kV and 100 kV/Sn150 kV kilovoltage pairings. The same vials were scanned using the spectral scan mode on a Philips iQon at 120 kVp and 140 kVp. The images were reconstructed at 5 mm thickness and 5 mm increments using the Br40 kernel on the Siemens Force and the B filter on the Philips iQon. Post-processing was performed on vendor-specific software: Siemens syngo.via (VB40) for the dual-energy data and Philips IntelliSpace Portal (Ver. 12) for the spectral data. For each vial and scan mode, the iodine concentration was measured by placing an ROI in the coronal plane. Intraclass correlation analysis was performed on both datasets. Results: The iodine concentrations were reproduced with a high degree of accuracy by the dual-layer CT scanner. Although the dual-source images showed a greater degree of deviation in measured iodine density for all vials, the dataset acquired at 80 kV/Sn150 kV had higher accuracy.
Conclusion: Spectral CT scanning by the dual layer technique has higher accuracy for quantitative measurements of iodine density compared to the dual source technique.

Keywords: CT, iodine density, spectral, dual-energy

Procedia PDF Downloads 111
314 Risk and Reliability Based Probabilistic Structural Analysis of Railroad Subgrade Using Finite Element Analysis

Authors: Asif Arshid, Ying Huang, Denver Tolliver

Abstract:

The Finite Element (FE) method, coupled with ever-increasing computational power, has substantially advanced the reliability of deterministic three-dimensional structural analyses of structures with uniform material properties. However, a railway trackbed is made up of a diverse group of materials, including steel, wood, rock and soil, and each material has its own varying levels of heterogeneity and imperfection. The application of probabilistic methods to trackbed structural analysis that incorporate material and geometric variabilities remains deeply underexplored. The authors developed and validated a three-dimensional FE-based numerical trackbed model, and in this study they investigated the influence of variability in the Young's modulus and thicknesses of the granular layers (ballast and subgrade) on the reliability index (β-index) of the subgrade layer. The influence of these factors is accounted for by changing their Coefficients of Variation (COV) while keeping their means constant. These variations are modeled using a Gaussian normal distribution. Two failure mechanisms in the subgrade, namely Progressive Shear Failure and Excessive Plastic Deformation, are examined. Preliminary results of the risk-based probabilistic analysis for Progressive Shear Failure revealed that variation in ballast depth is the most influential factor for vertical stress at the top of the subgrade surface. In the case of Excessive Plastic Deformation in the subgrade layer, variations in the subgrade's own depth and Young's modulus proved most important, while ballast properties remained almost indifferent. For both failure modes, it is also observed that the reliability index for subgrade failure increases with the increase in the COV of ballast depth and subgrade Young's modulus.
The findings of this work are of particular significance in studying the combined effect of construction imperfections and variations in ground conditions on the structural performance of railroad trackbeds and in evaluating the associated risk. In addition, the work provides an additional tool to supplement deterministic analysis procedures and decision making for railroad maintenance.
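The reliability index discussed above can be illustrated with the classical first-order formula for a linear limit state g = R - S, where resistance R and load effect S are independent normal variables defined by their means and COVs. The strength and stress values below are hypothetical, not taken from the study:

```python
# Minimal sketch of the reliability (beta) index for g = R - S with
# independent normal R (resistance) and S (load effect).
import math

def reliability_index(mean_R, cov_R, mean_S, cov_S):
    """beta = (mu_R - mu_S) / sqrt(sigma_R^2 + sigma_S^2)."""
    sigma_R = mean_R * cov_R
    sigma_S = mean_S * cov_S
    return (mean_R - mean_S) / math.sqrt(sigma_R ** 2 + sigma_S ** 2)

# e.g., subgrade strength vs. induced vertical stress (hypothetical kPa values)
beta = reliability_index(mean_R=150.0, cov_R=0.20, mean_S=90.0, cov_S=0.15)
print(f"beta = {beta:.2f}")
```

Increasing either COV widens the denominator and lowers beta, which is the mechanism behind the sensitivity trends the abstract reports.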

Keywords: finite element analysis, numerical modeling, probabilistic methods, risk and reliability analysis, subgrade

Procedia PDF Downloads 130
313 Motherhood Constrained: The Minotaur Legend Reimagined Through the Perspective of Marginalized Mothers

Authors: Gevorgianiene Violeta, Sumskiene Egle

Abstract:

Background. Child removal is a profound and life-altering measure that significantly impacts both children and their mothers. Unfortunately, mothers with intellectual disabilities are disproportionately affected by the removal of their children. This action is often taken due to concerns about the mother's perceived inability to care for the child, instances of abuse and neglect, or struggles with addiction. In many cases, the failure to meet society's standards of a "good mother" is seen as a deviation from conventional norms of femininity and motherhood. From an institutional perspective, separating a child from their mother is sometimes viewed as a step toward restoring justice or doing what is considered "right." In another light, this act of child removal can be seen as the removal of a mother from her child, an attempt to shield society from the complexities and fears associated with motherhood for women with disabilities. This separation can be likened to the Greek legend of the Minotaur, a fearsome beast confined within an impenetrable labyrinth. By reimagining this legend, we can see the social fears surrounding 'mothering with intellectual disability' as deeply sealed within an unreachable place. The Aim of this Presentation. Our goal with this presentation is to draw from our research and the metaphors found in the Greek legend to delve into the profound challenges faced by mothers with intellectual disabilities in raising their children. These challenges often become entangled within an insurmountable labyrinth, including navigating complex institutional bureaucracies, enduring persistent doubts cast upon their maternal competencies, battling unfavorable societal narratives, and struggling to retain custody of their children. Coupled with limited social support networks, these challenges frequently lead to situations resulting in maternal failure and, ultimately, child removal. 
On a broader scale, this separation of a child from their mother symbolizes society’s collective avoidance of confronting the issue of 'mothering with disability,' which can only be effectively addressed through united efforts. Conclusion. Just as in the labyrinth of the Minotaur legend, the struggles faced by mothers with disabilities in their pursuit of retaining their children reveal the need for a metaphorical 'string of Ariadne.' This string symbolizes the support offered by social service providers, communities, and the loved ones these women often dream of but rarely encounter in their lives.

Keywords: motherhood, disability, child removal, support

Procedia PDF Downloads 46
312 An Investigative Study on the Use of Online Marketing Methods in Hungary

Authors: E. Happ, Zs. Ivancsone Horvath

Abstract:

With the development of the information technology (IT) sector, every industry in the world is on a new path, dealing with digitalisation. Tourism is the most rapidly growing industry in the world. Without digitalisation, tourism operators would not be competitive enough with foreign destinations or other experience-based service providers. Digitalisation is also necessary to enable organisations interested in tourism to meet the growing expectations of consumers. With the help of digitalisation, tourism providers can also obtain information about tourists, changes in consumer behaviour, and the use of online services. The degree of digitalisation in tourism differs across services. The research is based on a questionnaire survey conducted in 2018 in Hungary. The sample, with more than 500 respondents, was processed with the SPSS program using a variety of analysis methods. The following two variables were observed from several aspects: frequency of travel and the importance of services related to online travel. With the help of these variables, a cluster analysis was performed among the participants. The sample can be divided into two groups using K-means cluster analysis. Cluster ‘1’ is a positive group; its members can be called the “most digital tourists.” They agree on most things, with low standard deviation, and for them, digitalisation is a starting point. To the members of Cluster ‘2’, digitalisation is important, too. The results show what is important to them (accommodation, information gathering), but also what they are not interested in at all within the digital world (e.g., car rental or online sharing). Interestingly, there is no third, negative cluster. This result (that there is no such cluster) shows that tourism uses digitalisation, and the question is only the extent of the use of online tools and methods. With the help of the identified consumer groups, the characteristics of digital tourism segments can be described.
These groups were characterised with the help of different variables. One of them is the frequency of travel: there is a significant correlation between travel frequency and cluster membership. The shift is clearly towards Cluster ‘1’, which means that those who find services related to online travel more important are also more likely to travel. By learning more about digital tourists’ consumer behaviour, the results of this research can help providers choose the marketing tools that could influence the consumer choices of the different consumer groups created using digital devices, and, furthermore, conduct more detailed and effective marketing activities. The main finding of the research was that most people have the digital tools needed to participate in e-tourism. Of these, mobile devices are increasingly preferred. This means the challenge for service providers is no longer the digital presence itself, but having applications optimised for different devices.
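The reported association between cluster membership and travel frequency can be sketched as a chi-square test of independence on a contingency table. The counts below are invented, not the survey's data:

```python
# Chi-square test of independence: cluster membership vs. travel frequency.
from scipy.stats import chi2_contingency

# rows: Cluster 1 / Cluster 2; columns: travels rarely / sometimes / often
table = [[30, 60, 110],   # synthetic counts
         [70, 80, 50]]    # synthetic counts

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.4f}")
```

A small p-value here would support the abstract's claim that the more "digital" cluster also travels more frequently.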

Keywords: cluster analysis, digital tourism, marketing tool, tourist behaviour

Procedia PDF Downloads 120
311 The Effects of English Contractions on the Application of Syntactic Theories

Authors: Wakkai Hosanna Hussaini

Abstract:

A formal structure of the English clause is composed of at least two elements, subject and verb, in structural grammar, and of at least one element, the predicate, in systemic (functional) and generative grammars. Each of the elements can be represented by a word or a group (of words). In modern English structure, speakers very often merge two words into one with the use of an apostrophe. The two words can come from different elements or belong to the same element. In either case, the result of the merger is called a contraction. Although contractions constitute a part of modern English structure, they are considered informal in nature (used more frequently in spoken than in written English), which is why they were initially viewed as evidence of language deterioration. To our knowledge, no formal syntactic theory has yet dealt specifically with contractions, because of their deviation from the formal rules of syntax that seek to identify the elements that form a clause in English. The inconsistency between the formal rules and a contraction arises when two words representing two elements in a non-contracted form are merged into one element to form a contraction. Thus, the paper presents the various syntactic issues that arise from converting non-contracted to contracted forms. It categorizes English contractions and describes each category according to its syntactic relations (position and relationship) and morphological formation (form and content) as an integral part of the modern structure of English. This is a position paper; as such, the methodology is observational, descriptive and explanatory/analytical, based on existing related literature. The inventory of English contractions contained in books on syntax forms the data from which specific examples are drawn. It is noted in conclusion that the existing syntactic theories were not originally established to account for English contractions.
The paper further exposes the inadequacies of the existing syntactic theories by giving more reasons for the establishment of a more comprehensive syntactic theory for analyzing English clause/sentence structure involving contractions. The method used reveals the extent of the inadequacies of the three major syntactic theories, structural, systemic (functional) and generative, when applied to English contractions. Although no theory is without limits to its scope, the reluctance of the three major theories to recognize English contractions needs to be overcome because of the increasing popularity of their use in modern English structure. The paper therefore recommends that, as the use of contractions gains popularity even in formal speech today, a syntactic theory be established to handle their patterns of syntactic relations and morphological formation.

Keywords: application, effects, English contractions, syntactic theories

Procedia PDF Downloads 250
310 The Relationship between Caregiver Burden and Life Satisfaction of Caregivers of Elderly Individuals

Authors: Guler Duru Asiret, Cemile Kutmec Yilmaz, Gulcan Bagcivan, Tugce Turten Kaymaz

Abstract:

This descriptive study was conducted to determine the relationship between the caregiver burden and life satisfaction of caregivers who provide home care to elderly individuals. The sample was recruited from the internal medicine and palliative units of a state hospital in Turkey between June 2016 and June 2017. The study sample consisted of 231 primary caregiver family members who met the eligibility criteria and agreed to participate in the study. The inclusion criteria were as follows: caregiver of an inpatient, primary caregiver for at least 3 months, at least 18 years of age, and no communication problems or mental disorders. Data were gathered using an information form prepared by the researchers based on previous literature, the Zarit Burden Interview (ZBI), and the Satisfaction with Life Scale (SWLS). The data were analyzed using IBM SPSS Statistics software version 20.0 (SPSS, Chicago, IL). The descriptive characteristics of the participants were analyzed using numbers, percentages, means and standard deviations. The suitability of the scale scores for a normal distribution was analyzed using the Kolmogorov-Smirnov and Shapiro-Wilk tests. Relationships between the scales were analyzed using Spearman's rank-correlation coefficient. P values less than 0.05 were considered significant. The average age of the caregivers was 50.11±13.46 (mean±SD) years. Of the caregivers, 76.2% were women, 45% were primary school graduates, 89.2% were married, and 38.1% were the daughters of their patients. Among these, 52.4% evaluated their income level to be good, and 53.6% had been giving care for less than 2 years. The patients' average age was 77.1±8.0 years. Of the patients, 55.8% were women, 56.3% were illiterate, 70.6% were married, and 97.4% had at least one chronic disease. The mean Zarit Burden Interview score was 35.4±1.5, and the mean Satisfaction with Life Scale score was 20.6±6.8. A negative relationship was found between the caregivers' scores on the ZBI and on the SWLS (r = -0.438, p = 0.000).
The present study determined that the caregivers had a moderate caregiver burden and moderate life satisfaction, and that the life satisfaction of caregivers decreased as their caregiver burden increased. In line with these results, it is recommended to increase the effectiveness of discharge training, to arrange training and counseling programs that help caregivers cope with the problems they experience, to monitor the caregivers at regular intervals, and to provide the necessary institutional support.
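The negative Spearman rank correlation reported above can be sketched as follows; the paired ZBI/SWLS scores are synthetic, chosen only to show an inverse monotonic relationship:

```python
# Spearman rank correlation between caregiver burden (ZBI) and
# life satisfaction (SWLS), on invented paired scores.
from scipy.stats import spearmanr

zbi  = [22, 30, 35, 41, 48, 55, 60, 28, 38, 50]  # synthetic burden scores
swls = [28, 24, 22, 18, 15, 12, 10, 25, 20, 14]  # synthetic satisfaction scores

rho, p = spearmanr(zbi, swls)
print(f"rho = {rho:.3f}, p = {p:.4f}")
```

Spearman's rho suits ordinal scale scores because it depends only on ranks, matching the study's use of it after the normality tests.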

Keywords: caregiver burden, family caregivers, nurses, satisfaction

Procedia PDF Downloads 161
309 Air Handling Units Power Consumption Using Generalized Additive Model for Anomaly Detection: A Case Study in a Singapore Campus

Authors: Ju Peng Poh, Jun Yu Charles Lee, Jonathan Chew Hoe Khoo

Abstract:

The emergence of digital twin technology, a digital replica of the physical world, has improved real-time access to data from sensors about the performance of buildings. This digital transformation has opened up many opportunities to improve the management of buildings by using the collected data to help monitor consumption patterns and energy leakages. One example is the integration of predictive models for anomaly detection. In this paper, we use the Generalised Additive Model (GAM) for anomaly detection in Air Handling Unit (AHU) power consumption patterns. There is ample research on the use of GAMs for the prediction of power consumption at the office-building and nation-wide levels. However, there is limited illustration of their anomaly detection capabilities, of prescriptive analytics case studies, and of their integration with the latest developments in digital twin technology. In this paper, we applied the general GAM modelling framework to the historical AHU power consumption and cooling load data of a building, collected from Jan 2018 to Aug 2019 at an education campus in Singapore, to train prediction models that, in turn, yield predicted values and ranges. The historical data are seamlessly extracted from the digital twin for modelling purposes. We enhanced the utility of the GAM model by using it to power a real-time anomaly detection system based on the forward predicted ranges. The magnitude of deviation from the upper and lower bounds of the uncertainty intervals is used to identify anomalous data points, all based on historical data, without explicit intervention from domain experts. Notwithstanding, the domain expert fits in through an optional feedback loop through which iterative data cleansing is performed. After an anomalously high or low level of power consumption is detected, a set of rule-based conditions is evaluated in real-time to help determine the next course of action for the facilities manager.
The performance of GAM is then compared with other approaches to evaluate its effectiveness. Lastly, we discuss the successful deployment of this approach for the detection of anomalous power consumption patterns, illustrated with real-world use cases.
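The detection logic described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the GAM fit is abstracted away as a list of predicted values, and the uncertainty band is approximated from the historical residual spread (the `width` multiplier is an assumed setting):

```python
import statistics

def anomaly_flags(actual, predicted, width=2.0):
    """Flag readings whose deviation from the model prediction exceeds a band
    derived from the residual spread (a stand-in for the GAM's intervals)."""
    residuals = [a - p for a, p in zip(actual, predicted)]
    # spread estimated from all residuals (a simplification; a fitted GAM
    # would supply forward prediction intervals directly)
    sd = statistics.stdev(residuals)
    flags = []
    for a, p in zip(actual, predicted):
        lower, upper = p - width * sd, p + width * sd
        # the magnitude of deviation beyond the band informs severity
        if a > upper:
            flags.append(("high", a - upper))
        elif a < lower:
            flags.append(("low", lower - a))
        else:
            flags.append(("normal", 0.0))
    return flags
```

Each flag could then feed the rule-based conditions that decide the facilities manager's next course of action.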

Keywords: anomaly detection, digital twin, generalised additive model, GAM, power consumption, supervised learning

Procedia PDF Downloads 139
308 A Study on the Establishment of Performance Evaluation Criteria for MR-Based Simulation Device to Train K-9 Self-Propelled Artillery Operators

Authors: Yonggyu Lee, Byungkyu Jung, Bom Yoon, Jongil Yoon

Abstract:

MR-based simulation devices have recently been used in various fields such as entertainment, medicine, manufacturing, and education. Different simulation devices are also being developed for military equipment training, to address safety concerns as well as the cost of training with expensive equipment. An important aspect of developing simulation devices to replicate military training is that trainees experience the same effect as training with real devices. In this study, criteria for performance evaluation are established to compare the training effect of an MR-based simulation device with that of an actual device. K-9 self-propelled artillery (SPA) operators are selected as the training subjects. First, MR-based software is developed to simulate the training ground and training scenarios currently used for training SPA operators in South Korea. Hardware that replicates the interior of the SPA is designed, and a simulation device linked to the software is developed. Second, criteria are established to evaluate the simulation device based on real-life training scenarios. A total of nine performance evaluation criteria were selected based on actual SPA operation training scenarios. Evaluation items were selected to assess whether the simulation device was designed such that trainees would experience the same effect as training in the field with a real SPA. To evaluate how well the simulation device replicates the actual training environments (driving and passing through trenches, pools, protrusions, vertical obstacles, and slopes) and driving conditions (rapid steering, rapid accelerating, and rapid braking) prescribed by the training scenarios, tests were performed under the actual training conditions and in the simulation device, followed by a comparison of the results. In addition, the level of noise experienced by operators during training was also selected as an evaluation criterion.
Due to the nature of the simulation device, there may be data latency between the hardware (HW) and software (SW). If the latency in data transmission is significant, the VR imagery delivered to trainees as they maneuver the HW may become inconsistent with their inputs. This latency in data transmission was therefore also selected as an evaluation criterion to improve the effectiveness of the training. Through this study, the key evaluation metrics were selected to achieve the same training effect as training with real equipment in a training ground during the development of the simulation device for military equipment training.
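The latency criterion can be illustrated with a small sketch. The function name, the timestamp format, and the 50 ms threshold are all assumptions for illustration, not values from the study:

```python
def latency_stats(hw_timestamps_ms, sw_timestamps_ms, threshold_ms=50.0):
    """Per-sample HW-to-SW transmission latency (ms), checked against an
    assumed evaluation threshold (threshold_ms is illustrative)."""
    latencies = [sw - hw for hw, sw in zip(hw_timestamps_ms, sw_timestamps_ms)]
    worst = max(latencies)
    mean = sum(latencies) / len(latencies)
    # the criterion passes only if even the worst sample stays in bounds
    return {"mean_ms": mean, "worst_ms": worst, "passes": worst <= threshold_ms}
```

In practice, matched timestamp pairs would be logged while a trainee maneuvers the HW and the statistics compared against the criterion.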

Keywords: K-9 self-propelled artillery, mixed reality, simulation device, synchronization

Procedia PDF Downloads 53
307 Relatively High Heart-Rate Variability Predicts Greater Survival Chances in Patients with Covid-19

Authors: Yori Gidron, Maartje Mol, Norbert Foudraine, Frits Van Osch, Joop Van Den Bergh, Moshe Farchi, Maud Straus

Abstract:

Background: The worldwide pandemic of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), which began in 2019, also known as Covid-19, has infected over 136 million people and tragically taken the lives of over 2.9 million people worldwide. Many of the complications and deaths are predicted by the inflammatory “cytokine storm.” One way to progress in the prevention of death is to find a predictive and protective factor that inhibits inflammation on the one hand and increases anti-viral immunity on the other. The vagal nerve does precisely both. This study examined whether vagal nerve activity, indexed by heart-rate variability (HRV), predicts survival in patients with Covid-19. Method: We performed a pseudo-prospective study, in which we retroactively obtained ECGs of 271 Covid-19 patients arriving at a large regional hospital in The Netherlands. HRV was indexed by the standard deviation of the intervals between normal heartbeats (SDNN). We examined patients’ survival at 3 weeks and took into account multiple confounders and known prognostic factors (e.g., age, heart disease, diabetes, hypertension). Results: Patients’ mean age was 68 (range: 25-95), and nearly 22% of the patients had died by 3 weeks. Their mean SDNN (17.47 msec) was far below the norm (50 msec). Importantly, relatively higher HRV significantly predicted a higher chance of survival after statistically controlling for patients’ age, cardiac diseases, hypertension, and diabetes (hazard ratio, HR, and 95% confidence interval, 95% CI: HR = 0.49, 95% CI: 0.26-0.95, p < 0.05). However, since HRV declines rapidly with age and since age is a profound predictor in Covid-19, we split the sample by age. Subsequently, we found that higher HRV significantly predicted greater survival in patients older than 70 (HR = 0.35, 95% CI: 0.16-0.78, p = 0.01), but HRV did not predict survival in patients below age 70 years (HR = 1.11, 95% CI: 0.37-3.28, p > 0.05).
Conclusions: To the best of our knowledge, this is the first study showing that higher vagal nerve activity, as indexed by HRV, is an independent predictor of higher chances of survival in Covid-19. The results are in line with the protective role of the vagal nerve in disease and extend it to a severe infectious illness. Studies should replicate these findings and then test in controlled trials whether activating the vagal nerve may prevent mortality in Covid-19.
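SDNN, the HRV index used in the study, is simply the standard deviation of the normal-to-normal inter-beat intervals. A minimal sketch with illustrative interval data (not patient data):

```python
import statistics

def sdnn(nn_intervals_ms):
    """SDNN: standard deviation of normal-to-normal inter-beat intervals (ms)."""
    return statistics.stdev(nn_intervals_ms)

# illustrative low-variability series (milliseconds between normal beats)
intervals = [800, 820, 790, 810, 805, 795, 815, 785]
```

Here `sdnn(intervals)` is about 12.2 msec, i.e. well below the 50 msec norm cited in the abstract, as was the cohort mean.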

Keywords: Covid-19, heart-rate variability, prognosis, survival, vagal nerve

Procedia PDF Downloads 165
306 Guidelines for the Management Process Development of Research Journals in Order to Develop Suan Sunandha Rajabhat University to International Standards

Authors: Araya Yordchim, Rosjana Chandhasa, Suwaree Yordchim

Abstract:

This research aims to study guidelines on the development of the management process for research journals in order to develop Suan Sunandha Rajabhat University to international standards. The research investigated influencing elements ranging from the format of the article, the evaluation form for research article quality, and the process of creating a scholarly journal, to the satisfaction level of those with the knowledge and competency to conduct research, problems that arose, and solutions. Drawing upon a sample of 40 persons who had the knowledge and competency to conduct research and create scholarly journal articles at an international level, the data for this research were collected using questionnaires. Using computer software, the data were analyzed with statistics in the forms of frequency, percentage, mean, standard deviation, and multiple regression analysis. The majority of participants were civil servants with a doctorate degree, followed by civil servants with a master's degree. The suitability of the article format was rated at a good level, as was the evaluation form for research article quality. Based on participants' viewpoints, the process of creating scholarly journals was at a good level, while the satisfaction of those who had the knowledge and competency to conduct research was at a satisfactory level. The main problem encountered was difficulty in accessing the website. The solution was to develop a website with user-friendly accessibility, including setting up a Google Scholar profile so that citations and articles used as references could be counted in real time. Research article format influenced the satisfaction level of those who had the knowledge and competency to conduct research, with statistical significance at the 0.01 level.
The research article quality assessment form (preface section, research article writing section, preparation of research article manuscripts section, and the original article evaluation form for the author) affected the satisfaction of those with the knowledge and competency to conduct research, with statistical significance at the 0.01 level. The process of establishing journals had an impact on the satisfaction of those with the knowledge and ability to conduct research, with statistical significance at the 0.05 level.
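As a minimal illustration of the regression analysis reported above, the sketch below fits a one-predictor least-squares line in pure Python; the study itself used multiple regression, and the data here are purely illustrative:

```python
def simple_ols(x, y):
    """Least-squares slope and intercept: a one-predictor simplification of
    the multiple regression analysis described in the text."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    slope = sxy / sxx
    return slope, my - slope * mx
```

With a predictor such as an article-format rating and a response such as a satisfaction score, the slope's sign and significance would indicate the kind of influence the study reports.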

Keywords: guidelines, development of management, research journals, international standards

Procedia PDF Downloads 117
305 A Comparative Study of Sampling-Based Uncertainty Propagation with First Order Error Analysis and Percentile-Based Optimization

Authors: M. Gulam Kibria, Shourav Ahmed, Kais Zaman

Abstract:

In system analysis, uncertainty in the input variables causes uncertainty in the system responses. Different probabilistic approaches for uncertainty representation and propagation in such cases exist in the literature, and different representation approaches yield different outputs: some approaches may estimate the system response better than others. The NASA Langley Multidisciplinary Uncertainty Quantification Challenge (MUQC) has posed challenges about uncertainty quantification. Subproblem A of the challenge, the uncertainty characterization subproblem, is addressed in this study. In this subproblem, the challenge is to gather knowledge about unknown model inputs, which carry inherent aleatory and epistemic uncertainties, from the responses (outputs) of the given computational model. We use two different methodologies to approach the problem. In the first, we use sampling-based uncertainty propagation with first-order error analysis; in the other, we place emphasis on Percentile-Based Optimization (PBO). The NASA Langley MUQC's subproblem A is constructed so that both aleatory and epistemic uncertainties need to be managed. The challenge problem classifies each uncertain parameter as belonging to one of the following three types: (i) an aleatory uncertainty modeled as a random variable with a fixed functional form and known coefficients; this uncertainty cannot be reduced. (ii) An epistemic uncertainty modeled as a fixed but poorly known physical quantity that lies within a given interval; this uncertainty is reducible. (iii) A parameter that may be aleatory, but for which sufficient data are not available to adequately model it as a single random variable. For example, the parameters of a normal variable, e.g., the mean and standard deviation, might not be precisely known but could be assumed to lie within some intervals.
This results in a distributional p-box: the physical parameter carries an aleatory uncertainty, but the parameters prescribing its mathematical model are subject to epistemic uncertainties. Each parameter of the random variable is an unknown element of a known interval, and this uncertainty is reducible. From the study, it is observed that, due to practical limitations and computational expense, the sampling in the sampling-based methodology is not exhaustive. The sampling-based methodology therefore has a high probability of underestimating the output bounds. An optimization-based strategy for converting uncertainty described by interval data into a probabilistic framework is consequently necessary; this is achieved in this study by using PBO.
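A distributional p-box is commonly propagated with a double-loop (nested) sampling scheme: an outer loop over the epistemic interval parameters and an inner loop over the aleatory variable. The sketch below illustrates that generic scheme, not the challenge problem's actual model; all names and sample sizes are illustrative:

```python
import random
import statistics

def pbox_output_bounds(model, mu_interval, sigma_interval,
                       n_outer=20, n_inner=500, seed=1):
    """Double-loop sketch for a distributional p-box: the outer loop samples
    the epistemic (interval-valued) mean and std dev of a normal input; the
    inner loop propagates the aleatory variability. Returns bounds on the
    output mean across the epistemic samples."""
    rng = random.Random(seed)
    means = []
    for _ in range(n_outer):
        mu = rng.uniform(*mu_interval)        # epistemic: fixed but poorly known
        sigma = rng.uniform(*sigma_interval)
        ys = [model(rng.gauss(mu, sigma)) for _ in range(n_inner)]  # aleatory
        means.append(statistics.fmean(ys))
    return min(means), max(means)
```

The non-exhaustiveness noted above is visible here: with a finite `n_outer`, the returned bounds underestimate the true epistemic envelope, which motivates the optimization-based PBO alternative.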

Keywords: aleatory uncertainty, epistemic uncertainty, first order error analysis, uncertainty quantification, percentile-based optimization

Procedia PDF Downloads 228
304 Emoji, the Language of the Future: An Analysis of the Usage and Understanding of Emoji across User-Groups

Authors: Sakshi Bhalla

Abstract:

On the one hand, given their seemingly simplistic, near-universal usage and understanding, emoji are dismissed as a potential step back in the evolution of communication. On the other, their effectiveness, pervasiveness, and adaptability across and within contexts are undeniable. In this study, the responses of 40 people (categorized by age) were recorded using a uniform two-part questionnaire in which they were required to a) identify the meaning of 15 emoji placed in isolation, and b) interpret the meaning of the same 15 emoji placed in a context-defining posting on Twitter. The context-based responses were compared against both the responses to the emoji in isolation and the originally intended meaning ascribed to each emoji. Analysis of these results showed that each of the five age categories uses, understands, and perceives emoji differently, which could be attributed to their degree of exposure. For example, the youngest category (aged < 20) was the least accurate at identifying emoji in isolation (~55%), and its members' proclivity to change their responses with respect to context was also the lowest (~31%). However, an analysis of their individual responses showed that these first-borns of social media seem to have reached a point where emoji no longer suggest their most literal meanings: the meaning and implication of these emoji have evolved to their context-derived meanings, even when the emoji are placed in isolation. These trends carry forward meaningfully for the other four groups as well. In the case of the oldest category (aged > 35), however, the trends indicated inaccuracy and, therefore, a higher incidence of changed responses. When studied as a continuum, the responses indicate that, slowly and steadily, emoji are evolving from pictograms into ideograms.
That is to suggest that they do not just indicate a one-to-one relation between a singular form and a singular meaning; in fact, they communicate increasingly complicated ideas. This is much like the evolution of ancient hieroglyphics on papyrus reed or cuneiform on Sumerian clay tablets, which evolved from simple pictograms into increasingly complex ideograms. This evolution within communication parallels, and is contingent on, the evolution of the platforms that carry it. What is astounding is the capacity of humans to leverage different platforms to facilitate such changes. Twitterese, as it is now called, is one instance of language adapting to the demands of the digital world. That it has no spoken component or ostensible grammar, and that it lacks standardization of use and meaning, may seem like impediments to qualifying it as the 'language' of the digital world. However, that kind of declaration remains a function of time, and time alone.

Keywords: communication, emoji, language, Twitter

Procedia PDF Downloads 89
303 Diversity, Biochemical and Genomic Assessment of Selected Benthic Species of Two Tropical Lagoons, Southwest Nigeria

Authors: G. F. Okunade, M. O. Lawal, R. E. Uwadiae, D. Portnoy

Abstract:

The diversity, physico-chemical, biochemical, and genomic assessment of macrofauna species of the Ologe and Badagry Lagoons was carried out between August 2016 and July 2018. The concentrations of Fe, Zn, Mn, Cd, Cr, and Pb in water were determined by Atomic Absorption Spectrophotometry (AAS). Particle size distribution was determined by wet-sieving and sedimentation using the hydrometer method. Genomic analyses were carried out using 25 P. fusca (quadriseriata) and 25 P. fusca from each lagoon, owing to their abundance in both lagoons throughout the two years of collection. DNA was isolated from each sample using the Mag-Bind Blood and Tissue DNA HD 96 kit, a method designed to isolate high-quality DNA. The biochemical characteristics were analysed in the dominant species (P. aurita and T. fuscatus) using ELISA kits. Physico-chemical parameters such as pH, total dissolved solids, dissolved oxygen, and conductivity were analysed using APHA standard protocols. The physico-chemical parameters of water quality were recorded with mean values of 32.46 ± 0.66 mg/L and 41.93 ± 0.65 mg/L for COD, 27.28 ± 0.97 and 34.82 ± 0.1 mg/L for BOD, 0.04 ± 4.71 mg/L for DO, and 6.65 and 6.58 for pH in Ologe and Badagry Lagoons, respectively, with significant variations (p ≤ 0.05) across seasons. The mean ± standard deviation of salinity for Ologe and Badagry Lagoons ranged from 0.43 ± 0.30 to 0.27 ± 0.09. A total of 4,210 individuals were recorded: 2,008 individuals belonging to one phylum, two classes, and four families in Ologe Lagoon, and 2,202 individuals belonging to one phylum, two classes, and five families in Badagry Lagoon. By class composition, gastropods contributed 99% and bivalves 1% in Ologe Lagoon, while gastropods contributed 98.91% and bivalves 1.09% in Badagry Lagoon.
Particle sizes were distributed from 0.002 mm to 2.00 mm. The particle size distribution in Ologe Lagoon comprised 0.83% gravel, 97.83% sand, and 1.33% silt, while Badagry Lagoon recorded 7.43% sand, 24.71% silt, and 67.86% clay, reflecting the excessive dredging activities going on in the latter lagoon. The maximum percentage of sand (100%) was found at station 6 in Ologe Lagoon, while the minimum (96%) was found at station 1. P. aurita (Ologe Lagoon) and T. fuscatus (Badagry Lagoon) were the most abundant benthic species, contributing 61.05% and 64.35%, respectively. The enzymatic activities of P. aurita in Ologe Lagoon showed mean values of 21.03 mg/dl for AST, 10.33 mg/dl for ALP, 82.16 mg/dl for ALT, and 73.06 mg/dl for CHO, while T. fuscatus (Badagry Lagoon) recorded mean values of 29.76 mg/dl for AST, 11.69 mg/dl for ALP, 140.58 mg/dl for ALT, and 45.98 mg/dl for CHO. There were significant variations (p < 0.05) in AST and CHO activity levels in the muscles of the species.

Keywords: benthos, biochemical responses, genomics, metals, particle size

Procedia PDF Downloads 116
302 Effects of Group Cognitive Restructuring and Rational Emotive Behavioral Therapy on Psychological Distress of Awaiting-Trial Inmates in Correctional Centers in North-West, Nigeria

Authors: Muhammad Shafi'u Adamu

Abstract:

This study examined the effects of two group cognitive behavioural therapies (Cognitive Restructuring and Rational Emotive Behavioural Therapy) on the psychological distress of awaiting-trial inmates in correctional centres in North-West, Nigeria. The study had four specific objectives, four research questions, and four null hypotheses. It used a quasi-experimental design involving a pre-test and a post-test. The population comprised all 7,962 awaiting-trial inmates in correctional centres in North-West, Nigeria. 131 awaiting-trial inmates from three intact correctional centres were selected using the census technique. The respondents were sampled and randomly assigned to three groups (CR, REBT, and control). The Kessler Psychological Distress Scale (K10) was adapted for data collection. The instrument was validated by experts and subjected to a pilot study, yielding a Cronbach's alpha reliability coefficient of 0.772. Each group received treatment for 8 consecutive weeks (60 minutes/week). Data collected from the field were subjected to descriptive statistics of mean, standard deviation, and mean difference to answer the research questions. Inferential statistics of ANOVA and the independent-sample t-test were used to test the null hypotheses at the p ≤ 0.05 level of significance. Results revealed no significant difference among the pre-treatment mean scores of the experimental and control groups. Statistical evidence showed a significant difference among the post-treatment mean scores of the three groups, and the results of the post hoc multiple-comparison test indicated a post-treatment reduction in psychological distress among the awaiting-trial inmates. The results also showed a significant difference between the post-treatment psychological distress mean scores of male and female awaiting-trial inmates, but no such difference among those exposed to REBT.
The study recommends that a standardized, structured CBT counselling treatment be designed for correctional centres across Nigeria and that CBT counselling techniques be used in the treatment of psychological distress in both correctional and clinical settings.
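The ANOVA used to compare the three groups reduces to the familiar F ratio of between-group to within-group mean squares. A minimal sketch with made-up scores, not the study's data:

```python
def one_way_anova_f(groups):
    """One-way ANOVA F statistic: between-group mean square over
    within-group mean square, for comparing group mean scores."""
    all_vals = [v for g in groups for v in g]
    grand = sum(all_vals) / len(all_vals)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum((v - sum(g) / len(g)) ** 2 for g in groups for v in g)
    df_between = len(groups) - 1
    df_within = len(all_vals) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)
```

A large F (relative to the F distribution with those degrees of freedom) indicates a significant difference among the CR, REBT, and control group means, which a post hoc test would then localize.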

Keywords: awaiting-trial inmates, cognitive restructuring, correctional centres, group cognitive behavioural therapies, rational emotive behavioural therapy

Procedia PDF Downloads 73
301 Application of Geosynthetics for the Recovery of a Road Located on a Geological Failure

Authors: Rideci Farias, Haroldo Paranhos

Abstract:

The present work deals with the use of a drainage geocomposite as a deep-drainage element and a geogrid to reinforce the base of the embankment supporting the road pavement over geological faults in a stretch of the TO-342 Highway between the cities of Miracema and Miranorte, in the State of Tocantins, Brazil, which for many years was the main link between TO-010 and BR-153 beyond the city of Palmas, also in Tocantins. For this application, geotechnical and geological studies were carried out by means of SPT percussion drilling and rotary core drilling to understand the problem, identifying the type of faults and the filling material and defining the water table. According to these studies, the defined route passes through a fault zone longitudinal to the roadway, with strong breaking/fracturing, the presence of voids, intense alteration, advanced argillization of the rock, and parts of the faults filled by organic, compressible soils leached from other horizons. Geotechnically, this geology presents the aggravating conditions of a high hydraulic load and very low penetration resistance. For more than 20 years, the region presented constant excessive deformations in the upper layers of the pavement, which persisted after routine services of regularization, reconformation, re-compaction of the layers, and application of the asphalt coating. The faults propagated quickly to the surface of the asphalt pavement, generating a longitudinal shear and forming steps (unevenness) of close to 40 cm, causing numerous accidents and discomfort to drivers, since the geometric alignment lay on a horizontal curve. Several projects were presented to the region's highway department to solve the problem.
Due to the need for partial closure of the roadway and the short execution time, the use of geosynthetics was proposed. The most adequate solution took into account the movement of the existing geological faults and the position of the water table relative to the several layers of pavement and fault. To prevent any flow of water into the body of the embankment and the fault-filling material, a drainage-curtain solution was executed at 4.0 meters depth with a drainage geocomposite, and, as a reinforcement element and inhibitor of possible fault propagation, a geogrid of 200 kN/m resistance was inserted at the base of the reconstituted embankment. Recent evaluations, after 13 years of application of the solution, show the efficiency of the technique used, supported by the geotechnical studies carried out in the area.

Keywords: geosynthetics, geocomposite, geogrid, road, recovery, geological failure

Procedia PDF Downloads 161
300 Study of Drape and Seam Strength of Fabric and Garment in Relation to Weave Design and Comparison of 2D and 3D Drape Properties

Authors: Shagufta Riaz, Ayesha Younus, Munir Ashraf, Tanveer Hussain

Abstract:

Aesthetics and performance are two of the most important considerations, along with quality, durability, comfort, and cost, that affect garment credibility. Fabric drape is perhaps the most important clothing characteristic distinguishing fabric from sheet, paper, steel, or other film materials. It enables the fabric to mold itself under its own weight into the desired shape when only part of it is directly supported. Thanks to its drapeability, fabric can fold gracefully into bends of single or double curvature to produce a smooth flow, i.e., 'the sinusoidal-type folds of a curtain or skirt'. Drape and seam strength are two parameters considered for the aesthetics and performance of fabric in both apparel and home textiles. Until recently, no study had inspected the effect of weave design on the drape and seam strength of fabric and garment. Therefore, the aim of this study was to measure the seam strength and drape of fabric and garment objectively while changing the weave design and quality of the fabric, and to compare 2D and 3D drape to find whether a fabric behaves in the same manner or differently when sewn and worn on the body. Four different cotton weave designs were developed and pre-treatment was done. The 2D drape of the fabric was measured by a drapemeter fitted with a digital camera and a supporting disc on which to hang the specimen. The drape coefficient value (DC%) is inversely related to drapeability; it is the ratio of the draped sample's projected shadow area to the area of the undraped (flat) sample, expressed as a percentage. Similarly, 3D drape was measured by hanging A-line skirts made from the developed weave designs. The BS 3356 standard test method was followed for the bending length examination; bending length is related to the angle that the fabric makes with its horizontal axis. Seam strength was determined by following the relevant ASTM test standard.
For the sewn fabric, the stitch density of the seam was determined with a magnifying glass according to the standard ASTM test method. From the experimentation and evaluation in this research study, it was found that drape and seam strength were significantly affected by changes in weave design and fabric quality (PPI and yarn count). Drapeability increased as the number of interlacements, or contact points, between warp and weft yarns decreased, while fabric weight, bending length, and fabric density were inversely related to drapeability. It was concluded that 2D drape was higher than 3D drape even though the garment was made of the same fabric construction. Seam breakage strength decreased with decreases in pick density and yarn count.
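The drape coefficient defined above can be computed directly. A minimal sketch following the text's definition (draped projected shadow area over flat-sample area); the function name and units are assumptions:

```python
import math

def drape_coefficient(draped_area_cm2, sample_radius_cm):
    """DC% per the text's definition: draped projected shadow area divided
    by the flat (undraped) circular sample area, as a percentage.
    A lower DC% means a more drapeable fabric."""
    flat_area = math.pi * sample_radius_cm ** 2
    return 100.0 * draped_area_cm2 / flat_area
```

For example, a 10 cm radius specimen whose draped shadow covers half its flat area gives DC% = 50.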

Keywords: drape coefficient, fabric, seam strength, weave

Procedia PDF Downloads 250
299 An Investigation into the Influence of Compression on 3D Woven Preform Thickness and Architecture

Authors: Calvin Ralph, Edward Archer, Alistair McIlhagger

Abstract:

3D woven textile composites continue to emerge as an advanced material for structural applications and composite manufacture due to their bespoke nature, through-thickness reinforcement, and near-net-shape capabilities. When 3D woven preforms are produced, they are in their optimal physical state. As 3D weaving is a dry preforming technology, it relies on compression of the preform to achieve the desired composite thickness, fibre volume fraction (Vf), and consolidation. This compression of the preform during manufacture results in changes to its thickness and architecture, which can often lead to under-performance of, or changes in, the 3D woven composite. Unlike traditional 2D fabrics, the bespoke nature and variability of 3D woven architectures make it difficult to know exactly how each 3D preform will behave during processing. Therefore, the focus of this study is to investigate the effect of compression on differing 3D woven architectures in terms of structure, crimp (fibre waviness), and thickness, as well as to analyse the accuracy of available software in predicting how 3D woven preforms behave under compression. To achieve this, 3D preforms were modelled and compression was simulated in WiseTex for varying architectures of binder style, pick density, thickness, and tow size. These architectures were then woven, and samples were dry compression tested to determine the compressibility of the preforms under various pressures. Additional preform samples were manufactured using Resin Transfer Moulding (RTM) with varying compressive force. Composite samples were cross-sectioned, polished, and analysed using microscopy to investigate changes in architecture and crimp. Data from the dry fabric compression and composite samples were then compared alongside the WiseTex models to determine the accuracy of the prediction and to identify architecture parameters that affect preform compressibility and stability.
Results indicate that binder style/pick density, tow size, and thickness have a significant effect on the compressibility of 3D woven preforms, with lower pick density allowing greater compression and distortion of the architecture. It was further highlighted that binder style combined with pressure had a significant effect on changes to the preform architecture: orthogonal binders experienced the highest level of deformation, but the highest overall stability, under compression, while layer-to-layer binders showed a reduction in binder fibre crimp. In general, the simulations compared reasonably with the experimental results; however, deviation is evident due to assumptions present within the modelled results.
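The link between compressed preform thickness and fibre volume fraction noted above follows from the standard laminate relation Vf = (areal density × layers) / (fibre density × thickness). A sketch with illustrative values (the function and its inputs are assumptions, not data from the study):

```python
def fibre_volume_fraction(areal_density_gsm, n_layers, fibre_density_gcc, thickness_mm):
    """Vf from preform areal density and compressed thickness: shows why
    compressing a dry preform to a target thickness fixes the composite Vf."""
    # g/m^2 -> g/mm^2 and g/cm^3 -> g/mm^3; their ratio is the thickness (mm)
    # the fibre would occupy if fully consolidated
    solid_fibre_thickness_mm = (areal_density_gsm * n_layers / 1e6) / (fibre_density_gcc / 1e3)
    return solid_fibre_thickness_mm / thickness_mm

# e.g. an illustrative 3000 g/m^2 single-layer glass preform (2.6 g/cm^3)
# compressed to 2 mm yields Vf of roughly 0.58
vf = fibre_volume_fraction(3000.0, 1, 2.6, 2.0)
```

Compressing the same preform to a smaller thickness raises Vf, which is exactly why the compression behaviour of each architecture matters for the final composite.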

Keywords: 3D woven composites, compression, preforms, textile composites

Procedia PDF Downloads 124
298 A Modified QuEChERS Method Using Activated Carbon Fibers as r-DSPE Sorbent for Sample Cleanup: Application to Pesticides Residues Analysis in Food Commodities Using GC-MS/MS

Authors: Anshuman Srivastava, Shiv Singh, Sheelendra Pratap Singh

Abstract:

A simple, sensitive, and effective gas chromatography tandem mass spectrometry (GC-MS/MS) method was developed for the simultaneous analysis of multiple pesticide residues (organophosphates, organochlorines, synthetic pyrethroids, and herbicides) in food commodities using phenolic-resin-based activated carbon fibers (ACFs) as the reversed-dispersive solid phase extraction (r-DSPE) sorbent in a modified QuEChERS (Quick Easy Cheap Effective Rugged Safe) method. The acetonitrile-based QuEChERS technique was used for the extraction of the analytes from food matrices, followed by sample cleanup with ACFs instead of the traditionally used primary secondary amine (PSA). Different physico-chemical characterization techniques, such as Fourier transform infrared spectroscopy, scanning electron microscopy, X-ray diffraction, and Brunauer-Emmett-Teller surface area analysis, were employed to investigate the engineering and structural properties of the ACFs. The recovery of pesticides and herbicides was tested at concentration levels of 0.02 and 0.2 mg/kg in different commodities such as cauliflower, cucumber, banana, apple, wheat, and black gram. The recoveries of all twenty-six pesticides and herbicides were within the acceptable limits (70-120%) of the SANCO guideline, with relative standard deviation values < 15%. The limits of detection and quantification of the method were in the ranges of 0.38-3.69 ng/mL and 1.26-12.19 ng/mL, respectively. In the traditional QuEChERS method, PSA used as the r-DSPE sorbent plays a vital role in the sample clean-up process and demonstrates good recoveries for multiclass pesticides. This study reports that ACFs are better than PSA at removing co-extractives without compromising the recoveries of multiple pesticides from food matrices. Further, ACFs eliminate the need for the charcoal that is added alongside PSA in the traditional QuEChERS method to remove pigments.
The developed method will also be cost-effective because ACFs are significantly cheaper than PSA. The proposed modified QuEChERS method is thus more robust and effective and has better sample cleanup efficiency for multiclass, multi-pesticide residue analysis in different food matrices such as vegetables, grains, and fruits.
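The recovery and RSD acceptance check described above can be sketched as follows; the function and data are illustrative, with the 70-120% and < 15% limits taken from the text:

```python
import statistics

def recovery_and_rsd(measured_mg_kg, spiked_level_mg_kg):
    """Mean recovery (%) and RSD (%) for spiked replicates, judged against
    the acceptability window cited in the text (70-120% recovery, RSD < 15%)."""
    recoveries = [100.0 * m / spiked_level_mg_kg for m in measured_mg_kg]
    mean_recovery = statistics.fmean(recoveries)
    rsd = 100.0 * statistics.stdev(recoveries) / mean_recovery
    acceptable = 70.0 <= mean_recovery <= 120.0 and rsd < 15.0
    return mean_recovery, rsd, acceptable
```

For example, replicates of 0.019, 0.020, and 0.021 mg/kg at the 0.02 mg/kg spike level give a mean recovery of 100% with an RSD of 5%, passing the check.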

Keywords: QuEChERS, activated carbon fibers, primary secondary amine, pesticides, sample preparation, carbon nanomaterials

Procedia PDF Downloads 256
297 Rebuilding Health Post-Conflict: Case Studies from Afghanistan, Cambodia, and Mozambique

Authors: Spencer Rutherford, Shadi Saleh

Abstract:

War and conflict negatively impact all facets of a health system; services cease to function, resources become depleted, and any semblance of governance is lost. Following the cessation of conflict, the rebuilding process involves a wide array of international and local actors. During this period, stakeholders must contend with various trade-offs, including balancing sustainable outcomes with immediate health needs, introducing health reform measures while also increasing local capacity, and reconciling external assistance with local legitimacy. Compounding these factors are additional challenges, including coordination amongst stakeholders, the recurrence of conflict, and ulterior motives from donors and governments, to name a few. The present paper therefore evaluated health system development in three post-conflict countries over a 12-year timeline. Specifically, health policies, health inputs (such as infrastructure and human resources), and measures of governance from the post-conflict periods of Afghanistan, Cambodia, and Mozambique were assessed against health outputs and other measures. All three countries experienced similar challenges when rebuilding the health sector, including division and competition between donors, NGOs, and local institutions; urban-rural health inequalities; and the recurrence of conflict. However, the countries also employed unique and effective mechanisms for reconstructing their health systems, including government engagement of the NGO and private sectors; integration of competing factions into the same workforce; and collaborative planning for health policy. Based on these findings, best-practice development strategies were determined and compiled into a 12-year framework. 
Briefly, during the initial stage of the post-conflict period, primary stakeholders should work quickly to draft a national health strategy in collaboration with the government and focus on managing and coordinating NGOs through performance-based partnership agreements. With this scaffolding in place, the development community can then prioritize the reconstruction of primary health care centers, the recruitment and retention of health workers, and the horizontal integration of immunization services. The final stages should then concentrate on transferring ownership of the health system to national institutions, implementing sustainable financing mechanisms, and phasing out NGO services. Overall, these findings contribute to post-conflict health system development by evaluating the process holistically and along a timeline, and they can be of further use to healthcare managers, policy-makers, and other health professionals.

Keywords: Afghanistan, Cambodia, health system development, health system reconstruction, Mozambique, post-conflict, state-building

Procedia PDF Downloads 143
296 Learning Resources as Determinants for Improving Teaching and Learning Process in Nigerian Universities

Authors: Abdulmutallib U. Baraya, Aishatu M. Chadi, Zainab A. Aliyu, Agatha Samson

Abstract:

Learning resources is the field of study that investigates the process of analyzing, designing, developing, implementing, and evaluating learning materials, learners, and the learning process in order to improve teaching and learning in university-level education, which is essential for empowering students and various sectors of Nigeria’s economy to succeed in a fast-changing global economy. Innovation in the information age of the 21st century means using educational technologies in the classroom for instructional delivery; it involves the use of appropriate educational technologies such as smart boards, computers, projectors and other projected materials to facilitate learning and improve performance. The study examined learning resources as determinants for improving the teaching and learning process at Abubakar Tafawa Balewa University (ATBU), Bauchi, Bauchi State, Nigeria. Three objectives, three research questions and three null hypotheses guided the study, which adopted a survey research design. The population of the study was 880 lecturers. A sample of 260 was obtained using the Research Advisors' table for determining sample size, from which 250 respondents were proportionately selected across the seven faculties. The instrument used for data collection was a structured questionnaire, which was validated by two experts. The reliability of the instrument stood at 0.81, indicating acceptable reliability. The researchers, assisted by six research assistants, distributed and collected the questionnaire with a 75% return rate. Data were analyzed using means and standard deviations to answer the research questions, whereas simple linear regression was used to test the null hypotheses at a 0.05 level of significance. The findings revealed that physical facilities and digital technology tools significantly improved the teaching and learning process, whereas consumables, supplies and equipment did not significantly improve the teaching and learning process in the faculties. 
It was recommended that lecturers in the various faculties strengthen and sustain their use of digital technology tools and continue to properly maintain the available physical facilities. The university management should also, as a matter of priority, continue to adequately fund and frequently upgrade equipment, consumables and supplies to enhance the effectiveness of the teaching and learning process.
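The analysis pipeline described above (means and standard deviations for the research questions, simple linear regression for the null hypotheses) can be sketched as follows. The scores are invented for illustration only; the decision rule compares the slope's t-statistic with the two-tailed critical value at the 0.05 level.

```python
import statistics

def describe(xs):
    """Mean and sample standard deviation, as used for the research questions."""
    return statistics.mean(xs), statistics.stdev(xs)

def simple_linreg(x, y):
    """OLS slope, intercept, and the t-statistic for H0: slope = 0."""
    n = len(x)
    mx, my = statistics.mean(x), statistics.mean(y)
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx                        # slope
    a = my - b * mx                      # intercept
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    se_b = (ss_res / (n - 2) / sxx) ** 0.5
    return b, a, b / se_b

# Invented Likert-style scores for illustration only:
facility_scores = [3.2, 4.1, 2.8, 4.5, 3.9, 4.0, 3.5, 4.4]
learning_scores = [3.0, 4.3, 2.9, 4.6, 3.7, 4.1, 3.4, 4.5]

mean, sd = describe(facility_scores)
b, a, t = simple_linreg(facility_scores, learning_scores)
# Reject H0 at the 0.05 level when |t| exceeds the two-tailed critical
# value for n - 2 = 6 degrees of freedom (about 2.447).
```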

Keywords: education, facilities, learning-resources, technology-tools

Procedia PDF Downloads 7
295 Polypropylene Matrix Enriched With Silver Nanoparticles From Banana Peel Extract For Antimicrobial Control Of E. coli and S. epidermidis To Maintain Fresh Food

Authors: Michail Milas, Aikaterini Dafni Tegiou, Nickolas Rigopoulos, Eustathios Giaouris, Zaharias Loannou

Abstract:

Nanotechnology, a relatively new scientific field, addresses the manipulation of nanoscale materials and devices, which are governed by unique properties, and is applied in a wide range of industries, including food packaging. The incorporation of nanoparticles into polymer matrices used for food packaging is a highly researched field today. One such combination is silver nanoparticles with polypropylene (PP). In the present study, the silver nanoparticles were synthesized by a natural method using a ripe banana peel extract (BPE). This method is superior to others as it stands out for its environmental friendliness, high efficiency and low cost. Specifically, a 1.75 mM silver nitrate (AgNO₃) solution was used with a BPE concentration of 1.7% v/v, an incubation period of 48 hours at 70°C and a pH of 4.3; after its preparation, the polypropylene films were soaked in it. For the PP films, random PP spheres were melted at 170-190°C in molds 0.8 cm in diameter. This polymer was chosen as it is suitable for plastic parts and reusable plastic containers of various types intended to come into contact with food without compromising food quality and safety. The antimicrobial test against Escherichia coli DFSNB1 and Staphylococcus epidermidis DFSNB4 was performed on the films. Films with silver nanoparticles showed at least a 100-fold reduction in both strains compared with films without silver nanoparticles. The limit of detection is the lower limit of the vertical error bars in the presence of nanoparticles, which is 3.11. The main reasons for the adsorption of the nanoparticles are the porous nature of polypropylene and the adsorption capacity of the nanoparticles on the surface of the films due to hydrophobic-hydrophilic forces. 
The most significant parameters that contributed to the results of the experiment include the ripening stage of the banana during the preparation of the plant extract, the temperature and residence time of the nanoparticle solution in the oven, the residence time of the polypropylene films in the nanoparticle solution, the amount of nanoparticles deposited on the films and, finally, the time the films stayed in the refrigerator to dry before the antimicrobial treatment.
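For reference, the "at least 100 times" reduction reported above corresponds to a ≥ 2-log₁₀ reduction in viable counts. A minimal sketch of that conversion, with hypothetical CFU counts (nothing here is taken from the study's data):

```python
import math

def log_reduction(cfu_control, cfu_treated):
    """Log10 reduction in viable counts between control and AgNP-treated films."""
    return math.log10(cfu_control / cfu_treated)

# Hypothetical CFU counts: a >= 100-fold drop is a >= 2-log reduction.
control, treated = 5.0e6, 4.0e4
reduction = log_reduction(control, treated)
```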

Keywords: antimicrobial control, banana peel extract, E. coli, natural synthesis, microbe, plant extract, polypropylene films, S. epidermidis, silver nanoparticles, random PP

Procedia PDF Downloads 160
294 Generation of Roof Design Spectra Directly from Uniform Hazard Spectra

Authors: Amin Asgarian, Ghyslaine McClure

Abstract:

Proper seismic evaluation of non-structural components (NSCs) mandates an accurate estimation of floor seismic demands (i.e., acceleration and displacement demands). Most current international codes incorporate empirical equations to calculate the equivalent static seismic force for which NSCs and their anchorage systems must be designed. These equations are, in general, functions of the component mass and the peak seismic acceleration to which NSCs are subjected during the earthquake. However, recent studies have shown that these recommendations suffer from several shortcomings, such as neglecting higher-mode effects, tuning effects, and NSC damping effects, which cause underestimation of the component seismic acceleration demand. This work aims to circumvent these shortcomings of code provisions, and improve on them, by proposing a simplified, practical, and yet accurate approach to generate acceleration Floor Design Spectra (FDS) directly from the corresponding Uniform Hazard Spectra (UHS) (i.e., design spectra for structural components). A database of 27 reinforced concrete (RC) buildings in which ambient vibration measurements (AVM) were conducted is used. The database comprises 12 low-rise, 10 medium-rise, and 5 high-rise buildings, all located in Montréal, Canada and designated as post-disaster buildings or emergency shelters. The buildings are subjected to a set of 20 compatible seismic records, and Floor Response Spectra (FRS) in terms of pseudo-acceleration are derived using the proposed approach for every floor of each building in both horizontal directions, considering 4 different NSC damping ratios (2, 5, 10, and 20% viscous damping). Several parameters affecting NSC response are evaluated statistically. These parameters comprise NSC damping ratios, tuning of the NSC natural period with one of the natural periods of the supporting structure, higher modes of the supporting structure, and the location of the NSC. 
The entire spectral region is divided into three distinct segments, namely the short-period, fundamental-period, and long-period regions. The derived roof floor response spectra for NSCs with 5% damping are compared with the 5% damped UHS, and a procedure is proposed to generate roof FDS for NSCs with 5% damping directly from the 5% damped UHS in each spectral region. The generated FDS is a powerful, practical, and accurate tool for the seismic design and assessment of acceleration-sensitive NSCs, particularly in existing post-disaster buildings, which have to remain functional even after the earthquake and cannot tolerate any damage to NSCs.
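Although the paper's contribution is the statistical mapping from UHS to FDS, the underlying floor response spectrum computation is standard: each NSC is idealized as a damped single-degree-of-freedom oscillator excited by the floor acceleration history, and the pseudo-acceleration is Sa = ωn²·max|u|. The sketch below uses Newmark average-acceleration integration on a synthetic sinusoidal floor motion; it illustrates the FRS concept only and is not the authors' procedure or data.

```python
import math

def frs_pseudo_accel(floor_acc, dt, periods, zeta=0.05):
    """Floor response spectrum: peak pseudo-acceleration Sa = wn^2 * max|u|
    of a damped SDOF oscillator (unit mass) driven by the floor acceleration,
    integrated with the Newmark average-acceleration scheme."""
    spectrum = []
    for T in periods:
        wn = 2.0 * math.pi / T
        k, c = wn * wn, 2.0 * zeta * wn            # stiffness, damping per unit mass
        keff = k + 2.0 * c / dt + 4.0 / dt ** 2    # effective stiffness
        u = v = 0.0
        a = -floor_acc[0]                          # equilibrium at rest (u = v = 0)
        umax = 0.0
        for i in range(len(floor_acc) - 1):
            dp = -floor_acc[i + 1] + floor_acc[i]  # load increment, p = -ag
            du = (dp + (4.0 / dt + 2.0 * c) * v + 2.0 * a) / keff
            dv = 2.0 * du / dt - 2.0 * v
            u += du
            v += dv
            a = -floor_acc[i + 1] - c * v - k * u  # equilibrium at step i + 1
            umax = max(umax, abs(u))
        spectrum.append(wn * wn * umax)
    return spectrum

# Synthetic 2 Hz sinusoidal floor acceleration (in g), 5 s at dt = 0.01 s:
dt = 0.01
ag = [0.3 * math.sin(2.0 * math.pi * 2.0 * i * dt) for i in range(500)]
sa = frs_pseudo_accel(ag, dt, periods=[0.5, 2.0], zeta=0.05)
# The tuned oscillator (T = 0.5 s, matching the 2 Hz excitation) is amplified
# far more than the detuned one (T = 2.0 s), illustrating the tuning effect.
```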

Keywords: earthquake engineering, operational and functional components (OFCs), operational modal analysis (OMA), seismic assessment and design

Procedia PDF Downloads 231
293 Artificial Intelligence and Robotics in the Eye of Private Law with Special Regards to Intellectual Property and Liability Issues

Authors: Barna Arnold Keserű

Abstract:

In the last few years (what many scholars call the big data era), artificial intelligence (hereinafter AI) has received more and more attention from the public and from the different branches of science as well. What was previously mere science fiction is now becoming reality. AI and robotics often walk hand in hand, which changes not only business and industrial life but also has a serious impact on the legal system. The author's main research focuses on these impacts in the field of private law, with special regard to liability and intellectual property issues. Many questions arise in these areas in connection with AI and robotics, where the boundaries are not sufficiently clear and different needs are articulated by the different stakeholders. Recognizing the urgent need for reflection, the Committee on Legal Affairs of the European Parliament adopted a Motion for a European Parliament Resolution, A8-0005/2017 (of January 27th, 2017), in order to make recommendations to the Commission on civil law rules on robotics and AI. This document identifies some crucial uses of AI and/or robotics, e.g., autonomous vehicles, the replacement of human jobs in industry, and smart applications and machines. It aims to give recommendations for the safe and beneficial use of AI and robotics. However, as the document says, there are no legal provisions that specifically apply to robotics or AI in IP law; existing legal regimes and doctrines can be readily applied to robotics, although some aspects appear to call for specific consideration, and it calls on the Commission to support a horizontal and technologically neutral approach to intellectual property applicable to the various sectors in which robotics could be employed. AI can generate content that may be worth copyright protection, but the question arises: who is the author, and who owns the copyright? 
The AI itself cannot be deemed the author, because that would make it legally equal to human persons. But there is the programmer who created the basic code of the AI, the undertaking who sells the AI as a product, and the user who gives the AI the inputs with which it creates something new. Or AI-generated content may be so far removed from humans that there is no human author at all, in which case it belongs to the public domain. The same questions can be asked in connection with patents. The research aims to answer these questions within the current legal framework and tries to illuminate future possibilities for adapting these frameworks to socio-economic needs. Here, proper license agreements along the multilevel chain from the programmer to the end-user become very important, because AI is an intellectual property in itself that creates further intellectual property. This can collide with data-protection and property rules as well. The problems are similar in the field of liability. Different existing forms of liability can be applied when AI or AI-led robotics cause damage, but it is uncertain whether the result complies with economic and developmental interests.

Keywords: artificial intelligence, intellectual property, liability, robotics

Procedia PDF Downloads 189
292 Analysis of Reflection Coefficients of Reflected and Transmitted Waves at the Interface Between Viscous Fluid and Hygro-Thermo-Orthotropic Medium

Authors: Anand Kumar Yadav

Abstract:

Purpose – The purpose of this paper is to investigate the variation of the amplitude ratios of the various transmitted and reflected waves. Design/methodology/approach – The reflection and transmission of plane waves at the interface between an orthotropic hygro-thermo-elastic half-space (OHTHS) and a viscous-fluid half-space (VFHS) were investigated with reference to coupled hygro-thermo-elasticity. Findings – The interface, at y = 0, is struck by principal (P) plane waves traveling through the VFHS. As a result, two waves are reflected into the VFHS, and four waves are transmitted into the OHTHS, namely the longitudinal displacement (P) wave, the thermal diffusion (TD) wave, the moisture diffusion (mD) wave, and the shear vertical (SV) wave. Expressions for the reflection and transmission coefficients are developed for the incidence of a hygrothermal plane wave. These ratios are displayed graphically and examined under the influence of coupled hygro-thermo-elasticity. Research limitations/implications – According to the existing literature, there is little study of the model under consideration, which combines an OHTHS and a VFHS with coupled hygro-thermo-elasticity. Practical implications – The current model can be applied in many areas, such as soil dynamics, nuclear reactors, high-energy particle accelerators, earthquake engineering, and other areas where coupled hygro-thermo-elasticity is important. In a range of engineering and geophysical settings, wave propagation in a viscous fluid-thermoelastic medium with various characteristics, such as initial stress, magnetic field, porosity, and temperature, gives essential information regarding the presence of new and modified waves. This model may prove useful to experimental seismologists refining earthquake estimates, new material designers, and researchers. 
Social implications – Researchers may use coupled hygro-thermo-elasticity to categorize materials, where the parameter serves as a new indicator of a material's ability to conduct heat in interaction with diverse materials. Originality/value – The submitted text is the sole creation of the team of writers, and all authors contributed equally to its creation.

Keywords: hygro-thermo-elasticity, viscous fluid, reflection coefficient, transmission coefficient, moisture concentration

Procedia PDF Downloads 58
291 The Study of Intangible Assets at Various Firm States

Authors: Gulnara Galeeva, Yulia Kasperskaya

Abstract:

The study deals with the relevant problem of forming an efficient investment portfolio for an enterprise. The structure of the investment portfolio is connected to the degree of influence of intangible assets on the enterprise's income, which determines the importance of research on the content of intangible assets. However, studies of intangible assets do not take into consideration how the state of the enterprise can affect the content and importance of intangible assets for the enterprise's income, which affects the accuracy of the calculations. To study this problem, the research was divided into several stages. In the first stage, intangible assets were classified based on their synergies as underlying intangibles and additional intangibles. In the second stage, this classification was applied. It showed that the lifecycle model and the theory of abrupt development of the enterprise, which are taken into account while designing investment projects, constitute limit cases of a more general theory of bifurcations. The research identified that the qualitative content of intangible assets depends significantly on how close the enterprise is to crisis. In the third stage, the author developed and applied the Wide Pairwise Comparison Matrix method. This made it possible to establish that the ratio of the standard deviation to the mean value of the elements of the priority vector of intangible assets can be used to estimate the probability of a full-blown crisis of the enterprise. The author has identified a criterion that allows fundamental decisions on investment feasibility to be made. The study also developed an additional rapid method of assessing the enterprise's overall status, based on a questionnaire survey of its director consisting of only two questions. 
The research specifically focused on the fundamental role of stochastic resonance in the emergence of bifurcation (crisis) in the economic development of the enterprise. The synergetic approach made it possible to describe the mechanism of the onset of the crisis in detail and also to identify a range of universal ways of overcoming the crisis. It was outlined that the structure of intangible assets transforms into a more organized state, with strengthened synchronization of all processes, as a result of the impact of sporadic (white) noise. The results obtained offer managers and business owners a simple and affordable method of investment portfolio optimization, which takes into account how close the enterprise is to a state of full-blown crisis.
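The crisis indicator described above, the ratio of the standard deviation to the mean of the priority vector, can be illustrated with a standard AHP-style calculation. The sketch below uses the row geometric-mean approximation of the priority vector (a common AHP shortcut, assumed here since the abstract does not specify the eigenvector method) on a hypothetical 3×3 comparison matrix of intangible-asset groups.

```python
import math

def priority_vector(pairwise):
    """AHP priority vector via the row geometric-mean approximation."""
    n = len(pairwise)
    gm = [math.prod(row) ** (1.0 / n) for row in pairwise]
    total = sum(gm)
    return [g / total for g in gm]

def cv(weights):
    """Ratio of standard deviation to mean of the priority vector --
    the crisis-probability indicator described above."""
    n = len(weights)
    mean = sum(weights) / n
    sd = math.sqrt(sum((w - mean) ** 2 for w in weights) / n)
    return sd / mean

# Hypothetical 3x3 pairwise comparison of intangible-asset groups:
A = [[1.0, 3.0, 5.0],
     [1.0 / 3.0, 1.0, 2.0],
     [1.0 / 5.0, 1.0 / 2.0, 1.0]]
w = priority_vector(A)      # normalized weights, summing to 1
indicator = cv(w)           # larger dispersion -> less balanced asset structure
```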

Keywords: analytic hierarchy process, bifurcation, investment portfolio, intangible assets, wide matrix

Procedia PDF Downloads 197