Search results for: remote monitoring app
825 Trends and Inequalities in Distance to and Use of Nearest Natural Space in the Context of the 20-Minute Neighbourhood: A 4-Wave National Repeat Cross-Sectional Study, 2013 to 2019
Authors: Jonathan R. Olsen, Natalie Nicholls, Jenna Panter, Hannah Burnett, Michael Tornow, Richard Mitchell
Abstract:
The 20-minute neighbourhood is a policy priority for governments worldwide, and a key feature of this policy is providing access to natural space within 800 metres of home. The study aims were to (1) examine the association between distance to the nearest natural space and frequent use over time and (2) examine whether frequent use and changes in use were patterned by income and housing tenure over time. Biennial Scottish Household Survey data were obtained for 2013 to 2019 (n = 42,128, aged 16+). Adults were asked the walking distance to their nearest natural space, the frequency of visits to this space, and their housing tenure, as well as age, sex, and income. We examined the association between the distance from home of the nearest natural space, housing tenure, and the likelihood of frequent natural space use (visited once a week or more). Two-way interaction terms were further applied to explore variation in the association between tenure and frequent natural space use over time. We found that 87% of respondents lived within a 10-minute walk of a natural space, meeting the policy specification for a 20-minute neighbourhood. Greater proximity to natural space was associated with increased use; individuals living a 6-to-10-minute walk away and more than a 10-minute walk away were respectively 53% and 78% less likely to report frequent natural space use than those living within a 5-minute walk. Housing tenure was an important predictor of frequent natural space use; private renters and homeowners were more likely to report frequent natural space use than social renters. Our findings provide evidence that proximity to natural space is a strong predictor of frequent use. Our study provides important evidence that time-based access measures alone do not capture deep-rooted socioeconomic variation in the use of natural space.
Policy makers should ensure a nuanced lens is applied to operationalising and monitoring the 20-minute neighbourhood to safeguard against exacerbating existing inequalities.
Keywords: natural space, housing, inequalities, 20-minute neighbourhood, urban design
Procedia PDF Downloads 123
824 Role of Different Land Use Types on Ecosystem Services Provision in Moribane Forest Reserve - Mozambique
Authors: Francisco Domingos Francisco
Abstract:
Tropical forests are key providers of many Ecosystem Services (ES), contributing to human wellbeing on global and local scales. Communities around and within the Moribane Forest Reserve (MFR), Manica Province, Mozambique, benefit from ES through the exploitation of non-wood and wood forest products. The objective was to assess the capacity of the MFR to provide woody forest products, in the species and stem profiles of interest to local communities, at the main extraction sources. Social data on the basic needs of local communities for these products were captured through an earlier exploratory study, which identified the most collected wood species, the collection sources, and their availability in the profiles of greatest interest. A field survey of 39 rectangular 50 m × 20 m plots was conducted, with 13 plots established in each of three land-use types (LUT): Restricted Forest, Unrestricted Forest, and Disturbed areas. The results show that 89 species were identified, of which 28 (31.4%) are assumed to be the most used by the communities. The number of species of local interest does not vary across the LUT (p > 0.05). The most used species (MUS) are distributed as 82% in Restricted Forest, 75% in Unrestricted Forest, and 75% in Disturbed areas. Most individuals, of both species in general and the MUS, found in Unrestricted Forest and Disturbed areas have lower-end profiles (5-7 cm), representing 0.77% and 0.26%, respectively. The profile of individuals of species of local interest varies by LUT (p < 0.05), and their greatest proportion (0.51%) outside the lower end is found in Restricted Forest. There were no similarities between the LUT for the species in general (JCI < 0.5), but there were between the MUS (JCI > 0.5). In conclusion, the areas authorized for the exploitation of wood forest products in the MFR tend to lose their ability to provide local communities with forest products in the species and profiles of their interest.
This reduction is also a serious threat to the biodiversity of the Restricted Forest. The study can help the academic community in future work by replicating the methodology for monitoring purposes or for studies in other similar areas, and the results may support decision-makers in designing better strategies for sustainability.
Keywords: ecosystem services, land-use types, local communities, species profile, wellbeing, wood forest product
Procedia PDF Downloads 135
823 Heart Rate Variability Analysis for Early Stage Prediction of Sudden Cardiac Death
Authors: Reeta Devi, Hitender Kumar Tyagi, Dinesh Kumar
Abstract:
In the present scenario, cardiovascular problems are a growing challenge for researchers and physiologists. As heart disease has no geographic, gender, or socioeconomic boundaries, detecting cardiac irregularities at an early stage, followed by quick and correct treatment, is very important. The electrocardiogram (ECG) is the finest tool for continuous monitoring of heart activity. Heart rate variability (HRV) is used to measure the naturally occurring oscillations between consecutive cardiac cycles. Analysis of this variability is carried out using time-domain, frequency-domain, and non-linear parameters. This paper presents HRV analysis of an online dataset for normal sinus rhythm (taken as healthy subjects) and sudden cardiac death (SCD subjects) using all three methods, computing values for parameters such as the standard deviation of normal-to-normal intervals (SDNN), the root mean square of successive differences between adjacent RR intervals (RMSSD), and the mean of R-to-R intervals (mean RR) in the time domain; very low frequency (VLF), low frequency (LF), high frequency (HF), and the ratio of low to high frequency (LF/HF ratio) in the frequency domain; and the Poincare plot for non-linear analysis. To differentiate the HRV of healthy subjects from that of subjects who died of SCD, a k-nearest neighbor (k-NN) classifier has been used because of its high accuracy. Results show highly reduced values of all stated parameters for SCD subjects as compared to healthy ones. As the dataset used for SCD patients is a recording of their ECG signal one hour prior to death, it is verified with an accuracy of 95% that the proposed algorithm can identify the mortality risk of a patient one hour before death. The identification of a patient's mortality risk at such an early stage may prevent sudden death if timely and correct treatment is given by the doctor.
Keywords: early stage prediction, heart rate variability, linear and non-linear analysis, sudden cardiac death
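As a sketch of the time-domain side of such a pipeline, the snippet below computes mean RR, SDNN, and RMSSD from a series of RR intervals and applies a minimal hand-rolled k-NN vote. The feature values and training points are illustrative stand-ins, not taken from the study's dataset.

```python
import numpy as np

def hrv_time_domain(rr_ms):
    """Compute time-domain HRV metrics from RR intervals in milliseconds."""
    rr = np.asarray(rr_ms, dtype=float)
    diffs = np.diff(rr)
    return {
        "mean_rr": rr.mean(),                   # mean R-to-R interval
        "sdnn": rr.std(ddof=1),                 # std. dev. of NN intervals
        "rmssd": np.sqrt(np.mean(diffs ** 2)),  # RMS of successive differences
    }

def knn_classify(features, train_X, train_y, k=3):
    """Minimal k-nearest-neighbour majority vote over feature vectors."""
    d = np.linalg.norm(train_X - features, axis=1)
    nearest = train_y[np.argsort(d)[:k]]
    values, counts = np.unique(nearest, return_counts=True)
    return values[np.argmax(counts)]
```

A subject's feature vector (e.g. mean RR and SDNN) would then be classified against labelled healthy/SCD training vectors; the reduced variability reported for SCD subjects is what makes the classes separable.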
Procedia PDF Downloads 343
822 Development of an Interface between BIM-model and an AI-based Control System for Building Facades with Integrated PV Technology
Authors: Moser Stephan, Lukasser Gerald, Weitlaner Robert
Abstract:
Urban structures will be used more intensively in the future through redensification or newly planned districts with high building densities. In particular, to achieve the positive energy balances required for Positive Energy Districts (PED), the use of roofs alone is not sufficient in dense urban areas. However, the increasing share of windows significantly reduces the facade area available for PV generation. By using PV technology on other building components, such as external venetian blinds, on-site generation can be maximized and the standard functionality of this product can be usefully extended. While offering advantages in terms of infrastructure, sustainable use of resources, and efficiency, these systems require increased optimization in the planning and control strategies of buildings. External venetian blinds with PV technology require an intelligent control concept to meet demands such as maximum power generation, glare prevention, high daylight autonomy, and avoidance of summer overheating, while also exploiting passive solar gains in wintertime. Today, three-dimensional geometric information on outdoor spaces and at the building level is available for planning with Building Information Modeling (BIM). In a research project, a web application called HELLA DECART was developed to extract the data required for simulation from BIM models and to make it usable for calculations and coupled simulations. The investigated object is uploaded as an IFC file to this web application and includes the object as well as the neighboring buildings and possible remote shading. This tool uses a ray tracing method to determine possible glare from solar reflections off neighboring buildings, as well as near and far shadows, per window on the object. Subsequently, an annual estimate of the sunlight per window is calculated by taking weather data into account.
This optimized per-window daylight assessment makes it possible to estimate the potential power generation of the PV integrated in the venetian blind, as well as the daylight and solar entry. As a next step, these calculation results, together with all parameters necessary for the thermal simulation, can be provided. The overall aim of this workflow is to improve the coordination, in a control system, between the BIM model and the coupled building simulation of the resulting shading and daylighting system, the artificial lighting system, and maximum power generation. In the research project Powershade, an AI-based control concept for PV-integrated facade elements with coupled simulation results is investigated. The automated workflow concept developed in this paper is tested using an office living lab at the HELLA company.
Keywords: BIPV, building simulation, optimized control strategy, planning tool
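The per-window shadow test described above can be illustrated with a deliberately simplified geometric check: a neighbouring building is modelled as an axis-aligned box, and a single ray is cast from a window point toward the sun. This is a toy stand-in for the HELLA DECART ray tracer, not its actual implementation; all coordinates and the azimuth convention are assumptions.

```python
import numpy as np

def sun_direction(altitude_deg, azimuth_deg):
    """Unit vector from a surface point toward the sun
    (z is up; azimuth measured clockwise from north along +y)."""
    alt, az = np.radians(altitude_deg), np.radians(azimuth_deg)
    return np.array([np.cos(alt) * np.sin(az),
                     np.cos(alt) * np.cos(az),
                     np.sin(alt)])

def ray_hits_box(origin, direction, box_min, box_max):
    """Slab test: does the ray origin + t * direction (t > 0) hit the box?"""
    d = np.where(direction == 0.0, 1e-12, direction)  # avoid division by zero
    t1 = (box_min - origin) / d
    t2 = (box_max - origin) / d
    t_near = np.max(np.minimum(t1, t2))
    t_far = np.min(np.maximum(t1, t2))
    return bool(t_far >= max(t_near, 0.0))

def window_in_shadow(window_pt, altitude_deg, azimuth_deg, box_min, box_max):
    """True if the building box blocks the direct sun ray to the window point."""
    d = sun_direction(altitude_deg, azimuth_deg)
    return ray_hits_box(np.asarray(window_pt, float), d,
                        np.asarray(box_min, float), np.asarray(box_max, float))
```

Repeating such a test per window over hourly sun positions for a year, weighted by weather data, gives the kind of annual sunlight estimate the abstract describes.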
Procedia PDF Downloads 111
821 Sound Analysis of Young Broilers Reared under Different Stocking Densities in Intensive Poultry Farming
Authors: Xiaoyang Zhao, Kaiying Wang
Abstract:
The choice of stocking density in poultry farming is a potential way of determining the welfare level of poultry. However, it is difficult to measure stocking densities in poultry farming because of the many variables, such as species, age and weight, feeding method, house structure, and geographical location, across different broiler houses. This paper proposes a method to measure the differences between young broilers reared under different stocking densities by sound analysis. Vocalisations of broilers were recorded and analysed under different stocking densities to identify the relationship between sounds and stocking densities. Recordings were made continuously for three-week-old chickens in order to evaluate the variation in the sounds emitted by the animals from the beginning. The experimental trial was carried out on an indoor reared broiler farm; the audio recording procedures lasted for 5 days. Broilers were divided into 5 groups with stocking density treatments of 8/m², 10/m², 12/m² (96 birds/pen), 14/m², and 16/m²; all conditions, including ventilation and feed, were kept the same across groups except for stocking density. The recording and analysis of chicken sounds were made noninvasively. Sound recordings were manually analysed and labelled using sound analysis software (GoldWave Digital Audio Editor). After the sound acquisition process, Mel Frequency Cepstral Coefficients (MFCC) were extracted from the sound data, and a Support Vector Machine (SVM) was used as an early detector and classifier. This preliminary study, conducted on an indoor reared broiler farm, shows that this method can classify sounds of chickens under different densities economically (only a cheap microphone and recorder are needed); the classification accuracy is 85.7%.
This method can predict the optimum stocking density of broilers when complemented with animal welfare indicators, animal productivity indicators, and so on.
Keywords: broiler, stocking density, poultry farming, sound monitoring, Mel Frequency Cepstral Coefficients (MFCC), Support Vector Machine (SVM)
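For readers unfamiliar with the feature-extraction step, here is a compact numpy-only sketch of MFCC computation (framing, windowing, power spectrum, mel filterbank, log, DCT). The frame sizes and filter counts are common defaults, not the settings used in the study, and a production system would normally rely on a dedicated audio library.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc(signal, sr=16000, frame_len=400, hop=160, n_fft=512,
         n_filters=26, n_ceps=13):
    # 1. split into overlapping frames and apply a Hamming window
    n_frames = 1 + (len(signal) - frame_len) // hop
    idx = np.arange(frame_len)[None, :] + hop * np.arange(n_frames)[:, None]
    frames = signal[idx] * np.hamming(frame_len)
    # 2. power spectrum of each frame
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    # 3. triangular mel-spaced filterbank
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(n_filters):
        lo, mid, hi = bins[i], bins[i + 1], bins[i + 2]
        fbank[i, lo:mid] = (np.arange(lo, mid) - lo) / max(mid - lo, 1)
        fbank[i, mid:hi] = (hi - np.arange(mid, hi)) / max(hi - mid, 1)
    log_energy = np.log(power @ fbank.T + 1e-10)
    # 4. DCT-II decorrelates the log filterbank energies -> cepstral coeffs
    n = np.arange(n_filters)
    basis = np.cos(np.pi * np.arange(n_ceps)[:, None] * (2 * n + 1)
                   / (2 * n_filters))
    return log_energy @ basis.T  # shape: (n_frames, n_ceps)
```

The resulting per-frame coefficient vectors are what an SVM would then be trained on to separate density classes.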
Procedia PDF Downloads 163
820 An Approach for Ensuring Data Flow in Freight Delivery and Management Systems
Authors: Aurelija Burinskienė, Dalė Dzemydienė, Arūnas Miliauskas
Abstract:
This research aims at developing an approach for more effective freight delivery and transportation process management. Road congestion and the identification of its causes are important, as are context information recognition and management. The measurement of many parameters during the transportation period and proper control of driver work have become a problem. The number of vehicles per time unit passing a given point at a given time can be evaluated for drivers in some situations. The collected data are mainly used to establish new trips. The flow of data is more complex in urban areas. Here, the movement of freight is reported in detail, including information at the street level. When traffic density is extremely high, as in congestion, and traffic speed is very low, data transmission reaches its peak. Different data sets are generated, depending on the type of freight delivery network. There are three types of networks: long-distance delivery networks, last-mile delivery networks, and mode-based delivery networks; the last includes different modes, in particular railways and other networks. When freight delivery is switched from one of the above network types to another, more data could be included for reporting purposes, and vice versa. In this case, a significant amount of these data is used for control operations, and the problem requires an integrated methodological approach. The paper presents an approach for providing e-services for drivers that includes assessment of the multi-component infrastructure needed for freight delivery according to the network type. The construction of such a methodology is required to evaluate data flow conditions and overloads, and to minimize the time gaps in data reporting.
The results obtained show the potential of the proposed methodological approach to support management and decision-making processes, with functionality for incorporating network specifics, by helping to minimize overloads in data reporting.
Keywords: transportation networks, freight delivery, data flow, monitoring, e-services
Procedia PDF Downloads 129
819 Competitive DNA Calibrators as Quality Reference Standards (QRS™) for Germline and Somatic Copy Number Variations/Variant Allelic Frequencies Analyses
Authors: Eirini Konstanta, Cedric Gouedard, Aggeliki Delimitsou, Stefania Patera, Samuel Murray
Abstract:
Introduction: Quality reference DNA standards (QRS) for molecular testing by next-generation sequencing (NGS) are essential for accurate quantitation of copy number variations (CNV) in germline analyses and of variant allelic frequencies (VAF) in somatic analyses. Objectives: Presently, several molecular analytics for oncology patients rely upon quantitative metrics. Test validation and standardisation are likewise reliant upon the availability of surrogate control materials that allow test LOD (limit of detection), sensitivity, and specificity to be understood. We have developed a dual calibration platform that allows QRS pairs to be included in analysed DNA samples, enabling accurate quantitation of CNV and VAF metrics within and between patient samples. Methods: QRS™ blocks of up to 500 nt were designed for common NGS panel targets, incorporating ≥ 2 identification tags (IDTDNA.com). These were analysed upon spiking into gDNA, somatic, and ctDNA samples using a proprietary CalSuite™ platform adaptable to common LIMS. Results: We demonstrate QRS™ calibration reproducibility spiked at 5-25% to within ± 2.5% in gDNA and ctDNA. Furthermore, we demonstrate CNV and VAF within and between samples (gDNA and ctDNA) with the same reproducibility (± 2.5%) in clinical samples of lung cancer and HBOC (EGFR and BRCA1, respectively). CNV analytics were performed with similar accuracy using a single pair of QRS calibrators as when using multiple single-target sequencing controls. Conclusion: Dual paired QRS™ calibrators allow accurate and reproducible quantitative analyses of CNV, VAF, and intrinsic sample allele measurement, both inter- and intra-sample, not only simplifying NGS analytics but also allowing clinically relevant biomarker VAF to be monitored across patient ctDNA samples with improved accuracy.
Keywords: calibrator, CNV, gene copy number, VAF
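To make the calibration idea concrete, the toy function below corrects a raw VAF by the recovery observed for a spiked-in calibrator of known fraction. This is a deliberately simplified linear model for illustration only; it is not the proprietary QRS™/CalSuite™ algorithm, and all numbers are invented.

```python
def corrected_vaf(variant_reads, total_reads, cal_reads, cal_total,
                  cal_known_frac):
    """Estimate a variant allelic frequency corrected by a spiked-in
    calibrator of known fraction (hypothetical linear recovery model)."""
    observed_cal = cal_reads / cal_total      # fraction the assay reports
    recovery = observed_cal / cal_known_frac  # assay bias factor
    raw_vaf = variant_reads / total_reads
    return raw_vaf / recovery
```

For example, if a calibrator spiked at 10% is observed at 8% (recovery 0.8), a raw VAF of 4% would be corrected to 5%.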
Procedia PDF Downloads 155
818 Treatment and Diagnostic Imaging Methods of Fetal Heart Function in Radiology
Authors: Mahdi Farajzadeh Ajirlou
Abstract:
Prior evidence of normal cardiac anatomy is desirable to relieve the anxiety of cases with a family history of congenital heart disease, or to offer the option of early termination of gestation or close follow-up should a cardiac anomaly be proved. Fetal heart assessment plays an important part in the evaluation of the fetus, and it can reflect fetal heart function, which is regulated by the central nervous system. Acquisition of ventricular volume and inflow data would be useful to further quantify valve regurgitation and ventricular function, and to determine the degree of cardiovascular compromise in fetal conditions at risk for hydrops fetalis. This study discusses imaging the fetal heart with transvaginal ultrasound, Doppler ultrasound, three-dimensional ultrasound (3DUS) and four-dimensional (4D) ultrasound, spatiotemporal image correlation (STIC), magnetic resonance imaging, and cardiac catheterization. Doppler ultrasound (DUS) imaging is a kind of real-time imaging with a better imaging effect on blood vessels and soft tissues. DUS imaging can show the shape of the fetus, but it cannot show whether the fetus is hypoxic or distressed. Spatiotemporal image correlation (STIC) enables the acquisition of a volume of data concurrent with the beating heart. The automated volume acquisition is made possible by the array in the transducer performing a slow single sweep, recording a single 3D data set corresponding to numerous 2D frames one behind the other. The volume acquisition can be done as a static 3D, as online 4D (direct volume scan, live 3D ultrasound, or so-called 4D (3D/4D)), or as spatiotemporal image correlation, STIC (offline 4D, which is a circular volume sweep). Fetal cardiovascular MRI would appear to be an ideal approach to the noninvasive investigation of the impact of abnormal cardiovascular hemodynamics on antenatal brain growth and development.
Still, there are practical limitations to the use of conventional MRI for fetal cardiovascular assessment, including the small size and high heart rate of the human fetus, the lack of conventional cardiac gating methods to synchronize data acquisition, and the potential corruption of MRI data due to maternal respiration and unpredictable fetal movements. Fetal cardiac MRI has the potential to complement ultrasound in detecting cardiovascular malformations and extracardiac lesions. Fetal cardiac intervention (FCI), minimally invasive catheter intervention, is a new and evolving technique that allows in-utero treatment of a subset of severe forms of congenital heart disease. In special cases, it may be possible to modify the natural history of congenital heart disorders. It is entirely possible that future generations will 'repair' congenital heart disease in utero using nanotechnologies or remote computer-guided micro-robots that work at the cellular level.
Keywords: fetal, cardiac MRI, ultrasound, 3D, 4D, heart disease, invasive, noninvasive, catheter
Procedia PDF Downloads 44
817 Intrusion Detection in SCADA Systems
Authors: Leandros A. Maglaras, Jianmin Jiang
Abstract:
The protection of national infrastructures from cyberattacks is one of the main issues for national and international security. The funded European Framework-7 (FP7) research project CockpitCI introduces intelligent intrusion detection, analysis, and protection techniques for Critical Infrastructures (CI). The paradox is that CIs massively rely on the newest interconnected, and vulnerable, Information and Communication Technology (ICT), whilst the control equipment, legacy software/hardware, is typically old. Such a combination of factors may lead to very dangerous situations, exposing systems to a wide variety of attacks. To overcome such threats, the CockpitCI project combines machine learning techniques with ICT technologies to produce advanced intrusion detection, analysis, and reaction tools that provide intelligence to field equipment. This allows the field equipment to make local decisions in order to self-identify and self-react to abnormal situations introduced by cyberattacks. In this paper, an intrusion detection module capable of detecting malicious network traffic in a Supervisory Control and Data Acquisition (SCADA) system is presented. Malicious data in a SCADA system disrupt its correct functioning and tamper with its normal operation. The One-Class Support Vector Machine (OCSVM) is an intrusion detection mechanism that does not need any labeled data for training or any information about the kind of anomaly expected for the detection process. This feature makes it ideal for processing SCADA environment data and automating SCADA performance monitoring. The OCSVM module developed is trained offline on network traces and detects anomalies in the system in real time.
The module is part of an IDS (intrusion detection system) developed under the CockpitCI project and communicates with the other parts of the system through the exchange of IDMEF messages, which carry information about the source of the incident, the time, and a classification of the alarm.
Keywords: cyber-security, SCADA systems, OCSVM, intrusion detection
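As an illustration of the one-class approach, the sketch below trains scikit-learn's OneClassSVM on synthetic "normal traffic" feature vectors only and flags deviating samples. The two features and all values are invented stand-ins for real SCADA trace features; this is not the CockpitCI module.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
# Stand-in features extracted from normal SCADA network traces,
# e.g. (packets per second, mean payload size) -- purely illustrative.
normal_traffic = rng.normal(loc=[10.0, 120.0], scale=[1.0, 8.0], size=(500, 2))

# Train on normal traffic only: no attack labels, no assumed anomaly type
detector = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(normal_traffic)

def is_intrusion(sample):
    """OneClassSVM.predict returns +1 for inliers and -1 for outliers."""
    return bool(detector.predict([sample])[0] == -1)
```

In the paper's setting the model is fitted offline on recorded traces and then applied to live traffic, with alarms forwarded as IDMEF messages.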
Procedia PDF Downloads 556
816 Role of Human Epididymis Protein 4 as a Biomarker in the Diagnosis of Ovarian Cancer
Authors: Amar Ranjan, Julieana Durai, Pranay Tanwar
Abstract:
Background & Introduction: Ovarian cancer is one of the most common malignant tumors in females. 70% of cases of ovarian cancer are diagnosed at an advanced stage, and the five-year survival rate associated with ovarian cancer is less than 30%, so early diagnosis is a key factor in improving the survival rate of patients. Presently, CA125 (carbohydrate antigen 125) is used for the diagnosis and therapeutic monitoring of ovarian cancer, but its sensitivity and specificity are not ideal. The introduction of HE4, human epididymis protein 4, has attracted much attention. HE4 has a sensitivity of 72.9% and a specificity of 95% for differentiating between benign and malignant adnexal masses, which is better than CA125 detection. Methods: Serum HE4 and CA125 were estimated using the chemiluminescence method. Our cases comprised 40 epithelial ovarian cancers, 9 benign ovarian tumors, 29 benign gynaecological diseases, and 13 healthy individuals. The healthy group included women who had undergone family-planning or menopause-related medical consultations and were negative for an ovarian mass. Optimal cut-off values for HE4 and CA125 were 55.89 pmol/L and 40.25 U/L, respectively (determined by statistical analysis). Results: The level of HE4 was raised in all ovarian cancer patients (n = 40), whereas CA125 levels were normal in 6/40 ovarian cancer patients, all cases of OC confirmed by histopathology. There was a significant decrease in the level of HE4 in comparison to CA125 in benign ovarian tumor cases. Both HE4 and CA125 levels were raised in the non-ovarian cancer group, which includes cancers of the endometrium and cervix. In the healthy group, HE4 was normal in all patients except one case of a rudimentary horn, where the raised HE4 level is attributed to incomplete development of the uterus, whereas CA125 was raised in 3 cases.
Conclusions: The findings showed that the serum level of HE4 is an important indicator in the diagnosis of ovarian cancer and that it also distinguishes between benign and malignant pelvic masses. However, a combined HE4 and CA125 panel will be extremely valuable in improving the diagnostic efficiency of ovarian cancer. These findings need to be validated in a larger cohort of patients.
Keywords: human epididymis protein 4, ovarian cancer, diagnosis, benign lesions
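The sensitivity and specificity figures quoted above come from a rule of the form "positive if the marker exceeds the cut-off". A minimal sketch of how such figures are computed, using invented marker values rather than the study data:

```python
def sens_spec(values_diseased, values_healthy, cutoff):
    """Sensitivity and specificity of a 'positive if value > cutoff' rule."""
    tp = sum(v > cutoff for v in values_diseased)   # true positives
    tn = sum(v <= cutoff for v in values_healthy)   # true negatives
    return tp / len(values_diseased), tn / len(values_healthy)
```

Sweeping the cut-off and picking the point that best balances the two rates (e.g. via a ROC curve) is the usual way optimal values such as 55.89 pmol/L are chosen.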
Procedia PDF Downloads 133
815 Toward Indoor and Outdoor Surveillance Using an Improved Fast Background Subtraction Algorithm
Authors: El Harraj Abdeslam, Raissouni Naoufal
Abstract:
The detection of moving objects from video image sequences is very important for object tracking, activity recognition, and behavior understanding in video surveillance. The most used approach for moving object detection and tracking is background subtraction. Many background subtraction approaches have been suggested, but they are sensitive to illumination changes, and the solutions proposed to bypass this problem are time-consuming. In this paper, we propose a robust yet computationally efficient background subtraction approach and focus mainly on the ability to detect moving objects in dynamic scenes, for possible applications in monitoring complex and restricted-access areas, where moving and motionless persons must be reliably detected. It consists of three main phases: handling illumination changes in variance, background/foreground modeling, and morphological analysis for noise removal. We handle illumination changes using Contrast Limited Adaptive Histogram Equalization (CLAHE), which limits the intensity of each pixel to a user-determined maximum. Thus, it mitigates the degradation due to scene illumination changes and improves the visibility of the video signal. Initially, the background and foreground images are extracted from the video sequence. Then, the background and foreground images are separately enhanced by applying CLAHE. In order to form multi-modal backgrounds, we model each channel of a pixel as a mixture of K Gaussians (K = 5) using a Gaussian Mixture Model (GMM). Finally, we post-process the resulting binary foreground mask using morphological erosion and dilation transformations to remove possible noise.
For the experimental test, we used a standard dataset to challenge the efficiency and accuracy of the proposed method on a diverse set of dynamic scenes.
Keywords: video surveillance, background subtraction, contrast limited adaptive histogram equalization, illumination invariance, object tracking, object detection, behavior understanding, dynamic scenes
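To illustrate the modeling phase, the sketch below maintains a per-pixel running Gaussian background, a single-mode simplification of the K = 5 GMM used in the paper; the CLAHE preprocessing and morphological cleanup are omitted, and the initial variance and thresholds are assumed values.

```python
import numpy as np

class RunningGaussianBackground:
    """Per-pixel running Gaussian background model: a pixel is foreground
    when it deviates from its mean by more than k standard deviations."""
    def __init__(self, first_frame, alpha=0.05, k=2.5):
        self.mean = first_frame.astype(float)
        self.var = np.full(first_frame.shape, 20.0)  # assumed initial variance
        self.alpha, self.k = alpha, k

    def apply(self, frame):
        frame = frame.astype(float)
        dist2 = (frame - self.mean) ** 2
        foreground = dist2 > (self.k ** 2) * self.var
        # update only background pixels so objects are not absorbed too fast
        a = np.where(foreground, 0.0, self.alpha)
        self.mean = (1 - a) * self.mean + a * frame
        self.var = (1 - a) * self.var + a * dist2
        return foreground
```

A full GMM keeps K such (mean, variance, weight) triples per pixel and matches each incoming value against them, which is what lets it absorb multi-modal backgrounds such as swaying vegetation.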
Procedia PDF Downloads 260
814 Safety Study of Intravenously Administered Human Cord Blood Stem Cells in the Treatment of Symptoms Related to Chronic Inflammation
Authors: Brian M. Mehling, Louis Quartararo, Marine Manvelyan, Paul Wang, Dong-Cheng Wu
Abstract:
Numerous investigations suggest that Mesenchymal Stem Cells (MSCs) in general represent a valuable tool for the therapy of symptoms related to chronic inflammatory diseases. The Blue Horizon Stem Cell Therapy Program is a leading provider of adult and children's stem cell therapies. Uniquely, we have safely and efficiently treated more than 600 patients, documenting each procedure. The purpose of our study is primarily to monitor the immune response in order to validate the safety of intravenous infusion of human umbilical cord blood derived MSCs (UC-MSCs), and secondly, to evaluate effects on biomarkers associated with chronic inflammation. Nine patients were treated for conditions associated with chronic inflammation and for the purpose of anti-aging, each given one intravenous infusion of UC-MSCs. Our study of blood test markers of the 9 patients with chronic inflammation before and within three months after MSC treatment demonstrates that there were no significant changes and that MSC treatment was safe for the patients. Analysis of different indicators of chronic inflammation and aging included in the initial, 24-hour, two-week, and three-month protocols showed that stem cell treatment was safe for the patients; there were no adverse reactions. Moreover, data from the follow-up protocols demonstrate significant improvement in energy level, hair and nail growth, and skin condition. Intravenously administered UC-MSCs were safe and effective in the improvement of symptoms related to chronic inflammation. Further close monitoring and the inclusion of more patients are necessary to fully characterize the advantages of UC-MSC application in the treatment of symptoms related to chronic inflammation.
Keywords: chronic inflammatory diseases, intravenous infusion, stem cell therapy, umbilical cord blood derived mesenchymal stem cells (UC-MSCs)
Procedia PDF Downloads 435
813 Verification and Proposal of Information Processing Model Using EEG-Based Brain Activity Monitoring
Authors: Toshitaka Higashino, Naoki Wakamiya
Abstract:
Human beings perform a task by perceiving information from outside, recognizing it, and responding to it. There have been various attempts to analyze and understand the internal processes behind the reaction to a given stimulus by conducting psychological experiments and analyses from multiple perspectives. Among these, we focused on the Model Human Processor (MHP). However, the MHP was built on psychological experiments, and thus its relation to brain activity has been unclear so far. To verify the validity of the MHP and propose our own model from the viewpoint of neuroscience, EEG (electroencephalography) measurements were performed during the experiments in this study. More specifically, experiments were first conducted in which Latin alphabet characters were used as visual stimuli. In addition to response time, ERPs (event-related potentials) such as N100 and P300 were measured by EEG. By comparing the cycle times predicted by the MHP with the latencies of the ERPs, it was found that N100, related to the perception of stimuli, appeared at the end of the perceptual processor. Furthermore, an additional experiment revealed that P300, related to decision making, appeared during the response decision process, not at its end. Second, these findings were confirmed by experiments using Japanese Hiragana characters, i.e., Japan's own phonetic symbols. Finally, Japanese Kanji characters were used as more complicated visual stimuli. A Kanji character usually has several readings and several meanings. Despite the difference, a reading-related task and a meaning-related task exhibited similar results, meaning that they involve similar information-processing processes in the brain. Based on those results, our model was proposed, reflecting response time and ERP latency. It consists of three processors: the perception processor, from the input of a stimulus to the appearance of N100; the cognitive processor, from N100 to P300; and the decision-action processor, from P300 to the response.
Using our model, an application system that reflects brain activity can be established.
Keywords: brain activity, EEG, information processing model, model human processor
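The proposed three-processor model amounts to a simple decomposition of a trial's timeline. The function below splits a measured response time at the N100 and P300 latencies; the millisecond values in the example are illustrative, not the study's measurements.

```python
def stage_durations(n100_ms, p300_ms, response_ms):
    """Split a trial into the three processor stages of the proposed model:
    perception (stimulus -> N100), cognition (N100 -> P300),
    decision-action (P300 -> response)."""
    return {
        "perception": n100_ms,
        "cognition": p300_ms - n100_ms,
        "decision_action": response_ms - p300_ms,
        "total": response_ms,
    }
```

Comparing these per-stage durations across stimulus types (Latin, Hiragana, Kanji) is how the model's processor boundaries can be checked against ERP latencies.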
Procedia PDF Downloads 102
812 Nutrient Content and Labelling Status of Pre-Packaged Beverages in Saudi Arabia
Authors: Ruyuf Y. Alnafisah, Nouf S. Alammari, Amani S. Alqahtani
Abstract:
Background: Beverage choice can have implications for the risk of non-communicable diseases. However, there is a lack of knowledge about the nutritional content of these beverages. This study aims to describe the nutrient content of pre-packaged beverages available in the Saudi market. Design: Data were collected from the Saudi Branded Food Database (SBFD). Nutrient content was standardized in terms of units and reference volumes to ensure consistency in the analysis. Results: A total of 1490 beverages were analyzed. The highest median levels of the majority of nutrients were found among dairy products: energy (68.4 (43-188) kcal/100 ml in milkshakes); protein (8.2 (0.5-8.2) g/100 ml in yogurt drinks); total fat (2.1 (1.3-3.5) g/100 ml in milk); saturated fat (1.4 (0-1.4) g/100 ml in yogurt drinks); cholesterol (30 (0-30) mg/100 ml in yogurt drinks); sodium (65 (65-65.4) mg/100 ml in yogurt drinks); and total sugars (12.9 (7.5-27) g/100 ml in milkshakes). The carbohydrate level was highest in nectars (13 (11.8-14.2) g/100 ml), fruit drinks (12.9 (11.9-13.9) g/100 ml), and sparkling juices (12.9 (8.8-14) g/100 ml). The highest added sugar level was observed among regular soft drinks (12 (10.8-14) g/100 ml). The average rate of nutrient declaration was 60.95%. Carbohydrate had the highest declaration rate among nutrients (99.1%), and yogurt drinks had the highest declaration rate among beverage categories (92.7%). The median content of vitamins A and D in dairy products met the mandatory addition levels. Conclusion: This study provides valuable insights into the nutrient content of pre-packaged beverages in the Saudi market. It serves as a foundation for future research and monitoring. The findings of the study support the idea of taxing sugary beverages and raise concerns about the health effects of high sugar content in fruit juices.
Despite the inclusion of vitamins D and A in dairy products, the study highlights the need for alternative strategies to address these deficiencies.
Keywords: pre-packaged beverages, nutrients content, nutrients declaration, daily percentage value, mandatory addition of vitamins
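The per-category figures quoted above follow a median (min-max) pattern. A minimal sketch of how such summaries can be computed, using hypothetical per-100-ml sugar values rather than the actual SBFD records:

```python
from statistics import median

# Hypothetical total-sugar values (g/100 ml) per beverage category,
# standing in for the standardized SBFD records.
records = {
    "milkshake":  [7.5, 12.9, 27.0],
    "nectar":     [11.8, 13.0, 14.2],
    "soft drink": [10.8, 12.0, 14.0],
}

def summarise(values):
    """Return (median, min, max), the pattern quoted in the abstract."""
    return median(values), min(values), max(values)

for category, sugars in records.items():
    med, lo, hi = summarise(sugars)
    print(f"{category}: {med} ({lo}-{hi}) g/100 ml")
```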
Procedia PDF Downloads 59
811 Maintenance Performance Measurement Derived Optimization: A Case Study
Authors: James M. Wakiru, Liliane Pintelon, Peter Muchiri, Stanley Mburu
Abstract:
Maintenance performance measurement (MPM) is an integrated approach that considers both operational and maintenance-related aspects while evaluating the effectiveness and efficiency of maintenance, to ensure assets are working as they should. Three salient issues must be addressed for an asset-intensive organization to employ an MPM-based framework to optimize maintenance. Firstly, the organization should establish the important performance metric(s), in this case the maintenance objective(s), on which it will focus. The second issue entails aligning the maintenance objective(s) with maintenance optimization. This is achieved by deriving maintenance performance indicators that subsequently form an objective function for the optimization program. Lastly, the objective function is employed in an optimization program to derive maintenance decision support. In this study, we develop a framework that first identifies the crucial maintenance performance measures and then employs them to derive maintenance decision support. The proposed framework is demonstrated in a case study of a geothermal drilling rig, where the objective function is evaluated using a simulation-based model whose parameters are derived from empirical maintenance data. Availability, reliability, and maintenance inventory emerge as essential objectives requiring further attention. A simulation model is developed mimicking drilling rig operations and maintenance, in which the sub-systems undergo imperfect corrective (CM) and preventive (PM) maintenance, with the total cost as the primary performance measure. Moreover, three maintenance spare inventory policies are considered: classical (retaining stocks for a contractual period), vendor-managed inventory with consignment stock, and a periodic-monitoring order-to-stock (s, S) policy. 
Optimization results indicate that adopting the (s, S) inventory policy, increasing the PM interval, and reducing reliance on CM actions offer improved availability and reduced total costs.
Keywords: maintenance, vendor-managed, decision support, performance, optimization
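The periodic-review (s, S) policy mentioned above can be sketched in a few lines. The demand process and the immediate-replenishment assumption here are simplifications for illustration, not the study's simulation model:

```python
def simulate_sS(s, S, horizon, demand, initial_stock, review_period=1):
    """Periodic-review (s, S) policy: at each review, if stock <= s, order up to S.
    Returns (final stock, total units ordered, total unmet demand)."""
    stock, orders, stockouts = initial_stock, 0, 0
    for t in range(horizon):
        d = demand(t)
        if stock >= d:
            stock -= d
        else:
            stockouts += d - stock
            stock = 0
        if t % review_period == 0 and stock <= s:
            orders += S - stock   # order quantity that raises stock to S
            stock = S             # assume immediate replenishment in this sketch
    return stock, orders, stockouts

# Deterministic demand of one spare per period keeps the sketch reproducible.
final, ordered, missed = simulate_sS(s=2, S=5, horizon=10,
                                     demand=lambda t: 1, initial_stock=5)
```

In a cost comparison of the three policies, `demand` would be replaced by a stochastic failure process and a lead time added.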
Procedia PDF Downloads 127
810 Revolutionizing Project Management: A Comprehensive Review of Artificial Intelligence and Machine Learning Applications for Smarter Project Execution
Authors: Wenzheng Fu, Yue Fu, Zhijiang Dong, Yujian Fu
Abstract:
The integration of artificial intelligence (AI) and machine learning (ML) into project management is transforming how engineering projects are executed, monitored, and controlled. This paper provides a comprehensive survey of AI and ML applications in project management, systematically categorizing their use in key areas such as project data analytics, monitoring, tracking, scheduling, and reporting. As project management becomes increasingly data-driven, AI and ML offer powerful tools for improving decision-making, optimizing resource allocation, and predicting risks, leading to enhanced project outcomes. The review highlights recent research that demonstrates the ability of AI and ML to automate routine tasks, provide predictive insights, and support dynamic decision-making, which in turn increases project efficiency and reduces the likelihood of costly delays. This paper also examines the emerging trends and future opportunities in AI-driven project management, such as the growing emphasis on transparency, ethical governance, and data privacy concerns. The research suggests that AI and ML will continue to shape the future of project management by driving further automation and offering intelligent solutions for real-time project control. Additionally, the review underscores the need for ongoing innovation and the development of governance frameworks to ensure responsible AI deployment in project management. The significance of this review lies in its comprehensive analysis of AI and ML’s current contributions to project management, providing valuable insights for both researchers and practitioners. 
By offering a structured overview of AI applications across various project phases, this paper serves as a guide for the adoption of intelligent systems, helping organizations achieve greater efficiency, adaptability, and resilience in an increasingly complex project management landscape.
Keywords: artificial intelligence, decision support systems, machine learning, project management, resource optimization, risk prediction
Procedia PDF Downloads 25
809 Inpatient Neonatal Deaths in Rural Uganda: A Retrospective Comparative Mortality Study of Labour Ward versus Community Admissions
Authors: Najade Sheriff, Malaz Elsaddig, Kevin Jones
Abstract:
Background: Death in the first month of life accounts for an increasing proportion of under-five mortality. Progress in reducing this number is being made across the globe; however, it is slowest in sub-Saharan Africa. Objectives: The study aims to identify differences between neonatal deaths of inpatient babies born in a hospital facility in rural Uganda and those of neonates admitted from the community, and to explore whether these differences can be used to risk-stratify neonatal admissions. Results: A retrospective chart review was conducted on records of neonates admitted to the Special Care Baby Unit (SCBU) of Kitovu Hospital from 1st July 2016 to 21st July 2017. A total of 442 babies were admitted, and the overall neonatal mortality was 24.8% (40% inpatient, 37% community, 23% hospital referrals). 40% of deaths occurred within 24 hours of admission, and the majority were male (63%). 43% of babies were hypothermic upon admission, a significantly greater proportion of whom were inpatient babies born on the labour ward (P=0.0025). Intrapartum-related death accounted for half of all inpatient deaths, whereas complications of prematurity were the predominant cause of death in the community group (37%). Severe infection did not appear to be a significant cause of mortality for inpatients (2%), in contrast to community admissions (29%). Furthermore, with 52.5% of community admissions weighing < 1500 g, very low birth weight (VLBW) may be a significant risk factor for community neonatal death. Conclusion: The neonatal mortality rate in this study is high, and the leading causes of death are all largely preventable. A high rate of inpatient birth asphyxia indicates the need for good-quality facility-based perinatal care as well as a greater focus on the management of hypothermia, such as Kangaroo care. 
Moreover, a reduction in preterm deliveries is necessary to reduce associated comorbidities, and monitoring for signs of infection is especially important for community admissions.
Keywords: community, mortality, newborn, Uganda
Procedia PDF Downloads 193
808 Urban Corridor Management Strategy Based on Intelligent Transportation System
Authors: Sourabh Jain, Sukhvir Singh Jain, Gaurav V. Jain
Abstract:
Intelligent Transportation System (ITS) is the application of technology for developing a user-friendly transportation system for urban areas in developing countries. The goal of urban corridor management using ITS in road transport is to achieve improvements in mobility, safety, and the productivity of the transportation system within the available facilities, through the integrated application of advanced monitoring, communications, computer, display, and control process technologies, both in the vehicle and on the road. This paper presents past studies on ITS deployments that have been successful in urban corridors in India and abroad, and reviews the current scenario and the methodology considered for the planning, design, and operation of traffic management systems. It also presents the effort made to interpret and assess the performance of the 27.4 km long study corridor, which has eight intersections and four flyovers and consists of divided road sections with six and eight lanes. Two categories of data were collected in February 2016: traffic data (traffic volume, spot speed, delay) and road characteristics data (number of lanes, lane width, bus stops, mid-block sections, intersections, flyovers). The instruments used for collecting the data were a video camera, radar gun, mobile GPS, and stopwatch. The performance interpretations from the analysis included identification of peak and off-peak hours, congestion and level of service (LOS) at mid-blocks, and delay, followed by plotting speed contours and recommending urban corridor management strategies. From the analysis, it is found that ITS-based urban corridor management strategies will be useful to reduce congestion, fuel consumption and pollution so as to provide comfort and efficiency to the users. 
The paper presents urban corridor management strategies based on sensors incorporated both in vehicles and on the roads.
Keywords: congestion, ITS strategies, mobility, safety
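The delay and LOS measures discussed above can be illustrated with a small sketch. Delay is the observed travel time minus the free-flow travel time; the LOS thresholds on the volume/capacity ratio below are illustrative assumptions, not the values used in the study:

```python
def segment_delay(length_km, observed_speed_kmh, free_flow_speed_kmh):
    """Delay in seconds: observed travel time minus free-flow travel time."""
    observed = length_km / observed_speed_kmh * 3600
    free_flow = length_km / free_flow_speed_kmh * 3600
    return observed - free_flow

def level_of_service(v_over_c):
    """Map a volume/capacity ratio to an LOS letter grade.
    Threshold values here are assumptions for the sketch."""
    for los, limit in [("A", 0.3), ("B", 0.5), ("C", 0.7), ("D", 0.85), ("E", 1.0)]:
        if v_over_c <= limit:
            return los
    return "F"

# A 1 km mid-block crawled at 20 km/h against a 60 km/h free-flow speed.
delay_s = segment_delay(1.0, observed_speed_kmh=20, free_flow_speed_kmh=60)
```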
Procedia PDF Downloads 445
807 Engaging the Terrorism Problematique in Africa: Discursive and Non-Discursive Approaches to Counter Terrorism
Authors: Cecil Blake, Tolu Kayode-Adedeji, Innocent Chiluwa, Charles Iruonagbe
Abstract:
National, regional and international security threats have dominated the twenty-first century thus far. Insurgencies that utilize “terrorism” as their primary strategy pose the most serious threat to global security. States in turn adopt terrorist strategies to resist and even defeat insurgents who invoke the legitimacy of statehood to justify their actions. In short, the era is dominated by the use of terror tactics by state and non-state actors. Globally, there is a powerful network of groups involved in insurgencies using Islam as the bastion for their cause. In Africa, Boko Haram, Al Shabaab and Al Qaeda in the Maghreb represent Islamic groups utilizing terror strategies and tactics to prosecute their wars. The task at hand is to discover and use multiple ways of handling the present security threats, including novel approaches to policy formulation, implementation, monitoring and evaluation that pay significant attention to the important role of culture and communication strategies germane to discursive means of conflict resolution. In order to achieve this, the proposed research addresses, inter alia, the root causes of insurgencies that predicate their mission on Islamic tenets, particularly in Africa; discursive and non-discursive counter-terrorism approaches fashioned by African governments and continental supra-national and regional organizations; recruitment strategies of major non-state actors in Africa that rely solely on terrorist strategies and tactics; and the sources of financing of the groups under study. A major anticipated outcome of this research is a contribution to answers that would lead to the much-needed stability required for development in African countries experiencing insurgencies carried out through patterned terror strategies and tactics. The nature of the research requires the use of triangulation as the methodological tool.
Keywords: counter-terrorism, discourse, Nigeria, security, terrorism
Procedia PDF Downloads 487
806 Simultaneous Determination of Methotrexate and Aspirin Using Fourier Transform Convolution Emission Data under Non-Parametric Linear Regression Method
Authors: Marwa A. A. Ragab, Hadir M. Maher, Eman I. El-Kimary
Abstract:
Co-administration of methotrexate (MTX) and aspirin (ASP) can cause a pharmacokinetic interaction and a subsequent increase in blood MTX concentrations, which may increase the risk of MTX toxicity. Therefore, it is important to develop a sensitive, selective, accurate and precise method for their simultaneous determination in urine. A new hybrid chemometric method has been applied to the emission response data of the two drugs. A spectrofluorimetric method for the determination of MTX through measurement of its acid-degradation product, 4-amino-4-deoxy-10-methylpteroic acid (4-AMP), was developed. Moreover, the acid-catalyzed degradation reaction enables the spectrofluorimetric determination of ASP through the formation of its active metabolite, salicylic acid (SA). The proposed chemometric method deals with convolution of the emission data using 8-point sin xi polynomials (discrete Fourier functions) after derivative treatment of these emission data. The first and second derivative curves (D1 & D2) were obtained first, then convolution of these curves was performed to obtain the first and second derivative under Fourier functions curves (D1/FF) and (D2/FF). This new application was used for the resolution of the overlapped emission bands of the degradation products of both drugs, to allow their simultaneous indirect determination in human urine. Not only was this chemometric approach applied to the emission data, but the obtained data were also subjected to non-parametric linear regression analysis (Theil’s method). The proposed method was fully validated according to the ICH guidelines, and it yielded the following linearity ranges: 0.05-0.75 and 0.5-2.5 µg mL-1 for MTX and ASP, respectively. It was found that the non-parametric method was superior to the parametric one in the simultaneous determination of MTX and ASP after the chemometric treatment of the emission spectra of their degradation products. 
The work combines the advantages of derivative and convolution using discrete Fourier functions with the reliability and efficacy of non-parametric data analysis. The achieved sensitivity, along with the low LOD (0.01 and 0.06 µg mL-1) and LOQ (0.04 and 0.2 µg mL-1) values for MTX and ASP, respectively, obtained by the second derivative under Fourier functions (D2/FF), is promising and supports its application for monitoring the two drugs in patients’ urine samples.
Keywords: chemometrics, emission curves, derivative, convolution, Fourier transform, human urine, non-parametric regression, Theil’s method
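Theil's non-parametric regression used above fits a line as the median of all pairwise slopes, which makes the calibration robust to an outlying emission reading. A minimal sketch with hypothetical calibration data, not the validated MTX/ASP data:

```python
from itertools import combinations
from statistics import median

def theil_sen(x, y):
    """Theil's non-parametric regression: slope is the median of all pairwise
    slopes; intercept is the median of the per-point offsets y_i - m*x_i."""
    slopes = [(y[j] - y[i]) / (x[j] - x[i])
              for i, j in combinations(range(len(x)), 2) if x[j] != x[i]]
    m = median(slopes)
    b = median(yi - m * xi for xi, yi in zip(x, y))
    return m, b

# Hypothetical calibration points (concentration in ug/mL vs. processed signal).
# The single outlier at x=0.45 barely moves the non-parametric fit.
conc   = [0.05, 0.15, 0.25, 0.35, 0.45, 0.55, 0.65, 0.75]
signal = [0.11, 0.31, 0.51, 0.71, 1.60, 1.11, 1.31, 1.51]
slope, intercept = theil_sen(conc, signal)
```

An ordinary least-squares fit on the same data would be pulled noticeably toward the outlier; the median-based fit recovers the underlying line.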
Procedia PDF Downloads 431
805 Flow Reproduction Using Vortex Particle Methods for Wake Buffeting Analysis of Bluff Structures
Authors: Samir Chawdhury, Guido Morgenthal
Abstract:
The paper presents a novel extension of Vortex Particle Methods (VPM) in which the study aims to reproduce a template simulation of the complex flow field generated by impulsively started flow past an upstream bluff body at a certain Reynolds number Re. Vibration of a structural system under upstream wake flow is often considered its governing design criterion; therefore, attention is given in this study especially to the reproduction of the wake flow simulation. The basic methodology for the flow reproduction requires downstream velocity sampling from the template flow simulation. At particular distances from the upstream section, the instantaneous velocity components are sampled using a series of square sampling cells arranged vertically, where each cell contains four velocity sampling points at its corners. Since the grid-free Lagrangian VPM algorithm discretises vorticity on particle elements, the method requires transformation of the velocity components into vortex circulation, and finally the template flow field is reproduced by seeding these vortex circulations, or particles, into a free-stream flow. It is noteworthy that the vortex particles have to be released into the free stream at exactly the same rate as the velocity sampling. Studies have been carried out, specifically, on different sampling rates and velocity sampling positions to find their effects on flow reproduction quality. The quality assessments are mainly done, using a downstream flow monitoring profile, by comparing the characteristic wind flow profiles using several statistical turbulence measures. Additionally, the comparisons are performed using velocity time histories, snapshots of the flow fields, and the vibration of a downstream bluff section by performing wake buffeting analyses of the section under the original and reproduced wake flows. A convergence study is performed to validate the method. 
The study also describes how flow reproduction can be achieved with less computational effort.
Keywords: vortex particle method, wake flow, flow reproduction, wake buffeting analysis
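The transformation of sampled corner velocities into vortex circulation can be sketched as a discrete line integral around each square sampling cell. This is a generic second-order (trapezoidal) approximation of the circulation, not the paper's exact scheme:

```python
def cell_circulation(corners, h):
    """Circulation around a square sampling cell of side h from the four
    corner velocities (u, v): counter-clockwise line integral of u . dl,
    with the average tangential velocity on each edge.
    corners: (u, v) at bottom-left, bottom-right, top-right, top-left."""
    (u_bl, v_bl), (u_br, v_br), (u_tr, v_tr), (u_tl, v_tl) = corners
    bottom = 0.5 * (u_bl + u_br) * h    # traverse in +x
    right  = 0.5 * (v_br + v_tr) * h    # traverse in +y
    top    = -0.5 * (u_tr + u_tl) * h   # traverse in -x
    left   = -0.5 * (v_tl + v_bl) * h   # traverse in -y
    return bottom + right + top + left

# Sanity check with a solid-body-like field u = -y, v = x (vorticity = 2):
# a cell of area h*h should carry circulation ~ 2*h*h.
h = 0.1
corners = [(0.0, 0.0), (0.0, 0.1), (-0.1, 0.1), (-0.1, 0.0)]  # (u, v) = (-y, x)
gamma = cell_circulation(corners, h)
```

Each such circulation value would then be assigned to a vortex particle seeded into the free stream at the sampling rate.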
Procedia PDF Downloads 314
804 Development of an Atmospheric Radioxenon Detection System for Nuclear Explosion Monitoring
Authors: V. Thomas, O. Delaune, W. Hennig, S. Hoover
Abstract:
Measurement of radioactive isotopes of atmospheric xenon is used to detect, locate and identify any confined nuclear test as part of the Comprehensive Nuclear-Test-Ban Treaty (CTBT). In this context, the French Alternative Energies and Atomic Energy Commission (CEA) has developed a fixed device, the SPALAX process, to continuously measure the concentration of these fission products. During atmospheric transport, the radioactive xenon undergoes significant dilution between the source point and the measurement station. Given the distances between the fixed stations located all over the globe, the typical activity concentrations measured are near 1 mBq m⁻³. To avoid the constraints induced by atmospheric dilution, the development of a mobile detection system is in progress; this system will allow on-site measurements in order to confirm or refute a suspicious measurement detected by a fixed station. Furthermore, this system will use the beta/gamma coincidence measurement technique in order to drastically reduce the environmental background (which masks such activities). The detector prototype consists of a gas cell surrounded by two large silicon wafers, coupled with two square NaI(Tl) detectors. The gas cell has a sample volume of 30 cm³, and the silicon wafers are 500 µm thick with an active surface area of 3600 mm². In order to minimize leakage current, each wafer has been segmented into four independent silicon pixels. The cell is sandwiched between two low-background NaI(Tl) detectors (70x70x40 mm³ crystals). The expected Minimum Detectable Concentration (MDC) for each radioxenon is on the order of 1-10 mBq m⁻³. Three 4-channel digital acquisition modules (Pixie-NET) are used to process all the signals. Time synchronization is ensured by a dedicated PTP network, using the IEEE 1588 Precision Time Protocol. 
We would like to present this system from its simulation to the laboratory tests.
Keywords: beta/gamma coincidence technique, low level measurement, radioxenon, silicon pixels
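The beta/gamma coincidence technique keeps only events seen in both detector channels within a short time window, which is what suppresses the uncorrelated environmental background. A minimal sketch of such matching on synchronized timestamps; the window and the event times are illustrative, not instrument data:

```python
def coincidences(beta_times, gamma_times, window):
    """Pair each beta event with a gamma event within +/- window seconds.
    Both lists must be sorted; a simple two-pointer sweep. (A gamma event may
    match more than one beta in this sketch, which is fine for illustration.)"""
    pairs, j = [], 0
    for tb in beta_times:
        # Skip gamma events that are already too old for this beta.
        while j < len(gamma_times) and gamma_times[j] < tb - window:
            j += 1
        if j < len(gamma_times) and abs(gamma_times[j] - tb) <= window:
            pairs.append((tb, gamma_times[j]))
    return pairs

# Hypothetical PTP-synchronized timestamps (seconds); with a 1 us window,
# only the two true coincidences survive.
beta  = [1.000000, 2.000000, 3.500000]
gamma = [1.0000004, 2.7000000, 3.5000008]
matched = coincidences(beta, gamma, window=1e-6)
```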
Procedia PDF Downloads 127
803 The Effectiveness of Group Logo Therapy on Meaning and Quality of Life of Women in Old Age Home
Authors: Sophia Cyril Vincent
Abstract:
Background: As per the Indian Census 2011, there are nearly 104 million elderly people aged above 60 years in India (53 million females and 51 million males), and the count is expected to reach 173 million by the end of 2026. Nearly 5.5% of women and 1.5% of men live alone [1]. In India, even though it is the moral duty of children to take care of aged parents, many elders end up in old age homes due to social transformation factors such as the mushrooming of nuclear families, migration of children, cultural echoes, and differences in mindset and values. Nearly 728 old age homes exist across the country, of which 78, with approximately 3000 residents, are in Bangalore alone [2]. The existing literature shows that elderly women residing in old age homes experience challenges such as loneliness, health issues, rejection by children, grief, and death anxiety, which affect their mental and physical wellbeing in numerous and tangible ways [3]. Hence, a promising and cost-effective way to improve the meaning and quality of life among elderly females is logotherapy, a type of psychotherapeutic analysis and treatment that engages the motivating and driving force [4] within the human experience to lead a decent life. Aim: The current research studies the effectiveness of a logotherapy intervention on meaning and quality of life among elderly women in old age homes. Samples: 200 women aged above 60 years who had stayed in an old age home for more than 1 year were randomly allocated to a control group and an experimental group. Methodology: Using the Meaning in Life Questionnaire (MLQ) and the World Health Organization Quality of Life (WHOQOL) questionnaire, meaning and quality of life were assessed in both groups. A five-day intensive logotherapy and meaning-in-life program was provided to the experimental group, while the control group received no treatment. Result: Under analysis. 
Conclusion: It is the right of an elderly woman to lead a happy and peaceful life until her death, irrespective of where she resides. Hence, continuous monitoring and effective management are necessary for elderly women.
Keywords: quality of life, meaning of life, logo therapy, old age home
Procedia PDF Downloads 206
802 Infection Risk of Fecal Coliform Contamination in Drinking Water Sources of Urban Slum Dwellers: Application of Quantitative Microbiological Risk Assessment
Authors: Sri Yusnita Irda Sari, Deni Kurniadi Sunjaya, Ardini Saptaningsih Raksanagara
Abstract:
Water is one of the fundamental basic needs of human life, particularly drinking water. Although water quality is improving, fecal contamination of water is still found around the world, especially in slum areas of low- and middle-income countries. Contamination of drinking water sources among urban slum dwellers increases the risk of waterborne diseases. A low level of sanitation and poor drinking water supply are known risk factors for diarrhea; moreover, bacteria-contaminated drinking water sources are the main cause of diarrhea in developing countries. This study aimed to assess the risk of infection due to fecal coliform contamination in various drinking water sources in an urban area by applying Quantitative Microbiological Risk Assessment (QMRA). A cross-sectional survey was conducted from August to October 2015. Water samples were taken by simple random sampling from households in the Cikapundung river basin, an urban slum area in the center of Bandung city, Indonesia. A total of 379 water samples from 199 households and 15 common wells were tested. Half of the households used treated drinking water from water gallons, mostly refill water gallons produced in drinking water refill stations. The others used raw water sources which need treatment before being consumed as drinking water, such as tap water, boreholes, dug wells, and spring water. The annual risk of infection due to fecal coliform contamination, from highest to lowest, was: dug wells (1127.9 x 10-5), spring water (49.7 x 10-5), boreholes (1.383 x 10-5), and tap water (1.121 x 10-5). The annual risk of infection for refill drinking water was 1.577 x 10-5, which is comparable to boreholes and tap water. Household water treatment and storage to make raw water sources drinkable are essential to prevent the risk of waterborne diseases. 
Strong regulation and intense monitoring of refill water gallon quality should be prioritized by the government; moreover, distribution of tap water should be more accessible and affordable, especially in urban slum areas.
Keywords: drinking water, quantitative microbiological risk assessment, slum, urban
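Annual risk figures of the kind reported above are typically obtained in QMRA by compounding a daily dose-response probability over a year. The exponential dose-response model and all parameter values below are illustrative assumptions, not the survey's:

```python
import math

def annual_infection_risk(conc_per_litre, litres_per_day, r, days=365):
    """QMRA sketch with an exponential dose-response model:
    daily risk = 1 - exp(-r * dose), compounded over `days` exposures."""
    dose = conc_per_litre * litres_per_day          # organisms ingested per day
    p_day = 1.0 - math.exp(-r * dose)               # probability of infection per day
    return 1.0 - (1.0 - p_day) ** days              # probability over the year

# Illustrative values only: the dose-response parameter r and the 2 L/day
# intake are assumptions for a hypothetical reference pathogen.
risk = annual_infection_risk(conc_per_litre=0.01, litres_per_day=2.0, r=0.005)
```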
Procedia PDF Downloads 282
801 Lake Water Surface Variations and Its Influencing Factors in Tibetan Plateau in Recent 10 Years
Authors: Shanlong Lu, Jiming Jin, Xiaochun Wang
Abstract:
The Tibetan Plateau has the largest number of inland lakes at the highest elevation on the planet. These massive lakes are mostly in a natural state and are little affected by human activities. Their shrinking or expansion can truly reflect regional climate and environmental changes, and they are sensitive indicators of global climate change. However, because the plateau is sparsely populated and its natural conditions are harsh, it is difficult to obtain lake change data effectively, which has limited understanding of the temporal and spatial processes of lake water changes and their influencing factors. Using MODIS (Moderate Resolution Imaging Spectroradiometer) MOD09Q1 surface reflectance images as basic data, this study produced an 8-day lake water surface dataset of the Tibetan Plateau from 2000 to 2012 at 250 m spatial resolution, with an extraction method that combines lake water surface boundary buffer analysis with lake-by-lake determination of segmentation thresholds. Based on this dataset, the lake water surface variations and their influencing factors were analyzed, using four typical natural geographical zones (Eastern Qinghai and Qilian, Southern Qinghai, Qiangtang, and Southern Tibet) and the watersheds of the top 10 lakes (Qinghai, Siling Co, Namco, Zhari NamCo, Tangra Yumco, Ngoring, UlanUla, Yamdrok Tso, Har and Gyaring) as the analysis units. The accuracy analysis indicates that, compared with water surface data of 134 sample lakes extracted from 30 m Landsat TM (Thematic Mapper) images, the average overall accuracy of the dataset is 91.81%, with average commission and omission errors of 3.26% and 5.38%; the results also show a strong linear correlation (R2=0.9991) with the global MODIS water mask dataset, with an overall accuracy of 86.30%; and the lake area difference between the Second National Lake Survey and this study is only 4.74%. 
This study provides a reliable dataset for lake change research on the plateau in the recent decade. The trend and influencing-factor analyses indicate that the total water surface area of lakes in the plateau increased overall, but only lakes with areas larger than 10 km2 had statistically significant increases. Furthermore, lakes with areas larger than 100 km2 experienced an abrupt change in 2005. In addition, the annual average precipitation of Southern Tibet and Southern Qinghai experienced significant increasing and decreasing trends, with corresponding abrupt changes in 2004 and 2006, respectively. The annual average temperature of Southern Tibet and Qiangtang showed a significant increasing trend with an abrupt change in 2004. The major reason for the lake water surface variation in Eastern Qinghai and Qilian, Southern Qinghai, and Southern Tibet is the change in precipitation, while that for Qiangtang is the temperature variation.
Keywords: lake water surface variation, MODIS MOD09Q1, remote sensing, Tibetan Plateau
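Commission and omission errors of the kind reported above can be computed from per-lake pixel agreement counts against a reference map. The counts below are hypothetical, chosen only to illustrate the calculation:

```python
def map_accuracy(tp, fp, fn):
    """Commission error: fraction of mapped water that is not water in the
    reference. Omission error: fraction of reference water missed by the map.
    tp/fp/fn are pixel counts (true positive, false positive, false negative)."""
    commission = fp / (tp + fp)
    omission = fn / (tp + fn)
    return commission, omission

# Hypothetical pixel counts for one lake against a 30 m Landsat reference.
commission, omission = map_accuracy(tp=940, fp=32, fn=53)
```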
Procedia PDF Downloads 232
800 E4D-MP: Time-Lapse Multiphysics Simulation and Joint Inversion Toolset for Large-Scale Subsurface Imaging
Authors: Zhuanfang Fred Zhang, Tim C. Johnson, Yilin Fang, Chris E. Strickland
Abstract:
A variety of geophysical techniques are available to image the opaque subsurface with little or no contact with the soil. It is common to conduct time-lapse surveys of different types at a given site for improved subsurface imaging. Regardless of the chosen survey methods, it is often a challenge to process the massive amount of survey data. Currently available software applications are generally based on one-dimensional assumptions and designed for desktop personal computers. Hence, they are usually incapable of imaging three-dimensional (3D) processes/variables in the subsurface at reasonable spatial scales, and the maximum amount of data that can be inverted simultaneously is often very small due to the capability limitations of personal computers. High-performance, integrated software that enables real-time integration of multi-process geophysical methods is therefore needed. E4D-MP enables the integration and inversion of time-lapse, large-scale data surveys from geophysical methods. Using supercomputing capabilities and parallel computation algorithms, E4D-MP is capable of processing data across vast spatiotemporal scales and in near real time. The main code and the modules of E4D-MP for inverting individual or combined data sets of time-lapse 3D electrical resistivity, spectral induced polarization, and gravity surveys have been developed and demonstrated for subsurface imaging. E4D-MP provides the capability of imaging the processes (e.g., liquid or gas flow, solute transport, cavity development) and subsurface properties (e.g., rock/soil density, conductivity) critical for successful control of environmental engineering efforts such as environmental remediation, carbon sequestration, geothermal exploration, and mine land reclamation, among others.
Keywords: gravity survey, high-performance computing, sub-surface monitoring, electrical resistivity tomography
Procedia PDF Downloads 160
799 Energy Intensity: A Case of Indian Manufacturing Industries
Authors: Archana Soni, Arvind Mittal, Manmohan Kapshe
Abstract:
Energy has been recognized as one of the key inputs for the economic growth and social development of a country. High economic growth naturally means a high level of energy consumption. However, in the present energy scenario, where there is a wide gap between energy generation and energy consumption, it is extremely difficult to match demand with supply. As India is one of the largest and most rapidly growing developing countries, there is an impending energy crisis which requires immediate measures. In this situation, the concept of Energy Intensity comes under special focus to ensure energy security in an environmentally sustainable way. Energy Intensity is defined as the energy consumed per unit of output in the context of industrial energy practices. It is a key determinant of projections of future energy demand, which assists in policy making. Energy Intensity is inversely related to energy efficiency: the less energy required to produce a unit of output or service, the greater the energy efficiency. The Energy Intensity of Indian manufacturing industries is among the highest in the world and reflects enormous energy consumption. Hence, reducing the Energy Intensity of Indian manufacturing industries is one of the best strategies to achieve a low level of energy consumption and conserve energy. This study attempts to analyse the factors which influence the Energy Intensity of Indian manufacturing firms and how they can be used to reduce it. The paper considers six of the largest energy-consuming manufacturing industries in India, viz. aluminium, cement, iron & steel, textiles, fertilizer, and paper, and conducts a detailed Energy Intensity analysis using data from the PROWESS database of the Centre for Monitoring Indian Economy (CMIE). 
A total of twelve independent explanatory variables based on various factors such as raw material, labour, machinery, repair and maintenance, production technology, outsourcing, research and development, number of employees, wages paid, profit margin and capital invested have been taken into consideration for the analysis.
Keywords: energy intensity, explanatory variables, manufacturing industries, PROWESS database
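Energy Intensity as defined above is simply energy consumed per unit of output, with energy efficiency as its inverse. A minimal sketch with hypothetical firm-level figures, not CMIE data:

```python
def energy_intensity(energy_consumed_gj, output_value):
    """Energy Intensity = energy consumed per unit of output;
    its inverse tracks energy efficiency."""
    return energy_consumed_gj / output_value

# Hypothetical firm-year records: (energy in GJ, output in million rupees).
firms = {"cement_a": (5_200_000, 1_300), "paper_b": (880_000, 400)}
intensities = {name: energy_intensity(e, o) for name, (e, o) in firms.items()}
```

Regressing such intensities on the twelve explanatory variables listed above would then identify which factors drive them.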
Procedia PDF Downloads 332
798 The Impact of Heat Waves on Human Health: State of Art in Italy
Authors: Vito Telesca, Giuseppina A. Giorgio
Abstract:
The earth system is subject to a wide range of human activities that have changed the ecosystem more rapidly and extensively in the last five decades. These global changes have a large impact on human health. The relationship between extreme weather events and mortality is widely documented in different studies. In particular, a number of studies have investigated the relationship between climatological variations and the cardiovascular and respiratory systems. Researchers have become interested in evaluating the effect of environmental variations on the occurrence of different diseases (such as infarction, ischemic heart disease, asthma, and respiratory problems) and mortality. Among changes in weather conditions, heat waves have been used to investigate the association between weather conditions and cardiovascular and cerebrovascular events, using thermal indices which combine air temperature, relative humidity, and wind speed. The effects of heat waves on human health are mainly found in urban areas, where they are aggravated by the presence of atmospheric pollution. The consequences of these changes for human health are of growing concern. Meteorological conditions are particularly important because cardiovascular diseases are more common among the elderly population, and such people are more sensitive to weather changes. In addition, heat waves, or extreme heat events, are predicted to increase in frequency, intensity, and duration with climate change. In this context, connections between public health and climate change are increasingly being recognized by medical research, because these might help in informing the public at large. Policy experts claim that a growing awareness of the relationships between public health and climate change could be key to breaking through political logjams impeding action on mitigation and adaptation. 
The aims of this study are to investigate the interactions between weather variables and their effects on human health, focusing on Italy, and to highlight the need to define strategies and practical actions for monitoring, adaptation, and mitigation of the phenomenon.
Keywords: climate change, illness, Italy, temperature, weather
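The abstract does not name the specific thermal index used; as an illustrative sketch, here is one widely used index that combines exactly these three inputs, the Steadman-style apparent temperature in the non-radiation variant used by the Australian Bureau of Meteorology (the choice of this particular index is an assumption, not taken from the paper):

```python
import math

def apparent_temperature(ta_c, rh_pct, wind_ms):
    """Steadman-style apparent temperature (Australian BoM variant).

    Combines air temperature (deg C), relative humidity (%) and wind
    speed (m/s) into a single thermal-comfort value in deg C.
    """
    # water vapour pressure (hPa) from temperature and relative humidity
    e = (rh_pct / 100.0) * 6.105 * math.exp(17.27 * ta_c / (237.7 + ta_c))
    # humidity raises the apparent temperature, wind lowers it
    return ta_c + 0.33 * e - 0.70 * wind_ms - 4.00
```

For example, 35 °C at 60% relative humidity with a light 1 m/s breeze yields an apparent temperature above 40 °C, which is the kind of value heat-wave warning systems act on.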
Procedia PDF Downloads 249
797 Measurement of Ionospheric Plasma Distribution over Myanmar Using Single Frequency Global Positioning System Receiver
Authors: Win Zaw Hein, Khin Sandar Linn, Su Su Yi Mon, Yoshitaka Goto
Abstract:
The Earth's ionosphere is located at altitudes from about 70 km to several hundred kilometres above the ground and is composed of ions and electrons, collectively called plasma. This plasma delays GPS (Global Positioning System) signals and reflects radio waves. The delay along the signal path from the satellite to the receiver is directly proportional to the total electron content (TEC) of the plasma, and this delay is the largest error factor in satellite positioning and navigation. Sounding observations from above and below the ionosphere were long the standard way to investigate ionospheric plasma. Recently, continuous monitoring of the TEC using networks of GNSS (Global Navigation Satellite System) observation stations, which are primarily built for land surveying, has been conducted in several countries. However, these stations are equipped with multi-frequency receivers, which estimate the plasma delay from its frequency dependence, and such receivers cost much more than single-frequency GPS receivers. In this research, a single-frequency GPS receiver was used instead of expensive multi-frequency GNSS receivers to measure ionospheric plasma variation, such as the vertical TEC distribution. In the measurement, a single-frequency u-blox GPS receiver was used to probe the ionospheric TEC. The observation site was Mandalay Technological University in Myanmar. In the method, the ionospheric TEC distribution is represented by polynomial functions of latitude and longitude, and the parameters of these functions are determined by least-squares fitting on pseudorange data obtained at a known location, under a thin-layer ionosphere assumption. The validity of the method was evaluated against measurements from the Japanese GNSS observation network GEONET.
The measurement results obtained with the single-frequency GPS receiver were compared with dual-frequency measurements.
Keywords: ionosphere, global positioning system, GPS, ionospheric delay, total electron content, TEC
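The fitting procedure the abstract describes (a polynomial vertical-TEC model, least-squares estimation, and a thin-layer mapping function) can be sketched as below; the polynomial degree, the 350 km shell height, and the function name are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def fit_tec_polynomial(lat, lon, stec, elev_rad, degree=2,
                       earth_r=6371.0, shell_h=350.0):
    """Fit a 2-D polynomial vertical-TEC (VTEC) model by least squares.

    lat, lon  : ionospheric pierce-point coordinates (degrees)
    stec      : slant TEC observations (TECU)
    elev_rad  : satellite elevation angles (radians)
    Returns the polynomial coefficients and the fitted VTEC values.
    """
    # thin-shell mapping: sine of the zenith angle at the shell height
    sin_z = (earth_r / (earth_r + shell_h)) * np.cos(elev_rad)
    # convert slant TEC to vertical TEC with the obliquity factor
    vtec = stec * np.sqrt(1.0 - sin_z**2)
    # design matrix of polynomial terms lat^i * lon^j with i + j <= degree
    cols = [(lat**i) * (lon**j)
            for i in range(degree + 1)
            for j in range(degree + 1 - i)]
    A = np.column_stack(cols)
    coeffs, *_ = np.linalg.lstsq(A, vtec, rcond=None)
    return coeffs, A @ coeffs
```

In practice the slant TEC would itself be derived from the single-frequency pseudorange residuals at the known receiver position; the sketch starts from slant TEC to keep the least-squares step self-contained.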
Procedia PDF Downloads 140
796 Subdued Electrodermal Response to Empathic Induction Task in Intimate Partner Violence (IPV) Perpetrators
Authors: Javier Comes Fayos, Isabel Rodríguez Moreno, Sara Bressanutti, Marisol Lila, Angel Romero Martínez, Luis Moya Albiol
Abstract:
Empathy is a cognitive-affective capacity whose deterioration is associated with aggressive behaviour. Deficient affective processing is one of the predominant risk factors in men convicted of intimate partner violence (IPV perpetrators), since it severely impairs their capacity to empathize. The objective of this study was to compare the response of electrodermal activity (EDA), as an indicator of emotionality, to an empathic induction task between IPV perpetrators and men without a history of violence. The sample comprised 51 men who attended the CONTEXTO program, serving sentences of under two years for gender violence, and 47 men with no history of violence. Empathic induction was achieved by viewing four negative emotion-eliciting videos taken from an emotional-induction battery validated for the Spanish population. Participants were asked to actively empathize with video characters indicated beforehand. The psychophysiological recording of EDA was accomplished with the Vrije Universiteit Ambulatory Monitoring System (VU-AMS). A repeated-measures analysis was carried out with 10 within-subject measurements (time) and group (IPV perpetrators and non-violent men) as the between-subject factor. First, there were no significant differences between groups in baseline EDA levels. Yet, a significant time-by-group interaction was found, with IPV perpetrators exhibiting a lower EDA response than controls after the empathic induction task. These findings provide evidence of a subdued EDA response to an empathic induction task in IPV perpetrators relative to men without a history of violence. This lower psychophysiological activation would be indicative of difficulties in emotional processing and response, functions that are necessary for empathy.
Consequently, the importance of addressing possible empathic difficulties in psycho-educational programs for IPV perpetrators is reinforced, with special emphasis on the affective dimension that could hinder the empathic function.
Keywords: electrodermal activity, emotional induction, empathy, intimate partner violence
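The reported time-by-group effect comes from a mixed repeated-measures analysis; as a simplified stand-in for that interaction test (not the authors' actual analysis), one can summarise each subject's EDA trajectory over the 10 measurements with a linear slope and compare slopes between groups:

```python
import numpy as np
from scipy import stats

def interaction_via_slopes(eda_group_a, eda_group_b):
    """Approximate a time-by-group interaction with per-subject slopes.

    Each input is an (n_subjects, n_timepoints) array of EDA levels
    across the repeated measurements. A per-subject least-squares slope
    summarises reactivity over time; an independent-samples t-test on
    the slopes mirrors the interaction contrast of a mixed ANOVA.
    """
    t = np.arange(eda_group_a.shape[1])
    slope = lambda y: np.polyfit(t, y, 1)[0]  # slope of EDA on time
    slopes_a = np.apply_along_axis(slope, 1, eda_group_a)
    slopes_b = np.apply_along_axis(slope, 1, eda_group_b)
    return stats.ttest_ind(slopes_a, slopes_b)
```

A full mixed ANOVA additionally tests main effects and handles sphericity; the slope comparison is only meant to make the logic of the interaction term concrete.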
Procedia PDF Downloads 203