Search results for: load monitoring
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5671

1111 An Approach for Ensuring Data Flow in Freight Delivery and Management Systems

Authors: Aurelija Burinskienė, Dalė Dzemydienė, Arūnas Miliauskas

Abstract:

This research aims at developing an approach for more effective freight delivery and transportation process management. Road congestion and the identification of its causes are important, as is the recognition and management of context information. Measuring many parameters during the transportation period and properly controlling driver work have become a problem. The number of vehicles per time unit passing a given point at a given time can be evaluated for drivers in some situations. The collected data are mainly used to establish new trips. The data flow is more complex in urban areas. Herein, the movement of freight is reported in detail, including information at street level. When traffic density is extremely high in congestion cases and traffic speed is very low, data transmission reaches its peak. Different data sets are generated, depending on the type of freight delivery network. There are three types of networks: long-distance delivery networks, last-mile delivery networks and mode-based delivery networks; the last includes different modes, in particular railways and other networks. When freight delivery is switched from one of the above-stated network types to another, more data may be included for reporting purposes, and vice versa. In this case, a significant amount of these data is used for control operations, and the problem requires an integrated methodological approach. The paper presents an approach for providing e-services for drivers that includes the assessment of the multi-component infrastructure needed for freight delivery according to the network type. Such a methodology is required to evaluate data flow conditions and overloads and to minimize the time gaps in data reporting. The results obtained show the potential of the proposed methodological approach to support management and decision-making processes with the functionality of incorporating network specifics, helping to minimize overloads in data reporting.

Keywords: transportation networks, freight delivery, data flow, monitoring, e-services

Procedia PDF Downloads 125
1110 Competitive DNA Calibrators as Quality Reference Standards (QRS™) for Germline and Somatic Copy Number Variations/Variant Allelic Frequencies Analyses

Authors: Eirini Konstanta, Cedric Gouedard, Aggeliki Delimitsou, Stefania Patera, Samuel Murray

Abstract:

Introduction: Quality reference DNA standards (QRS) for molecular testing by next-generation sequencing (NGS) are essential for accurate quantitation of copy number variations (CNV) for germline analyses and variant allelic frequencies (VAF) for somatic analyses. Objectives: Presently, several molecular analytics for oncology patients are reliant upon quantitative metrics. Test validation and standardisation are also reliant upon the availability of surrogate control materials allowing for understanding of test LOD (limit of detection), sensitivity, and specificity. We have developed a dual calibration platform allowing for QRS pairs to be included in analysed DNA samples, allowing for accurate quantitation of CNV and VAF metrics within and between patient samples. Methods: QRS™ blocks up to 500 nt were designed for common NGS panel targets incorporating ≥ 2 identification tags (IDTDNA.com). These were analysed upon spiking into gDNA, somatic, and ctDNA using a proprietary CalSuite™ platform adaptable to common LIMS. Results: We demonstrate QRS™ calibration reproducibility spiked to 5–25% at ± 2.5% in gDNA and ctDNA. Furthermore, we demonstrate CNV and VAF within and between samples (gDNA and ctDNA) with the same reproducibility (± 2.5%) in clinical samples of lung cancer and HBOC (EGFR and BRCA1, respectively). CNV analytics were performed with similar accuracy using a single pair of QRS calibrators as when using multiple single-target sequencing controls. Conclusion: Dual paired QRS™ calibrators allow for accurate and reproducible quantitative analyses of CNV, VAF, intrinsic sample allele measurement, and inter- and intra-sample measurement, not only simplifying NGS analytics but also allowing clinically relevant biomarker VAF to be monitored across patient ctDNA samples with improved accuracy.

Keywords: calibrator, CNV, gene copy number, VAF

Procedia PDF Downloads 152
1109 Reliability Analysis of Glass Epoxy Composite Plate under Low Velocity Impact

Authors: Shivdayal Patel, Suhail Ahmad

Abstract:

Safety assurance and failure prediction of composite material components of an offshore structure due to low velocity impact are essential for the associated risk assessment. It is important to incorporate the uncertainties associated with material properties and the load due to an impact. The likelihood of this hazard causing a chain of failure events plays an important role in risk assessment. The material properties of composites mostly exhibit scatter due to their inhomogeneity and anisotropic characteristics, the brittleness of the matrix and fiber, and manufacturing defects. In fact, the probability of occurrence of such a scenario is governed by the large uncertainties arising in the system. Probabilistic finite element analysis of composite plates under low-velocity impact is carried out considering uncertainties in material properties and initial impact velocity. Impact-induced damage of a composite plate is a probabilistic phenomenon due to the wide range of uncertainties arising in material and loading behavior. A typical failure crack initiates and propagates further into the interface, causing delamination between dissimilar plies. Since individual cracks in the plies are difficult to track, a progressive damage model is implemented in the FE code through a user-defined material subroutine (VUMAT) to overcome these problems. The limit state function is established accordingly, such that the laminate is safe when the stresses in the lamina satisfy g(x) > 0. The Gaussian process response surface method is presently adopted to determine the probability of failure. A comparative study is also carried out for different combinations of impactor masses and velocities. A sensitivity-based probabilistic design optimization procedure is investigated to achieve better strength and lighter weight of composite structures. A chain of failure events due to different modes of failure is considered to estimate the consequences of the failure scenario. Frequencies of occurrence of specific impact hazards yield the expected risk due to economic loss.
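As a rough illustration of the response-surface step described above, the sketch below fits a Gaussian-process surrogate to a few evaluations of a hypothetical limit-state function and then estimates the probability of failure by Monte Carlo sampling of the uncertain strength and impact velocity. The limit-state form, distributions and numbers are illustrative assumptions, not the VUMAT-based model or data of the study.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Hypothetical limit-state function standing in for the expensive FE/VUMAT
# model: g > 0 means the laminate survives the impact.
def g(x):
    strength, velocity = x[:, 0], x[:, 1]
    return strength - 0.004 * velocity ** 2

# 1) small design of experiments evaluated on the "expensive" model
X_train = rng.uniform([0.8, 5.0], [1.6, 20.0], size=(40, 2))
y_train = g(X_train)

# 2) Gaussian-process response surface fitted to those evaluations
gp = GaussianProcessRegressor(kernel=RBF(length_scale=[0.3, 4.0]),
                              normalize_y=True).fit(X_train, y_train)

# 3) Monte Carlo on the cheap surrogate with the assumed input uncertainties
n = 200_000
samples = np.column_stack([rng.normal(1.2, 0.08, n),    # strength scatter
                           rng.normal(12.0, 1.5, n)])   # impact velocity
pf = np.mean(gp.predict(samples) < 0.0)                 # P(g <= 0)
print(f"estimated probability of failure: {pf:.2e}")
```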

Keywords: composites, damage propagation, low velocity impact, probability of failure, uncertainty modeling

Procedia PDF Downloads 279
1108 Ni-W-P Alloy Coating as an Alternate to Electroplated Hard Cr Coating

Authors: S. K. Ghosh, C. Srivastava, P. K. Limaye, V. Kain

Abstract:

Electroplated hard chromium is widely used in coatings and surface finishing and in the automobile and aerospace industries because of its excellent hardness, wear resistance and corrosion properties. However, its precursor, Cr⁶⁺, is highly carcinogenic in nature, and a consensus has been adopted internationally to replace this coating technology with an alternative one. The search for alternative coatings to electroplated hard chrome is continuing worldwide. Various alloys and nanocomposites like Co-W alloys, Ni-graphene, Ni-diamond nanocomposites etc. have already shown promising results in this regard. In this study, electroless Ni-P alloy, with its excellent corrosion resistance, was taken as the base matrix, and the incorporation of tungsten as a third alloying element was considered to improve the hardness and wear resistance of the resultant alloy coating. The present work is focused on the preparation of Ni–W–P coatings by electrodeposition with different phosphorus contents and their effect on the electrochemical, mechanical and tribological performance. The results were also compared with Ni-W alloys. Composition analysis by EDS showed deposition of Ni-32.85 wt% W-3.84 wt% P (designated as Ni-W-LP) and Ni-18.55 wt% W-8.73 wt% P (designated as Ni-W-HP) alloy coatings from electrolytes containing 0.006 and 0.01 M sodium hypophosphite, respectively. Inhibition of tungsten deposition in the presence of phosphorus was noted. SEM investigation showed cauliflower-like growth along with a few microcracks. The as-deposited Ni-W-P alloy coating was amorphous in nature, as confirmed by XRD investigation, and step-wise crystallization was noticed upon annealing at higher temperatures. For all the coatings, the nanohardness was found to increase after heat treatment, and typical nanohardness values obtained for samples annealed at 400°C were 18.65±0.20 GPa, 20.03±0.25 GPa, and 19.17±0.25 GPa for the Ni-W, Ni-W-LP and Ni-W-HP alloy coatings, respectively. Therefore, the nanohardness data show very promising results. Wear and coefficient of friction data were recorded by applying different normal loads in reciprocating motion using a ball-on-plate geometry. After the experiments, the wear mechanism was established by detailed investigation of the wear-scar morphology. Potentiodynamic measurements showed that the coating with the higher phosphorus content was the most corrosion resistant in 3.5 wt% NaCl solution.

Keywords: corrosion, electrodeposition, nanohardness, Ni-W-P alloy coating

Procedia PDF Downloads 348
1107 Intrusion Detection in SCADA Systems

Authors: Leandros A. Maglaras, Jianmin Jiang

Abstract:

The protection of national infrastructures from cyberattacks is one of the main issues for national and international security. The funded European Framework-7 (FP7) research project CockpitCI introduces intelligent intrusion detection, analysis and protection techniques for Critical Infrastructures (CI). The paradox is that CIs massively rely on the newest interconnected, and vulnerable, Information and Communication Technology (ICT), whilst the control equipment, legacy software and hardware, is typically old. Such a combination of factors may lead to very dangerous situations, exposing systems to a wide variety of attacks. To overcome such threats, the CockpitCI project combines machine learning techniques with ICT technologies to produce advanced intrusion detection, analysis and reaction tools to provide intelligence to field equipment. This will allow the field equipment to perform local decisions in order to self-identify and self-react to abnormal situations introduced by cyberattacks. In this paper, an intrusion detection module capable of detecting malicious network traffic in a Supervisory Control and Data Acquisition (SCADA) system is presented. Malicious data in a SCADA system disrupt its correct functioning and tamper with its normal operation. OCSVM (One-Class Support Vector Machine) is an intrusion detection mechanism that does not need any labeled data for training or any information about the kind of anomaly expected for the detection process. This feature makes it ideal for processing SCADA environment data and automates SCADA performance monitoring. The OCSVM module developed is trained offline on network traces and detects anomalies in the system in real time. The module is part of an IDS (intrusion detection system) developed under the CockpitCI project and communicates with the other parts of the system by the exchange of IDMEF messages that carry information about the source of the incident, the time and a classification of the alarm.
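To make the one-class training idea concrete, the minimal scikit-learn sketch below fits a One-Class SVM on unlabeled "normal" traffic features and flags deviating traffic windows; the feature set, parameters and numbers are illustrative assumptions, not those tuned in the CockpitCI module.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

# Hypothetical per-flow features extracted from captured SCADA network traces
# (e.g. packets/s, mean inter-arrival time, mean payload length).
rng = np.random.default_rng(7)
normal_traffic = rng.normal([50, 0.02, 256], [5, 0.002, 20], size=(2000, 3))

scaler = StandardScaler().fit(normal_traffic)
ocsvm = OneClassSVM(kernel="rbf", nu=0.01, gamma="scale")
ocsvm.fit(scaler.transform(normal_traffic))          # trained offline, no labels

# On-line phase: score new traffic windows; -1 flags an anomaly
new_windows = np.array([[51, 0.021, 250],            # normal-looking flow
                        [400, 0.001, 1400]])         # flooding-like flow
print(ocsvm.predict(scaler.transform(new_windows)))  # e.g. [ 1 -1 ]
```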

Keywords: cyber-security, SCADA systems, OCSVM, intrusion detection

Procedia PDF Downloads 552
1106 Role of Human Epididymis Protein 4 as a Biomarker in the Diagnosis of Ovarian Cancer

Authors: Amar Ranjan, Julieana Durai, Pranay Tanwar

Abstract:

Background & Introduction: Ovarian cancer is one of the most common malignant tumors in females. 70% of the cases of ovarian cancer are diagnosed at an advanced stage. The five-year survival rate associated with ovarian cancer is less than 30%. The early diagnosis of ovarian cancer therefore becomes a key factor in improving the survival rate of patients. Presently, CA125 (carbohydrate antigen 125) is used for the diagnosis and therapeutic monitoring of ovarian cancer, but its sensitivity and specificity are not ideal. The introduction of HE4 (human epididymis protein 4) has attracted much attention. HE4 has a sensitivity and specificity of 72.9% and 95% for differentiating between benign and malignant adnexal masses, which is better than CA125 detection. Methods: Serum HE4 and CA125 were estimated using the chemiluminescence method. Our cases comprised 40 epithelial ovarian cancers, 9 benign ovarian tumors, 29 benign gynaecological diseases and 13 healthy individuals. The healthy group included women who were undergoing family planning and menopause-related medical consultations and were negative for an ovarian mass. Optimal cut-off values for HE4 and CA125 were 55.89 pmol/L and 40.25 U/L, respectively (determined by statistical analysis). Results: The level of HE4 was raised in all ovarian cancer patients (n=40), whereas CA125 levels were normal in 6/40 ovarian cancer patients, which were cases of OC confirmed by histopathology. There was a significant decrease in the level of HE4 in comparison to CA125 in benign ovarian tumor cases. Both HE4 and CA125 levels were raised in the non-ovarian cancer group, which included cancers of the endometrium and cervix. In the healthy group, HE4 was normal in all except one case of a rudimentary horn, where the raised HE4 level was attributed to incomplete development of the uterus, whereas CA125 was raised in 3 cases. Conclusions: The findings showed that the serum level of HE4 is an important indicator in the diagnosis of ovarian cancer, and it also distinguishes between benign and malignant pelvic masses. However, a combined HE4 and CA125 panel will be extremely valuable in improving the diagnostic efficiency of ovarian cancer. These findings need to be validated in a larger cohort of patients.
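For context on how such cut-offs can be derived, the minimal sketch below computes a ROC curve and picks the threshold maximizing the Youden index; the marker values are invented for illustration, and the Youden criterion is an assumption, since the abstract only states that the cut-offs were "determined by statistical analysis".

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Hypothetical serum HE4 values (pmol/L): 1 = ovarian cancer, 0 = benign/healthy.
he4   = np.array([180, 95, 60, 300, 52, 40, 35, 48, 30, 44, 120, 70])
label = np.array([  1,  1,  1,   1,  1,  0,  0,  0,  0,  0,   1,  1])

fpr, tpr, thresholds = roc_curve(label, he4)
youden = tpr - fpr                                # sensitivity + specificity - 1
cutoff = thresholds[np.argmax(youden)]
print(f"AUC = {roc_auc_score(label, he4):.2f}, optimal cut-off ~ {cutoff:.1f} pmol/L")
```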

Keywords: human epididymis protein 4, ovarian cancer, diagnosis, benign lesions

Procedia PDF Downloads 131
1105 Toward Indoor and Outdoor Surveillance Using an Improved Fast Background Subtraction Algorithm

Authors: El Harraj Abdeslam, Raissouni Naoufal

Abstract:

The detection of moving objects from video image sequences is very important for object tracking, activity recognition, and behavior understanding in video surveillance. The most widely used approach for moving object detection/tracking is background subtraction. Many approaches have been suggested for background subtraction. However, these are sensitive to illumination changes, and the solutions proposed to bypass this problem are time-consuming. In this paper, we propose a robust yet computationally efficient background subtraction approach and mainly focus on the ability to detect moving objects in dynamic scenes, for possible applications in monitoring complex and restricted access areas, where moving and motionless persons must be reliably detected. It consists of three main phases: establishing illumination invariance, background/foreground modeling, and morphological analysis for noise removal. We handle illumination changes using Contrast Limited Adaptive Histogram Equalization (CLAHE), which limits the intensity of each pixel to a user-determined maximum. Thus, it mitigates the degradation due to scene illumination changes and improves the visibility of the video signal. Initially, the background and foreground images are extracted from the video sequence. Then, the background and foreground images are separately enhanced by applying CLAHE. In order to form multi-modal backgrounds, we model each channel of a pixel as a mixture of K Gaussians (K=5) using a Gaussian Mixture Model (GMM). Finally, we post-process the resulting binary foreground mask using morphological erosion and dilation transformations to remove possible noise. For experimental testing, we used a standard dataset to challenge the efficiency and accuracy of the proposed method on a diverse set of dynamic scenes.
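A condensed OpenCV sketch of such a pipeline is shown below; for brevity it enhances the grayscale frame with CLAHE and feeds it to a 5-Gaussian MOG2 background model before morphological clean-up, whereas the paper enhances background and foreground separately and models each colour channel. The file name and parameter values are assumptions.

```python
import cv2

cap = cv2.VideoCapture("scene.avi")                   # hypothetical test sequence
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
mog = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                         detectShadows=False)
mog.setNMixtures(5)                                   # K = 5 Gaussians per pixel
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    enhanced = clahe.apply(gray)                      # mitigate illumination changes
    mask = mog.apply(enhanced)                        # binary foreground mask
    mask = cv2.erode(mask, kernel)                    # remove speckle noise
    mask = cv2.dilate(mask, kernel)                   # restore object size
    cv2.imshow("foreground", mask)
    if cv2.waitKey(1) == 27:                          # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```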

Keywords: video surveillance, background subtraction, contrast limited histogram equalization, illumination invariance, object tracking, object detection, behavior understanding, dynamic scenes

Procedia PDF Downloads 256
1104 Safety Study of Intravenously Administered Human Cord Blood Stem Cells in the Treatment of Symptoms Related to Chronic Inflammation

Authors: Brian M. Mehling, Louis Quartararo, Marine Manvelyan, Paul Wang, Dong-Cheng Wu

Abstract:

Numerous investigations suggest that Mesenchymal Stem Cells (MSCs) in general represent a valuable tool for the therapy of symptoms related to chronic inflammatory diseases. The Blue Horizon Stem Cell Therapy Program is a leading provider of adult and children's stem cell therapies. Uniquely, we have safely and efficiently treated more than 600 patients, documenting each procedure. The purpose of our study is primarily to monitor the immune response in order to validate the safety of intravenous infusion of human umbilical cord blood-derived MSCs (UC-MSCs), and secondly, to evaluate effects on biomarkers associated with chronic inflammation. Nine patients were treated for conditions associated with chronic inflammation and for the purpose of anti-aging. They were each given one intravenous infusion of UC-MSCs. Our study of blood test markers in the 9 patients with chronic inflammation before and within three months after MSC treatment demonstrates that there were no significant changes and that MSC treatment was safe for the patients. Analysis of different indicators of chronic inflammation and aging included in the initial, 24-hour, two-week and three-month protocols showed that stem cell treatment was safe for the patients; there were no adverse reactions. Moreover, data from the follow-up protocols demonstrate significant improvement in energy level, hair and nail growth, and skin condition. Intravenously administered UC-MSCs were safe and effective in the improvement of symptoms related to chronic inflammation. Further close monitoring and the inclusion of more patients are necessary to fully characterize the advantages of UC-MSC application in the treatment of symptoms related to chronic inflammation.

Keywords: chronic inflammatory diseases, intravenous infusion, stem cell therapy, umbilical cord blood derived mesenchymal stem cells (UC-MSCs)

Procedia PDF Downloads 434
1103 Verification and Proposal of Information Processing Model Using EEG-Based Brain Activity Monitoring

Authors: Toshitaka Higashino, Naoki Wakamiya

Abstract:

Human beings perform a task by perceiving information from the outside, recognizing it, and responding to it. There have been various attempts to analyze and understand the internal processes behind the reaction to a given stimulus by conducting psychological experiments and analyses from multiple perspectives. Among these, we focused on the Model Human Processor (MHP). However, it was built on the basis of psychological experiments, and thus its relation to brain activity has so far been unclear. To verify the validity of the MHP and to propose our own model from a viewpoint of neuroscience, EEG (electroencephalography) measurements were performed during the experiments in this study. More specifically, first, experiments were conducted in which Latin alphabet characters were used as visual stimuli. In addition to response time, ERPs (event-related potentials) such as N100 and P300 were measured using EEG. By comparing the cycle times predicted by the MHP and the latencies of the ERPs, it was found that N100, related to the perception of stimuli, appeared at the end of the perceptual processor stage. Furthermore, by conducting an additional experiment, it was revealed that P300, related to decision making, appeared during the response decision process, not at its end. Second, through experiments using Japanese Hiragana characters, i.e. Japan's own phonetic symbols, those findings were confirmed. Finally, Japanese Kanji characters were used as more complicated visual stimuli. A Kanji character usually has several readings and several meanings. Despite the difference, a reading-related task and a meaning-related task exhibited similar results, meaning that they involved similar information processing in the brain. Based on those results, our model, which reflects response time and ERP latency, was proposed. It consists of three processors: the perception processor, from the input of a stimulus to the appearance of N100; the cognitive processor, from N100 to P300; and the decision-action processor, from P300 to the response. Using our model, an application system which reflects brain activity can be established.
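A minimal sketch of how ERP latencies can be mapped onto the three proposed processors is given below; the synthetic waveform, sampling rate, latency windows and response time are illustrative assumptions, not the recorded data.

```python
import numpy as np

# Hypothetical averaged ERP (microvolts), 1 kHz sampling from stimulus onset.
fs = 1000
t = np.arange(0, 0.8, 1 / fs)
erp = (-4 * np.exp(-((t - 0.100) / 0.02) ** 2)      # N100-like negativity
       + 8 * np.exp(-((t - 0.320) / 0.05) ** 2))    # P300-like positivity

def peak_latency(window, use_min):
    idx = np.where((t >= window[0]) & (t <= window[1]))[0]
    best = idx[np.argmin(erp[idx])] if use_min else idx[np.argmax(erp[idx])]
    return t[best]

n100 = peak_latency((0.05, 0.20), use_min=True)     # perception processor ends here
p300 = peak_latency((0.25, 0.50), use_min=False)    # cognitive processor ends here
rt = 0.450                                          # hypothetical mean response time, s

print(f"perception processor : 0 -> {n100*1000:.0f} ms")
print(f"cognitive processor  : {n100*1000:.0f} -> {p300*1000:.0f} ms")
print(f"decision-action      : {p300*1000:.0f} -> {rt*1000:.0f} ms")
```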

Keywords: brain activity, EEG, information processing model, model human processor

Procedia PDF Downloads 98
1102 Nutrient Content and Labelling Status of Pre-Packaged Beverages in Saudi Arabia

Authors: Ruyuf Y. Alnafisah, Nouf S. Alammari, Amani S. Alqahtani

Abstract:

Background: Beverage choice can have implications for the risk of non-communicable diseases. However, there is a lack of knowledge in assessing the nutritional content of these beverages. This study aims to describe the nutrient content of pre-packaged beverages available in the Saudi market. Design: Data were collected from the Saudi Branded Food Database (SBFD). Nutrient content was standardized in terms of units and reference volumes to ensure consistency in the analysis. Results: A total of 1490 beverages were analyzed. The highest median levels of the majority of nutrients were found among dairy products: energy (68.4 (43-188) kcal/100 ml in milkshakes); protein (8.2 (0.5-8.2) g/100 ml in yogurt drinks); total fat (2.1 (1.3-3.5) g/100 ml in milk); saturated fat (1.4 (0-1.4) g/100 ml in yogurt drinks); cholesterol (30 (0-30) mg/100 ml in yogurt drinks); sodium (65 (65-65.4) mg/100 ml in yogurt drinks); and total sugars (12.9 (7.5-27) g/100 ml in milkshakes). Carbohydrate levels were the highest in nectars (13 (11.8-14.2) g/100 ml), fruit drinks (12.9 (11.9-13.9) g/100 ml), and sparkling juices (12.9 (8.8-14) g/100 ml). The highest added sugar level was observed among regular soft drinks (12 (10.8-14) g/100 ml). The average rate of nutrient declaration was 60.95%. Carbohydrate had the highest declaration rate among nutrients (99.1%), and yogurt drinks had the highest declaration rate among beverage categories (92.7%). The median content of vitamins A and D in dairy products met the mandatory addition levels. Conclusion: This study provides valuable insights into the nutrient content of pre-packaged beverages in the Saudi market. It serves as a foundation for future research and monitoring. The findings of the study support the idea of taxing sugary beverages and raise concerns about the health effects of high sugar content in fruit juices. Despite the inclusion of vitamins D and A in dairy products, the study highlights the need for alternative strategies to address these deficiencies.

Keywords: pre-packaged beverages, nutrients content, nutrients declaration, daily percentage value, mandatory addition of vitamins

Procedia PDF Downloads 58
1101 Maintenance Performance Measurement Derived Optimization: A Case Study

Authors: James M. Wakiru, Liliane Pintelon, Peter Muchiri, Stanley Mburu

Abstract:

Maintenance performance measurement (MPM) represents an integrated aspect that considers both operational and maintenance-related aspects while evaluating the effectiveness and efficiency of maintenance to ensure assets are working as they should. Three salient issues need to be addressed for an asset-intensive organization to employ an MPM-based framework to optimize maintenance. Firstly, the organization should establish the important performance metric(s), in this case the maintenance objective(s), on which it will focus. The second issue entails aligning the maintenance objective(s) with maintenance optimization. This is achieved by deriving maintenance performance indicators that subsequently form an objective function for the optimization program. Lastly, the objective function is employed in an optimization program to derive maintenance decision support. In this study, we develop a framework that initially identifies the crucial maintenance performance measures and employs them to derive maintenance decision support. The proposed framework is demonstrated in a case study of a geothermal drilling rig, where the objective function is evaluated utilizing a simulation-based model whose parameters are derived from empirical maintenance data. Availability, reliability and maintenance inventory are identified as essential objectives requiring further attention. A simulation model is developed mimicking drilling rig operations and maintenance, where the sub-systems are modelled as undergoing imperfect maintenance, corrective (CM) and preventive (PM), with the total cost as the primary performance measure. Moreover, three maintenance spare inventory policies are considered: classical (retaining stocks for a contractual period), vendor-managed inventory with consignment stock, and a periodic monitoring order-to-stock (s, S) policy. Optimization results indicate that the adoption of the (s, S) inventory policy, an increased PM interval and reduced reliance on CM actions offer improved availability and reduced total costs.
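To make the objective-function idea concrete, the short Monte Carlo sketch below evaluates total cost for a single sub-system under periodic PM, CM on failure and an (s, S) spare policy, then grid-searches the decision variables. The costs, Weibull parameters and the perfect-renewal assumption are illustrative simplifications of the imperfect-maintenance model used in the study.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative parameters only (not values from the case study).
C_CM, C_PM, C_ORDER, C_HOLD = 500.0, 150.0, 80.0, 0.05    # cost units
LEAD_TIME, ETA, BETA = 48.0, 400.0, 1.8                   # hours; Weibull scale/shape

def run_once(pm_interval, s, S, horizon=10_000.0):
    """One replication: a single sub-system under periodic PM, CM on failure,
    and an (s, S) spare policy reviewed after every maintenance action.
    Perfect renewal after each intervention is assumed (a simplification)."""
    t, cost, stock = 0.0, 0.0, S
    next_pm, order_due = pm_interval, None
    while t < horizon:
        t_fail = t + ETA * rng.weibull(BETA)       # next failure time
        t_next = min(t_fail, next_pm, horizon)
        if order_due is not None and order_due <= t_next:
            stock, order_due = S, None             # order-up-to-S replenishment arrives
        cost += C_HOLD * stock * (t_next - t)      # spare holding cost
        if t_next >= horizon:
            break
        if t_next == next_pm:                      # preventive maintenance
            cost += C_PM
            next_pm += pm_interval
        else:                                      # corrective maintenance
            cost += C_CM
            stock = max(stock - 1, 0)              # consume a spare if available
        if stock <= s and order_due is None:       # (s, S) review point
            cost += C_ORDER
            order_due = t_next + LEAD_TIME
        t = t_next
    return cost

def mean_total_cost(pm_interval, s, S, n=300):
    return np.mean([run_once(pm_interval, s, S) for _ in range(n)])

# Crude search over the decision variables, i.e. the optimization objective.
best = min(((mean_total_cost(T, s, S), T, s, S)
            for T in (250, 500, 1000) for s, S in ((1, 3), (2, 5))),
           key=lambda x: x[0])
print("lowest simulated cost %.0f at PM interval %d h, (s, S) = (%d, %d)" % best)
```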

Keywords: maintenance, vendor-managed, decision support, performance, optimization

Procedia PDF Downloads 125
1100 Revolutionizing Project Management: A Comprehensive Review of Artificial Intelligence and Machine Learning Applications for Smarter Project Execution

Authors: Wenzheng Fu, Yue Fu, Zhijiang Dong, Yujian Fu

Abstract:

The integration of artificial intelligence (AI) and machine learning (ML) into project management is transforming how engineering projects are executed, monitored, and controlled. This paper provides a comprehensive survey of AI and ML applications in project management, systematically categorizing their use in key areas such as project data analytics, monitoring, tracking, scheduling, and reporting. As project management becomes increasingly data-driven, AI and ML offer powerful tools for improving decision-making, optimizing resource allocation, and predicting risks, leading to enhanced project outcomes. The review highlights recent research that demonstrates the ability of AI and ML to automate routine tasks, provide predictive insights, and support dynamic decision-making, which in turn increases project efficiency and reduces the likelihood of costly delays. This paper also examines the emerging trends and future opportunities in AI-driven project management, such as the growing emphasis on transparency, ethical governance, and data privacy concerns. The research suggests that AI and ML will continue to shape the future of project management by driving further automation and offering intelligent solutions for real-time project control. Additionally, the review underscores the need for ongoing innovation and the development of governance frameworks to ensure responsible AI deployment in project management. The significance of this review lies in its comprehensive analysis of AI and ML’s current contributions to project management, providing valuable insights for both researchers and practitioners. By offering a structured overview of AI applications across various project phases, this paper serves as a guide for the adoption of intelligent systems, helping organizations achieve greater efficiency, adaptability, and resilience in an increasingly complex project management landscape.

Keywords: artificial intelligence, decision support systems, machine learning, project management, resource optimization, risk prediction

Procedia PDF Downloads 21
1099 Inpatient Neonatal Deaths in Rural Uganda: A Retrospective Comparative Mortality Study of Labour Ward versus Community Admissions

Authors: Najade Sheriff, Malaz Elsaddig, Kevin Jones

Abstract:

Background: Death in the first month of life accounts for an increasing proportion of under-five mortality. Progress to reduce this number is being made across the globe; however, it is slowest in sub-Saharan Africa. Objectives: The study aims to identify differences between the neonatal deaths of inpatient babies born in a hospital facility in rural Uganda and those of neonates admitted from the community, and to explore whether these can be used to risk-stratify neonatal admissions. Results: A retrospective chart review was conducted on records of neonates admitted to the Special Care Baby Unit (SCBU) of Kitovu Hospital from 1st July 2016 to 21st July 2017. A total of 442 babies were admitted, and the overall neonatal mortality was 24.8% (40% inpatient, 37% community, 23% hospital referrals). 40% of deaths occurred within 24 hours of admission, and the majority were male (63%). 43% of babies were hypothermic upon admission, a significantly greater proportion of which were inpatient babies born on the labour ward (P=0.0025). Intrapartum-related death accounted for half of all inpatient deaths, whereas complications of prematurity were the predominant cause of death in the community group (37%). Severe infection does not appear to be as significant a factor in mortality for inpatients (2%) as it is for community admissions (29%). Furthermore, with 52.5% of community admissions weighing < 1500 g, very low birth weight (VLBW) may be a significant risk factor for community neonatal death. Conclusion: The neonatal mortality rate in this study is high, and the leading causes of death are all largely preventable. A high rate of inpatient birth asphyxia indicates the need for good quality facility-based perinatal care as well as a greater focus on the management of hypothermia, such as kangaroo care. Moreover, a reduction in preterm deliveries is necessary to reduce associated comorbidities, and monitoring for signs of infection is especially important for community admissions.

Keywords: community, mortality, newborn, Uganda

Procedia PDF Downloads 187
1098 Urban Corridor Management Strategy Based on Intelligent Transportation System

Authors: Sourabh Jain, Sukhvir Singh Jain, Gaurav V. Jain

Abstract:

Intelligent Transportation System (ITS) is the application of technology for developing a user-friendly transportation system for urban areas in developing countries. The goal of urban corridor management using ITS in road transport is to achieve improvements in mobility, safety, and the productivity of the transportation system within the available facilities through the integrated application of advanced monitoring, communications, computer, display, and control process technologies, both in the vehicle and on the road. This paper presents past studies on several ITS applications that have been successfully deployed in urban corridors in India and abroad, and reviews the current scenario and the methodology considered for the planning, design, and operation of traffic management systems. This paper also presents the effort made to interpret and evaluate the performance of the 27.4 km long study corridor, which has eight intersections and four flyovers. The corridor consists of a divided road network with six-lane and eight-lane sections. Two categories of data were collected in February 2016: traffic data (traffic volume, spot speed, delay) and road characteristics data (number of lanes, lane width, bus stops, mid-block sections, intersections, flyovers). The instruments used for collecting the data were a video camera, a radar gun, mobile GPS and a stopwatch. From the analysis, the performance interpretations included identification of peak and off-peak hours, congestion and level of service (LOS) at mid-blocks, and delay, followed by plotting speed contours and recommending urban corridor management strategies. From the analysis, it is found that ITS-based urban corridor management strategies will be useful to reduce congestion, fuel consumption and pollution so as to provide comfort and efficiency to the users. The paper presents urban corridor management strategies based on sensors incorporated both in vehicles and on the roads.
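As a small illustration of the mid-block performance interpretation, the sketch below tabulates hypothetical hourly volumes and spot speeds, flags the peak hour, and assigns a level of service from the ratio of travel speed to an assumed free-flow speed; the thresholds are illustrative (in the spirit of HCM urban-street criteria) rather than the values adopted in the study.

```python
import pandas as pd

# Hypothetical mid-block observations; values and free-flow speed are assumptions.
obs = pd.DataFrame({
    "hour":       [8, 9, 10, 17, 18, 21],
    "volume_veh": [4200, 3900, 2600, 4500, 4700, 1500],
    "mean_speed": [18.0, 22.0, 34.0, 15.0, 13.0, 46.0],   # km/h
})
FFS = 50.0   # assumed free-flow speed, km/h

def los(speed, ffs=FFS):
    pct = 100 * speed / ffs                    # travel speed as % of free-flow speed
    for grade, limit in zip("ABCDE", (85, 67, 50, 40, 30)):
        if pct > limit:
            return grade
    return "F"

obs["LOS"] = obs["mean_speed"].apply(los)
peak_hour = obs.loc[obs["volume_veh"].idxmax(), "hour"]
print(obs[["hour", "volume_veh", "mean_speed", "LOS"]])
print("peak hour:", peak_hour)
```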

Keywords: congestion, ITS strategies, mobility, safety

Procedia PDF Downloads 443
1097 The Effects of Climate Change and Upstream Dam Development on Sediment Distribution in the Vietnamese Mekong Delta

Authors: Trieu Anh Ngoc, Nguyen Quang Kim

Abstract:

Located at the downstream end of the Mekong Basin, the Vietnamese Mekong Delta is well known as the 'rice bowl' of Vietnam. The Vietnamese Mekong Delta experiences widespread flooding annually and is home to about 17 million people. The economy of this region mainly depends on agricultural production. The suspended sediment load in the Mekong River plays an important role in carrying contaminants and nutrients to the delta and in changing the geomorphology of the delta river system. Over many past decades, flooding and suspended sediment were considered indispensable factors in agricultural cultivation. Although flooding in the wet season caused serious inundation of paddy fields and affected livelihoods, it is an effective means of flushing acid and salinity from this area, whose alluvial soil is heavily affected by acid and salt intrusion. In addition, the sediment delivered to this delta contained rich nutrients distributed and deposited on the fields through the flooding process. In recent decades, changes in flow and sediment transport have been occurring strongly and clearly due to upstream dam development and climate change. However, the effects of sediment delivery on agricultural cultivation have received less attention. This study investigated the impacts of upstream flow on sediment distribution in the Vietnamese Mekong Delta. Flow fluctuation and sediment distribution were simulated with the Mike 11 model, including its hydrodynamic and advection-dispersion modules. Various scenarios were simulated based on anticipated upstream discharges. Our findings indicated that sediment delivery into the Vietnamese Mekong Delta comes not only from the Tien River but also across the border from the Cambodian floodplains. Sediment distribution in the Vietnamese Mekong Delta changes dramatically with distance from the main rivers and the secondary channels. Upstream dam development is one of the major factors leading to a decrease in sediment discharge as well as sediment deposition. Moreover, sea level rise partially contributed to the decrease in sediment transport and to the change of sediment distribution between the upstream and downstream parts of the Vietnamese Mekong Delta.

Keywords: sediment transport, sea level rise, climate change, Mike Model

Procedia PDF Downloads 276
1096 Performance Evaluation of Production Schedules Based on Process Mining

Authors: Kwan Hee Han

Abstract:

The external environment of enterprises is changing rapidly, driven mainly by global competition, cost reduction pressures, and new technology. In this situation, the production scheduling function plays a critical role in meeting customer requirements and attaining the goal of operational efficiency. It deals with short-term decision making in the production process of the whole supply chain. The major task of production scheduling is to seek a balance between customer orders and limited resources. In manufacturing companies, this task is so difficult because it should efficiently utilize resource capacity under the careful consideration of many interacting constraints. At present, many computerized software solutions are utilized in enterprises to generate a realistic production schedule and to overcome the complexity of schedule generation. However, most production scheduling systems do not provide sufficient information about the validity of the generated schedule beyond limited statistics. Process mining has only recently emerged as a sub-discipline of both data mining and business process management. Process mining techniques enable the useful analysis of a wide variety of processes, such as process discovery, conformance checking, and bottleneck analysis. In this study, the performance of a generated production schedule is evaluated by mining the event log data of a production scheduling software system using process mining techniques, since every software system generates event logs for further uses such as security investigation, auditing and debugging. An application of the process mining approach is proposed in this study for validating the goodness of production schedules generated by scheduling software systems. Using process mining techniques, major evaluation criteria such as workstation utilization, the existence of bottleneck workstations, critical process route patterns, and the workload balance of each machine over time are measured, and finally, the goodness of the production schedule is evaluated. By using the proposed process mining approach for evaluating the performance of a generated production schedule, the quality of production schedules in manufacturing enterprises can be improved.
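The sketch below shows, on a hypothetical event log, how two of the evaluation criteria named above (workstation utilization and queueing before each workstation) can be mined with a few lines of pandas; the column names and values are assumptions, not the log format of any particular scheduling system.

```python
import pandas as pd

# Hypothetical event log exported by a scheduling system (illustrative only).
log = pd.DataFrame({
    "order":       ["A", "A", "B", "B", "C"],
    "workstation": ["cut", "weld", "cut", "weld", "cut"],
    "start": pd.to_datetime(["2024-01-01 08:00", "2024-01-01 09:30",
                             "2024-01-01 08:40", "2024-01-01 10:30",
                             "2024-01-01 09:20"]),
    "end":   pd.to_datetime(["2024-01-01 09:00", "2024-01-01 10:15",
                             "2024-01-01 09:20", "2024-01-01 11:00",
                             "2024-01-01 10:10"]),
})

# Workstation utilization over the scheduling horizon (bottleneck candidates first)
horizon = (log["end"].max() - log["start"].min()).total_seconds()
busy = (log["end"] - log["start"]).dt.total_seconds().groupby(log["workstation"]).sum()
print((busy / horizon).sort_values(ascending=False))

# Mean queueing time before each workstation, following each order's routing
log = log.sort_values(["order", "start"])
log["wait_s"] = (log["start"] - log.groupby("order")["end"].shift()).dt.total_seconds()
print(log.groupby("workstation")["wait_s"].mean())
```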

Keywords: data mining, event log, process mining, production scheduling

Procedia PDF Downloads 279
1095 Engaging the Terrorism Problematique in Africa: Discursive and Non-Discursive Approaches to Counter Terrorism

Authors: Cecil Blake, Tolu Kayode-Adedeji, Innocent Chiluwa, Charles Iruonagbe

Abstract:

National, regional and international security threats have dominated the twenty-first century thus far. Insurgencies that utilize “terrorism” as their primary strategy pose the most serious threat to global security. States in turn adopt terrorist strategies to resist and even defeat insurgents who invoke the legitimacy of statehood to justify their action. In short, the era is dominated by the use of terror tactics by state and non-state actors. Globally, there is a powerful network of groups involved in insurgencies using Islam as the bastion for their cause. In Africa, Boko Haram, Al Shabaab and Al Qaeda in the Maghreb represent Islamic groups utilizing terror strategies and tactics to prosecute their wars. The task at hand is to discover and to use multiple ways of handling the present security threats, including novel approaches to policy formulation, implementation, monitoring and evaluation that would pay significant attention to the important role of culture and communication strategies germane to discursive means of conflict resolution. In order to achieve this, the proposed research would address, inter alia, the root causes of insurgencies that predicate their mission on Islamic tenets, particularly in Africa; discursive and non-discursive counter-terrorism approaches fashioned by African governments and continental supra-national and regional organizations; recruitment strategies of major non-state actors in Africa that rely solely on terrorist strategies and tactics; and sources of finance for the groups under study. A major anticipated outcome of this research is a contribution to answers that would lead to the much-needed stability required for development in African countries experiencing insurgencies carried out through patterned terror strategies and tactics. The nature of the research requires the use of triangulation as the methodological tool.

Keywords: counter-terrorism, discourse, Nigeria, security, terrorism

Procedia PDF Downloads 486
1094 Austempered Compacted Graphite Irons: Influence of Austempering Temperature on Microstructure and Microscratch Behavior

Authors: Rohollah Ghasemi, Arvin Ghorbani

Abstract:

This study investigates the effect of austempering temperature on the microstructure and scratch behavior of austempered heat-treated compacted graphite irons. The as-cast material was used as the base material for the heat treatment practices. The samples were extracted from as-cast ferritic CGI pieces and were heat treated at an austenitising temperature of 900°C for 60 minutes, followed by quenching in a salt bath at different austempering temperatures of 275°C, 325°C and 375°C. For all heat treatments, an austempering holding time of 30 minutes was selected for this study. Light optical microscopy (LOM), scanning electron microscopy (SEM) and electron backscatter diffraction (EBSD) analyses confirmed the ausferritic matrix formed in all heat-treated samples. Microscratches were performed under loads of 200, 600 and 1000 mN using a sphero-conical diamond indenter with a tip radius of 50 μm and an included cone angle of 90°, at a speed of 10 μm/s and at room temperature (~25°C). An instrumented nanoindentation machine was used for performing the nanoindentation hardness measurements and microscratch testing. Hardness measurements and scratch resistance showed a significant increase in Brinell, Vickers, and nanoindentation hardness values as well as in the microscratch resistance of the heat-treated samples compared to the as-cast ferritic sample. The increase in hardness and the improvement in microscratch resistance are associated with the formation of the ausferrite matrix, consisting of carbon-saturated retained austenite and acicular ferrite, in the austempered matrix. The maximum hardness was observed for samples austempered at 275°C, which resulted in the formation of very fine acicular ferrite. In addition, nanohardness values showed quite significant variation in the matrix due to the presence of acicular ferrite and carbon-saturated retained austenite. It was also observed that increasing the austempering temperature resulted in an increase in the volume of carbon-saturated retained austenite and a decrease in hardness values.

Keywords: austempered CGI, austempering, scratch testing, scratch plastic deformation, scratch hardness

Procedia PDF Downloads 136
1093 Simultaneous Determination of Methotrexate and Aspirin Using Fourier Transform Convolution Emission Data under Non-Parametric Linear Regression Method

Authors: Marwa A. A. Ragab, Hadir M. Maher, Eman I. El-Kimary

Abstract:

Co-administration of methotrexate (MTX) and aspirin (ASP) can cause a pharmacokinetic interaction and a subsequent increase in blood MTX concentrations, which may increase the risk of MTX toxicity. Therefore, it is important to develop a sensitive, selective, accurate and precise method for their simultaneous determination in urine. A new hybrid chemometric method has been applied to the emission response data of the two drugs. A spectrofluorimetric method for the determination of MTX through measurement of its acid-degradation product, 4-amino-4-deoxy-10-methylpteroic acid (4-AMP), was developed. Moreover, the acid-catalyzed degradation reaction enables the spectrofluorimetric determination of ASP through the formation of its active metabolite, salicylic acid (SA). The proposed chemometric method deals with convolution of the emission data using 8-point sin xi polynomials (discrete Fourier functions) after derivative treatment of these emission data. The first and second derivative curves (D1 and D2) were obtained first; convolution of these curves was then carried out to obtain the first and second derivative under Fourier function curves (D1/FF and D2/FF). This new application was used for the resolution of the overlapped emission bands of the degradation products of both drugs to allow their simultaneous indirect determination in human urine. Not only was this chemometric approach applied to the emission data, but the obtained data were also subjected to non-parametric linear regression analysis (Theil's method). The proposed method was fully validated according to the ICH guidelines, and it yielded linearity ranges of 0.05-0.75 and 0.5-2.5 µg mL⁻¹ for MTX and ASP, respectively. It was found that the non-parametric method was superior to the parametric one in the simultaneous determination of MTX and ASP after the chemometric treatment of the emission spectra of their degradation products. The work combines the advantages of derivative and convolution using discrete Fourier functions together with the reliability and efficacy of the non-parametric analysis of data. The achieved sensitivity, along with the low values of LOD (0.01 and 0.06 µg mL⁻¹) and LOQ (0.04 and 0.2 µg mL⁻¹) for MTX and ASP, respectively, obtained by the second derivative under Fourier functions (D2/FF), is promising and guarantees its application for monitoring the two drugs in patients' urine samples.
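A rough numerical sketch of this processing chain is given below: a synthetic emission band is differentiated (here with a Savitzky-Golay filter as a stand-in for the derivative treatment), convolved with an 8-point discrete sine kernel, and a Theil (non-parametric) calibration line is fitted with scipy. The spectrum, kernel definition and calibration points are illustrative assumptions rather than the published data.

```python
import numpy as np
from scipy.signal import savgol_filter
from scipy.stats import theilslopes

# Hypothetical emission spectrum (two overlapped bands), illustrative only.
wavelength = np.linspace(380, 520, 141)
spectrum = (np.exp(-((wavelength - 445) / 18) ** 2)
            + 0.3 * np.exp(-((wavelength - 470) / 25) ** 2))

# First-derivative treatment of the emission data.
d1 = savgol_filter(spectrum, window_length=11, polyorder=3, deriv=1)

# Convolution with an assumed 8-point discrete sine (Fourier) kernel.
i = np.arange(1, 9)
sin_kernel = np.sin(np.pi * i / 9)
d1_ff = np.convolve(d1, sin_kernel, mode="same")     # D1/FF curve

# Non-parametric (Theil) calibration: D1/FF amplitude at an analytical
# wavelength versus nominal concentration of hypothetical standards.
conc = np.array([0.05, 0.2, 0.4, 0.6, 0.75])          # ug/mL
response = np.array([0.9, 3.8, 7.6, 11.3, 14.4])      # hypothetical amplitudes
slope, intercept, lo, hi = theilslopes(response, conc)
print(f"Theil slope {slope:.2f} (95% CI {lo:.2f}-{hi:.2f}), intercept {intercept:.2f}")
```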

Keywords: chemometrics, emission curves, derivative, convolution, Fourier transform, human urine, non-parametric regression, Theil’s method

Procedia PDF Downloads 430
1092 Flow Reproduction Using Vortex Particle Methods for Wake Buffeting Analysis of Bluff Structures

Authors: Samir Chawdhury, Guido Morgenthal

Abstract:

The paper presents a novel extension of Vortex Particle Methods (VPM), in which the study aims to reproduce a template simulation of a complex flow field generated by impulsively started flow past an upstream bluff body at a certain Reynolds number Re. Vibration of a structural system under upstream wake flow is often considered its governing design criterion. Therefore, particular attention is given in this study to the reproduction of the wake flow simulation. The basic methodology for the implementation of the flow reproduction requires downstream velocity sampling from the template flow simulation; therefore, at particular distances from the upstream section, the instantaneous velocity components are sampled using a series of square sampling cells arranged vertically, each of which contains four velocity sampling points at its corners. Since the grid-free Lagrangian VPM algorithm discretises vorticity on particle elements, the method requires transformation of the velocity components into vortex circulation, and finally the template flow field is reproduced by seeding these vortex circulations, or particles, into a free stream flow. It is noteworthy that the vortex particles have to be released into the free stream at exactly the same rate as the velocity sampling. Studies have been carried out, specifically, in terms of different sampling rates and velocity sampling positions to find their effects on the flow reproduction quality. The quality assessments are mainly done, using a downstream flow monitoring profile, by comparing the characteristic wind flow profiles using several statistical turbulence measures. Additionally, the comparisons are performed using velocity time histories, snapshots of the flow fields, and the vibration of a downstream bluff section, by performing wake buffeting analyses of the section under the original and reproduced wake flows. A convergence study is performed to validate the method. The study also describes possibilities for achieving flow reproduction with less computational effort.
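A minimal sketch of the velocity-to-circulation step is given below: the circulation of one square sampling cell is approximated as the line integral of the sampled corner velocities around the cell boundary. The corner ordering, trapezoidal edge rule and numbers are assumptions for illustration, not the exact discretisation used in the paper.

```python
import numpy as np

def cell_circulation(u, v, h):
    """Circulation around one square sampling cell of side h from the velocity
    components (u, v) at its four corners, ordered counter-clockwise starting
    at the bottom-left corner.  Each edge contribution uses the mean
    tangential velocity of its two end points (trapezoidal rule)."""
    u = np.asarray(u, dtype=float)   # corners: [BL, BR, TR, TL]
    v = np.asarray(v, dtype=float)
    gamma  = 0.5 * (u[0] + u[1]) * h      # bottom edge, +x direction
    gamma += 0.5 * (v[1] + v[2]) * h      # right edge,  +y direction
    gamma -= 0.5 * (u[2] + u[3]) * h      # top edge,    -x direction
    gamma -= 0.5 * (v[3] + v[0]) * h      # left edge,   -y direction
    return gamma

# Example: a sampled cell in a shear layer (hypothetical values, m/s and m).
print(cell_circulation(u=[1.0, 1.0, 0.2, 0.2], v=[0.0, 0.0, 0.0, 0.0], h=0.05))
# Each sampled circulation would then be released as a vortex particle into the
# free stream at the same rate at which the template flow was sampled.
```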

Keywords: vortex particle method, wake flow, flow reproduction, wake buffeting analysis

Procedia PDF Downloads 311
1091 Remote Sensing of Aerated Flows at Large Dams: Proof of Concept

Authors: Ahmed El Naggar, Homyan Saleh

Abstract:

Dams are crucial for flood control, water supply, and the creation of hydroelectric power. Every dam has a water conveyance system, such as a spillway, providing the safe discharge of catastrophic floods when necessary. Spillway design has historically been investigated in laboratory research owing to the absence of suitable full-scale flow monitoring equipment and to safety concerns. Prototype measurements of aerated flows are urgently needed to quantify projected scale effects and provide missing validation data for design guidelines and numerical simulations. In this work, an image-based investigation of free-surface flows on a tiered spillway was undertaken at laboratory scale (fixed camera installation) and at prototype scale (drone footage). The drone videos were generated using data from citizen science. The analyses permitted the measurement of the free-surface aeration inception point, air-water surface velocities, fluctuations, and residual energy at the chute's downstream end from a remote site. The prototype observations offered full-scale proof of concept, while laboratory results were efficiently confirmed against invasive phase-detection probe data. This paper stresses the efficacy of image-based analyses at prototype spillways. It highlights how citizen science data may enable academics to better understand real-world air-water flow dynamics, and it offers a framework for collecting long-missing prototype data.
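As one way to obtain air-water surface velocities from such footage, the sketch below runs dense Farneback optical flow on consecutive drone frames and scales pixel displacements by the frame rate and an assumed ground sampling distance; the file name, parameters and scaling are illustrative assumptions rather than the processing chain used in the study.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("spillway_drone.mp4")   # hypothetical footage
fps, gsd = 30.0, 0.01                          # assumed frame rate and m/pixel

ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
speeds = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Dense Farneback optical flow between consecutive frames (pixels/frame)
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    px_per_frame = np.linalg.norm(flow, axis=2)
    speeds.append(px_per_frame.mean() * gsd * fps)   # scene-average speed, m/s
    prev_gray = gray
cap.release()
print("mean surface velocity estimate: %.2f m/s" % np.mean(speeds))
```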

Keywords: remote sensing, aerated flows, large dams, proof of concept, dam spillways, air-water flows, prototype operation, remote sensing, inception point, optical flow, turbulence, residual energy

Procedia PDF Downloads 92
1090 Study of Error Analysis and Sources of Uncertainty in the Measurement of Residual Stresses by the X-Ray Diffraction Technique

Authors: E. T. Carvalho Filho, J. T. N. Medeiros, L. G. Martinez

Abstract:

Residual stresses are self-equilibrating stresses in a rigid body that act on the microstructure of the material without the application of an external load. They are elastic stresses and can be induced by mechanical, thermal and chemical processes, causing a deformation gradient in the crystal lattice and favoring premature failure in mechanical components. The search for measurements with good reliability has been of great importance for the manufacturing industries. Several methods are able to quantify these stresses according to physical principles and the response of the mechanical behavior of the material. The X-ray diffraction technique is one of the most sensitive to small variations of the crystalline lattice, since the X-ray beam interacts with the interplanar distance. Being a very sensitive technique, it is also susceptible to variations in measurements, requiring a study of the factors that influence the final result of the measurement. Instrumental and operational factors, form deviations of the samples, and the geometry of the analyses are some variables that need to be considered and analyzed in order to obtain the true measurement. The aim of this work is to analyze the sources of errors inherent to the residual stress measurement process by the X-ray diffraction technique, making an interlaboratory comparison to verify the reproducibility of the measurements. In this work, two specimens were machined, differing from each other by the surface finishing: grinding and polishing. Additionally, iron powder with a particle size of less than 45 µm was selected in order to serve as a reference for the tests (as recommended by the ASTM E915 standard). To verify the deviations caused by the equipment, the specimens were positioned and, under the same analysis conditions, seven measurements were carried out at 11 ψ tilts. To verify sample positioning errors, seven measurements were performed by repositioning the sample for each measurement. To check geometry errors, measurements were repeated for the Bragg-Brentano and parallel-beam geometries. In order to verify the reproducibility of the method, the measurements were performed in two different laboratories with different equipment. The results were statistically analyzed and the errors quantified.
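For reference, the sketch below reproduces the classical sin²ψ evaluation that underlies such measurements: lattice strain is regressed against sin²ψ over the 11 tilts and converted to stress through the elastic constants. All numbers, including the simulated noise, are illustrative and are not data from the interlaboratory comparison.

```python
import numpy as np

E, nu = 210e9, 0.30                       # assumed elastic constants of steel, Pa
psi = np.radians(np.linspace(0, 45, 11))  # the 11 psi tilts mentioned above

d0 = 1.1702e-10                           # assumed strain-free interplanar spacing, m
true_sigma = -250e6                       # assumed residual stress, Pa (biaxial, sigma1=sigma2)
d = d0 * (1 + (1 + nu) / E * true_sigma * np.sin(psi) ** 2
            - nu / E * 2 * true_sigma)
d += np.random.default_rng(1).normal(0, 2e-15, d.size)   # simulated measurement noise

strain = (d - d0) / d0
slope, _ = np.polyfit(np.sin(psi) ** 2, strain, 1)   # d(strain)/d(sin^2 psi)
sigma = slope * E / (1 + nu)              # recovered stress component
print("estimated residual stress: %.0f MPa" % (sigma / 1e6))
```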

Keywords: residual stress, x-ray diffraction, repeatability, reproducibility, error analysis

Procedia PDF Downloads 181
1089 Development of an Atmospheric Radioxenon Detection System for Nuclear Explosion Monitoring

Authors: V. Thomas, O. Delaune, W. Hennig, S. Hoover

Abstract:

Measurement of the radioactive isotopes of atmospheric xenon is used to detect, locate and identify any confined nuclear tests as part of the Comprehensive Nuclear Test-Ban Treaty (CTBT). In this context, the French Alternative Energies and Atomic Energy Commission (CEA) has developed a fixed system, the SPALAX process, to continuously measure the concentration of these fission products. During its atmospheric transport, the radioactive xenon undergoes a significant dilution between the source point and the measurement station. Given the distances between the fixed stations located all over the globe, the typical volume activities measured are near 1 mBq m⁻³. To avoid the constraints induced by atmospheric dilution, the development of a mobile detection system is in progress; this system will allow on-site measurements in order to confirm or refute a suspicious measurement detected by a fixed station. Furthermore, this system will use the beta/gamma coincidence measurement technique in order to drastically reduce the environmental background (which masks such activities). The detector prototype consists of a gas cell surrounded by two large silicon wafers, coupled with two square NaI(Tl) detectors. The gas cell has a sample volume of 30 cm³, and the silicon wafers are 500 µm thick with an active surface area of 3600 mm². In order to minimize leakage current, each wafer has been segmented into four independent silicon pixels. This cell is sandwiched between two low-background NaI(Tl) detectors (70×70×40 mm³ crystals). The expected Minimum Detectable Concentration (MDC) for each radioxenon isotope is of the order of 1-10 mBq m⁻³. Three 4-channel digital acquisition modules (Pixie-NET) are used to process all the signals. Time synchronization is ensured by a dedicated PTP network, using the IEEE 1588 Precision Time Protocol. We would like to present this system from its simulation to the laboratory tests.
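A toy sketch of offline beta/gamma coincidence building on a common time base is shown below; the one-microsecond window and the event lists are illustrative assumptions, not parameters of the Pixie-NET acquisition.

```python
import numpy as np

def coincidences(t_beta, t_gamma, window=1e-6):
    """For every beta timestamp, look for a gamma timestamp within +/- window
    seconds on the shared (PTP-synchronized) time base.  Timestamps are in
    seconds; the window value is an illustrative assumption."""
    t_beta = np.sort(np.asarray(t_beta, dtype=float))
    t_gamma = np.sort(np.asarray(t_gamma, dtype=float))
    idx = np.searchsorted(t_gamma, t_beta)            # nearest gamma on the right
    pairs = []
    for i, tb in zip(idx, t_beta):
        for j in (i - 1, i):                          # check both neighbours
            if 0 <= j < t_gamma.size and abs(t_gamma[j] - tb) <= window:
                pairs.append((float(tb), float(t_gamma[j])))
                break
    return pairs

# Hypothetical event lists from the silicon pixels (beta) and NaI(Tl) (gamma).
print(coincidences([0.10, 0.40, 0.90], [0.1000003, 0.65, 0.8999999]))
```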

Keywords: beta/gamma coincidence technique, low level measurement, radioxenon, silicon pixels

Procedia PDF Downloads 126
1088 Comparison of 18F-FDG and 11C-Methionine PET-CT for Assessment of Response to Neoadjuvant Chemotherapy in Locally Advanced Breast Carcinoma

Authors: Sonia Mahajan Dinesh, Anant Dinesh, Madhavi Tripathi, Vinod Kumar Ramteke, Rajnish Sharma, Anupam Mondal

Abstract:

Background: Neo-adjuvant chemotherapy plays an important role in the treatment of breast cancer by decreasing the tumour load, and it offers an opportunity to evaluate the response of the primary tumour to chemotherapy. Standard anatomical imaging modalities are unable to accurately reflect the response to chemotherapy until several cycles of drug treatment have been completed. Metabolic imaging using tracers like 18F-fluorodeoxyglucose (FDG) as a marker of glucose metabolism, or amino acid tracers like L-methyl-11C methionine (MET), has a potential role in the measurement of treatment response. In this study, our objective was to compare these two PET tracers for the assessment of response to neoadjuvant chemotherapy in locally advanced breast carcinoma. Methods: In our prospective study, 20 female patients with histologically proven locally advanced breast carcinoma underwent PET-CT imaging using FDG and MET before and after three cycles of neoadjuvant chemotherapy (CAF regimen). Thereafter, all patients were taken for MRM (modified radical mastectomy), and the resected specimen was sent for histopathological analysis. Tumour response to the neoadjuvant chemotherapy was evaluated by PET-CT imaging using PERCIST criteria and correlated with the histological results. The calculated responses were compared for statistical significance using a paired t-test. Results: Mean SUVmax for the primary lesion on FDG PET and MET PET was 15.88±11.12 and 5.01±2.14, respectively (p<0.001), and for axillary lymph nodes was 7.61±7.31 and 2.75±2.27, respectively (p=0.001). A statistically significant response in the primary tumour and axilla was noted on both FDG and MET PET after three cycles of NAC. Complete response in the primary tumour was seen in only 1 patient on FDG and 7 patients on MET PET (p=0.001), whereas there was no complete histological resolution of the tumour in any patient. Responses to therapy in the axillary nodes noted on both PET scans were similar (p=0.45) and correlated well with the histological findings. Conclusions: For the primary breast tumour, FDG PET has a higher sensitivity and accuracy than MET PET, and for the axilla both have comparable sensitivity and specificity. FDG PET shows higher target-to-background ratios, so response is better predicted for the primary breast tumour and axilla. Also, FDG PET is widely available and has the advantage of whole-body evaluation in one study.
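For illustration, the minimal sketch below applies the paired t-test mentioned above to hypothetical pre- and post-chemotherapy SUVmax values and reports the PERCIST-style percentage change per lesion; the numbers are invented, not the study data.

```python
import numpy as np
from scipy.stats import ttest_rel

# Hypothetical SUVmax values for the same patients before and after NAC.
suv_pre  = np.array([18.2, 9.5, 24.1, 12.7, 30.3, 7.9])
suv_post = np.array([ 6.1, 3.2, 10.4,  4.8, 12.9, 2.5])

t_stat, p_value = ttest_rel(suv_pre, suv_post)   # paired t-test
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# PERCIST-style percentage change per lesion
delta = 100 * (suv_post - suv_pre) / suv_pre
print(np.round(delta, 1))
```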

Keywords: 11C-methionine, 18F-FDG, breast carcinoma, neoadjuvant chemotherapy

Procedia PDF Downloads 510
1087 The Effectiveness of Group Logotherapy on Meaning and Quality of Life of Women in an Old Age Home

Authors: Sophia Cyril Vincent

Abstract:

Background: As per the Indian Census 2011, there are nearly 104 million elderly people aged above 60 years (53 million females and 51 million males), and the count is expected to reach 173 million by the end of 2026. Nearly 5.5% of women and 1.5% of men live alone.1 In India, even though it is the moral duty of children to take care of aged parents, many elders end up in old age homes due to social transformation factors like the mushrooming of nuclear families, migration of children, cultural echoes, and differences in mindset and values. There are nearly 728 old age homes across the country, of which 78 old age homes with approximately 3000 residents are in Bangalore alone.2 The existing literature shows that elderly women residing in old age homes experience challenges like loneliness, health issues, rejection by children, grief, death anxiety, etc., which affect their mental and physical wellbeing in numerous and tangible ways.3 Hence, a good and cost-effective way to improve the meaning and quality of life among elderly females is logotherapy, a type of psychotherapeutic analysis and treatment that draws on the motivating and driving force4 within the human experience to lead a decent life. Aim: The current research is aimed at studying the effectiveness of a logotherapy intervention on meaning and quality of life among elderly women in old age homes. Samples: 200 women aged above 60 years and staying in an old age home for more than 1 year were randomly allocated to the control group and the experimental group. Methodology: Using the Meaning in Life Questionnaire (MLQ) and the World Health Organization Quality of Life (WHOQOL) questionnaire, meaning and quality of life were assessed among the women of both groups. An intensive five-day logotherapy and meaning-in-life program was provided to the experimental group, while the control group received no treatment. Result: Under analysis. Conclusion: It is the right of the elderly woman to lead a happy and peaceful life until her death, irrespective of her place of residence. Hence, continuous monitoring and effective management are necessary for elderly women.

Keywords: quality of life, meaning of life, logo therapy, old age home

Procedia PDF Downloads 204
1086 Infection Risk of Fecal Coliform Contamination in Drinking Water Sources of Urban Slum Dwellers: Application of Quantitative Microbiological Risk Assessment

Authors: Sri Yusnita Irda Sari, Deni Kurniadi Sunjaya, Ardini Saptaningsih Raksanagara

Abstract:

Water is one of the fundamental basic needs of human life, particularly as a drinking water source. Although water quality is improving, fecal contamination of water is still found around the world, especially in the slum areas of low- and middle-income countries. Contamination of the drinking water sources of urban slum dwellers increases the risk of waterborne diseases. Low levels of sanitation and poor drinking water supply are known risk factors for diarrhea; moreover, bacteria-contaminated drinking water sources are the main cause of diarrhea in developing countries. This study aimed to assess the risk of infection due to fecal coliform contamination in various drinking water sources in an urban area by applying Quantitative Microbiological Risk Assessment (QMRA). A cross-sectional survey was conducted from August to October 2015. Water samples were taken by simple random sampling from households in the Cikapundung river basin, one of the urban slum areas in the center of Bandung city, Indonesia. In total, 379 water samples from 199 households and 15 common wells were tested. Half of the households used treated drinking water from gallon containers, mostly refill gallons produced at drinking water refill stations. The others used raw water sources that require treatment before consumption as drinking water, such as tap water, boreholes, dug wells and spring water. The annual risk of infection due to fecal coliform contamination, from highest to lowest, was dug wells (1127.9 × 10⁻⁵), spring water (49.7 × 10⁻⁵), boreholes (1.383 × 10⁻⁵) and tap water (1.121 × 10⁻⁵). The annual risk of infection for refill drinking water was 1.577 × 10⁻⁵, which is comparable to boreholes and tap water. Household water treatment and safe storage to make raw water sources drinkable are essential to reduce the risk of waterborne diseases. Strong regulation and intensive monitoring of refill water gallon quality should be prioritized by the government; moreover, the distribution of tap water should be made more accessible and affordable, especially in urban slum areas.
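A minimal sketch of the standard QMRA chain implied above: exposure dose from measured concentration and daily ingestion, daily infection probability from a dose-response model, and annual risk from repeated daily exposures. The beta-Poisson form and all parameter values are illustrative assumptions, not the dose-response model or inputs actually used in the study.

```python
# Sketch of a QMRA chain: exposure dose -> daily infection risk -> annual risk.
# Dose-response parameters and exposure values are illustrative placeholders.

def beta_poisson_risk(dose, alpha=0.155, n50=2.11e6):
    """Approximate beta-Poisson daily infection probability for a given dose."""
    return 1.0 - (1.0 + dose * (2.0 ** (1.0 / alpha) - 1.0) / n50) ** (-alpha)

def annual_risk(daily_risk, days=365):
    """Annual probability of at least one infection from independent daily exposures."""
    return 1.0 - (1.0 - daily_risk) ** days

# Hypothetical exposure: organisms per litre in the source and litres ingested per day
concentration = 120.0   # CFU/L (illustrative)
volume = 1.0            # L/day of untreated water ingested (illustrative)
dose = concentration * volume

p_daily = beta_poisson_risk(dose)
print(f"daily risk = {p_daily:.2e}, annual risk = {annual_risk(p_daily):.2e}")
```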

Keywords: drinking water, quantitative microbiological risk assessment, slum, urban

Procedia PDF Downloads 281
1085 Sedimentological and Geochemical Characteristics of Aeolian Sediments and Their Implication for Sand Origin in the Yarlung Zangbo River Valley, Southern Qinghai-Tibetan Plateau

Authors: Na Zhou, Chun-Lai Zhang, Qing Li, Bingqi Zhu, Xun-Ming Wang

Abstract:

The understanding of the dynamics of aeolian sand in the Yarlung Zangbo River Valley (YLZBV), southern Qinghai-Tibetan Plateau, including its origins, transportation and deposition, remains preliminary. In this study, we investigated the origin of aeolian sediments in the YLZBV by analyzing the grain-size distribution and geochemical composition of dune sediments collected from the wide river terraces. The major purpose was to characterize the sedimentological and geochemical compositions of these aeolian sediments, trace their sources, and understand the influencing factors. The grain-size and geochemistry variations, which showed a significant correlation between grain-size distribution and element abundances, provide strong evidence that an important part of the aeolian sediments in the downstream areas was first delivered from the upper reaches by intense fluvial processes. The sediments subsequently underwent significant mixing with local inputs and were reworked by regional wind transport. The diverse compositions and close associations in the major and trace element geochemistry between the upstream and downstream aeolian sediments and the local detrital rocks collected from the surrounding mountains suggest that the upstream aeolian sediments originated from various close-range rock types and experienced intensive mixing via aeolian-fluvial dynamics. The sand mass transported by water and wind was roughly estimated to quantify the interplay between the aeolian and fluvial processes that control sediment transport and yield and ultimately shape the aeolian landforms along the main stem of the YLZBV.
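A small sketch of the correlation step mentioned above: tabulating grain-size fractions and element abundances per sample and computing a Pearson correlation matrix between them. The data frame and column names below are hypothetical placeholders, not the study's measurements.

```python
# Sketch: correlating grain-size fractions with element abundances across samples.
# All values and column names are hypothetical placeholders.
import pandas as pd

samples = pd.DataFrame({
    "fine_sand_pct": [42.1, 55.3, 38.7, 61.2, 47.9],   # grain-size fraction (%)
    "silt_clay_pct": [12.4, 8.1, 15.6, 6.3, 10.2],
    "SiO2_pct":      [68.2, 72.5, 65.1, 74.0, 70.3],   # major element (wt%)
    "Zr_ppm":        [180, 240, 150, 265, 205],        # trace element (ppm)
})

# Pearson correlation matrix between grain-size and geochemical variables
print(samples.corr(method="pearson").round(2))
```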

Keywords: grain size distribution, geochemistry, wind and water load, sand source, Yarlung Zangbo River Valley

Procedia PDF Downloads 97
1084 Multivariate Statistical Analysis of Heavy Metals Pollution of Dietary Vegetables in Swabi, Khyber Pakhtunkhwa, Pakistan

Authors: Fawad Ali

Abstract:

Toxic heavy metal contamination has a negative impact on soil quality, which ultimately pollutes the agricultural system. In the current work, we analyzed the uptake of various heavy metals by dietary vegetables grown in wastewater-irrigated areas of Swabi city. The soil and vegetable samples were analyzed for the heavy metals Cd, Cr, Mn, Fe, Ni, Cu, Zn and Pb using an atomic absorption spectrophotometer. High levels of metals were found in wastewater-irrigated soil and vegetables in the study area. In particular, the concentrations of Pb and Cd in the dietary vegetables exceeded the permissible limits of the World Health Organization. A substantial positive correlation was found between soil and vegetable contamination. The transfer factor for several metals, including Cr, Zn, Mn, Ni, Cd and Cu, was greater than 0.5, which indicates enhanced accumulation of these metals due to contamination by domestic discharges and industrial effluents. Linear regression analysis indicated a significant correlation (r = 0.964, P ≤ 0.001) between the concentrations of Pb, Cr, Cd, Ni, Zn, Cu, Fe and Mn in vegetables and those in soil. Abelmoschus esculentus showed a Health Risk Index (HRI) for Pb greater than 1 in both adults and children. Source identification by Principal Component Analysis (PCA) and Cluster Analysis (CA) showed that groundwater and soil were being polluted by trace metals released from industries and domestic wastes. Hierarchical cluster analysis (HCA) divided the metals into two clusters for wastewater and soil but into five clusters for the soil of the control area. For wastewater, PCA extracted two factors contributing 61.086% and 16.229%, respectively, of the total variance of 77.315%. For soil samples, PCA extracted two factors with a total variance of 79.912%, with factor 1 and factor 2 contributing 63.889% and 16.023%, respectively. PCA for the subsoil extracted two factors with a total variance of 76.136%, with factor 1 accounting for 61.768% and factor 2 for 14.368% of the total variance. The high pollution load index for vegetables in the study area, driven by metal-polluted soil, highlights the need for proper legislation to prevent further contamination of vegetables. This work also reveals serious health risks to the human population of the study area.
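The sketch below illustrates the three quantitative steps named in the abstract: transfer factor (vegetable-to-soil concentration ratio), a simplified Health Risk Index (estimated daily intake divided by an oral reference dose), and PCA for source identification. All concentrations, the intake and body-weight assumptions, and the reference-dose values are illustrative placeholders, not the study's measured data or exact computation.

```python
# Sketch: transfer factor (TF = C_vegetable / C_soil), a simplified Health Risk
# Index (HRI = estimated daily intake / oral reference dose), and PCA.
# All numbers below are hypothetical placeholders.
import numpy as np
from sklearn.decomposition import PCA

metals = ["Pb", "Cd", "Cr", "Ni", "Zn", "Cu", "Fe", "Mn"]
c_soil = np.array([48.0, 2.1, 35.0, 28.0, 95.0, 40.0, 5200.0, 310.0])  # mg/kg, hypothetical
c_veg  = np.array([1.8, 0.4, 20.0, 16.0, 60.0, 24.0, 180.0, 45.0])     # mg/kg, hypothetical

transfer_factor = c_veg / c_soil

# Simplified HRI assuming 0.345 kg/day vegetable intake and 60 kg body weight (assumptions)
rfd = np.array([0.004, 0.001, 1.5, 0.02, 0.3, 0.04, 0.7, 0.14])  # placeholder oral RfDs, mg/kg/day
daily_intake = c_veg * 0.345 / 60.0
hri = daily_intake / rfd

# PCA on a (samples x metals) concentration matrix for source identification;
# random numbers stand in for real sample data here.
X = np.random.default_rng(0).normal(size=(30, len(metals)))
explained = PCA(n_components=2).fit(X).explained_variance_ratio_

print("transfer factors:", dict(zip(metals, transfer_factor.round(2))))
print("metals with HRI > 1:", [m for m, h in zip(metals, hri) if h > 1])
print("PCA variance explained:", explained.round(3))
```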

Keywords: health risk, vegetables, wastewater, atomic absorption spectrophotometer

Procedia PDF Downloads 70
1083 E4D-MP: Time-Lapse Multiphysics Simulation and Joint Inversion Toolset for Large-Scale Subsurface Imaging

Authors: Zhuanfang Fred Zhang, Tim C. Johnson, Yilin Fang, Chris E. Strickland

Abstract:

A variety of geophysical techniques are available to image the opaque subsurface with little or no contact with the soil. It is common to conduct time-lapse surveys of different types at a given site to improve the results of subsurface imaging. Regardless of the chosen survey methods, processing the massive amount of survey data is often a challenge. Currently available software applications are generally based on one-dimensional assumptions and designed for desktop personal computers. Hence, they are usually incapable of imaging three-dimensional (3D) subsurface processes and variables at reasonable spatial scales, and the maximum amount of data that can be inverted simultaneously is often very small due to the limited capability of personal computers. High-performance, integrated software that enables real-time integration of multiple geophysical methods is therefore needed. E4D-MP enables the integration and inversion of time-lapse, large-scale survey data from geophysical methods. Using supercomputing capability and parallel computation algorithms, E4D-MP is capable of processing data across vast spatiotemporal scales in near real time. The main code and the modules of E4D-MP for inverting individual or combined data sets of time-lapse 3D electrical resistivity, spectral induced polarization, and gravity surveys have been developed and demonstrated for subsurface imaging. E4D-MP provides the capability to image the processes (e.g., liquid or gas flow, solute transport, cavity development) and subsurface properties (e.g., rock/soil density, conductivity) critical for successful control of environmental engineering efforts such as environmental remediation, carbon sequestration, geothermal exploration, and mine land reclamation, among others.
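To make the time-lapse inversion idea concrete, here is a toy linear example: the repeat-survey model is regularized toward the baseline model so that only data-supported changes appear. This is a generic Tikhonov sketch under assumed operators and noise levels, not E4D-MP's actual algorithm, API, or scale.

```python
# Toy illustration of time-lapse inversion: regularize the repeat-survey model
# toward the baseline model. Generic Tikhonov sketch, not E4D-MP itself.
import numpy as np

rng = np.random.default_rng(1)
G = rng.normal(size=(40, 20))           # hypothetical forward/sensitivity operator
m_true0 = np.zeros(20)                  # baseline model
m_true1 = m_true0.copy()
m_true1[8:12] = 1.0                     # localized change between surveys (e.g., a plume)

d0 = G @ m_true0 + 0.01 * rng.normal(size=40)   # baseline survey data
d1 = G @ m_true1 + 0.01 * rng.normal(size=40)   # repeat survey data

def tikhonov(G, d, m_ref, lam):
    """Minimize ||G m - d||^2 + lam * ||m - m_ref||^2 via augmented least squares."""
    A = np.vstack([G, np.sqrt(lam) * np.eye(G.shape[1])])
    b = np.concatenate([d, np.sqrt(lam) * m_ref])
    return np.linalg.lstsq(A, b, rcond=None)[0]

m0 = tikhonov(G, d0, np.zeros(20), lam=0.1)   # invert baseline survey
m1 = tikhonov(G, d1, m0, lam=0.1)             # invert repeat survey, referenced to baseline
print("maximum recovered change:", np.round((m1 - m0).max(), 2))
```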

Keywords: gravity survey, high-performance computing, sub-surface monitoring, electrical resistivity tomography

Procedia PDF Downloads 157
1082 Energy Intensity: A Case of Indian Manufacturing Industries

Authors: Archana Soni, Arvind Mittal, Manmohan Kapshe

Abstract:

Energy has been recognized as one of the key inputs for the economic growth and social development of a country. High economic growth naturally means a high level of energy consumption. However, in the present energy scenario, where there is a wide gap between energy generation and energy consumption, it is extremely difficult to match demand with supply. As India is one of the largest and most rapidly growing developing countries, there is an impending energy crisis that requires immediate measures. In this situation, the concept of Energy Intensity comes under special focus as a means to ensure energy security in an environmentally sustainable way. Energy Intensity is defined as the energy consumed per unit of output in the context of industrial energy practices. It is a key determinant of projections of future energy demand, which assist in policy making. Energy Intensity is inversely related to energy efficiency: the less energy required to produce a unit of output or service, the greater the energy efficiency. The Energy Intensity of Indian manufacturing industries is among the highest in the world and reflects enormous energy consumption. Hence, reducing the Energy Intensity of Indian manufacturing industries is one of the best strategies to achieve a lower level of energy consumption and conserve energy. This study analyses the factors that influence the Energy Intensity of Indian manufacturing firms and how they can be used to reduce it. The paper considers six of the largest energy-consuming manufacturing industries in India, viz. the aluminium, cement, iron and steel, textile, fertilizer and paper industries, and conducts a detailed Energy Intensity analysis using data from the PROWESS database of the Centre for Monitoring Indian Economy (CMIE). A total of twelve independent explanatory variables, based on factors such as raw material, labour, machinery, repair and maintenance, production technology, outsourcing, research and development, number of employees, wages paid, profit margin and capital invested, have been taken into consideration for the analysis.
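A minimal sketch of the definition and analysis described above: Energy Intensity computed as energy consumed per unit of output, then regressed on a couple of explanatory variables. The firm records, variable names, and units are hypothetical placeholders, not fields of the PROWESS database or the paper's twelve variables.

```python
# Sketch: firm-level Energy Intensity (energy per unit of output) and a simple
# least-squares regression on explanatory variables. All data are hypothetical.
import numpy as np
import pandas as pd

firms = pd.DataFrame({
    "energy_consumed_GJ": [5.2e5, 8.9e5, 3.1e5, 7.4e5],
    "output_value_mINR":  [1200.0, 1500.0, 900.0, 1100.0],
    "rnd_intensity":      [0.010, 0.004, 0.020, 0.006],   # R&D spend / sales (placeholder)
    "capital_per_worker": [2.1, 3.4, 1.5, 2.8],           # mINR per employee (placeholder)
})

# Energy Intensity: energy consumed per unit of output
firms["energy_intensity"] = firms["energy_consumed_GJ"] / firms["output_value_mINR"]

# Ordinary least squares of Energy Intensity on the explanatory variables
X = np.column_stack([np.ones(len(firms)), firms["rnd_intensity"], firms["capital_per_worker"]])
y = firms["energy_intensity"].to_numpy()
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
print("intercept and slopes:", np.round(coeffs, 4))
```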

Keywords: energy intensity, explanatory variables, manufacturing industries, PROWESS database

Procedia PDF Downloads 329