Search results for: voltage-time curve
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1026

816 Pharmacokinetic Study of Clarithromycin in Human Females of the Pakistani Population

Authors: Atifa Mushtaq, Tanweer Khaliq, Hafiz Alam Sher, Asia Farid, Anila Kanwal, Maliha Sarfraz

Abstract:

The study was designed to assess the various pharmacokinetic parameters of a commercially available clarithromycin tablet (Klaricid® 250 mg, Abbott, Pakistan) in plasma samples of healthy adult female volunteers by applying a rapid, sensitive and accurate HPLC-UV analytical method. The human plasma samples were evaluated using an isocratic High Performance Liquid Chromatography (HPLC) system (Sykam) consisting of a pump, a C18 column (250×4.6 mm, 5 µm) and a UV detector. The mobile phase, comprising potassium dihydrogen phosphate (50 mM, pH 6.8, containing 0.7% triethylamine), methanol and acetonitrile (30:25:45, v/v/v), was delivered at a flow rate of 1 mL/min with an injection volume of 20 µL. Detection was performed at λmax 275 nm. By applying this method, the important pharmacokinetic parameters Cmax, Tmax, area under the curve (AUC), half-life (t1/2), volume of distribution (Vd) and clearance (Cl) were measured. The pharmacokinetic parameters of clarithromycin were calculated with APO pharmacological analysis software. The maximum plasma concentration Cmax was 2.78 ± 0.33 µg/mL, the time to reach maximum concentration Tmax was 2.82 ± 0.11 h, and the area under the curve (AUC) was 20.14 h·µg/mL. The mean ± SD values obtained for the pharmacokinetic parameters showed a significant difference from those reported in previous literature, which emphasizes the need for dose adjustment of clarithromycin in the Pakistani population.
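
As a quick illustration of how the reported non-compartmental parameters follow from a concentration-time profile, here is a minimal sketch using the trapezoidal rule; the sampling times and concentrations are invented for illustration, not the study's data:

```python
import numpy as np

# Hypothetical plasma concentration-time profile (illustrative, not study data)
t = np.array([0.0, 0.5, 1.0, 2.0, 3.0, 4.0, 6.0, 8.0, 12.0])  # hours
c = np.array([0.0, 0.9, 1.8, 2.6, 2.5, 2.0, 1.2, 0.7, 0.2])   # µg/mL

cmax = c.max()                                   # maximum plasma concentration
tmax = t[c.argmax()]                             # time of maximum concentration
auc = np.sum((c[1:] + c[:-1]) / 2 * np.diff(t))  # AUC(0-t), trapezoidal rule

# Terminal half-life from a log-linear fit of the last sampling points
slope, _ = np.polyfit(t[-4:], np.log(c[-4:]), 1)
t_half = np.log(2) / -slope

print(f"Cmax={cmax:.2f} µg/mL, Tmax={tmax:.1f} h, "
      f"AUC={auc:.2f} h·µg/mL, t1/2={t_half:.2f} h")
```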

Keywords: pharmacokinetics, clarithromycin, HPLC, Pakistan

Procedia PDF Downloads 70
815 Improvement of Elliptic Curve Cryptography over a Ring

Authors: Abdelhakim Chillali, Abdelhamid Tadmori, Muhammed Ziane

Abstract:

In this article, we study the elliptic curve defined over the ring An and define the mathematical operations of ECC, which provide high security and an advantage for wireless applications compared to other asymmetric-key cryptosystems.
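
For readers unfamiliar with the underlying group operations of ECC, the toy sketch below implements point addition and double-and-add scalar multiplication over a small prime field. It only illustrates the standard operations; it does not reproduce the paper's construction over the ring An, and the curve parameters are arbitrary:

```python
# Toy elliptic curve y^2 = x^3 + a*x + b (mod p) over a prime field F_p.
p, a, b = 97, 2, 3          # small illustrative parameters

def ec_add(P, Q):
    """Add points P and Q on the curve; None represents the point at infinity."""
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                                        # P + (-P) = infinity
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p   # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p          # chord slope
    x3 = (lam * lam - x1 - x2) % p
    y3 = (lam * (x1 - x3) - y1) % p
    return (x3, y3)

def ec_mul(k, P):
    """Scalar multiplication k*P by double-and-add."""
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P)
        P = ec_add(P, P)
        k >>= 1
    return R

G = (3, 6)   # on the curve: 6^2 = 36 and 27 + 6 + 3 = 36 (mod 97)
print(ec_mul(5, G))
```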

Keywords: elliptic curves, finite ring, cryptography, study

Procedia PDF Downloads 342
814 Tillage and Manure Effects on Water Retention and Van Genuchten Parameters in Western Iran

Authors: Azadeh Safadoust, Ali Akbar Mahboubi, Mohammad Reza Mosaddeghi, Bahram Gharabaghi

Abstract:

A study was conducted to evaluate the hydraulic properties of a sandy loam soil and corn (Zea mays L.) crop production in a short-term field experiment with combinations of tillage and manure treatments carried out in western Iran. Treatments included composted cattle manure application rates [0, 30, and 60 Mg (dry weight) ha⁻¹] and tillage systems [no-tillage (NT), chisel plowing (CP), and moldboard plowing (MP)] arranged in a split-plot design. The soil water characteristic curve (SWCC) and saturated hydraulic conductivity (Ks) were significantly affected by the manure and tillage treatments. At any matric suction, the soil water content was in the order of MP>CP>NT. At all matric suctions, the amount of water retained by the soil increased as the manure application rate increased (i.e., 60>30>0 Mg ha⁻¹). Similar to the tillage effects, the differences in water retained due to manure addition were smaller at high suctions than at low suctions. The changes in the SWCC caused by tillage methods and manure applications may be attributed to changes in the pore-size and aggregate-size distributions. Soil Ks was in the order of CP>MP>NT for the first two layers and in the order of MP>CP and NT for the deeper soil layer. The Ks also increased with increasing rates of manure application (i.e., 60>30>0 Mg ha⁻¹). This was due to the increase in total pore size and pore continuity.
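
Since the paper reports van Genuchten parameters, it helps to state the retention model itself. The sketch below evaluates the van Genuchten (1980) equation for an SWCC; the parameters are generic textbook values for a sandy loam, not the values fitted in this study:

```python
import numpy as np

def van_genuchten(h, theta_r, theta_s, alpha, n):
    """Van Genuchten (1980) water retention curve.
    h: matric suction (cm); theta_r/theta_s: residual/saturated water content;
    alpha (1/cm) and n (-): shape parameters, with m = 1 - 1/n."""
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * h) ** n) ** m

# Generic sandy-loam parameters (assumed, not the study's fitted values)
h = np.logspace(0, 4, 50)                      # suctions from 1 to 10^4 cm
theta = van_genuchten(h, 0.065, 0.41, 0.075, 1.89)
print(theta[:5])
```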

Keywords: corn, manure, saturated hydraulic conductivity, soil water characteristic curve, tillage

Procedia PDF Downloads 40
813 Destination Port Detection for Vessels: An Analytic Tool for Optimizing Port Authorities' Resources

Authors: Lubna Eljabu, Mohammad Etemad, Stan Matwin

Abstract:

Port authorities face many challenges in congested ports when allocating their resources to provide a safe and secure loading/unloading procedure for cargo vessels. Selecting a destination port is the decision of a vessel master, based on many factors such as weather, wavelength, and changes of priorities. Having access to a tool that leverages AIS messages to monitor vessels' movements and accurately predict their next destination port promotes an effective resource allocation process for port authorities. In this research, we propose a method, namely Reference Route of Trajectory (RRoT), to assist port authorities in predicting inflow and outflow traffic in their local environment by monitoring Automatic Identification System (AIS) messages. Our RRoT method creates a reference route based on historical AIS messages and uses trajectory similarity measures to identify the destination of a vessel from its recent movement. We evaluated five similarity measures: Discrete Fréchet Distance (DFD), Dynamic Time Warping (DTW), Partial Curve Mapping (PCM), Area between two curves (Area), and Curve Length (CL). Our experiments show that our method identifies the destination port with an accuracy of 98.97% and an F-measure of 99.08% using the Dynamic Time Warping (DTW) similarity measure.
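
DTW, the best-performing measure here, reduces to a simple dynamic program. A minimal sketch of the distance and of matching a recent track against candidate reference routes follows; the coordinates and port names are invented for illustration:

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic Time Warping distance between two 2-D trajectories
    (arrays of shape [n, 2], e.g., lat/lon points). Simple O(n*m) DP."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])   # pointwise distance
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Hypothetical AIS tracks: match a recent track against reference routes
recent = np.array([[0, 0], [1, 0.9], [2, 2.1], [3, 3.0]])
reference_routes = {"port_A": np.array([[0, 0], [1, 1], [2, 2], [3, 3]]),
                    "port_B": np.array([[0, 0], [1, -1], [2, -2], [3, -3]])}
print(min(reference_routes, key=lambda k: dtw_distance(recent, reference_routes[k])))
```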

Keywords: spatial temporal data mining, trajectory mining, trajectory similarity, resource optimization

Procedia PDF Downloads 81
812 Dynamic Effects of Energy Consumption, Economic Growth, International Trade and Urbanization on Environmental Degradation in Nigeria

Authors: Abdulkarim Yusuf

Abstract:

Motivation: A crucial but difficult goal for governments and policymakers in Nigeria in recent years has been the sustainability of economic growth. This goal must be accomplished by regulating or lowering greenhouse gas emissions, which calls for switching to a low- or zero-carbon production system. The lack of in-depth empirical studies on the environmental impact of socioeconomic variables in Nigeria and a number of unresolved issues from earlier research are what led to the current study. Objective: This study fills an important empirical gap by investigating the existence of an Environmental Kuznets Curve hypothesis and the long- and short-run dynamic impact of socioeconomic variables on ecological sustainability in Nigeria. Data and method: Annual time series data covering the period 1980 to 2020 and the Autoregressive Distributed Lag technique in the presence of structural breaks were adopted for this study. Results: The empirical findings support the existence of the environmental Kuznets curve hypothesis for Nigeria in the long and short run. Energy consumption and total imports exacerbate environmental deterioration in the long and short run, whereas total exports improve environmental quality in the long and short run. Financial development, which contributed to a conspicuous decrease in the level of environmental destruction in the long run, escalated it in the short run. In contrast, urbanization caused a significant increase in environmental damage in the long run but motivated a decrease in biodiversity loss in the short run. Implications: The government, policymakers, and all energy stakeholders should take additional measures to ensure the implementation and diversification of energy sources to accommodate more renewable energy sources that emit less carbon, in order to promote efficiency in Nigeria's production processes and lower carbon emissions. To promote the production and trade of environmentally friendly goods, they should also revise and strengthen environmental policies. With affordable, dependable, and sustainable energy use for higher productivity and inclusive growth, Nigeria will be able to achieve its long-term development goals of good health and wellbeing.

Keywords: economic growth, energy consumption, environmental degradation, environmental Kuznets curve, urbanization, Nigeria

Procedia PDF Downloads 9
811 The Relationship between Human Neutrophil Elastase Levels and Acute Respiratory Distress Syndrome in Patients with Thoracic Trauma

Authors: Wahyu Purnama Putra, Artono Isharanto

Abstract:

Thoracic trauma is trauma that hits the thoracic wall or intrathoracic organs, due to either blunt or sharp trauma. Thoracic trauma often causes impaired ventilation-perfusion due to damage to the lung parenchyma. This results in impaired tissue oxygenation, which is one of the causes of acute respiratory distress syndrome (ARDS). These changes are caused by the release of pro-inflammatory mediators, plasmatic proteins, and proteases into the alveolar space associated with ongoing edema, as well as oxidative products that ultimately result in severe inhibition of the surfactant system. This study aims to predict the incidence of acute respiratory distress syndrome (ARDS) through human neutrophil elastase levels by examining the relationship between plasma elastase levels and the incidence of ARDS in thoracic trauma patients in Malang. This study is an observational cohort study. Data analysis uses the Pearson correlation test and the receiver operating characteristic (ROC) curve. It can be concluded that there is a significant (p = 0.000, r = -0.988) relationship between elastase levels and BGA-3. If elastase levels are in the range of 23.79 ± 3.95, the patient will experience mild ARDS; if in the range of 57.68 ± 18.55, the patient will experience moderate ARDS; and if in the range of 107.85 ± 5.04, the patient will likely experience severe ARDS. Neutrophil elastase levels correlate with the severity of ARDS.

Keywords: ARDS, human neutrophil elastase, severity, thoracic trauma

Procedia PDF Downloads 107
810 Performance Assessment of Multi-Level Ensemble for Multi-Class Problems

Authors: Rodolfo Lorbieski, Silvia Modesto Nassar

Abstract:

Many supervised machine learning tasks require decision making across numerous different classes. Multi-class classification has several applications, such as face recognition, text recognition, and medical diagnostics. The objective of this article is to analyze an adapted method of Stacking in multi-class problems, which combines ensembles within the ensemble itself. For this purpose, a training scheme similar to Stacking was used, but with three levels, where the final decision-maker (level 2) performs its training by combining outputs from the tree-based pair of meta-classifiers (level 1) from Bayesian families, which are in turn trained by pairs of base classifiers (level 0) of the same family. This strategy seeks to promote diversity among the ensembles forming the level-2 meta-classifier. Three performance measures were used: (1) accuracy, (2) area under the ROC curve, and (3) time, for three factors: (a) datasets, (b) experiments, and (c) levels. To compare the factors, a three-way ANOVA test was executed for each performance measure, considering 5 datasets by 25 experiments by 3 levels. A triple interaction between factors was observed only in time. Accuracy and area under the ROC curve presented similar results, showing a double interaction between level and experiment, as well as for the dataset factor. It was concluded that level 2 had an average performance above the other levels and that the proposed method is especially efficient for multi-class problems when compared to binary problems.
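
As a point of reference for the basic idea, the sketch below builds a plain two-level stacked ensemble with scikit-learn. It does not reproduce the paper's three-level scheme or its family pairing; the dataset and estimators are arbitrary examples:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)            # a small multi-class dataset

base = [("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("nb", GaussianNB())]                # level-0 learners from two families
stack = StackingClassifier(estimators=base,
                           final_estimator=LogisticRegression(max_iter=1000))

print(cross_val_score(stack, X, y, cv=5).mean())   # accuracy of the ensemble
```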

Keywords: stacking, multi-layers, ensemble, multi-class

Procedia PDF Downloads 232
809 Vehicle Maneuverability on Horizontal Curves on Hilly Terrain: A Study on Shillong Highway

Authors: Surendra Choudhary, Sapan Tiwari

Abstract:

The driver has two fundamental duties: (i) controlling the vehicle's position along the longitudinal direction of movement and (ii) controlling its lateral position within the roadway width. These duties are interdependent and are concurrently referred to as two-dimensional driver behavior. One of the main problems facing driver behavior modeling is identifying the parameters that describe exemplary driving conduct and car maneuvers under distinct traffic circumstances. Still, to date, there is no well-accepted theory that can comprehensively model 2-D (longitudinal and lateral) driver conduct. The primary objective of this research is to explore the vehicle's lateral and longitudinal behavior in heterogeneous traffic conditions on horizontal curves, as well as the effect of road geometry on dynamic traffic parameters, i.e., car velocity and lateral placement. In this research, a thorough assessment of dynamic car parameters (speed, lateral acceleration, and turn radius) and horizontal curve road parameters (curvature radius and pavement friction), together with their interrelationships, is performed. The dynamic parameters of various types of car drivers are gathered using a VBOX GPS-based tool with high precision. The connection between dynamic car parameters and curve geometry is established after the removal of noise from the GPS trajectories. The major finding of the research is that cars maneuver at speeds, accelerations, and lateral deviations higher than the design limits on the studied curves of the highway, which can become lethal if the weather changes from dry to wet.
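
The elementary check behind such findings is the centripetal relation a = v²/R on a horizontal curve. A minimal sketch, with an assumed speed, curve radius, and comfort threshold chosen purely for illustration:

```python
def lateral_acceleration(speed_kmh, radius_m):
    """Lateral (centripetal) acceleration a = v^2 / R on a horizontal curve."""
    v = speed_kmh / 3.6                 # km/h -> m/s
    return v ** 2 / radius_m            # m/s^2

# Hypothetical check of a curve against an assumed comfort/design threshold
a = lateral_acceleration(speed_kmh=50, radius_m=60)
print(f"a_lat = {a:.2f} m/s^2", "-> exceeds limit" if a > 1.8 else "-> within limit")
```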

Keywords: geometry, maneuverability, terrain, trajectory, VBOX

Procedia PDF Downloads 111
808 A Novel Approach of NPSO on Flexible Logistic (S-Shaped) Model for Software Reliability Prediction

Authors: Pooja Rani, G. S. Mahapatra, S. K. Pandey

Abstract:

In this paper, we propose a novel approach combining neural network and particle swarm optimization methods for software reliability prediction. We first explain how to apply a compound function in a neural network so that we can derive a Flexible Logistic (S-shaped) Growth Curve (FLGC) model. This model mathematically represents software failure as a random process and can be used to evaluate software development status during testing. To avoid trapping in local minima, we applied the particle swarm optimization method to train the proposed model using failure test data sets. We derive our proposed model using computational intelligence modeling; thus, the proposed model becomes a Neuro-Particle Swarm Optimization (NPSO) model. We tested the model with different inertia weights for updating particle positions and velocities, and obtained results based on the best inertia weight; a comparison with personal-best-oriented PSO (pPSO) helps to choose the local best in the network neighborhood. The applicability of the proposed model is demonstrated through a real failure test data set. The results obtained from experiments show that the proposed model has fairly accurate prediction capability in software reliability.
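
To make the optimizer concrete, here is a minimal PSO sketch with a fixed inertia weight and cognitive/social coefficients. The toy objective merely stands in for the network's training error on failure data; all constants are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def pso(f, dim, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer: w is the inertia weight,
    c1/c2 the cognitive and social coefficients."""
    x = rng.uniform(-5, 5, (n_particles, dim))      # positions
    v = np.zeros_like(x)                            # velocities
    pbest, pval = x.copy(), np.apply_along_axis(f, 1, x)
    gbest = pbest[pval.argmin()]
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        val = np.apply_along_axis(f, 1, x)
        improved = val < pval
        pbest[improved], pval[improved] = x[improved], val[improved]
        gbest = pbest[pval.argmin()]
    return gbest, pval.min()

# Toy objective standing in for the fitting error of an S-shaped growth curve
print(pso(lambda p: np.sum((p - 1.0) ** 2), dim=3))
```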

Keywords: software reliability, flexible logistic growth curve model, software cumulative failure prediction, neural network, particle swarm optimization

Procedia PDF Downloads 316
807 Critical Study on the Sensitivity of Corrosion Fatigue Crack Growth Rate to Cyclic Waveform and Microstructure in Marine Steel

Authors: V. C. Igwemezie, A. N. Mehmanparast

Abstract:

The primary focus of this work is to understand how variations in microstructure and cyclic waveform affect corrosion fatigue crack growth (CFCG) in steel, especially in the Paris region of the da/dN vs. ΔK curve. This work is important because it provides fundamental information on the modelling, design, selection, and use of steels for various engineering applications in the marine environment. The authors' corrosion fatigue test data on normalized and thermomechanical control process (TMCP) ferritic-pearlitic steels were compared with several studies on different microstructures in the literature. The microstructures of these steels are radically different, and general comparative studies of the effect of microstructure on fatigue crack growth resistance in these materials are very scarce and, where available, are limited to a few studies. For purposes of engineering application, the results of this study show little dependency of the fatigue crack growth rate (FCGR) on yield strength, tensile strength, ductility, frequency, and stress ratio in the range 0.1-0.7. The nature of the steel microstructure appears to be a major factor in determining the rate at which fatigue cracks propagate over the entire da/dN vs. ΔK sigmoidal curve. The study also shows that the sine wave shape is the most damaging fatigue waveform for ferritic-pearlitic steels. This tends to suggest that testing under a sine waveform would be a conservative approach for the design of engineering structures, regardless of the actual service waveform.
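
In the Paris region the growth law is da/dN = C·(ΔK)^m, and fatigue life follows by integrating its reciprocal over the crack length. A minimal sketch with assumed constants (C, m, and the geometry factor Y), not the values fitted in this study:

```python
import numpy as np

C, m = 1e-11, 3.0        # assumed Paris constants (a in m, ΔK in MPa·sqrt(m))

def cycles_to_grow(a0, af, delta_sigma, Y=1.0, steps=10000):
    """Cycles to grow a crack from a0 to af (m) under stress range
    delta_sigma (MPa): integrate dN = da / (C * ΔK^m), ΔK = Y·Δσ·sqrt(pi·a)."""
    a = np.linspace(a0, af, steps)
    dK = Y * delta_sigma * np.sqrt(np.pi * a)
    dN_da = 1.0 / (C * dK ** m)                     # cycles per metre of growth
    return np.sum((dN_da[1:] + dN_da[:-1]) / 2 * np.diff(a))  # trapezoidal rule

print(f"{cycles_to_grow(1e-3, 1e-2, delta_sigma=100):.3e} cycles")
```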

Keywords: BS7910, corrosion-fatigue crack growth rate, cyclic waveform, microstructure, steel

Procedia PDF Downloads 114
806 Developing Pavement Structural Deterioration Curves

Authors: Gregory Kelly, Gary Chai, Sittampalam Manoharan, Deborah Delaney

Abstract:

A Structural Number (SN) can be calculated for a road pavement from the properties and thicknesses of the surface, base course, sub-base, and subgrade. Historically, the cost of collecting structural data has been very high. Data were initially collected using Benkelman Beams and now by Falling Weight Deflectometer (FWD). The structural strength of pavements weakens over time due to environmental and traffic loading factors but, due to a lack of data, no structural deterioration curve for pavements has been implemented in a Pavement Management System (PMS). The International Roughness Index (IRI) is a measure of the road longitudinal profile and has been used as a proxy for a pavement's structural integrity. This paper offers two conceptual methods to develop Pavement Structural Deterioration Curves (PSDC). In the first, structural data are grouped in sets by design Equivalent Standard Axles (ESA). An 'Initial' SN (ISN), Intermediate SNs (SNI), and a Terminal SN (TSN) are used to develop the curves. Using FWD data, the ISN is the SN after the pavement is rehabilitated (Financial Accounting 'Modern Equivalent'). Intermediate SNIs are SNs other than the ISN and TSN. The TSN is defined as the SN of the pavement when it was approved for pavement rehabilitation. The second method uses Traffic Speed Deflectometer (TSD) data. The road network, already divided into road blocks, is grouped by traffic loading. For each traffic loading group, road blocks that have had a recent pavement rehabilitation are used to calculate the ISN, and those planned for pavement rehabilitation to calculate the TSN. The remaining SNs are used to complete the age-based or, if available, historical traffic-loading-based SNIs.

Keywords: conceptual, pavement structural number, pavement structural deterioration curve, pavement management system

Procedia PDF Downloads 510
805 Optimization of Dez Dam Reservoir Operation Using Genetic Algorithm

Authors: Alireza Nikbakht Shahbazi, Emadeddin Shirali

Abstract:

Since optimization issues of water resources are complicated due to the variety of decision-making criteria and objective functions, it is sometimes impossible to resolve them through regular optimization methods, or doing so is time- or money-consuming. Therefore, the use of modern tools and methods is inevitable in resolving such problems. An accurate and essential utilization policy has to be determined in order to use natural resources such as water reservoirs optimally. Water reservoir programming studies aim to determine the final cultivated land area based on predefined agricultural models and water requirements; a dam utilization rule curve is also provided in such studies. The basic information applied in water reservoir programming studies generally includes meteorological, hydrological, agricultural, and water-reservoir-related data, as well as the geometric characteristics of the reservoir. The Dez dam water resource system was simulated using this basic information in order to determine the capability of its reservoir to meet the objectives of the performed plan. As a metaheuristic method, a genetic algorithm was applied to derive utilization rule curves (intersecting the reservoir volume). MATLAB software was used to solve the resulting model. Rule curves were first obtained through the genetic algorithm. Then the significance of using rule curves and the decrease in decision-making variables in the system were determined through system simulation and comparison of the results with optimization results (Standard Operating Procedure). One of the most essential issues in the optimization of a complicated water resource system is the increasing number of variables; a lot of time is then required to find an optimum answer, and in some cases no desirable result is obtained. In this research, intersecting the reservoir volume has been applied as a modern model in order to reduce the number of variables. The water reservoir programming studies were performed based on the basic information, general hypotheses, and standards, applying a monthly simulation technique for a statistical period of 30 years. Results indicated that application of the rule curve prevents extreme shortages and decreases the monthly shortages.
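
As an illustration of the search itself, here is a minimal genetic-algorithm sketch tuning a 12-value monthly rule curve. The fitness function is a stand-in penalty; in the study the objective comes from the monthly reservoir simulation, so the setup below is an assumed toy:

```python
import numpy as np

rng = np.random.default_rng(1)

def fitness(rule):
    """Stand-in objective: penalize deviation from a hypothetical ideal
    monthly release fraction (the real study scores reservoir shortages)."""
    target = np.full(12, 0.6)
    return -np.sum((rule - target) ** 2)      # higher is better

pop = rng.uniform(0, 1, (40, 12))             # population of candidate curves
for _ in range(200):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-20:]]                          # selection
    children = parents[rng.integers(0, 20, 40)].copy()
    cut = rng.integers(1, 12)
    children[::2, cut:] = parents[rng.integers(0, 20, 20)][:, cut:]  # crossover
    children += rng.normal(0, 0.02, children.shape)                  # mutation
    pop = np.clip(children, 0, 1)

best = pop[np.argmax([fitness(ind) for ind in pop])]
print(best.round(2))
```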

Keywords: optimization, rule curve, genetic algorithm method, Dez dam reservoir

Procedia PDF Downloads 233
804 Thermoluminescence Investigations of Tl2Ga2Se3S Layered Single Crystals

Authors: Serdar Delice, Mehmet Isik, Nizami Hasanli, Kadir Goksen

Abstract:

Researchers have devoted great interest to ternary and quaternary semiconductor compounds, especially with the improvement of optoelectronic technology. The quaternary compound Tl2Ga2Se3S, which was grown by the Bridgman method, carries the properties of the ternary thallium chalcogenide group of semiconductors with layered structure. This compound can be formed from TlGaSe2 crystals by replacing one quarter of the selenium atoms with sulfur atoms. Although Tl2Ga2Se3S crystals are not intentionally doped, some unintended defect types such as point defects, dislocations, and stacking faults can occur during the growth process. These defects can cause undesirable problems in semiconductor materials, especially those produced for optoelectronic technology. Defects of various types in semiconductor devices like LEDs and field-effect transistors may act as non-radiative or scattering centers in electron transport. Also, quick recombination of holes with electrons without any energy transfer between charge carriers can occur due to the existence of defects. Therefore, the characterization of defects may help researchers working in this field to produce high-quality devices. Thermoluminescence (TL) is an effective experimental method to determine the kinetic parameters of trap centers due to defects in crystals. In this method, the sample is illuminated at low temperature by light whose energy is greater than the band gap of the studied sample. Thus, charge carriers in the valence band are excited to the delocalized band, and the charge carriers excited into the conduction band are then trapped. The trapped charge carriers are released by heating the sample gradually, and these carriers then recombine with the opposite carriers at the recombination center. In this way, luminescence is emitted from the sample. The emitted luminescence is converted to pulses by a computer-controlled experimental setup, and the TL spectrum is obtained. Defect characterization of Tl2Ga2Se3S single crystals was performed by TL measurements at low temperatures between 10 and 300 K with various heating rates ranging from 0.6 to 1.0 K/s. The TL signal due to luminescence from trap centers revealed one glow peak with a maximum at 36 K. Curve fitting and various-heating-rate methods were used for the analysis of the glow curve. An activation energy of 13 meV was found by applying the curve fitting method. This practical method also established that the trap center exhibits the characteristics of mixed (general) kinetic order. In addition, the various-heating-rate analysis gave a result (13 meV) compatible with curve fitting when the temperature lag effect was taken into consideration. Since the studied crystals were not intentionally doped, these centers are thought to originate from stacking faults, which are quite possible in Tl2Ga2Se3S due to the weakness of the van der Waals forces between the layers. The distribution of traps was also investigated using an experimental method, and a quasi-continuous distribution was attributed to the determined trap centers.
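
For two heating rates, the various-heating-rate idea can be stated compactly: the shift of the glow-peak maximum with heating rate β gives E = k·T1·T2/(T1−T2)·ln[(β1/β2)(T2/T1)²]. A minimal sketch with hypothetical peak temperatures, chosen only so the output lands near the meV scale reported above:

```python
import math

K_B = 8.617e-5   # Boltzmann constant, eV/K

def activation_energy(T1, T2, beta1, beta2):
    """Two-heating-rate estimate of the trap depth from the glow-peak shift:
    E = k*T1*T2/(T1 - T2) * ln((beta1/beta2) * (T2/T1)**2)."""
    return K_B * T1 * T2 / (T1 - T2) * math.log((beta1 / beta2) * (T2 / T1) ** 2)

# Hypothetical glow-peak maxima (K) at two heating rates (K/s), not measured data
E = activation_energy(T1=37.3, T2=36.0, beta1=1.0, beta2=0.8)
print(f"E = {E * 1000:.1f} meV")
```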

Keywords: chalcogenides, defects, thermoluminescence, trap centers

Procedia PDF Downloads 254
803 Analysis of Accurate Direct-Estimation of the Maximum Power Point and Thermal Characteristics of High Concentration Photovoltaic Modules

Authors: Yan-Wen Wang, Chu-Yang Chou, Jen-Cheng Wang, Min-Sheng Liao, Hsuan-Hsiang Hsu, Cheng-Ying Chou, Chen-Kang Huang, Kun-Chang Kuo, Joe-Air Jiang

Abstract:

Performance-related parameters of high concentration photovoltaic (HCPV) modules (e.g., current and voltage) are required when estimating the maximum power point using numerical and approximation methods. The maximum power point on the characteristic curve of a photovoltaic module varies with temperature and solar radiation. It is also difficult to estimate the output performance and maximum power point (MPP) due to the special characteristics of HCPV modules. Based on p-n junction semiconductor theory, a new and simple method is presented in this study to directly evaluate the MPP of HCPV modules. The MPP of HCPV modules can be determined from an irradiated I-V characteristic curve, because there is a non-linear relationship between the temperature of a solar cell and solar radiation. Numerical simulations and field tests are conducted to examine the characteristics of HCPV modules during maximum output power tracking. The performance of the presented method is evaluated by examining the dependence of the MPP characteristics of HCPV modules on temperature and irradiation intensity. These results show that the presented method allows HCPV modules to achieve their maximum power and perform power tracking under various operating conditions. A 0.1% error is found between the estimated and the real maximum power point.
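
The p-n junction view can be illustrated with the standard single-diode equation, from which the MPP is simply the maximum of P = V·I along the I-V curve. A minimal sketch with assumed cell parameters, not those of an actual HCPV module:

```python
import numpy as np

# Single-diode model: I = Iph - I0 * (exp(V / (n * Vt)) - 1). Parameters below
# are illustrative assumptions (photocurrent, saturation current, ideality,
# thermal voltage), not fitted HCPV values.
Iph, I0, n, Vt = 5.0, 1e-9, 1.5, 0.0257     # A, A, -, V

V = np.linspace(0, 0.9, 2000)                # voltage sweep
I = Iph - I0 * np.expm1(V / (n * Vt))        # diode equation
I = np.clip(I, 0, None)                      # discard negative currents
P = V * I

k = P.argmax()                               # index of the maximum power point
print(f"MPP: V={V[k]:.3f} V, I={I[k]:.3f} A, P={P[k]:.3f} W")
```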

Keywords: energy performance, high concentration photovoltaic, maximum power point, p-n junction semiconductor

Procedia PDF Downloads 535
802 The Neutrophil-to-Lymphocyte Ratio after Surgery for Hip Fracture: A New, Simple, and Objective Score to Predict Postoperative Mortality

Authors: Philippe Dillien, Patrice Forget, Harald Engel, Olivier Cornu, Marc De Kock, Jean Cyr Yombi

Abstract:

Introduction: Hip fracture commonly precedes death in elderly people. Identification of high-risk patients may help target patients in whom optimal management, resource allocation, and trial efficiency are needed. The aim of this study is to construct a predictive score for mortality after hip fracture on the basis of the objective prognostic factors available: the neutrophil-to-lymphocyte ratio (NLR), age, and sex. C-reactive protein (CRP) is also considered as an alternative to the NLR. Patients and methods: After IRB approval, we analyzed our prospective database including 286 consecutive patients with hip fracture. A score was constructed combining age (1 point per decade above 74 years), sex (1 point for males), and NLR at postoperative day +5 (1 point if >5). A receiver operating characteristic (ROC) curve analysis was performed. Results: Of the 286 patients included, 235 were analyzed (72 males and 163 females, 30.6%/69.4%), with a median age of 84 (range: 65 to 102) years and mean NLR values of 6.47 ± 6.07. At one year, 82/280 patients had died (29.3%). Graphical analysis and the log-rank test confirm a highly statistically significant difference (P<0.001). Performance analysis shows an AUC of 0.72 [95%CI 0.65-0.79]. CRP shows no advantage over the NLR. Conclusion: We have developed a score based on age, sex, and the NLR to predict the risk of mortality at one year in elderly patients after surgery for a hip fracture. After external validation, it may be included in clinical practice as well as in clinical research to stratify the risk of postoperative mortality.
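
The score itself fits in a few lines. Below is a sketch of the scoring rule together with an AUC evaluation on a synthetic toy cohort; the data are randomly generated, not the study's 235 patients:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def mortality_score(age, male, nlr_day5):
    """Score = 1 point per decade above 74 years + 1 for males + 1 if NLR > 5."""
    score = (age - 75) // 10 + 1 if age > 74 else 0
    score += 1 if male else 0
    score += 1 if nlr_day5 > 5 else 0
    return score

# Synthetic cohort (illustrative only)
ages = rng.integers(65, 100, 200)
males = rng.random(200) < 0.3
nlrs = rng.gamma(2.0, 3.0, 200)
scores = [mortality_score(a, m, r) for a, m, r in zip(ages, males, nlrs)]
died = rng.random(200) < 0.1 + 0.08 * np.array(scores)   # synthetic outcome
print(f"AUC = {roc_auc_score(died, scores):.2f}")
```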

Keywords: neutrophil-to-lymphocyte ratio, hip fracture, postoperative mortality, medical and health sciences

Procedia PDF Downloads 386
801 Family of Density Curves of Queensland Soils from Compaction Tests, on a 3D Z-Plane Function of Moisture Content, Saturation, and Air-Void Ratio

Authors: Habib Alehossein, M. S. K. Fernando

Abstract:

Soil density depends on the volume of the voids and the proportion of water and air in the voids. However, there is a limit to the contraction of the voids at any given compaction energy, whereby additional water is used to reduce the void volume further by lubricating the particles' frictional contacts. Hence, at an optimum moisture content and a specific compaction energy, the density of unsaturated soil can be maximized where the void volume is minimal. However, when considering a full compaction curve and the permutations and variations of all these components (soil, air, water, and energy), laboratory soil compaction tests can become expensive, time-consuming, and exhausting. Therefore, analytical methods constructed from a few test data can be developed and used to reduce such unnecessary effort significantly. Concentrating on compaction testing results, this study discusses the analytical modelling method developed for some fine-grained and coarse-grained soils of Queensland. Soil properties and characteristics, such as full functional compaction curves under various compaction energy conditions, were studied and developed for a few soil types. Using MATLAB, several generic analytical codes were created for this study, covering all possible compaction parameters and results as they occur in a soil mechanics lab. These MATLAB codes produce a family of curves that determine the relationships between density, moisture content, void ratio, saturation, and compaction energy.
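
The family-of-curves idea rests on the standard phase relationships S·e = w·Gs and ρd = Gs·ρw/(1 + e). A minimal sketch (in Python rather than the study's MATLAB) that generates constant-saturation density curves from assumed values; the S = 1 curve is the zero-air-voids line:

```python
import numpy as np

RHO_W, GS = 1.0, 2.65            # water density (t/m^3), assumed specific gravity

def dry_density(w, S):
    """Dry density (t/m^3) for moisture content w and saturation S (fractions)."""
    e = w * GS / S               # void ratio from S*e = w*Gs
    return GS * RHO_W / (1 + e)

w = np.linspace(0.05, 0.30, 6)
for S in (0.7, 0.8, 0.9, 1.0):   # a family of constant-saturation curves
    print(f"S={S:.1f}:", np.round(dry_density(w, S), 3))
```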

Keywords: analytical, MATLAB, modelling, compaction curve, void ratio, saturation, moisture content

Procedia PDF Downloads 52
800 Epidemiological and Clinical Characteristics of Five Rare Pathological Subtypes of Hepatocellular Carcinoma

Authors: Xiaoyuan Chen

Abstract:

Background: This study aimed to characterize the epidemiological and clinical features of five rare subtypes of hepatocellular carcinoma (HCC) and to create a competing-risk nomogram for predicting cancer-specific survival. Methods: This study used the Surveillance, Epidemiology, and End Results database to analyze the clinicopathological data of 50,218 patients with classic HCC and five rare subtypes (ICD-O-3 Histology Code = 8170/3-8175/3) between 2004 and 2018. The annual percent change (APC) was calculated using Joinpoint regression, and a nomogram was developed based on multivariable competing-risk survival analyses. The prognostic performance of the nomogram was evaluated using the Akaike information criterion, the Bayesian information criterion, the C-index, calibration curves, and the area under the receiver operating characteristic curve. Decision curve analysis was used to assess the clinical value of the models. Results: The incidence of scirrhous carcinoma showed a decreasing trend (APC = -6.8%, P = 0.025), while the morbidity of the other rare subtypes remained stable from 2004 to 2018. The incidence-based mortality plateaued in all subtypes during the period. Clear cell carcinoma was the most common subtype (n = 551, 1.1%), followed by fibrolamellar (n = 241, 0.5%), scirrhous (n = 82, 0.2%), spindle cell (n = 61, 0.1%), and pleomorphic (n = 17, ~0%) carcinomas. Patients with fibrolamellar carcinoma were younger and more likely to have non-cirrhotic livers and better prognoses. Scirrhous carcinoma shared almost the same macro clinical characteristics and outcomes as classic HCC. Clear cell carcinoma tended to occur in the Asia-Pacific elderly male population, and more than half of the cases were large HCC (size > 5 cm). Sarcomatoid (including spindle cell and pleomorphic) carcinoma was associated with larger tumor size, poorer differentiation, and more dismal prognoses. Pathological subtype, T stage, M stage, surgery, alpha-fetoprotein, and cancer history were identified as independent predictors in patients with rare subtypes. The nomogram showed good calibration, discrimination, and net benefits in clinical practice. Conclusion: The rare subtypes of HCC had distinct clinicopathological features and biological behaviors compared with classic HCC. Our findings could provide a valuable reference for clinicians, and the constructed nomogram could accurately predict prognoses, which is beneficial for individualized management.

Keywords: hepatocellular carcinoma, pathological subtype, fibrolamellar carcinoma, scirrhous carcinoma, clear cell carcinoma, spindle cell carcinoma, pleomorphic carcinoma

Procedia PDF Downloads 42
799 A Comparative Study of the Impact of Membership in International Climate Change Treaties and the Environmental Kuznets Curve (EKC) in Line with Sustainable Development Theories

Authors: Mojtaba Taheri, Saied Reza Ameli

Abstract:

In this research, we calculate the effect of membership in international climate change treaties for 20 developed countries, based on the Human Development Index (HDI), and compare this effect with the process of pollutant reduction in the Environmental Kuznets Curve (EKC) theory. For this purpose, real GDP per capita at constant 2010 prices is selected from the World Development Indicators (WDI) database. Ecological Footprint (ECOFP) is the amount of biologically productive land needed to meet human needs and absorb carbon dioxide emissions; it is measured in global hectares (gha), and the data are retrieved from the Global Ecological Footprint (2021) database. We proceed step by step, performing several series of targeted statistical regressions and examining the effects of different control variables. Energy Consumption Structure (ECS) is counted as the share of fossil fuel consumption in total energy consumption and is extracted from the United States Energy Information Administration (EIA) (2021) database. Energy Production (EP) refers to the total production of primary energy by all energy-producing enterprises in one country at a specific time; it is a comprehensive indicator that shows the energy production capacity of the country, and its data, like the Energy Consumption Structure, are obtained from the EIA (2021). Financial development (FND) is defined as the ratio of private credit to GDP, and to some extent based on stock market value, also as a ratio to GDP, and is taken from the WDI (2021 version). Trade Openness (TRD) is the sum of exports and imports of goods and services measured as a share of GDP, from the WDI (2021 version). Urbanization (URB) is defined as the share of the urban population in the total population, also from the WDI (2021 version). The descriptive statistics of all the investigated variables are presented in the results section. Among the theories of sustainable development considered, the Environmental Kuznets Curve (EKC) is the most significant over the period of study. In this research, we use more than fourteen targeted statistical regressions to isolate the net effects of each of the approaches and examine the results.

Keywords: climate change, globalization, environmental economics, sustainable development, international climate treaty

Procedia PDF Downloads 35
798 Comparing Accuracy of Semantic and Radiomics Features in Prognosis of Epidermal Growth Factor Receptor Mutation in Non-Small Cell Lung Cancer

Authors: Mahya Naghipoor

Abstract:

Purpose: Non-small cell lung cancer (NSCLC) is the most common type of lung cancer. Epidermal growth factor receptor (EGFR) mutation is a major driver of NSCLC. Computed tomography (CT) is used for the diagnosis and prognosis of lung cancers because of its low cost and non-invasiveness. Semantic analyses of qualitative CT features are based on visual evaluation by a radiologist; however, the naked eye may not assess all image features. On the other hand, radiomics provides the opportunity for quantitative analysis of CT image features. The aim of this review study was to compare the accuracy of semantic and radiomics features in the prognosis of EGFR mutation in NSCLC. Methods: For this purpose, the keywords non-small cell lung cancer, epidermal growth factor receptor mutation, semantic, radiomics, feature, receiver operating characteristic curve (ROC), and area under curve (AUC) were searched in PubMed and Google Scholar. In total, 29 papers were reviewed, and the AUCs of the ROC analyses for semantic and radiomics features were compared. Results: The results showed that the reported AUC values for semantic features (ground glass opacity, shape, margins, lesion density, and presence or absence of air bronchogram, emphysema, and pleural effusion) were 41%-79%. For radiomics features (kurtosis, skewness, entropy, texture, standard deviation (SD), and wavelet), the AUC values were 50%-86%. Conclusions: In conclusion, the accuracy of radiomics analysis is slightly higher than that of semantic analysis in the prognosis of EGFR mutation in NSCLC.

Keywords: lung cancer, radiomics, computer tomography, mutation

Procedia PDF Downloads 121
797 Imaging of Underground Targets with an Improved Back-Projection Algorithm

Authors: Alireza Akbari, Gelareh Babaee Khou

Abstract:

Ground Penetrating Radar (GPR) is an important nondestructive remote sensing tool that has been used in both military and civilian fields. Recently, GPR imaging has attracted much attention for the detection of shallow subsurface small targets, such as landmines and unexploded ordnance, and also for through-wall imaging in security applications. For the monostatic arrangement, a single point target appears in the space-time GPR image as a hyperbolic curve because of the different trip times of the EM wave as the radar moves along a synthetic aperture and collects reflectivity from the subsurface targets. With this hyperbolic curve, the resolution along the synthetic aperture direction shows undesired low-resolution features owing to the tails of the hyperbola. However, highly accurate information about the size, electromagnetic (EM) reflectivity, and depth of the buried objects is essential in most GPR applications. Therefore, the hyperbolic curve in the space-time GPR image is often transformed into a focused pattern showing the object's true location and size together with its EM scattering. The common goal in a typical GPR image is to display the information on the spatial location and reflectivity of an underground object; therefore, the main challenge of GPR imaging is to devise an image reconstruction algorithm that provides high resolution and good suppression of strong artifacts and noise. In this paper, the standard back-projection (BP) algorithm, adapted to GPR imaging applications, is first used for image reconstruction. The standard BP algorithm is limited in the presence of strong noise and produces many artifacts, which adversely affect subsequent tasks such as target detection. Thus, an improved BP based on cross-correlation between the received signals is proposed to decrease noise and suppress artifacts. To improve the quality of the results of the proposed BP imaging algorithm, a weight factor was designed for each point in the imaging region. Compared to the standard BP scheme, the improved algorithm produces images of higher quality and resolution. The proposed improved BP algorithm was applied to simulated and real GPR data, and the results showed that it provides superior artifact suppression and produces images of high quality and resolution. To quantitatively describe the effect of artifact suppression on the imaging results, a focusing parameter was evaluated.
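
Standard delay-and-sum back-projection sums each trace at the two-way travel time implied by every candidate pixel. A minimal monostatic sketch on a tiny synthetic B-scan follows; the wave speed and geometry are assumed, and the paper's cross-correlation weighting is not reproduced:

```python
import numpy as np

def back_project(bscan, xs, ts, grid_x, grid_z, v=0.1):
    """Minimal delay-and-sum back-projection for monostatic GPR.
    bscan[i, j]: trace amplitude at antenna position xs[i] and time ts[j];
    v: assumed wave speed in the medium (m/ns)."""
    image = np.zeros((len(grid_z), len(grid_x)))
    for i, xa in enumerate(xs):                        # each antenna position
        for kx, x in enumerate(grid_x):
            for kz, z in enumerate(grid_z):
                t = 2.0 * np.hypot(x - xa, z) / v      # two-way travel time
                j = np.searchsorted(ts, t)
                if j < len(ts):
                    image[kz, kx] += bscan[i, j]       # sum along the hyperbola
    return image

# Tiny synthetic example: one point target at (0.5 m, 0.3 m)
xs = np.linspace(0, 1, 21); ts = np.linspace(0, 20, 400)
bscan = np.zeros((len(xs), len(ts)))
for i, xa in enumerate(xs):
    t = 2.0 * np.hypot(0.5 - xa, 0.3) / 0.1
    bscan[i, np.searchsorted(ts, t)] = 1.0
img = back_project(bscan, xs, ts, np.linspace(0, 1, 41), np.linspace(0.05, 0.6, 23))
print(np.unravel_index(img.argmax(), img.shape))      # peaks near the target
```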

Keywords: algorithm, back-projection, GPR, remote sensing

Procedia PDF Downloads 413
796 Multiscale Entropy Analysis of Electroencephalogram (EEG) of Alcoholic and Control Subjects

Authors: Lal Hussain, Wajid Aziz, Imtiaz Ahmed Awan, Sharjeel Saeed

Abstract:

Multiscale entropy (MSE) analysis is a useful technique recently developed to quantify the dynamics of physiological signals at different time scales. This study is aimed at investigating electroencephalogram (EEG) signals to analyze the background activity of alcoholic and control subjects by inspecting the various coarse-grained sequences formed at different time scales. EEG recordings of alcoholic and control subjects, acquired using 64 electrodes, were taken from the publicly available machine learning repository of the University of California, Irvine (UCI). The MSE analysis was performed on the EEG data acquired from all electrodes of the alcoholic and control subjects. The Mann-Whitney rank test was used to find significant differences between the groups, and results were considered statistically significant for p-values < 0.05. The area under the receiver operating characteristic curve was computed to find the degree of separation between the groups. The mean ranks of MSE values at all time scales and for all electrodes were higher for control subjects than for alcoholic subjects; higher mean ranks represent higher complexity and vice versa. The findings indicated that EEG signals acquired through electrodes C3, C4, F3, F7, F8, O1, O2, P3, and T7 showed significant differences between alcoholic and control subjects at time scales 1 to 5. Moreover, all electrodes exhibited significance at different time scales. Likewise, the highest accuracy and separation were obtained at the central region (C3 and C4) and front polar regions (P3, O1, F3, F7, F8, and T8), while other electrodes such as Fp1, Fp2, P4, and F4 showed no significant results.
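
MSE consists of coarse-graining the signal at successive scales and computing the sample entropy of each coarse-grained series. A compact sketch of both steps; m = 2 and r = 0.2·SD follow common convention, and the white-noise input merely stands in for an EEG channel:

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy SampEn(m, r) of a 1-D signal (r in absolute units)."""
    def count(mm):
        emb = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
        d = np.max(np.abs(emb[:, None] - emb[None, :]), axis=2)  # Chebyshev
        return ((d <= r).sum() - len(emb)) / 2                   # drop self-matches
    B, A = count(m), count(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

def multiscale_entropy(x, scales=5, m=2, r_factor=0.2):
    """Costa-style MSE: coarse-grain at each scale, then compute SampEn.
    r is fixed at r_factor times the SD of the original series."""
    r = r_factor * np.std(x)
    out = []
    for s in range(1, scales + 1):
        n = len(x) // s
        cg = x[:n * s].reshape(n, s).mean(axis=1)   # non-overlapping averages
        out.append(sample_entropy(cg, m, r))
    return out

rng = np.random.default_rng(0)
print(np.round(multiscale_entropy(rng.standard_normal(1000)), 2))
```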

Keywords: electroencephalogram (EEG), multiscale sample entropy (MSE), Mann-Whitney test, receiver operating characteristic (ROC) curve, complexity analysis

Procedia PDF Downloads 348
795 Comparative Evaluation of EBT3 Film Dosimetry Using Flatbed Scanner, Densitometer and Spectrophotometer Methods and Its Applications in Radiotherapy

Authors: K. Khaerunnisa, D. Ryangga, S. A. Pawiro

Abstract:

Over the past few decades, film dosimetry has become a tool used in various radiotherapy modalities, either for clinical quality assurance (QA) or for dose verification. The response of the film to irradiation is usually expressed as optical density (OD) or net optical density (netOD). Since the film's response to radiation is not linear, the use of film as a dosimeter must go through a calibration process. This study aimed to compare the calibration curve response functions obtained with different measurement devices: a flatbed scanner, a point densitometer, and a spectrophotometer. For every response function, a radiochromic film calibration curve was generated, and accuracy, precision, and sensitivity analyses were performed for each method. With the film scanner, netOD is obtained by measuring the change in the optical density (OD) of the film before and after irradiation, using ImageJ to extract the pixel values of the film in the red channel of the three (RGB) channels; with the point densitometer, by calculating the change in OD before and after irradiation; and with the spectrophotometer, by calculating the change in absorbance before and after irradiation. The results showed that the three calibration methods gave dose readings with a netOD precision below 3% for an uncertainty value of 1σ (one sigma). The sensitivities of the three methods follow the same trend in responding to film readings against radiation but differ in magnitude. The accuracy of the three methods is below 3% for doses above 100 cGy and 200 cGy, but for doses below 100 cGy it was found to be above 3% when using the point densitometer and the spectrophotometer. When the three methods are used for clinical implementation, the results of the study show accuracy and precision below 2% for the scanner and the spectrophotometer and above 3% for the point densitometer.
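
Once pixel values are extracted, the netOD computation is a one-liner: netOD = log10(PV_before / PV_after), optionally corrected for a background reading. A small sketch with hypothetical red-channel values, not measured data:

```python
import numpy as np

def net_od(pv_before, pv_after, pv_background=0.0):
    """Net optical density from red-channel pixel values of scans taken
    before and after irradiation: netOD = log10(I_before / I_after)."""
    return np.log10((pv_before - pv_background) / (pv_after - pv_background))

# Hypothetical red-channel readings for a small calibration series
doses = np.array([50, 100, 200, 400])            # cGy
before = np.array([41000, 41050, 40980, 41020])  # unirradiated pixel values
after = np.array([38200, 35900, 32100, 27400])   # irradiated pixel values
for d, od in zip(doses, net_od(before, after)):
    print(f"{d:4d} cGy -> netOD = {od:.4f}")
```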

Keywords: calibration methods, EBT3 film dosimetry, flatbed scanner, densitometer, spectrophotometer

Procedia PDF Downloads 96
794 Working Memory Growth from Kindergarten to First Grade: Considering Impulsivity, Parental Discipline Methods and Socioeconomic Status

Authors: Ayse Cobanoglu

Abstract:

Working memory can be defined as a workspace that holds and regulates active information in the mind. This study investigates individual changes in children's working memory from kindergarten to first grade. The main purpose of the study is to examine whether parental discipline methods and child impulsive/overactive behaviors affect children's working memory initial status and growth rate, controlling for gender, minority status, and socioeconomic status (SES). A linear growth curve model was fitted to the first four waves of the Early Childhood Longitudinal Study-Kindergarten Cohort of 2011 (ECLS-K:2011) to analyze the individual growth of children's working memory longitudinally (N=3915). Results revealed that there is significant variation among students' initial status in the kindergarten fall semester as well as in the growth rate during the first two years of schooling. While minority status, SES, and children's overactive/impulsive behaviors influenced children's initial status, only SES and minority status were significantly associated with the growth rate of working memory. Parental discipline methods such as giving a warning and ignoring the child's negative behavior were also negatively associated with initial working memory scores. Students' working memory growth rates were then examined, and students with lower SES as well as minority students showed a faster growth pattern during the first two years of schooling. However, the findings on the effect of parental disciplinary methods on working memory growth rates were mixed. It can be concluded that schooling helps low-SES minority students to develop their working memory.

Keywords: growth curve modeling, impulsive/overactive behaviors, parenting, working memory

Procedia PDF Downloads 97
793 Study of a Few Additional Posterior Projection Data to 180° Acquisition for Myocardial SPECT

Authors: Yasuyuki Takahashi, Hirotaka Shimada, Takao Kanzaki

Abstract:

A dual-detector SPECT system is widely used for myocardial SPECT studies. With 180-degree (180°) acquisition, reconstructed images are distorted in the posterior wall of the myocardium due to the lack of sufficient posterior projection data. We hypothesized that the quality of myocardial SPECT images can be improved by adding the acquisition of only a few posterior projections to the ordinary 180° acquisition. The proposed acquisition method (the 180° plus acquisition method) uses a dual-detector SPECT system with a pair of detectors arranged perpendicularly at 90°. The sampling angle was 5°, and the acquisition range was 180°, from 45° right anterior oblique to 45° left posterior oblique. After the 180° acquisition, the detector moved to additional acquisition positions on the reverse side: once for 2 projections, twice for 4 projections, or three times for 6 projections. Since these acquisition methods cannot be performed on the present system, actual data acquisition was done over 360° with a sampling angle of 5°, and the projection data corresponding to the above acquisition positions were extracted for reconstruction. We performed phantom studies and a clinical study. SPECT images were compared by profile curve analysis and also quantitatively by contrast ratio. The distortion was improved by the 180° plus method, and profile curve analysis showed an improvement in the cardiac cavity. Analysis with the contrast ratio revealed that the SPECT images of the phantoms and the clinical study were improved over 180° acquisition by the present methods. The difference in contrast was not clearly recognized between 180° plus 2 projections, 180° plus 4 projections, and 180° plus 6 projections. The 180° plus 2 projections method may be feasible for myocardial SPECT because the distortion of the image and the contrast were improved.

Keywords: 180° plus acquisition method, a few posterior projections, dual-detector SPECT system, myocardial SPECT

Procedia PDF Downloads 259
792 Forecasting Market Share of Electric Vehicles in Taiwan Using Conjoint Models and Monte Carlo Simulation

Authors: Li-hsing Shih, Wei-Jen Hsu

Abstract:

Recently, sales of electric vehicles (EVs) have increased dramatically due to maturing technology development and decreasing cost. Governments of many countries have made regulations and policies in favor of EVs due to their long-term commitment to net zero carbon emissions. However, due to uncertain factors such as the future price of EVs, forecasting the future market share of EVs is a challenging subject for both the auto industry and local government. This study forecasts the market share of EVs using conjoint models and Monte Carlo simulation. The research is conducted in three phases. (1) A conjoint model is established to represent the customer preference structure for purchasing vehicles, with five product attributes selected for both EVs and internal combustion engine vehicles (ICEVs). A questionnaire survey is conducted to collect responses from Taiwanese consumers and estimate the part-worth utility functions of all respondents. The resulting part-worth utility functions can be used to estimate market share, assuming each respondent will purchase the product with the highest total utility. For example, given the attribute values of an ICEV and a competing EV, the respondent's two total utilities for the two vehicles are calculated, thereby determining his/her choice; once the choices of all respondents are known, an estimate of market share can be obtained. (2) Among the attributes, future price is the key attribute that dominates consumers' choices. This study adopts the assumption of a learning curve to predict the future price of EVs. Based on the learning curve method and past price data of EVs, a regression model is established, and the probability distribution function of the price of EVs in 2030 is obtained. (3) Since the future price is a random variable given the results of phase 2, a Monte Carlo simulation is then conducted to simulate the choices of all respondents using their part-worth utility functions. For instance, using one thousand generated future prices of an EV together with other forecasted attribute values of the EV and an ICEV, one thousand market shares can be obtained from a Monte Carlo simulation. The resulting probability distribution of the market share of EVs provides more information than a fixed-number forecast, reflecting the uncertain nature of the future development of EVs. The research results can help the auto industry and local government make more appropriate decisions and future action plans.
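
The simulation logic of phase 3 can be sketched compactly: draw future prices, recompute each respondent's utilities, and record the share choosing the EV. The part-worth coefficients and price distribution below are invented for illustration; the study estimates them from survey data and the learning-curve fit:

```python
import numpy as np

rng = np.random.default_rng(0)

n_respondents, n_draws = 1000, 1000
price_coef = rng.normal(-2.0, 0.5, n_respondents)      # utility per price unit
ev_intercept = rng.normal(0.5, 1.0, n_respondents)     # non-price EV preference

icev_price = 1.0                                        # normalized ICEV price
ev_prices = rng.lognormal(mean=0.0, sigma=0.15, size=n_draws)  # 2030 price draws

shares = []
for p in ev_prices:
    u_ev = ev_intercept + price_coef * p                # each respondent's EV utility
    u_icev = price_coef * icev_price                    # each respondent's ICEV utility
    shares.append((u_ev > u_icev).mean())               # share choosing the EV

print(f"mean share={np.mean(shares):.2%}, 5th-95th pct="
      f"{np.percentile(shares, 5):.2%}-{np.percentile(shares, 95):.2%}")
```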

Keywords: conjoint model, electrical vehicle, learning curve, Monte Carlo simulation

Procedia PDF Downloads 40
791 Forming Limit Analysis of DP600-800 Steels

Authors: Marcelo Costa Cardoso, Luciano Pessanha Moreira

Abstract:

In this work, the plastic behaviour of cold-rolled zinc-coated dual-phase steel sheets of DP600 and DP800 grades is first investigated with the help of uniaxial tensile, hydraulic bulge, and Forming Limit Curve (FLC) tests. The uniaxial tensile tests were performed in three angular orientations with respect to the rolling direction to evaluate strain hardening and plastic anisotropy. True stress-strain curves at large strains were determined from hydraulic bulge testing and fitted to a work-hardening equation. The limit strains were defined at both localized necking and fracture conditions according to Nakajima's hemispherical punch procedure. Also, an elasto-plastic localization model is proposed in order to predict strain- and stress-based forming limit curves. The investigated dual-phase sheets showed good formability in the biaxial stretching and drawing FLC regions. For both DP600 and DP800 sheets, the corresponding numerical predictions overestimated and underestimated the experimental limit strains in the biaxial stretching and drawing FLC regions, respectively. This can be attributed to the restricted failure necking condition adopted in the numerical model, which is not suitable for describing the tensile and shear fracture mechanisms in advanced high strength steels under equibiaxial and biaxial stretching conditions.

Keywords: advanced high strength steels, forming limit curve, numerical modelling, sheet metal forming

Procedia PDF Downloads 341
790 Microwave Dielectric Constant Measurements of Titanium Dioxide Using Five Mixture Equations

Authors: Jyh Sheen, Yong-Lin Wang

Abstract:

This research is dedicated to finding a different measurement procedure for the microwave dielectric properties of ceramic materials with high dielectric constants. For a composite of ceramic dispersed in a polymer matrix, the dielectric constants of composites with different concentrations can be obtained by various mixture equations. Another use of a mixture rule is to calculate the permittivity of the ceramic from measurements on the composite. To do this, the analysis method and theoretical accuracy of six basic mixture laws, derived from three basic particle shapes of ceramic fillers, have been reported for ceramic dielectric constants of less than 40 at microwave frequencies. Similar research has been done for other well-known mixture rules, and it has shown that both good physical curve matching with experimental results and low potential theoretical error are important for promoting calculation accuracy. Recently, a modified mixture equation for high-dielectric-constant ceramics at microwave frequencies has also been presented for strontium titanate (SrTiO3); it was selected from five well-known mixing rules and has shown good accuracy for high-dielectric-constant measurements. However, the accuracy of this modified equation for other high-dielectric-constant materials is still not clear. Therefore, the five well-known mixing rules are selected again to understand their application to other high-dielectric-constant ceramics. Another high-dielectric-constant ceramic, TiO2 with a dielectric constant of 100, was chosen for this research, and its theoretical error equations are derived. In addition to the theoretical research, experimental measurements are always required. Titanium dioxide is an interesting ceramic for microwave applications. In this research, its powder is adopted as the filler material, and polyethylene powder serves as the matrix material. The dielectric constants of ceramic-polyethylene composites with various compositions were measured at 10 GHz. The theoretical curves of the five published mixture equations are shown together with the measured results to assess the curve-matching quality of each rule. Finally, based on the experimental observations and theoretical analysis, one of the five rules was selected and modified into a new powder mixture equation. This modified rule shows very good curve matching with the measurement data and low theoretical error. We can then calculate the dielectric constant of the pure filler medium (titanium dioxide) from the measured dielectric constants of the composites using those mixing equations, and the accuracy of estimating the dielectric constant of the pure ceramic by the various mixture rules is compared. This modified mixture rule has also shown good measurement accuracy for the dielectric constant of titanium dioxide ceramic. This study can be applied to microwave dielectric property measurements of other high-dielectric-constant ceramic materials in the future.
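
As an example of the inverse use of a mixture rule, the sketch below inverts the logarithmic Lichtenecker rule, ln εc = v_f·ln εf + (1 − v_f)·ln εm, to estimate the filler permittivity from composite measurements. Lichtenecker is used here purely as a familiar example; the paper selects and modifies a different rule, and the measured values below are hypothetical:

```python
import numpy as np

eps_m = 2.3                                   # polyethylene matrix permittivity

def filler_permittivity(eps_composite, v_f):
    """Invert the Lichtenecker rule for the filler permittivity:
    ln(eps_f) = (ln(eps_c) - (1 - v_f) * ln(eps_m)) / v_f."""
    return np.exp((np.log(eps_composite) - (1 - v_f) * np.log(eps_m)) / v_f)

# Hypothetical measured composite permittivities at several filler fractions
v_f = np.array([0.10, 0.20, 0.30])
eps_c = np.array([3.4, 5.1, 7.6])
print(np.round(filler_permittivity(eps_c, v_f), 1))   # estimates of pure-filler eps
```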

Keywords: microwave measurement, dielectric constant, mixture rules, composites

Procedia PDF Downloads 330
789 Statistical Assessment of Models for Determination of Soil–Water Characteristic Curves of Sand Soils

Authors: S. J. Matlan, M. Mukhlisin, M. R. Taha

Abstract:

Characterization of the engineering behavior of unsaturated soil depends on the soil-water characteristic curve (SWCC), a graphical representation of the relationship between water content or degree of saturation and soil suction. A reasonable description of the SWCC is thus important for the accurate prediction of unsaturated soil parameters. The measurement procedures for determining the SWCC, however, are difficult, expensive, and time-consuming. During the past few decades, researchers have placed a major focus on developing empirical equations for predicting the SWCC, and a large number of empirical models have been suggested. One of the most crucial questions is how precisely existing equations can represent the SWCC. As different models have different ranges of capability, it is essential to evaluate the precision of the SWCC models used for each particular soil type for better SWCC estimation. It is expected that better estimation of the SWCC would be achieved via a thorough statistical analysis of its distribution within a particular soil class. With this in view, a statistical analysis was conducted to evaluate the reliability of SWCC prediction models against laboratory measurements. Optimization techniques were used to obtain the best fit of the model parameters for four forms of SWCC equations, using laboratory data for relatively coarse-textured (i.e., sandy) soil. The four most prominent SWCC models were evaluated and computed for each sample. The results show that the Brooks and Corey model is the most consistent in describing the SWCC for the sand soil type. The Brooks and Corey model predictions also exhibited compatibility with the samples evaluated in this study, ranging from low to high soil water content.
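
The winning model can be fitted directly. Below is a sketch of a Brooks-Corey SWCC fit with scipy; the suction-water content data and parameter bounds are invented for illustration, not the laboratory data of this study:

```python
import numpy as np
from scipy.optimize import curve_fit

def brooks_corey(psi, theta_r, theta_s, psi_b, lam):
    """Brooks-Corey SWCC: saturated below the air-entry suction psi_b,
    power-law drainage above it (lam = pore-size distribution index)."""
    return np.where(psi <= psi_b,
                    theta_s,
                    theta_r + (theta_s - theta_r) * (psi_b / psi) ** lam)

# Hypothetical suction (kPa) vs. water content data for a sandy soil
psi = np.array([1.0, 2.0, 4.0, 8.0, 16.0, 32.0, 64.0])
theta = np.array([0.38, 0.37, 0.30, 0.22, 0.15, 0.10, 0.07])

popt, _ = curve_fit(brooks_corey, psi, theta,
                    p0=[0.05, 0.38, 2.0, 0.7],
                    bounds=([0, 0.2, 0.1, 0.1], [0.2, 0.5, 10.0, 3.0]))
print(dict(zip(["theta_r", "theta_s", "psi_b", "lam"], np.round(popt, 3))))
```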

Keywords: soil-water characteristic curve (SWCC), statistical analysis, unsaturated soil, geotechnical engineering

Procedia PDF Downloads 309
788 Inversion of the Spectral Analysis of Surface Waves Dispersion Curves through the Particle Swarm Optimization Algorithm

Authors: A. Cerrato Casado, C. Guigou, P. Jean

Abstract:

In this investigation, the particle swarm optimization (PSO) algorithm is used to perform the inversion of the dispersion curves in the spectral analysis of surface waves (SASW) method. This inverse problem usually presents complicated solution spaces with many local minima that make convergence to the correct solution difficult. PSO is a metaheuristic method that was originally designed to simulate social behavior but has demonstrated powerful capabilities for solving inverse problems with complex solution spaces and a high number of variables. The dispersion curves of the synthetic soils are constructed by the vertical flexibility coefficient method, which is especially convenient for soils whose stiffness does not increase gradually with depth. The reason is that these types of soil profiles are not normally dispersive, since the dominant mode of Rayleigh waves is usually not coincident with the fundamental mode. Multiple synthetic soil profiles have been tested to show the characteristics of the convergence process and assess the accuracy of the final soil profile. In addition, the inversion procedure is applied to multiple real soils, and the final profiles are compared with the available information. The combination of the vertical flexibility coefficient method to obtain the dispersion curve and the PSO algorithm to carry out the inversion proves to be a robust procedure that is able to provide good solutions for complex soil profiles, even with scarce prior information.

Keywords: dispersion, inverse problem, particle swarm optimization, SASW, soil profile

Procedia PDF Downloads 151
787 The Effect of Global Value Chain Participation on Environment

Authors: Piyaphan Changwatchai

Abstract:

Global value chains are important for the current world economy through foreign direct investment. Multinational enterprises' search for efficient locations for each stage of production leads to global production networks and greater global value chain participation by several countries. Global value chain participation has several effects on participating countries in many aspects, including the environment, and the effect on the environment is ambiguous. As a result, this research aims to study the effect of global value chain participation on countries' CO₂ and methane emissions by using quantitative analysis with secondary panel data for sixty countries. The analysis is divided into two types of global value chain participation: forward and backward. The results show that, for forward global value chain participation, GDP per capita affects the two types of pollutants in an inverted-U (bell-curve) shape, and forward participation negatively affects both CO₂ and methane emissions. As for backward global value chain participation, GDP per capita likewise affects the two pollutants in an inverted-U shape, while backward participation negatively affects methane emissions only. However, when considering Asian countries, forward global value chain participation positively affects CO₂ emissions. The recommendation of this research is that countries participating in global value chains should promote production with effective environmental management at each stage of the value chain. Examples of such policies are providing incentives to private sectors, including domestic producers and MNEs, for green production technology and efficient environmental management, and engaging in international agreements on green production. Furthermore, governments should regulate each stage of production in the value chain toward green production, especially in Asian countries.

Keywords: CO₂ emission, environment, global value chain participation, methane emission

Procedia PDF Downloads 160