Search results for: elliptic curve cryptography
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1197

897 Vehicle Maneuverability on Horizontal Curves on Hilly Terrain: A Study on Shillong Highway

Authors: Surendra Choudhary, Sapan Tiwari

Abstract:

The driver has two fundamental duties: (i) controlling the vehicle's position along the longitudinal and lateral directions of movement, and (ii) maintaining the vehicle's position within the roadway width. Both duties are interdependent and are jointly referred to as two-dimensional driver behavior. One of the main problems facing driver-behavior modeling is identifying the parameters that describe exemplary driving conduct and vehicle maneuvering under distinct traffic circumstances. To date, there is no well-accepted theory that can comprehensively model two-dimensional (longitudinal and lateral) driver behavior. The primary objective of this research is to explore the vehicle's lateral and longitudinal behavior in heterogeneous traffic conditions on horizontal curves, as well as the effect of road geometry on dynamic traffic parameters, i.e., vehicle velocity and lateral placement. In this research, a thorough assessment of dynamic vehicle parameters, i.e., speed, lateral acceleration, and turn radius, and of horizontal-curve road parameters, i.e., curvature radius and pavement friction, is performed, together with their interrelationships. The dynamic parameters of various types of car drivers are gathered using a high-precision VBOX GPS-based tool. The connection between dynamic vehicle parameters and curve geometry is established after removal of noise from the GPS trajectories. The major finding of the research is that vehicles maneuver at speeds, accelerations, and lateral deviations higher than the design limits on the studied curves of the highway, which can become lethal if the weather changes from dry to wet.

Keywords: geometry, maneuverability, terrain, trajectory, VBOX

Procedia PDF Downloads 143
896 A Novel Approach of NPSO on Flexible Logistic (S-Shaped) Model for Software Reliability Prediction

Authors: Pooja Rani, G. S. Mahapatra, S. K. Pandey

Abstract:

In this paper, we propose a novel approach combining neural networks and particle swarm optimization for software reliability prediction. We first explain how to apply a compound function in a neural network so that a Flexible Logistic (S-shaped) Growth Curve (FLGC) model can be derived. This model mathematically represents software failure as a random process and can be used to evaluate software development status during testing. To avoid trapping in local minima, we apply the particle swarm optimization (PSO) method to train the proposed model using failure test data sets. Deriving the model through computational-intelligence-based modeling in this way yields the Neuro-Particle Swarm Optimization (NPSO) model. We test the model with different inertia weights in the particle velocity and position updates, and compare results based on the best inertia weight with personal-best-oriented PSO (pPSO), which helps choose the local best in the network neighborhood. The applicability of the proposed model is demonstrated on a real failure test data set. The results obtained from experiments show that the proposed model has a fairly accurate prediction capability in software reliability.
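The PSO training the abstract refers to rests on the standard velocity and position update rule. A minimal sketch of one iteration follows; this is not the authors' NPSO implementation, and the inertia weight and acceleration coefficients shown are illustrative defaults.

```python
import random

def pso_step(positions, velocities, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One PSO iteration: update velocity, then position, of each particle.

    w  -- inertia weight (the abstract compares several values)
    c1 -- cognitive coefficient (pull toward each particle's personal best)
    c2 -- social coefficient (pull toward the global best)
    """
    for i, (x, v) in enumerate(zip(positions, velocities)):
        new_v = [w * vj
                 + c1 * random.random() * (pb - xj)
                 + c2 * random.random() * (gb - xj)
                 for vj, xj, pb, gb in zip(v, x, pbest[i], gbest)]
        velocities[i] = new_v
        positions[i] = [xj + vj for xj, vj in zip(x, new_v)]
    return positions, velocities
```

In an NPSO setting each "position" would encode the neural network's weights, and the fitness would be the model's prediction error on the failure data.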

Keywords: software reliability, flexible logistic growth curve model, software cumulative failure prediction, neural network, particle swarm optimization

Procedia PDF Downloads 344
895 Critical Study on the Sensitivity of Corrosion Fatigue Crack Growth Rate to Cyclic Waveform and Microstructure in Marine Steel

Authors: V. C. Igwemezie, A. N. Mehmanparast

Abstract:

The primary focus of this work is to understand how variations in microstructure and cyclic waveform affect corrosion fatigue crack growth (CFCG) in steel, especially in the Paris region of the da/dN vs. ΔK curve. This work is important because it provides fundamental information on the modelling, design, selection, and use of steels for various engineering applications in the marine environment. The authors' corrosion fatigue test data on normalized and thermomechanical control process (TMCP) ferritic-pearlitic steels were compared with several studies on different microstructures in the literature. The microstructures of these steels are radically different, and general comparative studies of fatigue crack growth resistance across these materials are very scarce and, where available, limited to a few cases. For purposes of engineering application, the results of this study show little dependency of fatigue crack growth rate (FCGR) on yield strength, tensile strength, ductility, frequency, and stress ratio in the range 0.1–0.7. The nature of the steel microstructure appears to be a major factor in determining the rate at which fatigue cracks propagate along the entire sigmoidal da/dN vs. ΔK curve. The study also shows that the sine wave is the most damaging fatigue waveform for ferritic-pearlitic steels, which suggests that testing under a sine waveform is a conservative approach for the design of engineering structures, regardless of the service waveform.

Keywords: BS7910, corrosion-fatigue crack growth rate, cyclic waveform, microstructure, steel

Procedia PDF Downloads 155
894 Developing Pavement Structural Deterioration Curves

Authors: Gregory Kelly, Gary Chai, Sittampalam Manoharan, Deborah Delaney

Abstract:

A Structural Number (SN) can be calculated for a road pavement from the properties and thicknesses of the surface, base course, sub-base, and subgrade. Historically, the cost of collecting structural data has been very high. Data were initially collected using Benkelman Beams and are now collected by Falling Weight Deflectometer (FWD). The structural strength of pavements weakens over time due to environmental and traffic loading factors, but due to a lack of data, no structural deterioration curve for pavements has been implemented in a Pavement Management System (PMS). The International Roughness Index (IRI) is a measure of the road longitudinal profile and has been used as a proxy for a pavement's structural integrity. This paper offers two conceptual methods to develop Pavement Structural Deterioration Curves (PSDC). In the first, structural data are grouped in sets by design Equivalent Standard Axles (ESA). An Initial SN (ISN), Intermediate SNs (SNI), and a Terminal SN (TSN) are used to develop the curves. Using FWD data, the ISN is the SN after the pavement is rehabilitated (the financial-accounting 'Modern Equivalent'). Intermediate SNIs are SNs other than the ISN and TSN. The TSN is defined as the SN of the pavement when it was approved for pavement rehabilitation. The second method uses Traffic Speed Deflectometer (TSD) data. The road network, already divided into road blocks, is grouped by traffic loading. For each traffic loading group, road blocks that have had a recent pavement rehabilitation are used to calculate the ISN, and those planned for pavement rehabilitation are used to calculate the TSN. The remaining SNs are used to complete the age-based or, if available, historical traffic-loading-based SNIs.

Keywords: conceptual, pavement structural number, pavement structural deterioration curve, pavement management system

Procedia PDF Downloads 544
893 Optimization of Dez Dam Reservoir Operation Using Genetic Algorithm

Authors: Alireza Nikbakht Shahbazi, Emadeddin Shirali

Abstract:

Since optimization issues of water resources are complicated by the variety of decision-making criteria and objective functions, it is sometimes impossible, or too time- and money-consuming, to resolve them through regular optimization methods. Therefore, the use of modern tools and methods is inevitable in resolving such problems. An accurate and essential utilization policy has to be determined in order to use natural resources such as water reservoirs optimally. Water reservoir programming studies aim to determine the final cultivated land area based on predefined agricultural models and water requirements; the dam utilization rule curve is also provided in such studies. The basic information applied in water reservoir programming studies generally includes meteorological, hydrological, agricultural, and water-reservoir-related data, and the geometric characteristics of the reservoir. The Dez dam water resource system was simulated using this basic information in order to determine the capability of its reservoir to meet the objectives of the performed plan. A genetic algorithm, as a metaheuristic method, was applied in order to derive utilization rule curves (intersecting the reservoir volume), and MATLAB software was used to solve the resulting model. Rule curves were first obtained through the genetic algorithm. Then the significance of using rule curves, and the reduction in the number of decision variables in the system, was determined through system simulation and comparison of the results with optimization results (Standard Operating Procedure). One of the most essential issues in optimization of a complicated water resource system is the increasing number of variables; therefore, a lot of time is required to find an optimum answer, and in some cases no desirable result is obtained. In this research, intersecting the reservoir volume has been applied as a modern model in order to reduce the number of variables.
Water reservoir programming studies have been performed based on basic information, general hypotheses, and standards, applying a monthly simulation technique over a statistical period of 30 years. Results indicated that application of the rule curve prevents extreme shortages and decreases monthly shortages.
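To illustrate how a genetic algorithm can search for a monthly rule curve, the sketch below evolves twelve monthly release targets against a toy storage balance. All names, units, capacity values, and operator choices here are hypothetical and stand in for, rather than reproduce, the study's MATLAB model.

```python
import random

def evaluate(rule, inflow, demand, capacity=100.0):
    """Penalty = sum of squared monthly shortages under a simple
    storage-balance simulation (hypothetical units)."""
    storage, penalty = capacity / 2, 0.0
    for month in range(len(inflow)):
        target = rule[month % 12]                # rule-curve target release
        release = min(target, storage + inflow[month])
        storage = min(capacity, storage + inflow[month] - release)
        penalty += max(0.0, demand[month] - release) ** 2
    return penalty

def genetic_algorithm(inflow, demand, pop_size=30, generations=200):
    """Truncation selection, one-point crossover, single-gene mutation."""
    pop = [[random.uniform(0, 20) for _ in range(12)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda r: evaluate(r, inflow, demand))
        survivors = pop[:pop_size // 2]          # keep the fitter half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, 12)        # one-point crossover
            child = a[:cut] + b[cut:]
            child[random.randrange(12)] += random.gauss(0, 1)  # mutate one gene
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda r: evaluate(r, inflow, demand))
```

A real study would replace `evaluate` with the full reservoir simulation over the 30-year monthly record.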

Keywords: optimization, rule curve, genetic algorithm method, Dez dam reservoir

Procedia PDF Downloads 265
892 A Very Efficient Pseudo-Random Number Generator Based On Chaotic Maps and S-Box Tables

Authors: M. Hamdi, R. Rhouma, S. Belghith

Abstract:

Random number generation is mainly used to create secret keys or random sequences and can be carried out by various techniques. In this paper we present a very simple and efficient pseudo-random number generator (PRNG) based on chaotic maps and S-box tables. This technique adopts two main operations: one to generate chaotic values using two logistic maps, and a second to transform them into binary words using random S-box tables. The simulation analysis indicates that our PRNG possesses excellent statistical and cryptographic properties.
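The two-step construction described above can be sketched in a few lines. This is a toy illustration of the idea (logistic-map output driving an S-box substitution), not the authors' generator; the parameter r = 3.99 and the map-driven Fisher-Yates shuffle are assumptions.

```python
def logistic_prng(seed1, seed2, n_bytes, r=3.99):
    """Pseudo-random bytes from two logistic maps plus an S-box.

    The logistic map x -> r*x*(1-x) is chaotic for r close to 4.
    A substitution table (S-box), itself shuffled by the first map,
    turns each chaotic value from the second map into one byte.
    """
    x, y = seed1, seed2
    sbox = list(range(256))
    for i in range(255, 0, -1):                 # Fisher-Yates driven by map 1
        x = r * x * (1 - x)
        j = int(x * (i + 1))
        sbox[i], sbox[j] = sbox[j], sbox[i]
    out = bytearray()
    for _ in range(n_bytes):                    # map 2 value -> byte via S-box
        y = r * y * (1 - y)
        out.append(sbox[int(y * 256) % 256])
    return bytes(out)
```

Note that a toy generator like this is for illustration only; it has none of the statistical-test evidence the paper reports and should not be used for actual key material.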

Keywords: random numbers, chaotic map, S-box, cryptography, statistical tests

Procedia PDF Downloads 365
891 The Permutation of Symmetric Triangular Equilateral Group in the Cryptography of Private and Public Key

Authors: Fola John Adeyeye

Abstract:

In this paper, we propose a private- and public-key cryptosystem based on the symmetric group Pn and validate its theoretical formulation. The proposed system benefits from the algebraic properties of Pn, such as noncommutativity, high logical and computational speed, and high flexibility in key selection, which make the discrete permutation multiplier logic (DPML) resistant to attack by algorithms such as Pohlig-Hellman. One of the advantages of this scheme is that it explores all the possible triangular symmetries. Against these properties, the only disadvantage is that the law of permutation multiplication allows an operation only from left to right. Many other cryptosystems can be transformed into their symmetric-group form.

Keywords: cryptosystem, private and public key, DPML, symmetric group Pn

Procedia PDF Downloads 202
890 Thermoluminescence Investigations of Tl2Ga2Se3S Layered Single Crystals

Authors: Serdar Delice, Mehmet Isik, Nizami Hasanli, Kadir Goksen

Abstract:

Researchers have devoted great interest to ternary and quaternary semiconductor compounds, especially with the improvement of optoelectronic technology. The quaternary compound Tl2Ga2Se3S, grown by the Bridgman method, carries the properties of the ternary thallium chalcogenide group of semiconductors with layered structure. This compound can be formed from TlGaSe2 crystals by replacing one quarter of the selenium atoms with sulfur atoms. Although Tl2Ga2Se3S crystals are not intentionally doped, some unintended defect types such as point defects, dislocations, and stacking faults can occur during crystal growth. These defects can cause undesirable problems in semiconductor materials, especially those produced for optoelectronic technology. Defects of various types in semiconductor devices like LEDs and field-effect transistors may act as non-radiative or scattering centers in electron transport. Also, quick recombination of holes with electrons without any energy transfer between charge carriers can occur due to the existence of defects. Therefore, the characterization of defects may help researchers working in this field to produce high-quality devices. Thermoluminescence (TL) is an effective experimental method to determine the kinetic parameters of trap centers due to defects in crystals. In this method, the sample is illuminated at low temperature by light whose energy is bigger than the band gap of the studied sample. Thus, charge carriers in the valence band are excited to the delocalized band. Then, the charge carriers excited into the conduction band are trapped. The trapped charge carriers are released by heating the sample gradually, and these carriers then recombine with the opposite carriers at the recombination center. In this way, luminescence is emitted from the sample. The emitted luminescence is converted to pulses by a computer-controlled experimental setup, and the TL spectrum is obtained.
Defect characterization of Tl2Ga2Se3S single crystals has been performed by TL measurements at low temperatures between 10 and 300 K with various heating rates ranging from 0.6 to 1.0 K/s. The TL signal due to luminescence from trap centers revealed one glow peak with a maximum at 36 K. Curve-fitting and various-heating-rate methods were used for the analysis of the glow curve. An activation energy of 13 meV was found by application of the curve-fitting method. This practical method also established that the trap center exhibits the characteristics of mixed (general) kinetic order. In addition, the various-heating-rate analysis gave a result (13 meV) compatible with curve fitting when the temperature-lag effect was taken into consideration. Since the studied crystals were not intentionally doped, these centers are thought to originate from stacking faults, which are quite possible in Tl2Ga2Se3S due to the weakness of the van der Waals forces between the layers. The distribution of traps was also investigated using an experimental method, and a quasi-continuous distribution was attributed to the determined trap centers.

Keywords: chalcogenides, defects, thermoluminescence, trap centers

Procedia PDF Downloads 282
889 Analysis of Accurate Direct-Estimation of the Maximum Power Point and Thermal Characteristics of High Concentration Photovoltaic Modules

Authors: Yan-Wen Wang, Chu-Yang Chou, Jen-Cheng Wang, Min-Sheng Liao, Hsuan-Hsiang Hsu, Cheng-Ying Chou, Chen-Kang Huang, Kun-Chang Kuo, Joe-Air Jiang

Abstract:

Performance-related parameters of high concentration photovoltaic (HCPV) modules (e.g. current and voltage) are required when estimating the maximum power point using numerical and approximation methods. The maximum power point on the characteristic curve for a photovoltaic module varies when temperature or solar radiation is different. It is also difficult to estimate the output performance and maximum power point (MPP) due to the special characteristics of HCPV modules. Based on the p-n junction semiconductor theory, a brand new and simple method is presented in this study to directly evaluate the MPP of HCPV modules. The MPP of HCPV modules can be determined from an irradiated I-V characteristic curve, because there is a non-linear relationship between the temperature of a solar cell and solar radiation. Numerical simulations and field tests are conducted to examine the characteristics of HCPV modules during maximum output power tracking. The performance of the presented method is evaluated by examining the dependence of temperature and irradiation intensity on the MPP characteristics of HCPV modules. These results show that the presented method allows HCPV modules to achieve their maximum power and perform power tracking under various operation conditions. A 0.1% error is found between the estimated and the real maximum power point.
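For reference, locating the MPP on a sampled I-V characteristic reduces to maximizing P = V·I over the measured points. The direct scan below illustrates that baseline; it is not the paper's p-n-junction-based direct-estimation method.

```python
def maximum_power_point(voltages, currents):
    """Return the (V, I, P) triple on a measured I-V curve that
    maximizes P = V * I (a brute-force scan over the samples)."""
    powers = [v * i for v, i in zip(voltages, currents)]
    k = max(range(len(powers)), key=powers.__getitem__)
    return voltages[k], currents[k], powers[k]
```

The appeal of a direct-estimation method such as the paper's is precisely that it avoids sweeping the whole curve under changing temperature and irradiance.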

Keywords: energy performance, high concentrated photovoltaic, maximum power point, p-n junction semiconductor

Procedia PDF Downloads 584
888 The Neutrophil-to-Lymphocyte Ratio after Surgery for Hip Fracture in a New, Simple, and Objective Score to Predict Postoperative Mortality

Authors: Philippe Dillien, Patrice Forget, Harald Engel, Olivier Cornu, Marc De Kock, Jean Cyr Yombi

Abstract:

Introduction: Hip fracture commonly precedes death in elderly people. Identification of high-risk patients may help target patients in whom optimal management, resource allocation, and trial efficiency are needed. The aim of this study is to construct a predictive score for mortality after hip fracture on the basis of the objective prognostic factors available: neutrophil-to-lymphocyte ratio (NLR), age, and sex. C-reactive protein (CRP) is also considered as an alternative to the NLR. Patients and methods: After IRB approval, we analyzed our prospective database including 286 consecutive patients with hip fracture. A score was constructed combining age (1 point per decade above 74 years), sex (1 point for males), and NLR at postoperative day 5 (1 point if >5). A receiver operating characteristic (ROC) curve analysis was performed. Results: Of the 286 patients included, 235 were analyzed (72 males and 163 females, 30.6%/69.4%), with a median age of 84 (range: 65 to 102) years and mean NLR values of 6.47+/-6.07. At one year, 82/280 patients died (29.3%). Graphical analysis and the log-rank test confirm a highly statistically significant difference (P<0.001). Performance analysis shows an AUC of 0.72 [95%CI 0.65-0.79]. CRP shows no advantage over the NLR. Conclusion: We have developed a score based on age, sex, and the NLR to predict the risk of mortality at one year in elderly patients after surgery for a hip fracture. After external validation, it may be included in clinical practice as well as in clinical research to stratify the risk of postoperative mortality.
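The score described in the abstract can be written down directly. This sketch assumes one reading of "1 point per decade above 74 years" (whole completed decades), which the abstract does not pin down.

```python
def mortality_score(age, male, nlr_day5):
    """Postoperative mortality score from the abstract:
    1 point per whole decade of age above 74 years (assumed reading),
    1 point for male sex, 1 point if day-5 NLR > 5."""
    score = max(0, (age - 74) // 10)   # completed decades above 74
    score += 1 if male else 0
    score += 1 if nlr_day5 > 5 else 0
    return score
```

For example, an 84-year-old man with a day-5 NLR of 6.5 scores 1 + 1 + 1 = 3.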

Keywords: neutrophil-to-lymphocyte ratio, hip fracture, postoperative mortality, medical and health sciences

Procedia PDF Downloads 412
887 Solution for Thick Plate Resting on Winkler Foundation by Symplectic Geometry Method

Authors: Mei-Jie Xu, Yang Zhong

Abstract:

Based on the symplectic geometry method, the theory of Hamilton system can be applied in the analysis of problem solved using the theory of elasticity and in the solution of elliptic partial differential equations. With this technique, this paper derives the theoretical solution for a thick rectangular plate with four free edges supported on a Winkler foundation by variable separation method. In this method, the governing equation of thick plate was first transformed into state equations in the Hamilton space. The theoretical solution of this problem was next obtained by applying the method of variable separation based on the Hamilton system. Compared with traditional theoretical solutions for rectangular plates, this method has the advantage of not having to assume the form of deflection functions in the solution process. Numerical examples are presented to verify the validity of the proposed solution method.

Keywords: symplectic geometry method, Winkler foundation, thick rectangular plate, variable separation method, Hamilton system

Procedia PDF Downloads 305
886 Family of Density Curves of Queensland Soils from Compaction Tests, on a 3D Z-Plane Function of Moisture Content, Saturation, and Air-Void Ratio

Authors: Habib Alehossein, M. S. K. Fernando

Abstract:

Soil density depends on the volume of the voids and the proportion of the water and air in the voids. However, there is a limit to the contraction of the voids at any given compaction energy, whereby additional water is used to reduce the void volume further by lubricating the particles' frictional contacts. Hence, at an optimum moisture content and specific compaction energy, the density of unsaturated soil can be maximized where the void volume is minimum. However, when considering a full compaction curve and permutations and variations of all these components (soil, air, water, and energy), laboratory soil compaction tests can become expensive, time-consuming, and exhausting. Therefore, analytical methods constructed on a few test data can be developed and used to reduce such unnecessary efforts significantly. Concentrating on the compaction testing results, this study discusses the analytical modelling method developed for some fine-grained and coarse-grained soils of Queensland. Soil properties and characteristics, such as full functional compaction curves under various compaction energy conditions, were studied and developed for a few soil types. Using MATLAB, several generic analytical codes were created for this study, covering all possible compaction parameters and results as they occur in a soil mechanics lab. These MATLAB codes produce a family of curves to determine the relationships between the density, moisture content, void ratio, saturation, and compaction energy.
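The density-moisture-saturation-air-void relationships underlying such a family of curves follow from the standard soil phase equations. A minimal sketch, with an assumed specific gravity Gs = 2.65 and densities in g/cm³:

```python
def compaction_state(moisture_content, dry_density, specific_gravity=2.65,
                     water_density=1.0):
    """Void ratio, saturation, and air-void ratio from the standard
    phase relationships (moisture content w as a fraction of dry mass):
      e  = Gs*rho_w/rho_d - 1
      S  = w*Gs/e
      Av = (e - w*Gs)/(1 + e)
    """
    e = specific_gravity * water_density / dry_density - 1
    saturation = moisture_content * specific_gravity / e
    air_voids = (e - moisture_content * specific_gravity) / (1 + e)
    return e, saturation, air_voids
```

Sweeping `moisture_content` at fixed air-void ratio (or fixed saturation) traces exactly the family of density curves the abstract describes on the moisture-saturation-air-void plane.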

Keywords: analytical, MATLAB, modelling, compaction curve, void ratio, saturation, moisture content

Procedia PDF Downloads 90
885 Epidemiological and Clinical Characteristics of Five Rare Pathological Subtypes of Hepatocellular Carcinoma

Authors: Xiaoyuan Chen

Abstract:

Background: This study aimed to characterize the epidemiological and clinical features of five rare subtypes of hepatocellular carcinoma (HCC) and to create a competing-risk nomogram for predicting cancer-specific survival. Methods: This study used the Surveillance, Epidemiology, and End Results database to analyze the clinicopathological data of 50,218 patients with classic HCC and five rare subtypes (ICD-O-3 Histology Code=8170/3-8175/3) between 2004 and 2018. The annual percent change (APC) was calculated using Joinpoint regression, and a nomogram was developed based on multivariable competing-risk survival analyses. The prognostic performance of the nomogram was evaluated using the Akaike information criterion, the Bayesian information criterion, the C-index, calibration curves, and the area under the receiver operating characteristic curve. Decision curve analysis was used to assess the clinical value of the models. Results: The incidence of scirrhous carcinoma showed a decreasing trend (APC=-6.8%, P=0.025), while the morbidity of the other rare subtypes remained stable from 2004 to 2018. Incidence-based mortality plateaued in all subtypes during this period. Clear cell carcinoma was the most common subtype (n=551, 1.1%), followed by fibrolamellar (n=241, 0.5%), scirrhous (n=82, 0.2%), spindle cell (n=61, 0.1%), and pleomorphic (n=17, ~0%) carcinomas. Patients with fibrolamellar carcinoma were younger and more likely to have non-cirrhotic livers and better prognoses. Scirrhous carcinoma shared almost the same macroscopic clinical characteristics and outcomes as classic HCC. Clear cell carcinoma tended to occur in the elderly male Asia-Pacific population, and more than half of the cases were large HCC (size>5cm). Sarcomatoid (including spindle cell and pleomorphic) carcinoma was associated with larger tumor size, poorer differentiation, and more dismal prognoses.
The pathological subtype, T stage, M stage, surgery, alpha-fetoprotein, and cancer history were identified as independent predictors in patients with rare subtypes. The nomogram showed good calibration, discrimination, and net benefits in clinical practice. Conclusion: The rare subtypes of HCC had distinct clinicopathological features and biological behaviors compared with classic HCC. Our findings could provide a valuable reference for clinicians. The constructed nomogram could accurately predict prognoses, which is beneficial for individualized management.

Keywords: hepatocellular carcinoma, pathological subtype, fibrolamellar carcinoma, scirrhous carcinoma, clear cell carcinoma, spindle cell carcinoma, pleomorphic carcinoma

Procedia PDF Downloads 75
884 FPGA Implementation of the BB84 Protocol

Authors: Jaouadi Ikram, Machhout Mohsen

Abstract:

The development of a quantum key distribution (QKD) system on a field-programmable gate array (FPGA) platform is the subject of this paper. A quantum cryptographic protocol is designed based on the properties of quantum information and the characteristics of FPGAs. The proposed protocol performs key extraction, reconciliation, error correction, and privacy amplification tasks to generate a perfectly secret final key. We modeled the presence of an eavesdropper in our system with a strategy that reveals some of the exchanged information without being noticed. Using an FPGA card with a 100 MHz clock frequency, we have demonstrated the evolution of the error rate, as well as of the amounts of mutual information (between the two interlocutors, and with the eavesdropper), from one step of the key generation process to the next.
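The preparation and sifting steps of BB84, on which the paper's key-extraction stage builds, can be simulated classically. A noiseless, eavesdropper-free sketch (so the sifted keys agree exactly):

```python
import random

def bb84_sift(n_bits, seed=None):
    """BB84 sifting: Alice sends random bits in random bases; Bob measures
    in random bases; they keep only the positions where the bases match."""
    rng = random.Random(seed)
    alice_bits  = [rng.randrange(2) for _ in range(n_bits)]
    alice_bases = [rng.randrange(2) for _ in range(n_bits)]  # 0 rectilinear, 1 diagonal
    bob_bases   = [rng.randrange(2) for _ in range(n_bits)]
    # Matching basis -> Bob reads Alice's bit; otherwise his result is random.
    bob_bits = [b if ab == bb else rng.randrange(2)
                for b, ab, bb in zip(alice_bits, alice_bases, bob_bases)]
    keep = [i for i in range(n_bits) if alice_bases[i] == bob_bases[i]]
    return [alice_bits[i] for i in keep], [bob_bits[i] for i in keep]
```

On average half the positions survive sifting; an eavesdropper or channel noise would introduce disagreements in the sifted keys, which is what the reconciliation and error-correction stages of the paper then measure and remove.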

Keywords: QKD, BB84, protocol, cryptography, FPGA, key, security, communication

Procedia PDF Downloads 183
883 A Comparative Study of the Impact of Membership in International Climate Change Treaties and the Environmental Kuznets Curve (EKC) in Line with Sustainable Development Theories

Authors: Mojtaba Taheri, Saied Reza Ameli

Abstract:

In this research, we calculate the effect of membership in international climate change treaties for 20 developed countries, selected on the basis of the Human Development Index (HDI), and compare this effect with the pollutant-reduction process described by Environmental Kuznets Curve (EKC) theory. Real GDP per capita at constant 2010 prices is taken from the World Development Indicators (WDI) database. Ecological footprint (ECOFP), the amount of biologically productive land needed to meet human needs and absorb carbon dioxide emissions, is measured in global hectares (gha) and retrieved from the Global Ecological Footprint (2021) database. We proceed step by step, performing several series of targeted statistical regressions and examining the effects of different control variables. Energy Consumption Structure (ECS) is measured as the share of fossil fuel consumption in total energy consumption and is extracted from the United States Energy Information Administration (EIA) (2021) database. Energy Production (EP) refers to the total production of primary energy by all energy-producing enterprises in a country at a specific time; it is a comprehensive indicator of a country's energy production capacity, and its data, like the Energy Consumption Structure, are obtained from the EIA (2021 version). Financial development (FND) is defined as the ratio of private credit to GDP, and to some extent is based on stock market value, also as a ratio to GDP; it is taken from the WDI (2021 version). Trade Openness (TRD) is the sum of exports and imports of goods and services measured as a share of GDP, from the WDI (2021 version). Urbanization (URB) is defined as the share of the urban population in the total population, also from the WDI (2021 version).
Descriptive statistics of all the investigated variables are presented in the results section. Among the theories of sustainable development, the Environmental Kuznets Curve (EKC) is the most significant in the period of study. In this research, we use more than fourteen targeted statistical regressions to isolate the net effects of each of the approaches and examine the results.
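The EKC's inverted-U hypothesis reduces to fitting a quadratic in income and checking the signs of the coefficients. A self-contained least-squares sketch (illustrative only; the variable names are hypothetical and this is far simpler than the panel regressions the study runs):

```python
def ekc_quadratic_fit(income, footprint):
    """Least-squares fit of  footprint = b0 + b1*income + b2*income**2
    via the 3x3 normal equations; returns (b0, b1, b2) and the turning
    point -b1/(2*b2). An inverted-U (EKC) shape means b1 > 0, b2 < 0."""
    s = [sum(x ** k for x in income) for k in range(5)]      # power sums
    t = [sum(y * x ** k for x, y in zip(income, footprint)) for k in range(3)]
    a = [[s[0], s[1], s[2], t[0]],
         [s[1], s[2], s[3], t[1]],
         [s[2], s[3], s[4], t[2]]]
    # Gaussian elimination with partial pivoting on the augmented matrix.
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        for r in range(col + 1, 3):
            f = a[r][col] / a[col][col]
            a[r] = [x - f * y for x, y in zip(a[r], a[col])]
    b = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):                                       # back-substitute
        b[r] = (a[r][3] - sum(a[r][c] * b[c] for c in range(r + 1, 3))) / a[r][r]
    b0, b1, b2 = b
    return (b0, b1, b2), -b1 / (2 * b2)
```

The turning point is the income level at which the fitted footprint curve peaks, which is the quantity EKC studies compare against observed income ranges.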

Keywords: climate change, globalization, environmental economics, sustainable development, international climate treaty

Procedia PDF Downloads 71
882 Comparing Accuracy of Semantic and Radiomics Features in Prognosis of Epidermal Growth Factor Receptor Mutation in Non-Small Cell Lung Cancer

Authors: Mahya Naghipoor

Abstract:

Purpose: Non-small cell lung cancer (NSCLC) is the most common lung cancer type, and epidermal growth factor receptor (EGFR) mutation is one of its main oncogenic drivers. Computed tomography (CT) is used for the diagnosis and prognosis of lung cancers because of its low cost and minimal invasiveness. Semantic analysis of qualitative CT features is based on visual evaluation by a radiologist; however, the naked eye may not capture all image features. Radiomics, on the other hand, provides the opportunity for quantitative analysis of CT image features. The aim of this review study was to compare the accuracy of semantic and radiomics features in the prognosis of EGFR mutation in NSCLC. Methods: The keywords non-small cell lung cancer, epidermal growth factor receptor mutation, semantic, radiomics, feature, receiver operating characteristic curve (ROC), and area under the curve (AUC) were searched in PubMed and Google Scholar. In total, 29 papers were reviewed, and the AUCs of the ROC analyses for semantic and radiomics features were compared. Results: The reported AUC values for semantic features (ground glass opacity, shape, margins, lesion density, and presence or absence of air bronchogram, emphysema, and pleural effusion) were 41%-79%. For radiomics features (kurtosis, skewness, entropy, texture, standard deviation (SD), and wavelet) the AUC values were 50%-86%. Conclusions: The accuracy of radiomics analysis is slightly higher than that of semantic analysis in the prognosis of EGFR mutation in NSCLC.
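The AUC figures being compared have a simple rank interpretation (the Mann-Whitney statistic): the probability that a randomly chosen mutation-positive case scores higher than a randomly chosen negative one. A minimal computation:

```python
def roc_auc(scores, labels):
    """AUC via the rank (Mann-Whitney) formulation: the fraction of
    positive-negative pairs where the positive case scores higher,
    counting ties as 1/2."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 0.5 means the feature carries no discriminative information, which is why the 41% lower bound reported for some semantic features amounts to performance no better than chance.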

Keywords: lung cancer, radiomics, computer tomography, mutation

Procedia PDF Downloads 167
881 Imaging of Underground Targets with an Improved Back-Projection Algorithm

Authors: Alireza Akbari, Gelareh Babaee Khou

Abstract:

Ground Penetrating Radar (GPR) is an important nondestructive remote sensing tool that has been used in both military and civilian fields. Recently, GPR imaging has attracted much attention for the detection of shallow, small subsurface targets such as landmines and unexploded ordnance, and also for through-wall imaging in security applications. For the monostatic arrangement, a single point target appears in the space-time GPR image as a hyperbolic curve because of the different trip times of the EM wave as the radar moves along a synthetic aperture and collects the reflectivity of the subsurface targets. This hyperbolic curve degrades resolution along the synthetic aperture direction owing to the tails of the hyperbola. However, highly accurate information about the size, electromagnetic (EM) reflectivity, and depth of the buried objects is essential in most GPR applications. The hyperbolic signature in the space-time GPR image is therefore usually transformed into a focused pattern showing the object's true location and size together with its EM scattering. The common goal of a typical GPR image is to display the spatial location and the reflectivity of an underground object, so the main challenge of GPR imaging is to devise an image reconstruction algorithm that provides high resolution together with good suppression of strong artifacts and noise. In this paper, the standard back-projection (BP) algorithm, adapted to GPR imaging applications, is first used for image reconstruction. The standard BP algorithm performs poorly against strong noise and produces many artifacts, which adversely affect subsequent tasks such as target detection. An improved BP algorithm based on cross-correlation between the received signals is therefore proposed to reduce noise and suppress artifacts. To improve the quality of the results of the proposed BP imaging algorithm, a weight factor was designed for each point in the imaging region. Compared to the standard BP scheme, the improved algorithm produces images of higher quality and resolution. The proposed improved BP algorithm was applied to simulated and real GPR data, and the results showed that it provides superior artifact suppression and produces images of high quality and resolution. To quantify the effect of artifact suppression on the imaging results, a focusing parameter was evaluated.
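The delay-and-sum idea behind BP, together with one simple cross-correlation-style weighting (a coherence factor), can be sketched as follows; the geometry, wave speed, and pulse model are invented illustration values, not the authors' setup:

```python
import math

# Minimal sketch of back-projection (BP) imaging for monostatic GPR, with a
# coherence-factor weight as one simple cross-correlation-style improvement.
V = 0.1          # assumed wave speed in the ground (m/ns)
DT = 0.1         # time sampling interval (ns)
NT = 400         # samples per A-scan
POSITIONS = [i * 0.05 for i in range(41)]   # antenna positions along the aperture (m)
TARGET = (1.0, 0.5)                          # (x, depth) of a buried point scatterer (m)

def pulse(t, t0, width=0.3):
    """Gaussian pulse centred on the two-way travel time t0."""
    return math.exp(-((t - t0) / width) ** 2)

# Simulated B-scan: each A-scan records the hyperbolic-delay echo of the target.
BSCAN = []
for xa in POSITIONS:
    t0 = 2.0 * math.hypot(xa - TARGET[0], TARGET[1]) / V
    BSCAN.append([pulse(i * DT, t0) for i in range(NT)])

def back_project(weighted):
    """Delay-and-sum BP; if `weighted`, scale each pixel by a coherence factor
    that is near 1 where the traces agree and small where they do not."""
    nx, nz = 41, 40
    image = [[0.0] * nz for _ in range(nx)]
    for ix in range(nx):
        for iz in range(nz):
            x, z = ix * 0.05, (iz + 1) * 0.025
            samples = []
            for xa, trace in zip(POSITIONS, BSCAN):
                k = int(round(2.0 * math.hypot(xa - x, z) / V / DT))
                samples.append(trace[k] if k < NT else 0.0)
            s = sum(samples)
            if weighted:
                denom = len(samples) * sum(v * v for v in samples)
                s *= (s * s / denom) if denom > 0 else 0.0
            image[ix][iz] = abs(s)
    return image

def peak_pixel(img):
    """Grid indices of the brightest pixel."""
    return max(((v, ix, iz) for ix, row in enumerate(img)
                for iz, v in enumerate(row)))[1:]

print(peak_pixel(back_project(False)), peak_pixel(back_project(True)))
```

Both variants focus the hyperbola back onto the target cell; the weighted image additionally suppresses off-target sidelobe energy relative to its peak.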

Keywords: algorithm, back-projection, GPR, remote sensing

Procedia PDF Downloads 452
880 Multiscale Entropy Analysis of Electroencephalogram (EEG) of Alcoholic and Control Subjects

Authors: Lal Hussain, Wajid Aziz, Imtiaz Ahmed Awan, Sharjeel Saeed

Abstract:

Multiscale entropy (MSE) analysis is a useful technique recently developed to quantify the dynamics of physiological signals at different time scales. This study investigates electroencephalogram (EEG) signals to analyze the background activity of alcoholic and control subjects by inspecting the coarse-grained sequences formed at different time scales. EEG recordings of alcoholic and control subjects, acquired using 64 electrodes, were taken from the publicly available machine learning repository of the University of California, Irvine (UCI). The MSE analysis was performed on the EEG data acquired from all the electrodes of alcoholic and control subjects. The Mann-Whitney rank test was used to find significant differences between the groups, and results were considered statistically significant for p-values < 0.05. The area under the receiver operating characteristic (ROC) curve was computed to quantify the degree of separation between the groups. The mean ranks of MSE values at all time scales for all electrodes were higher for control subjects than for alcoholic subjects; higher mean ranks represent higher complexity and vice versa. The findings indicated that EEG signals acquired through electrodes C3, C4, F3, F7, F8, O1, O2, P3, and T7 showed significant differences between alcoholic and control subjects at time scales 1 to 5. Moreover, all electrodes exhibited significance at different time scales. The highest accuracy and separation were obtained at the central region (C3 and C4) and at electrodes P3, O1, F3, F7, F8, and T8, while other electrodes such as Fp1, Fp2, P4, and F4 showed no significant results.
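The coarse-graining and sample-entropy steps described above can be sketched as follows; the signal, its length, and the parameter choices (m = 2, tolerance r = 0.2 times the SD of the original series, as in the standard MSE procedure) are illustrative, not the study's exact settings:

```python
import math, random

def coarse_grain(signal, scale):
    """Non-overlapping window averages of length `scale` (MSE coarse-graining)."""
    n = len(signal) // scale
    return [sum(signal[i * scale:(i + 1) * scale]) / scale for i in range(n)]

def sample_entropy(x, m, r):
    """SampEn(m, r): negative log of the conditional probability that template
    vectors matching for m points (Chebyshev distance <= r) also match for m+1."""
    n = len(x)
    def matches(length):
        c = 0
        for i in range(n - m):
            for j in range(i + 1, n - m):
                if max(abs(x[i + k] - x[j + k]) for k in range(length)) <= r:
                    c += 1
        return c
    b, a = matches(m), matches(m + 1)
    return -math.log(a / b) if a > 0 else float("inf")

def multiscale_entropy(signal, max_scale=5, m=2):
    mean = sum(signal) / len(signal)
    r = 0.2 * math.sqrt(sum((v - mean) ** 2 for v in signal) / len(signal))
    return [sample_entropy(coarse_grain(signal, s), m, r)
            for s in range(1, max_scale + 1)]

random.seed(0)
white_noise = [random.gauss(0.0, 1.0) for _ in range(600)]
mse = multiscale_entropy(white_noise)
print([round(v, 2) for v in mse])   # for white noise, entropy falls with scale
```

For uncorrelated noise the MSE curve decreases with scale, whereas signals with long-range correlations hold their complexity across scales; group comparisons such as the Mann-Whitney test above are then run on these per-scale values.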

Keywords: electroencephalogram (EEG), multiscale sample entropy (MSE), Mann-Whitney test (MWT), receiver operating characteristic (ROC) curve, complexity analysis

Procedia PDF Downloads 376
879 Comparative Evaluation of EBT3 Film Dosimetry Using Flatbed Scanner, Densitometer and Spectrophotometer Methods and Its Applications in Radiotherapy

Authors: K. Khaerunnisa, D. Ryangga, S. A. Pawiro

Abstract:

Over the past few decades, film dosimetry has become a tool used in various radiotherapy modalities, either for clinical quality assurance (QA) or for dose verification. The response of the film to irradiation is usually expressed as optical density (OD) or net optical density (netOD). Since the film's response to radiation is not linear, the use of film as a dosimeter requires a calibration process. This study aimed to compare the calibration-curve response functions obtained with three measurement methods: a flatbed scanner, a point densitometer, and a spectrophotometer. For every response function, a radiochromic film calibration curve was generated from each method, and accuracy, precision, and sensitivity analyses were performed. netOD is obtained by measuring the change in the optical density of the film before and after irradiation: with the film scanner, ImageJ is used to extract the pixel value of the film on the red channel of the three RGB channels; with the point densitometer, the change in OD before and after irradiation is calculated directly; and with the spectrophotometer, the change in absorbance before and after irradiation is calculated. The results showed that the three calibration methods gave readings with a netOD precision below 3% for an uncertainty of 1σ (one sigma). While the sensitivities of the three methods follow the same trend in responding to radiation, their magnitudes differ. The accuracy of the three methods is below 3% for doses of 100 cGy and 200 cGy, but for doses below 100 cGy it exceeds 3% when using the point densitometer and the spectrophotometer. When the three methods are used for clinical implementation, the results show accuracy and precision below 2% for the scanner and the spectrophotometer, and above 3% for the point densitometer.
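The netOD computation and a calibration-curve lookup can be sketched as follows; the pixel values and calibration points are invented for illustration and are not the study's measured data:

```python
import math

def net_od(pv_before, pv_after):
    """Net optical density from red-channel pixel values of the film scanned
    before and after irradiation (flatbed-scanner protocol)."""
    return math.log10(pv_before / pv_after)

# Hypothetical calibration points: (netOD, dose in cGy) pairs, assumed values.
calibration = [(0.0, 0.0), (0.05, 50.0), (0.12, 100.0),
               (0.28, 200.0), (0.45, 400.0)]

def dose_from_net_od(nod):
    """Piecewise-linear interpolation on the calibration curve."""
    for (x0, d0), (x1, d1) in zip(calibration, calibration[1:]):
        if x0 <= nod <= x1:
            return d0 + (d1 - d0) * (nod - x0) / (x1 - x0)
    raise ValueError("netOD outside calibration range")

nod = net_od(pv_before=41250, pv_after=31050)
print(round(nod, 4), round(dose_from_net_od(nod), 1))
```

In practice the calibration points would themselves carry the method-dependent accuracy and precision discussed above, and a smooth fit (rather than linear interpolation) is often used.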

Keywords: calibration methods, EBT3 film dosimetry, flatbed scanner, densitometer, spectrophotometer

Procedia PDF Downloads 135
878 Working Memory Growth from Kindergarten to First Grade: Considering Impulsivity, Parental Discipline Methods and Socioeconomic Status

Authors: Ayse Cobanoglu

Abstract:

Working memory can be defined as a workspace that holds and regulates active information in the mind. This study investigates individual changes in children's working memory from kindergarten to first grade. Its main purpose is to examine whether parental discipline methods and children's impulsive/overactive behaviors affect the initial status and growth rate of working memory, controlling for gender, minority status, and socioeconomic status (SES). A linear growth curve model was fitted to the first four waves of the Early Childhood Longitudinal Study-Kindergarten Cohort of 2011 (ECLS-K:2011) to analyze the individual growth of children's working memory longitudinally (N = 3915). Results revealed significant variation among students' initial status in the kindergarten fall semester as well as in the growth rate during the first two years of schooling. While minority status, SES, and children's overactive/impulsive behaviors influenced children's initial status, only SES and minority status were significantly associated with the growth rate of working memory. Parental discipline methods such as giving a warning and ignoring the child's negative behavior were also negatively associated with initial working memory scores. Examining the growth rate, students with lower SES, as well as minority students, showed a faster growth pattern during the first two years of schooling. However, the findings on the effects of parental disciplinary methods on working memory growth rates were mixed. It can be concluded that schooling helps low-SES minority students develop their working memory.
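As a rough illustration of growth-curve logic, the two-stage sketch below fits a per-child intercept (initial status) and slope (growth rate) over four waves and compares SES groups; a real ECLS-K:2011 analysis would use a multilevel (mixed-effects) growth curve model, and all numbers here are simulated:

```python
import random

# Simulate four waves of working-memory scores per child. Low-SES children
# start lower but grow faster, mimicking the pattern reported in the abstract;
# every coefficient below is invented for illustration.
random.seed(1)
N_CHILDREN, WAVES = 500, 4

children = []
for _ in range(N_CHILDREN):
    low_ses = random.random() < 0.4
    intercept = random.gauss(50.0 - (6.0 if low_ses else 0.0), 4.0)
    slope = random.gauss(5.0 + (1.5 if low_ses else 0.0), 1.0)
    scores = [intercept + slope * w + random.gauss(0.0, 2.0) for w in range(WAVES)]
    children.append((low_ses, scores))

def fit_line(ys):
    """Least-squares intercept and slope of one child's trajectory."""
    n = len(ys)
    xs = range(n)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return my - slope * mx, slope

def group_mean(low_ses_flag, component):
    """Average fitted intercept (0) or slope (1) within an SES group."""
    vals = [fit_line(scores)[component]
            for flag, scores in children if flag == low_ses_flag]
    return sum(vals) / len(vals)

print("initial status (low vs. high SES):",
      round(group_mean(True, 0), 1), round(group_mean(False, 0), 1))
print("growth rate    (low vs. high SES):",
      round(group_mean(True, 1), 2), round(group_mean(False, 1), 2))
```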

Keywords: growth curve modeling, impulsive/overactive behaviors, parenting, working memory

Procedia PDF Downloads 135
877 Study of a Few Additional Posterior Projection Data to 180° Acquisition for Myocardial SPECT

Authors: Yasuyuki Takahashi, Hirotaka Shimada, Takao Kanzaki

Abstract:

Dual-detector SPECT systems are widely used for myocardial SPECT studies. With 180-degree (180°) acquisition, reconstructed images are distorted in the posterior wall of the myocardium due to the lack of sufficient posterior projection data. We hypothesized that the quality of myocardial SPECT images can be improved by adding only a few posterior projections to the ordinary 180° acquisition. The proposed acquisition method (the 180° plus acquisition method) uses a dual-detector SPECT system with a pair of detectors arranged perpendicularly at 90°. The sampling angle was 5°, and the acquisition range was 180°, from 45° right anterior oblique to 45° left posterior oblique. After the 180° acquisition, the detector moved to additional acquisition positions on the reverse side: once for 2 projections, twice for 4 projections, or three times for 6 projections. Since these acquisition methods cannot be performed on the present system, actual data acquisition was done over 360° with a sampling angle of 5°, and the projection data corresponding to the above acquisition positions were extracted for reconstruction. We performed phantom studies and a clinical study. SPECT images were compared by profile curve analysis and also quantitatively by contrast ratio. The distortion was improved by the 180° plus method, and profile curve analysis showed an improved depiction of the cardiac cavity. Contrast ratio analysis revealed that the SPECT images of the phantoms and the clinical study were improved over 180° acquisition by the present methods. No clear difference in contrast was recognized among 180° plus 2, plus 4, and plus 6 projections. The 180° plus 2 projections method may therefore be feasible for myocardial SPECT, since it improved both image distortion and contrast.

Keywords: 180° plus acquisition method, a few posterior projections, dual-detector SPECT system, myocardial SPECT

Procedia PDF Downloads 295
876 Forecasting Market Share of Electric Vehicles in Taiwan Using Conjoint Models and Monte Carlo Simulation

Authors: Li-hsing Shih, Wei-Jen Hsu

Abstract:

Recently, sales of electric vehicles (EVs) have increased dramatically due to maturing technology and decreasing cost. Governments of many countries have made regulations and policies in favor of EVs as part of their long-term commitment to net-zero carbon emissions. However, due to uncertain factors such as the future price of EVs, forecasting the future market share of EVs is a challenging subject for both the auto industry and local government. This study forecasts the market share of EVs using conjoint models and Monte Carlo simulation. The research is conducted in three phases. (1) A conjoint model is established to represent the customer preference structure for purchasing vehicles, with five product attributes selected for both EVs and internal combustion engine vehicles (ICEVs). A questionnaire survey is conducted to collect responses from Taiwanese consumers and estimate the part-worth utility functions of all respondents. The resulting part-worth utility functions can be used to estimate market share, assuming each respondent purchases the product with the highest total utility. For example, given the attribute values of an ICEV and a competing EV, the two total utilities are calculated for each respondent, which determines his/her choice; once the choices of all respondents are known, an estimate of market share is obtained. (2) Among the attributes, future price is the key attribute dominating consumers' choice. This study adopts a learning curve assumption to predict the future price of EVs. Based on the learning curve method and past EV price data, a regression model is established and the probability distribution function of the price of EVs in 2030 is obtained. (3) Since the future price is a random variable from the results of phase 2, a Monte Carlo simulation is then conducted to simulate the choices of all respondents using their part-worth utility functions. For instance, using one thousand generated future EV prices together with other forecasted attribute values of the EV and an ICEV, one thousand market shares can be obtained. The resulting probability distribution of the market share of EVs provides more information than a single-point forecast, reflecting the uncertain nature of the future development of EVs. The research results can help the auto industry and local government make more appropriate decisions and future action plans.
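Phases (1)-(3) can be sketched in miniature as follows; the part-worth coefficients, the ICEV price, and the Gaussian 2030 price spread are invented stand-ins for the survey estimates and the learning-curve regression:

```python
import random

random.seed(42)
N_RESP = 300
# (price part-worth per 10k USD, intrinsic EV preference) per respondent;
# both distributions are assumed for illustration.
respondents = [(random.uniform(-2.0, -0.5), random.gauss(0.5, 1.0))
               for _ in range(N_RESP)]

ICEV_PRICE = 3.0   # assumed ICEV price, in 10k USD

def market_share(ev_price):
    """Share of respondents whose total utility is higher for the EV."""
    buys_ev = sum(1 for beta_price, ev_pref in respondents
                  if beta_price * ev_price + ev_pref > beta_price * ICEV_PRICE)
    return buys_ev / N_RESP

# Monte Carlo: one market-share estimate per sampled 2030 EV price.
draws = sorted(market_share(max(1.0, random.gauss(2.8, 0.4)))
               for _ in range(1000))
print("median share:", draws[500])
print("90% interval:", (draws[50], draws[949]))
```

The sorted draws give the full distribution of the forecast, from which the median and an interval are read off directly, matching the "distribution rather than a fixed number" point made in the abstract.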

Keywords: conjoint model, electric vehicle, learning curve, Monte Carlo simulation

Procedia PDF Downloads 69
875 Rogue Waves Arising on the Standing Periodic Wave in the High-Order Ablowitz-Ladik Equation

Authors: Yanpei Zhen

Abstract:

The nonlinear Schrödinger (NLS) equation models wave dynamics in many physical problems related to fluids, plasmas, and optics. Standing periodic waves are known to be modulationally unstable, and rogue waves (perturbations localized in space and time) have been observed on their backgrounds in numerical experiments. Exact solutions for rogue waves arising on periodic standing waves have been obtained analytically. It is natural to ask whether rogue waves persist on standing periodic waves in integrable discretizations of the NLS equation. We study the standing periodic waves in the semidiscrete integrable system modeled by the high-order Ablowitz-Ladik (AL) equation. The standing periodic wave of the high-order AL equation is expressed by the Jacobi elliptic cn (cnoidal) function. The exact solutions are obtained by using separation of variables and a one-fold Darboux transformation. Since the cnoidal wave is modulationally unstable, rogue waves are generated on the periodic background.

Keywords: Darboux transformation, periodic wave, rogue wave, separation of variables

Procedia PDF Downloads 183
874 Forming Limit Analysis of DP600-800 Steels

Authors: Marcelo Costa Cardoso, Luciano Pessanha Moreira

Abstract:

In this work, the plastic behaviour of cold-rolled zinc-coated dual-phase steel sheets of grades DP600 and DP800 is first investigated with the help of uniaxial, hydraulic bulge and forming limit curve (FLC) tests. The uniaxial tensile tests were performed in three angular orientations with respect to the rolling direction to evaluate strain hardening and plastic anisotropy. True stress-strain curves at large strains were determined from hydraulic bulge testing and fitted to a work-hardening equation. The limit strains are defined at both localized necking and fracture conditions according to Nakajima's hemispherical punch procedure. Also, an elasto-plastic localization model is proposed in order to predict strain- and stress-based forming limit curves. The investigated dual-phase sheets showed good formability in the biaxial stretching and drawing FLC regions. For both DP600 and DP800 sheets, the corresponding numerical predictions overestimated the experimental limit strains in the biaxial stretching region and underestimated them in the drawing FLC region. This can be attributed to the restricted failure necking condition adopted in the numerical model, which is not suitable for describing the tensile and shear fracture mechanisms in advanced high-strength steels under equibiaxial and biaxial stretching conditions.
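Fitting bulge-test data to a work-hardening equation can be sketched as below, using the Hollomon law σ = Kεⁿ as one common choice of hardening equation; the true-stress/true-strain points are invented values loosely typical of a DP600-class steel, not the measured data of this work:

```python
import math

# Log-log least-squares fit of sigma = K * eps**n: taking logarithms makes
# the model linear, ln(sigma) = ln(K) + n * ln(eps).
data = [(0.02, 450.0), (0.05, 530.0), (0.10, 610.0),
        (0.20, 700.0), (0.30, 760.0), (0.40, 805.0)]   # (strain, stress MPa)

xs = [math.log(e) for e, _ in data]
ys = [math.log(s) for _, s in data]
n_pts = len(data)
mx, my = sum(xs) / n_pts, sum(ys) / n_pts
n_exp = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))   # hardening exponent n
K = math.exp(my - n_exp * mx)                # strength coefficient K (MPa)

print(f"Hollomon fit: K = {K:.0f} MPa, n = {n_exp:.3f}")
```

The fitted hardening exponent is one of the inputs that localization models of the kind described above use to predict the FLC.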

Keywords: advanced high strength steels, forming limit curve, numerical modelling, sheet metal forming

Procedia PDF Downloads 372
873 Step Height Calibration Using Hamming Window: Band-Pass Filter

Authors: Dahi Ghareab Abdelsalam Ibrahim

Abstract:

Calibration of step heights with high accuracy is needed for many industrial applications. In general, the filter response involved consists of three bands: the pass band, the transition band (roll-off), and the stop band. Abdelsalam used a convolution of the transfer functions of both Chebyshev type 2 and elliptic filters with the WFF of the Fresnel transform in the frequency domain to produce a steeper roll-off with the removal of ripples in the pass and stop bands. In this paper, we use a new method based on a Hamming-window band-pass filter to calibrate step heights in terms of perfect adjustment of the pass band, roll-off, and stop band. The method is applied to calibrate a nominal step height of 40 cm. The step height is first measured by asynchronous dual-wavelength phase-shift interferometry. The measured step height is then calibrated by simulation of the Hamming-window band-pass filter. The spectrum of the simulated band-pass filter is computed at N = 881 and f0 = 0.24. We conclude that the proposed method can calibrate any step height by adjusting only two factors, N and f0.
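A Hamming-windowed-sinc band-pass filter with the quoted N = 881 and centre frequency f0 = 0.24 can be sketched as follows; the band edges (0.22-0.26) are an assumed illustration, since the abstract does not state a bandwidth:

```python
import math

def hamming(n, N):
    """Hamming window coefficient at index n of an N-point window."""
    return 0.54 - 0.46 * math.cos(2 * math.pi * n / (N - 1))

def sinc(x):
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def bandpass_fir(N, f_lo, f_hi):
    """Windowed-sinc band-pass taps; f_lo, f_hi are normalized cut-off
    frequencies in cycles/sample (0..0.5). Built as the difference of two
    ideal low-pass responses, shaped by the Hamming window."""
    mid = (N - 1) / 2.0
    return [(2 * f_hi * sinc(2 * f_hi * (n - mid))
             - 2 * f_lo * sinc(2 * f_lo * (n - mid))) * hamming(n, N)
            for n in range(N)]

def magnitude(taps, f):
    """|H(f)| at normalized frequency f via direct DTFT evaluation."""
    re = sum(h * math.cos(2 * math.pi * f * n) for n, h in enumerate(taps))
    im = sum(h * math.sin(2 * math.pi * f * n) for n, h in enumerate(taps))
    return math.hypot(re, im)

taps = bandpass_fir(881, 0.22, 0.26)
print(round(magnitude(taps, 0.24), 3))   # pass band, near unity gain
print(round(magnitude(taps, 0.05), 6))   # stop band, strongly attenuated
```

With N this large the transition band is very narrow, which is what allows the pass band, roll-off, and stop band to be adjusted tightly by the two parameters N and f0.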

Keywords: optical metrology, step heights, Hamming window, band-pass filter

Procedia PDF Downloads 83
872 Microwave Dielectric Constant Measurements of Titanium Dioxide Using Five Mixture Equations

Authors: Jyh Sheen, Yong-Lin Wang

Abstract:

This research aims to find a different measurement procedure for the microwave dielectric properties of ceramic materials with high dielectric constants. For a composite of ceramic dispersed in a polymer matrix, the dielectric constants of composites with different concentrations can be obtained by various mixture equations. A further development of the mixture rules is to calculate the permittivity of the ceramic from measurements on the composite. To this end, the analysis method and theoretical accuracy of six basic mixture laws, derived from three basic particle shapes of ceramic fillers, have been reported for ceramic dielectric constants below 40 at microwave frequency. Similar research has been done for other well-known mixture rules. It has shown that both good curve matching with experimental results and low potential theoretical error are important for calculation accuracy. Recently, a modified mixture equation for high-dielectric-constant ceramics at microwave frequency was also presented for strontium titanate (SrTiO3); it was selected from five well-known mixing rules and showed good accuracy for high-dielectric-constant measurements. However, the accuracy of this modified equation for other high-dielectric-constant materials is still unclear. Therefore, the five well-known mixing rules are selected again to understand their application to other high-dielectric-constant ceramics. Another high-dielectric-constant ceramic, TiO2 with a dielectric constant of about 100, was chosen for this research, and its theoretical error equations are derived. In addition to the theoretical analysis, experimental measurements are always required. Titanium dioxide is an interesting ceramic for microwave applications; in this research, its powder is adopted as the filler material and polyethylene powder as the matrix material. The dielectric constants of ceramic-polyethylene composites with various compositions were measured at 10 GHz. The theoretical curves of the five published mixture equations are shown together with the measured results to assess the curve matching of each rule. Finally, based on the experimental observations and theoretical analysis, one of the five rules was selected and modified into a new powder mixture equation. This modified rule shows very good curve matching with the measurement data and low theoretical error. The dielectric constant of the pure filler medium (titanium dioxide) can then be calculated from the measured dielectric constants of the composites by means of the mixing equations, and the accuracy of estimating the dielectric constant of the pure ceramic by the various mixture rules is compared. The modified mixture rule also shows good measurement accuracy for the dielectric constant of titanium dioxide ceramic. This study can be applied to microwave dielectric property measurements of other high-dielectric-constant ceramic materials in the future.
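As an example of how a mixture rule is applied in both directions, the sketch below uses the classic Lichtenecker (logarithmic) rule, which is one of the well-known rules but not necessarily the one modified in this work; the permittivity and loading values are illustrative:

```python
import math

def lichtenecker_forward(eps_f, eps_m, v):
    """Logarithmic (Lichtenecker) mixture rule: composite permittivity from
    filler permittivity eps_f, matrix permittivity eps_m, volume fraction v."""
    return math.exp(v * math.log(eps_f) + (1 - v) * math.log(eps_m))

def lichtenecker_inverse(eps_c, eps_m, v):
    """Recover the filler permittivity from a measured composite value, i.e.
    the 'calculate the ceramic from the composite' direction."""
    return math.exp((math.log(eps_c) - (1 - v) * math.log(eps_m)) / v)

# TiO2-like filler (eps ~ 100) in polyethylene (eps ~ 2.25), 20 vol% loading.
eps_c = lichtenecker_forward(100.0, 2.25, 0.20)
print(round(eps_c, 2))
print(round(lichtenecker_inverse(eps_c, 2.25, 0.20), 1))
```

Each of the five candidate rules gives a different forward curve versus volume fraction; the rule whose curve best matches the measured composite data is the one whose inverse yields the most accurate filler permittivity.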

Keywords: microwave measurement, dielectric constant, mixture rules, composites

Procedia PDF Downloads 367
871 Statistical Assessment of Models for Determination of Soil–Water Characteristic Curves of Sand Soils

Authors: S. J. Matlan, M. Mukhlisin, M. R. Taha

Abstract:

Characterization of the engineering behavior of unsaturated soil depends on the soil-water characteristic curve (SWCC), a graphical representation of the relationship between water content or degree of saturation and soil suction. A reasonable description of the SWCC is thus important for the accurate prediction of unsaturated soil parameters. The measurement procedures for determining the SWCC, however, are difficult, expensive, and time-consuming. During the past few decades, researchers have focused on developing empirical equations for predicting the SWCC, and a large number of empirical models have been suggested. One of the most crucial questions is how precisely existing equations can represent the SWCC. As different models have different ranges of capability, it is essential to evaluate the precision of the SWCC models for each particular soil type for better SWCC estimation. It is expected that better estimation of the SWCC would be achieved via a thorough statistical analysis of its distribution within a particular soil class. With this in view, a statistical analysis was conducted to evaluate the reliability of SWCC prediction models against laboratory measurements. Optimization techniques were used to obtain the best fit of the model parameters for four forms of the SWCC equation, using laboratory data for relatively coarse-textured (i.e., sandy) soil. The four most prominent SWCC models were evaluated and computed for each sample. The results show that the Brooks and Corey model is the most consistent in describing the SWCC for sandy soils. The Brooks and Corey predictions were also compatible with the samples evaluated in this study over the range from low to high soil water content.
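The Brooks and Corey SWCC can be sketched as below; the air-entry suction, pore-size index, and residual/saturated water contents are assumed illustration values for a sandy soil, not fitted parameters from this study:

```python
def brooks_corey(psi, psi_b, lam, theta_r, theta_s):
    """Brooks-Corey SWCC: volumetric water content at suction psi.
    psi_b is the air-entry suction, lam the pore-size distribution index,
    theta_r/theta_s the residual and saturated water contents."""
    se = 1.0 if psi <= psi_b else (psi_b / psi) ** lam   # effective saturation
    return theta_r + se * (theta_s - theta_r)

# Illustrative parameters for a sandy soil (assumed values, suction in kPa).
params = dict(psi_b=2.0, lam=1.5, theta_r=0.04, theta_s=0.38)
for psi in (0.5, 2.0, 5.0, 20.0, 100.0):
    print(psi, round(brooks_corey(psi, **params), 3))
```

Fitting amounts to choosing psi_b, lam, theta_r, and theta_s to minimize the misfit to measured (suction, water content) pairs, which is the optimization step described above.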

Keywords: soil-water characteristic curve (SWCC), statistical analysis, unsaturated soil, geotechnical engineering

Procedia PDF Downloads 338
870 Inversion of the Spectral Analysis of Surface Waves Dispersion Curves through the Particle Swarm Optimization Algorithm

Authors: A. Cerrato Casado, C. Guigou, P. Jean

Abstract:

In this investigation, the particle swarm optimization (PSO) algorithm is used to perform the inversion of the dispersion curves in the spectral analysis of surface waves (SASW) method. This inverse problem usually presents complicated solution spaces with many local minima that make convergence to the correct solution difficult. PSO is a metaheuristic method that was originally designed to simulate social behavior but has demonstrated powerful capabilities for solving inverse problems with complex solution spaces and a high number of variables. The dispersion curve of the synthetic soils is constructed by the vertical flexibility coefficient method, which is especially convenient for soils where the stiffness does not increase gradually with depth. The reason is that these types of soil profiles are not normally dispersive, since the dominant mode of Rayleigh waves is usually not coincident with the fundamental mode. Multiple synthetic soil profiles have been tested to show the characteristics of the convergence process and assess the accuracy of the final soil profile. In addition, the inversion procedure is applied to multiple real soils and the final profile is compared with the available information. The combination of the vertical flexibility coefficient method to obtain the dispersion curve and the PSO algorithm to carry out the inversion proves to be a robust procedure that is able to provide good solutions for complex soil profiles even with scarce prior information.
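A minimal global-best PSO inversion can be sketched as follows; the two-parameter forward model is a toy stand-in for the vertical flexibility coefficient computation, and all parameter values are illustrative:

```python
import math, random

random.seed(3)
FREQS = [5.0, 10.0, 20.0, 40.0, 80.0]

def forward(vs, h):
    """Toy phase-velocity curve for a profile with shear velocity vs and
    layer thickness h; approaches vs at high frequency."""
    return [vs * (1.0 - 0.5 * math.exp(-f * h / 50.0)) for f in FREQS]

OBSERVED = forward(200.0, 10.0)   # synthetic "measured" dispersion curve

def misfit(p):
    return sum((a - b) ** 2 for a, b in zip(forward(*p), OBSERVED))

def pso(n=30, iters=200, bounds=((50.0, 500.0), (1.0, 30.0))):
    """Global-best PSO: inertia plus cognitive and social velocity pulls."""
    dim = len(bounds)
    pos = [[random.uniform(*bounds[d]) for d in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest, pbest_f = [p[:] for p in pos], [misfit(p) for p in pos]
    gi = min(range(n), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[gi][:], pbest_f[gi]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * random.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d],
                                    bounds[d][0]), bounds[d][1])
            f = misfit(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f

best, best_f = pso()
print([round(v, 2) for v in best], round(best_f, 6))
```

The swarm explores the bounded parameter space globally, which is what lets PSO escape the local minima that defeat gradient-based inversion of real dispersion curves.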

Keywords: dispersion, inverse problem, particle swarm optimization, SASW, soil profile

Procedia PDF Downloads 185
869 The Effect of Global Value Chain Participation on Environment

Authors: Piyaphan Changwatchai

Abstract:

Global value chains are important to the current world economy through foreign direct investment. Multinational enterprises' search for efficient locations for each stage of production leads to global production networks and greater global value chain participation by several countries. Global value chain participation affects participating countries in several respects, including the environment, and this effect is ambiguous. As a result, this research aims to study the effect of global value chain participation on countries' CO₂ and methane emissions, using quantitative analysis with secondary panel data for sixty countries. The analysis distinguishes two types of global value chain participation: forward and backward. The results show that, for forward global value chain participation, GDP per capita affects both pollutants in a downward bell-curve (inverted-U) shape, and forward participation negatively affects both CO₂ and methane emissions. For backward global value chain participation, GDP per capita likewise affects both pollutants in a downward bell-curve shape, but backward participation negatively affects methane emissions only. However, when considering Asian countries, forward global value chain participation positively affects CO₂ emissions. This research recommends that countries participating in global value chains promote production with effective environmental management in each stage of the value chain. Examples of such policies are providing incentives to private sectors, including domestic producers and MNEs, for green production technology and efficient environmental management, and engaging in international agreements on green production. Furthermore, governments should regulate each stage of production in the value chain toward green production, especially in Asian countries.
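The inverted-U test behind findings of this kind can be sketched as an OLS regression with a quadratic GDP term; the simulated panel and all coefficients below are invented for illustration and do not reproduce the study's data:

```python
import random

random.seed(7)

def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting for the small
    normal-equation system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [v - f * w for v, w in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def ols(X, y):
    """Ordinary least squares via the normal equations (X'X) b = X'y."""
    k = len(X[0])
    XtX = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    Xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    return solve(XtX, Xty)

# Simulated country-year observations: emissions follow an inverted-U in GDP
# per capita plus a negative forward-GVC-participation effect.
rows, y = [], []
for _ in range(600):
    gdp = random.uniform(0.5, 5.0)   # GDP per capita, 10k USD (assumed scale)
    gvc = random.uniform(0.0, 1.0)   # forward GVC participation index
    co2 = 1.0 + 2.0 * gdp - 0.35 * gdp ** 2 - 0.8 * gvc + random.gauss(0, 0.3)
    rows.append([1.0, gdp, gdp ** 2, gvc])
    y.append(co2)

b0, b_gdp, b_gdp2, b_gvc = ols(rows, y)
print(round(b_gdp, 2), round(b_gdp2, 2), round(b_gvc, 2))
```

A positive linear GDP coefficient with a negative quadratic coefficient is the downward bell-curve pattern, and the sign of the participation coefficient gives the direction of the GVC effect.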

Keywords: CO₂ emission, environment, global value chain participation, methane emission

Procedia PDF Downloads 191
868 An Amended Method for Assessment of Hypertrophic Scars Viscoelastic Parameters

Authors: Iveta Bryjova

Abstract:

Recording viscoelastic strain-vs-time curves with the aid of the suction method, followed by an analysis that yields standard viscoelastic parameters, is a significant technique for non-invasive contact diagnostics of the mechanical properties of skin and assessment of its condition, particularly in acute burns, hypertrophic scarring (the most common complication of burn trauma), and reconstructive surgery. To eliminate the contribution of skin thickness, the usable viscoelastic parameters deduced from the strain-vs-time curves are restricted to relative ones (i.e., those expressed as a ratio of two dimensioned parameters), such as gross elasticity, net elasticity, biological elasticity, or Qu's area parameters, conventionally referred to in the literature and in practice as R2, R5, R6, R7, Q1, Q2, and Q3. With the exception of parameters R2 and Q1, these depend substantially on the position of the inflection point separating the elastic linear and viscoelastic segments of the strain-vs-time curve. The standard algorithm implemented in commercially available devices relies heavily on the empirical observation that the inflection occurs about 0.1 s after the suction is switched on/off, which undermines the credibility of the parameters thus obtained. Although Qu's US 7,556,605 patent suggests a method for improving the precision of the inflection determination, there is still room for non-negligible improvement. In this contribution, a novel method of inflection point determination utilizing the advantageous properties of Savitzky-Golay filtering is presented. The method allows computation of derivatives of the smoothed strain-vs-time curve, more exact location of the inflection, and consequently more reliable values of the aforementioned viscoelastic parameters. The improved applicability of the five inflection-dependent relative viscoelastic parameters is demonstrated by recasting a former study with the new method and comparing its results with those provided by the methods used so far.
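The use of Savitzky-Golay filtering for inflection detection can be sketched as follows; the synthetic strain-vs-time curve, its noise level, and the window size are assumed illustration values, not a real suction-method record:

```python
import math, random

def sg_second_derivative(y, half):
    """Smoothed second derivative (per sample^2): fit a quadratic over a
    sliding window of 2*half+1 points (the Savitzky-Golay idea) and return
    2*a2 at each window centre."""
    idx = list(range(-half, half + 1))
    n = len(idx)
    s2 = sum(i * i for i in idx)
    s4 = sum(i ** 4 for i in idx)
    denom = s4 - s2 * s2 / n
    out = [0.0] * len(y)
    for c in range(half, len(y) - half):
        win = y[c - half:c + half + 1]
        a2 = (sum(i * i * v for i, v in zip(idx, win))
              - s2 * sum(win) / n) / denom
        out[c] = 2.0 * a2
    return out

random.seed(5)
DT = 0.01
ts = [i * DT for i in range(300)]
# Synthetic record: an elastic rise blending into a creep plateau, with the
# curvature changing sign (the inflection) at t = 1.0 s.
strain = [math.tanh(2.0 * (t - 1.0)) + random.gauss(0.0, 0.002) for t in ts]

d2 = sg_second_derivative(strain, half=15)
# The inflection is the first zero crossing of the smoothed second derivative.
t_inflection = next(ts[i] for i in range(15, len(d2) - 16)
                    if d2[i] > 0.0 >= d2[i + 1])
print(round(t_inflection, 2))
```

Because the derivative comes from a local polynomial fit rather than finite differences of the raw samples, the zero crossing stays well localized despite the measurement noise, which is the property the method exploits.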

Keywords: Savitzky–Golay filter, scarring, skin, viscoelasticity

Procedia PDF Downloads 304