Search results for: transmission error
977 Performance Analysis of New Types of Reference Targets Based on Spaceborne and Airborne SAR Data
Authors: Y. S. Zhou, C. R. Li, L. L. Tang, C. X. Gao, D. J. Wang, Y. Y. Guo
Abstract:
The triangular trihedral corner reflector (CR) has been widely used as a point target for synthetic aperture radar (SAR) calibration and image quality assessment. The additional "tip" of the triangular plate does not contribute to the reflector's theoretical RCS, and if it interacts with a perfectly reflecting ground plane, it yields an increase in RCS at the radar boresight and decreases the accuracy of SAR calibration and image quality assessment. To address this problem, two types of CRs were manufactured. One was the hexagonal trihedral CR, a self-illuminating CR with a relatively small plate edge length, since a large edge length usually introduces unexpected edge diffraction error. The other was the triangular trihedral CR with an extended bottom plate, which incorporates the effect of the "tip" into the total RCS. To assess the performance of the two new types of CRs, a flight campaign was carried out over the National Calibration and Validation Site for High Resolution Remote Sensors. Six hexagonal trihedral CRs and two bottom-extended trihedral CRs, as well as several traditional triangular trihedral CRs, were deployed. A KOMPSAT-5 X-band SAR image was acquired for the performance analysis of the hexagonal trihedral CRs, and C-band airborne SAR images were acquired for the bottom-extended trihedral CRs. The analysis results showed that the impulse response functions of both the hexagonal and bottom-extended trihedral CRs were much closer to the ideal sinc function than those of the traditional triangular trihedral CRs. The flight campaign thus validated the advantages of the new types of CRs, which may be useful in future SAR calibration missions.
Keywords: synthetic aperture radar, calibration, corner reflector, KOMPSAT-5
Procedia PDF Downloads 272
976 Optimization Modeling of the Hybrid Antenna Array for the DoA Estimation
Authors: Somayeh Komeylian
Abstract:
The direction of arrival (DoA) estimation is a crucial aspect of radar technologies for detecting and distinguishing several signal sources. In this scenario, the antenna array output model involves numerous parameters, including noise samples, signal waveform, signal directions, number of signals, and signal-to-noise ratio (SNR); DoA estimation methods therefore rely heavily on generalization ability and require large training data sets. We accordingly compare two optimization models for DoA estimation: (1) a decision directed acyclic graph (DDAG) implementation of the multiclass least-squares support vector machine (LS-SVM), and (2) a deep neural network (DNN) with radial basis function (RBF) units. We verified that the LS-SVM DDAG algorithm can accurately classify DoAs into the three classes. However, the accuracy and robustness of DoA estimation remain highly sensitive to technological imperfections of antenna arrays, such as non-ideal array design and manufacture, array implementation, mutual coupling, and background radiation, so the method may fail to deliver high precision. This work therefore makes a further contribution by developing the DNN-RBF model for DoA estimation, overcoming the limitations of non-parametric, data-driven methods with respect to array imperfection and generalization. The numerical results confirm that the DNN-RBF model outperforms the LS-SVM algorithm for DoA estimation. Finally, we evaluate the performance of the two optimization methods using the mean squared error (MSE).
Keywords: DoA estimation, adaptive antenna array, deep neural network, LS-SVM optimization model, radial basis function, MSE
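The MSE comparison at the end of the abstract can be illustrated with a minimal sketch. The true directions and the two estimators' outputs below are made-up illustrative values, not results from the paper:

```python
def mse(estimates, truths):
    """Mean squared error between estimated and true DoAs (in degrees)."""
    return sum((e - t) ** 2 for e, t in zip(estimates, truths)) / len(truths)

# Hypothetical outputs of the two estimators for three true source directions.
true_doa = [-30.0, 0.0, 45.0]
lssvm_est = [-28.5, 1.2, 47.0]   # illustrative LS-SVM DDAG estimates
dnn_est = [-29.6, 0.4, 45.8]     # illustrative DNN-RBF estimates

lssvm_mse = mse(lssvm_est, true_doa)
dnn_mse = mse(dnn_est, true_doa)
```

A lower MSE for the DNN-RBF outputs would correspond to the better performance the abstract reports.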
Procedia PDF Downloads 100
975 Evaluating Accuracy of Foetal Weight Estimation by Clinicians in Christian Medical College Hospital, India and Its Correlation to Actual Birth Weight: A Clinical Audit
Authors: Aarati Susan Mathew, Radhika Narendra Patel, Jiji Mathew
Abstract:
A retrospective study was conducted at Christian Medical College (CMC) Teaching Hospital, Vellore, India on 14th August 2014 to assess the accuracy of clinically estimated foetal weight at labour admission. Estimating foetal weight is a crucial factor in assessing maternal and foetal complications during and after labour. Medical notes of ninety-eight postnatal women who fulfilled the inclusion criteria were studied to evaluate the correlation between the Estimated Foetal Weight (EFW) recorded on admission and the actual birth weight (ABW) of the newborn after delivery. Data concerning maternal and foetal demographics were also noted. Accuracy was determined by the absolute percentage error and the proportion of estimates within 10% of ABW. Actual birth weights ranged from 950-4080 g. A strong positive correlation between EFW and ABW (r=0.904) was noted. Term deliveries (≥40 weeks) in the normal weight range (2500-4000 g) had a 59.5% estimation accuracy (n=74), compared with an estimation accuracy of 0% for pre-term deliveries (<40 weeks) (n=2). Among the term deliveries, macrosomic babies (>4000 g) were underestimated by 25% (n=3) and low birthweight (LBW) babies were overestimated by 12.7% (n=9). Registrars who estimated foetal weight were accurate for babies within the normal weight range; however, prediction of the weight of macrosomic and LBW foetuses needs improvement. We have suggested the use of an amended version of Johnson's formula for the Indian population, and a re-audit once it is implemented.
Keywords: clinical palpation, estimated foetal weight, pregnancy, India, Johnson's formula
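The two accuracy measures used in the audit are straightforward to compute; a short sketch with made-up EFW/ABW pairs (not the study's data):

```python
# Hypothetical (EFW, ABW) pairs in grams -- illustrative values only.
records = [(3100, 3000), (2600, 2900), (3900, 4100), (1500, 1400)]

def absolute_percentage_error(efw, abw):
    """Absolute difference between estimate and actual, as a % of actual."""
    return abs(efw - abw) / abw * 100.0

errors = [absolute_percentage_error(efw, abw) for efw, abw in records]
# Proportion of estimates falling within 10% of actual birth weight.
within_10pct = sum(e <= 10.0 for e in errors) / len(errors)
```

With these illustrative pairs, three of four estimates fall within 10% of the actual weight.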
Procedia PDF Downloads 363
974 Environmental Controls on the Distribution of Intertidal Foraminifers in Sabkha Al-Kharrar, Saudi Arabia: Implications for Sea-Level Changes
Authors: Talha A. Al-Dubai, Rashad A. Bantan, Ramadan H. Abu-Zied, Brian G. Jones, Aaid G. Al-Zubieri
Abstract:
Contemporary foraminiferal sediment samples were collected from the intertidal sabkha of Al-Kharrar Lagoon, Saudi Arabia, to study the vertical distribution of Foraminifera and, based on a modern training set, their potential for developing a predictor of former sea-level changes in the area. Based on hierarchical cluster analysis, the intertidal sabkha is divided into three vertical zones (A, B and C) represented by three foraminiferal assemblages, where agglutinated species occupy Zone A and calcareous species occupy the other two zones. In Zone A (high intertidal), Agglutinella compressa, Clavulina angularis and C. multicamerata are the dominant species, with a minor presence of Peneroplis planatus, Coscinospira hemprichii, Sorites orbiculus, Quinqueloculina lamarckiana, Q. seminula, Ammonia convexa and A. tepida. In contrast, in Zone B (middle intertidal) the most abundant species are P. planatus, C. hemprichii, S. orbiculus, Q. lamarckiana, Q. seminula and Q. laevigata, while Zone C (low intertidal) is characterised by C. hemprichii, Q. costata, S. orbiculus, P. planatus, A. convexa, A. tepida, Spiroloculina communis and S. costigera. A transfer function for sea-level reconstruction was developed using a modern dataset of 75 contemporary sediment samples and 99 species collected from several transects across the sabkha. The model yielded an error of 0.12 m, suggesting that intertidal foraminifers can predict past sea-level changes with high precision in Al-Kharrar Lagoon and can thus support future prediction of such changes in the area.
Keywords: lagoonal foraminifers, intertidal sabkha, vertical zonation, transfer function, sea level
Procedia PDF Downloads 169
973 Detection and Quantification of Ochratoxin A in Food by Aptasensor
Authors: Moez Elsaadani, Noel Durand, Brice Sorli, Didier Montet
Abstract:
Governments and international bodies are trying to improve food safety systems to prevent, reduce or avoid the increase of foodborne diseases. This food risk is one of the major concerns for humanity. Contamination by mycotoxins is a threat to the health and life of humans and animals. One of the most common mycotoxins contaminating feed and foodstuffs is Ochratoxin A (OTA), a secondary metabolite produced by Aspergillus and Penicillium strains. OTA has a chronic toxic effect and has proved to be mutagenic, nephrotoxic, teratogenic, immunosuppressive, and carcinogenic. On the other hand, because of their high stability, specificity, affinity, and easy chemical synthesis, aptamer-based methods are applied to OTA biosensing as an alternative to traditional analytical techniques. In this work, five aptamers were tested to confirm, qualitatively and quantitatively, their binding with OTA. At the same time, three different analytical methods were tested and compared on their ability to detect and quantify OTA. The best protocol established to separate free OTA from bound OTA involved an ultrafiltration method in a green coffee solution. OTA was quantified by HPLC-FLD to calculate the binding percentage of all five aptamers. One aptamer (the most effective, with 87% binding with OTA) was selected as the biorecognition element, and its electrical response (variation of electrical properties) in the presence of OTA is studied in order to pair it with radio frequency identification (RFID). This device, characterized by its low cost, speed, and simple wireless information transmission, will combine knowledge of mycotoxin molecular sensors (aptamers) with an electronic device that links the information, performs the quantification and makes it available to operators.
Keywords: aptamer, aptasensor, detection, Ochratoxin A
Procedia PDF Downloads 181
972 Query in Grammatical Forms and Corpus Error Analysis
Authors: Katerina Florou
Abstract:
Two decades after the term "learner corpora" was coined for collections of texts created by foreign or second language learners across various language contexts, and some years after the suggestion to incorporate "focusing on form" within a Task-Based Learning framework, this study aims to explore how learner corpora, whether annotated with errors or not, can facilitate a focus on form in an educational setting. It argues that analyzing linguistic form serves the purpose of enabling students to delve into language and gain an understanding of different facets of the foreign language. The same objective applies when analyzing learner corpora, whether error-tagged or raw, but in this scenario the emphasis lies on identifying incorrect forms. Teachers should aim to address errors or gaps in the students' second language knowledge while they engage in a task. Building on this recommendation, we compared the written output of two student groups: the first group (G1) carried out the focusing-on-form phase by studying a specific aspect of the Italian language, namely the past participle, through examples from native speakers and grammar rules; the second group (G2) focused on form by scrutinizing their own errors and comparing them with analogous examples from a native speaker corpus. In order to test our hypothesis, we created four learner corpora. The first two were generated during the task phase, one for each group of students, while the remaining two were produced as a follow-up activity at the end of the lesson. The results of the first comparison indicated that students' exposure to their own errors can enhance their grasp of a grammatical element. The study is in its second stage, and more results are to be announced.
Keywords: corpus interlanguage analysis, task-based learning, Italian as a foreign language, learner corpora
Procedia PDF Downloads 53
971 Effect of Assumptions of Normal Shock Location on the Design of Supersonic Ejectors for Refrigeration
Authors: Payam Haghparast, Mikhail V. Sorin, Hakim Nesreddine
Abstract:
The complex oblique shock phenomenon can be simply assumed to be a normal shock at the constant-area section in order to simulate a sharp pressure increase and velocity decrease in 1-D thermodynamic models. The assumed normal shock location is one of the greatest sources of error in ejector thermodynamic models, yet most researchers choose an arbitrary location without justifying it. Our study compares the effect of the normal shock location on ejector dimensions in 1-D models. To this end, two different ejector experimental test benches, a constant area-mixing (CAM) ejector and a constant pressure-mixing (CPM) ejector, are considered, with different known geometries, operating conditions and working fluids (R245fa, R141b). In the first step, in order to evaluate the real values of the efficiencies in the different ejector parts and the critical back pressure, a CFD model was built and validated against experimental data for the two types of ejectors. These reference data were then used as input to the 1-D model to calculate the lengths and diameters of the ejectors. Afterwards, the design geometry calculated by the 1-D model was compared directly with the corresponding experimental geometry. A good agreement was found between the ejector dimensions obtained by the 1-D model, for both CAM and CPM, and the experimental ejector data. Furthermore, the normal shock location was shown to affect only the constant-area length, and the assumption of a shock at the inlet was proven to yield the more accurate length. Taking previous 1-D models into account, the results suggest placing the assumed normal shock at the inlet of the constant-area duct when designing supersonic ejectors.
Keywords: 1D model, constant area-mixing, constant pressure-mixing, normal shock location, ejector dimensions
Procedia PDF Downloads 194
970 Into Composer’s Mind: Understanding the Process of Translating Emotions into Music
Authors: Sanam Preet Singh
Abstract:
Music, in comparison to other art forms, is more reactive and alive. It has the capacity to interact directly with the listener's mind and generate an emotional response. Most major research in the area has relied on the listener's perspective to draw an understanding of music and its effects; only a small number of studies have focused on the source from which music originates, the music composers. This study aims to understand how music composers perceive emotions and translate them into music; in simpler terms, how composers encode their compositions to express particular emotions. One-to-one, in-depth, semi-structured interviews were conducted with 8 individuals, both male and female, ranging from professional to intermediate-level music composers, and thematic analysis was conducted to derive the themes. The analysis showed that there is no single process on which music composers rely; rather, combinations of multiple micro-processes constitute the understanding and translation of emotions into music. Regarding the perception of emotions, the analysis revealed the role of processes such as rumination, mood influence and escapism. Unique themes about composers' top-down and bottom-up perception were also discovered. Further analysis revealed the role of imagination and emotional triggers in explaining how music composers make sense of emotions. The translation process revealed the role of articulation and instrumentalization in encoding emotions into a composition. Applications of the trial-and-error method, nature influences and flow in the translation process are also discussed. Finally, themes such as parallels between musical patterns and emotions, comfort zones and relatability emerged during the analysis.
Keywords: comfort zones, escapism, flow, rumination
Procedia PDF Downloads 87
969 Analysis and Control of Camera Type Weft Straightener
Authors: Jae-Yong Lee, Gyu-Hyun Bae, Yun-Soo Chung, Dae-Sub Kim, Jae-Sung Bae
Abstract:
In general, fabric is heat-treated using a stenter machine in order to dry it and fix its shape. Shaping before the heat treatment is important because it is difficult to revert the fabric once it is formed. To produce a product of the right shape, camera-type weft straighteners have recently been applied to capture and process fabric images quickly; they are more powerful in determining final textile quality than photo-sensors. Positioned in front of a stenter machine, the weft straightener helps spread the fabric evenly and keeps the angle between warp and weft constant at a right angle by controlling skew and bow rollers. Before this tricky procedure can be controlled, a structural analysis should be carried out, on which the control technology can then be based. The structural analysis determines the specific contact/slippage characteristics between fabric and roller. We already examined the applicability of the camera-type weft straightener to plain weave fabric and found its feasibility and the specific working conditions of the machine and rollers. In this research, we aimed to explore a further application, namely whether the camera-type weft straightener can be used for special fabrics. To find the optimum condition, we increased the number of rollers. The analysis was done with the ANSYS software using the Finite Element Analysis method, and the control function was demonstrated by experiment. In conclusion, the structural analysis of the weft straightener identifies the specific characteristics between roller and fabric, the control of the skew and bow rollers decreases the error in the angle between warp and weft, and the camera-type straightener is proved to be usable for special fabrics as well.
Keywords: camera type weft straightener, structure analysis, control, skew and bow roller
Procedia PDF Downloads 292
968 The Impact of COVID-19 on the Mental Health of Residents of Saudi Arabia
Authors: Khaleel Alyahya, Faizah Alotaibi
Abstract:
The coronavirus disease 2019 (COVID-19) pandemic has caused an increase in general fear and anxiety around the globe. With public health measures including lockdowns and travel restrictions, the COVID-19 period further produced a sudden increase in people's vulnerability to ill mental health. This vulnerability was greater among individuals who had a history of mental illness or were undergoing treatment and did not have easy access to medication and medical consultations. The study aims to measure the impact of COVID-19 on the mental health of residents living in Saudi Arabia, grading the degree of distress with the DASS scale. The study is a quantitative, observational, cross-sectional survey conducted in Saudi Arabia to measure the impact of COVID-19 on the mental health of both citizens and residents during the pandemic. The study ran from February 2021 to June 2021, and a validated questionnaire was used. The target population was Saudi citizens and non-Saudi residents. A sample size of 800 participants was calculated with a single-proportion formula at a 95% level of significance and 5% allowable error. The results revealed that participants who always exercised experienced the lowest levels of depression, anxiety, and stress. The highest prevalence of severe and extremely severe depression was among participants who only sometimes exercised, at 53.2% for each. Similar results were obtained for anxiety and stress, where the extremely severe forms were reported by those who sometimes exercised, at 54.8% and 72.2%, respectively. There was an inverse association between physical activity levels and levels of depression, anxiety, and stress during COVID-19, and these levels differed significantly according to exercise frequency.
Keywords: mental health, COVID-19, pandemic, lockdown, depression, anxiety, stress
Procedia PDF Downloads 103
967 Investigating Naming and Connected Speech Impairments in Moroccan AD Patients
Authors: Mounia El Jaouhari, Mira Goral, Samir Diouny
Abstract:
Introduction: Previous research has indicated that language impairments are a recognized feature of many neurodegenerative disorders, including non-language-led dementia subtypes such as Alzheimer's disease (AD). In this preliminary study, the focal aim is to quantify the semantic content of naming and connected speech samples of Moroccan patients diagnosed with AD, using two tasks taken from the culturally adapted and validated Moroccan version of the Boston Diagnostic Aphasia Examination. Methods: Five individuals with AD and five neurologically healthy individuals matched for age, gender, and education will participate in the study. Participants with AD will be diagnosed on the basis of the Moroccan version of the Diagnostic and Statistical Manual of Mental Disorders (DSM-4) screening test, the Moroccan version of the Mini-Mental State Examination (MMSE) test scores, and neuroimaging analyses. The participants will engage in two tasks taken from the MDAE-SF: 1) picture description and 2) naming. Expected findings: Consistent with previous studies conducted on English-speaking AD patients, we expect to find significant word production and retrieval impairments in AD patients on all measures. Moreover, we expect to find category fluency impairments that further endorse semantic breakdown accounts. In sum, the findings of the current study will not only shed more light on the locus of the word retrieval impairments noted in AD but will also reflect the nature of Arabic morphology. In addition, the error patterns are expected to be similar to those found in previous AD studies in other languages.
Keywords: Alzheimer's disease, anomia, connected speech, semantic impairments, Moroccan Arabic
Procedia PDF Downloads 142
966 Trading off Accuracy for Speed in Powerdrill
Authors: Filip Buruiana, Alexander Hall, Reimar Hofmann, Thomas Hofmann, Silviu Ganceanu, Alexandru Tudorica
Abstract:
In-memory column-stores make interactive analysis feasible for many big data scenarios. PowerDrill is a system used internally at Google for exploration of log data. Even though it is a highly parallelized column-store and uses in-memory caching, interactive response times cannot be achieved for all datasets (note that it is common to analyze data with 50 billion records in PowerDrill). In this paper, we investigate two orthogonal approaches to optimizing performance at the expense of an acceptable loss of accuracy. Both approaches can be implemented as outer wrappers around existing database engines, so they should be easily applicable to other systems. For the first optimization we show that memory is the limiting factor in executing queries at speed, and we therefore explore possibilities to improve memory efficiency. We adapt some of the theory behind data sketches to reduce the size of particularly expensive fields in our largest tables by a factor of 4.5 compared to a standard compression algorithm. This saves 37% of the overall memory in PowerDrill and introduces a 0.4% relative error in the 90th percentile for results of queries involving the expensive fields. We additionally evaluate the effects of sampling on accuracy and propose a simple heuristic for annotating individual result values as accurate (or not). Based on measurements of user behavior in our production system, we show that these estimates are essential for interpreting intermediate results before final results are available. For a large set of queries this effectively brings the 95th latency percentile down from 30 to 4 seconds.
Keywords: big data, in-memory column-store, high-performance SQL queries, approximate SQL queries
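A heuristic for flagging sampled result values as accurate could look roughly like the sketch below. This is an illustration only: the normal-approximation confidence interval and the 5% relative-error cutoff are assumptions for the example, not PowerDrill's actual rule.

```python
import random

def annotate_sampled_count(sample_hits, sample_size, population, rel_err_cap=0.05):
    """Estimate a count from a uniform sample and flag it accurate when the
    approximate 95% relative error stays under rel_err_cap.
    Uses the normal approximation for a binomial proportion."""
    p = sample_hits / sample_size
    estimate = p * population
    if p == 0:
        return estimate, False
    # Approximate 95% half-width of the proportion, turned into a relative error.
    half_width = 1.96 * (p * (1 - p) / sample_size) ** 0.5
    return estimate, (half_width / p) <= rel_err_cap

random.seed(42)
# Simulate sampling 10,000 of 50,000,000 records where ~30% match a predicate.
sample_hits = sum(random.random() < 0.3 for _ in range(10_000))
est, accurate = annotate_sampled_count(sample_hits, 10_000, 50_000_000)
```

A query UI could then show `est` with an "approximate" marker whenever `accurate` is False, which is the kind of annotation the paper argues users need for intermediate results.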
Procedia PDF Downloads 259
965 The Internationalization of Capital Market Influencing Debt Sustainability's Impact on the Growth of the Nigerian Economy
Authors: Godwin Chigozie Okpara, Eugine Iheanacho
Abstract:
The paper set out to assess the sustainability of debt in the Nigerian economy. Specifically, it sought to determine the level of debt sustainability and its impact on the growth of the economy; whether the internationalization of the capital market has positively influenced debt sustainability's impact on economic growth; and the direction of causality between external debt sustainability and GDP growth. In the light of these objectives, ratio analysis was employed to determine debt sustainability. Our findings revealed that the periods 1986-1994 and 1999-2004 were periods of severely unsustainable borrowing. The unit root test showed that the variables of the growth model were integrated of order one, I(1), and the cointegration test provided evidence of long-run stability. Considering the dawn of the internationalization of the capital market, the researchers employed a structural break approach using the Chow breakpoint test on the vector error correction model (VECM). The VECM results showed that debt sustainability, measured by the debt-to-GDP ratio, exerts a negative and significant impact on the growth of the economy, while the debt burden, measured by the debt-export ratio and the debt service-export ratio, has a negative though insignificant effect on GDP growth. The Chow test result indicated that the internationalization of the capital market has no significant effect on the impact of debt overhang on the growth of the economy. The Granger causality test indicates a feedback effect from economic growth to the debt sustainability indicators. On the basis of these findings, the researchers made recommendations which, if followed religiously, will go a long way towards ameliorating debt burdens and engendering economic growth.
Keywords: debt sustainability, internationalization, capital market, cointegration, Chow test
Procedia PDF Downloads 437
964 Generation of Charged Nanoparticles and Their Contribution to the Thin Film and Nanowire Growth during Chemical Vapour Deposition
Authors: Seung-Min Yang, Seong-Han Park, Sang-Hoon Lee, Seung-Wan Yoo, Chan-Soo Kim, Nong-Moon Hwang
Abstract:
The theory of charged nanoparticles suggests that in many chemical vapour deposition (CVD) processes, charged nanoparticles (CNPs) are generated in the gas phase and become the building blocks of thin films and nanowires. Recently, nanoparticle-based crystallization has become a major topic, since the growth of nanorods and crystals from nanoparticle building blocks was directly observed by transmission electron microscopy in a liquid cell. In an effort to confirm the charged gas-phase nuclei that might be generated under conventional processing conditions of thin films and nanowires during CVD, we performed in-situ measurements using a differential mobility analyser and a particle beam mass spectrometer. The size distribution and number density of CNPs were affected by process parameters such as precursor flow rate and working temperature. It was shown that many films and nanostructures, which have been believed to grow atom by atom or molecule by molecule, actually grow from the building blocks of such charged nuclei. The electrostatic interaction between CNPs and the growing surface induces their self-assembly into films and nanowires. In addition, charge-enhanced atomic diffusion makes CNPs liquid-like quasi-solids. As a result, CNPs tend to land epitaxially on the growing surface, which results in the growth of single-crystalline nanowires with smooth surfaces.
Keywords: chemical vapour deposition, charged nanoparticle, electrostatic force, nanostructure evolution, differential mobility analyser, particle beam mass spectrometer
Procedia PDF Downloads 451
963 Relation between Biochemical Parameters and Bone Density in Postmenopausal Women with Osteoporosis
Authors: Shokouh Momeni, Mohammad Reza Salamat, Ali Asghar Rastegari
Abstract:
Background: Osteoporosis is the most prevalent metabolic bone disease in postmenopausal women, associated with reduced bone mass and increased bone fracture. Measuring bone density in the lumbar spine and hip is a reliable measure of bone mass and can therefore specify the risk of fracture. Dual-energy X-ray absorptiometry (DXA) is an accurate, non-invasive system for measuring bone density, with a low margin of error and no complications. The present study aimed to investigate the relationship between biochemical parameters and bone density in postmenopausal women. Materials and methods: This cross-sectional study was conducted on 87 postmenopausal women referred to osteoporosis centers in Isfahan. Bone density was measured in the spine and hip using a DXA system. Serum levels of calcium, phosphorus, alkaline phosphatase and magnesium were measured by autoanalyzer, and serum levels of vitamin D were measured by high-performance liquid chromatography (HPLC). Results: The mean values of calcium, phosphorus, alkaline phosphatase, vitamin D and magnesium did not differ significantly between the two groups (P-value>0.05). In the control group, the relationships between alkaline phosphatase and BMC and BA in the spine were significant, with correlation coefficients of -0.402 and 0.258, respectively (P-value<0.05), and BMD and T-score in the femoral neck showed a direct and significant relationship with phosphorus (correlation=0.368; P-value=0.038). There was a significant relationship between the Z-score and calcium (correlation=0.358; P-value=0.044). Conclusion: There was no significant relationship between the values of calcium, phosphorus, alkaline phosphatase, vitamin D and magnesium and bone density (spine and hip) in postmenopausal women.
Keywords: osteoporosis, menopause, bone mineral density, vitamin D, calcium, magnesium, alkaline phosphatase, phosphorus
Procedia PDF Downloads 176
962 The Relationships between Energy Consumption, Carbon Dioxide (CO2) Emissions, and GDP for Turkey: Time Series Analysis, 1980-2010
Authors: Jinhoa Lee
Abstract:
The relationships between environmental quality, energy use and economic output have attracted growing attention over the past decades among researchers and policy makers. Focusing on the empirical aspects of the role of carbon dioxide (CO2) emissions and energy use in affecting economic output, this paper is an effort to fill the gap with a comprehensive country-level case study using modern econometric techniques. To achieve this goal, this country-specific study examines the short-run and long-run relationships among energy consumption (using disaggregated energy sources: crude oil, coal, natural gas, and electricity), CO2 emissions and gross domestic product (GDP) for Turkey, using time series analysis for the years 1980-2010. To investigate the relationships between the variables, this paper employs the Augmented Dickey-Fuller (ADF) test for stationarity, Johansen's maximum likelihood method for cointegration, and a Vector Error Correction Model (VECM) for both short- and long-run causality among the research variables in the sample. The long-run equilibrium in the VECM suggests no effects of CO2 emissions and energy use on GDP in Turkey. There is a short-run bidirectional relationship between electricity and natural gas consumption, and a negative unidirectional causality running from GDP to electricity use. Overall, the results partly support arguments that there are relationships between energy use and economic output; however, the effects may differ with the source of energy, as in the case of Turkey for the period 1980-2010. There is no significant relationship between CO2 emissions and GDP, or between CO2 emissions and energy use, in either the short or the long term.
Keywords: CO2 emissions, energy consumption, GDP, Turkey, time series analysis
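The ADF test regresses the first difference of a series on its lagged level and checks whether the coefficient is significantly negative. A bare-bones Dickey-Fuller t-statistic (no augmentation lags, and on simulated series rather than the Turkish data) can be sketched in pure Python:

```python
import random

def dickey_fuller_t(y):
    """t-statistic on b in the regression dy_t = a + b*y_{t-1} + e.
    Strongly negative values suggest stationarity (no unit root)."""
    x = y[:-1]                                   # lagged levels
    d = [y[t] - y[t - 1] for t in range(1, len(y))]  # first differences
    n = len(d)
    mx, md = sum(x) / n, sum(d) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    b = sum((xi - mx) * (di - md) for xi, di in zip(x, d)) / sxx
    a = md - b * mx
    rss = sum((di - a - b * xi) ** 2 for xi, di in zip(x, d))
    se_b = (rss / (n - 2) / sxx) ** 0.5          # OLS standard error of b
    return b / se_b

random.seed(0)
walk, ar1 = [0.0], [0.0]
for _ in range(500):
    walk.append(walk[-1] + random.gauss(0, 1))       # unit root: I(1)
    ar1.append(0.5 * ar1[-1] + random.gauss(0, 1))   # stationary AR(1)
```

The stationary AR(1) series yields a strongly negative statistic, while the random walk does not; in practice the statistic is compared against Dickey-Fuller critical values rather than the usual t-distribution.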
Procedia PDF Downloads 504
961 Simulating the Dynamics of E-waste Production from Mobile Phone: Model Development and Case Study of Rwanda
Authors: Rutebuka Evariste, Zhang Lixiao
Abstract:
Mobile phone sales and stocks have shown exponential growth globally in past years, with the number of mobile phones produced each year surpassing one billion in 2007. This soaring growth of related e-waste deserves sufficient attention regionally and globally, since about 40% of a phone's total weight is metallic, of which 12 elements are identified as highly hazardous and 12 as less harmful. Different research efforts and methods have been used to estimate obsolete mobile phones, but none has developed a dynamic model or handled the discrepancies resulting from improper approaches and errors in the input data. The aim of this study was to develop a comprehensive dynamic system model for simulating the dynamics of e-waste production from mobile phones, regardless of country or region, and to overcome the previous errors. The logistic model method, combined with the STELLA program, was used to carry out this study. A simulation for Rwanda was then conducted and compared with other countries' results for model testing and validation. Rwanda had about 1.5 million obsolete mobile phones, amounting to 125 tons of waste, in 2014, with e-waste production expected to peak in 2017. By 2020, 4.17 million obsolete phones with 351.97 tons of waste are expected, along with an environmental impact intensity 21 times that of 2005. It is thus concluded, through model testing and validation, that the present dynamic model is competent and able to deal with mobile phone e-waste production, as it has answered questions raised by previous studies from the Czech Republic, Iran, and China.
Keywords: carrying capacity, dematerialization, logistic model, mobile phone, obsolescence, similarity, Stella, system dynamics
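The logistic (carrying-capacity) approach underlying such a model can be sketched in a few lines: phones in use follow logistic growth, and each year's inflow of new phones is assumed to become obsolete after an average lifespan. The parameters below (K, r, N0, L) are illustrative placeholders, not the study's calibrated values for Rwanda:

```python
import numpy as np

def logistic_stock(K, r, N0, years):
    """Phones in use N(t) under logistic growth dN/dt = r*N*(1 - N/K),
    integrated with Euler steps of one year."""
    N = [N0]
    for _ in range(years - 1):
        N.append(N[-1] + r * N[-1] * (1 - N[-1] / K))
    return np.array(N)

# Illustrative parameters: carrying capacity K phones, intrinsic growth
# rate r per year, initial stock N0, average handset lifespan L years.
K, r, N0, L = 8e6, 0.5, 1e5, 3
stock = logistic_stock(K, r, N0, years=20)

# New phones entering use each year; each cohort is assumed obsolete L years later.
inflow = np.diff(stock, prepend=N0)
obsolete = np.concatenate([np.zeros(L), inflow[:-L]]).clip(min=0)
print(stock[-1], obsolete.sum())
```

Obsolescence (and hence e-waste) peaks roughly one lifespan after the steepest part of the adoption curve, which is the qualitative behavior the abstract reports for Rwanda.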
Procedia PDF Downloads 344
960 Seroprevalence and Associated Factors of Hepatitis B and Hepatitis C Viral Infections among Prisoners in Tigrai, Northern Ethiopia
Authors: Belaynesh Tsegay Beyene, Teklay Gebrecherkos, Atsebaha Gebrekidan Kahsay, Mahmud Abdulkader
Abstract:
Background: Hepatitis B and C viruses are important global health and socioeconomic problems, causing considerable disease and death in Sub-Saharan African countries. The burden of hepatitis in the prison settings of Tigrai is unknown. Therefore, we aimed to describe the seroprevalence and associated factors of hepatitis B and C viruses among prisoners in Tigrai, Ethiopia. Methods: A cross-sectional study was carried out from February 2020 to May 2020 at the prison facilities of Tigrai. Demographics and associated factors were collected prospectively from 315 prisoners. Five milliliters of blood were collected and tested using rapid test kits for HBsAg (Zhejiang Orient Gene Biotech Co., Ltd., China) and HCV antibodies (Volkan Kozmetik Sanayi Ve Ticaret Ltd. STI, Turkey). Positive samples were confirmed using enzyme-linked immunosorbent assay (ELISA) (Beijing Wantai Biological Pharmacy Enterprise Co. Ltd). Data were analyzed using Statistical Package for Social Sciences (SPSS) version 20, and p < 0.05 was considered statistically significant. Results: The overall seroprevalence of HBV and HCV was 25 (7.9%) and 1 (0.3%), respectively. The majority of hepatitis B viral infections were identified in the 18-25 year age group (10.7%) and among unmarried prisoners (11.8%). Living in cells holding more than 100 prisoners [AOR = 3.95, 95% CI: 1.15-13.6, p = 0.029] and a history of alcohol consumption [AOR = 3.01, 95% CI: 1.17-7.74, p = 0.022] were significantly associated with HBV infection. Conclusions: The seroprevalence of HBV among prisoners was borderline high (7.9%), with a very low HCV prevalence (0.3%). HBV was most prevalent among young adults, prisoners held in large-occupancy cells, and those with a history of alcohol consumption.
This study recommends prison-focused interventions, including regular health education emphasizing modes of transmission, and the introduction of an HBV screening policy for prisoners, especially at entry to prison.
Keywords: seroprevalence, HBV, HCV, prisoners, Tigrai
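The adjusted odds ratios (AORs) above come from multivariable logistic regression. As a simpler illustration of the underlying measure, a crude (unadjusted) odds ratio with a Woolf 95% confidence interval can be computed directly from a 2x2 table; the counts below are hypothetical, not the study's data:

```python
import math

def odds_ratio_ci(a, b, c, d):
    """Crude odds ratio and 95% CI (Woolf/log method) for a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of ln(OR)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi

# Hypothetical counts: HBsAg-positive vs negative by alcohol-consumption history.
or_, lo, hi = odds_ratio_ci(a=15, b=85, c=10, d=205)
print(round(or_, 2), round(lo, 2), round(hi, 2))
```

When the confidence interval excludes 1, as in both associations the study reports, the exposure is considered significantly associated with infection.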
Procedia PDF Downloads 73
959 Laser Registration and Supervisory Control of neuroArm Robotic Surgical System
Authors: Hamidreza Hoshyarmanesh, Hosein Madieh, Sanju Lama, Yaser Maddahi, Garnette R. Sutherland, Kourosh Zareinia
Abstract:
This paper illustrates the concept of an algorithm to register specified markers on the neuroArm surgical manipulators, an image-guided, MR-compatible, tele-operated robot for microsurgery and stereotaxy. Two range-finding approaches, namely time-of-flight and phase-shift, are evaluated for registration and supervisory control. The time-of-flight approach is implemented in a semi-field experiment to determine the precise position of a tiny retro-reflective moving object. The moving object simulates a surgical tool tip; during surgery, the tool would be a target connected to the neuroArm end-effector inside the magnet bore of the MR imaging system. To implement the time-of-flight approach, a 905-nm pulsed laser diode and an avalanche photodiode are utilized as the transmitter and receiver, respectively. For the experiment, a high-frequency time-to-digital converter was designed using a field-programmable gate array (FPGA). In the phase-shift approach, a continuous green laser beam with a wavelength of 530 nm was used as the transmitter. Results showed a positioning error of 0.1 mm when the scanner-target distance was set in the range of 2.5 to 3 meters. The effectiveness of this non-contact approach indicates that the method could be employed as an alternative to a conventional mechanical registration arm; furthermore, the approach is not limited by physical contact or joint-angle extension.
Keywords: 3D laser scanner, intraoperative MR imaging, neuroArm, real time registration, robot-assisted surgery, supervisory control
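Both range-finding principles reduce to simple relations: time-of-flight converts a round-trip pulse delay into distance, and phase-shift converts the phase lag of a modulated beam into distance within an ambiguity interval. A minimal sketch with assumed timing and modulation values (not the hardware's measured figures):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def tof_distance(round_trip_time_s):
    """Time-of-flight: the pulse travels to the target and back, so halve the path."""
    return C * round_trip_time_s / 2

def phase_shift_distance(phase_rad, mod_freq_hz):
    """Phase-shift: d = c * phi / (4 * pi * f); unambiguous only within
    half the modulation wavelength, c / (2 * f)."""
    return C * phase_rad / (4 * math.pi * mod_freq_hz)

# A scanner-target distance of ~2.75 m (mid-range of the 2.5-3 m experiment)
# corresponds to a round trip of roughly 18.3 ns.
d = tof_distance(18.345e-9)
print(d)
```

The 0.1 mm positioning error reported above corresponds to resolving the round-trip time to well under a picosecond, which is why a high-frequency time-to-digital converter is needed.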
Procedia PDF Downloads 286
958 Non-Destructive Technique for Detection of Voids in the IC Package Using Terahertz-Time Domain Spectrometer
Authors: Sung-Hyeon Park, Jin-Wook Jang, Hak-Sung Kim
Abstract:
In recent years, the Terahertz (THz) time-domain spectroscopy (TDS) imaging method has received considerable interest as a promising non-destructive technique for the detection of internal defects. In comparison to other non-destructive techniques such as X-ray inspection, scanning acoustic tomography (SAT) and microwave inspection, the THz-TDS imaging method has many advantages. First, it can measure the exact thickness and location of defects. Second, it does not require the liquid couplant that is crucial for delivering ultrasonic power in the SAT method. Third, it neither damages materials nor poses a hazard to human bodies, as X-ray inspection does. Finally, it exhibits better spatial resolution than microwave inspection. However, this technology could not previously be applied to IC packages: THz radiation penetrates a wide variety of materials, including polymers and ceramics, but not metals, making it difficult to detect defects in IC packages, which are composed not only of epoxy and semiconductor materials but also of various metals such as copper, aluminum and gold. In this work, we propose a method for detecting voids in IC packages using a THz-TDS imaging system. The IC package specimens for this study were prepared by the Packaging Engineering Team at Samsung Electronics. Our THz-TDS imaging system has a special reflection mode, called pitch-catch mode, which can vary the incidence angle in reflection from 10° to 70°, whereas other systems offer only transmission mode and normal reflection mode, or reflection at a fixed angle. To find the voids in the IC package, we therefore investigated the appropriate angle by varying the incidence angle of the THz wave emitter and detector. As a result, the voids in the IC packages were successfully detected using our THz-TDS imaging system.
Keywords: terahertz, non-destructive technique, void, IC package
Procedia PDF Downloads 473
957 The Security Trade-Offs in Resource Constrained Nodes for IoT Application
Authors: Sultan Alharby, Nick Harris, Alex Weddell, Jeff Reeve
Abstract:
The concept of the Internet of Things (IoT) has received much attention over the last five years. It is predicted that the IoT will influence every aspect of our lifestyles in the near future. Wireless sensor networks (WSNs) are one of the key enablers of IoT operation, allowing data to be collected from the surrounding environment. However, due to limited resources, the nature of deployment and unattended operation, a WSN is vulnerable to various types of attack. Security is paramount for reliable and safe communication between IoT embedded devices, but it comes at a cost to resources. Nodes are usually equipped with small batteries, which makes energy conservation crucial to IoT devices. Nevertheless, security cost in terms of energy consumption has not been studied sufficiently. Previous research has used the security specification of IEEE 802.15.4 for IoT applications, but the energy cost of each security level and the impact on quality of service (QoS) parameters remain unknown. This research focuses on the cost of security at the IoT media access control (MAC) layer. It begins by studying the energy consumption of the IEEE 802.15.4 security levels, followed by an evaluation of the impact of security on data latency and throughput; it then presents the impact of transmission power on security overhead, and finally shows the effects of security on memory footprint. The results show that the security overhead in terms of energy consumption, with a payload of 24 bytes, ranges from 31.5% over non-secure packets at the minimum level to 60.4% at the top security level of the 802.15.4 specification. They also show that security cost has less impact at longer packet lengths and more with smaller packet sizes. In addition, the results show a significant impact on data latency and throughput.
Overall, the maximum authentication length decreases throughput by almost 53%, and encryption and authentication together by almost 62%.
Keywords: energy consumption, IEEE 802.15.4, IoT security, security cost evaluation
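The finding that security costs relatively more on short frames follows from the fixed per-frame overhead: the MIC tag and the cryptographic processing cost the same number of bytes and joules regardless of payload length. A back-of-envelope model with assumed (not measured) energy figures makes this concrete:

```python
def security_overhead_pct(payload_bytes, mic_bytes,
                          energy_per_byte_uj=1.0, fixed_crypto_uj=5.0):
    """Relative energy overhead of securing one 802.15.4 frame:
    (secured - plain) / plain, where securing adds a fixed MIC of
    mic_bytes plus a fixed crypto-processing cost. Illustrative model only."""
    plain = payload_bytes * energy_per_byte_uj
    secured = (payload_bytes + mic_bytes) * energy_per_byte_uj + fixed_crypto_uj
    return 100.0 * (secured - plain) / plain

# The same security level (here a 16-byte, MIC-128-style tag) costs
# relatively more on a short frame than on a long one:
short = security_overhead_pct(payload_bytes=24, mic_bytes=16)
long_ = security_overhead_pct(payload_bytes=100, mic_bytes=16)
print(round(short, 1), round(long_, 1))
```

The absolute numbers here are placeholders; only the trend (overhead shrinking as payload grows) mirrors the measurements reported above.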
Procedia PDF Downloads 168
956 Green Synthesis and Characterisation of Gold Nanoparticles from the Stem Bark and Leaves of Khaya Senegalensis and Its Cytotoxicity on MCF7 Cell Lines
Authors: Stephen Daniel Iduh, Evans Chidi Egwin, Oluwatosin Kudirat Shittu
Abstract:
The development of reliable and eco-friendly processes for producing metallic nanoparticles is an important step in the field of nanotechnology for biomedical application. To achieve this, the use of natural sources such as biological systems becomes essential. In the present work, extracellular biosynthesis of gold nanoparticles using aqueous leaf and stem bark extracts of K. senegalensis was attempted. The gold nanoparticles produced were characterized using high-resolution scanning electron microscopy, ultraviolet-visible spectroscopy, a Zetasizer Nano, energy-dispersive X-ray (EDAX) spectroscopy and Fourier transform infrared (FTIR) spectroscopy. The cytotoxicity of the synthesized gold nanoparticles on the MCF-7 cell line was evaluated using the MTT assay. The results showed rapid development of nano-sized and -shaped particles within 5 minutes of reaction, with surface plasmon resonance at 520 and 525 nm, respectively. An average particle size of 20-90 nm was confirmed. The amount of extract determines the core size of the AuNPs: the core size decreases as the amount of extract increases, shifting the surface plasmon resonance band. The FTIR spectra confirm the presence of biomolecules serving as reducing and capping agents on the synthesized gold nanoparticles. The MTT assay shows a significant, concentration-dependent effect of the gold nanoparticles. This environment-friendly method of biological gold nanoparticle synthesis has potential and could be applied directly in cancer therapy.
Keywords: biosynthesis, gold nanoparticles, characterization, calotropis procera, cytotoxicity
Procedia PDF Downloads 490
955 Artificial Intelligence and Law
Authors: Mehrnoosh Abouzari, Shahrokh Shahraei
Abstract:
With the development of artificial intelligence in the present age, intelligent machines and systems have proven their actual and potential capabilities and are steadily increasing their presence in various fields of human life: industry, financial transactions, marketing, manufacturing, service affairs, politics, economics and various branches of the humanities. Therefore, despite the conservatism and prudence of the legal profession, traces of artificial intelligence can be seen in various areas of law. Examples of efforts to apply AI in law include estimating judicial robotics capability, intelligent judicial decision-making systems, intelligent adjustment of defender and attorney strategy, and the consolidation and regulation of different and scattered laws in each case to achieve judicial coherence, reduce divergence of opinion, and reduce prolonged hearings and the discontent felt toward the current legal system, through the design of rule-based, case-based and knowledge-based systems. In this article, we identify the ways in which AI is applied in law and regulation, identify the dominant concerns in this area, and outline the relationship between these two fields, in order to answer the question of how artificial intelligence can be used in different areas of law and what the implications of this application will be. The authors believe that the use of artificial intelligence in the three branches of legislative, judicial and executive power can be very effective in governments' decisions and smart governance, helping to reach smart communities across human and geographical boundaries and to realize humanity's long-held dream: a global village free of violence, partiality and human error.
Therefore, in this article, we analyze the dimensions of how to use artificial intelligence in the three legislative, judicial and executive branches of government in order to realize its application.
Keywords: artificial intelligence, law, intelligent system, judge
Procedia PDF Downloads 119
954 Impact of Climate Change on Sea Level Rise along the Coastline of Mumbai City, India
Authors: Chakraborty Sudipta, A. R. Kambekar, Sarma Arnab
Abstract:
Sea-level rise is one of the most important impacts of anthropogenic climate change, resulting from global warming and the melting of ice in the Arctic and Antarctic; the investigations done by various researchers, both on the Indian coast and elsewhere during the last decade, are reviewed in this paper. The paper aims to ascertain the consistency of different suggested methods for predicting near-accurate future sea-level rise along the coast of Mumbai. Case studies on the east coast, southern tip, and west and southwest coasts of India are reviewed. The Coastal Vulnerability Index of several important international places is compared and found to match Intergovernmental Panel on Climate Change forecasts. The application of Geographic Information System mapping and remote sensing technology, with both Multispectral Scanner and Thematic Mapper data from Landsat classified through the Iterative Self-Organizing Data Analysis Technique, has been used to derive high, moderate and low Coastal Vulnerability Index values at various important coastal cities. Instead of purely data-driven forecasts, hindcast-based forecasts of significant wave height, with the additional impact of sea-level rise, are suggested. The efficacy and limitations of numerical methods versus artificial neural networks are assessed, and the importance of the root mean square error of numerical results is noted. Comparing the various computerized methods, forecast results obtained from MIKE 21 are considered more reliable than those of the Delft3D model.
Keywords: climate change, Coastal Vulnerability Index, global warming, sea level rise
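Since the comparison above leans on the root mean square error for judging numerical and ANN forecasts, here is a minimal RMSE computation on illustrative (not measured) sea-level values:

```python
import math

def rmse(predicted, observed):
    """Root Mean Square Error between two equal-length sequences:
    sqrt(mean of squared prediction errors). Lower is better."""
    return math.sqrt(
        sum((p - o) ** 2 for p, o in zip(predicted, observed)) / len(observed)
    )

# Illustrative annual sea-level anomalies (mm), not measured Mumbai data.
observed  = [2.1, 2.4, 2.9, 3.3, 3.8]
predicted = [2.0, 2.6, 2.7, 3.5, 3.7]
print(round(rmse(predicted, observed), 3))
```

Comparing models such as MIKE 21 and Delft3D on the same observation set then reduces to comparing their RMSE values.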
Procedia PDF Downloads 132
953 Numerical Simulation of Flow and Heat Transfer Characteristics with Various Working Conditions inside a Reactor of Wet Scrubber
Authors: Jonghyuk Yoon, Hyoungwoon Song, Youngbae Kim, Eunju Kim
Abstract:
Recently, with the rapid growth of the semiconductor industry, much interest has focused on after-treatment systems that remove the polluted gases produced by semiconductor manufacturing processes, and the wet scrubber is one of the most widely used systems. The polluted gas is first removed by chemical reaction in the reactor part; the gas stream is then brought into contact with the scrubbing liquid by spraying. Effective design of the reactor part inside the wet scrubber is highly important, since the removal performance of the polluted gas in the reactor plays an important role in overall performance and stability. In the present study, a CFD (Computational Fluid Dynamics) analysis was performed to characterize the thermal and flow behavior inside a unit reactor of a wet scrubber. To verify the numerical result, the temperature distribution at various monitoring points was compared to the experimental result. Average error rates of 12-15% between them were found, and the numerical temperature distribution was in good agreement with the experimental data. Using the validated numerical method, the effect of the reactor geometry on the heat transfer rate was also investigated, and the uniformity of the temperature distribution was improved by about 15%. Overall, the results of the present study provide useful information for identifying the fluid behavior and thermal performance of various scrubber systems. This project is supported by the 'R&D Center for the reduction of Non-CO₂ Greenhouse gases (RE201706054)' funded by the Korea Ministry of Environment (MOE) as the Global Top Environment R&D Program.
Keywords: semiconductor, polluted gas, CFD (Computational Fluid Dynamics), wet scrubber, reactor
Procedia PDF Downloads 143
952 Efficacy of Learning: Digital Sources versus Print
Authors: Rahimah Akbar, Abdullah Al-Hashemi, Hanan Taqi, Taiba Sadeq
Abstract:
As technology continues to develop, teaching curriculums in both schools and universities have begun adopting a more computer/digital-based approach to the transmission of knowledge and information, as opposed to the more old-fashioned use of textbooks. This gives rise to the question: Are there any differences between learning from a digital source and learning from a printed source, such as a textbook? More specifically, which medium results in better long-term retention? A review of the confounding factors implicated in understanding the relationship between learning from the two mediums was conducted. Alongside this, a 4-week cohort study involving 76 first-year English Language female students was performed, with the participants divided into two groups: Group A studied material from a paper source (the Print Medium), and Group B studied material from a digital source (the Digital Medium). The dependent variables were memory recall, graded on a 4-point scale, and the total frequency of item repetition. The study was facilitated by computer software called SuperMemo. Results showed that, contrary to prevailing evidence, the Digital Medium group showed no statistically significant differences in the shift from Remember (episodic) to Know (semantic) when all confounding factors were accounted for. The shift from Random Guess and Familiar to Remember occurred faster in the Digital Medium than in the Print Medium.
Keywords: digital medium, print medium, long-term memory recall, episodic memory, semantic memory, super memo, forgetting index, frequency of repetitions, total time spent
Procedia PDF Downloads 289
951 Security Issues on Smart Grid and Blockchain-Based Secure Smart Energy Management Systems
Authors: Surah Aldakhl, Dafer Alali, Mohamed Zohdy
Abstract:
The next generation of electricity grid infrastructure, known as the "smart grid," integrates smart information and communication technology (ICT) into existing grids to alleviate the drawbacks of one-way grid systems. The efficiency and dependability of future power systems are anticipated to increase significantly thanks to the smart grid, especially given the demand for renewable energy sources. The security of the smart grid's cyber infrastructure is a growing concern, though, as a result of the interconnection of major power plants through communication networks. Cyber-attacks can destroy energy data and leak grid members' personal information, resulting in serious incidents such as large outages and the destruction of power network infrastructure. We therefore propose a secure smart energy management system based on the blockchain as a remedy for this problem. The power transmission and distribution system may be transformed by the inclusion of optical fiber sensors and blockchain technology in smart grids: optical fiber sensors allow real-time monitoring and management of electrical energy flow, while the blockchain offers a secure platform to safeguard the smart grid against cyber-attacks and unauthorized access. Additionally, this integration makes it possible to see how energy is produced, distributed and used in real time, increasing transparency. This strategy offers improved security, efficiency, dependability and flexibility in energy management. An in-depth analysis of the advantages and drawbacks of combining blockchain technology with optical fiber is provided in this paper.
Keywords: smart grids, blockchain, fiber optic sensor, security
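The tamper evidence a blockchain lends to metering data can be illustrated with a minimal hash chain. This sketch uses only hashlib and deliberately omits consensus, signatures and networking; the meter readings are hypothetical:

```python
import hashlib
import json

def make_block(prev_hash, reading):
    """Link one meter reading to the chain by hashing it together
    with its predecessor's hash."""
    payload = json.dumps({"prev": prev_hash, "reading": reading}, sort_keys=True)
    return {"prev": prev_hash, "reading": reading,
            "hash": hashlib.sha256(payload.encode()).hexdigest()}

def verify(chain):
    """Recompute every link; any edited reading breaks its own hash
    and, through the prev links, every subsequent one."""
    prev = "0" * 64
    for block in chain:
        payload = json.dumps({"prev": prev, "reading": block["reading"]},
                             sort_keys=True)
        if block["prev"] != prev or \
                block["hash"] != hashlib.sha256(payload.encode()).hexdigest():
            return False
        prev = block["hash"]
    return True

chain = []
prev = "0" * 64
for kwh in [12.4, 11.9, 13.2]:  # hypothetical smart-meter readings
    block = make_block(prev, kwh)
    chain.append(block)
    prev = block["hash"]

ok_before = verify(chain)
chain[1]["reading"] = 99.9  # tamper with a stored reading
ok_after = verify(chain)
print(ok_before, ok_after)
```

A production smart-grid ledger adds distributed consensus and digital signatures on top of exactly this chaining property, so that no single compromised node can silently rewrite energy data.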
Procedia PDF Downloads 119
950 Evaluation of Ensemble Classifiers for Intrusion Detection
Authors: M. Govindarajan
Abstract:
One of the major developments in machine learning in the past decade is the ensemble method, which finds a highly accurate classifier by combining many moderately accurate component classifiers. In this research work, new ensemble classification methods are proposed: a homogeneous ensemble classifier using bagging and a heterogeneous ensemble classifier using arcing, whose performances are analyzed in terms of accuracy. A classifier ensemble is designed using a Radial Basis Function (RBF) network and a Support Vector Machine (SVM) as base classifiers. The feasibility and benefits of the proposed approaches are demonstrated on standard intrusion-detection datasets. The main originality of the proposed approach lies in its three main parts: a preprocessing phase, a classification phase and a combining phase. A wide range of comparative experiments is conducted on standard intrusion-detection datasets. The performance of the proposed homogeneous and heterogeneous ensemble classifiers is compared to that of other standard ensemble methods: the standard homogeneous methods include error-correcting output codes (ECOC) and Dagging, and the heterogeneous methods include majority voting and stacking. The proposed ensemble methods provide a significant improvement in accuracy compared to individual classifiers; the proposed bagged RBF and SVM perform significantly better than ECOC and Dagging, and the proposed hybrid RBF-SVM performs significantly better than voting and stacking. Heterogeneous models also exhibit better results than homogeneous models on standard intrusion-detection datasets.
Keywords: data mining, ensemble, radial basis function, support vector machine, accuracy
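The two ensemble types can be sketched with scikit-learn on synthetic data standing in for an intrusion-detection set. Note the substitutions: scikit-learn ships neither an RBF-network classifier nor arcing, so an RBF-kernel SVM stands in for the RBF base learner and plain majority voting stands in for arcing; this is an illustration of the ensemble structure, not the paper's method:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for an intrusion-detection dataset
# (rows = network flows, label = normal vs attack).
X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Homogeneous ensemble: bagging many copies of one base learner,
# each trained on a bootstrap resample of the training set.
bagged = BaggingClassifier(SVC(kernel="rbf"), n_estimators=10,
                           random_state=0).fit(X_tr, y_tr)

# Heterogeneous ensemble: majority voting over two different base learners.
voting = VotingClassifier(
    [("svm", SVC(kernel="rbf")), ("tree", DecisionTreeClassifier(random_state=0))],
    voting="hard",
).fit(X_tr, y_tr)

print(bagged.score(X_te, y_te), voting.score(X_te, y_te))
```

The combining phase in the paper's terms is the bootstrap-vote of the bagged model and the majority vote of the heterogeneous model, respectively.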
Procedia PDF Downloads 248
949 Estimation of Source Parameters and Moment Tensor Solution through Waveform Modeling of 2013 Kishtwar Earthquake
Authors: Shveta Puri, Shiv Jyoti Pandey, G. M. Bhat, Neha Raina
Abstract:
The Jammu and Kashmir region of the Northwest Himalaya has witnessed many devastating earthquakes in the recent past, yet it has remained largely unexplored by seismic investigations, apart from scanty records of past earthquakes in the region. In this study, we used local seismic data from 2013 recorded by the network of broadband seismographs in J&K. During this period, our seismic stations recorded about 207 earthquakes, including two moderate events of Mw 5.7 on 1 May 2013 and Mw 5.1 on 2 August 2013. We analyzed events of Mw 3-4.6 and the main events only (to minimize error) for source parameters, b-value and sense of movement through waveform modeling, in order to understand the seismotectonics and seismic hazard of the region. Most of the events are bounded between 32.9° N and 33.3° N latitude and 75.4° E and 76.1° E longitude; moment magnitude (Mw) ranges from 3 to 5.7, source radius (r) from 0.21 to 3.5 km, stress drop from 1.90 bars to 71.1 bars, and corner frequency from 0.39 to 6.06 Hz. The b-value for this region was found to be 0.83±0 from these events, which is lower than the normal value (b = 1), indicating that the area is under high stress. The travel-time and waveform inversion methods suggest focal depths of up to 10 km, probably above the detachment depth of the Himalayan region. The moment tensor solution of the main event of 2 August (Mw 5.1, 02:32:47 UTC) suggests that the source fault strikes at 295° with a dip of 33° and a rake of 85°. These events form an intense cluster of small to moderate events within a narrow zone between the Panjal Thrust and the Kishtwar Window. The moment tensor solutions of the main events and their aftershocks indicate that thrust-type movement is occurring in this region.
Keywords: b-value, moment tensor, seismotectonics, source parameters
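A b-value such as the 0.83 cited above is commonly estimated with Aki's maximum-likelihood formula from the magnitudes above the catalog's completeness threshold. A sketch on an illustrative catalog (not the study's 207-event data; the completeness magnitude Mc and bin width are assumptions):

```python
import math

def b_value_mle(magnitudes, mc, dm=0.1):
    """Aki (1965) maximum-likelihood b-value with Utsu's binning correction:
    b = log10(e) / (mean(M) - (Mc - dm/2)), using only events with M >= Mc,
    where dm is the magnitude bin width."""
    m = [x for x in magnitudes if x >= mc]
    mean_m = sum(m) / len(m)
    return math.log10(math.e) / (mean_m - (mc - dm / 2))

# Illustrative magnitudes (Mw); Mc = 3.0 is an assumed completeness threshold.
mags = [3.0, 3.1, 3.1, 3.2, 3.3, 3.4, 3.6, 3.8, 4.0, 4.6]
b = b_value_mle(mags, mc=3.0)
print(round(b, 2))
```

A b-value below 1, as this toy catalog also happens to yield, is conventionally read as a relatively higher proportion of larger events, consistent with the high-stress interpretation in the abstract.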
Procedia PDF Downloads 313
948 ChaQra: A Cellular Unit of the Indian Quantum Network
Authors: Shashank Gupta, Iteash Agarwal, Vijayalaxmi Mogiligidda, Rajesh Kumar Krishnan, Sruthi Chennuri, Deepika Aggarwal, Anwesha Hoodati, Sheroy Cooper, Ranjan, Mohammad Bilal Sheik, Bhavya K. M., Manasa Hegde, M. Naveen Krishna, Amit Kumar Chauhan, Mallikarjun Korrapati, Sumit Singh, J. B. Singh, Sunil Sud, Sunil Gupta, Sidhartha Pant, Sankar, Neha Agrawal, Ashish Ranjan, Piyush Mohapatra, Roopak T., Arsh Ahmad, Nanjunda M., Dilip Singh
Abstract:
Major research interest in quantum key distribution (QKD) is primarily focused on increasing 1) point-to-point transmission distance (1000 km), 2) secure key rate (Mbps), and 3) security of the quantum layer (device independence). It is valuable to push the boundaries on these fronts, but these isolated approaches are neither scalable nor cost-effective due to the requirements of specialised hardware and different infrastructure. Current and future QKD networks require addressing different sets of challenges apart from distance, key rate and quantum security. In this regard, we present ChaQra, a sub-quantum network with the following core features: 1) crypto agility (integration into already deployed telecommunication fibres), 2) software-defined networking (the SDN paradigm for routing between different nodes), 3) reliability (addressing denial of service with hybrid quantum-safe cryptography), 4) upgradability (module upgrades based on scientific and technological advancements), and 5) beyond QKD (using the QKD network for distributed computing, multi-party computation, etc.). Our results demonstrate a clear path to creating and accelerating a quantum-secure Indian subcontinent under the national quantum mission.
Keywords: quantum network, quantum key distribution, quantum security, quantum information
Procedia PDF Downloads 56