Search results for: resolution down converter
425 A Simple and Empirical Refraction Correction Method for UAV-Based Shallow-Water Photogrammetry
Authors: I GD Yudha Partama, A. Kanno, Y. Akamatsu, R. Inui, M. Goto, M. Sekine
Abstract:
The aerial photogrammetry of shallow water bottoms has the potential to be an efficient high-resolution survey technique for shallow water topography, thanks to the advent of convenient UAVs and automatic image processing techniques (Structure-from-Motion (SfM) and Multi-View Stereo (MVS)). However, it suffers from systematic overestimation of the bottom elevation due to light refraction at the air-water interface. In this study, we present an empirical method to correct for the effect of refraction after the usual SfM-MVS processing, using common software. The presented method utilizes the empirical relation between the measured true depth and the estimated apparent depth to generate an empirical correction factor. This correction factor is then used to convert the apparent water depth into a refraction-corrected (real-scale) water depth. To examine its effectiveness, we applied the method to two river sites and compared the RMS errors in the corrected bottom elevations with those obtained by three existing methods. The results show that the presented method is more effective than two of the existing methods: the method without any correction factor and the method that uses the refractive index of water (1.34) as the correction factor. In comparison with the remaining existing method, which adds an offset term after calculating the correction factor, the presented method performs better at Site 2 and worse at Site 1. However, we found this linear regression method to be unstable when the training data used for calibration are limited. It also suffers from a large negative bias in the correction factor when the estimated apparent water depth is affected by noise, according to our numerical experiment. Overall, the accuracy of a refraction correction method depends on various factors such as the location, image acquisition, and GPS measurement conditions. The most effective method can be selected by statistical model selection (e.g. leave-one-out cross-validation).
Keywords: bottom elevation, MVS, river, SfM
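A minimal sketch (in Python, not from the paper) of how such an empirical correction factor could be derived from calibration points and then applied; the depth values, the variable names, and the choice of a least-squares fit through the origin are illustrative assumptions rather than the authors' exact procedure.

```python
import numpy as np

# Hypothetical calibration points: apparent depths from SfM-MVS and the
# corresponding measured true depths (metres) at the same locations.
apparent_depth = np.array([0.30, 0.55, 0.80, 1.10, 1.40])
true_depth = np.array([0.41, 0.76, 1.09, 1.52, 1.90])

# Empirical correction factor: least-squares slope of a line through the
# origin relating true depth to apparent depth.
factor = np.sum(apparent_depth * true_depth) / np.sum(apparent_depth ** 2)

# Convert apparent water depths into refraction-corrected (real-scale) depths.
corrected_depth = factor * apparent_depth

# For comparison, the purely optical correction would use the refractive
# index of water (about 1.34) as the factor.
print(f"empirical factor = {factor:.2f} (refractive index of water = 1.34)")
print("RMS error after correction:",
      np.sqrt(np.mean((corrected_depth - true_depth) ** 2)))
```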
Procedia PDF Downloads 298
424 Democratising Rivers: Local River Conflicts in Rajasthan
Authors: Renu Sisodia
Abstract:
This paper attempts to explore and explain local-level river water conflicts in the larger context of state-society relations. The study also covers the causes of local-level river water conflicts in the catchment areas of the Bandi and Arvari rivers of Rajasthan. The focus of the study is on the emergence of community-driven, decentralised management of river water bodies and the strategies used by local communities to protect water and manage river water conflicts. The research was conducted by designing a framework based on essential theoretical and practical findings supported by primary and secondary data. Two in-depth case studies were conducted to understand the phenomenon in depth. The first field site is the Bandi River of Pali district, which concerns the struggle between textile industries, the community and the State government, in which water pollution is said to be one of the driving forces of the conflict. Findings show that the textile industries supported by the state in Pali district have not adhered to environmental ethics. The present legal infrastructure and local institutions fail to resolve the serious problem of water pollution in the Bandi River and its adverse impact on the local community; as a result, the local community has resisted the local administration and the state government. The second case illustrates the plight of the Arvari River in Alwar district. The tussle for the ownership of fisheries between the local community, a private fish contractor and the State government has been the main bone of contention. To resolve this conflict, the local community formed a conflict management mechanism named the Arvari Parliament. The Arvari Parliament has its own principles and rules to resolve water conflicts related to ownership of the river and use of the river water. The research findings also highlight the co-existence of conventional and modern practices in resolving conflicts.
Keywords: water, water pollution, water conflicts, water scarcity, conflict resolution, local community
Procedia PDF Downloads 482
423 Determining the Extent and Direction of Relief Transformations Caused by Ski Run Construction Using LIDAR Data
Authors: Joanna Fidelus-Orzechowska, Dominika Wronska-Walach, Jaroslaw Cebulski
Abstract:
Mountain areas are very often exposed to numerous transformations connected with the development of tourist infrastructure. In mountain areas in Poland ski tourism is very popular, so agricultural areas are often transformed into tourist areas. The construction of new ski runs can change the direction and rate of slope development. The main aim of this research was to determine the geomorphological and hydrological changes within slopes caused by ski run construction. The study was conducted in the Remiaszów catchment in the Inner Polish Carpathians (southern Poland). The mean elevation of the catchment is 859 m a.s.l. and the maximum is 946 m a.s.l. The surface area of the catchment is 1.16 km2, of which 16.8% is occupied by the two studied ski runs. The studied ski runs were constructed in 2014 and 2015. In order to determine the relief transformations connected with new ski run construction, high-resolution LIDAR data were analyzed. The general relief changes in the studied catchment were determined on the basis of ALS (Airborne Laser Scanning) data obtained before (2013) and after (2016) ski run construction. Based on the two sets of ALS data, a digital elevation model of differences (DoD) was created, which made it possible to determine the quantitative relief changes in the entire studied catchment. Additionally, cross and longitudinal profiles were calculated within slopes where new ski runs were built. Detailed data on relief changes within selected test surfaces were obtained based on TLS (Terrestrial Laser Scanning). Hydrological changes within the analyzed catchment were determined based on the convergence and divergence index. The study shows that the construction of the new ski runs caused significant geomorphological and hydrological changes in the entire studied catchment. However, the most important changes were identified within the ski slopes. After the construction of the ski runs, the entire catchment area was lowered by about 0.02 m. Hydrological changes in the studied catchment mainly led to the interruption of surface runoff pathways and changes in runoff direction and geometry.
Keywords: hydrological changes, mountain areas, relief transformations, ski run construction
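The DEM-of-differences step described above can be illustrated with a short sketch; the elevation values, grid size and cell area below are invented, and real ALS point clouds would first be filtered and gridded into co-registered DEMs.

```python
import numpy as np

# Hypothetical co-registered DEMs (m a.s.l.) from the 2013 and 2016 ALS surveys,
# resampled onto the same grid.
dem_2013 = np.array([[860.20, 861.00], [859.50, 860.10]])
dem_2016 = np.array([[860.00, 860.90], [859.30, 860.20]])

# DEM of differences (DoD): negative cells indicate surface lowering
# (erosion or excavation), positive cells indicate deposition or fill.
dod = dem_2016 - dem_2013

cell_area = 1.0  # assumed 1 m grid resolution -> 1 m^2 per cell
print(f"mean elevation change: {dod.mean():.3f} m")
print(f"net volume change: {dod.sum() * cell_area:.2f} m^3")
```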
Procedia PDF Downloads 142
422 Development of a Software System for Management and Genetic Analysis of Biological Samples for Forensic Laboratories
Authors: Mariana Lima, Rodrigo Silva, Victor Stange, Teodiano Bastos
Abstract:
Due to the high reliability reached by DNA tests, since the 1980s this kind of test has allowed the identification of a growing number of criminal cases, including old cases that were unsolved and now have a chance to be solved with this technology. Currently, the use of genetic profiling databases is a typical method to increase the scope of genetic comparison. Forensic laboratories must process, analyze, and generate genetic profiles of a growing number of samples, which requires time and great storage capacity. Therefore, it is essential to develop methodologies capable of organizing the workflow and minimizing the time spent on both biological sample processing and analysis of genetic profiles, using software tools. Thus, the present work aims at the development of a software system solution for forensic genetics laboratories, which allows sample, criminal case and local database management, minimizes the time spent in the workflow and helps to compare genetic profiles. For the development of this software system, all data related to the storage and processing of samples, the workflows and the requirements that the system incorporates have been considered. The system uses the following software languages: HTML, CSS, and JavaScript in Web technology, with the NodeJS platform as server, which has great efficiency in the input and output of data. In addition, the data are stored in a relational database (MySQL), which is free, favoring acceptance by users. The software system developed here brings more agility to the workflow and analysis of samples, contributing to the rapid insertion of genetic profiles into the national database and to increasing the resolution of crimes. The next step of this research is its validation, in order to operate in accordance with current Brazilian national legislation.
Keywords: database, forensic genetics, genetic analysis, sample management, software solution
Procedia PDF Downloads 369
421 A Ku/K Band Power Amplifier for Wireless Communication and Radar Systems
Authors: Meng-Jie Hsiao, Cam Nguyen
Abstract:
Wide-band devices in the Ku band (12-18 GHz) and K band (18-27 GHz) have received significant attention for high-data-rate communications and high-resolution sensing. In particular, devices operating around 24 GHz are attractive due to the 24-GHz unlicensed applications. One of the most important components in RF systems is the power amplifier (PA). Various PAs have been developed in the Ku and K bands on GaAs, InP, and silicon (Si) processes. Although PAs using GaAs or InP processes could have better power handling and efficiency than those realized on Si, it is very hard to integrate the entire system on the same substrate with GaAs or InP. Si, on the other hand, facilitates single-chip systems. Hence, good PAs on Si substrates are desirable. In particular, a Si-based PA with good linearity is necessary for next-generation communication protocols implemented on Si. We report a 16.5 to 25.5 GHz Si-based PA having a flat saturated power of 19.5 ± 1.5 dBm, an output 1-dB compression power (OP1dB) of 16.5 ± 1.5 dBm, and 15-23% power-added efficiency (PAE). The PA consists of a drive amplifier, two main amplifiers, and a lumped-element Wilkinson power divider and combiner, designed and fabricated in the TowerJazz 0.18 µm SiGe BiCMOS process having a unity power gain frequency (fMAX) of more than 250 GHz. The PA is realized as a cascode amplifier implementing both heterojunction bipolar transistor (HBT) and n-channel metal–oxide–semiconductor field-effect transistor (NMOS) devices for gain, frequency response, and linearity considerations. In particular, a body-floating technique is utilized for the NMOS devices to improve the voltage swing and eliminate parasitic capacitances. The developed PA has a measured flat gain of 20 ± 1.5 dB across 16.5-25.5 GHz. At 24 GHz, the saturated power, OP1dB, and maximum PAE are 20.8 dBm, 18.1 dBm, and 23%, respectively. Its high performance makes it attractive for use in Ku/K-band, especially 24 GHz, communication and radar systems. This paper was made possible by NPRP grant # 6-241-2-102 from the Qatar National Research Fund (a member of Qatar Foundation). The statements made herein are solely the responsibility of the authors.
Keywords: power amplifiers, amplifiers, communication systems, radar systems
Procedia PDF Downloads 109
420 Assimilating Multi-Mission Satellites Data into a Hydrological Model
Authors: Mehdi Khaki, Ehsan Forootan, Joseph Awange, Michael Kuhn
Abstract:
Terrestrial water storage, as a source of freshwater, plays an important role in human lives. Hydrological models offer important tools for simulating and predicting water storages at global and regional scales. However, their comparisons with 'reality' are imperfect, mainly due to a high level of uncertainty in input data, limitations in accounting for all complex water cycle processes, uncertainties in the (unknown) empirical model parameters, as well as the absence of high-resolution (both spatially and temporally) data. Data assimilation can mitigate this drawback by incorporating new sets of observations into models. In this effort, we use multi-mission satellite-derived remotely sensed observations to improve the performance of the World-Wide Water Resources Assessment system (W3RA) hydrological model for estimating terrestrial water storages. For this purpose, we assimilate total water storage (TWS) data from the Gravity Recovery And Climate Experiment (GRACE) and surface soil moisture data from the Advanced Microwave Scanning Radiometer for the Earth Observing System (AMSR-E) into W3RA. This is done to (i) improve model estimations of water stored in the ground and in soil moisture, and (ii) assess the impacts of each satellite data set (from GRACE and AMSR-E) and of their combination on the final terrestrial water storage estimations. These data are assimilated into W3RA using the Ensemble Square-Root Filter (EnSRF) technique over the Mississippi Basin (the United States) and the Murray-Darling Basin (Australia) between 2002 and 2013. In order to evaluate the results, independent ground-based groundwater and soil moisture measurements within each basin are used.
Keywords: data assimilation, GRACE, AMSR-E, hydrological model, EnSRF
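As a point of reference for the filtering step, the function below sketches a serial ensemble square-root filter update for a single scalar observation (for example, one GRACE TWS or AMSR-E soil moisture value), using the deterministic square-root form of Whitaker and Hamill; it is a toy illustration with assumed array shapes, not the operational W3RA assimilation code, which additionally handles localization, many observations and model propagation.

```python
import numpy as np

def ensrf_update(ensemble, obs, obs_var, h):
    """One serial EnSRF update for a single scalar observation.

    ensemble : (n_state, n_members) array of model states (e.g. water storages)
    obs      : observed value (e.g. a GRACE TWS anomaly)
    obs_var  : observation error variance
    h        : (n_state,) linear observation operator
    """
    n_members = ensemble.shape[1]
    mean = ensemble.mean(axis=1, keepdims=True)
    perts = ensemble - mean                        # ensemble perturbations X'

    hx = h @ ensemble                              # observation-space ensemble
    hx_mean = hx.mean()
    hx_perts = hx - hx_mean

    phT = perts @ hx_perts / (n_members - 1)       # cross-covariance P H^T
    hphT = hx_perts @ hx_perts / (n_members - 1)   # scalar H P H^T

    gain = phT / (hphT + obs_var)                  # Kalman gain
    mean = mean + gain[:, None] * (obs - hx_mean)  # update ensemble mean

    # Deterministic square-root update of the perturbations (no perturbed obs).
    alpha = 1.0 / (1.0 + np.sqrt(obs_var / (hphT + obs_var)))
    perts = perts - alpha * gain[:, None] * hx_perts

    return mean + perts

# Toy usage: 3 storage states, 20 ensemble members, one TWS-like observation.
rng = np.random.default_rng(0)
ensemble = rng.normal(size=(3, 20))
updated = ensrf_update(ensemble, obs=0.5, obs_var=0.1, h=np.array([1.0, 1.0, 1.0]))
```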
Procedia PDF Downloads 288
419 Geospatial Techniques for Impact Assessment of Canal Rehabilitation Program in Sindh, Pakistan
Authors: Sumaira Zafar, Arjumand Zaidi, Muhammad Arslan Hafeez
Abstract:
The Indus Basin Irrigation System (IBIS) is the largest contiguous irrigation system of the world, comprising the Indus River and its tributaries, canals, distributaries, and watercourses. A big challenge faced by IBIS is transmission losses through seepage and leaks, which account for 41 percent of the total water derived from the river, and about 40 percent of that is lost through watercourses. Irrigation system rehabilitation programs in Pakistan are focused on improvement of the canal system at the watercourse level (tertiary channels). Under these irrigation system management programs, more than 22,800 watercourses out of 43,000 (12,900 kilometers) have been improved or lined. Evaluation of the improvement work is required at this stage to verify the success of the programs. In this paper, the emerging technologies of GIS and satellite remote sensing are used for impact assessment of watercourse rehabilitation work in Sindh. To evaluate the efficiency of the improved watercourses, a few parameters are selected, such as soil moisture along the watercourses, availability of water at the tail end, and changes in cultivable command areas. Improved watercourse details and maps were acquired from the National Program for Improvement of Watercourses (NPIW) and the Space and Upper Atmospheric Research Commission (SUPARCO). High-resolution satellite images from Google Earth for the years 2004 to 2013 are used for digitizing command areas. Temporal maps of cultivable command areas show a noticeable increase in the cultivable land served by improved watercourses. Field visits were conducted to validate the results. Interviews with farmers and landowners also reveal their overall satisfaction in terms of availability of water at the tail end and increased crop production.
Keywords: geospatial, impact assessment, watercourses, GIS, remote sensing, seepage, canal lining
Procedia PDF Downloads 349
418 Implementing of Indoor Air Quality Index in Hong Kong
Authors: Kwok W. Mui, Ling T. Wong, Tsz W. Tsang
Abstract:
Many Hong Kong people nowadays spend most of their lifetime working indoors. Since poor Indoor Air Quality (IAQ) potentially leads to discomfort, ill health, low productivity and even absenteeism in workplaces, a call for establishing statutory IAQ control to safeguard the well-being of residents is urgently required. Although policies, strategies, and guidelines for workplace IAQ diagnosis have been developed elsewhere and followed with remedial works, some of those workplaces or buildings are already at a relatively late stage of their IAQ problems when the investigation or remedial work starts. Screening for IAQ problems should be initiated, as it provides a minimum IAQ baseline requisite to the resolution of the problems. It is not practical to sample all air pollutants that exist. Nevertheless, as a statutory control, reliable, rapid screening is essential in accordance with a compromise strategy, which balances costs against the detection of key pollutants. This study investigates the feasibility of using an IAQ index as a parameter of IAQ control in Hong Kong. The index is a screening parameter to identify workplaces with unsatisfactory IAQ and highlights where a fully effective IAQ monitoring and assessment is needed for an intensive diagnosis. A number of representative common indoor pollutants have already been identified on the basis of extensive IAQ assessments. The selection of pollutants is a surrogate for IAQ control, which consists of dilution, mitigation, and emission control. The IAQ Index and assessment will look at high fractional quantities of these common measurement parameters. With the support of the existing comprehensive regional IAQ database and the IAQ Index established by the research team as the pre-assessment probability, and the unsatisfactory IAQ prevalence as the post-assessment probability from this study, thresholds for maintaining the current measures or for performing a further IAQ test or IAQ remedial measures will be proposed. With justified resources, the proposed IAQ Index and assessment protocol might be a useful tool for setting up a practical public IAQ surveillance programme and policy in Hong Kong.
Keywords: assessment, index, indoor air quality, surveillance programme
Procedia PDF Downloads 266
417 Thermal and Solar Performances of Adsorption Solar Refrigerating Machine
Authors: Nadia Allouache
Abstract:
Solar radiation is by far the world's largest and most abundant, clean and permanent energy source. The amount of solar radiation intercepted by the Earth is much higher than annual global energy use. The energy available from the sun is greater than about 5,200 times the global energy need in 2006. In recent years, many promising technologies have been developed to harness the sun's energy. These technologies help in environmental protection, economizing energy, and sustainable development, which are the major issues of the world in the 21st century. One of these important technologies is solar cooling systems, which make use of either absorption or adsorption technologies. Solar adsorption cooling systems are a good alternative since they operate with environmentally benign refrigerants that are natural, free from CFCs, and therefore have a zero ozone depleting potential (ODP). A numerical analysis of the thermal and solar performances of an adsorption solar refrigerating system using different adsorbent/adsorbate pairs, such as activated carbon AC35/methanol and activated carbon BPL/ammonia, is undertaken in this study. The modeling of the adsorption cooling machine requires the resolution of the equations describing the energy and mass transfer in the tubular adsorber, which is the most important component of the machine. The Wilson and Dubinin-Astakhov models of the solid-adsorbate equilibrium are used to calculate the adsorbed quantity. The porous medium is contained in the annular space and the adsorber is heated by solar energy. The effects of key parameters on the adsorbed quantity and on the thermal and solar performances are analysed and discussed. The performance of the system, which depends on the incident global irradiance during a whole day, also depends on the weather conditions: the condenser temperature and the evaporator temperature. The AC35/methanol pair is the best pair compared to BPL/ammonia in terms of system performances.
Keywords: activated carbon-methanol pair, activated carbon-ammoniac pair, adsorption, performance coefficients, numerical analysis, solar cooling system
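For reference, the Dubinin-Astakhov isotherm mentioned above is commonly written, in one form widely used for adsorption chiller modelling, as:

```latex
% Dubinin--Astakhov equilibrium uptake (one common form)
W = W_0 \exp\!\left[-D\left(T \ln\frac{P_s(T)}{P}\right)^{n}\right],
\qquad \text{equivalently} \qquad
W = W_0 \exp\!\left[-\left(\frac{A}{E}\right)^{n}\right],
\quad A = R\,T \ln\frac{P_s(T)}{P}
```

where W is the adsorbed quantity per unit mass of adsorbent, W_0 the limiting uptake, P_s(T) the saturation pressure at the adsorbent temperature T, and D (or the characteristic energy E) and n empirical constants fitted for each adsorbent/adsorbate pair; the exact coefficient values for AC35/methanol and BPL/ammonia are not given in the abstract.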
Procedia PDF Downloads 70
416 Design, Control and Implementation of 300Wp Single Phase Photovoltaic Micro Inverter for Village Nano Grid Application
Authors: Ramesh P., Aby Joseph
Abstract:
Micro inverters provide a module-embedded solution for harvesting energy from small-scale solar photovoltaic (PV) panels. In addition to higher modularity and reliability (25 years of life), the micro inverter has inherent advantages such as the avoidance of long DC cables, elimination of module mismatch losses, minimization of the partial shading effect, and improved safety and flexibility in installations. Due to the above-stated benefits, renewable energy technology with the solar photovoltaic (PV) micro inverter is becoming more widespread in village nano grid applications, ensuring grid independence for rural communities and areas without access to electricity. While the primary objective of this paper is to discuss the problems related to rural electrification, this concept can also be extended to urban installations with grid connectivity. This work presents a comprehensive analysis of the power circuit design, control methodologies and prototyping of a 300Wₚ single-phase PV micro inverter. This paper investigates two different topologies for PV micro inverters, based on the one hand on a single-stage flyback/forward PV micro inverter configuration and on the other hand on a double-stage configuration comprising a DC-DC converter and an H-bridge DC-AC inverter. This work covers power decoupling techniques that reduce the input filter capacitor size needed to buffer the double-line (100 Hz) ripple energy and eliminate the use of electrolytic capacitors. The double-line oscillation propagated back to the PV module will affect the Maximum Power Point Tracking (MPPT) performance, and the grid current will also be distorted. To mitigate this issue, an independent MPPT control algorithm is developed in this work to reject the propagation of this double-line ripple oscillation to the PV side, improving the MPPT performance, and to the grid side, improving the current quality. Here, the power hardware topology accepts a wide input voltage variation and consists of suitably rated MOSFET switches, galvanically isolated gate drivers, high-frequency magnetics and film capacitors with a long lifespan. The digital controller hardware platform, with a built-in external peripheral interface, is developed using the floating-point microcontroller TMS320F2806x from Texas Instruments. The firmware governing the operation of the PV micro inverter is written in the C language and was developed using the Code Composer Studio Integrated Development Environment (IDE). In this work, the prototype hardware for the single-phase photovoltaic micro inverter with the double-stage configuration was developed, and a comparative analysis between the above-mentioned configurations with experimental results will be presented.
Keywords: double line oscillation, micro inverter, MPPT, nano grid, power decoupling
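The abstract does not detail the independent MPPT algorithm itself, so as a neutral point of reference the sketch below shows a conventional perturb-and-observe tracker of the kind such designs are usually benchmarked against; the step size, sampling scheme and variable names are assumptions.

```python
def perturb_and_observe(v_pv, i_pv, state, step=0.5):
    """One iteration of a basic perturb-and-observe MPPT loop.

    v_pv, i_pv : latest PV voltage (V) and current (A) samples
    state      : dict carrying the previous power, voltage and the reference
    Returns the updated PV voltage reference handed to the DC-DC stage.
    """
    p_pv = v_pv * i_pv
    dp = p_pv - state["p_prev"]
    dv = v_pv - state["v_prev"]

    # Keep perturbing in the same direction if power increased,
    # otherwise reverse the perturbation direction.
    if dp != 0:
        if (dp > 0) == (dv > 0):
            state["v_ref"] += step
        else:
            state["v_ref"] -= step

    state["p_prev"], state["v_prev"] = p_pv, v_pv
    return state["v_ref"]

# Example: one control step for a 300 Wp module operating near 31 V.
state = {"p_prev": 0.0, "v_prev": 0.0, "v_ref": 30.0}
v_ref = perturb_and_observe(v_pv=31.2, i_pv=8.9, state=state)
```

In a double-stage design the measured power would typically also be filtered so that the 100 Hz ripple does not masquerade as a genuine power change, which is the disturbance the independent MPPT algorithm in this work is designed to reject.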
Procedia PDF Downloads 133
415 Nanoscale Mapping of the Mechanical Modifications Occurring in the Brain Tumour Microenvironment by Atomic Force Microscopy: The Case of the Highly Aggressive Glioblastoma and the Slowly Growing Meningioma
Authors: Gabriele Ciasca, Tanya E. Sassun, Eleonora Minelli, Manila Antonelli, Massimiliano Papi, Antonio Santoro, Felice Giangaspero, Roberto Delfini, Marco De Spirito
Abstract:
Glioblastoma multiforme (GBM) is an extremely aggressive brain tumor, characterized by a diffuse infiltration of neoplastic cells into the brain parenchyma. Although rarely considered, mechanical cues play a key role in the infiltration process, which is extensively mediated by the tumor microenvironment stiffness and, more generally, by the occurrence of aberrant interactions between neoplastic cells and the extracellular matrix (ECM). Here we provide a nano-mechanical characterization of the viscoelastic response of human GBM tissues by indentation-type atomic force microscopy. High-resolution elasticity maps show a large difference between the biomechanics of GBM tissues and the healthy peritumoral regions, opening possibilities to optimize the tumor resection area. Moreover, we unveil the nanomechanical signature of necrotic regions and anomalous vasculature, which are two major hallmarks useful for glioma staging. Currently, the morphological grading of GBM relies mainly on histopathological findings that make extensive use of qualitative parameters. Our findings have the potential to positively impact the development of novel quantitative methods to assess the tumor grade, which can be used in combination with conventional histopathological examinations. In order to provide a more in-depth description of the role of mechanical cues in tumor progression, we compared the nano-mechanical fingerprint of GBM tissues with that of grade-I (WHO) meningioma, a benign lesion characterized by a completely different growth pathway with respect to GBM, which in turn hints at a completely different role of the biomechanical interactions.
Keywords: AFM, nano-mechanics, nanomedicine, brain tumors, glioblastoma
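For context (an assumption about methodology, since the abstract does not state the contact model or tip geometry used), apparent elastic moduli in indentation-type AFM maps of soft tissue are commonly obtained by fitting each force-indentation curve to a contact model such as the Hertz relation for a spherical tip of radius R:

```latex
% Hertz contact model for a spherical indenter (a common choice in tissue AFM)
F = \frac{4}{3}\,\frac{E}{1-\nu^{2}}\,\sqrt{R}\;\delta^{3/2}
```

where F is the measured force, \delta the indentation depth, \nu the Poisson ratio (often taken close to 0.5 for soft, nearly incompressible tissue) and E the apparent Young's modulus mapped across the section.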
Procedia PDF Downloads 339
414 A Numerical Model for Simulation of Blood Flow in Vascular Networks
Authors: Houman Tamaddon, Mehrdad Behnia, Masud Behnia
Abstract:
An accurate study of blood flow requires an accurate vascular pattern and the geometrical properties of the organ of interest. Due to the complexity of vascular networks and poor accessibility in vivo, it is challenging to reconstruct the entire vasculature of any organ experimentally. The objective of this study is to introduce an innovative approach for the reconstruction of a full vascular tree from available morphometric data. Our method consists of implementing morphometric data on those parts of the vascular tree that are smaller than the resolution of medical imaging methods. This technique reconstructs the entire arterial tree down to the capillaries. Vessels greater than 2 mm are obtained from direct volume and surface analysis using contrast-enhanced computed tomography (CT). Vessels smaller than 2 mm are reconstructed from available morphometric and distensibility data and rearranged by applying Murray's law. Implementing morphometric data to reconstruct the branching pattern and applying Murray's law at every vessel bifurcation simultaneously lead to an accurate vascular tree reconstruction. The reconstruction algorithm generates the full arterial tree topography down to the first capillary bifurcation. The geometry of each order of the vascular tree is generated separately to minimize the construction and simulation time. The node-to-node connectivity along with the diameter and length of every vessel segment is established, and order numbers, according to the diameter-defined Strahler system, are assigned. During the simulation, we used the averaged flow rate for each order to predict the pressure drop, and once the pressure drop is predicted, the flow rate is corrected to match the computed pressure drop for each vessel. The final results for 3 cardiac cycles are presented and compared to the clinical data.
Keywords: blood flow, morphometric data, vascular tree, Strahler ordering system
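Murray's law, applied at every bifurcation as described above, links the parent radius r_0 to the daughter radii r_1 and r_2 through a cube relation:

```latex
% Murray's law at a bifurcation (minimum-work principle for laminar flow)
r_{0}^{3} = r_{1}^{3} + r_{2}^{3}
```

Under the same minimum-work argument, the flow carried by a vessel scales as the cube of its radius, so the relation is consistent with conservation of flow, Q_0 = Q_1 + Q_2, at the junction.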
Procedia PDF Downloads 271
413 Modelling of Exothermic Reactions during Carbon Fibre Manufacturing and Coupling to Surrounding Airflow
Authors: Musa Akdere, Gunnar Seide, Thomas Gries
Abstract:
Carbon fibres are fibrous materials with a carbon atom content of more than 90%. They combine excellent mechanical properties with a very low density. Thus, carbon fibre reinforced plastics (CFRP) are very often used in lightweight design and construction. The precursor material is usually polyacrylonitrile (PAN) based and wet-spun. During the production of carbon fibre, the precursor has to be stabilized thermally to withstand the high temperatures of up to 1500 °C which occur during carbonization. Even though carbon fibre has been used since the late 1970s in aerospace applications, there is still no general method available to find the optimal production parameters, and the trial-and-error approach is most often the only resort. To gain a much better insight into the process, the chemical reactions during stabilization in particular have to be analyzed. Therefore, a model of the chemical reactions (cyclization, dehydration, and oxidation) based on the research of Dunham and Edie has been developed. With the presented model, it is possible to perform a complete simulation of the fibre undergoing all zones of stabilization. The fibre bundle is modeled as several circular fibres with a layer of air in between. Two thermal mechanisms are considered to be the most important: the exothermic reactions inside the fibre and the convective heat transfer between the fibre and the air. The exothermic reactions inside the fibres are modeled as a heat source. Differential scanning calorimetry measurements have been performed to estimate the heat released by the reactions. To shorten the required simulation time, the number of fibres is decreased using similitude theory. Experiments were conducted to validate the simulation results for the fibre temperature during stabilization. The experiments for the validation were conducted in a pilot-scale stabilization oven. To measure the fibre bundle temperature, a new measuring method was developed. The comparison of the results shows that the developed simulation model gives good approximations for the temperature profile of the fibre bundle during the stabilization process.
Keywords: carbon fibre, coupled simulation, exothermic reactions, fibre-air-interface
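The two thermal mechanisms named above can be written, for a single filament treated as a lumped thermal mass, as the energy balance below; this is a simplified sketch of the coupling rather than the full reaction-resolved model of the paper, and the symbols are generic.

```latex
% Lumped energy balance for one filament during stabilization
\rho\, c_p\, V \,\frac{dT_f}{dt}
  \;=\; \dot{q}_{\mathrm{exo}}(T_f,\alpha)\, V
  \;-\; h\, A_s \left(T_f - T_{\mathrm{air}}\right)
```

Here \dot{q}_{\mathrm{exo}} is the volumetric heat release of the cyclization, dehydration and oxidation reactions at conversion \alpha (estimated from the DSC measurements), h the convective heat-transfer coefficient at the fibre-air interface, and A_s and V the filament surface area and volume.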
Procedia PDF Downloads 272
412 A Geophysical Study for Delineating the Subsurface Minerals at El Qusier Area, Central Eastern Desert, Egypt
Authors: Ahmed Khalil, Elhamy Tarabees, Svetlana Kovacikova
Abstract:
The Red Sea Mountains have been famous for their ore deposits since ancient times. Petrographic analysis and previous potential field surveys have also indicated large unexplored accumulations of ore minerals in the area. Therefore, the main goal of the presented study is to contribute to the discovery of hitherto unknown ore mineral deposits in the Red Sea region. To achieve this goal, we used two geophysical techniques: a land magnetic survey and magnetotelluric data. A high-resolution land magnetic survey was acquired using two proton magnetometers, one instrument used as a base station for the diurnal correction and the other used to measure the magnetic field across the study area. Two hundred and eighty land magnetic stations were measured over a mesh-like area with a 500 m spacing interval. The necessary reductions concerning daily variation, regional gradient and time of observation were applied. Then, the total intensity anomaly map was constructed and reduced to the magnetic pole (RTP). The magnetic interpretation was carried out using the analytical signal, and regional-residual separation was carried out using the power spectrum. The tilt derivative (TDR) technique was also applied to delineate the structures and hidden anomalies. Data analysis was performed using trend analysis and Euler deconvolution. The results indicate that magnetic contacts are not the dominant geological feature of the study area. The magnetotelluric survey consisted of two profiles with a total of 8 broadband measurement points, each with a recording duration of about 24 hours, crossing Wadi Um Gheig approximately 50 km south of El Quseir. The collected data have been inverted to an electrical resistivity model using the modular 3D inversion code ModEM. The model revealed a non-conductive body in its central part, probably corresponding to a dolerite dyke, with which possible ore mineralization could be related.
Keywords: magnetic survey, magnetotelluric, mineralization, 3D modeling
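The tilt derivative used above is the standard ratio of the vertical gradient to the total horizontal gradient of the field, reproduced here for reference; its zero contour tends to lie approximately over source edges, which is what makes it useful for delineating hidden structures.

```latex
% Tilt derivative (TDR) of the total magnetic intensity T
\mathrm{TDR} = \arctan\!\left(
  \frac{\partial T/\partial z}
       {\sqrt{(\partial T/\partial x)^{2} + (\partial T/\partial y)^{2}}}\right)
```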
Procedia PDF Downloads 23
411 Imaginal and in Vivo Exposure Blended with Emdr: Becoming Unstuck, an Integrated Inpatient Treatment for Post-Traumatic Stress Disorder
Authors: Merrylord Harb-Azar
Abstract:
Traditionally, PTSD treatment has involved trauma-focused cognitive behaviour therapy (TF CBT) to consolidate traumatic memories. A piloted integrated treatment of TF CBT and the eight-phase eye movement desensitisation and reprocessing therapy (EMDR) aims to hasten the rate at which memory is consolidated and to enhance cognitive functioning in patients with PTSD. Patients spend a considerable amount of time in treatment managing traumas experienced firsthand, or through aversive details, ranging from war, assaults, accidents, abuse, hostage-related events, and riots to natural disasters. The time spent in treatment or as an inpatient affects overall quality of life, relationships, cognitive functioning, and overall sense of identity. EMDR is being offered twice a week in conjunction with standard prolonged exposure as an inpatient treatment in a private hospital. Prolonged exposure for up to 5 hours per day elicits the affect response required for EMDR sessions in the afternoon to unlock unprocessed memories and facilitate consolidation in the amygdala and hippocampus. Results indicate faster consolidation of memories, a reduction in symptoms in a shorter period of time, and a reduction in admission time, which enhances quality of life and relationships and improves cognition. The Impact of Event Scale (IES) results demonstrate a significant reduction in symptoms, and the Trauma Symptom Inventory (TSI) and Posttraumatic Stress Disorder Checklist (PCL) demonstrate large effect sizes to date. An integrated treatment approach for PTSD achieves a faster resolution of memories, improves cognition, and reduces the amount of time spent in therapy.
Keywords: EMDR enhances cognitive functioning, faster consolidation of trauma memory, integrated treatment of TF CBT and EMDR, reduction in inpatient admission time
Procedia PDF Downloads 143
410 Motif Search-Aided Screening of the Pseudomonas syringae pv. Maculicola Genome for Genes Encoding Tertiary Alcohol Ester Hydrolases
Authors: M. L. Mangena, N. Mokoena, K. Rashamuse, M. G. Tlou
Abstract:
Tertiary alcohol ester (TAE) hydrolases are a group of esterases (EC 3.1.1.-) that catalyze the kinetic resolution of TAEs, and as a result they are sought after for the production of optically pure tertiary alcohols (TAs), which are useful as building blocks for a number of biologically active compounds. What sets these enzymes apart is the presence of a GGG(A)X-motif in the active site, which appears to be the main reason behind their activity towards the sterically demanding TAEs. The genome of Pseudomonas syringae pv. maculicola (Psm) comprises a multitude of genes that encode esterases. We therefore hypothesize that some of these genes encode TAE hydrolases. In this study, Psm was screened for TAE hydrolase activity using the linalyl acetate (LA) plate assay, and a positive reaction was observed. As a result, the genome of Psm was screened for esterases with a GGG(A)X-motif using the motif search tool, and two potential TAE hydrolase genes (PsmEST1 and 2, 1100 and 1000 bp, respectively) were identified. PsmEST1 was amplified by PCR and the gene sequenced for confirmation. Analysis of the sequence data with the SignalP 4.1 server revealed that the protein comprises a signal peptide (22 amino acid residues) on the N-terminus. Primers specific for the gene encoding the mature protein (without the signal peptide) were designed such that they contain NdeI and XhoI restriction sites for directional cloning of the PCR products into pET28a. The gene was expressed in E. coli JM109 (DE3) and the clones were screened for TAE hydrolase activity using the LA plate assay. A positive clone was selected, overexpressed, and the protein purified using nickel affinity chromatography. The activity of the esterase towards LA was confirmed using thin layer chromatography.
Keywords: hydrolases, tertiary alcohol esters, tertiary alcohols, screening, Pseudomonas syringae pv. maculicola genome, esterase activity, linalyl acetate
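Purely as an illustration of the motif screen described above, the snippet below runs a regular-expression search over candidate protein sequences; the reading of GGG(A)X as Gly-Gly followed by Gly or Ala and then any residue, and the toy sequences themselves, are assumptions for demonstration only and not data from the study.

```python
import re

# GGG(A)X motif: Gly-Gly, then Gly or Ala, then any amino acid residue.
motif = re.compile(r"GG[GA].")

# Hypothetical esterase sequence fragments (single-letter amino acid codes).
candidates = {
    "est_A": "MKLAVLGGGAWSTQP",   # contains a GGG(A)X-type stretch -> hit
    "est_B": "MKTLLVGDSLTAGYP",   # no GGG(A)X motif -> miss
}

for name, seq in candidates.items():
    hit = motif.search(seq)
    if hit:
        print(f"{name}: motif {hit.group()} at residue {hit.start() + 1}")
    else:
        print(f"{name}: no GGG(A)X motif found")
```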
Procedia PDF Downloads 354
409 Generation of Ultra-Broadband Supercontinuum Ultrashort Laser Pulses with High Energy
Authors: Walid Tawfik
Abstract:
The interaction of intense short nano- and picosecond laser pulses with plasma leads to a variety of important applications, including time-resolved laser-induced breakdown spectroscopy (LIBS), soft X-ray lasers, and laser-driven accelerators. The progress in generating femtosecond down to sub-10 fs optical pulses has provided scientists with an essential tool for many ultrafast phenomena, such as femtochemistry, high-field physics, and high harmonic generation (HHG). The advent of high-energy laser pulses with durations of a few optical cycles has provided scientists with very high electric fields and produces coherent, intense UV to NIR radiation with high energy, which allows for the investigation of ultrafast molecular dynamics with femtosecond resolution. In this work, we experimentally achieved the generation of two-octave-wide supercontinuum ultrafast pulses extending from the ultraviolet at 3.5 eV to the near-infrared at 1.3 eV in a neon-filled capillary fiber. These pulses are created by nonlinear self-phase modulation (SPM) in neon as the nonlinear medium. The measurements of the generated pulses were performed using spectral phase interferometry for direct electric-field reconstruction. A full characterization of the output pulses was carried out, including the pulse width, the beam profile, and the spectral bandwidth. Under optimized conditions, the reconstructed pulse intensity autocorrelation function was examined for the shortest possible pulse duration, achieving transform-limited pulses with energies up to 600 µJ. Furthermore, the effect of varying the neon pressure on the pulse width was studied. The nonlinear SPM was found to increase with the neon pressure. The obtained results may provide an opportunity to monitor and control ultrafast transient interactions in femtosecond chemistry.
Keywords: femtosecond laser, ultrafast, supercontinuum, ultra-broadband
Procedia PDF Downloads 201
408 Shape Management Method of Large Structure Based on Octree Space Partitioning
Authors: Gichun Cha, Changgil Lee, Seunghee Park
Abstract:
The objective of this study is to construct a shape management method contributing to the safety of large structures. In Korea, research on shape management is lacking because the technology has only recently been attempted. Terrestrial Laser Scanning (TLS) is used for measurements of large structures. TLS provides an efficient way to actively acquire accurate point clouds of object surfaces or environments. The point clouds provide a basis for rapid modeling in industrial automation, architecture, construction, and maintenance of civil infrastructures. TLS produces a huge amount of point cloud data. Registration, extraction and visualization of the data require the processing of a massive amount of scan data. The octree can be applied to the shape management of large structures because the scan data are reduced in size while the data attributes are maintained. Octree space partitioning generates voxels of 3D space, and each voxel is recursively subdivided into eight sub-voxels. The point cloud of the scan data was converted to voxels and sampled. The experimental site is located at Sungkyunkwan University. The scanned structure is a steel-frame bridge. The TLS used is a Leica ScanStation C10/C5. The scan data were condensed by 92%, and the octree model was constructed with a resolution of 2 millimeters. This study presents octree space partitioning for handling point clouds. It forms a basis for the shape management of large structures such as double-deck tunnels, buildings and bridges. The research is expected to improve the efficiency of structural health monitoring and maintenance. "This work is financially supported by 'U-City Master and Doctor Course Grant Program' and the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIP) (NRF-2015R1D1A1A01059291)."
Keywords: 3D scan data, octree space partitioning, shape management, structural health monitoring, terrestrial laser scanning
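A compact sketch of the recursive subdivision and voxel sampling idea described above; the random point cloud, the maximum depth and the centroid-per-voxel rule are assumptions for illustration, and a production TLS pipeline would rely on a dedicated point-cloud library with out-of-core storage.

```python
import numpy as np

def octree_downsample(points, origin, size, max_depth, depth=0):
    """Recursively subdivide a cubic voxel and keep one representative
    point (the centroid) per occupied leaf voxel."""
    if len(points) == 0:
        return []
    if depth == max_depth or len(points) == 1:
        return [points.mean(axis=0)]              # leaf: collapse to centroid

    half = size / 2.0
    centre = origin + half
    # Assign every point to one of the eight child octants.
    bits = (points >= centre).astype(int)         # (n, 3) of 0/1 per axis
    codes = bits[:, 0] * 4 + bits[:, 1] * 2 + bits[:, 2]

    representatives = []
    for code in range(8):
        child_points = points[codes == code]
        child_origin = origin + half * np.array(
            [(code >> 2) & 1, (code >> 1) & 1, code & 1])
        representatives += octree_downsample(
            child_points, child_origin, half, max_depth, depth + 1)
    return representatives

# Toy scan: 10,000 random points in a 1 m cube reduced by an octree of depth 5.
cloud = np.random.default_rng(1).random((10_000, 3))
sampled = octree_downsample(cloud, origin=np.zeros(3), size=1.0, max_depth=5)
print(f"{len(cloud)} points condensed to {len(sampled)} voxel centroids")
```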
Procedia PDF Downloads 296
407 Low-Cost, Portable Optical Sensor with Regression Algorithm Models for Accurate Monitoring of Nitrites in Environments
Authors: David X. Dong, Qingming Zhang, Meng Lu
Abstract:
Nitrites enter waterways as runoff from croplands and are discharged from many industrial sites. Excessive nitrite inputs to water bodies lead to eutrophication. On-site rapid detection of nitrite is of increasing interest for managing fertilizer application and monitoring water source quality. Existing methods for detecting nitrites use spectrophotometry, ion chromatography, electrochemical sensors, ion-selective electrodes, chemiluminescence, and colorimetric methods. However, these methods either suffer from high cost or provide low measurement accuracy due to their poor selectivity to nitrites. Therefore, it is desirable to develop an accurate and economical method to monitor nitrites in environments. We report a low-cost optical sensor, in conjunction with a machine learning (ML) approach, to enable high-accuracy detection of nitrites in water sources. The sensor works under the principle of measuring molecular absorptions of nitrites at three narrowband wavelengths (295 nm, 310 nm, and 357 nm) in the ultraviolet (UV) region. These wavelengths are chosen because they have relatively high sensitivity to nitrites; low-cost light-emitting diodes (LEDs) and photodetectors are also available at these wavelengths. A regression model is built, trained, and utilized to minimize the cross-sensitivities of these wavelengths to interfering species, thus achieving precise and reliable measurements in the presence of various interference ions. The measured absorbance data are input to the trained model, which provides a nitrite concentration prediction for the sample. The sensor is built with i) a miniature quartz cuvette as the test cell that contains the liquid sample under test, ii) three low-cost UV LEDs placed on one side of the cell as light sources, each LED providing narrowband light, and iii) a photodetector with a built-in amplifier and an analog-to-digital converter placed on the other side of the test cell to measure the power of the transmitted light. This simple optical design allows measuring the absorbance data of the sample at the three wavelengths. To train the regression model, absorbances of nitrite ions and their combinations with various interference ions are first obtained at the three UV wavelengths using a conventional spectrophotometer. Then, the spectrophotometric data are used as inputs to different regression algorithm models for training and evaluating high-accuracy nitrite concentration prediction. Our experimental results show that the proposed approach enables rapid nitrite detection within several seconds. The sensor hardware costs about one hundred dollars, which is much cheaper than a commercial spectrophotometer. The ML algorithm helps to reduce the average relative errors to below 3.5% over a concentration range from 0.1 ppm to 100 ppm of nitrites. The sensor has been validated by measuring nitrites at three sites in Ames, Iowa, USA. This work demonstrates an economical and effective approach to the rapid, reagent-free determination of nitrites with high accuracy. The integration of the low-cost optical sensor and ML data processing can find a wide range of applications in environmental monitoring and management.
Keywords: optical sensor, regression model, nitrites, water quality
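The regression step can be pictured with the short scikit-learn sketch below; the sensitivity coefficients, the noise level and the choice of ordinary least squares are invented for illustration, whereas the study trains and compares several regression algorithms on spectrophotometric calibration data.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Synthetic calibration set: absorbances at 295, 310 and 357 nm vs nitrite (ppm).
conc = rng.uniform(0.1, 100.0, size=200)          # nitrite concentration, ppm
sensitivities = np.array([0.012, 0.009, 0.004])   # invented per-wavelength slopes
absorbance = np.outer(conc, sensitivities) + rng.normal(0.0, 0.002, (200, 3))

X_train, X_test, y_train, y_test = train_test_split(
    absorbance, conc, test_size=0.25, random_state=0)

model = LinearRegression().fit(X_train, y_train)  # train the regression model
predicted = model.predict(X_test)                 # predict nitrite for new samples

relative_error = np.abs(predicted - y_test) / y_test
print(f"mean relative error: {100 * relative_error.mean():.2f}%")
```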
Procedia PDF Downloads 70
406 The Urgency of Berth Deepening at the Port of Durban
Authors: Rowen Naicker, Dhiren Allopi
Abstract:
One of the major problems the Port of Durban is experiencing is addressing shallow spots aggravated by the megaships that berth there. In recent years, the vessels that call at the Port have increased in size, which calls for much deeper draughts. For this reason, these larger vessels can only berth at high tide to avoid the risk of running aground. In addition, the ships cannot sail in fully laden, which is not feasible for ship owners. Further, during berthing, material is displaced from the seabed, which results in shallow spots developing. The permitted draught (under-keel allowance) for the Durban Container Terminal (DCT) is currently 12.2 m. Transnet National Ports Authority (TNPA) is currently investing in a dredging fleet worth almost two billion rand. One of the highlights of this investment is the building of a grab hopper dredger that would be dedicated to the Port by 2017. TNPA is trying various techniques to address the reduction of draughts by implementing maintenance dredging projects, but is this sufficient? The ideal resolution would be the deepening and widening of the berths. Plans for this project are in place, but the implementation is a matter of urgency. The intention of this project is to accommodate three big vessels rather than two, which in turn will improve the turnaround time in the port. Berthing will then no longer depend on high tide to avoid ships running aground. The aim of this paper is to show that the deepening and widening of the berths at the Port of Durban must be implemented as a matter of urgency. If the plan to deepen and widen the berths at DCT is delayed, it will mean a loss of business for the South African economy. If larger vessels cannot be accommodated in the Port of Durban, they will bypass the busiest container handling facility in the Southern Hemisphere. Shipping companies are compelled to use larger ships as opposed to smaller vessels to lower port and fuel costs. A delay in the expansion of DCT could also result in an escalation of costs.
Keywords: DCT, deepening, berth, port
Procedia PDF Downloads 398
405 Aeromagnetic Data Interpretation and Source Body Evaluation Using Standard Euler Deconvolution Technique in Obudu Area, Southeastern Nigeria
Authors: Chidiebere C. Agoha, Chukwuebuka N. Onwubuariri, Collins U. Amasike, Tochukwu I. Mgbeojedo, Joy O. Njoku, Lawson J. Osaki, Ifeyinwa J. Ofoh, Francis B. Akiang, Dominic N. Anuforo
Abstract:
In order to interpret the airborne magnetic data and evaluate the approximate location, depth, and geometry of the magnetic sources within the Obudu area using the standard Euler deconvolution method, very high-resolution aeromagnetic data over the area were acquired, processed digitally and analyzed using Oasis Montaj 8.5 software. Data analysis and enhancement techniques, including reduction to the equator, horizontal derivatives, first and second vertical derivatives, upward continuation and regional-residual separation, were carried out for the purpose of detailed data interpretation. Standard Euler deconvolution for structural indices of 0, 1, 2, and 3 was also carried out, and the respective maps were obtained using the Euler deconvolution algorithm. Results show that the total magnetic intensity ranges from -122.9 nT to 147.0 nT, the regional intensity varies between -106.9 nT and 137.0 nT, and the residual intensity ranges between -51.5 nT and 44.9 nT, clearly indicating the masking effect of deep-seated structures over surface and shallow subsurface magnetic materials. Results also indicate that the positive residual anomalies have an NE-SW orientation, which coincides with the trend of the major geologic structures in the area. Euler deconvolution for all the considered structural indices gives depths to magnetic sources ranging from the surface to more than 2000 m. Interpretation of the various structural indices revealed the locations and depths of the source bodies and the existence of geologic models including sills, dykes, pipes, and spherical structures. The area is characterized by intrusive and very shallow basement materials and represents an excellent prospect for solid mineral exploration and development.
Keywords: Euler deconvolution, horizontal derivative, Obudu, structural indices
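The standard Euler deconvolution used here solves, window by window, the homogeneity equation below for the source coordinates (x_0, y_0, z_0) and the background field B, with the structural index N set to 0, 1, 2 or 3 (conventionally contacts, dykes/sills, pipes and spheres for magnetic data):

```latex
% Euler homogeneity equation solved in moving windows for (x_0, y_0, z_0, B)
(x - x_0)\frac{\partial T}{\partial x}
+ (y - y_0)\frac{\partial T}{\partial y}
+ (z - z_0)\frac{\partial T}{\partial z}
= N\,(B - T)
```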
Procedia PDF Downloads 79
404 Multidisciplinary Approach for a Tsunami Reconstruction Plan in Coquimbo, Chile
Authors: Ileen Van den Berg, Reinier J. Daals, Chris E. M. Heuberger, Sven P. Hildering, Bob E. Van Maris, Carla M. Smulders, Rafael Aránguiz
Abstract:
Chile is located along the subduction zone of the Nazca plate beneath the South American plate, where large earthquakes and tsunamis have taken place throughout history. The last significant earthquake (Mw 8.2) occurred in September 2015 and generated a destructive tsunami, which mainly affected the city of Coquimbo (71.33°W, 29.96°S). The inundation area consisted of a beach, a damaged seawall, a damaged railway, a wetland and an old neighborhood; therefore, local authorities started a reconstruction process immediately after the event. Moreover, a seismic gap has been identified in the same area, and another large event could take place in the near future. The present work proposes an integrated tsunami reconstruction plan for the city of Coquimbo that considers several variables such as safety, nature and recreation, neighborhood welfare, visual obstruction, infrastructure, construction process, and durability and maintenance. Possible future tsunami scenarios are simulated by means of the Non-hydrostatic Evolution of Ocean WAVEs (NEOWAVE) model with 5 nested grids, the finest having a resolution of ~10 m. Based on the scores from a multi-criteria analysis, the costs of the alternatives and a preference for a multifunctional solution, the alternative that includes an elevated coastal road with floodgates to reduce tsunami overtopping and control the return flow of a tsunami was selected as the best solution. It was also observed that the wetlands are significantly restored to their former configuration; moreover, the dynamic behavior of the wetlands is stimulated. The numerical simulation showed that the new coastal protection decreases damage and the probability of loss of life by delaying the tsunami arrival time. In addition, new evacuation routes and a smaller inundation zone in the city increase safety for the area.
Keywords: tsunami, Coquimbo, Chile, reconstruction, numerical simulation
Procedia PDF Downloads 239
403 A General Form of Characteristics Method Applied on Minimum Length Nozzles Design
Authors: Merouane Salhi, Mohamed Roudane, Abdelkader Kirad
Abstract:
In this work, we present a new form of the method of characteristics, which is a technique for solving partial differential equations. Typically, it applies to first-order equations; the aim of this method is to reduce a partial differential equation to a family of ordinary differential equations along which the solution can be integrated from some initial data. The method is developed here under real gas theory because, when the thermal and caloric imperfections of a gas increase, the specific heats and their ratio no longer remain constant and start to vary with the gas parameters. The gas no longer remains perfect: its equation of state changes and it becomes a real gas. The presented characteristic equations remain valid whatever the area or field of study. Here, the developed Prandtl-Meyer function is inserted into the mathematical system to obtain a new model when the effect of stagnation pressure is taken into account. In this case, the effects of molecular size and intermolecular attraction forces intervene to correct the state equation, the thermodynamic parameters and the value of the Prandtl-Meyer function. With the assumption that Berthelot's state equation accounts for molecular size and intermolecular force effects, expressions are developed for analyzing the supersonic flow of a thermally and calorically imperfect gas. The supersonic parameters depend directly on the stagnation parameters of the combustion chamber. The resolution has been carried out by the finite difference method using a predictor-corrector algorithm. As a result, the developed mathematical model is used to design 2D minimum-length nozzles under the effect of the stagnation parameters of the fluid flow. A comparison of nozzle shapes and characteristics for air is made between the perfect gas (PG) and high-temperature models on the one hand and our results from real gas theory on the other.
Keywords: numerical methods, nozzles design, real gas, stagnation parameters, supersonic expansion, the characteristics method
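As a baseline for the real-gas development described above, the calorically perfect-gas Prandtl-Meyer function, which the presented model corrects for thermal and caloric imperfections and for the stagnation-pressure effect, reads:

```latex
% Prandtl--Meyer function for a calorically perfect gas (baseline form)
\nu(M) = \sqrt{\frac{\gamma + 1}{\gamma - 1}}\,
         \arctan\!\sqrt{\frac{\gamma - 1}{\gamma + 1}\,\bigl(M^{2} - 1\bigr)}
         \;-\; \arctan\!\sqrt{M^{2} - 1}
```

where M is the Mach number and \gamma the (constant) specific heat ratio; in the real-gas model \gamma is no longer constant, which is precisely why the corrected function has to be built into the characteristics system.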
Procedia PDF Downloads 240
402 To Include or Not to Include: Resolving Ethical Concerns over the 20% High Quality Cassava Flour Inclusion in Wheat Flour Policy in Nigeria
Authors: Popoola I. Olayinka, Alamu E. Oladeji, B. Maziya-Dixon
Abstract:
Cassava, an indigenous crop grown locally by subsistence farmers in Nigeria, has the potential to bring economic benefits to the country. The consumption of bread and other confectioneries has been on the rise due to lifestyle changes of Nigerian consumers. However, wheat, the major ingredient for bread and confectionery production, does not thrive well under the Nigerian climate, hence the huge spending on wheat importation. To reduce spending on wheat importation, the Federal Government of Nigeria intends to pass into law the mandatory inclusion of 20% high-quality cassava flour (HQCF) in wheat flour. While the proposed policy may reduce post-harvest losses of cassava and also increase food security and domestic agricultural productivity, there are downsides to the policy, which include a reduction in nutritional quality and the low sensory appeal of cassava-wheat bread, the reluctance of flour millers to use HQCF, and technology and processing challenges, among others. The policy thus presents an ethical dilemma which must be resolved for its successful implementation. While the inclusion of HQCF in wheat flour for bread and confectionery is a topic that may have been well addressed, resolving the ethical dilemma resulting from the act has not received much attention. This paper attempts to resolve this dilemma using various approaches in food ethics (cost-benefit, utilitarian, deontological and deliberative). The cost-benefit approach did not provide an adequate resolution of the dilemma, as not all the costs and benefits of the policy could be stated in quantitative terms. The utilitarian approach suggests that the policy delivers the greatest good to the greatest number, while the deontological approach suggests that the act (inclusion of HQCF in wheat flour) is right; hence the policy is not utterly wrong. The deliberative approach suggests a win-win situation through deliberation with the parties involved.
Keywords: HQCF, ethical dilemma, food security, composite flour, cassava bread
Procedia PDF Downloads 405
401 Preparation and Characterization of Dendrimer-Encapsulated Ytterbium Nanoparticles to Produce a New Nano-Radio Pharmaceutical
Authors: Aghaei Amirkhizi Navideh, Sadjadi Soodeh Sadat, Moghaddam Banaem Leila, Athari Allaf Mitra, Johari Daha Fariba
Abstract:
Dendrimers are good candidates for preparing metal nanoparticles because they can act as structurally and chemically well-defined templates and robust stabilizers. Polyamidoamine (PAMAM) dendrimer-based multifunctional cancer therapeutic conjugates have been designed and synthesized in the pharmaceutical industry. In addition, encapsulated nanoparticle surfaces are accessible to substrates so that catalytic reactions can be carried out. For the preparation of the dendrimer-metal nanocomposite, a dendrimer solution containing an average of 55 Yb3+ ions per dendrimer was prepared. Prior to reduction, the pH of this solution was adjusted to 7.5 using NaOH. NaBH4 was used to reduce the dendrimer-encapsulated Yb3+ to the zerovalent metal. The pH of the resulting solution was then adjusted to 3, using HClO4, to decompose excess BH4-. The UV-Vis absorption spectra of the mixture were recorded to confirm the formation of the Yb-G5-NH2 complex. High-resolution transmission electron microscopy (HRTEM) and size distribution results provide additional information about the dendrimer-metal nanocomposite shape, size, and size distribution of the particles. The resulting mixture was irradiated in the Tehran Research Reactor for 2 h at a neutron flux of 3×10¹¹ n/cm²·s, and the specific activity was 7 MBq. Radiochemical, chemical and radionuclidic quality control tests were carried out, and gamma spectroscopy, high-performance liquid chromatography (HPLC) and thin-layer chromatography (TLC) data were recorded. The injection of the resulting solution into solid tumors in mice shows that it could reduce the tumor size. The studies on solid tumors and nanocomposites show that the dendrimer-encapsulated ytterbium radiopharmaceutical could be introduced as a new therapeutic for the treatment of solid tumors.
Keywords: nano-radio pharmaceutical, ytterbium, PAMAM, dendrimers
Procedia PDF Downloads 500
400 Sustainable Development of Adsorption Solar Cooling Machine
Authors: N. Allouache, W. Elgahri, A. Gahfif, M. Belmedani
Abstract:
Solar radiation is by far the world's largest and most abundant clean and permanent energy source. The amount of solar radiation intercepted by the Earth is much higher than annual global energy use; the energy available from the sun is about 5,200 times greater than the world's needs in 2006. In recent years, many promising technologies have been developed to harness the sun's energy. These technologies help in environmental protection, energy saving, and sustainable development, which are the major issues of the world in the 21st century. One of these important technologies is solar cooling systems, which make use of either absorption or adsorption technologies. Solar adsorption cooling systems are a good alternative since they operate with environmentally benign refrigerants that are natural, free from CFCs, and therefore have zero ozone-depleting potential (ODP). A numerical analysis of the thermal and solar performances of an adsorption solar refrigerating system using different adsorbent/adsorbate pairs, such as activated carbon AC35/methanol and activated carbon BPL/ammonia, is undertaken in this study. The modeling of the adsorption cooling machine requires solving the equations describing energy and mass transfer in the tubular adsorber, which is the most important component of the machine. The Wilson and Dubinin-Astakhov models of the solid-adsorbate equilibrium are used to calculate the adsorbed quantity. The porous medium is contained in the annular space, and the adsorber is heated by solar energy. The effects of key parameters on the adsorbed quantity and on the thermal and solar performances are analysed and discussed. The performance of the system depends on the incident global irradiance over the whole day and on the weather conditions, through the condenser temperature and the evaporator temperature. The AC35/methanol pair is the best pair compared to BPL/ammonia in terms of system performance. Keywords: activated carbon-methanol pair, activated carbon-ammoniac pair, adsorption, performance coefficients, numerical analysis, solar cooling system
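For reference, the Dubinin-Astakhov isotherm mentioned above gives the adsorbed quantity as W = W0·exp(−(A/E)^n), with the adsorption potential A = R·T·ln(Psat/P). A minimal sketch follows; the numerical parameter values are illustrative placeholders, not the fitted AC35/methanol or BPL/ammonia coefficients used in the study.

```python
import math

R = 8.314  # universal gas constant, J/(mol.K)

def dubinin_astakhov(T, P, P_sat, W0, E, n):
    """Adsorbed quantity (kg adsorbate per kg adsorbent) from the D-A isotherm.
    A = R*T*ln(P_sat/P) is the adsorption potential."""
    A = R * T * math.log(P_sat / P)
    return W0 * math.exp(-(A / E) ** n)

# Illustrative parameters only (not fitted values from the study):
# W0: limiting adsorption capacity, E: characteristic energy, n: heterogeneity exponent.
w = dubinin_astakhov(T=300.0, P=5.0e3, P_sat=18.0e3, W0=0.33, E=6000.0, n=1.5)
print(f"Adsorbed quantity: {w:.3f} kg/kg")
```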
Procedia PDF Downloads 74
399 Correlation of Serum Apelin Level with Coronary Calcium Score in Patients with Suspected Coronary Artery Disease
Authors: M. Zeitoun, K. Abdallah, M. Rashwan
Abstract:
Introduction: A growing body of evidence indicates that apelin, a relatively recent member of the adipokine family, has a potential anti-atherogenic effect. An association between low serum apelin levels and coronary artery disease (CAD) was previously reported; however, the relationship between apelin and the atherosclerotic burden was unclear. Objectives: Our aim was to explore the correlation of serum apelin level with the coronary calcium score (CCS) as a quantitative marker of coronary atherosclerosis. Methods: This observational cross-sectional study enrolled 100 consecutive subjects referred for cardiac multi-detector computed tomography (MDCT) for assessment of CAD (mean age 54 ± 9.7 years, 51 males and 49 females). Clinical parameters, glycemic and lipid profile, high-sensitivity CRP (hsCRP), homeostasis model assessment of insulin resistance (HOMA-IR), serum creatinine, and complete blood count were assessed. Serum apelin levels were determined using a commercially available enzyme immunoassay (EIA) kit. High-resolution non-contrast CT images were acquired on a 64-row MDCT scanner, and CCS was calculated using the Agatston scoring method. Results: Forty-three percent of the studied subjects had positive coronary artery calcification (CAC). The mean CCS was 79 ± 196.5 Agatston units. Subjects with detectable CAC had significantly higher fasting plasma glucose, HbA1c, and WBC count than subjects without detectable CAC (p < 0.05). Most importantly, subjects with detectable CAC had significantly lower serum apelin levels than subjects without CAC (1.3 ± 0.4 ng/ml vs. 2.8 ± 0.6 ng/ml, p < 0.001). In addition, there was a statistically significant inverse correlation between serum apelin levels and CCS (r = 0.591, p < 0.001); on multivariate analysis, this correlation was found to be independent of traditional cardiovascular risk factors and hsCRP. Conclusion: To the best of our knowledge, this is the first report of an independent association between apelin and CCS in patients with suspected CAD. Apelin emerges as a possible novel biomarker for CAD, but this result remains to be proved prospectively. Keywords: HbA1c, apelin, adipokines, coronary calcium score (CCS), coronary artery disease (CAD)
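For readers unfamiliar with the Agatston scoring method referenced above: each calcified lesion (attenuation ≥ 130 HU) contributes its area multiplied by a density weight determined by its peak attenuation, and the per-lesion scores are summed. A minimal sketch with hypothetical lesion values follows; it is a generic illustration of the method, not the study's scoring software.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Lesion:
    area_mm2: float   # calcified plaque area on one slice
    peak_hu: float    # maximum attenuation within the lesion (Hounsfield units)

def density_weight(peak_hu: float) -> int:
    """Standard Agatston density weighting factor."""
    if peak_hu < 130:
        return 0          # below the calcium threshold, not scored
    if peak_hu < 200:
        return 1
    if peak_hu < 300:
        return 2
    if peak_hu < 400:
        return 3
    return 4

def agatston_score(lesions: List[Lesion]) -> float:
    """Total coronary calcium score: sum of area x density weight over all lesions."""
    return sum(l.area_mm2 * density_weight(l.peak_hu) for l in lesions)

# Hypothetical lesions for illustration only: 4*2 + 2.5*4 + 1.2*1 = 19.2
print(agatston_score([Lesion(4.0, 250), Lesion(2.5, 410), Lesion(1.2, 150)]))
```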
Procedia PDF Downloads 340
398 A History of Taiwan’s Secret Nuclear Program
Authors: Hsiao-ting Lin
Abstract:
This paper analyzes the history of Taiwan’s secret program to develop nuclear weapons during the Cold War. In July 1971, US President Richard Nixon shocked the world when he announced that his national security adviser, Henry Kissinger, had made a secret trip to China and that he himself had accepted an invitation to travel to Beijing. This huge breakthrough in the US-PRC relationship was followed by Taipei’s loss of political legitimacy and international credibility as a result of its UN debacle in the fall of that year. Confronted with the Nixon White House’s opening to the PRC, leaders in Taiwan felt betrayed and abandoned, and they were obliged to take countermeasures for the sake of national interest and regime survival. Taipei’s endeavor to create an effective nuclear program, including the possible development of nuclear weapons capabilities, fully demonstrates the government’s resolution to pursue its own national policy, even if such a policy was guaranteed to undermine its relations with the United States. With hindsight, Taiwan’s attempt to develop its own nuclear weapons did not succeed in sabotaging the warming of US-PRC relations. Worse, it was forced to come to a full stop when, in early 1988, the US government pressured Taipei to close the related facilities and programs on the island. However, Taiwan’s abortive attempt to develop its nuclear capability did influence Washington’s and Beijing’s handling of their new relationship. A recognition developed of a common American and PRC interest in avoiding a nuclearized Taiwan; from this perspective, Beijing’s interests would best be served by allowing the island to remain under loose and relatively benign American influence. As for the top leaders of Taiwan, such a policy choice demonstrated how they perceived the shifting dynamics of international politics in the 1960s and 1970s and how they struggled to break free and pursue their own independent national policy within the rigid framework of the US-Taiwan alliance during the Cold War. Keywords: Taiwan, Richard Nixon, nuclear program, Chiang Kai-shek, Chiang Ching-kuo
Procedia PDF Downloads 129
397 A Carrier Phase High Precision Ranging Theory Based on Frequency Hopping
Authors: Jie Xu, Zengshan Tian, Ze Li
Abstract:
Previous indoor ranging or localization systems achieving high-accuracy time-of-flight (ToF) estimation relied on two key points. One is strict time and frequency synchronization between the transmitter and receiver to eliminate equipment asynchrony errors such as carrier frequency offset (CFO), but this is difficult to achieve in a practical communication system. The other is extending the total bandwidth of the communication, because the accuracy of ToF estimation improves with bandwidth: the larger the total bandwidth, the higher the accuracy of the ToF estimate. For example, ultra-wideband (UWB) technology is built on this principle, but high-precision ToF estimation is difficult to achieve in common WiFi or Bluetooth systems, whose bandwidth is lower than that of UWB. Therefore, it is meaningful to study how to achieve high-precision ranging with lower bandwidth when the transmitter and receiver are asynchronous. To tackle the above problems, we propose a two-way channel error elimination theory and a frequency hopping-based carrier phase ranging algorithm to achieve high-accuracy ranging under asynchronous conditions. The two-way channel error elimination theory uses the symmetry of the two-way channel to cancel the asynchronous phase error caused by the asynchronous transmitter and receiver, and we also study the effect of the two-way channel generation time difference on the phase according to the characteristics of different hardware devices. The frequency hopping-based carrier phase ranging algorithm uses frequency hopping to extend the equivalent bandwidth and incorporates a carrier phase ranging algorithm with multipath resolution to achieve a ranging accuracy, within the typical 80 MHz bandwidth of commercial WiFi, comparable to that of UWB at 400 MHz bandwidth. Finally, to verify the validity of the algorithm, we implemented this theory on a software radio platform; the experimental results show that the method proposed in this paper has a median ranging error of 5.4 cm at 5 m, 7 cm at 10 m, and 10.8 cm at 20 m for a total bandwidth of 80 MHz. Keywords: frequency hopping, phase error elimination, carrier phase, ranging
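The core idea, that hopping the carrier across an 80 MHz span lets the round-trip ToF be read off the slope of carrier phase versus frequency, can be illustrated with a short simulation. The sketch below assumes the asynchrony terms have already been cancelled by the two-way exchange and uses a simple least-squares fit on the unwrapped phase; the hop plan, noise level, and fitting step are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

C = 3e8  # speed of light, m/s

def simulate_round_trip_phase(freqs_hz, distance_m, phase_noise_std=0.05):
    """Wrapped round-trip carrier phase at each hopped frequency.
    Assumes transceiver-asynchrony terms have already been removed
    by the two-way exchange described in the abstract."""
    tau_rt = 2.0 * distance_m / C                              # round-trip ToF
    phase = -2.0 * np.pi * freqs_hz * tau_rt
    phase += np.random.normal(0.0, phase_noise_std, size=freqs_hz.shape)
    return np.angle(np.exp(1j * phase))                        # wrap to (-pi, pi]

def estimate_distance(freqs_hz, wrapped_phase):
    """Least-squares fit of unwrapped phase vs. frequency; slope = -2*pi*tau_rt."""
    phase = np.unwrap(wrapped_phase)
    slope, _ = np.polyfit(freqs_hz, phase, 1)
    tau_rt = -slope / (2.0 * np.pi)
    return tau_rt * C / 2.0

# 80 MHz total span covered by hopping, e.g. 2.402-2.482 GHz in 2 MHz steps.
freqs = 2.402e9 + np.arange(0, 80e6 + 1, 2e6)
wrapped = simulate_round_trip_phase(freqs, distance_m=5.0)
print(f"Estimated distance: {estimate_distance(freqs, wrapped):.2f} m")
```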
Procedia PDF Downloads 122
396 In-vitro Metabolic Fingerprinting Using Plasmonic Chips by Laser Desorption/Ionization Mass Spectrometry
Authors: Vadanasundari Vedarethinam, Kun Qian
Abstract:
Metabolic analysis is more distal than proteomics and genomics in clinical applications and requires rationally designed techniques, materials, and devices for clinical diagnosis. Conventional approaches such as spectroscopic techniques, biochemical analyzers, and electrochemical methods have been used for metabolic diagnosis. Currently, there are four major challenges: (I) time-consuming sample pretreatment; (II) difficulties in direct metabolic analysis of biosamples due to their complexity; (III) accurate detection of low-molecular-weight metabolites; and (IV) construction of diagnostic tools from material- and device-based platforms for real-case biomedical applications. The development of chips with nanomaterials is promising for addressing these critical issues. Mass spectrometry (MS) has displayed high sensitivity, accuracy, throughput, reproducibility, and resolution for molecular analysis. In particular, laser desorption/ionization mass spectrometry (LDI MS) combined with such devices affords desirable speed, with mass measurement in seconds, and high sensitivity at low cost toward large-scale use. We developed a plasmonic chip for clinical metabolic fingerprinting, acting as a hot carrier in LDI MS, as a series of chips with gold nanoshells on the surface, produced through controlled particle synthesis, dip-coating, and gold sputtering for mass production. We integrated the optimized chip with microarrays for laboratory automation and nanoscaled experiments, which afforded direct high-performance metabolic fingerprinting by LDI MS using 500 nL of serum, urine, cerebrospinal fluid (CSF), and exosomes. Further, we demonstrated on-chip direct in-vitro metabolic diagnosis of early-stage lung cancer patients using serum and exosomes without any pretreatment or purification. To the best of our knowledge, this work initiates a bionanotechnology-based platform for advanced metabolic analysis toward large-scale diagnostic use. Keywords: plasmonic chip, metabolic fingerprinting, LDI MS, in-vitro diagnostics
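A metabolic fingerprint of the kind described above is, at its simplest, a fixed-length vector derived from the LDI-MS spectrum of a sample. The sketch below shows one generic way to build such a vector by binning a peak list over m/z and comparing two samples by cosine similarity; the binning range, peak lists, and comparison metric are illustrative assumptions, not the authors' processing pipeline.

```python
import numpy as np

def bin_spectrum(mz, intensity, mz_min=50.0, mz_max=1000.0, bin_width=1.0):
    """Collapse a raw LDI-MS peak list into a fixed-length fingerprint vector
    by summing intensities in uniform m/z bins (a generic featurization)."""
    edges = np.arange(mz_min, mz_max + bin_width, bin_width)
    fingerprint, _ = np.histogram(mz, bins=edges, weights=intensity)
    total = fingerprint.sum()
    return fingerprint / total if total > 0 else fingerprint  # normalize to total ion count

# Toy example with hypothetical peak lists (m/z, intensity) for two samples.
rng = np.random.default_rng(0)
mz_a, int_a = rng.uniform(50, 1000, 200), rng.exponential(1.0, 200)
mz_b, int_b = rng.uniform(50, 1000, 200), rng.exponential(1.0, 200)
fp_a, fp_b = bin_spectrum(mz_a, int_a), bin_spectrum(mz_b, int_b)

# Cosine similarity between fingerprints, a simple way to compare two samples.
cos = fp_a @ fp_b / (np.linalg.norm(fp_a) * np.linalg.norm(fp_b) + 1e-12)
print(f"Fingerprint cosine similarity: {cos:.3f}")
```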
Procedia PDF Downloads 161