Search results for: mobility ratio
1439 Pilot-free Image Transmission System of Joint Source Channel Based on Multi-Level Semantic Information
Authors: Linyu Wang, Liguo Qiao, Jianhong Xiang, Hao Xu
Abstract:
In semantic communication, existing pilot-free joint source-channel coding (JSCC) wireless image transmission systems exhibit unstable transmission performance and cannot effectively capture the global and positional information of images. In this paper, a pilot-free image transmission system based on joint source-channel coding with multi-level semantic information (Multi-level JSCC) is proposed. The transmitter of the system is composed of two networks: a feature extraction network extracts the high-level semantic features of the image, compressing the transmitted information and improving bandwidth utilization, while a feature retention network preserves low-level semantic features and image details to improve communication quality. The receiver is likewise composed of two networks. The received high-level semantic features are passed through a feature enhancement network and fused with the low-level semantic features in the same dimension; the image dimensions are then restored through a feature recovery network, so that image location information is used effectively for reconstruction. This paper verifies that the proposed Multi-level JSCC algorithm can effectively transmit and recover image information over both AWGN and Rayleigh fading channels, improving peak signal-to-noise ratio (PSNR) by 1-2 dB over other algorithms under the same simulation conditions.
Keywords: deep learning, JSCC, pilot-free picture transmission, multilevel semantic information, robustness
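The reported gains are in PSNR, the standard reconstruction-quality metric for image transmission. As a reference point (not part of the paper's system), PSNR can be computed from the mean squared error like so; the pixel values below are invented for illustration:

```python
import math

def psnr(original, reconstructed, max_val=255.0):
    """Peak signal-to-noise ratio (dB) between two equal-length pixel sequences."""
    mse = sum((o - r) ** 2 for o, r in zip(original, reconstructed)) / len(original)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

# Every pixel off by 1 grey level (MSE = 1) gives about 48.13 dB for 8-bit images.
print(round(psnr([255, 128, 64, 0], [254, 127, 63, 1]), 2))
```

For scale, a 1-2 dB PSNR improvement corresponds to roughly a 20-37% reduction in mean squared error.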
Procedia PDF Downloads 120
1438 Improving Biodegradation Behavior of Fabricated WE43 Magnesium Alloy by High-Temperature Oxidation
Authors: Jinge Liu, Shuyuan Min, Bingchuan Liu, Bangzhao Yin, Bo Peng, Peng Wen, Yun Tian
Abstract:
WE43 magnesium alloy can be additively manufactured via laser powder bed fusion (LPBF) for biodegradable applications, but the as-built WE43 exhibits an excessively rapid corrosion rate. High-temperature oxidation (HTO) was performed on the as-built WE43 to improve its biodegradation behavior. A sandwich structure, comprising an oxide layer at the surface, a transition layer in the middle, and the matrix beneath, formed under the influence of the oxidation reaction and the diffusion of rare-earth (RE) atoms when the alloy was heated at 525 °C for 8 hours. The oxide layer consisted of Y₂O₃ and Nd₂O₃ oxides with a thickness of 2-3 μm. The transition layer was composed of α-Mg and Y₂O₃ with a thickness of 60-70 μm, with Mg₂₄RE₅ also observed in addition to α-Mg and Y₂O₃. The oxide and transition layers provided an effective passivation effect. The as-built WE43 lost 40% of its weight after a three-day in vitro immersion test and finally broke into debris after seven days of immersion, whereas the HTO samples retained their structural integrity and lost only 6.88% of their weight after 28 days of immersion. The corrosion rate of the HTO samples was thus significantly controlled, which at the same time improved the biocompatibility of the as-built WE43. The HTO samples also showed better osteogenic capability according to ALP activity. Moreover, the as-built WE43 performed poorly in cell adhesion and hemolysis tests owing to its excessively rapid corrosion rate, whereas cells adhered well to the HTO samples and their hemolysis ratio was only 1.59%.
Keywords: laser powder bed fusion, biodegradable metal, high temperature oxidation, biodegradation behavior, WE43
Procedia PDF Downloads 105
1437 Offline High Voltage Diagnostic Test Findings on 15 MVA Generator of Basochhu Hydropower Plant
Authors: Suprit Pradhan, Tshering Yangzom
Abstract:
Even with the availability of modern online insulation diagnostic technologies such as partial discharge monitoring, measurements like dissipation factor (tan δ), DC high-voltage insulation current, polarization index (PI), and insulation resistance are still widely used as diagnostic tools to assess the condition of stator insulation in hydropower plants. To evaluate the condition of the stator winding insulation in one of the generators operated since 1999, diagnostic tests were performed on the stator bars of the 15 MVA generator of Basochhu Hydropower Plant. This paper presents a diagnostic study of data gathered from measurements performed in 2015 and 2016 as part of regular maintenance, since no proper ageing data had been maintained since commissioning. Measurement results of dissipation factor, DC high-potential tests, and polarization index are discussed with regard to their effectiveness in assessing the ageing condition of the stator insulation. After a brief review of the theoretical background, the strengths of each diagnostic method in detecting symptoms of insulation deterioration are identified. The results observed at Basochhu Hydropower Plant lead to the conclusion that polarization index and DC high-voltage insulation current measurements are best suited for detecting humidity and contamination problems, while dissipation factor measurement is a robust indicator of long-term ageing caused by oxidative degradation.
Keywords: dissipation factor (tan δ), polarization index (PI), DC high voltage insulation current, insulation resistance (IR), tan delta tip-up, dielectric absorption ratio
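For readers unfamiliar with the indices named above: the polarization index and the dielectric absorption ratio are simple quotients of insulation-resistance readings taken at fixed times (as standardised in IEEE Std 43 practice). A minimal sketch, with illustrative readings rather than plant data:

```python
def polarization_index(ir_1min, ir_10min):
    """PI = insulation resistance at 10 min / resistance at 1 min.
    Values of roughly 2 or above are conventionally taken as acceptable."""
    return ir_10min / ir_1min

def dielectric_absorption_ratio(ir_30s, ir_60s):
    """DAR = insulation resistance at 60 s / resistance at 30 s."""
    return ir_60s / ir_30s

# Example readings in megaohms (illustrative, not Basochhu data):
print(polarization_index(400.0, 1200.0))          # 3.0
print(dielectric_absorption_ratio(350.0, 500.0))
```

A rising PI over successive maintenance cycles generally indicates healthy absorption behaviour, while a PI near 1 suggests a wet or contaminated winding.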
Procedia PDF Downloads 312
1436 High School Gain Analytics from National Assessment Program – Literacy and Numeracy and Australian Tertiary Admission Rank Linkage
Authors: Andrew Laming, John Hattie, Mark Wilson
Abstract:
Nine Queensland independent high schools provided de-identified student-matched ATAR and NAPLAN data for all 1217 ATAR graduates since 2020 who also sat NAPLAN at the school. Graduating cohorts from the nine schools contained a mean of 100 ATAR graduates with previous NAPLAN data from their school. Excluded were vocational students (mean = 27) and any ATAR graduates without NAPLAN data (mean = 20). Relative to Index of Community Socio-Educational Advantage (ICSEA) predictions, all schools had larger than predicted proportions of students graduating with ATARs. A further 173 students (14%) did not release their ATARs to their school, requiring this data to be inferred by schools. Gain was established by first converting each student's strongest NAPLAN domain to a statewide percentile, then subtracting this result from the final ATAR. The resulting 'percentile shift' was corrected for plausible ATAR participation at each NAPLAN level. The strongest NAPLAN domain had the highest correlation with ATAR (R² = 0.58). RESULTS: School mean NAPLAN scores fitted ICSEA closely (R² = 0.97). Schools achieved a mean cohort gain of two ATAR rankings, but only 66% of students gained. This ranged from 46% of top-NAPLAN-decile students gaining, rising to 75% achieving gains outside the top decile. The 54% of top-decile students whose ATAR fell short of prediction lost a mean of 4.0 percentiles (6.2 percentiles before correction for regression to the mean). 71% of students in smaller schools gained, compared to 63% in larger schools. NAPLAN variability in each of the 13 ICSEA-1100 cohorts was 17%, with both intra-school and inter-school variation of these values extremely low (0.3% to 1.8%). Mean ATAR change between years in each school was just 1.1 ATAR ranks. This suggests consecutive school cohorts and ICSEA-similar schools share very similar distributions and outcomes over time.
Quantile analysis of the NAPLAN/ATAR relationship revealed heteroscedasticity, but splines offered little additional benefit over simple linear regression. The NAPLAN/ATAR R² was 0.33. DISCUSSION: Standardised data like NAPLAN and ATAR offer educators a simple, no-cost progression metric for analysing performance in conjunction with their internal test results. Change is expressed in percentiles, or ATAR shift per student, which is intuitive to laypeople. Findings may also reduce ATAR/vocational stream mismatch, reveal the proportions of cohorts meeting or falling short of expectation, and demonstrate by how much. Finally, 'crashed' ATARs well below expectation are revealed, which schools can reasonably work to minimise. The percentile-shift method is neither a value-add nor a growth percentile. In the absence of exit NAPLAN testing, this metric cannot discriminate academic gain from legitimate ATAR-maximising strategies. But by controlling for ICSEA, ATAR proportion variation, and student mobility, it uncovers progression-to-ATAR metrics that are not currently publicly available. However achieved, ATAR maximisation is a sought-after private good. So long as standardised nationwide data is available, this analysis offers useful analytics for educators and reasonable predictive value when counselling subsequent cohorts about their ATAR prospects.
Keywords: NAPLAN, ATAR, analytics, measurement, gain, performance, data, percentile, value-added, high school, numeracy, reading comprehension, variability, regression to the mean
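The percentile-shift metric described above can be sketched in a few lines. The study's correction for plausible ATAR participation at each NAPLAN level is school-system-specific and omitted here, and the cohort figures below are invented for illustration:

```python
def percentile_shift(strongest_naplan_percentile, atar):
    """Gain = final ATAR (a percentile-like rank) minus the statewide
    percentile of the student's strongest NAPLAN domain.
    Omits the study's correction for ATAR participation rates."""
    return atar - strongest_naplan_percentile

# (strongest NAPLAN percentile, final ATAR) pairs -- illustrative, not study data
cohort = [(90.0, 95.5), (70.0, 68.0), (50.0, 61.0), (95.0, 91.0)]
shifts = [percentile_shift(n, a) for n, a in cohort]
share_gaining = sum(s > 0 for s in shifts) / len(shifts)  # fraction of students who gained
print(shifts, share_gaining)
```

The cohort mean of these shifts corresponds to the paper's "mean cohort gain", and negative outliers flag the 'crashed' ATARs the discussion mentions.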
Procedia PDF Downloads 68
1435 Continuous FAQ Updating for Service Incident Ticket Resolution
Authors: Kohtaroh Miyamoto
Abstract:
As enterprise computing becomes more and more complex, the costs and technical challenges of IT system maintenance and support are increasing rapidly. One popular approach to managing IT system maintenance is to prepare and use an FAQ (Frequently Asked Questions) system to manage and reuse systems knowledge. Such an FAQ system can help reduce the resolution time for each service incident ticket. However, a major problem is that, over time, the knowledge in such FAQs tends to become outdated. Much of the knowledge captured in the FAQ requires periodic updates, in response to new insights or new trends in the problems addressed, to maintain its usefulness for problem resolution. These updates require a systematic approach to defining the exact portion of the FAQ to change and its content. Therefore, we are working on a novel method to hierarchically structure the FAQ and automate the updates of its structure and content. We use the structured information and the unstructured text information, with their timelines, from the service incident tickets. We cluster the tickets by structured category information, by keywords, and by keyword modifiers for the unstructured text information. We also calculate an urgency score based on trends, resolution times, and priorities. We carefully studied the tickets of one of our projects over a 2.5-year period. After the first 6 months, we started to create FAQs and confirmed they improved the resolution times. We continued observing over the next 2 years to assess the ongoing effectiveness of our method for automatic FAQ updates. We improved the ratio of tickets covered by the FAQ from 32.3% to 68.9% during this time, and the average reduction in ticket resolution time was between 31.6% and 43.9%. Subjective analysis showed that more than 75% of respondents reported the FAQ system was useful in reducing ticket resolution times.
Keywords: FAQ system, resolution time, service incident tickets, IT system maintenance
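The abstract mentions an urgency score based on trends, resolution times, and priorities without giving the formula. A hypothetical weighted-sum version, where the weights, the [0, 1] normalization, and the function name are all our assumptions rather than the paper's method, might look like:

```python
def urgency_score(trend, resolution_time, priority, weights=(0.5, 0.3, 0.2)):
    """Hypothetical urgency score for a cluster of incident tickets.
    All three inputs are assumed to be normalized to [0, 1]; a higher score
    means the cluster's FAQ entry should be reviewed and updated sooner."""
    w_trend, w_time, w_priority = weights
    return w_trend * trend + w_time * resolution_time + w_priority * priority

# A cluster with rising ticket volume, slow resolutions, and medium priority:
score = urgency_score(trend=0.8, resolution_time=0.9, priority=0.5)
```

Ranking clusters by such a score would let the FAQ maintainer update the highest-scoring entries first, matching the paper's goal of targeted periodic updates.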
Procedia PDF Downloads 338
1434 Convertible Lease, Risky Debt and Financial Structure with Growth Option
Authors: Ons Triki, Fathi Abid
Abstract:
This paper has two basic objectives. The first is to design a model for a contingent convertible lease contract that can ensure the financial stability of a company and recover the losses of the parties to the lease in the event of default. The second is to compare the convertible lease contract with other financing policies with respect to the inefficiencies resulting from the debt-overhang problem and asset substitution. From this perspective, this paper highlights the interaction between investment and financing policies in a dynamic model with existing assets and a growth option, where the investment cost is financed by a contingent convertible lease and equity. We explore the impact of the contingent convertible lease on the capital structure, and we check the reliability and effectiveness of the convertible lease contract as a means of financing. Findings show that a convertible lease contract with a sufficiently high conversion ratio produces less severe inefficiencies from risk-shifting and debt overhang than those entailed by risky debt and pure-equity financing. The problem of underinvestment pointed out by Mauer and Ott (2000) and the problem of overinvestment mentioned by Hackbarth and Mauer (2012) may be reduced under contingent convertible lease financing. Our findings predict that firm value under contingent convertible lease financing increases globally with asset volatility instead of decreasing with business risk. The study reveals that convertible leasing contracts can represent a reliable solution that protects the lessee and allows the counterparties to the lease to recover quickly upon default.
Keywords: contingent convertible lease, growth option, debt overhang, risk-shifting, capital structure
Procedia PDF Downloads 72
1433 Improvement of Ventilation and Thermal Comfort Using the Atrium Design for Traditional Folk Houses-Fujian Earthen Building
Authors: Ying-Ming Su
Abstract:
Fujian earthen building (tulou), known as a classic of ecological architecture, was inscribed on the UNESCO World Heritage List in 2008. Its design strategy can be applied to modern architectural planning and design. This study chose two earthen buildings in Fujian with different atrium forms (round atrium: Er-Yi Building; double round atrium: Zhen-Chen Building) to compare their ventilation effects. We adopted field measurements and computational fluid dynamics (CFD) simulations of temperature, humidity, and the wind environment to identify the relationship between the external environment and atrium comfort, and to examine the influence of the atrium H/W (height/width) ratio. Results indicate that the atrium convection effect guides natural wind into each surrounding space and maintains indoor comfort. The smaller the H/W ratio of an atrium, the greater the wind speed generated within the street valley; moreover, this wind speed is very close to the reference wind speed. The field measurements verify that the H/W value has a great influence on solar radiation heat and sunshine shadows. The ventilation efficiency ranks: Er-Yi Building (H/W = 0.2778) > Zhen-Chen Building (H/W = 0.3670). Comparing cases with the same shape but different H/W, airflow revolves in the atriums through the different-sized patios and is brought into each interior space. The atrium settings meet the need for building ventilation and can adjust the humidity and temperature within the buildings, creating a good ventilation effect.
Keywords: traditional folk houses, atrium, tulou, ventilation, building microclimate
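The study's ranking by aspect ratio can be reproduced with trivial arithmetic; a sketch using the two reported H/W values (the helper function and its sample inputs are illustrative):

```python
def aspect_ratio(height_m, width_m):
    """Atrium H/W; the study found smaller ratios give stronger ventilation."""
    return height_m / width_m

# Reported ratios from the paper; a lower H/W ranks as better ventilated.
ratios = {"Er-Yi Building": 0.2778, "Zhen-Chen Building": 0.3670}
best = min(ratios, key=ratios.get)
print(best)  # Er-Yi Building
```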
Procedia PDF Downloads 474
1432 Two Component Source Apportionment Based on Absorption and Size Distribution Measurement
Authors: Tibor Ajtai, Noémi Utry, Máté Pintér, Gábor Szabó, Zoltán Bozóki
Abstract:
Beyond its climate- and health-related impacts, ambient light-absorbing carbonaceous particulate matter (LAC) has recently attracted great scientific interest with regard to its regulation. Recent studies have experimentally demonstrated that LAC is dominated by traffic and wood-burning aerosol, particularly under wintertime urban conditions when photochemical and biological activities are negligible. Several methods have been introduced to quantitatively apportion the aerosol fractions emitted by wood burning and traffic, but most of them require costly and time-consuming off-line chemical analysis. As opposed to chemical features, the microphysical properties of airborne particles, such as optical absorption and size distribution, can easily be measured on-line with high accuracy and sensitivity, especially under highly polluted urban conditions. Recently, a method has been proposed for the apportionment of wood-burning and traffic aerosols based on the spectral dependence of their absorption, quantified by the Aerosol Angström Exponent (AAE). In this approach, the absorption coefficient is deduced from a transmission measurement on a filter-accumulated aerosol sample, and the conversion factor between the measured optical absorption and the corresponding mass concentration (the specific absorption cross section) is determined by on-site chemical analysis. Recently developed multi-wavelength photoacoustic instruments provide a novel in-situ approach to the reliable, quantitative characterization of carbonaceous particulate matter, and therefore also open up new possibilities for source apportionment through the measurement of light absorption.
In this study, we demonstrate an in-situ spectral characterization method for the ambient carbon fraction based on light absorption and size distribution measurements using our state-of-the-art multi-wavelength photoacoustic instrument (4λ-PAS) and a Scanning Mobility Particle Sizer (SMPS). The carbonaceous-particulate source apportionment study was performed on ambient particulate matter in the city center of Szeged, Hungary, where the dominance of traffic and wood-burning aerosol had been experimentally demonstrated earlier. The proposed model is based on the parallel, in-situ measurement of optical absorption and size distribution. AAEff and AAEwb were deduced from the measured data using the defined correlation between the AOC(1064 nm)/AOC(266 nm) and N100/N20 ratios. σff(λ) and σwb(λ) were determined with the help of independently measured temporal mass concentrations in the PM1 mode. Furthermore, the proposed optical source apportionment assumes that the light-absorbing fraction of PM is exclusively related to traffic and wood burning. This assumption is indirectly confirmed here by the fact that the measured size distribution is composed of two unimodal size distributions identified with traffic and wood-burning aerosols. The method offers the possibility of replacing laborious chemical analysis with a simple in-situ measurement of aerosol size distribution data. The results of the proposed novel optical-absorption-based source apportionment method prove its applicability whenever measurements are performed at an urban site where traffic and wood burning are the dominant carbonaceous emission sources.
Keywords: absorption, size distribution, source apportionment, wood burning, traffic aerosol
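The two-component optical split underlying this kind of apportionment (often called the Aethalometer model) assumes each source's absorption scales as λ^(-AAE); with measurements at two wavelengths, the fossil-fuel and wood-burning parts follow from a 2×2 linear system. A sketch using the instrument's 266/1064 nm wavelength pair, with illustrative AAE values rather than the study's fitted ones:

```python
def two_component_split(b_short, b_long, lam_short, lam_long,
                        aae_ff=1.0, aae_wb=2.0):
    """Split total absorption measured at two wavelengths into fossil-fuel (ff)
    and wood-burning (wb) components, assuming b(lam) ~ lam**(-AAE) for each
    source. The AAE defaults are illustrative, not the study's values."""
    r_ff = (lam_short / lam_long) ** (-aae_ff)  # scales b_ff from long to short wavelength
    r_wb = (lam_short / lam_long) ** (-aae_wb)
    # b_short = r_ff * b_ff_long + r_wb * b_wb_long;  b_long = b_ff_long + b_wb_long
    b_wb_long = (b_short - r_ff * b_long) / (r_wb - r_ff)
    b_ff_long = b_long - b_wb_long
    return b_ff_long, b_wb_long

# Synthetic check: ff = 1.0 and wb = 0.5 at 1064 nm are recovered exactly.
print(two_component_split(12.0, 1.5, 266, 1064))  # (1.0, 0.5)
```

The split is well conditioned only when the two AAE values differ appreciably, which is why the wide 266/1064 nm spacing of the 4λ-PAS is advantageous.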
Procedia PDF Downloads 227
1431 Geospatial Techniques and VHR Imagery Use for Identification and Classification of Slums in Gujrat City, Pakistan
Authors: Muhammad Ameer Nawaz Akram
Abstract:
The 21st century has revealed that more individuals around the world now live in urban settlements than in rural zones. The evolution of numerous cities in emerging and newly developed countries is accompanied by the rise of slums. The precise definition of a slum varies from country to country, but the universal consensus is that slums are dilapidated settlements facing severe poverty that lack access to sanitation, water, electricity, decent living conditions, and land tenure. Slum settlements vary in unique patterns within and among countries and cities. The core objective of this study is the spatial identification and classification of slums in Gujrat city, Pakistan, from very-high-resolution GeoEye-1 (0.41 m) satellite imagery. Slums were first identified using GPS for sample site identification and ground-truthing; through this process, 425 slums were identified. Object-Oriented Analysis (OOA) was then applied to classify slums in the digital imagery. Spatial analysis software packages, e.g., ArcGIS 10.3, Erdas Imagine 9.3, and Envi 5.1, were used for processing the data and performing the analysis. Results show that OOA provides up to 90% accuracy for the identification of slums. The Jalal Cheema and Allah Ho colonies are severely affected by slum settlements, and the rate of criminal activity is also higher there than in other areas. Slums in urban areas are growing over time and will become a hazardous problem in the near future, so executive bodies need to make effective policies and move toward ameliorating the city.
Keywords: slums, GPS, satellite imagery, object oriented analysis, zonal change detection
Procedia PDF Downloads 134
1430 Synthesis, Characterization, and Application of Novel Trihexyltetradecyl Phosphonium Chloride for Extractive Desulfurization of Liquid Fuel
Authors: Swapnil A. Dharaskar, Kailas L. Wasewar, Mahesh N. Varma, Diwakar Z. Shende
Abstract:
Stringent environmental regulations in many countries for the production of ultra-low-sulfur petroleum fractions, intended to reduce sulfur emissions, have generated enormous interest in this area among the scientific community. The requirement of zero sulfur emissions raises the prominence of more advanced desulfurization techniques. Desulfurization by extraction is a promising approach with several advantages over conventional hydrodesulfurization. The present work deals with various new approaches to the desulfurization of ultra-clean gasoline, diesel, and other liquid fuels by extraction with ionic liquids. This paper presents experimental data on the extractive desulfurization of liquid fuel using trihexyltetradecylphosphonium chloride. FTIR, ¹H-NMR, and ¹³C-NMR analyses are discussed for the molecular confirmation of the synthesized ionic liquid, and conductivity, solubility, and viscosity analyses were also carried out. The effects of reaction time, reaction temperature, sulfur compounds, ultrasonication, and recycling of the ionic liquid without regeneration on the removal of dibenzothiophene from liquid fuel were investigated. In the extractive desulfurization process, the removal of dibenzothiophene from n-dodecane was 84.5% for a mass ratio of 1:1 in 30 min at 30 °C under mild reaction conditions. The phosphonium ionic liquid could be reused five times without a significant decrease in activity. The desulfurization of real fuels by multistage extraction was also examined. The data and results provided here offer significant insight into phosphonium-based ionic liquids as novel extractants for the extractive desulfurization of liquid fuels.
Keywords: ionic liquid, PPIL, desulfurization, liquid fuel, extraction
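The 84.5% figure is a standard removal efficiency. For reference, it is computed from the sulfur concentrations before and after extraction; the concentrations below are illustrative, as the paper's raw values are not quoted here:

```python
def removal_efficiency(s_initial_ppm, s_final_ppm):
    """Percentage of sulfur extracted from the model fuel."""
    return 100.0 * (s_initial_ppm - s_final_ppm) / s_initial_ppm

# e.g. a model fuel at 500 ppm dibenzothiophene reduced to 77.5 ppm
# corresponds to an 84.5% removal (illustrative values).
print(removal_efficiency(500.0, 77.5))
```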
Procedia PDF Downloads 609
1429 Production of New Hadron States in Effective Field Theory
Authors: Qi Wu, Dian-Yong Chen, Feng-Kun Guo, Gang Li
Abstract:
In the past decade, a growing number of new hadron states, dubbed XYZ states, have been observed in the heavy quarkonium mass regions. In this work, we present our study of the production of some of these new hadron states. In particular, we investigate the processes Υ(5S,6S) → Zb(10610)/Zb(10650)π, Bc → Zc(3900)/Zc(4020)π, and Λb → Pc(4312)/Pc(4440)/Pc(4457)K. (1) For the production of Zb(10610)/Zb(10650) from Υ(5S,6S) decay, two types of bottom-meson loops were discussed within a nonrelativistic effective field theory. We found that the loop contributions with all intermediate states being S-wave ground-state bottom mesons are negligible, while loops with one bottom meson being the broad B₀* or B₁' resonance could provide the dominant contributions to Υ(5S) → Zb⁽'⁾π. (2) For the production of Zc(3900)/Zc(4020) from Bc decay, the branching ratios of Bc⁺ → Zc(3900)⁺π⁰ and Bc⁺ → Zc(4020)⁺π⁰ are estimated to be of the order of 10⁻⁴ and 10⁻⁷, respectively, in an effective Lagrangian approach. The large production rate of Zc(3900) could provide an important source of the Zc(3900) production from the semi-exclusive decay of b-flavored hadrons reported by the D0 Collaboration, which can be tested by exclusive measurements at LHCb. (3) For the production of Pc(4312), Pc(4440), and Pc(4457) from Λb decay, the ratio of the branching fractions of Λb → PcK was predicted in a molecular scenario using an effective Lagrangian approach, and it depends only weakly on our model parameter. We also find that the ratios of the products of the branching fractions of Λb → PcK and Pc → J/ψp can be well interpreted in the molecular scenario. Moreover, the estimated branching fractions of Λb → PcK are of order 10⁻⁶, which could be tested by further measurements by the LHCb Collaboration.
Keywords: effective Lagrangian approach, hadron loops, molecular states, new hadron states
Procedia PDF Downloads 132
1428 Advanced Magnetic Resonance Imaging in Differentiation of Neurocysticercosis and Tuberculoma
Authors: Rajendra N. Ghosh, Paramjeet Singh, Niranjan Khandelwal, Sameer Vyas, Pratibha Singhi, Naveen Sankhyan
Abstract:
Background: Tuberculoma and neurocysticercosis (NCC) are the two most common intracranial infections in the developing world. They often look alike on neuroimaging, and in the absence of typical imaging features they cause significant diagnostic dilemmas. Differentiation is extremely important to avoid empirical exposure to antitubercular medications or nonspecific treatment that allows disease progression. Purpose: Better characterization and differentiation of CNS tuberculoma and NCC using morphological and multiple advanced functional MRI techniques. Material and Methods: Fifty untreated patients (20 tuberculoma and 30 NCC) were evaluated using conventional and advanced sequences: CISS, SWI, DWI, DTI, magnetization transfer (MT), T2 relaxometry (T2R), perfusion, and spectroscopy. rCBV, ADC, FA, T2R, and MTR values and metabolite ratios were calculated from the lesion and normal parenchyma. Diagnosis was confirmed by typical biochemical, histopathological, and imaging features. Results: CISS was the most useful sequence for scolex detection (90% on CISS vs 73% on routine sequences), and SWI also showed higher scolex detection ability. Mean values of ADC, FA, and T2R from the lesion core and rCBV from the lesion wall differed significantly between tuberculoma and NCC (P < 0.05). Mean values of rCBV, ADC, FA, and T2R for tuberculoma vs NCC were (3.36 vs 1.3), (1.09×10⁻³ vs 1.4×10⁻³), (0.13 vs 0.09), and (88.65 ms vs 272.3 ms), respectively. Tuberculomas showed a high lipid peak, more choline, and lower creatine, with a Cho/Cr ratio > 1. The T2R value was the most significant parameter for differentiation, and cut-off values for each significant parameter are proposed. Conclusion: Quantitative MRI in combination with conventional sequences can better characterize and differentiate similar-appearing tuberculoma and NCC, and may be incorporated into routine protocols, potentially avoiding brain biopsy and empirical therapy.
Keywords: advanced functional MRI, differentiation, neurocysticercosis, tuberculoma
Procedia PDF Downloads 567
1427 The Feasibility of Anaerobic Digestion at 45°C
Authors: Nuruol S. Mohd, Safia Ahmed, Rumana Riffat, Baoqiang Li
Abstract:
Anaerobic digestion at mesophilic and thermophilic temperatures has been widely studied and evaluated by numerous researchers, but little extensive research has been conducted on anaerobic digestion in the intermediate zone of 45°C, mainly due to the notion that limited microbial activity occurs within this zone. The objectives of this research were to evaluate the performance of anaerobic digestion at 45°C and its capability to produce Class A biosolids, in comparison with mesophilic and thermophilic systems operated at 35°C and 55°C, respectively, and to investigate possible inhibition factors affecting performance at this temperature. The 45°C systems were not able to achieve methane yield and effluent quality comparable to the mesophilic system, even though they produced biogas with about 62-67% methane. The 45°C digesters suffered from high acetate accumulation, but sufficient buffering capacity was observed, as the pH, alkalinity, and volatile fatty acid (VFA)-to-alkalinity ratio were within recommended values. The acetate accumulation observed at 45°C was presumably due to the high temperature, which produced a high hydrolysis rate and, consequently, a large amount of toxic salts that combined with the substrate, making it not readily available for consumption by methanogens. Although acetate accumulation contributed to a 52-71% reduction in the acetate degradation process, it could not be considered completely inhibitory. Additionally, no ammonia inhibition was observed at 45°C, and the digesters achieved a volatile solids (VS) reduction of 47.94 ± 4.17%. Pathogen counts were less than 1,000 MPN/g total solids, thus producing Class A biosolids.
Keywords: 45°C anaerobic digestion, acetate accumulation, class A biosolids, salt toxicity
Procedia PDF Downloads 304
1426 Toxic Ingredients Contained in Our Cosmetics
Authors: El Alia Boularas, H. Bekkar, H. Larachi, H. Rezk-kallah
Abstract:
Introduction: Although cosmetics are used every day, these products are not all innocuous and harmless, as they may contain ingredients responsible for allergic reactions and possibly for other health problems; environmental pollution should also be taken into account. It is therefore time to investigate what is 'hidden behind beauty'. Aims: 1. To investigate the prevalence, in cosmetics regularly used by Algerians, of 13 chemical ingredients that are objects of concern. 2. To profile the surveyed consumers and describe their opinions of cosmetics. Methods: The survey was carried out in 2013 over a period of 3 months among Algerian Internet users having an e-mail address or a Facebook account. The study investigated 13 chemical agents posing health and environmental problems, selected after analysis of recent studies published on the subject, the lists of national and international regulatory references on chemical hazards, and queries of the Skin Deep database maintained by the Environmental Working Group. Results: 300 people distributed across the Algerian territory participated in the survey, providing information about 731 cosmetics; 86% were aged 20 to 39 years, with a sex ratio of 0.27. 43% of the analyzed cosmetics contained at least one of the 13 toxic ingredients. The targeted ingredient most frequently reported was 'perfume', followed by parabens and PEG. 85% of the participants declared that cosmetics 'can contain toxic substances', 27% asserted that they regularly check the list of ingredients when buying cosmetics, and 61% said that they try to avoid the toxic ingredients, among whom 24% were most vigilant about the presence of parabens; 95% were in favour of strengthening the Algerian laws on cosmetics.
Conclusion: The results of the survey indicate a widespread presence of toxic chemical ingredients in the personal care products that Algerians use daily.
Keywords: Algerian consumers, cosmetics, survey, toxic ingredients
Procedia PDF Downloads 277
1425 A Statistical Analysis on the Comparison of First and Second Waves of COVID-19 and Importance of Early Actions in Public Health for Third Wave in India
Authors: Maitri Dave
Abstract:
Coronaviruses (CoV) are infectious viruses that have had a dangerous global impact, creating severe respiratory problems, damaging the lungs, and leaving more serious diseases behind among those affected. India reported its first case of COVID-19 in January 2020. The first wave of COVID-19 took place from April to September 2020. Soon after, a second peak was observed in March 2021, which proved more dangerous owing to a shortage of medical supplies: resource deficiencies arose globally, and specifically in India, where necessary life-saving equipment such as ventilators and oxygenators was insufficient to meet demand effectively. Recognizing this situation, India began its vaccination campaign in January 2021 and has successfully administered 25,46,71,259 doses of vaccine so far, covering only 15.5% of the total population, while only 3.6% of the population is fully vaccinated. India has authorized the British Oxford-AstraZeneca vaccine (Covishield), the Indian BBV152 vaccine (Covaxin), and the Russian Sputnik V vaccine for emergency use. In the present study, we collected state-wise data for both the first and second waves and analyzed them using MS Excel Version 2019 and SPSS Statistics Version 26. Following the trends, we predict the characteristics of the upcoming third wave of COVID-19 and recommend strategies, early actions, and measures that the public health system in India can take to combat the third wave more effectively.
Keywords: COVID-19, vaccination, Covishield, Coronavirus
Procedia PDF Downloads 216
1424 Effect of L-Dopa on Performance and Carcass Characteristics in Broiler Chickens
Authors: B. R. O. Omidiwura, A. F. Agboola, E. A. Iyayi
Abstract:
The pure form of L-Dopa is used in humans to enhance muscular development, promote fat breakdown, and suppress Parkinson's disease. However, the L-Dopa in mucuna seed, when present with other antinutritional factors, causes nutritional disorders in monogastric animals. Information on the utilisation of pure L-Dopa in monogastric animals is scarce. Therefore, the effect of L-Dopa on growth performance and carcass characteristics in broiler chickens was investigated. Two hundred and forty one-day-old chicks were allotted to six treatments, consisting of a positive control (PC) with standard energy (3100 Kcal/Kg) and a negative control (NC) with high energy (3500 Kcal/Kg). The remaining four diets were NC+0.1, NC+0.2, NC+0.3 and NC+0.4% L-Dopa, respectively. All treatments had 4 replicates in a completely randomized design. Body weight gain, final weight, feed intake, dressed weight and carcass characteristics were determined. Body weight gain and final weight of birds fed PC (1791.0 and 1830.0g), NC+0.1% L-Dopa (1827.7 and 1866.7g) and NC+0.2% L-Dopa (1871.9 and 1910.9g), together with the feed intake of PC (3231.5g), were better than those of other treatments. The dressed weights of birds fed NC+0.1% and NC+0.2% L-Dopa (1375.0g and 1357.1g, respectively) were similar to each other but better than those of other treatments. Likewise, the thigh (202.5g and 194.9g) and breast meat (413.8g and 410.8g) weights of birds fed NC+0.1% and NC+0.2% L-Dopa, respectively, were similar but better than those of birds fed other treatments. The drumstick of birds fed NC+0.1% L-Dopa (220.5g) was better than that of birds on other diets. Meat-to-bone ratio and relative organ weights were not affected across treatments. L-Dopa, at the levels tested, had no detrimental effect on broilers; rather, better bird performance and carcass characteristics were observed, especially at the 0.1% and 0.2% L-Dopa inclusion rates. Therefore, 0.2% inclusion is recommended in diets of broiler chickens for improved performance and carcass characteristics. Keywords: broilers, carcass characteristics, l-dopa, performance
Procedia PDF Downloads 309
1423 An Investigation into Why Liquefaction Charts Work: A Necessary Step toward Integrating the States of Art and Practice
Authors: Tarek Abdoun, Ricardo Dobry
Abstract:
This paper is a systematic effort to clarify why field liquefaction charts based on Seed and Idriss' Simplified Procedure work so well. This is a necessary step toward integrating the states of the art (SOA) and practice (SOP) for evaluating liquefaction and its effects. The SOA relies mostly on laboratory measurements and correlations with void ratio and relative density of the sand. The SOP is based on field measurements of penetration resistance and shear wave velocity coupled with empirical or semi-empirical correlations. This gap slows down further progress in both SOP and SOA. The paper accomplishes its objective through: a literature review of relevant aspects of the SOA, including factors influencing threshold shear strain and pore pressure buildup during cyclic strain-controlled tests; a discussion of factors influencing field penetration resistance and shear wave velocity; and a discussion of the meaning of the curves in the liquefaction charts separating liquefaction from no liquefaction, helped by recent full-scale and centrifuge results. It is concluded that the charts are curves of constant cyclic strain at the lower end (Vs1 < 160 m/s), with this strain being about 0.03 to 0.05% for earthquake magnitude Mw ≈ 7. It is also concluded, more speculatively, that the curves at the upper end probably correspond to a variable increasing cyclic strain and Ko, with this upper end controlled by overconsolidated and preshaken sands, and with the cyclic strains needed to cause liquefaction being as high as 0.1 to 0.3%. These conclusions are validated by application to case histories corresponding to Mw ≈ 7, mostly in the San Francisco Bay Area of California during the 1989 Loma Prieta earthquake. Keywords: permeability, lateral spreading, liquefaction, centrifuge modeling, shear wave velocity charts
Procedia PDF Downloads 297
1422 Cyclic Behaviour of Wide Beam-Column Joints with Shear Strength Ratios of 1.0 and 1.7
Authors: Roy Y. C. Huang, J. S. Kuang, Hamdolah Behnam
Abstract:
Beam-column connections play an important role in the reinforced concrete moment-resisting frame (RCMRF), which is one of the most commonly used structural systems around the world. The premature failure of such connections would severely limit the seismic performance and increase the vulnerability of the RCMRF. In past decades, researchers primarily focused on investigating the structural behaviour and failure mechanisms of conventional beam-column joints, in which the beam width is either smaller than or equal to the column width, while studies of wide beam-column joints were scarce. This paper presents the preliminary experimental results of two full-scale exterior wide beam-column connections, designed and detailed according to ACI 318-14 and ACI 352R-02, under reversed cyclic loading. The ratios of the design shear force to the nominal shear strength of these specimens are 1.0 and 1.7, respectively, so as to probe the differences between the joint shear strengths obtained experimentally and those predicted by design codes of practice. Flexural failure dominated in the specimen with a ratio of 1.0, in which full-width plastic hinges were observed, while both beam hinges and post-peak joint shear failure occurred in the other specimen. No sign of premature joint shear failure was found, which is inconsistent with the ACI codes' prediction. Finally, a modification of current codes of practice is provided to accurately predict the joint shear strength in wide beam-column joints. Keywords: joint shear strength, reversed cyclic loading, seismic vulnerability, wide beam-column joints
Procedia PDF Downloads 323
1421 Preparation of Carbon Nanofiber Reinforced HDPE Using Dialkylimidazolium as a Dispersing Agent: Effect on Thermal and Rheological Properties
Authors: J. Samuel, S. Al-Enezi, A. Al-Banna
Abstract:
High-density polyethylene reinforced with carbon nanofibers (HDPE/CNF) has been prepared via melt processing using dialkylimidazolium tetrafluoroborate (an ionic liquid) as a dispersing agent. The prepared samples were characterized by thermogravimetric (TGA) and differential scanning calorimetric (DSC) analyses. The samples blended with the imidazolium ionic liquid exhibit higher thermal stability. DSC analysis showed clear miscibility of the ionic liquid in the HDPE matrix, with a single endothermic peak. The melt rheological analysis of the HDPE/CNF composites was performed using an oscillatory rheometer. The influence of CNF and ionic liquid concentration (0, 0.5, and 1 wt%) on the viscoelastic parameters was investigated at 200 °C over an angular frequency range of 0.1 to 100 rad/s. The rheological analysis shows shear-thinning behavior for the composites. An improvement in the viscoelastic properties was observed as the nanofiber concentration increased. The increase in modulus values was attributed to the structural rigidity imparted by the high-aspect-ratio CNF. The modulus values and complex viscosity of the composites increased significantly at low frequencies. Composites blended with the ionic liquid exhibit slightly lower complex viscosity and modulus values than the corresponding HDPE/CNF compositions. This reduction in melt viscosity, a result of the wetting effect of the polymer-ionic liquid combination, is an additional benefit for polymer composite processing. Keywords: high-density polyethylene, carbon nanofibers, ionic liquid, complex viscosity
Procedia PDF Downloads 127
1420 Comparison of Soil Test Extractants for Determination of Available Soil Phosphorus
Authors: Violina Angelova, Stefan Krustev
Abstract:
The aim of this work was to evaluate the effectiveness of different soil test extractants for the determination of available soil phosphorus in five internationally certified standard soils, sludge and clay (NCS DC 85104, NCS DC 85106, ISE 859, ISE 952, ISE 998). The certified samples were extracted with the following methods/extractants: CaCl₂, CaCl₂ and DTPA (CAT), double lactate (DL), ammonium lactate (AL), calcium acetate lactate (CAL), Olsen, Mehlich 3, Bray and Kurtz I, and Morgan, which are commonly used in soil testing laboratories. The phosphorus in the soil extracts was measured colorimetrically using a Spectroquant Pharo 100 spectrometer. The methods used in the study were evaluated according to the recovery of available phosphorus, ease of application and rapidity of performance. The relationships between the methods were examined statistically. Good agreement between the results of the different soil tests was established for all certified samples. In general, the P values extracted by the nine extraction methods correlated significantly with each other. When grouping the soils according to pH, organic carbon content and clay content, the weaker extraction methods showed analogous trends; common tendencies were also found among the stronger extraction methods. Other factors influencing the extraction strength of the different methods include the soil:solution ratio, as well as the duration and intensity of shaking the samples. The mean extractable P in the certified samples was found to be in the order CaCl₂ < CAT < Morgan < Bray and Kurtz I < Olsen < CAL < DL < Mehlich 3 < AL. Although the nine methods extracted different amounts of P from the certified samples, the values of P extracted by the different methods were strongly correlated among themselves. Acknowledgment: The financial support by the Bulgarian National Science Fund Projects DFNI Н04/9 and DFNI Н06/21 is greatly appreciated. Keywords: available soil phosphorus, certified samples, determination, soil test extractants
Procedia PDF Downloads 151
1419 Design and Manufacture of a Hybrid Gearbox Reducer System
Authors: Ahmed Mozamel, Kemal Yildizli
Abstract:
Due to mechanical energy losses, and the competitive drive to minimize these losses and increase machine efficiency, the need for contactless gearing systems has arisen. In this work, one stage of a mechanical planetary gear transmission system integrated with one stage of a magnetic planetary gear system is designed as a two-stage hybrid gearbox system. The internal energy of permanent magnets, in the form of the magnetic field, is used to create meshing between contactless magnetic rotors in order to provide the system with protection against overloading and to decrease the mechanical losses of the transmission system by eliminating friction losses. Classical methods, such as the analytical and tabular methods and the theory of elasticity, are used to calculate the planetary gear design parameters. The finite element method (ANSYS Maxwell) is used to predict the behavior of the magnetic gearing system. The concentric magnetic gearing system has been modeled and analyzed using the 2D finite element method (ANSYS Maxwell). In addition, the design and manufacturing processes of the prototype components (a planetary gear, concentric magnetic gear, shafts and the bearing selection) of the gearbox system are investigated. The output force, output moment, output power and efficiency of the hybrid gearbox system are experimentally evaluated. The viability of applying a magnetic force to transmit mechanical power through a non-contact gearing system is presented. The experimental test results show that the system is capable of operating continuously within the speed range of 400 rpm to 3000 rpm, with a reduction ratio of 2:1 and a maximum efficiency of 91%. Keywords: hybrid gearbox, mechanical gearboxes, magnetic gears, magnetic torque
Procedia PDF Downloads 152
1418 Community Forest Management and Ecological and Economic Sustainability: A Two-Way Street
Authors: Sony Baral, Harald Vacik
Abstract:
This study analyzes the sustainability of community forest management in two community forests in the Terai and Hills of Nepal, representing four forest types: 1) Shorea robusta, 2) Terai hardwood, 3) Schima-Castanopsis, and 4) other Hills. The sustainability goals for this region include maintaining and enhancing the forest stocks. Considering this, we analysed changes in species composition, stand density, growing stock volume, and growth-to-removal ratio at 3-5 year intervals from 2005-2016 within 109 permanent forest plots (57 in the Terai and 52 in the Hills). To complement the inventory data, forest users, forest committee members, and forest officials were consulted. The results indicate that the relative representation of economically valuable tree species has increased. Based on trends in stand density, both forests are being sustainably managed. Pole-sized trees dominated the diameter distribution, however, with a limited number of mature trees and declining regeneration. In the Hills, the forests were over-harvested until 2013 but under-harvested in the recent period. In contrast, both forest types were under-harvested throughout the inventory period in the Terai. We found that the ecological dimension of sustainable forest management is strongly achieved, while the economic dimension lags behind its current potential. Thus, we conclude that maintaining a large number of trees in the forest does not necessarily ensure both ecological and economic sustainability. Instead, priority should be given to a rational estimation of the annual harvest rates to enhance forest resource conditions together with regular benefits to the local communities. Keywords: community forests, diversity, growing stock, forest management, sustainability, Nepal
Procedia PDF Downloads 97
1417 Understanding Mathematics Achievements among U. S. Middle School Students: A Bayesian Multilevel Modeling Analysis with Informative Priors
Authors: Jing Yuan, Hongwei Yang
Abstract:
This paper aims to understand U.S. middle school students' mathematics achievements by examining relevant student- and school-level predictors. Through a variance component analysis, the study first identifies evidence supporting the use of multilevel modeling. Then, a multilevel analysis is performed under Bayesian statistical inference, where prior information is incorporated into the modeling process. During the analysis, independent variables are entered sequentially in the order of theoretical importance to create a hierarchy of models. By evaluating each model using Bayesian fit indices, a best-fitting and most parsimonious model is selected, on which Bayesian statistical inference is performed for the purpose of result interpretation and discussion. The primary dataset for the Bayesian modeling is derived from the Program for International Student Assessment (PISA) in 2012, with a secondary PISA dataset from 2003 analyzed under the traditional ordinary least squares method to provide the information needed to specify informative priors for a subset of the model parameters. The dependent variable is a composite measure of mathematics literacy, calculated from an exploratory factor analysis of all five PISA 2012 mathematics achievement plausible values, for which multiple pieces of evidence support data unidimensionality. The independent variables include demographic variables and content-specific variables: mathematics efficacy, teacher-student ratio, proportion of girls in the school, etc. Finally, the entire analysis is performed using the MCMCpack and MCMCglmm packages in R. Keywords: Bayesian multilevel modeling, mathematics education, PISA, multilevel
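The role of the informative prior in this setup can be illustrated with a minimal conjugate sketch. The study itself fits multilevel models with MCMCpack/MCMCglmm in R; the fragment below only shows, in Python with illustrative numbers (not PISA data), how a prior centered on an earlier assessment wave pulls and stabilizes the estimate for a later wave.

```python
import numpy as np

def posterior_normal_mean(data, sigma2, prior_mean, prior_var):
    """Conjugate normal-normal update: posterior for the mean given a
    N(prior_mean, prior_var) prior and N(mean, sigma2) observations."""
    n = len(data)
    prec = 1.0 / prior_var + n / sigma2          # posterior precision
    post_var = 1.0 / prec
    post_mean = post_var * (prior_mean / prior_var + np.sum(data) / sigma2)
    return post_mean, post_var

# Illustrative numbers: an informative prior centered on a 2003-wave mean of 483,
# updated with hypothetical PISA-like 2012 scores (mean ~500, sd ~90).
rng = np.random.default_rng(0)
scores_2012 = rng.normal(500.0, 90.0, size=200)
m, v = posterior_normal_mean(scores_2012, 90.0**2, prior_mean=483.0, prior_var=25.0**2)
print(round(m, 1), round(v, 1))
```

The posterior mean lands between the prior mean and the sample mean, with a variance smaller than either source alone would give; this is the shrinkage that informative priors contribute in the full multilevel model.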
Procedia PDF Downloads 336
1416 Advances and Challenges in Assessing Students’ Learning Competencies in 21st Century Higher Education
Authors: O. Zlatkin-Troitschanskaia, J. Fischer, C. Lautenbach, H. A. Pant
Abstract:
In 21st century higher education (HE), diversity among students has increased in recent years due to internationalization and higher mobility. Offering and providing equal and fair opportunities based on students’ individual skills and abilities, rather than their social or cultural background, is one of the major aims of HE. In this context, valid, objective and transparent assessments of students’ preconditions and academic competencies in HE are required. However, as analyses of the current state of research and practice show, a substantial research gap on assessment practices in HE still exists, calling for the development of effective solutions. These demands lead to significant conceptual and methodological challenges. Funded by the German Federal Ministry of Education and Research, the research program 'Modeling and Measuring Competencies in Higher Education – Validation and Methodological Challenges' (KoKoHs) focusses on addressing these challenges in HE assessment practice by modeling and validating objective test instruments. Comprising 16 cross-university collaborative projects, the Germany-wide research program contributes to bridging the research gap in current assessment research and practice by concentrating on practical and policy-related challenges of assessment in HE. In this paper, we present a differentiated overview of existing assessments in HE at the national and international level. Based on the state of research, we describe the theoretical and conceptual framework of the KoKoHs Program as well as the results of the validation studies, including their key outcomes. More precisely, this includes an insight into more than 40 developed assessments covering a broad range of transparent and objective methods for validly measuring domain-specific and generic knowledge and skills in five major study areas (Economics, Social Science, Teacher Education, Medicine and Psychology).
Computer-, video- and simulation-based instruments have been applied and validated to measure over 20,000 students at the beginning, middle and end of their (bachelor and master) studies at more than 300 HE institutions throughout Germany, or during their practical training phase, traineeship or occupation. Focussing on the validity of the assessments, all test instruments have been analyzed comprehensively, using a broad range of methods and observing the validity criteria of the Standards for Educational and Psychological Testing developed by the American Educational Research Association, the American Psychological Association and the National Council on Measurement in Education. The results of the developed assessments presented in this paper provide valuable outcomes for predicting students’ skills and abilities at the beginning and the end of their studies, as well as their learning development and performance. This allows for a differentiated view of the diversity among students. Based on the given research results, practical implications and recommendations are formulated. In particular, appropriate and effective learning opportunities can be created to support students’ learning development, promote their individual potential and reduce knowledge and skill gaps. Overall, the presented research on competency assessment is highly relevant to national and international HE practice. Keywords: 21st century skills, academic competencies, innovative assessments, KoKoHs
Procedia PDF Downloads 140
1415 Autonomous Strategic Aircraft Deconfliction in a Multi-Vehicle Low Altitude Urban Environment
Authors: Loyd R. Hook, Maryam Moharek
Abstract:
With the envisioned future growth of low-altitude urban aircraft operations for airborne delivery services and advanced air mobility, strategies to coordinate and deconflict aircraft flight paths must be prioritized. Autonomous coordination and planning of flight trajectories is the preferred approach to this future vision in order to increase safety, density, and efficiency over the manual methods employed today. Difficulties arise because any conflict resolution must be constrained by all other aircraft, all airspace restrictions, and all ground-based obstacles in the vicinity. These considerations make pair-wise tactical deconfliction difficult at best and unlikely to find a suitable solution for the entire system of vehicles. In addition, more traditional methods, which rely on long time scales and large protected zones, will artificially limit vehicle density and drastically decrease efficiency. Instead, strategic planning, which is able to respond to highly dynamic conditions and still account for high-density operations, will be required to coordinate multiple vehicles in the highly constrained low-altitude urban environment. This paper develops and evaluates such a planning algorithm, which can be implemented autonomously across multiple aircraft and situations. Data from this evaluation provide promising results, with simulations showing up to 10 aircraft deconflicted through a relatively narrow low-altitude urban canyon without any vehicle-to-vehicle or obstacle conflict. The algorithm achieves this level of coordination beginning with the assumption that each vehicle is controlled to follow an independently constructed flight path, which is itself free of obstacle conflict and restricted airspace. Then, by preferring speed-change deconfliction maneuvers constrained by each vehicle's flight envelope, vehicles can remain as close as possible to the original planned path and prevent cascading vehicle-to-vehicle conflicts.
Performing the search for a set of commands that can simultaneously ensure separation for each pair-wise aircraft interaction and optimize the total velocities of all the aircraft is further complicated by the fact that each aircraft's flight plan can contain multiple segments. This means that relative velocities will change when any aircraft reaches a waypoint and changes course. Additionally, the timing of when that aircraft will reach a waypoint (or, more directly, the order in which all of the aircraft will reach their respective waypoints) will change with the commanded speed. Put together, the continuous relative velocity of each vehicle pair and the discretized change in relative velocity at waypoints resemble a hybrid reachability problem - a form of control reachability. This paper proposes two methods for finding solutions to these multi-body problems. First, an analytical formulation of the continuous problem is developed with an exhaustive search of the combined state space. However, because of computational complexity, this technique is only computable for pairwise interactions. For more complicated scenarios, including the proposed 10-vehicle example, a discretized search space is used, and a depth-first search with early stopping is employed to find the first solution that satisfies the constraints. Keywords: strategic planning, autonomous, aircraft, deconfliction
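The discretized depth-first search with early stopping can be sketched in miniature. The toy below is not the authors' implementation: it assigns one constant speed per aircraft from a candidate set, samples straight-line trajectories over a time horizon, and backtracks until every pair keeps a minimum separation; real flight plans with multiple segments would also need the per-waypoint timing logic discussed above.

```python
import math

def positions(start, heading, speed, times):
    """Sample a straight-line trajectory: start point, heading in radians."""
    dx, dy = math.cos(heading), math.sin(heading)
    return [(start[0] + speed * t * dx, start[1] + speed * t * dy) for t in times]

def separated(traj_a, traj_b, d_min):
    """True if the two sampled trajectories never come closer than d_min."""
    return all(math.hypot(a[0] - b[0], a[1] - b[1]) >= d_min
               for a, b in zip(traj_a, traj_b))

def deconflict(aircraft, speed_options, d_min=5.0, horizon=60, dt=1.0, chosen=None):
    """Depth-first search over discretized speed commands; returns the first
    full assignment where every aircraft pair keeps d_min separation (early stop)."""
    chosen = chosen or []
    if len(chosen) == len(aircraft):
        return chosen
    times = [k * dt for k in range(int(horizon / dt) + 1)]
    start_i, hdg_i = aircraft[len(chosen)]
    for s in speed_options:
        traj_i = positions(start_i, hdg_i, s, times)
        if all(separated(traj_i, positions(aircraft[j][0], aircraft[j][1], sj, times), d_min)
               for j, sj in enumerate(chosen)):
            result = deconflict(aircraft, speed_options, d_min, horizon, dt, chosen + [s])
            if result is not None:
                return result
    return None  # backtracked through every branch without a feasible assignment

# Two aircraft whose paths cross at the origin: the DFS slows the second one
# so they pass through the crossing point at different times.
fleet = [((-100.0, 0.0), 0.0), ((0.0, -100.0), math.pi / 2)]
print(deconflict(fleet, speed_options=[3.0, 2.0, 1.0]))
```

Because the search prefers the first (fastest) speed that works, the solution stays close to the nominal plan, mirroring the paper's preference for minimal speed-change maneuvers.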
Procedia PDF Downloads 95
1414 Self-Healing Phenomenon Evaluation in Cementitious Matrix with Different Water/Cement Ratios and Crack Opening Age
Authors: V. G. Cappellesso, D. M. G. da Silva, J. A. Arndt, N. dos Santos Petry, A. B. Masuero, D. C. C. Dal Molin
Abstract:
Concrete elements are subject to cracking, and cracks can be an access point for deleterious agents that trigger pathological manifestations, reducing the service life of these structures. Finding ways to minimize or eliminate the effects of the penetration of these aggressive agents, such as by sealing the cracks, is one way of contributing to the durability of these structures. The cementitious self-healing phenomenon can be classified into two different processes. Autogenous self-healing can be defined as a natural process in which the sealing of cracks occurs without the stimulation of external agents, meaning without different materials being added to the mixture; the autonomous self-healing phenomenon, on the other hand, depends on the insertion of a specific engineered material added to the cement matrix in order to promote its recovery. This work aims to evaluate the autogenous self-healing of concretes produced with different water/cement ratios and exposed to wet/dry cycles, considering two crack opening ages: 3 days and 28 days. The self-healing phenomenon was evaluated using two techniques: crack healing measurement using ultrasonic waves and image analysis performed with an optical microscope. The self-healing of the cracks could be observed by both methods. For young crack opening ages and lower water/cement ratios, the self-healing capacity is higher than at advanced crack opening ages and higher water/cement ratios. Regardless of the crack opening age, these concretes were found to stabilize the self-healing processes after 80 to 90 days. Keywords: self-healing, autogenous, water/cement ratio, curing cycles, test methods
Procedia PDF Downloads 160
1413 Ecological-Economics Evaluation of Water Treatment Systems
Authors: Hwasuk Jung, Seoi Lee, Dongchoon Ryou, Pyungjong Yoo, Seokmo Lee
Abstract:
The Nakdong River, used as the drinking water source for the Busan metropolitan city, presents a vulnerability for water management because industrial areas are located on the upper Nakdong River. Most citizens of Busan think that the water quality of the Nakdong River is not good, so they boil tap water or use home filters before drinking it, which imposes unnecessary individual costs on Busan citizens. Water intake needs to be diversified to reduce this cost and to replace the weak water source. Against this background, this study carried out environmental accounting of the Namgang dam water treatment system compared to the Nakdong River water treatment system, using the emergy analysis method to support reasonable decision-making. The emergy analysis method quantitatively evaluates both the natural environment and human economic activities in an equal unit of measure. The emergy transformity of the Namgang dam's water was 1.16 times larger than that of the Nakdong River's water, owing to its better water quality. The emergy used in making 1 m3 of tap water with the Namgang dam water treatment system was 1.26 times larger than with the Nakdong River water treatment system, owing to the construction cost of the new pipeline for taking in Namgang dam water. If the Won used in making 1 m3 of tap water with the Nakdong River water treatment system is 1, the Namgang dam water treatment system used 1.66. If the Em-won used in making 1 m3 of tap water with the Nakdong River water treatment system is 1, the Namgang dam water treatment system used 1.26. The cost-benefit ratio in Em-won was thus smaller than that in Won.
When we use emergy analysis, which accounts for the benefits of the natural environment such as the good water quality of the Namgang dam, the Namgang dam water treatment system could be a good alternative for diversifying the intake source. Keywords: emergy, emergy transformity, Em-won, water treatment system
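The Won versus Em-won comparison above is simple ratio arithmetic; the sketch below (Python, with the ratios taken from the abstract) makes explicit how emergy accounting narrows Namgang's apparent cost premium once the better source water is credited.

```python
# Cost of 1 m3 of tap water from the Namgang dam system, relative to the
# Nakdong River system (= 1), as reported in the abstract.
won_ratio = 1.66     # conventional monetary accounting (Won)
emwon_ratio = 1.26   # emergy accounting (Em-won), crediting source water quality

# Premium over the Nakdong baseline under each accounting scheme
won_premium = won_ratio - 1.0      # 66% more expensive in Won
emwon_premium = emwon_ratio - 1.0  # only 26% more in Em-won
print(f"premium shrinks from {won_premium:.0%} to {emwon_premium:.0%}")
```

The gap shrinks because the emergy of the higher-quality Namgang water counts as a benefit, offsetting part of the extra pipeline cost.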
Procedia PDF Downloads 305
1412 MAGE-A3 and PRAME Gene Expression and EGFR Mutation Status in Non-Small-Cell Lung Cancer
Authors: Renata Checiches, Thierry Coche, Nicolas F. Delahaye, Albert Linder, Fernando Ulloa Montoya, Olivier Gruselle, Karen Langfeld, An de Creus, Bart Spiessens, Vincent G. Brichard, Jamila Louahed, Frédéric F. Lehmann
Abstract:
Background: The RNA-expression levels of the cancer-testis antigens MAGE-A3 and PRAME were determined in resected tissue from patients with primary non-small-cell lung cancer (NSCLC) and related to clinical outcome. EGFR, KRAS and BRAF mutation status was determined in a subset to investigate associations with MAGE-A3 and PRAME expression. Methods: We conducted a single-centre, uncontrolled, retrospective study of 1260 tissue-bank samples from stage IA-III resected NSCLC. The prognostic value of antigen expression (qRT-PCR) was determined by hazard ratios and Kaplan-Meier curves. Results: Thirty-seven percent (314/844) of tumours expressed MAGE-A3, 66% (723/1092) expressed PRAME and 31% (239/839) expressed both. The respective frequencies in squamous-cell tumours and adenocarcinomas were 43%/30% for MAGE-A3 and 80%/44% for PRAME. No correlation with stage, tumour size or patient age was found. Overall, no prognostic value was identified for either antigen. A trend toward poorer overall survival was associated with MAGE-A3 in stage IIIB and with PRAME in stage IB. EGFR and KRAS mutations were found in 10.1% (28/311) and 33.8% (97/311) of tumours, respectively. EGFR (but not KRAS) mutation status was negatively associated with PRAME expression. Conclusion: No clear prognostic value for either PRAME or MAGE-A3 was observed in the overall population, although some observed trends may warrant further investigation. Keywords: MAGE-A3, PRAME, cancer-testis gene, NSCLC, survival, EGFR
Procedia PDF Downloads 382
1411 An Improved Total Variation Regularization Method for Denoising Magnetocardiography
Authors: Yanping Liao, Congcong He, Ruigang Zhao
Abstract:
The application of magnetocardiography signals to detect cardiac electrical function is a technology developed in recent years. The magnetocardiography signal is detected with Superconducting Quantum Interference Devices (SQUIDs) and has considerable advantages over electrocardiography (ECG). Extracting the magnetocardiography (MCG) signal, which is buried in noise, is difficult, and this is a critical issue to be resolved in cardiac monitoring systems and MCG applications. In order to remove the severe background noise, the Total Variation (TV) regularization method is proposed to denoise the MCG signal. The approach transforms the denoising problem into a minimization optimization problem, and the majorization-minimization algorithm is applied to solve it iteratively. However, the traditional TV regularization method tends to cause a step effect and lacks constraint adaptability. In this paper, an improved TV regularization method for denoising the MCG signal is proposed to improve the denoising precision. The improvement is divided into three parts. First, high-order TV is applied to reduce the step effect, and the corresponding second-derivative matrix is used in place of the first-order one. Then, the positions of the non-zero elements in the second-order derivative matrix are determined based on the peak positions detected by the detection window. Finally, adaptive constraint parameters are defined to eliminate noise and preserve signal peak characteristics. Theoretical analysis and experimental results show that this algorithm can effectively improve the output signal-to-noise ratio and has superior performance. Keywords: constraint parameters, derivative matrix, magnetocardiography, regular term, total variation
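As a baseline for the method being improved, standard first-order TV denoising with a majorization-minimization (MM) iteration can be sketched as below (Python/NumPy, on a synthetic step signal rather than MCG data; the high-order TV term, windowed peak detection, and adaptive constraint parameters of the proposed algorithm are not included). Each MM step majorizes the |·| penalty by a weighted quadratic around the current estimate and solves the resulting linear system.

```python
import numpy as np

def tv_denoise_mm(y, lam=2.0, n_iter=50, eps=1e-8):
    """1-D total-variation denoising, min 0.5*||y-x||^2 + lam*||Dx||_1, via
    majorization-minimization: each iteration majorizes |Dx|_1 by a weighted
    quadratic and solves (I + lam * D^T W D) x = y (dense solve for clarity)."""
    n = len(y)
    D = np.diff(np.eye(n), axis=0)          # first-difference operator, (n-1) x n
    x = y.copy()
    for _ in range(n_iter):
        w = 1.0 / (np.abs(D @ x) + eps)     # MM weights from the current estimate
        A = np.eye(n) + lam * D.T @ (w[:, None] * D)
        x = np.linalg.solve(A, y)
    return x

# Noisy step signal: TV denoising recovers the piecewise-constant structure.
rng = np.random.default_rng(1)
clean = np.concatenate([np.zeros(50), np.ones(50) * 4.0])
noisy = clean + rng.normal(0.0, 0.5, size=100)
denoised = tv_denoise_mm(noisy, lam=2.0)
print(np.mean((denoised - clean) ** 2) < np.mean((noisy - clean) ** 2))
```

On smoothly varying signals this first-order penalty produces the staircase ("step effect") the paper targets; that is precisely what its high-order TV term is meant to suppress.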
Procedia PDF Downloads 153
1410 The Bayesian Premium Under Entropy Loss
Authors: Farouk Metiri, Halim Zeghdoudi, Mohamed Riad Remita
Abstract:
Credibility theory is an experience rating technique in actuarial science and one of the quantitative tools that allow insurers to perform experience rating, that is, to adjust future premiums based on past experience. It is usually used in automobile insurance, workers' compensation premiums, and IBNR (incurred but not reported claims to the insurer), where credibility theory can be used to estimate the claim size. In this study, we focus on a popular tool in credibility theory, the Bayesian premium estimator, considering the Lindley distribution as the claim distribution. We derive this estimator under entropy loss, which is asymmetric, and under squared error loss, which is symmetric, with both informative and non-informative priors. In a purely Bayesian setting, the prior distribution represents the insurer's prior belief about the insured's risk level, updated after collection of the insured's data at the end of the period. However, the explicit form of the Bayesian premium when the prior is not a member of the exponential family can be quite difficult to obtain, as it involves a number of integrations that are not analytically solvable. The paper finds a solution to this problem by deriving the estimator using a numerical approximation (the Lindley approximation), which is one of the suitable approximation methods for such problems: it approaches the ratio of the integrals as a whole and produces a single numerical result. A simulation study using the Monte Carlo method is then performed to evaluate this estimator, and the mean squared error technique is used to compare the Bayesian premium estimator under the above loss functions. Keywords: bayesian estimator, credibility theory, entropy loss, monte carlo simulation
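A purely numerical version of the two Bayes rules can be sketched as follows (Python/NumPy). Under squared error loss, the Bayes estimate of the Lindley parameter θ is the posterior mean E[θ | data]; under the entropy loss L(δ, θ) = δ/θ − ln(δ/θ) − 1, it is the harmonic-type quantity 1/E[θ⁻¹ | data]. The Gamma(a, b) prior, the grid-based posterior, and the simulated claims are illustrative assumptions, not the paper's setup, which uses the Lindley approximation instead of a grid.

```python
import numpy as np

def lindley_loglik(theta, data):
    """Log-likelihood of Lindley(theta) claims:
    f(x|theta) = theta^2/(1+theta) * (1+x) * exp(-theta*x)."""
    n = len(data)
    return (2 * n * np.log(theta) - n * np.log1p(theta)
            + np.sum(np.log1p(data)) - theta * np.sum(data))

def bayes_estimators(data, a=2.0, b=1.0):
    """Grid posterior for theta under an assumed Gamma(a, b) prior; returns the
    Bayes estimates under squared error loss and under entropy loss."""
    theta = np.linspace(1e-3, 10.0, 20000)
    log_post = lindley_loglik(theta, data) + (a - 1) * np.log(theta) - b * theta
    w = np.exp(log_post - log_post.max())
    w /= w.sum()                            # normalized posterior weights on the grid
    sq_loss_est = np.sum(theta * w)         # posterior mean E[theta | data]
    ent_loss_est = 1.0 / np.sum(w / theta)  # 1 / E[1/theta | data]
    return sq_loss_est, ent_loss_est

def sample_lindley(theta, size, rng):
    """Lindley(theta) as a mixture: Exp(theta) w.p. theta/(1+theta), else Gamma(2, theta)."""
    use_exp = rng.random(size) < theta / (1.0 + theta)
    return np.where(use_exp,
                    rng.exponential(1.0 / theta, size),
                    rng.gamma(2.0, 1.0 / theta, size))

rng = np.random.default_rng(2)
claims = sample_lindley(1.5, 200, rng)      # simulated claim sizes, true theta = 1.5
sq_est, ent_est = bayes_estimators(claims)
print(round(sq_est, 3), round(ent_est, 3))
```

By Jensen's inequality the entropy-loss estimate never exceeds the squared-error (posterior mean) estimate, and with a large sample the two nearly coincide; the premium itself is then a functional of θ, with the grid integration here standing in for the Lindley approximation used in the paper.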
Procedia PDF Downloads 334