Search results for: rise to span ratio
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6660


1620 I, Me and the Bot: Forming a Theory of Symbolic Interactivity with a Chatbot

Authors: Felix Liedel

Abstract:

The rise of artificial intelligence has numerous and far-reaching consequences. In addition to the obvious consequences for entire professions, the increasing interaction with chatbots also has a wide range of social consequences and implications. We are already increasingly used to interacting with digital chatbots, be it in virtual consulting situations, creative development processes or even in building personal or intimate virtual relationships. A media-theoretical classification of these phenomena has so far been difficult, partly because the interactive element in the exchange with artificial intelligence has undeniable similarities to human-to-human communication but is not identical to it. The proposed study, therefore, aims to reformulate the concept of symbolic interaction in the tradition of George Herbert Mead as symbolic interactivity in communication with chatbots. In particular, Mead's socio-psychological considerations will be brought into dialog with the specific conditions of digital media, the specific dispositif of chatbots and the characteristics of artificial intelligence. One example that illustrates this particular communication situation with chatbots is so-called consensus fiction: In face-to-face communication, we use symbols on the assumption that they will be interpreted in the same or a similar way by the other person. When briefing a chatbot, it quickly becomes clear that this is by no means the case: only the bot's response shows whether the initial request corresponds to the sender's actual intention. This makes it clear that chatbots do not just respond to requests. Rather, they function both as projection surfaces for their communication partners and as distillations of generalized social attitudes. The personalities of the chatbot avatars result, on the one hand, from the way we behave towards them and, on the other, from the content they have learned in advance.
Similarly, we interpret the response behavior of the chatbots and make it the subject of our own actions with them. In conversation with the virtual chatbot, we enter into a dialog with ourselves but also with the content that the chatbot has previously learned. In our exchanges with chatbots, we, therefore, interpret socially influenced signs and behave towards them in an individual way according to the conditions that the medium deems acceptable. This leads to the emergence of situationally determined digital identities that are in exchange with the real self but are not identical to it: In conversation with digital chatbots, we bring our own impulses, which are brought into permanent negotiation with a generalized social attitude by the chatbot. This also leads to numerous media-ethical follow-up questions. The proposed approach is a continuation of my dissertation on moral decision-making in so-called interactive films. In this dissertation, I attempted to develop a concept of symbolic interactivity based on Mead. Current developments in artificial intelligence are now opening up new areas of application.

Keywords: artificial intelligence, chatbot, media theory, symbolic interactivity

Procedia PDF Downloads 56
1619 Synthesis, Characterization, and Application of Novel Trihexyltetradecyl Phosphonium Chloride for Extractive Desulfurization of Liquid Fuel

Authors: Swapnil A. Dharaskar, Kailas L. Wasewar, Mahesh N. Varma, Diwakar Z. Shende

Abstract:

Stringent environmental regulations in many countries mandating the production of ultra-low-sulfur petroleum fractions to reduce sulfur emissions have generated enormous interest in this area among the scientific community. The requirement of zero sulfur emissions increases the demand for more advanced desulfurization techniques. Desulfurization by extraction is a promising approach with several advantages over conventional hydrodesulfurization. The present work deals with new approaches for the desulfurization of ultra-clean gasoline, diesel, and other liquid fuels by extraction with ionic liquids. This paper presents experimental data on the extractive desulfurization of liquid fuel using trihexyltetradecylphosphonium chloride. FTIR, 1H-NMR, and 13C-NMR analyses are discussed for the molecular confirmation of the synthesized ionic liquid. Further, conductivity, solubility, and viscosity analyses of the ionic liquid were carried out. The effects of reaction time, reaction temperature, sulfur compounds, ultrasonication, and recycling of the ionic liquid without regeneration on the removal of dibenzothiophene from liquid fuel were also investigated. In the extractive desulfurization process, the removal of dibenzothiophene from n-dodecane was 84.5% for a mass ratio of 1:1 in 30 min at 30 °C under mild reaction conditions. The phosphonium ionic liquid could be reused five times without a significant decrease in activity. Multistage extraction was also examined for the desulfurization of real fuels. The data and results provided in this paper offer significant insights into phosphonium-based ionic liquids as novel extractants for the extractive desulfurization of liquid fuels.
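The reported figure of 84.5% removal follows from the standard single-stage extraction-efficiency relation. A minimal sketch, using an assumed initial sulfur concentration (the 500 ppm starting value is illustrative, not from the paper; only the 84.5% removal at a 1:1 mass ratio is reported):

```python
# Sulfur removal efficiency for a single extraction stage:
#   removal % = (S0 - S) / S0 * 100
# where S0 and S are sulfur concentrations (ppm) in the fuel before and
# after contact with the ionic liquid.

def removal_percent(s0_ppm: float, s_ppm: float) -> float:
    """Percentage of sulfur extracted from the model fuel."""
    return (s0_ppm - s_ppm) / s0_ppm * 100.0

s0 = 500.0                   # assumed initial dibenzothiophene-sulfur, ppm
s_after = s0 * (1 - 0.845)   # concentration remaining after 84.5% removal
print(round(removal_percent(s0, s_after), 1))  # 84.5
```

Repeating the same calculation stage by stage on the raffinate is how multistage extraction of real fuels would be evaluated.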

Keywords: ionic liquid, PPIL, desulfurization, liquid fuel, extraction

Procedia PDF Downloads 609
1618 Production of New Hadron States in Effective Field Theory

Authors: Qi Wu, Dian-Yong Chen, Feng-Kun Guo, Gang Li

Abstract:

In the past decade, a growing number of new hadron states have been observed, dubbed XYZ states in the heavy quarkonium mass regions. In this work, we present our study on the production of some of these new hadron states. In particular, we investigate the processes Υ(5S,6S)→ Zb(10610)/Zb(10650)π, Bc→ Zc(3900)/Zc(4020)π and Λb→ Pc(4312)/Pc(4440)/Pc(4457)K. (1) For the production of Zb(10610)/Zb(10650) from Υ(5S,6S) decay, two types of bottom-meson loops were discussed within a nonrelativistic effective field theory. We found that the loop contributions with all intermediate states being S-wave ground-state bottom mesons are negligible, while the loops with one bottom meson being the broad B₀* or B₁' resonance could provide the dominant contributions to Υ(5S)→ Zb⁽'⁾π. (2) For the production of Zc(3900)/Zc(4020) from Bc decay, the branching ratios of Bc⁺→ Zc(3900)⁺π⁰ and Bc⁺→ Zc(4020)⁺π⁰ are estimated to be of order 10⁻⁴ and 10⁻⁷, respectively, in an effective Lagrangian approach. The large production rate of Zc(3900) could provide an important source of Zc(3900) production from the semi-exclusive decay of b-flavored hadrons reported by the D0 Collaboration, which can be tested by exclusive measurements at LHCb. (3) For the production of Pc(4312), Pc(4440) and Pc(4457) from Λb decay, the ratios of the branching fractions of Λb→ PcK were predicted in a molecular scenario by using an effective Lagrangian approach, and they depend only weakly on our model parameter. We also find that the ratios of the products of the branching fractions of Λb→ PcK and Pc→ J/ψp can be well interpreted in the molecular scenario. Moreover, the estimated branching fractions of Λb→ PcK are of order 10⁻⁶, which could be tested by further measurements by the LHCb Collaboration.

Keywords: effective Lagrangian approach, hadron loops, molecular states, new hadron states

Procedia PDF Downloads 133
1617 Advanced Magnetic Resonance Imaging in Differentiation of Neurocysticercosis and Tuberculoma

Authors: Rajendra N. Ghosh, Paramjeet Singh, Niranjan Khandelwal, Sameer Vyas, Pratibha Singhi, Naveen Sankhyan

Abstract:

Background: Tuberculoma and neurocysticercosis (NCC) are the two most common intracranial infections in developing countries. They often mimic each other on neuroimaging and, in the absence of typical imaging features, cause significant diagnostic dilemmas. Differentiation is extremely important to avoid empirical exposure to antitubercular medications or nonspecific treatment causing disease progression. Purpose: Better characterization and differentiation of CNS tuberculoma and NCC by using morphological and multiple advanced functional MRI sequences. Material and Methods: Fifty untreated patients in total (20 tuberculoma and 30 NCC) were evaluated by using conventional and advanced sequences such as CISS, SWI, DWI, DTI, magnetization transfer (MT), T2 relaxometry (T2R), perfusion, and spectroscopy. rCBV, ADC, FA, T2R, and MTR values and metabolite ratios were calculated from the lesion and normal parenchyma. Diagnosis was confirmed by typical biochemical, histopathological, and imaging features. Results: CISS was the most useful sequence for scolex detection (90% on CISS vs 73% on routine sequences). SWI showed higher scolex detection ability. Mean values of ADC, FA, and T2R from the core and rCBV from the wall of the lesion were significantly different between tuberculoma and NCC (P < 0.05). Mean values of rCBV, ADC, FA, and T2R for tuberculoma vs NCC were 3.36 vs 1.3, 1.09×10⁻³ vs 1.4×10⁻³, 0.13 vs 0.09, and 88.65 ms vs 272.3 ms, respectively. Tuberculomas showed a high lipid peak, more choline, and lower creatine, with a Cho/Cr ratio > 1. The T2R value was the most significant parameter for differentiation. Cut-off values for each significant parameter have been proposed. Conclusion: Quantitative MRI in combination with conventional sequences can better characterize and differentiate similar-appearing tuberculoma and NCC, and may be incorporated in the routine protocol, which may avoid brain biopsy and empirical therapy.

Keywords: advanced functional MRI, differentiation, neurocysticercosis, tuberculoma

Procedia PDF Downloads 568
1616 The Feasibility of Anaerobic Digestion at 45°C

Authors: Nuruol S. Mohd, Safia Ahmed, Rumana Riffat, Baoqiang Li

Abstract:

Anaerobic digestion at mesophilic and thermophilic temperatures has been widely studied and evaluated by numerous researchers. Little extensive research has been conducted on anaerobic digestion in the intermediate zone around 45°C, mainly due to the notion that limited microbial activity occurs within this zone. The objectives of this research were to evaluate the performance and capability of anaerobic digestion at 45°C in producing Class A biosolids, in comparison to mesophilic and thermophilic anaerobic digestion systems operated at 35°C and 55°C, respectively. In addition, possible inhibition factors affecting the performance of the digestion system at this temperature were investigated. The 45°C anaerobic digestion systems were not able to achieve methane yield and effluent quality comparable to the mesophilic system, even though they produced biogas with about 62-67% methane. The 45°C digesters suffered from high acetate accumulation, but sufficient buffering capacity was observed, as the pH, alkalinity, and volatile fatty acids (VFA)-to-alkalinity ratio were within recommended values. The acetate accumulation observed in the 45°C systems was presumably due to the high temperature, which contributed to a high hydrolysis rate. Consequently, a large amount of toxic salts was produced, which combined with the substrate and rendered it not readily available for consumption by methanogens. Acetate accumulation, even though it contributed to a 52 to 71% reduction in the acetate degradation process, could not be considered completely inhibitory. Additionally, at 45°C, no ammonia inhibition was observed, and the digesters were able to achieve a volatile solids (VS) reduction of 47.94±4.17%. The pathogen counts were less than 1,000 MPN/g total solids, thus producing Class A biosolids.
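The reported VS reduction is conventionally computed with the Van Kleeck equation, which needs only the VS fraction of total solids in the feed and effluent (no flow measurement). A minimal sketch; the feed and effluent VS fractions below are assumed for illustration, not the study's data:

```python
# Van Kleeck equation for volatile solids (VS) reduction across a digester:
#   VSR = (VSin - VSout) / (VSin - VSin * VSout)
# with VSin, VSout expressed as fractions of total solids.

def vs_reduction(vs_in: float, vs_out: float) -> float:
    """Fractional VS destruction from inlet/outlet VS fractions of TS."""
    return (vs_in - vs_out) / (vs_in - vs_in * vs_out)

# Assumed fractions: feed 75% VS of TS, effluent 60% VS of TS.
print(round(vs_reduction(0.75, 0.60) * 100, 1))  # 50.0 (% for these assumed values)
```

The paper's 47.94±4.17% figure for the 45°C digesters corresponds to slightly different inlet/outlet fractions than the placeholder values used here.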

Keywords: 45°C anaerobic digestion, acetate accumulation, class A biosolids, salt toxicity

Procedia PDF Downloads 305
1615 Distinguishing Substance from Spectacle in Violent Extremist Propaganda through Frame Analysis

Authors: John Hardy

Abstract:

Over the last decade, the world has witnessed an unprecedented rise in the quality and availability of violent extremist propaganda. This phenomenon has been fueled primarily by three interrelated trends: rapid adoption of online content mediums by creators of violent extremist propaganda, increasing sophistication of violent extremist content production, and greater coordination of content and action across violent extremist organizations. In particular, the self-styled ‘Islamic State’ attracted widespread attention from its supporters and detractors alike by mixing shocking video and imagery alongside substantive ideological and political content. Although this practice was widely condemned for its brutality, it proved to be effective at engaging with a variety of international audiences and encouraging potential supporters to seek further information. The reasons for the noteworthy success of this kind of shock-value propaganda content remain unclear, despite many governments’ attempts to produce counterpropaganda. This study examines violent extremist propaganda distributed by five terrorist organizations between 2010 and 2016, using material released by the Al Hayat Media Center of the Islamic State, Boko Haram, Al Qaeda, Al Qaeda in the Arabian Peninsula, and Al Qaeda in the Islamic Maghreb. The time period covers all issues of the infamous publications Inspire and Dabiq, as well as the most shocking video content released by the Islamic State and its affiliates. The study uses frame analysis to distinguish thematic from symbolic content in violent extremist propaganda by contrasting the ways that substantive ideological issues were framed against the use of symbols and violence to garner attention and to stylize propaganda.
The results demonstrate that thematic content focuses significantly on diagnostic frames, which explain violent extremist groups’ causes, and prognostic frames, which propose solutions for addressing or rectifying the cause shared by groups and their sympathizers. Conversely, symbolic violence is primarily stylistic and rarely linked to thematic issues or motivational framing. Frame analysis provides a useful preliminary tool for disentangling substantive ideological and political content from stylistic brutality in violent extremist propaganda. This provides governments and researchers with a method for better understanding the framing and content used to design the narratives and propaganda materials that promote violent extremism around the world. Increased capacity to process and understand violent extremist narratives will further enable governments and non-governmental organizations to develop effective counternarratives which promote non-violent solutions to extremists’ grievances.

Keywords: countering violent extremism, counternarratives, frame analysis, propaganda, terrorism, violent extremism

Procedia PDF Downloads 174
1614 Semiconductor Properties of Natural Phosphate Application to Photodegradation of Basic Dyes in Single and Binary Systems

Authors: Y. Roumila, D. Meziani, R. Bagtache, K. Abdmeziem, M. Trari

Abstract:

Heterogeneous photocatalysis over semiconductors has proved its effectiveness in the treatment of wastewaters, since it works under mild conditions. It has emerged as a promising technique, giving rise to less toxic effluents and offering the opportunity of using sunlight as a sustainable and renewable source of energy. Many compounds have been used as photocatalysts. Though synthesized ones are intensively used, they remain expensive, and their synthesis involves special conditions. We thus thought of implementing a natural material, a phosphate ore, due to its low cost and great availability. Our work is devoted to the removal of hazardous organic pollutants, which cause several environmental problems and health risks. Among them, dye pollutants occupy a prominent place. This work relates to the study of the photodegradation of methyl violet (MV) and rhodamine B (RhB), in single and binary systems, under UV light and sunlight irradiation. Methyl violet is a triarylmethane dye, while RhB is a heteropolyaromatic dye belonging to the xanthene family. In the first part of this work, the natural compound was characterized using several physicochemical and photo-electrochemical (PEC) techniques: X-ray diffraction, chemical and thermal analyses, scanning electron microscopy, UV-Vis diffuse reflectance measurements, and FTIR spectroscopy. The electrochemical and photoelectrochemical studies were performed with a Voltalab PGZ 301 potentiostat/galvanostat at room temperature. The structure of the phosphate material was well characterized. The PEC properties are crucial for drawing the energy band diagram, in order to suggest the formation of radicals and the reactions involved in the dyes' photo-oxidation mechanism. The PEC characterization of the natural phosphate was investigated in neutral solution (Na₂SO₄, 0.5 M). The study revealed the semiconducting behavior of the phosphate rock.
Indeed, the thermal evolution of the electrical conductivity was well fitted by an exponential-type law, and the electrical conductivity increases as the temperature is raised. The Mott–Schottky plot and the current-potential J(V) curves recorded in the dark and under illumination clearly indicate n-type behavior. From the photocatalysis results in single solutions, the changes in MV and RhB absorbance as a function of time show that practically all of the MV was removed after 240 min of irradiation. For RhB, complete degradation was achieved after 330 min, owing to its complex and resistant structure. In binary systems, it is only after 120 min that RhB begins to be slowly removed, while about 60% of the MV is already degraded. Once nearly all of the MV in the solution has disappeared (after about 250 min), the remaining RhB is degraded rapidly. This behavior differs from that observed in single solutions, where both dyes are degraded from the first minutes of irradiation.
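The n-type assignment from the Mott–Schottky plot rests on the standard relation for a semiconductor/electrolyte junction, 1/C² = (2 / (e·ε·ε₀·N_d))·(E − E_fb − kT/e): a positive slope of 1/C² vs. E indicates n-type behavior, and the donor density follows from the slope. A minimal sketch; the slope and relative permittivity below are assumed illustrative values, not data from the paper:

```python
# Mott-Schottky analysis: donor density from the slope of 1/C^2 vs. E,
#   Nd = 2 / (e * eps_r * eps0 * slope)
# A positive slope confirms n-type conduction.

E_CHARGE = 1.602e-19   # elementary charge, C
EPS0 = 8.854e-12       # vacuum permittivity, F/m

def donor_density(slope: float, eps_r: float) -> float:
    """Donor density Nd (m^-3) from an area-normalized Mott-Schottky slope."""
    return 2.0 / (E_CHARGE * eps_r * EPS0 * slope)

slope = 5.0e9          # assumed slope of 1/C^2 vs E (positive => n-type)
print(f"n-type: {slope > 0}, Nd ~ {donor_density(slope, eps_r=10.0):.2e} m^-3")
```

The flat-band potential E_fb, read from the intercept of the same plot, is what positions the band edges in the energy diagram used to rationalize the radical-formation steps.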

Keywords: environment, organic pollutant, phosphate ore, photodegradation

Procedia PDF Downloads 132
1613 Toxic Ingredients Contained in Our Cosmetics

Authors: El Alia Boularas, H. Bekkar, H. Larachi, H. Rezk-kallah

Abstract:

Introduction: Although cosmetics are used in everyday life, these products are not all innocuous and harmless, as they may contain ingredients responsible for allergic reactions and, possibly, for other health problems. Additionally, environmental pollution should be taken into account. Thus, it is time to investigate what is ‘hidden behind beauty’. Aims: 1. To investigate the prevalence of 13 chemical ingredients of concern in cosmetics that Algerians use regularly. 2. To know the profile of the questioned consumers and describe their opinion of cosmetics. Methods: The survey was carried out in 2013 over a period of 3 months, among Algerian Internet users having an e-mail address or a Facebook account. The study investigated 13 chemical agents posing health and environmental problems, selected after analysis of recent studies published on the subject, the lists of national and international regulatory references on chemical hazards, and querying the Skin Deep database presented by the Environmental Working Group. Results: 300 people distributed all over the Algerian territory participated in the survey, providing information about 731 cosmetics; 86% were aged from 20 to 39 years, with a sex ratio of 0.27. A total of 43% of the analyzed cosmetics contained at least one of the 13 toxic ingredients. The targeted ingredient most frequently reported was ‘perfume’, followed by parabens and PEG. Of the participants, 85% declared that cosmetics ‘can contain toxic substances’, 27% asserted that they regularly verify the list of ingredients when they buy cosmetics, and 61% said that they try to avoid the toxic ingredients, among whom 24% were particularly vigilant about the presence of parabens; 95% were in favour of strengthening the Algerian laws on cosmetics. Conclusion: The results of the survey indicate a widespread presence of toxic chemical ingredients in the personal care products that Algerians use daily.

Keywords: Algerians consumers, cosmetics, survey, toxic ingredients

Procedia PDF Downloads 277
1612 A Statistical Analysis on the Comparison of First and Second Waves of COVID-19 and Importance of Early Actions in Public Health for Third Wave in India

Authors: Maitri Dave

Abstract:

Coronaviruses (CoV) are infectious viruses that have had a dangerous global impact, causing severe respiratory problems, lung damage, and more serious diseases among those infected. India reported its first case of COVID-19 in January 2020. The first wave of COVID-19 took place from April to September 2020. A second peak was then noticed in March 2021, which proved more dangerous due to a lack of supply of medical equipment. It created a resource deficiency globally, and specifically in India, where necessary life-saving equipment such as ventilators and oxygenators was not sufficient to meet demand. Carefully examining this situation, India began its vaccination programme in January 2021 and has successfully administered 254,671,259 doses of vaccines so far, covering only 15.5% of the total population, while only 3.6% of the total population is fully vaccinated. India has authorized the British Oxford–AstraZeneca vaccine (Covishield), the Indian BBV152 vaccine (Covaxin), and the Russian Sputnik V vaccine for emergency use. In the present study, we collected state-wise data for both the first and second waves and analyzed them using MS Excel Version 2019 and SPSS Statistics Version 26. Following the trends, we predicted the characteristics of the upcoming third wave of COVID-19 and recommended strategies, early actions, and measures that the public health system in India can take to combat the third wave more effectively.
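The core of the state-wise wave comparison described above can be sketched as a simple peak-ratio calculation: for each state, divide the second-wave peak daily case count by the first-wave peak to get a growth factor. All case counts and state names below are placeholder values for illustration, not the study's data:

```python
# Hypothetical state-wise peak daily case counts for waves 1 and 2.
peak_cases = {
    # state: (first-wave peak, second-wave peak) -- illustrative values only
    "State A": (10_000, 40_000),
    "State B": (5_000, 22_000),
    "State C": (8_000, 24_000),
}

# Growth factor of the second wave over the first, per state.
growth = {state: second / first for state, (first, second) in peak_cases.items()}

for state, factor in sorted(growth.items(), key=lambda kv: -kv[1]):
    print(f"{state}: second wave peaked at {factor:.1f}x the first wave")
```

Ranking states by this factor is one way to flag where a third wave would likely strain hospital and oxygen capacity first; the study performs the full analysis in Excel and SPSS.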

Keywords: COVID-19, vaccination, Covishield, coronavirus

Procedia PDF Downloads 217
1611 Convectory Policing: Reconciling Historic and Contemporary Models of Police Service Delivery

Authors: Mark Jackson

Abstract:

Description: This paper is based on a theoretical analysis of the efficacy of the dominant model of policing in western jurisdictions. Those results are then compared with a similar analysis of a traditional reactive model. It is found that neither model provides for optimal delivery of services. Instead, optimal service can be achieved by a synchronous hybrid model, termed the Convectory Policing approach. Methodology and Findings: For over three decades, problem-oriented policing (PO) has been the dominant model for western police agencies. Initially based on the work of Goldstein during the 1970s, the problem-oriented framework has spawned endless variants and approaches, most of which embrace a problem-solving rather than a reactive approach to policing. These include the Area Policing Concept (APC) applied in many smaller jurisdictions in the USA, the Scaled Response Policing Model (SRPM) currently under trial in Western Australia, and the Proactive Pre-Response Approach (PPRA), which has also seen some success. All of these, in some way or another, are largely based on a model that eschews a traditional reactive model of policing. Convectory Policing (CP) is an alternative model which challenges the underpinning assumptions that have seen the proliferation of the PO approach in the last three decades, and it commences by questioning the economics on which PO is based. It is argued that, in essence, PO relies on an unstated, and often unrecognised, assumption that resources will be available to meet demand for policing services, while at the same time maintaining the capacity to deploy staff to develop solutions to the problems which were ultimately manifested in those same calls for service. The CP model relies on observations from numerous western jurisdictions to challenge the validity of that underpinning assumption, particularly in a fiscally tight environment.
In deploying staff to pursue and develop solutions to underpinning problems, there is clearly an opportunity cost. Those same staff cannot be allocated to alternative duties while engaged in a problem-solution role. At the same time, resources in use responding to calls for service are unavailable, while committed to that role, to pursue solutions to the problems giving rise to those same calls for service. The two approaches, reactive and PO, are therefore dichotomous: one cannot be optimised while the other is being pursued. Convectory Policing is a pragmatic response to the schism between the competing traditional and contemporary models. If it is not possible to serve either model with any real rigour, it becomes necessary to tailor an approach to deliver specific outcomes against which success or otherwise might be measured. CP proposes that a structured, roster-driven approach to calls for service, combined with the application of what is termed a resource-effect response capacity, has the potential to resolve the inherent conflict between traditional and contemporary models of policing and the expectations of the community in terms of community-policing-based problem-solving models.

Keywords: policing, reactive, proactive, models, efficacy

Procedia PDF Downloads 484
1610 Effect of L-Dopa on Performance and Carcass Characteristics in Broiler Chickens

Authors: B. R. O. Omidiwura, A. F. Agboola, E. A. Iyayi

Abstract:

The pure form of L-Dopa is used to enhance muscular development and fat breakdown and to suppress Parkinson's disease in humans. However, the L-Dopa in mucuna seed, when present with other antinutritional factors, causes nutritional disorders in monogastric animals. Information on the utilisation of pure L-Dopa in monogastric animals is scanty. Therefore, the effect of L-Dopa on growth performance and carcass characteristics in broiler chickens was investigated. Two hundred and forty one-day-old chicks were allotted to six treatments, which consisted of a positive control (PC) with standard energy (3100 Kcal/kg) and a negative control (NC) with high energy (3500 Kcal/kg). The remaining four diets were NC+0.1, NC+0.2, NC+0.3 and NC+0.4% L-Dopa, respectively. All treatments had 4 replicates in a completely randomized design. Body weight gain, final weight, feed intake, dressed weight, and carcass characteristics were determined. The body weight gain and final weight of birds fed PC (1791.0 and 1830.0 g), NC+0.1% L-Dopa (1827.7 and 1866.7 g) and NC+0.2% L-Dopa (1871.9 and 1910.9 g), and the feed intake of PC (3231.5 g), were better than those of the other treatments. The dressed weights of 1375.0 g and 1357.1 g for birds fed NC+0.1% and NC+0.2% L-Dopa, respectively, were similar to each other but better than those of the other treatments. Likewise, the thigh (202.5 g and 194.9 g) and the breast meat (413.8 g and 410.8 g) of birds fed NC+0.1% and NC+0.2% L-Dopa, respectively, were similar but better than those of birds fed other treatments. The drumstick of birds fed NC+0.1% L-Dopa (220.5 g) was observed to be better than that of birds on other diets. Meat-to-bone ratio and relative organ weights were not affected across treatments. L-Dopa, at the levels tested, had no detrimental effect on broilers; rather, better bird performance and carcass characteristics were observed, especially at the 0.1% and 0.2% L-Dopa inclusion rates. Therefore, 0.2% inclusion is recommended in diets of broiler chickens for improved performance and carcass characteristics.
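A standard efficiency measure behind comparisons like the one above is the feed conversion ratio (FCR): feed intake divided by body weight gain, with lower values indicating better efficiency. The abstract does not report FCR directly, but it can be derived from the PC figures it does give (3231.5 g intake, 1791.0 g gain); a minimal sketch:

```python
# Feed conversion ratio: grams of feed consumed per gram of weight gained.
# Lower is better. Input values are the PC treatment figures from the abstract.

def feed_conversion_ratio(feed_intake_g: float, weight_gain_g: float) -> float:
    """FCR = feed intake / body weight gain (both in grams)."""
    return feed_intake_g / weight_gain_g

fcr_pc = feed_conversion_ratio(3231.5, 1791.0)
print(f"PC feed conversion ratio: {fcr_pc:.2f}")  # PC feed conversion ratio: 1.80
```

The same calculation applied to each treatment's intake and gain would make the efficiency ranking across L-Dopa inclusion levels explicit.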

Keywords: broilers, carcass characteristics, l-dopa, performance

Procedia PDF Downloads 310
1609 An Investigation into Why Liquefaction Charts Work: A Necessary Step toward Integrating the States of Art and Practice

Authors: Tarek Abdoun, Ricardo Dobry

Abstract:

This paper is a systematic effort to clarify why field liquefaction charts based on Seed and Idriss' Simplified Procedure work so well. This is a necessary step toward integrating the states of the art (SOA) and practice (SOP) for evaluating liquefaction and its effects. The SOA relies mostly on laboratory measurements and correlations with the void ratio and relative density of the sand. The SOP is based on field measurements of penetration resistance and shear wave velocity coupled with empirical or semi-empirical correlations. This gap slows down further progress in both SOP and SOA. The paper accomplishes its objective through: a literature review of relevant aspects of the SOA, including factors influencing the threshold shear strain and pore pressure buildup during cyclic strain-controlled tests; a discussion of factors influencing field penetration resistance and shear wave velocity; and a discussion of the meaning of the curves in the liquefaction charts separating liquefaction from no liquefaction, helped by recent full-scale and centrifuge results. It is concluded that the charts are curves of constant cyclic strain at the lower end (Vs1 < 160 m/s), with this strain being about 0.03 to 0.05% for earthquake magnitude Mw ≈ 7. It is also concluded, in a more speculative way, that the curves at the upper end probably correspond to a variable increasing cyclic strain and Ko, with this upper end controlled by overconsolidated and preshaken sands, and with cyclic strains needed to cause liquefaction being as high as 0.1 to 0.3%. These conclusions are validated by application to case histories corresponding to Mw ≈ 7, mostly in the San Francisco Bay Area of California during the 1989 Loma Prieta earthquake.
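The Simplified Procedure underlying these charts estimates the earthquake-induced cyclic stress ratio as CSR = 0.65 · (a_max/g) · (σv/σ'v) · r_d (Seed and Idriss, 1971), which is then compared against the chart's cyclic resistance curve at the measured penetration resistance or Vs1. A minimal sketch; the input values are illustrative, not taken from the case histories in the paper:

```python
# Seed-Idriss Simplified Procedure for the cyclic stress ratio:
#   CSR = 0.65 * (a_max / g) * (sigma_v / sigma'_v) * r_d
# where a_max is peak ground surface acceleration, sigma_v and sigma'_v are
# total and effective vertical stresses at depth, and r_d is the stress
# reduction factor accounting for soil column flexibility.

def cyclic_stress_ratio(a_max_over_g: float, total_stress: float,
                        effective_stress: float, r_d: float) -> float:
    """Earthquake-induced CSR at a given depth (stresses in consistent units)."""
    return 0.65 * a_max_over_g * (total_stress / effective_stress) * r_d

# Illustrative inputs: a_max = 0.30 g, sigma_v = 150 kPa,
# sigma'_v = 100 kPa, r_d = 0.96 (shallow depth).
print(round(cyclic_stress_ratio(0.30, 150.0, 100.0, 0.96), 4))  # 0.2808
```

The paper's argument is that, at the low-Vs1 end of the charts, a given CSR maps to a nearly constant cyclic shear strain of roughly 0.03-0.05%, which is why the field curves behave so consistently.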

Keywords: permeability, lateral spreading, liquefaction, centrifuge modeling, shear wave velocity charts

Procedia PDF Downloads 297
1608 Cyclic Behaviour of Wide Beam-Column Joints with Shear Strength Ratios of 1.0 and 1.7

Authors: Roy Y. C. Huang, J. S. Kuang, Hamdolah Behnam

Abstract:

Beam-column connections play an important role in the reinforced concrete moment-resisting frame (RCMRF), which is one of the most commonly used structural systems around the world. The premature failure of such connections would severely limit the seismic performance and increase the vulnerability of the RCMRF. In the past decades, researchers primarily focused on investigating the structural behaviour and failure mechanisms of conventional beam-column joints, in which the beam width is either smaller than or equal to the column width, while studies on wide beam-column joints were scarce. This paper presents the preliminary experimental results of two full-scale exterior wide beam-column connections, designed and detailed according to ACI 318-14 and ACI 352R-02, under reversed cyclic loading. The ratios of the design shear force to the nominal shear strength of these specimens are 1.0 and 1.7, respectively, so as to probe the differences between the experimentally measured joint shear strengths and the predictions of design codes of practice. Flexural failure dominated in the specimen with a ratio of 1.0, in which full-width plastic hinges were observed, while both beam hinges and post-peak joint shear failure occurred in the other specimen. No sign of premature joint shear failure was found, which is inconsistent with the ACI codes' prediction. Finally, a modification of the current codes of practice is provided to accurately predict the joint shear strength in wide beam-column joints.
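The nominal joint shear strength in the ACI format takes the form Vn = γ·√f'c·b_j·h (psi and inch units), with γ set by the joint's confinement classification (γ = 12 is commonly associated with exterior/corner conditions, but the appropriate value depends on the code edition and joint type). A hedged sketch of how the design-shear-to-strength ratio in the abstract is formed; the section dimensions and concrete strength below are assumed for illustration only:

```python
import math

# Nominal joint shear strength in the ACI psi/inch format:
#   Vn = gamma * sqrt(f'c) * b_j * h
# gamma: joint classification factor; f'c: concrete strength (psi);
# b_j: effective joint width (in); h: column depth (in).

def joint_shear_strength(gamma: float, fc_psi: float,
                         bj_in: float, h_in: float) -> float:
    """Nominal joint shear strength Vn in pounds."""
    return gamma * math.sqrt(fc_psi) * bj_in * h_in

# Assumed example section: gamma = 12, f'c = 4000 psi, b_j = h = 20 in.
vn = joint_shear_strength(gamma=12, fc_psi=4000, bj_in=20, h_in=20)
print(f"Vn ~ {vn / 1000:.0f} kips")
```

The specimens' ratios of 1.0 and 1.7 correspond to design shear demands Vu equal to 1.0·Vn and 1.7·Vn computed in this manner.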

Keywords: joint shear strength, reversed cyclic loading, seismic vulnerability, wide beam-column joints

Procedia PDF Downloads 324
1607 Preparation of Carbon Nanofiber Reinforced HDPE Using Dialkylimidazolium as a Dispersing Agent: Effect on Thermal and Rheological Properties

Authors: J. Samuel, S. Al-Enezi, A. Al-Banna

Abstract:

High-density polyethylene reinforced with carbon nanofibers (HDPE/CNF) was prepared via melt processing using dialkylimidazolium tetrafluoroborate (an ionic liquid) as a dispersion agent. The prepared samples were characterized by thermogravimetric (TGA) and differential scanning calorimetric (DSC) analyses. The samples blended with the imidazolium ionic liquid exhibit higher thermal stability. DSC analysis showed clear miscibility of the ionic liquid in the HDPE matrix, with a single endothermic peak. The melt rheological analysis of the HDPE/CNF composites was performed using an oscillatory rheometer. The influence of CNF and ionic liquid concentration (0, 0.5, and 1 wt%) on the viscoelastic parameters was investigated at 200 °C over an angular frequency range of 0.1 to 100 rad/s. The rheological analysis shows shear-thinning behavior for the composites. An improvement in the viscoelastic properties was observed as the nanofiber concentration increased. The increase in the modulus values was attributed to the structural rigidity imparted by the high-aspect-ratio CNF. The modulus values and complex viscosity of the composites increased significantly at low frequencies. Composites blended with the ionic liquid exhibit slightly lower values of complex viscosity and modulus than the corresponding HDPE/CNF compositions. The reduction in melt viscosity, a result of the wetting effect of the polymer-ionic liquid combination, is therefore an additional benefit for polymer composite processing.
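Shear thinning of the kind reported here is commonly quantified by fitting the complex viscosity to a power law, η*(ω) = K·ω^(n−1), where a flow index n < 1 indicates shear-thinning behavior. A minimal sketch of the fit; the (ω, η*) points below are synthetic values generated for illustration, not measurements from the paper:

```python
import math

# Power-law fit of complex viscosity: eta*(omega) = K * omega**(n - 1).
# Taking logs gives a straight line: log(eta) = log(K) + (n - 1) * log(omega),
# so K and n follow from an ordinary least-squares fit in log-log space.

def fit_power_law(omega, eta):
    """Return (K, n) from least-squares on log(eta) vs log(omega)."""
    x = [math.log(w) for w in omega]
    y = [math.log(e) for e in eta]
    m = len(x)
    x_mean, y_mean = sum(x) / m, sum(y) / m
    slope = sum((xi - x_mean) * (yi - y_mean) for xi, yi in zip(x, y)) \
        / sum((xi - x_mean) ** 2 for xi in x)
    k = math.exp(y_mean - slope * x_mean)
    return k, slope + 1.0  # consistency K, flow index n

omega = [0.1, 1.0, 10.0, 100.0]                    # rad/s, as in the sweep above
eta = [1.0e4 * w ** (0.4 - 1.0) for w in omega]    # synthetic data with n = 0.4
k, n = fit_power_law(omega, eta)
print(f"K = {k:.3g} Pa.s, n = {n:.2f}")            # n < 1 => shear thinning
```

Comparing fitted n values across CNF loadings (and with vs. without the ionic liquid) would make the trends described above quantitative: stronger shear thinning typically accompanies a more developed nanofiber network.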

Keywords: high-density polyethylene, carbon nanofibers, ionic liquid, complex viscosity

Procedia PDF Downloads 127
1606 The Unique Electrical and Magnetic Properties of Thorium Di-Iodide Indicate the Arrival of Its Superconducting State

Authors: Dong Zhao

Abstract:

Even though the recent claim of room-temperature superconductivity by LK-99 was confirmed to be an unsuccessful attempt, this work reawakened the century-long striving to obtain applicable superconductors with a Tc of room temperature or higher under ambient pressure. One of these efforts focused on exploring thorium salts. This is because certain thorium compounds revealed an unusual property of having both high electrical conductivity and diamagnetism, the so-called "coexistence of high electrical conductivity and diamagnetism." It is well known that this coexistence is held by superconductors because of electron pairing. Consequently, the likelihood that these thorium compounds have superconducting properties becomes great. Surprisingly, these thorium salts possess this property at room temperature and atmospheric pressure. This gives rise to solid evidence that these thorium compounds are room-temperature superconductors without the need for external pressure. Among the thorium compound superconductors claimed in that work, thorium di-iodide (ThI₂) is a unique one and has received comprehensive discussion. ThI₂ was synthesized and structurally analyzed by the single-crystal diffraction method in the 1960s. Its special property of coexisting high electrical conductivity and diamagnetism was revealed. Because of this unique property, a special molecular configuration was sketched. Instead of the ordinary oxidation state of +2 for the cation, thorium's oxidation state in ThI₂ is +4. According to the experimental results, the actual molecular configuration of ThI₂ was determined to be an unusual one: [Th4+(e-)2](I-)2. This means that the cation of ThI₂ is composed of a [Th4+(e-)2]2+ cation core. In other words, the cation of ThI₂ is constructed by combining an oxidation state of +4 on the thorium atom with a pair of electrons, an electron lone pair, located on the thorium atom.
This combination of the thorium atom and the electron lone pair leads to an oxidation state of +2 for the [Th4+(e-)2]2+ cation core. This special construction of the thorium cation is very distinctive and is believed to be the factor that grants ThI₂ room-temperature superconductivity. Actually, the key for ThI₂ to become a room-temperature superconductor is this characteristic electron lone pair residing on the thorium atom, along with the formation of a network constructed by the thorium atoms. This network is specialized in a way that allows the electron lone pairs to hop over it and, thus, to generate the supercurrent. This work discusses, in detail, the special electrical and magnetic properties of ThI₂ as well as its structural features at ambient conditions. It also explores how the electron pairing, in combination with the structurally specialized network, works to bring ThI₂ into a superconducting state. From the experimental results, strong evidence definitely points out that ThI₂ should be a superconductor, at least at room temperature and under atmospheric pressure.

Keywords: co-existence of high electrical conductivity and diamagnetism, electron lone pair, room temperature superconductor, special molecular configuration of thorium di-iodide ThI₂

Procedia PDF Downloads 59
1605 Comparison of Soil Test Extractants for Determination of Available Soil Phosphorus

Authors: Violina Angelova, Stefan Krustev

Abstract:

The aim of this work was to evaluate the effectiveness of different soil test extractants for the determination of available soil phosphorus in five internationally certified standard soils, sludge and clay (NCS DC 85104, NCS DC 85106, ISE 859, ISE 952, ISE 998). The certified samples were extracted with the following methods/extractants: CaCl₂, CaCl₂ and DTPA (CAT), double lactate (DL), ammonium lactate (AL), calcium acetate lactate (CAL), Olsen, Mehlich 3, Bray and Kurtz I, and Morgan, which are commonly used in soil testing laboratories. The phosphorus in the soil extracts was measured colorimetrically using a Spectroquant Pharo 100 spectrometer. The methods used in the study were evaluated according to the recovery of available phosphorus, ease of application and rapidity of performance. The relationships between the methods were examined statistically. Good agreement of the results from the different soil tests was established for all certified samples. In general, the P values extracted by the nine extraction methods significantly correlated with each other. When grouping the soils according to pH, organic carbon content and clay content, the weaker extraction methods showed analogous trends; common tendencies were also found among the stronger extraction methods. Other factors influencing the extraction strength of the different methods include the soil:solution ratio, as well as the duration and intensity of shaking the samples. The mean extractable P in the certified samples was found to be in the order CaCl₂ < CAT < Morgan < Bray and Kurtz I < Olsen < CAL < DL < Mehlich 3 < AL. Although the nine methods extracted different amounts of P from the certified samples, the values of P extracted by the different methods were strongly correlated among themselves. Acknowledgment: The financial support by the Bulgarian National Science Fund Projects DFNI Н04/9 and DFNI Н06/21 is greatly appreciated.
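The cross-method agreement described above is usually quantified with pairwise Pearson correlations of the extracted P values. A minimal sketch of that computation follows; the method names match the abstract, but the extraction values are entirely hypothetical and are used only to illustrate the calculation:

```python
def pearson_r(xs, ys):
    """Pearson correlation between P values extracted by two methods."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical extractable-P values (mg/kg) for five certified samples:
p_al = [42.0, 18.5, 30.1, 55.2, 25.4]    # ammonium lactate (a stronger extractant)
p_cacl2 = [4.1, 1.9, 2.8, 5.6, 2.3]      # CaCl2 (a weaker extractant)
r = pearson_r(p_al, p_cacl2)
print(round(r, 3))  # close to 1.0: different amounts extracted, but strongly correlated
```

A high r despite a roughly tenfold difference in extracted amounts mirrors the abstract's finding that the nine methods ranked the samples consistently even though their extraction strengths differed.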

Keywords: available soil phosphorus, certified samples, determination, soil test extractants

Procedia PDF Downloads 153
1604 Design and Manufacture of a Hybrid Gearbox Reducer System

Authors: Ahmed Mozamel, Kemal Yildizli

Abstract:

Due to mechanical energy losses and the competitive drive to minimize these losses and increase machine efficiency, the need for contactless gearing systems has arisen. In this work, one stage of a mechanical planetary gear transmission system integrated with one stage of a magnetic planetary gear system is designed as a two-stage hybrid gearbox system. The internal energy of the permanent magnets, in the form of the magnetic field, is used to create meshing between contactless magnetic rotors in order to provide the system with protection against overloading and to decrease the mechanical loss of the transmission system by eliminating friction losses. Classical methods, such as the analytical and tabular methods and the theory of elasticity, are used to calculate the planetary gear design parameters. The finite element method (ANSYS Maxwell) is used to predict the behavior of the magnetic gearing system. The concentric magnetic gearing system has been modeled and analyzed using the 2D finite element method (ANSYS Maxwell). In addition, the design and manufacturing processes of the prototype components of the gearbox system (a planetary gear, a concentric magnetic gear, the shafts and the bearing selection) are investigated. The output force, output moment, output power and efficiency of the hybrid gearbox system are experimentally evaluated. The viability of applying a magnetic force to transmit mechanical power through a non-contact gearing system is presented. The experimental test results show that the system is capable of operating continuously within the speed range of 400 rpm to 3000 rpm, with a reduction ratio of 2:1 and a maximum efficiency of 91%.

Keywords: hybrid gearbox, mechanical gearboxes, magnetic gears, magnetic torque

Procedia PDF Downloads 154
1603 Analysis of Adolescents Birth Rate in Zimbabwe: The Case of High Widening Gap between Rural and Urban Areas, Secondary Analysis from the 2022 National Population and Housing Census

Authors: Mercy Marimirofa, Farai Machinga, Alfred Zvoushe, Tsitsidzaishe Musvosvi

Abstract:

The adolescent birth rate (ABR) is an important indicator of both gender equality and equity in a country. It is the number of births to women aged between 15 and 19 years per 1,000 women in that age group. There has been a decreasing trend in ABR in Zimbabwe since 2014; however, the difference between rural areas and urban areas has continued to widen. A secondary analysis was conducted to assess the differences in ABR between the rural and urban areas of Zimbabwe, to determine the root causes of high ABR in rural areas compared to urban areas, and to assess the impact this may have on the economic development of the nation. The analysis was done according to geographical characteristics (provinces). A total of 69,335 females aged 10 to 19 years had live births among a total population of 791,914 females aged 15 to 19 years. The total ABR in Zimbabwe is 87 per 1,000, while in rural areas it is 114.4 per 1,000 compared to urban areas, where it is 49.7 per 1,000. A decrease in ABR has been recorded since 2014, from 143 per 1,000 among adolescents in rural areas to 97 per 1,000 in urban areas. This shows that rural areas still have high ABR compared to their urban counterparts, and the gap is still wide. High ABR is a result of early child marriages, teenage pregnancies and poverty. Most of these marriages (46%) are intergenerational relationships and have resulted in an increase in gender-based violence cases among adolescents and poor health outcomes, including pregnancy complications such as eclampsia, cephalopelvic disproportion (CPD) and obstructed labour. Maternal deaths among adolescents are also high compared to adults. Furthermore, school dropouts among adolescent girls are on the rise due to teen pregnancies. These challenges are faced mostly by rural adolescent girls as compared to their urban counterparts.
The widening gap in ABR between urban areas and rural areas is a matter of concern and needs to be addressed. There is a need to inform policy, programming and interventions targeting rural areas to address the challenges and gaps in reducing ABR. This abstract aims to inform policymakers of the strategies and resources required to address the challenges currently distressing adolescents. There is a need to improve adolescents' access to Sexual and Reproductive Health (SRH) services, and the age of consent for accessing SRH services should be reduced from 18 years to ease access for young people and reduce teenage pregnancies. Comprehensive sexuality education, both in school and out of school, should be strengthened to increase knowledge among young people about sexuality.
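The rates quoted above follow directly from the census counts; a minimal sketch of the calculation, using only the figures stated in this abstract:

```python
def adolescent_birth_rate(live_births, women_15_19):
    """Live births to women aged 15-19 per 1,000 women in that age group."""
    return live_births / women_15_19 * 1000

# Census figures quoted above: 69,335 live births among 791,914 women aged 15-19.
national_abr = adolescent_birth_rate(69_335, 791_914)
print(int(national_abr))  # 87, matching the national rate reported above

# The rural-urban gap reported above (114.4 vs 49.7 per 1,000):
gap = 114.4 - 49.7
print(round(gap, 1))  # 64.7 more births per 1,000 women in rural areas
```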

Keywords: adolescence birth rate, live birth, teenage pregnancies, SRH services

Procedia PDF Downloads 82
1602 Community Forest Management and Ecological and Economic Sustainability: A Two-Way Street

Authors: Sony Baral, Harald Vacik

Abstract:

This study analyzes the sustainability of community forest management in two community forests in the Terai and Hills of Nepal, representing four forest types: 1) Shorea robusta, 2) Terai hardwood, 3) Schima-Castanopsis, and 4) other Hills. The sustainability goals for this region include maintaining and enhancing the forest stocks. Considering this, we analysed changes in species composition, stand density, growing stock volume, and the growth-to-removal ratio at 3-5 year intervals from 2005-2016 within 109 permanent forest plots (57 in the Terai and 52 in the Hills). To complement the inventory data, forest users, forest committee members, and forest officials were consulted. The results indicate that the relative representation of economically valuable tree species has increased. Based on trends in stand density, both forests are being sustainably managed. Pole-sized trees dominated the diameter distribution, however, with a limited number of mature trees and declining regeneration. The forests were over-harvested until 2013 but under-harvested in the recent period in the Hills. In contrast, both forest types were under-harvested throughout the inventory period in the Terai. We found that the ecological dimension of sustainable forest management is strongly achieved, while the economic dimension lags behind the current potential. Thus, we conclude that maintaining a large number of trees in the forest does not necessarily ensure both ecological and economic sustainability. Instead, priority should be given to a rational estimation of the annual harvest rates to enhance forest resource conditions together with regular benefits to the local communities.

Keywords: community forests, diversity, growing stock, forest management, sustainability, nepal

Procedia PDF Downloads 98
1601 Understanding Mathematics Achievements among U. S. Middle School Students: A Bayesian Multilevel Modeling Analysis with Informative Priors

Authors: Jing Yuan, Hongwei Yang

Abstract:

This paper aims to understand U.S. middle school students' mathematics achievements by examining relevant student- and school-level predictors. Through a variance component analysis, the study first identifies evidence supporting the use of multilevel modeling. Then, a multilevel analysis is performed under Bayesian statistical inference, where prior information is incorporated into the modeling process. During the analysis, independent variables are entered sequentially in the order of theoretical importance to create a hierarchy of models. By evaluating each model using Bayesian fit indices, a best-fitting and most parsimonious model is selected, on which Bayesian statistical inference is performed for the purpose of result interpretation and discussion. The primary dataset for Bayesian modeling is derived from the Program for International Student Assessment (PISA) in 2012, with a secondary PISA dataset from 2003 analyzed under the traditional ordinary least squares method to provide the information needed to specify informative priors for a subset of the model parameters. The dependent variable is a composite measure of mathematics literacy, calculated from an exploratory factor analysis of all five PISA 2012 mathematics achievement plausible values, for which multiple pieces of evidence support data unidimensionality. The independent variables include demographic variables and content-specific variables: mathematics efficacy, teacher-student ratio, proportion of girls in the school, etc. Finally, the entire analysis is performed using the MCMCpack and MCMCglmm packages in R.
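The variance component step mentioned above typically computes an intraclass correlation coefficient (ICC) from a null (intercept-only) model to justify moving to multilevel modeling. A minimal sketch of that criterion, with hypothetical variance components that are illustrative only and do not come from the paper:

```python
def intraclass_correlation(between_school_var, within_student_var):
    """Share of total outcome variance attributable to schools.

    An ICC well above roughly 0.05 is a common rule-of-thumb justification
    for fitting a multilevel model rather than a single-level regression.
    """
    return between_school_var / (between_school_var + within_student_var)

# Illustrative (hypothetical) variance components from a null model:
icc = intraclass_correlation(between_school_var=1500.0, within_student_var=6500.0)
print(icc)  # 0.1875 -> substantial school-level clustering
```

When the ICC is this large, ignoring the school level would understate standard errors, which is exactly the evidence the variance component analysis above is meant to surface.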

Keywords: Bayesian multilevel modeling, mathematics education, PISA, multilevel

Procedia PDF Downloads 336
1600 A Review of How COVID-19 Has Created an Insider Fraud Pandemic and How to Stop It

Authors: Claire Norman-Maillet

Abstract:

Insider fraud, including its various synonyms such as occupational, employee or internal fraud, is a major financial crime threat whereby an employee defrauds (or attempts to defraud) their current, prospective, or past employer. 'Employee' covers anyone employed by the company, including contractors, directors, and part-time staff; they may be a solo bad actor or working in collusion with others, whether internal or external. Insider fraud is even more of a concern given the impacts of the Coronavirus pandemic, which has generated multiple opportunities to commit insider fraud. Insider fraud is not necessarily thought of as a significant financial crime threat; the focus of most academics and practitioners has historically been on 'external fraud' against businesses or entities by individuals or groups with no professional ties to them. Without face-to-face, 'over-the-shoulder' oversight of employees, there is a heightened reliance on trust and transparency; with this, naturally, comes an increased risk of insider fraud perpetration. The objective of the research is to better understand how companies are impacted by insider fraud, and therefore how to stop it. This research will make an original contribution and stimulate debate within the financial crime field. The financial crime landscape is never static: criminals are always creating new ways to perpetrate financial crime, and new legislation and regulations are implemented as attempts to strengthen controls, in addition to businesses doing what they can internally to detect and prevent it. By focusing on insider fraud specifically, the research will be more specific and of greater use to those in the field.
To achieve the aims of the research, semi-structured interviews were conducted with 22 individuals who either work in financial services and deal with insider fraud or work on insider fraud in a recruitment or advisory capacity. This enabled the sourcing of information from a wide range of individuals in a setting where they were able to elaborate on their answers. The principal recruitment strategy was engaging with the researcher's network on LinkedIn. The interviews were then transcribed and analysed thematically. The main findings suggest that insider fraud has been ignored owing to a refusal to accept the possibility that colleagues would defraud their employer. Whilst Coronavirus has led to a significant rise in insider fraud, this type of crime has been a major risk to businesses since their inception; however, it has never been given the financial or strategic backing required to mitigate it, until it is too late. Furthermore, Coronavirus should have led to companies tightening their access rights, controls and policies to mitigate the insider fraud risk; however, in most cases this has not happened. The research concludes that insider fraud needs to be given a platform upon which to be recognised as a threat to any company, and given the same level of weighting and attention by executive committees and boards as other types of economic crime.

Keywords: fraud, insider fraud, economic crime, coronavirus, Covid-19

Procedia PDF Downloads 69
1599 Self-Healing Phenomenon Evaluation in Cementitious Matrix with Different Water/Cement Ratios and Crack Opening Age

Authors: V. G. Cappellesso, D. M. G. da Silva, J. A. Arndt, N. dos Santos Petry, A. B. Masuero, D. C. C. Dal Molin

Abstract:

Concrete elements are subject to cracking, and cracks can be an access point for deleterious agents that trigger pathological manifestations, reducing the service life of these structures. Finding ways to minimize or eliminate the effects of the penetration of these aggressive agents, such as the sealing of these cracks, is a way of contributing to the durability of these structures. The cementitious self-healing phenomenon can be classified into two different processes. Autogenous self-healing can be defined as a natural process in which the sealing of cracks occurs without the stimulation of external agents, meaning without different materials being added to the mixture, while the autonomous self-healing phenomenon depends on the insertion of a specific engineered material added to the cement matrix in order to promote its recovery. This work aims to evaluate the autogenous self-healing of concretes produced with different water/cement ratios and exposed to wet/dry cycles, considering two ages of crack opening: 3 days and 28 days. The self-healing phenomenon was evaluated using two techniques: crack healing measurement using ultrasonic waves and image analysis performed with an optical microscope. By both methods, it is possible to observe the self-healing of the cracks. For young crack-opening ages and lower water/cement ratios, the self-healing capacity is higher compared to advanced crack-opening ages and higher water/cement ratios. Regardless of the crack-opening age, these concretes were found to stabilize the self-healing processes after 80 to 90 days.

Keywords: self-healing, autogenous, water/cement ratio, curing cycles, test methods

Procedia PDF Downloads 161
1598 Capacity of Cold-Formed Steel Warping-Restrained Members Subjected to Combined Axial Compressive Load and Bending

Authors: Maryam Hasanali, Syed Mohammad Mojtabaei, Iman Hajirasouliha, G. Charles Clifton, James B. P. Lim

Abstract:

Cold-formed steel (CFS) elements are increasingly being used as main load-bearing components in the modern construction industry, including low- to mid-rise buildings. In typical multi-storey buildings, CFS structural members act as beam-column elements, since they are exposed to combined axial compression and bending actions, both in moment-resisting frames and stud wall systems. Current design specifications, including the American Iron and Steel Institute (AISI S100) and the Australian/New Zealand Standard (AS/NZS 4600), neglect the beneficial effects of warping-restrained boundary conditions in the design of beam-column elements. Furthermore, while a non-linear relationship governs the interaction of axial compression and bending, the combined effect of these actions is taken into account through a simplified linear expression combining pure axial and flexural strengths. This paper aims to evaluate the reliability of the well-known Direct Strength Method (DSM) as well as design proposals found in the literature, to provide a better understanding of the efficiency of the code-prescribed linear interaction equation in the strength predictions of CFS beam-columns and of the effects of warping-restrained boundary conditions on their behavior. To this end, experimentally validated finite element (FE) models of CFS elements under compression and bending were developed in ABAQUS, accounting for both non-linear material properties and geometric imperfections. The validated models were then used for a comprehensive parametric study containing 270 FE models, covering a wide range of key design parameters, such as length (0.5, 1.5, and 3 m), thickness (1, 2, and 4 mm) and cross-sectional dimensions, under ten different load eccentricity levels. The results of this parametric study demonstrated that the DSM led to conservative strength predictions for beam-column members, by up to 55%, depending on the element's length and thickness.
This can be attributed to the errors associated with (i) the absence of warping-restrained boundary condition effects, (ii) the equations for the calculation of buckling loads, and (iii) the linear interaction equation. While the influence of warping restraint is generally less than 6%, the code-suggested interaction equation led to an average error of 4% to 22%, depending on the element length. This paper highlights the need to provide more reliable design solutions for CFS beam-column elements for practical design purposes.
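The code-prescribed linear interaction expression discussed above combines the pure axial and flexural strengths additively. A minimal sketch of that check follows; note that resistance factors and second-order moment amplification, which the actual AISI/AS-NZS provisions include, are omitted here, and the demand and capacity values are hypothetical:

```python
def linear_interaction_utilization(P, P_n, M, M_n):
    """Simplified code-style linear interaction check for a beam-column.

    utilization = P/Pn + M/Mn, with the section deemed adequate when <= 1.0.
    (Resistance factors and second-order amplification are omitted in this
    sketch; values below are hypothetical.)
    """
    return P / P_n + M / M_n

u = linear_interaction_utilization(P=40.0, P_n=100.0, M=6.0, M_n=12.0)
print(u)          # 0.9 -> passes the linear check
print(u <= 1.0)   # True
```

Because the true axial-flexural interaction is non-linear, a member can have real reserve capacity that this linear sum ignores, which is one source of the conservatism quantified in the parametric study above.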

Keywords: beam-columns, cold-formed steel, finite element model, interaction equation, warping-restrained boundary conditions

Procedia PDF Downloads 105
1597 The French Ekang Ethnographic Dictionary. The Quantum Approach

Authors: Henda Gnakate Biba, Ndassa Mouafon Issa

Abstract:

Dictionaries modeled on the Western template [designed for stress-accent languages] are not suitable for tonal languages and do not account for them phonologically, which is why the [prosodic and phonological] ethnographic dictionary was designed. It is a glossary that expresses the tones and the rhythm of words. It recreates exactly the speaking or singing of a tonal language and allows a non-speaker of the language to pronounce the words as if they were a native. It is a dictionary adapted to tonal languages. It was built from ethnomusicological theorems and phonological processes, following Jean-Jacques Rousseau's 1776 hypothesis that to say and to sing were once the same thing. Each word in the French dictionary finds its corresponding word in the Ekang language (ekaη), and each ekaη word is written on a musical staff. This ethnographic dictionary is also an inventive, original and innovative research thesis: a contribution to the theoretical, musicological, ethnomusicological and linguistic conceptualization of languages, giving rise to the practice of interlocution between the social and cognitive sciences, the activities of artistic creation and the question of modeling in the human sciences: mathematics, computer science, translation automation and artificial intelligence. When this theory is applied to any folk-song text in a tonal language, one can piece together not only the exact melody, rhythm and harmonies of that song, as if they were known in advance, but also the exact speaking of the language. The author believes that the issue of the disappearance of tonal languages and their preservation has been structurally resolved, as well as one of the greatest cultural equations related to the composition and creation of tonal, polytonal and random music.
The experimentation confirming the theorization resulted in a semi-digital, semi-analog application which translates the tonal languages of Africa (about 2,100 languages) into blues, jazz, world music, polyphonic music, tonal and atonal music, and deterministic and random music. To test this application, the author uses music reading and writing software to collect the data extracted from his mother tongue, already modeled in the musical staves saved in the ethnographic (semiotic) dictionary for automatic translation (volume 2 of the book). Translation is done from writing to writing, from writing to speech and from writing to music. Mode of operation: you type a text on your computer, a structured song (chorus-verse), and you ask the machine for a melody of blues, jazz, world music, variety, etc. The software runs, giving you the option to choose harmonies, and then you select your melody.

Keywords: music, language, entanglement, science, research

Procedia PDF Downloads 70
1596 Ecological-Economics Evaluation of Water Treatment Systems

Authors: Hwasuk Jung, Seoi Lee, Dongchoon Ryou, Pyungjong Yoo, Seokmo Lee

Abstract:

The Nakdong River, used as the drinking water source for the Busan metropolitan city, suffers from vulnerable water management because industrial areas are located on the upper Nakdong River. Most citizens of Busan think that the water quality of the Nakdong River is not good, so they boil tap water or use home filters to drink it, which imposes unnecessary individual costs on Busan citizens. Intake sources need to be diversified to reduce this cost and to replace the vulnerable water source. Against this background, this study carried out the environmental accounting of the Namgang Dam water treatment system compared to the Nakdong River water treatment system, using the emergy analysis method to support reasonable decision-making. The emergy analysis method quantitatively evaluates both the natural environment and human economic activities in an equal unit of measure. The emergy transformity of the Namgang Dam's water was 1.16 times larger than that of the Nakdong River's water; the Namgang Dam's water shows the larger emergy transformity because of its good water quality. The emergy used in making 1 m³ of tap water with the Namgang Dam water treatment system was 1.26 times larger than that of the Nakdong River water treatment system, owing to the construction cost of the new pipeline for intaking Namgang Dam water. If the Won used in making 1 m³ of tap water with the Nakdong River water treatment system is 1, the Namgang Dam water treatment system used 1.66; if the Em-won used is 1, the Namgang Dam system used 1.26. The cost-benefit ratio in Em-won was thus smaller than that in Won. When emergy analysis, which accounts for the benefit of a natural environment such as the good water quality of the Namgang Dam, is used, the Namgang Dam water treatment system could be a good alternative for diversifying the intake source.

Keywords: emergy, emergy transformity, Em-won, water treatment system

Procedia PDF Downloads 306
1595 Assessment of Urban Environmental Noise in Urban Habitat: A Spatial Temporal Study

Authors: Neha Pranav Kolhe, Harithapriya Vijaye, Arushi Kamle

Abstract:

Urban regions are the engines of economic growth. As the economy expands, so does the need for peace and quiet, and noise pollution is one of the important social and environmental issues. Health and wellbeing are at risk from environmental noise pollution. Because of urbanisation, population growth, and the consequent rise in the usage of increasingly potent, diverse, and highly mobile sources of noise, it is now more severe and pervasive than ever before, and it will only worsen as long as air, train, and highway traffic, which remain the main contributors to noise pollution, continue to increase. The current study was conducted in two zones of a class I city of central India (population range: 1 million to 4 million). A total of 56 measuring points were chosen to assess noise pollution. The first objective evaluates the noise pollution in various urban habitats, categorised as formal and informal settlements, and compares noise pollution between the settlements using a t-test. The second objective assesses the noise pollution in silent zones (as designated by the Central Pollution Control Board) in a hierarchical way. It also assesses the noise pollution in the settlements and compares it with the prescribed permissible limits, using class I sound level equipment. As appropriate indices, the A-weighted equivalent noise level and the minimum and maximum sound pressure levels were computed. The survey was conducted over a period of 1 week. ArcGIS was used to plot and map the temporal and spatial variability in urban settings. It was found that noise levels at most stations, particularly at heavily trafficked crossroads, squares, and subway stations, were significantly higher than the acceptable limits. The study highlights the vulnerable areas that should be considered in city planning. The study calls for area-level planning while preparing a development plan.
It also demands attention to noise pollution from the perspective of residential and silent zones. City planning in urban areas neglects noise pollution assessment at the city level; as a result, irrespective of noise pollution guidelines, the ground reality is far from compliant. The result is incompatible land use on a neighbourhood scale with respect to noise pollution. The study's final results will be useful to policymakers, architects and administrators in developing countries, supporting the governance of noise pollution in urban habitats through efficient decision-making and policy formulation.

Keywords: noise pollution, formal settlements, informal settlements, built environment, silent zone, residential area

Procedia PDF Downloads 119
1594 MAGE-A3 and PRAME Gene Expression and EGFR Mutation Status in Non-Small-Cell Lung Cancer

Authors: Renata Checiches, Thierry Coche, Nicolas F. Delahaye, Albert Linder, Fernando Ulloa Montoya, Olivier Gruselle, Karen Langfeld, An de Creus, Bart Spiessens, Vincent G. Brichard, Jamila Louahed, Frédéric F. Lehmann

Abstract:

Background: The RNA expression levels of the cancer-testis antigens MAGE-A3 and PRAME were determined in resected tissue from patients with primary non-small-cell lung cancer (NSCLC) and related to clinical outcome. EGFR, KRAS and BRAF mutation status was determined in a subset to investigate associations with MAGE-A3 and PRAME expression. Methods: We conducted a single-centre, uncontrolled, retrospective study of 1260 tissue-bank samples from stage IA-III resected NSCLC. The prognostic value of antigen expression (qRT-PCR) was determined by hazard ratios and Kaplan-Meier curves. Results: Thirty-seven percent (314/844) of tumours expressed MAGE-A3, 66% (723/1092) expressed PRAME and 31% (239/839) expressed both. The respective frequencies in squamous-cell tumours and adenocarcinomas were 43%/30% for MAGE-A3 and 80%/44% for PRAME. No correlation with stage, tumour size or patient age was found. Overall, no prognostic value was identified for either antigen. A trend to poorer overall survival was associated with MAGE-A3 in stage IIIB and with PRAME in stage IB. EGFR and KRAS mutations were found in 10.1% (28/311) and 33.8% (97/311) of tumours, respectively. EGFR (but not KRAS) mutation status was negatively associated with PRAME expression. Conclusion: No clear prognostic value for either PRAME or MAGE-A3 was observed in the overall population, although some observed trends may warrant further investigation.

Keywords: MAGE A3, PRAME, cancer-testis gene, NSCLC, survival, EGFR

Procedia PDF Downloads 384
1593 An Improved Total Variation Regularization Method for Denoising Magnetocardiography

Authors: Yanping Liao, Congcong He, Ruigang Zhao

Abstract:

The application of magnetocardiography signals to detect cardiac electrical function is a technology developed in recent years. The magnetocardiography signal is detected with Superconducting Quantum Interference Devices (SQUIDs) and has considerable advantages over electrocardiography (ECG). However, the Magnetocardiography (MCG) signal is buried in noise, and extracting it is a critical issue for cardiac monitoring systems and other MCG applications. In order to remove the severe background noise, the Total Variation (TV) regularization method is used to denoise the MCG signal. The approach transforms the denoising problem into a minimization problem, which the majorization-minimization algorithm solves iteratively. However, the traditional TV regularization method tends to cause a staircase effect and lacks constraint adaptability. In this paper, an improved TV regularization method for denoising the MCG signal is proposed to improve denoising precision. The improvement consists of three parts. First, higher-order TV is applied to reduce the staircase effect, with the corresponding second-order derivative matrix replacing the first-order one. Then, the positions of the non-zero elements in the second-order derivative matrix are determined from the peak positions detected by a detection window. Finally, adaptive constraint parameters are defined to suppress noise while preserving signal peak characteristics. Theoretical analysis and experimental results show that the algorithm effectively improves the output signal-to-noise ratio and has superior performance.
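The first-order TV baseline that the paper improves upon can be sketched as follows: minimize 0.5·||y − x||² + λ·||Dx||₁, where D is the first-difference matrix, and at each majorization-minimization step replace the l1 term with a weighted quadratic, which reduces the update to a linear solve. This is a minimal generic sketch (the parameter names and defaults are illustrative), not the authors' improved algorithm with higher-order TV and adaptive constraints:

```python
import numpy as np

def tv_denoise_mm(y, lam=1.0, n_iter=50, eps=1e-8):
    """First-order total-variation denoising via majorization-minimization.

    Minimizes 0.5*||y - x||^2 + lam*||D x||_1, D = first-difference matrix.
    Each MM step majorizes |t| by t^2 / (2|t_k|) + const, turning the
    update into the linear system (I + lam * D^T W D) x = y.
    """
    n = len(y)
    D = np.diff(np.eye(n), axis=0)          # (n-1) x n first-difference matrix
    x = y.copy()
    for _ in range(n_iter):
        w = 1.0 / (np.abs(D @ x) + eps)     # majorizer weights 1/|D x_k|
        A = np.eye(n) + lam * D.T @ (w[:, None] * D)
        x = np.linalg.solve(A, y)
    return x
```

The improved method in the paper swaps D for a second-order derivative matrix (reducing the staircase effect) and makes the constraint parameters adaptive around detected peaks; the MM structure of the iteration stays the same.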

Keywords: constraint parameters, derivative matrix, magnetocardiography, regular term, total variation

Procedia PDF Downloads 153
1592 The Bayesian Premium Under Entropy Loss

Authors: Farouk Metiri, Halim Zeghdoudi, Mohamed Riad Remita

Abstract:

Credibility theory is an experience-rating technique in actuarial science: a quantitative tool that allows insurers to adjust future premiums based on past experience. It is commonly used in automobile insurance, workers' compensation premiums, and IBNR (incurred-but-not-reported) claims, where it can be used to estimate the claim size. In this study, we focus on a popular tool in credibility theory, the Bayesian premium estimator, taking the Lindley distribution as the claim distribution. We derive this estimator under the entropy loss, which is asymmetric, and the squared error loss, which is symmetric, with both informative and non-informative priors. In a purely Bayesian setting, the prior distribution represents the insurer's prior belief about the insured's risk level, updated after collection of the insured's data at the end of the period. However, the explicit form of the Bayesian premium when the prior is not a member of the exponential family can be difficult to obtain, as it involves a number of integrals that are not analytically solvable. The paper addresses this problem by deriving the estimator using the Lindley approximation, a numerical method well suited to such problems: it approximates the ratio of the integrals as a whole and produces a single numerical result. A simulation study using the Monte Carlo method is then performed to evaluate this estimator, and the mean squared error criterion is used to compare the Bayesian premium estimator under the two loss functions.
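As an illustrative sketch (not the paper's derivation), the two loss functions lead to different Bayes rules for the Lindley parameter θ: under squared error loss the Bayes estimator is the posterior mean E[θ | x], while under the entropy loss L(δ, θ) = δ/θ − ln(δ/θ) − 1 it is the harmonic posterior mean 1/E[θ⁻¹ | x]. The sketch below computes both by brute-force numerical integration over a θ grid, assuming a gamma(a, b) prior with hypothetical hyperparameters and the Lindley likelihood f(x; θ) = θ²/(θ+1)·(1+x)·e^(−θx):

```python
import numpy as np

def bayes_estimators(x, a=2.0, b=1.0):
    """Bayes estimators of the Lindley parameter under two losses,
    via numerical integration over a theta grid (gamma(a, b) prior).

    squared error loss -> posterior mean E[theta | x]
    entropy loss       -> 1 / E[1/theta | x]  (harmonic posterior mean)
    """
    x = np.asarray(x, float)
    n, sx = len(x), x.sum()
    theta = np.linspace(1e-3, 20.0, 4000)       # integration grid
    # log posterior up to a constant: gamma prior * Lindley likelihood
    logpost = ((a - 1) * np.log(theta) - b * theta
               + 2 * n * np.log(theta) - n * np.log1p(theta) - theta * sx)
    w = np.exp(logpost - logpost.max())
    w /= w.sum()                                # normalized posterior weights
    se_est = np.sum(theta * w)                  # squared error loss
    ent_est = 1.0 / np.sum(w / theta)           # entropy loss
    return se_est, ent_est
```

By Jensen's inequality the harmonic posterior mean never exceeds the posterior mean, so the entropy-loss estimator is always the smaller of the two; the paper's contribution is obtaining such estimators in closed approximate form via the Lindley approximation rather than by numerical integration.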

Keywords: Bayesian estimator, credibility theory, entropy loss, Monte Carlo simulation

Procedia PDF Downloads 335
1591 Characterisation of Fractions Extracted from Sorghum Byproducts

Authors: Prima Luna, Afroditi Chatzifragkou, Dimitris Charalampopoulos

Abstract:

Sorghum byproducts, namely bran, stalk, and panicle, are examples of lignocellulosic biomass. These raw materials contain large amounts of polysaccharides, in particular hemicelluloses, celluloses, and lignins, which, if efficiently extracted, can be utilised to develop a range of added-value products with potential applications in the agriculture and food-packaging sectors. The aim of this study was to characterise fractions extracted from sorghum bran and stalk with regard to the physicochemical properties that could determine their applicability as food-packaging materials. A sequential alkaline extraction was applied to isolate the cellulosic, hemicellulosic and lignin fractions from sorghum stalk and bran. Lignin content, phenolic content and antioxidant capacity were also investigated for the lignin fraction. Thermal analysis using differential scanning calorimetry (DSC) and X-ray diffraction (XRD) revealed that the cellulose fraction of the stalk had a glass transition temperature (Tg) of ~78.33 °C, an amorphous content of ~65% and a water content of ~5%. For hemicellulose, the Tg of the stalk fraction was slightly lower than that of the bran fraction, with an amorphous content of ~54% and a lower water content (~2%). Hemicelluloses generally showed lower thermal stability than cellulose, probably due to their lack of crystallinity. Additionally, bran had a higher arabinose-to-xylose ratio (0.82) than the stalk, a fact that indicated its low crystallinity. Furthermore, the lignin fraction had a Tg of ~93 °C and an amorphous content of ~11%. The stalk-derived lignin fraction contained more phenolic compounds (mainly p-coumaric and ferulic acid) and had a higher lignin content and antioxidant capacity than the bran-derived lignin fraction.

Keywords: alkaline extraction, bran, cellulose, hemicellulose, lignin, stalk

Procedia PDF Downloads 300