1156 Exploration Tools for Tantalum-Bearing Pegmatites along Kibara Belt, Central and Southwestern Uganda
Authors: Sadat Sembatya
Abstract:
Tantalum metal is used to address the capacitance challenges of 21st-century technology growth. Tantalum is rarely found in its elemental form; it typically occurs together with niobium and the radioactive elements thorium and uranium, so industrial processes are required to extract pure tantalum. Its deposits are mainly oxide-associated, occurring in Ta-Nb oxides such as tapiolite, wodginite and ixiolite; rutile and pyrochlore-supergroup minerals are of minor importance. The stability and chemical inertness of tantalum make it a valuable substance for laboratory equipment and a substitute for platinum. Each period of tantalum ore formation is characterized by specific mineralogical and geochemical features. Compositions of columbite-group minerals (CGM) are variable: Fe-rich types predominate in the Man Shield (Sierra Leone), the Congo Craton (DR Congo), the Kamativi Belt (Zimbabwe) and the Jos Plateau (Nigeria), whereas Mn-rich columbite-tantalite is typical of the Alto Ligonha Province (Mozambique), the Arabian-Nubian Shield (Egypt, Ethiopia) and the Tantalite Valley pegmatites (southern Namibia). Large compositional variations arise through Fe-Mn fractionation followed by Nb-Ta fractionation; these are typical of pegmatites usually associated with very coarse quartz-feldspar-mica granites, such as the young granitic systems of the Kibara Belt of Central Africa and the Older Granites of Nigeria. Unlike ‘simple’ Be-pegmatites, most Ta-Nb-rich pegmatites show highly complex zoning. Hence we need systematic exploration tools to find and rapidly assess the potential of different pegmatites. The pegmatites exist as known deposits (e.g., abandoned mines) and as exposed or buried pegmatites.
We investigate rocks and minerals to trace possible effects of hydrothermal alteration (mainly for exposed pegmatites), carry out mineralogical study to establish evidence of gradual replacement, and use geochemistry to report trace elements that are good indicators of mineralisation. Pegmatites are poor geophysical responders, which rules out the geophysics option. For more advanced prospecting, we first bulk-sample different zones to establish their grades and characteristics, then run a pilot test plant (warranted by the large sample volumes) to aid the quantitative characterization of the zones, and finally drill to reveal the distribution and extent of the different zones, though not necessarily grade, due to the nugget effect. Rapid assessment tools are needed to gauge grade and degree of fractionation in order to ‘rule in’ or ‘rule out’ a given pegmatite for future work. Pegmatite exploration is also unique, high-risk and expensive; hence a sound traceability system and certification for the 3Ts are highly needed.
Keywords: exploration, mineralogy, pegmatites, tantalum
Procedia PDF Downloads 149
1155 Synthesis and Characterization of pH-Sensitive Graphene Quantum Dot-Loaded Metal-Organic Frameworks for Targeted Drug Delivery and Fluorescent Imaging
Authors: Sayed Maeen Badshah, Kuen-Song Lin, Abrar Hussain, Jamshid Hussain
Abstract:
Liver cancer is a significant global health issue, ranking fifth in incidence and second in mortality. Effective therapeutic strategies are urgently needed to combat this disease, particularly in regions with high prevalence. This study focuses on developing and characterizing fluorescent organometallic frameworks as distinct drug delivery carriers with potential applications in both the treatment and biological imaging of liver cancer. This work introduces two distinct organometallic frameworks: the cake-shaped GQD@NH₂-MIL-125 and the cross-shaped M8U6/FM8U6. The GQD@NH₂-MIL-125 framework is particularly noteworthy for its high fluorescence, making it an effective tool for biological imaging. X-ray diffraction (XRD) analysis revealed specific diffraction peaks at 6.81° (011), 9.76° (002), and 11.69° (121), with an additional significant peak at 26° (2θ) corresponding to the carbon material. Morphological analysis using Field Emission Scanning Electron Microscopy (FE-SEM) and Transmission Electron Microscopy (TEM) demonstrated that the framework has a front particle size of 680 nm and a side particle size of 55±5 nm. High-resolution TEM (HR-TEM) images confirmed the successful attachment of graphene quantum dots (GQDs) onto the NH₂-MIL-125 framework. Fourier-Transform Infrared (FT-IR) spectroscopy identified crucial functional groups within the GQD@NH₂-MIL-125 structure, including O-Ti-O metal bonds within the 500 to 700 cm⁻¹ range, and N-H and C-N bonds at 1,646 cm⁻¹ and 1,164 cm⁻¹, respectively. BET isotherm analysis further revealed a specific surface area of 338.1 m²/g and an average pore size of 46.86 nm. This framework also demonstrated UV-active properties, as identified by UV-visible light spectra, and its photoluminescence (PL) spectra showed an emission peak around 430 nm when excited at 350 nm, indicating its potential as a fluorescent drug delivery carrier.
In parallel, the cross-shaped M8U6/FM8U6 frameworks were synthesized and characterized using X-ray diffraction, which identified distinct peaks at 2θ = 7.4° (111), 8.5° (200), 9.2° (002), 10.8° (002), 12.1° (220), 16.7° (103), and 17.1° (400). FE-SEM, HR-TEM, and TEM analyses revealed particle sizes of 350±50 nm for M8U6 and 200±50 nm for FM8U6. These frameworks, synthesized from terephthalic acid (H₂BDC), displayed notable vibrational bonds, such as C=O at 1,650 cm⁻¹, Fe-O in MIL-88 at 520 cm⁻¹, and Zr-O in UiO-66 at 482 cm⁻¹. BET analysis showed specific surface areas of 740.1 m²/g with a pore size of 22.92 nm for M8U6 and 493.9 m²/g with a pore size of 35.44 nm for FM8U6. Extended X-ray Absorption Fine Structure (EXAFS) spectra confirmed the stability of Ti-O bonds in the frameworks, with bond lengths of 2.026 Å for MIL-125, 1.962 Å for NH₂-MIL-125, and 1.817 Å for GQD@NH₂-MIL-125. These findings highlight the potential of these organometallic frameworks for enhanced liver cancer therapy through precise drug delivery and imaging, representing a significant advancement in nanomaterial applications in biomedical science.
Keywords: liver cancer cells, metal-organic frameworks, Doxorubicin (DOX), drug release
Procedia PDF Downloads 9
1154 Early Childhood Education for Bilingual Children: A Cross-Cultural Examination
Authors: Dina C. Castro, Rossana Boyd, Eugenia Papadaki
Abstract:
Immigration within and across continents is currently a global reality. The number of people leaving their communities in search of a better life for themselves and their families has increased dramatically during the last twenty years. Therefore, young children of the 21st century around the world are growing up in diverse communities, exposed to many languages and cultures. One consequence of these migration movements is the increased linguistic diversity in school settings. Depending on the linguistic history and the status of languages in the communities (i.e., minority-majority; majority-majority), the instructional approaches will differ. This session will discuss how bilingualism is addressed in early education programs in both minority-majority and majority-majority language communities, analyzing experiences in three countries with very distinct societal and demographic characteristics: Peru (South America), the United States (North America), and Italy (European Union). The ultimate goal is to identify commonalities and differences across the three experiences that could lead to a discussion of bilingualism in early education from a global perspective. From Peru, we will discuss current national language and educational policies that have led to the design and implementation of bilingual and intercultural education for children in indigenous communities. We will also discuss how those policies are being implemented in preschool programs, the progress made and the challenges encountered. From the United States, we will discuss the early education of Spanish-English bilingual preschoolers, including the national policy environment, as well as variations in language-of-instruction approaches currently being used with these children. From Italy, we will describe early education practices in the Bilingual School of Monza, in northern Italy, a school with 20 years of experience promoting bilingualism and multilingualism in education.
While the presentations from Peru and the United States will discuss bilingualism in a majority-minority language environment, the Italian presentation will lead to a discussion of the opportunities and challenges of promoting bilingualism in a majority-majority language environment. It is evident that innovative models and policies are necessary to prevent inequality of opportunities for bilingual children beginning in their earliest years. The cross-cultural examination of bilingual education experiences for young children in three parts of the world will allow us to learn from our successes and challenges. The session will end with a discussion of the following question: To what extent are early care and education programs effective in promoting positive development and learning among all children, including those from diverse language, ethnic and cultural backgrounds? We expect to identify, with participants in our session, a set of recommendations for policy and program development that could ensure access to high-quality early education for all bilingual children.
Keywords: early education for bilingual children, global perspectives in early education, cross-cultural, language policies
Procedia PDF Downloads 298
1153 Jungle Justice on Emotional Health Challenges of Residents in Lagos Metropolis
Authors: Aaron Akinloye
Abstract:
This research focuses on the impact of jungle justice on the emotional health challenges experienced by residents of the Lagos metropolitan area in Nigeria. Jungle justice refers to the practice of individuals taking the law into their own hands and administering punishment without proper legal procedures. The aim of this study is to investigate the influence of jungle justice on the emotional challenges faced by residents in Lagos; the specific objectives are to examine its effects on trauma, pressure, fear, and depression among residents. The study adopts a descriptive survey research design and uses a questionnaire as the research instrument. The population of the study consists of residents of the three senatorial districts that make up Lagos State. Simple random sampling was used to select two Local Government Areas (Yaba and Shomolu) from each of the three senatorial districts, and fifty (50) residents were then drawn by accidental sampling from each of the chosen Local Government Areas, giving a sample of three hundred (300) residents. Data on the variables of interest were collected using a self-developed questionnaire. The research instrument underwent face, content, and construct validation, and its reliability coefficient was found to be 0.84. The study reveals that jungle justice significantly influences trauma, pressure, fear, and depression among residents of metropolitan Lagos. The statistical analysis shows significant relationships between jungle justice and these emotional health challenges (df (298), t = 2.33, p < 0.05; df (298), t = 2.16, p < 0.05; df (298), t = 2.20, p < 0.05; df (298), t = 2.14, p < 0.05).
This study contributes to the literature by highlighting the negative effects of jungle justice on the emotional well-being of residents, and it emphasizes the importance of addressing this issue and implementing measures to prevent such vigilante actions. Data were collected by administering the self-developed questionnaire to the selected residents and then analyzed using inferential statistics, specifically t-tests of means, to examine the relationships between jungle justice and the emotional health challenges experienced by the residents. The main question addressed in this study is how jungle justice affects the emotional health challenges faced by residents of metropolitan Lagos. The study concludes that jungle justice has a significant influence on trauma, pressure, fear, and depression among residents. To address this issue, recommendations are made, including the implementation of comprehensive awareness campaigns, improvement of law enforcement agencies, development of support systems for victims, and revision of the legal framework to deal effectively with jungle justice. Overall, this research contributes to the understanding of the consequences of jungle justice and provides recommendations for interventions to protect the emotional well-being of residents of metropolitan Lagos.
Keywords: jungle justice, emotional health, depression, anger
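The significance checks reported above (df = 298, p < 0.05) can be reproduced from the t statistics alone. The sketch below is purely illustrative and is not the authors' analysis code; for df = 298 the t distribution is very close to the standard normal, so a normal approximation of the two-tailed p-value is adequate.

```python
import math

def two_tailed_p(t_stat: float, df: int) -> float:
    """Approximate two-tailed p-value for a t statistic with large df.

    For large df the t distribution is close to standard normal, so
    P(|T| > t) ≈ erfc(|t| / sqrt(2)). Only reasonable for df >= ~100.
    """
    assert df >= 100, "normal approximation only reasonable for large df"
    return math.erfc(abs(t_stat) / math.sqrt(2.0))

# The four t statistics reported in the abstract, all with df = 298.
for t_stat in (2.33, 2.16, 2.20, 2.14):
    p = two_tailed_p(t_stat, 298)
    print(f"t = {t_stat:.2f}, p ≈ {p:.3f}, significant at 0.05: {p < 0.05}")
```

Each of the reported t values indeed exceeds the two-tailed 5% critical value for df = 298 (about 1.97), consistent with the p < 0.05 claims.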
Procedia PDF Downloads 76
1152 Freshwater Pinch Analysis for Optimal Design of the Photovoltaic Powered-Pumping System
Authors: Iman Janghorban Esfahani
Abstract:
Due to the increased use of irrigation in agriculture, the importance of and need for highly reliable water pumping systems have grown significantly. Groundwater pumping is essential to supply water for both drip and furrow irrigation and thus increase agricultural yield, especially in arid regions that suffer from scarcity of surface water. The most common irrigation pumping systems (IPS) consume conventional energy through electric motors and generators or a connection to the electricity grid. Given the shortage and transportation difficulties of fossil fuels, unreliable access to the electricity grid (especially in rural areas), and the adverse environmental impacts of fossil fuel use, such as greenhouse gas (GHG) emissions, renewable energy sources such as photovoltaic systems (PVS) are urgently needed as an alternative way of powering irrigation pumping systems. Integrating photovoltaic systems with irrigation pumping systems as a Photovoltaic Powered-Irrigation Pumping System (PVP-IPS) avoids fossil fuel dependency and the associated greenhouse gas emissions, and can ultimately lower energy costs and improve efficiency, making PVP-IPS an environmentally and economically efficient solution for agricultural irrigation in any region. The greatest problem in integrating PVP with IPS is matching the intermittency of the energy supply with the dynamic water demand. The best way to overcome this intermittency is to incorporate a storage system into the PVP-IPS to provide water on demand as a highly reliable stand-alone irrigation pumping system. The water storage tank (WST) is the most common storage device for PVP-IPS systems. In the integrated PVP-IPS with a water storage tank (PVP-IPS-WST), the tank stores the water pumped by the IPS in excess of demand and delivers it when demand is high.
Freshwater pinch analysis (FWaPA), as an alternative to mathematical modeling, has been used by other researchers for retrofitting off-grid, battery-less photovoltaic-powered reverse osmosis systems, but it has not previously been applied to integrate photovoltaic systems with irrigation pumping systems and water storage tanks. In this study, FWaPA graphical and numerical tools were used to retrofit an existing PVP-IPS system located in Salahadin, Republic of Iraq. The plant includes a 5 kW submersible water pump and a 7.5 kW solar PV system. The Freshwater Composite Curve (graphical tool) and the Freshwater Storage Cascade Table (numerical tool) were constructed to determine the minimum outsourced water required during operation, the optimal amount of electricity delivered to the water pump, and the optimal size of the water storage tank, based on one year of operating data. The results of implementing FWaPA on the case study show that the PVP-IPS system with a WST, as the reliable configuration, can reduce outsourced water by 95.41% compared to the PVP-IPS system without a storage tank.
Keywords: irrigation, photovoltaic, pinch analysis, pumping, solar energy
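The idea behind a Freshwater Storage Cascade Table can be illustrated with a minimal sketch. The code below is not the authors' tool; it assumes hypothetical hourly supply (PV-driven pumping) and demand profiles, cascades the tank level hour by hour, and tops up from an outside source whenever the tank runs dry. The total top-up then approximates the minimum outsourced water, and the peak level approximates the required tank capacity.

```python
def storage_cascade(supply, demand, initial_level=0.0):
    """Cascade the hourly water balance (pinch-style numerical tool).

    supply[i] - water pumped by the PV-driven pump in hour i (m³)
    demand[i] - irrigation demand in hour i (m³)
    Returns (total outsourced water, minimum tank capacity).
    """
    level = initial_level
    outsourced = 0.0
    peak = 0.0
    for s, d in zip(supply, demand):
        level += s - d              # net surplus (+) or deficit (-)
        if level < 0.0:             # deficit: buy outsourced water
            outsourced += -level
            level = 0.0
        peak = max(peak, level)     # tank must hold the largest surplus
    return outsourced, peak

# Hypothetical one-day profile (m³/h): no pumping before sunrise,
# peak pumping at midday, fairly steady irrigation demand.
supply = [0, 0, 5, 8, 9, 6, 2, 0]
demand = [3, 3, 2, 2, 2, 2, 3, 3]
outsourced, tank = storage_cascade(supply, demand)
print(f"outsourced water: {outsourced} m³, tank capacity: {tank} m³")
# → outsourced water: 6.0 m³, tank capacity: 20.0 m³
```

In the actual study the cascade runs over a full year of hourly data, but the mechanism is the same: the deficits before the first pumping hours drive the outsourced-water requirement, and the midday surplus sets the tank size.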
Procedia PDF Downloads 138
1151 3D Nanostructured Assembly of 2D Transition Metal Chalcogenide/Graphene as High Performance Electrocatalysts
Authors: Sunil P. Lonkar, Vishnu V. Pillai, Saeed Alhassan
Abstract:
Design and development of highly efficient, inexpensive, and long-term-stable earth-abundant electrocatalysts holds tremendous promise for the hydrogen evolution reaction (HER) in water electrolysis. Among the 2D transition metal dichalcogenides, molybdenum disulfide in particular has attracted a great deal of interest due to its high electrocatalytic activity. However, its poor electrical conductivity and limited number of exposed active sites restrict the performance of these catalysts. In this context, a facile and scalable method for fabricating nanostructured electrocatalysts composed of 3D porous graphene aerogels supporting MoS₂ and WS₂ is highly desired. Here we developed a highly active and stable HER electrocatalyst by growing the active phases into a 3D porous architecture on conducting graphene. The resulting nanohybrids were thoroughly investigated by means of several characterization techniques to understand their structure and properties. Moreover, the HER performance of these 3D catalysts is expected to improve greatly compared with other well-known catalysts, benefiting mainly from the improved electrical conductivity provided by graphene and the porous structure of the support. This technologically scalable process can afford efficient electrocatalysts for the hydrogen evolution reaction and hydrodesulfurization catalysts for sulfur-rich petroleum fuels. Owing to their lower cost and higher performance, the resulting materials hold high potential for various energy and catalysis applications. In a typical hydrothermal synthesis, a sonicated GO aqueous dispersion (5 mg mL⁻¹) was mixed with ammonium tetrathiomolybdate (ATTM) and tungsten molybdate and treated in a sealed Teflon autoclave at 200 °C for 4 h. After cooling, a black macroporous hydrogel was recovered and washed under running de-ionized water to remove any by-products and metal ions.
The obtained hydrogels were then freeze-dried for 24 h and further subjected to thermal-annealing-driven crystallization at 600 °C for 2 h to ensure complete thermal reduction of RGO into graphene and the formation of highly crystalline MoS₂ and WS₂ phases. The resulting 3D nanohybrids were characterized to understand their structure and properties. SEM-EDS clearly reveals the formation of a highly porous material with a uniform distribution of MoS₂ and WS₂ phases. In conclusion, a novel strategy for the fabrication of 3D nanostructured MoS₂-WS₂/graphene is presented. The characterizations revealed that the in-situ-formed promoters disperse uniformly onto few-layer MoS₂-WS₂ nanosheets that are well supported on the graphene surface. The resulting 3D hybrids hold high promise as potential electrocatalysts and hydrodesulfurization catalysts.
Keywords: electrocatalysts, graphene, transition metal chalcogenide, 3D assembly
Procedia PDF Downloads 136
1150 A Protocol of Procedures and Interventions to Accelerate Post-Earthquake Reconstruction
Authors: Maria Angela Bedini, Fabio Bronzini
Abstract:
The Italian post-earthquake experiences, positive and negative, are conditioned by long timescales and structural bureaucratic constraints, motivated in part by the attempt to contain mafia infiltration and corruption. The transition from the operational phase of the emergency to the planning phase of the reconstruction project is thus hampered by a series of inefficiencies and delays that are incompatible with the need for rapid recovery of territories in crisis. Intervening in areas affected by seismic events means pairing the reconstruction plan with an urban and territorial rehabilitation project based on strategies and tools in which prevention and safety play a leading role in the regeneration of territories in crisis and the return of the population. On the contrary, the earthquakes that took place in Italy have further deprived the affected territories of the minimum requirements for habitability, in terms of accessibility and services, accentuating a depopulation process that was already underway before the earthquake. The objective of this work is to address, with implementing and programmatic tools, the procedures and strategies to be put in place, today and in the future, in Italy and abroad, to face the challenge of rebuilding activities, sociality, services, and risk mitigation: a protocol of operational intentions and fixed points, open to continuous updating and implementation. The methodology followed is a synthetic comparison of the different Italian post-earthquake experiences, based on facts rather than intentions, to highlight elements of excellence or, on the contrary, of failure. The main results obtained can be summarized in technical comparison cards on good and bad practices.
With this comparison, we intend to make a concrete contribution to the reconstruction process, certainly not limited to the reconstruction of buildings but privileging primary social and economic needs. In this context, the strategic urban and territorial instrument recently applied in Italy, the SUM (Minimal Urban Structure), together with the strategic monitoring process, becomes a dynamic tool for supporting reconstruction. The conclusions establish, point by point, a protocol of interventions and the priorities for integrated, multisectoral and multicultural socio-economic strategies, and highlight the innovative aspect of 'inverting' priorities in the reconstruction process: favoring the take-off of social and economic 'accelerator' interventions and a more up-to-date system of coexistence with risks. In this perspective, reconstruction as a necessary response to a calamitous event can and must become a unique opportunity to raise the level of protection from risks and to rehabilitate and develop the most fragile places in Italy and abroad.
Keywords: an operational protocol for reconstruction, operational priorities for coexistence with seismic risk, social and economic interventions as accelerators of building reconstruction, the difficult post-earthquake reconstruction in Italy
Procedia PDF Downloads 127
1149 Effect of Oxygen Ion Irradiation on the Structural, Spectral and Optical Properties of L-Arginine Acetate Single Crystals
Authors: N. Renuka, R. Ramesh Babu, N. Vijayan
Abstract:
Ion beams play a significant role in tuning the properties of materials. Based on their radiation behavior, engineering materials fall into two categories: organic solids, which are sensitive to the energy deposited in their electronic system, and metals, which are insensitive to it. However, exposure to swift heavy ions alters this general behavior. Depending on its mass, kinetic energy and nuclear charge, an ion can produce modifications within a thin surface layer or penetrate deeply to produce a long, narrow distorted region along its path. When a high-energy ion beam impinges on a material, the Coulombic interaction between the target atoms and the beam causes two types of changes: (i) inelastic collisions of the energetic ion with the atomic electrons of the material, and (ii) elastic scattering from the nuclei of the atoms of the material, which is chiefly responsible for displacing atoms from their lattice positions. After exposure to heavy ions, the material returns to equilibrium, during which it undergoes surface and bulk modifications that depend on the mass of the projectile ion, the physical properties of the target material, the beam energy, and the beam dimensions. It is well established that electronic stopping power plays a major role in the defect creation mechanism, provided it exceeds a threshold that depends strongly on the nature of the target material. Reports are available on heavy ion irradiation, especially of crystalline materials, aimed at tuning their physical and chemical properties.
L-arginine acetate (LAA) is a potential semi-organic nonlinear optical crystal whose optical, mechanical and thermal properties have already been reported. The main objective of the present work is to enhance or tune the structural and optical properties of LAA single crystals by heavy ion irradiation. In the present study, LAA single crystals were grown by the slow evaporation solution growth technique and irradiated with oxygen ions at doses of 600 krad and 1 Mrad in order to tune their structural and optical properties. The structural properties of pristine and oxygen-ion-irradiated LAA single crystals were studied using powder X-ray diffraction and Fourier Transform Infrared spectral studies, which reveal the structural changes generated by irradiation. The optical behavior of pristine and irradiated crystals was studied by UV-Vis-NIR and photoluminescence analyses. From this investigation, we conclude that oxygen ion irradiation modifies the structural and optical properties of LAA single crystals.
Keywords: heavy ion irradiation, NLO single crystal, photoluminescence, X-ray diffractometer
Procedia PDF Downloads 254
1148 Thermo-Economic Evaluation of Sustainable Biogas Upgrading via Solid-Oxide Electrolysis
Authors: Ligang Wang, Theodoros Damartzis, Stefan Diethelm, Jan Van Herle, François Marechal
Abstract:
Biogas production from anaerobic digestion of organic sludge from wastewater treatment, as well as various urban and agricultural organic wastes, is of great significance for achieving a sustainable society. Two upgrading approaches for cleaned biogas can be considered: (1) direct H₂ injection for catalytic CO₂ methanation and (2) CO₂ separation from the biogas. The first approach usually employs electrolysis technologies to generate hydrogen and increases the biogas production rate, while the second usually applies commercially available, highly selective membrane technologies to extract CO₂ efficiently from the biogas, with the separated CO₂ then compressed and stored for further use. A straightforward way of utilizing the captured CO₂ is on-site catalytic CO₂ methanation. From the perspective of system complexity, the second approach may be questioned, since it introduces an additional expensive membrane component to produce the same amount of methane. However, given that the sustainability of the produced biogas should be retained after upgrading, renewable electricity should be supplied to drive the electrolyzer. Considering the intermittent nature and seasonal variation of renewable electricity supply, the second approach therefore offers high operational flexibility, which indicates that the two approaches should be compared based on the availability and scale of the local renewable power supply and not only on the technical systems themselves. Solid-oxide electrolysis (SOE) generally offers high overall system efficiency and, more importantly, can achieve simultaneous electrolysis of CO₂ and H₂O (namely, co-electrolysis), which may bring significant benefits in the case of CO₂ separation from the produced biogas.
When co-electrolysis is taken into account, two additional upgrading approaches can be proposed: (1) direct steam injection into the biogas, with the mixture passing through the SOE, and (2) CO₂ separation from the biogas, with the CO₂ later used for co-electrolysis. A case study integrating SOE into a wastewater treatment plant is investigated, with wind power as the renewable source. The dynamic production of biogas is provided on an hourly basis, along with the corresponding oxygen and heating requirements. All four approaches mentioned above are investigated and compared thermo-economically: (a) steam electrolysis with grid power, as the base case for steam electrolysis; (b) CO₂ separation and co-electrolysis with grid power, as the base case for co-electrolysis; (c) steam electrolysis and CO₂ separation (and storage) with wind power; and (d) co-electrolysis and CO₂ separation (and storage) with wind power. The influence of the scale of the wind power supply is investigated in a sensitivity analysis. The results provide a general understanding of the economic competitiveness of SOE for sustainable biogas upgrading, thus assisting decision-making for biogas production sites. The research leading to the presented work is funded by the European Union’s Horizon 2020 under grant agreement n° 699892 (ECo, topic H2020-JTI-FCH-2015-1) and by SCCER BIOSWEET.
Keywords: biogas upgrading, solid-oxide electrolyzer, co-electrolysis, CO₂ utilization, energy storage
Procedia PDF Downloads 155
1147 Analysis on the Converged Method of Korean Scientific and Mathematical Fields and Liberal Arts Programme: Focusing on the Intervention Patterns in Liberal Arts
Authors: Jinhui Bak, Bumjin Kim
Abstract:
The purpose of this study is to analyze how the scientific and mathematical fields (STEM) and the liberal arts (A) work together in the STEAM program. In future STEAM programs, the humanities should act not merely as a 'tool' for science, technology and mathematics, but as 'core' content of equivalent status. STEAM was first introduced in the Republic of Korea in 2011, when the Ministry of Education emphasized fostering creative convergence talent. Many programs have since been developed under the name STEAM, but with the majority focusing on technology education, the arts and humanities are treated as secondary. As a result, arts is likely to be regarded as an optional component that teachers running a STEAM program may drop. If what we ultimately pursue through STEAM education is the fostering of STEAM literacy, we should no longer reduce the arts to a tool area for STEM. Based on this premise, this study analyzed over 160 STEAM programs in middle and high schools, produced and distributed by the Ministry of Education and the Korea Science and Technology Foundation from 2012 to 2017. The framework of analysis referenced two criteria presented in related prior studies: normative convergence and technological convergence. In addition, we divided the arts into fine arts and liberal arts, focused on the Korean language course within the liberal arts, and analyzed which curriculum standards were selected and through what kind of process the Korean language department participated in teaching and learning. To ensure the reliability of the analysis, the two researchers' individual results were cross-checked and accepted only when consistent. We also conducted a reliability check on the analysis results with three middle and high school teachers involved in the STEAM education program.
For 10 programs selected randomly from those analyzed, Cronbach's α of .853 indicated a reliable level of agreement. The results of this study are summarized as follows. First, the convergence ratio of the liberal arts was lowest in moral education, at 14.58%. Second, normative convergence, at 28.19%, is lower than technological convergence. Third, the Korean language achievement criteria selected for the programs were limited to functional areas such as listening, speaking, reading and writing. This means that the convergence of the Korean language department serves only as a tool for communicating opinions or promoting scientific products. We intend to compare these results with STEAM programs in the United States and abroad to explore which elements or key concepts are required in the achievement criteria for the Korean language curriculum. This is meaningful in that the humanities field (A), including Korean, provides basic data that can be fused into 'equivalent qualifications' with science (S), technology and engineering (TE) and mathematics (M).
Keywords: Korean STEAM Programme, liberal arts, STEAM curriculum, STEAM literacy, STEM
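The reliability figure quoted above (Cronbach's α = .853) follows the standard formula α = k/(k−1) · (1 − Σσᵢ²/σ_total²), where k is the number of raters/items, σᵢ² the variance of each rater's scores, and σ_total² the variance of the row totals. The sketch below is a generic illustration with made-up ratings, not the study's data:

```python
def cronbach_alpha(ratings):
    """Cronbach's alpha for a score table.

    ratings: one row per subject (here, per program), one column per
    rater/item. Sample variance (ddof=1) is used throughout.
    """
    k = len(ratings[0])                       # number of raters/items
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    item_vars = sum(var([row[i] for row in ratings]) for i in range(k))
    total_var = var([sum(row) for row in ratings])
    return k / (k - 1) * (1.0 - item_vars / total_var)

# Made-up example: 4 programs each scored by 2 raters.
print(round(cronbach_alpha([[2, 3], [4, 4], [5, 6], [3, 3]]), 3))  # → 0.952
```

Values near 1 indicate raters who move together across programs; a conventional rule of thumb treats α above about .7 or .8 as acceptable, which is why .853 is reported as reliable.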
Procedia PDF Downloads 157
1146 Pueblos Mágicos in Mexico: The Loss of Intangible Cultural Heritage and Cultural Tourism
Authors: Claudia Rodriguez-Espinosa, Erika Elizabeth Pérez Múzquiz
Abstract:
Since the creation of the “Pueblos Mágicos” program in 2001, a series of social and cultural events has directly affected heritage conservation in the 121 registered localities, up until 2018, when the federal government terminated the program. Many studies have sought to analyze, from different perspectives and disciplines, the consequences these designations have generated in the “Pueblos Mágicos.” Multidisciplinary groups, such as the one headed by Carmen Valverde and Liliana López Levi, have brought together specialists from all over the Mexican Republic to produce diagnoses of most of these settlements. Although each settlement has unique specificities, a constant in most of them is a loss of cultural heritage related to transculturality. Several identified factors have fostered this cultural loss, as a direct reflection of the economic crisis prevailing in Mexico. It is important to remember that the program's main original objective was to promote the growth and development of local economies, since one of the conditions for entering the program is having fewer than 20,000 inhabitants. With this goal in mind, one of the first actions many “Pueblos Mágicos” carried out was to improve or create infrastructure to receive both national and foreign tourists, since this was practically non-existent. Creating hotels, restaurants, and cafes and training certified tour guides, among other actions, have led to one of the great problems they face: globalization. Although globalization is not inherently negative, its impact has in many cases been detrimental to heritage conservation. Entry into, and contact with, new cultures has led to the undervaluation of cultural traditions, their transformation, and even their total loss. 
This work seeks to present specific cases of transformation and loss of cultural heritage, as well as to reflect on the problem and propose scenarios in which the negative effects can be reversed. For this text, 36 “Pueblos Mágicos” were selected for study, based on the settlements cited in volumes I and IV (the first and last of the collection) of the series produced by the multidisciplinary group led by Carmen Valverde and Liliana López Levi (researchers at UNAM and UAM Xochimilco, respectively) in the CONACyT-supported project entitled “Pueblos Mágicos. An interdisciplinary vision”, of which we are part. This sample is considered representative, since it comprises 30% of the 121 “Pueblos Mágicos” existing at that moment. With this information, the elements of intangible heritage loss or transformation were identified in every chapter, based on the texts written by the participants of that project. Finally, this text presents an analysis of the effects that this federal program, as a public policy applied to 121 localities, has had on the conservation or transformation of the intangible cultural heritage of the “Pueblos Mágicos.” Transculturality, globalization, the creation of identities, and the desire to increase the flow of tourists have driven the changes that traditions (the main intangible cultural heritage) underwent during the 18 years the federal program lasted.
Keywords: public policies, cultural tourism, heritage preservation, pueblos mágicos program
Procedia PDF Downloads 189
1145 On Stochastic Models for Fine-Scale Rainfall Based on Doubly Stochastic Poisson Processes
Authors: Nadarajah I. Ramesh
Abstract:
Much of the research on stochastic point process models for rainfall has focused on Poisson cluster models constructed from either the Neyman-Scott or Bartlett-Lewis processes. The doubly stochastic Poisson process provides a rich class of point process models, especially for fine-scale rainfall modelling. This paper provides an account of recent development on this topic and presents the results based on some of the fine-scale rainfall models constructed from this class of stochastic point processes. Amongst the literature on stochastic models for rainfall, greater emphasis has been placed on modelling rainfall data recorded at hourly or daily aggregation levels. Stochastic models for sub-hourly rainfall are equally important, as there is a need to reproduce rainfall time series at fine temporal resolutions in some hydrological applications. For example, the study of climate change impacts on hydrology and water management initiatives requires the availability of data at fine temporal resolutions. One approach to generating such rainfall data relies on the combination of an hourly stochastic rainfall simulator, together with a disaggregator making use of downscaling techniques. Recent work on this topic adopted a different approach by developing specialist stochastic point process models for fine-scale rainfall aimed at generating synthetic precipitation time series directly from the proposed stochastic model. One strand of this approach focused on developing a class of doubly stochastic Poisson process (DSPP) models for fine-scale rainfall to analyse data collected in the form of rainfall bucket tip time series. In this context, the arrival pattern of rain gauge bucket tip times N(t) is viewed as a DSPP whose rate of occurrence varies according to an unobserved finite state irreducible Markov process X(t). 
Since the likelihood function of this process can be obtained by conditioning on the underlying Markov process X(t), the models were fitted by maximum likelihood methods. The proposed models were applied directly to the raw data collected by tipping-bucket rain gauges, thus avoiding the need to convert tip-times to rainfall depths prior to fitting. One advantage of this approach was that the use of maximum likelihood methods enabled a more straightforward estimation of parameter uncertainty and comparison of sub-models of interest. Another strand of this approach employed the DSPP model for the arrivals of rain cells and attached a pulse, or a cluster of pulses, to each rain cell. Different mechanisms for the pattern of the pulse process were used to construct variants of this model. We present the results of these models when fitted to hourly and sub-hourly rainfall data. The results of our analysis suggest that the proposed class of stochastic models is capable of reproducing the fine-scale structure of the rainfall process, and hence provides a useful tool in hydrological modelling.
Keywords: fine-scale rainfall, maximum likelihood, point process, stochastic model
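The DSPP construction described above, with bucket-tip arrivals N(t) modulated by a hidden finite-state Markov process X(t), can be illustrated with a short simulation. The following is a minimal sketch assuming a two-state chain; the switching rates and arrival rates are illustrative, not values from the fitted models:

```python
import numpy as np

def simulate_mmpp(T, q01, q10, lam0, lam1, rng=None):
    """Simulate a two-state Markov-modulated (doubly stochastic) Poisson
    process on [0, T]. q01 and q10 are the switching rates of the hidden
    chain X(t); lam0 and lam1 are the arrival rates in states 0 and 1.
    Returns the sorted array of event (bucket-tip) times."""
    rng = np.random.default_rng(rng)
    t, state = 0.0, 0
    rates, switch = (lam0, lam1), (q01, q10)
    events = []
    while t < T:
        # sojourn time in the current state is exponential
        end = min(t + rng.exponential(1.0 / switch[state]), T)
        # conditional on the state, arrivals form a Poisson process
        n = rng.poisson(rates[state] * (end - t))
        events.extend(rng.uniform(t, end, size=n))
        t, state = end, 1 - state
    return np.sort(np.array(events))
```

When lam0 = lam1 the hidden state is irrelevant and the output reduces to an ordinary Poisson process, which gives a simple sanity check on the construction.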
Procedia PDF Downloads 278
1144 Rotary Machine Sealing Oscillation Frequencies and Phase Shift Analysis
Authors: Liliia N. Butymova, Vladimir Ya Modorskii
Abstract:
To ensure the efficient operation of the gas-transmittal GCU, leakages through the labyrinth packings (LP) should be minimized. Leakages can be minimized by decreasing the LP gap, which in turn depends on thermal processes and possible rotor vibrations, since the gap is designed to ensure the absence of mechanical contact. Vibration mitigation makes it possible to minimize the LP gap. It is therefore advantageous to study the influence of processes in the dynamic gas-structure system on LP vibrations. This paper considers the influence of rotor vibrations on LP gas dynamics, and the influence of the latter on the rotor structure, within a unidirectionally coupled dynamic FSI problem. Dependences of the nonstationary parameters of the gas-dynamic process in the LP on rotor vibrations were studied under various gas speeds and pressures, shaft rotation speeds, vibration amplitudes, and working media. The multiprocessor ANSYS CFX package was chosen as the numerical computation tool, and the problem was solved on the PNRPU high-capacity computer complex. Vibrations of the deformed shaft are represented by a rigid profile that moves 'up-and-down' in the fixed annulus according to a prescribed harmonic rule; the nonstationary gas-dynamic problem is then solved, and the time dependence of the total gas-dynamic force acting on the shaft is determined. A pressure increase from 0.1 to 10 MPa causes growth of the gas-dynamic force oscillation amplitude and frequency. The phase shift angle between the gas-dynamic force oscillations and those of the shaft displacement decreases from 3π/4 to π/2. The damping constant has its maximum value at a pressure of 1 MPa in the gap. An increase of the shaft oscillation frequency from 50 to 150 Hz at P = 10 MPa causes growth of the gas-dynamic force oscillation amplitude; the damping constant has its maximum value, 1.012, at 50 Hz. An increase of the shaft vibration amplitude from 20 to 80 µm at P = 10 MPa raises the gas-dynamic force amplitude by up to 20 times, and the damping constant increases from 0.092 to 0.251. 
Calculations for various working substances (methane, perfect gas, and air at 25 °C) show that the minimum persistent oscillation amplitude of the gas-dynamic force at P = 0.1 MPa is observed in methane, and the maximum in air; the frequency remains almost unchanged, and the phase shift in air changes from 3π/4 to π/2. At P = 10 MPa, by contrast, the maximum gas-dynamic force oscillation amplitude is observed in methane and the minimum in air, and air demonstrates surging. An increase of the leakage speed through the LP from 0 to 20 m/s at P = 0.1 MPa causes the gas-dynamic force oscillation amplitude to decrease by three orders of magnitude, while the oscillation frequency and phase shift double and then stabilize. An increase of the leakage speed from 0 to 20 m/s in the LP at P = 1 MPa causes the gas-dynamic force oscillation amplitude to decrease by almost four orders of magnitude; the phase shift angle increases from π/72 to π/2, and the oscillations become persistent. The flow rate thus proved to have a strong influence on the pressure oscillation amplitude and the phase shift angle. The influence of the working medium depends on the operating conditions: as pressure grows, vibrations are affected most strongly in methane (of the working substances considered), and as pressure decreases, in air at 25 °C.
Keywords: aeroelasticity, labyrinth packings, oscillation phase shift, vibration
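Phase shift angles such as those quoted above (e.g., the decrease from 3π/4 to π/2) can be extracted from computed time series of the gas-dynamic force and the shaft displacement. A minimal sketch using the phase of the cross-spectrum at the forcing frequency follows; the signal generation is purely illustrative and is not taken from the paper's FSI computations:

```python
import numpy as np

def phase_shift(x, y, freq, fs):
    """Phase of y relative to x (radians) at frequency freq (Hz),
    estimated from the cross-spectrum of signals sampled at fs Hz."""
    n = len(x)
    k = int(round(freq * n / fs))      # FFT bin of the target frequency
    X = np.fft.rfft(x - np.mean(x))
    Y = np.fft.rfft(y - np.mean(y))
    return np.angle(Y[k] * np.conj(X[k]))

# Illustrative check: a 50 Hz shaft displacement and a force leading it by pi/2
fs, f = 10_000.0, 50.0
t = np.arange(0, 1.0, 1.0 / fs)
displacement = np.sin(2 * np.pi * f * t)
force = np.sin(2 * np.pi * f * t + np.pi / 2)
shift = phase_shift(displacement, force, f, fs)
```

Sampling an integer number of cycles keeps the target frequency on an exact FFT bin, so no windowing is needed in this sketch.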
Procedia PDF Downloads 296
1143 Residual Plastic Deformation Capacity in Reinforced Concrete Beams Subjected to Drop Weight Impact Test
Authors: Morgan Johansson, Joosef Leppanen, Mathias Flansbjer, Fabio Lozano, Josef Makdesi
Abstract:
Concrete is commonly used for protective structures, and how impact loading affects different types of concrete structures is an important issue. Often, knowledge gained from static loading is also used in the design of impulse-loaded structures. A large plastic deformation capacity is essential to obtain large energy absorption in an impulse-loaded structure. However, the structural response of an impact-loaded concrete beam may be very different from that of a statically loaded beam. Consequently, the plastic deformation capacity and failure modes of a concrete structure can differ under dynamic loads, and hence it is not certain that observations obtained from static loading are also valid for dynamic loading. The aim of this paper is to investigate the residual plastic deformation capacity of reinforced concrete beams subjected to drop weight impact tests. A test series of 18 simply supported beams (0.1 x 0.1 x 1.18 m, ρs = 0.7%), with a span length of 1.0 m and loaded by a point load at mid-span, was carried out. Twelve beams (2 x 6) were first subjected to drop weight impact tests and thereafter statically tested until failure. The drop weight had a mass of 10 kg and was dropped from 2.5 m or 5.0 m. During the impact tests, a high-speed camera recording at 5,000 fps was used; for the static tests, a camera recording at 0.5 fps was used. Digital image correlation (DIC) analyses were conducted, and from these the velocities of the beam and the drop weight, as well as the deformations and crack propagation of the beam, were effectively measured. Additionally, for the static tests, the applied load and midspan deformation were measured. The load-deformation relations of the beams subjected to an impact load were compared with those of 6 reference beams subjected to static loading only. The crack patterns obtained were compared using DIC, and it was concluded that the resulting crack formation depended strongly on the test method used. 
For the static tests, only bending cracks occurred. For the impact-loaded beams, though, distinctive diagonal shear cracks also formed below the zone of impact, and narrower shear cracks were observed in the region halfway to the support. Furthermore, due to wave propagation effects, bending cracks developed in the upper part of the beam during initial loading. The results showed that the plastic deformation capacity increased for beams subjected to drop weight impact tests from the higher drop height of 5.0 m. For beams subjected to an impact from the lower drop height of 2.5 m, though, the plastic deformation capacity was of the same order of magnitude as for the statically loaded reference beams. The beams tested were designed to fail in bending when subjected to a static load. However, of the impact-tested beams, one beam exhibited a shear failure at a significantly reduced load level when tested statically, indicating that there may be a risk of reduced residual load capacity for impact-loaded structures.
Keywords: digital image correlation (DIC), drop weight impact, experiments, plastic deformation capacity, reinforced concrete
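The nominal impact conditions in the test series follow from elementary free-fall mechanics (neglecting friction losses in the guide); a short sketch:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def impact_velocity(height_m):
    """Velocity at impact after free fall from height h: v = sqrt(2*g*h)."""
    return math.sqrt(2.0 * G * height_m)

def impact_energy(mass_kg, height_m):
    """Kinetic energy delivered at impact: E = m*g*h."""
    return mass_kg * G * height_m

# Test-series conditions: 10 kg drop weight released from 2.5 m or 5.0 m
for h in (2.5, 5.0):
    print(f"h = {h:.1f} m: v = {impact_velocity(h):.2f} m/s, "
          f"E = {impact_energy(10.0, h):.1f} J")
```

Doubling the drop height doubles the impact energy but increases the impact velocity only by a factor of √2.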
Procedia PDF Downloads 147
1142 Precursor Synthesis of Carbon Materials with Different Aggregates Morphologies
Authors: Nikolai A. Khlebnikov, Vladimir N. Krasilnikov, Evgenii V. Polyakov, Anastasia A. Maltceva
Abstract:
Carbon materials with developed surfaces are widely used both in modern industry and in environmental protection. The physical-chemical nature of these materials is determined by the morphology of the primary atomic and molecular carbon structures, which are the basis for synthesizing zero-dimensional (fullerenes), one-dimensional (fibers, tubes), two-dimensional (graphene), and three-dimensional (multi-layer graphene, graphite, foams) carbon nanostructures with unique physical-chemical and functional properties. Experience shows that the microscopic morphological level is the basis for the creation of the next, mesoscopic morphological level, a peculiarity of which is the dependence of morphology on the chemical route and process history (crystallization, colloid formation, liquid-crystal state, and others). These factors determine the consumer properties of carbon materials, such as specific surface area, porosity, chemical resistance in corrosive environments, and catalytic and adsorption activities. Based on the developed concept of precursor synthesis, the authors discuss an approach to controlling the porosity of carbon-containing materials with a given aggregate morphology. The approach is based on low-temperature thermolysis of precursors in a gas environment of a given composition. The carbothermic precursor synthesis of two different compounds, tungsten carbide WC:nC and zinc oxide ZnO:nC, each containing an impurity phase in the form of free carbon, was selected as the subject of the research. In the first case, the object of the synthesis was a carbide-forming transition metal (tungsten); in the second case, zinc, which does not form carbides, was selected. In both cases, the synthesis was conducted by the method of carbothermic precursor synthesis from an organic solution. 
ZnO:nC composites were obtained by thermolysis of zinc succinate Zn(OO(CH2)2OO), formate glycolate Zn(HCOO)(OCH2CH2O)1/2, glycerolate Zn(OCH2CHOCH2OH), and tartrate Zn(OOCCH(OH)CH(OH)COO). The WC:nC composite was synthesized from ammonium paratungstate and glycerol. In all cases, carbon structures specific to diamond-like carbon forms appeared on the surface of the WC and ZnO particles after heat treatment. Tungsten carbide and zinc oxide were then removed from the composites by selective chemical dissolution, preserving the amorphous carbon phase. This work presents the results of investigating the WC:nC and ZnO:nC composites, and the carbon nanopowders with tubular, tape, plate, and onion aggregate morphologies separated by chemical dissolution of WC and ZnO from the composites, by the following methods: SEM, TEM, XPA, Raman spectroscopy, and BET. The connection between the carbon morphology, the conditions of synthesis, and the chemical nature of the precursor is discussed, along with the possibility of tailoring the morphology of carbon-structured materials with specific surface areas of up to 1700-2000 m2/g.
Keywords: carbon morphology, composite materials, precursor synthesis, tungsten carbide, zinc oxide
Procedia PDF Downloads 335
1141 Human Beta Defensin 1 as Potential Antimycobacterial Agent against Active and Dormant Tubercle Bacilli
Authors: Richa Sharma, Uma Nahar, Sadhna Sharma, Indu Verma
Abstract:
Counteracting the deadly pathogen Mycobacterium tuberculosis (M. tb) effectively is still a global challenge. Scrutinizing alternative weapons, such as antimicrobial peptides, to strengthen the existing tuberculosis artillery is urgently required. Considering the antimycobacterial potential of Human Beta Defensin 1 (HBD-1) along with isoniazid, the present study was designed to explore the ability of HBD-1 to act against active and dormant M. tb. HBD-1 was screened in silico using antimicrobial peptide prediction servers to identify its short antimicrobial motif. The activity of both HBD-1 and its selected motif (Pep B) was determined at different concentrations against actively growing M. tb in vitro and ex vivo in monocyte-derived macrophages (MDMs). Log-phase M. tb was grown along with HBD-1 and Pep B for 7 days. M. tb-infected MDMs were treated with HBD-1 and Pep B for 72 hours. Thereafter, colony forming unit (CFU) enumeration was performed to determine the activity of both peptides against actively growing in vitro and intracellular M. tb. The dormant M. tb models were prepared following two approaches and treated with different concentrations of HBD-1 and Pep B. First, 20-22 day old M. tb H37Rv was grown in potassium-deficient Sauton medium for 35 days; the presence of dormant bacilli was confirmed by Nile red staining. The dormant bacilli were further treated with rifampicin, isoniazid, HBD-1, and its motif for 7 days, and the effect of both peptides on the latent bacilli was assessed by CFU and most probable number (MPN) enumeration. Second, a human PBMC granuloma model was prepared by infecting PBMCs seeded on a collagen matrix with M. tb (MOI 0.1) for 10 days; histopathology was done to confirm granuloma formation. The granuloma thus formed was incubated for 72 hours with rifampicin, HBD-1, and Pep B individually, and the difference in bacillary load was determined by CFU enumeration. 
The minimum inhibitory concentrations of HBD-1 and Pep B restricting the growth of mycobacteria in vitro were 2 μg/ml and 20 μg/ml, respectively. The intracellular mycobacterial load was reduced significantly by HBD-1 and Pep B at 1 μg/ml and 5 μg/ml, respectively. A Nile red positive bacterial population, a high MPN / low CFU count, and tolerance to isoniazid confirmed the establishment of the potassium deficiency-based dormancy model. HBD-1 (8 μg/ml) showed 96% and 99% killing, and Pep B (40 μg/ml) lowered the dormant bacillary load by 68.89% and 92.49%, based on CFU and MPN enumeration, respectively. Further, H&E-stained aggregates of macrophages and lymphocytes, acid-fast bacilli surrounded by cellular aggregates, and rifampicin resistance indicated the formation of the human granuloma dormancy model. HBD-1 (8 μg/ml) led to an 81.3% reduction in CFU, whereas its motif Pep B (40 μg/ml) showed only a 54.66% decrease in bacterial load inside the granuloma. Thus, the present study indicated that HBD-1 and its motif are effective antimicrobial players against both actively growing and dormant M. tb. They should be further explored to tap their potential in designing a powerful weapon for combating tuberculosis.
Keywords: antimicrobial peptides, dormant, human beta defensin 1, tuberculosis
Procedia PDF Downloads 263
1140 Robotic Process Automation in Accounting and Finance Processes: An Impact Assessment of Benefits
Authors: Rafał Szmajser, Katarzyna Świetla, Mariusz Andrzejewski
Abstract:
Robotic process automation (RPA) is a technology for performing repeatable business processes using computer programs, robots, that simulate the work of a human being. This approach assumes supporting, or replacing, an existing employee with dedicated software (software robots) for activities that are primarily repetitive and uncomplicated and are characterized by a low number of exceptions. RPA application is widespread in modern business services, particularly in the areas of finance, accounting, and human resources management. Utilizing this technology increases the effectiveness of operations while reducing workload and minimizing possible errors in the process, and as a result brings a measurable decrease in the cost of providing services. However the use of modern information technology is assessed, there are also some doubts as to whether human activities should be replaced when automating business processes. After the initial awe at the new technological concept, a reflection arises: to what extent does the implementation of RPA increase the efficiency of operations, and is there a business case for implementing it? If the business case is beneficial, in which business processes is the greatest potential for RPA? A closer look at these issues was provided by this research, which verified the respondents' views of the perceived advantages resulting from the use of robotization and automation in financial and accounting processes. From an online survey addressed to over 500 respondents from international companies, 162 complete answers were returned from the most important types of organizations in the modern business services industry, i.e., Business or IT Process Outsourcing (BPO/ITO), Shared Service Centers (SSC), Consulting/Advisory, and their customers. Answers were provided by representatives of the following positions in their organizations: Members of the Board, Directors, Managers, and Experts/Specialists. 
The structure of the survey allowed the respondents to supplement it with additional comments and observations. The results formed the basis for a business case calculating the tangible benefits associated with the implementation of automation in selected financial processes. The statistical analyses carried out with regard to revenue growth confirmed the hypothesis that there is a correlation between job position and the perception of the impact of RPA implementation on individual benefits. The second hypothesis (H2), that there is a relationship between the kind of company in the business services industry and the perception of the impact of RPA on individual benefits, was not confirmed. Based on the survey results, the authors performed a simulation of the business case for implementing RPA in selected finance and accounting processes. The calculated payback period differed diametrically, ranging from 2 months for the accounts payable process, with 75% savings, to the extreme case of the tax process, for which implementation and maintenance costs exceed the savings resulting from the use of the robot.
Keywords: automation, outsourcing, business process automation, process automation, robotic process automation, RPA, RPA business case, RPA benefits
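The payback calculation underlying such a business case can be expressed as a simple function; a minimal sketch with hypothetical cost figures (the survey's actual monetary inputs are not given in the abstract):

```python
def payback_period_months(implementation_cost, monthly_saving,
                          monthly_maintenance=0.0):
    """Months of operation needed for cumulative net savings to cover the
    implementation cost. Returns None when maintenance consumes the
    savings (no payback, as in the tax-process case described above)."""
    net_monthly = monthly_saving - monthly_maintenance
    if net_monthly <= 0:
        return None
    return implementation_cost / net_monthly
```

With, say, an implementation cost of 100 and net monthly savings of 50, this yields a 2-month payback of the kind reported for accounts payable, while a process whose maintenance exceeds its savings never pays back.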
Procedia PDF Downloads 137
1139 Irradion: Portable Small Animal Imaging and Irradiation Unit
Authors: Josef Uher, Jana Boháčová, Richard Kadeřábek
Abstract:
In this paper, we present a multi-robot imaging and irradiation research platform, referred to as Irradion, with full capabilities of portable arbitrary-path computed tomography (CT). Irradion is an imaging and irradiation unit based entirely on robotic arms, intended for research on cancer treatment with ion beams on small animals (mice or rats). The platform comprises two subsystems that combine several imaging modalities, such as 2D X-ray imaging, CT, and particle tracking, with precise positioning of a small animal for imaging and irradiation. Computed tomography: The CT subsystem of the Irradion platform is equipped with two 6-joint robotic arms that position a photon-counting detector and an X-ray tube independently and freely around the scanned specimen and allow image acquisition using computed tomography. Irradion covers nearly all conventional 2D and 3D trajectories of X-ray imaging with precisely calibrated and repeatable geometrical accuracy, leading to a spatial resolution of up to 50 µm. In addition, the photon-counting detectors allow X-ray photon energy discrimination, which can suppress scattered radiation, thus improving image contrast; they can also measure absorption spectra and recognize different material (tissue) types. X-ray video recording and real-time imaging options can be applied to studies of dynamic processes, including in vivo specimens. Moreover, Irradion opens the door to exploring new 2D and 3D X-ray imaging approaches, and we demonstrate in this publication various novel scan trajectories and their benefits. Proton imaging and particle tracking: The Irradion platform allows combining several imaging modules with any required number of robots. The proton tracking module comprises another two robots, each holding particle-tracking detectors with position-, energy-, and time-sensitive Timepix3 sensors. The Timepix3 detectors can track particles entering and exiting the specimen and allow accurate guiding of photon/ion beams for irradiation. 
In addition, quantifying the energy losses before and after the specimen provides essential information for precise irradiation planning and verification. Work on the small animal research platform Irradion involved advanced software and hardware development that will offer researchers a novel way to investigate new approaches in (i) radiotherapy, (ii) spectral CT, (iii) arbitrary-path CT, and (iv) particle tracking. The robotic platform for imaging and radiation research developed in the project is an entirely new product on the market: preclinical research systems combining precision robotic irradiation with photon/ion beams and multimodality high-resolution imaging do not currently exist. The researched technology can therefore enable a significant leap forward compared to current first-generation devices.
Keywords: arbitrary path CT, robotic CT, modular, multi-robot, small animal imaging
Procedia PDF Downloads 89
1138 Hybrid Living: Emerging Out of the Crises and Divisions
Authors: Yiorgos Hadjichristou
Abstract:
The paper focuses on the hybrid living typologies brought about by the global crisis. Mixing generations and groups of people, mingling the functions of living with working and socializing, and merging the act of living in synergy with the urban realm and its constituent elements will be the springboard for proposing an essential sustainable housing approach and the respective urban development. The thematic is based on methodologies developed both in the academic, educational environment, including the participation of students' research, and in architectural practice, including case studies executed by the author on the island of Cyprus. Both paths of the research deal with an explorative understanding of hybrid ways of living, testing the limits of their autonomy. The evolution of living typologies into substantial hybrid entities deals with the understanding of new ways of living, which include, among others: the re-introduction of natural phenomena, the accommodation of work and services in the living realm, the interchange of public and private, and injections of communal events into individual living territories. The issues and binary questions raised by what is natural and what artificial, what is private and what public, what is ephemeral and what permanent, and all the in-between conditions are eloquently traced in everyday life on the island. Additionally, given the situation of Cyprus, with the prominent scar of the dividing ‘Green line’ and the ‘ghost city’ of Famagusta waiting to be resurrected, the conventional way of understanding the limits and definitions of properties is irreversibly shaken. The situation is further aggravated by the unprecedented phenomenon of the crisis on the island. 
All these observations set the premises for re-examining urban development and the respective sustainable housing in a synergy in which their characteristics exchange positions, merge into each other, simultaneously emerge and vanish, and change from permanent to ephemeral. This fluidity of conditions attempts to render a future for the built and unbuilt realm in which the main focus is redirected to the human and the social. Weather and social ritual scenographies, together with ‘spontaneous urban landscapes’ of ‘momentary relationships’, suggest a recipe for emerging urban environments and sustainable living. Thus, the paper aims at opening a discourse on the future of sustainable living merged with sustainable urban development, in relation to the imminent solution of the division of the island, where the issue of property has become the main obstacle to be overcome. At the same time, it attempts to link this approach to the global need for a sustainable evolution of the urban and living realms.
Keywords: social ritual scenographies, spontaneous urban landscapes, substantial hybrid entities, re-introduction of natural phenomena
Procedia PDF Downloads 263
1137 The Role of Piceatannol in Counteracting Glyceraldehyde-3-Phosphate Dehydrogenase Aggregation and Nuclear Translocation
Authors: Joanna Gerszon, Aleksandra Rodacka
Abstract:
In the pathogenesis of neurodegenerative diseases such as Alzheimer's disease and Parkinson's disease, protein and peptide aggregation processes play a vital role, contributing to the formation of intracellular and extracellular protein deposits. One of the major components of these deposits is oxidatively modified glyceraldehyde-3-phosphate dehydrogenase (GAPDH). Therefore, the purpose of this research was to answer the question of whether piceatannol, a stilbene derivative, counteracts and/or slows down oxidative stress-induced GAPDH aggregation. The study also aimed to determine whether this naturally occurring compound prevents the unfavorable nuclear translocation of GAPDH in hippocampal cells. The isothermal titration calorimetry (ITC) analysis indicated that one molecule of GAPDH can bind up to 8 molecules of piceatannol (7.3 ± 0.9). As a consequence of piceatannol binding to the enzyme, a loss of activity was observed. In parallel with GAPDH inactivation, changes in zeta potential and a loss of free thiol groups were noted. Nevertheless, the ligand binding does not influence the secondary structure of GAPDH. Precise molecular docking analysis of the interactions inside the active center allows the presumption that these effects are due to the ability of piceatannol to form a covalent bond with the nucleophilic cysteine residue (Cys149), which is directly involved in the catalytic reaction. Molecular docking also showed that up to 11 ligand molecules can be bound to the dehydrogenase simultaneously. Taking the obtained data into consideration, the influence of piceatannol on the level of GAPDH aggregation induced by excessive oxidative stress was examined. The applied methods (thioflavin-T binding-dependent fluorescence, as well as microscopy methods: transmission electron microscopy and Congo red staining) revealed that piceatannol significantly diminishes the level of GAPDH aggregation. 
Finally, studies involving a cellular model (Western blot analyses of nuclear and cytosolic fractions and confocal microscopy) indicated that piceatannol-GAPDH binding prevents GAPDH from nuclear translocation induced by excessive oxidative stress in hippocampal cells. In consequence, it counteracts cell apoptosis. These studies demonstrate that by binding with GAPDH, piceatannol blocks the cysteine residue and counteracts the oxidative modifications that induce GAPDH oligomerization and aggregation, and that it protects hippocampal cells from apoptosis by retaining GAPDH in the cytoplasm. All these findings provide new insight into the role of the piceatannol-GAPDH interaction and present a potential therapeutic strategy for some neurological disorders related to GAPDH aggregation. This work was supported by the National Science Centre, Poland (grant number 2017/25/N/NZ1/02849).
Keywords: glyceraldehyde-3-phosphate dehydrogenase, neurodegenerative disease, neuroprotection, piceatannol, protein aggregation
Procedia PDF Downloads 167
1136 CRM Cloud Computing: An Efficient and Cost Effective Tool to Improve Customer Interactions
Authors: Gaurangi Saxena, Ravindra Saxena
Abstract:
Cloud computing is increasingly used to attain corporate goals more effectively and efficiently at lower cost. This computing paradigm has emerged as a powerful tool for the optimum utilization of resources, gaining competitiveness through cost reduction and achieving business goals with greater flexibility. Realizing the importance of this technique, well-known companies in the computer industry such as Microsoft, IBM, Google and Apple are spending millions of dollars researching cloud computing and investigating the possibility of producing interface hardware for cloud computing systems. It is believed that, with the right middleware, a cloud computing system can execute all the programs a normal computer could run. Potentially, everything from the simplest generic word-processing software to highly specialized and customized programs designed for a specific company could work successfully on a cloud computing system. A cloud is a pool of virtualized computer resources. Clouds are not limited to grid environments but also support interactive user-facing applications such as web applications and three-tier architectures. Cloud computing is not a fundamentally new paradigm: it draws on existing technologies and approaches, such as utility computing, software-as-a-service, distributed computing, and centralized data centers. Some companies rent physical space to store servers and databases because they do not have it available on site. Cloud computing gives these companies the option of storing data on someone else's hardware, removing the need for physical space on the front end. Prominent service providers like Amazon, Google, SUN, IBM, Oracle, and Salesforce are extending computing infrastructures and platforms as a core for providing top-level services for computation, storage, databases, and applications. Application services could include email, office applications, finance, video, audio, and data processing.
By using a cloud computing system, a company can improve its customer relationship management. A CRM cloud computing system can deliver to a sales team a blend of unique functionalities that improve agent/customer interactions. This paper first defines cloud computing as a tool for running business activities more effectively and efficiently at lower cost, and then distinguishes cloud computing from grid computing. Based on an exhaustive literature review, the authors discuss applications of cloud computing in different management disciplines, especially marketing, with special reference to the use of cloud computing in CRM. The study concludes that a CRM cloud computing platform helps a company track data such as orders, discounts, references, competitors, and much more. By using CRM cloud computing, companies can improve their customer interactions and, by serving customers more efficiently at lower cost, gain competitive advantage.
Keywords: cloud computing, competitive advantage, customer relationship management, grid computing
Procedia PDF Downloads 312
1135 Wage Differentials in Pakistan by Focusing on Wage Differentials in Public and Private Sectors, Formal and Informal Sectors, and Major Occupational Groups
Authors: Asghar Ali, Narjis Khatoon
Abstract:
This study focuses on the presence of wage differentials in Pakistan and on the determinants that give rise to them. Since few studies have been conducted on this topic in Pakistan, the current study aims to help bridge the existing gap in this research genre. Hence, this study not only generates results with a specific focus but also contributes to the overall empirical work on the Pakistani economy. Previous work on wage determinants and wage differentials has used numerous theories and approaches. To analyze the determinants of wage differentials in a developing economy, the current study draws on a number of theories and approaches considered suitable for the purpose. It explains wage differentials in Pakistan by focusing on differentials between the public and private sectors, between the formal and informal sectors, and across major occupational groups. The study uses wage theory to examine wage differentials among male and female employees in the public and private sectors under varied working conditions. It also uses segmented labor market theory to examine the wage differential in the public and private sectors, the formal and informal sectors, and major occupational groups in Pakistan. Various econometric techniques are used to test these theories and obtain the required results. This study employs seven cross-sectional Labour Force Surveys covering the period from 2006-07 to 2012-13. Gender equality is not only a policy reform agenda for developing countries but also an important goal of the Millennium Development Goals. This study investigates the nexus between wage inequality and economic growth and tests for co-integration between the gender wage differential and economic growth using the ARDL bounds test.
The empirical results confirm that a long-run relationship exists between economic growth and the wage differential. Our study indicated that half of the total female employees from fourteen major cities of Pakistan were employed in the public sector. Of the total female employees in the private sector, 66 percent were employed in the formal sector and 33 percent in the informal sector. Results also indicated that both men and women were paid more in the public sector than their private sector counterparts. Among the total female employees, only 9 percent had received any formal training, 52 percent were married, and the average schooling was 11 years. Further, our findings on the wage differential between genders indicate that the wage gap is lower in the public sector than in the private sector. The gender wage ratio was found to be 0.96, 0.62 and 0.66 in the public, formal private and informal private sectors, respectively. This suggests that private sector female employees with the same pay structure are compensated at a lower rate of return on their endowments than their public sector counterparts.
Keywords: wage differentials, formal, informal, economic growth
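As an illustrative sketch only, a gender wage ratio of the kind reported in this abstract is simply the ratio of mean female to mean male wages within each sector. The wage figures below are invented placeholders chosen to reproduce the reported ratios, not the survey data:

```python
# Hypothetical sketch: computing gender wage ratios by sector.
# The mean-wage figures are made-up placeholders, not the Labour Force Survey data.
def gender_wage_ratio(female_mean: float, male_mean: float) -> float:
    """Ratio of mean female wage to mean male wage; 1.0 means parity."""
    return round(female_mean / male_mean, 2)

sectors = {
    "public":           {"female": 24000, "male": 25000},
    "formal private":   {"female": 15500, "male": 25000},
    "informal private": {"female": 9900,  "male": 15000},
}

ratios = {name: gender_wage_ratio(w["female"], w["male"])
          for name, w in sectors.items()}
```

With these placeholder means, the ratios come out as 0.96, 0.62 and 0.66, matching the pattern reported for the public, formal private and informal private sectors.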
Procedia PDF Downloads 197
1134 Linking Information Systems Capabilities for Service Quality: The Role of Customer Connection and Environmental Dynamism
Authors: Teng Teng, Christos Tsinopoulos
Abstract:
The purpose of this research is to explore the link between IS capabilities, customer connection, and quality performance in the service context, with an investigation of the impact of stable and dynamic firm environments. The application of information systems (IS) has had a significant effect on contemporary service operations. Firms invest in IS on the presumption that IS will facilitate operational processes and thereby improve performance. Yet IS resources by themselves are not sufficiently 'unique', so it is more useful and theoretically relevant to focus on the processes they affect. One such organisational process, which has attracted much research attention from supply chain management scholars, is the integration of customer connection: IS-enabled customer connection enhances communication and contact processes, and with such integration of customer resources comes greater success for the firm in developing a good understanding of customer needs and setting accurate customer expectations. Nevertheless, prior studies on IS capabilities have either focused on one specific type of technology or operationalised IS capability as a highly aggregated concept. Moreover, although conceptual frameworks show that customer integration is valuable in service provision, there is much to learn about the practices of integrating customer resources. In this research, IS capabilities are broken down into three dimensions based on the framework of Wade and Hulland: IT for supply chain activities (ITSCA), flexible IT infrastructure (ITINF), and IT operations shared knowledge (ITOSK); the focus is on their impact on the operational performance of service firms. With this background, this paper addresses the following questions: -How do IS capabilities affect the integration of customer connection and service quality? -What is the relationship between environmental dynamism and the relationship between customer connection and service quality?
A survey of 156 service establishments was conducted, and the data analysed to determine the role of customer connection in mediating the effects of IS capabilities on firms' service quality. Confirmatory factor analysis was used to check convergent validity, and the structural model showed good fit. The moderating effect of environmental dynamism on the relationship between customer connection and service quality was analysed. Results show that ITSCA, ITINF, and ITOSK positively influence the degree of integration of customer connection. In addition, customer connection is positively related to service quality; this relationship is further emphasised when firms operate in a dynamic environment. This research takes a step towards quelling concerns about the business value of IS, contributing to the development and validation of measures of IS capabilities in the service operations context. It also adds to the emerging body of literature linking customer connection to the operational performance of service firms. Managers of service firms should consider the strength of the mediating role of customer connection when investing in IT-related technologies and policies. In particular, service firms developing IS capabilities should simultaneously implement processes that encourage supply chain integration.
Keywords: customer connection, environmental dynamism, information systems capabilities, service quality, service supply chain
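A mediation structure of this kind can be sketched with a simple Baron-Kenny-style regression chain. This is a minimal illustration on synthetic data (the coefficients, composite scores, and sample construction are our assumptions, not the survey measures or the paper's structural model):

```python
import numpy as np

# Synthetic data standing in for the survey constructs; only n matches the study.
rng = np.random.default_rng(0)
n = 156
is_cap = rng.normal(size=n)                           # composite IS capability score (X)
customer_conn = 0.6 * is_cap + rng.normal(size=n)     # mediator (M)
service_q = 0.5 * customer_conn + 0.1 * is_cap + rng.normal(size=n)  # outcome (Y)

def ols_slopes(y, xs):
    """OLS slope coefficients (intercept dropped) via least squares."""
    X = np.column_stack([np.ones(len(y))] + list(xs))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]

a = ols_slopes(customer_conn, [is_cap])[0]             # path X -> M
b = ols_slopes(service_q, [customer_conn, is_cap])[0]  # path M -> Y, controlling for X
indirect_effect = a * b                                # the mediated (indirect) effect
```

In practice the significance of the indirect effect would be assessed with bootstrapping or structural equation modelling, in line with the CFA/structural-model approach the abstract describes.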
Procedia PDF Downloads 140
1133 The Influence of Argumentation Strategy on Student’s Web-Based Argumentation in Different Scientific Concepts
Authors: Xinyue Jiao, Yu-Ren Lin
Abstract:
Argumentation is an essential aspect of scientific thinking that has received wide attention in recent reforms of science education. The purpose of the present study was to explore the influence of two variables, the argumentation strategy and the kind of science concept, on students' web-based argumentation. The first variable was either monological (referring to an individual's internal discourse and inner chain reasoning) or dialectical (referring to dialogue between or among people). The second was either descriptive (macro-level concepts, i.e., phenomena that can be observed and tested directly) or theoretical (micro-level concepts that are abstract and cannot be tested directly in nature). The study applied a quasi-experimental design in which 138 7th grade students were invited and then assigned randomly to either a monological group (N=70) or a dialectical group (N=68). An argumentation learning program called the PWAL was developed to improve their scientific argumentation abilities, such as arguing from multiple perspectives and on the basis of scientific evidence. Two versions of the PWAL were created. In the individual version, students could propose arguments only through knowledge recall and self-reflection. In the collaborative version, students could construct arguments through communication with peers. The PWAL involved three descriptive concept-based topics (units 1, 3 and 5) and three theoretical concept-based topics (units 2, 4 and 6). Three kinds of scaffolding were embedded in the PWAL: a) an argument template, used for constructing evidence-based arguments; b) a model of Toulmin's TAP, showing the structure and elements of a sound argument; and c) a discussion block, enabling students to review what had been proposed during the argumentation. Both quantitative and qualitative data were collected and analyzed.
An analytical framework for coding the arguments students proposed in the PWAL was constructed. The results showed that the argumentation approach had a significant effect on argumentation only in the theoretical topics (F(1, 136) = 48.2, p < .001, η² = .262). Post-hoc analysis showed that students in the collaborative group performed significantly better than students in the individual group (mean difference = 2.27). However, there was no significant difference between the two groups in their argumentation on descriptive topics. Secondly, the students made significant progress in the PWAL from the earlier descriptive or theoretical topics to the later ones. The results enabled us to conclude that the PWAL was effective for students' argumentation and that peer interaction was essential for students to argue scientifically, especially on theoretical topics. The follow-up qualitative analysis showed that students tended to generate arguments through critical dialogue in the theoretical topics, which prompted them to offer more critiques and to evaluate and co-construct each other's arguments. Further explanations of the students' web-based argumentation and suggestions for the development of web-based science learning are offered in our discussion.
Keywords: argumentation, collaborative learning, scientific concepts, web-based learning
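As a quick sketch, the effect size for a one-way design can be recovered from the F statistic and its degrees of freedom alone; applying this to the reported F(1, 136) = 48.2 gives η² ≈ .262. The helper below is our illustration, not part of the study's analysis:

```python
# Effect size (eta squared) for a one-way ANOVA, computed from the F statistic:
# eta^2 = SS_between / SS_total = (F * df_between) / (F * df_between + df_within).
def eta_squared(f_stat: float, df_between: int, df_within: int) -> float:
    return (f_stat * df_between) / (f_stat * df_between + df_within)

eta2 = eta_squared(48.2, 1, 136)  # the F and dfs reported for the theoretical topics
```

Note that η² is bounded between 0 and 1 by construction, which is a useful sanity check when transcribing reported effect sizes.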
Procedia PDF Downloads 104
1132 X-Ray Detector Technology Optimization in Computed Tomography
Authors: Aziz Ikhlef
Abstract:
Most multi-slice computed tomography (CT) scanners are built with detectors composed of scintillator-photodiode arrays. The photodiode arrays are mainly based on front-illuminated technology for detectors under 64 slices and on back-illuminated photodiodes for systems of 64 slices or more. Designs based on back-illuminated photodiodes were investigated for CT machines to overcome the challenge of the higher number of traces and connections required by front-illuminated diodes. In backlit diodes, electronic noise is improved by the reduction of load capacitance due to reduced routing. This translates into better image quality in low-signal applications, improving low-dose imaging in large patient populations. With the fast development of multi-detector-row CT (MDCT) scanners and the increasing number of examinations, both the medical and regulatory communities have raised significant concerns about the radiation dose received by patients. To reduce individual exposure, and in response to the recommendations of the International Commission on Radiological Protection (ICRP) that all exposures should be kept as low as reasonably achievable (ALARA), every manufacturer is trying to implement strategies and solutions to optimize dose efficiency and image quality based on x-ray emission and scanning parameters. Added demands on CT detector performance also come from the increased utilization of spectral or dual-energy CT, in which projection data at two different tube potentials are collected. One approach utilizes a technology called fast-kVp switching, in which the tube voltage is switched between 80 kVp and 140 kVp in a fraction of a millisecond. To reduce the cross-contamination of signals, the temporal response of the scintillator-based detector has to be extremely fast to minimize the residual signal from previous samples.
In addition, this paper presents an overview of detector technologies and image chain improvements investigated in the last few years to improve the signal-to-noise ratio and the dose efficiency of CT scanners, both in regular examinations and in energy discrimination techniques. Several parameters of the image chain in general, and of the detector technology in particular, contribute to the optimization of the final image quality. We go through the properties of the post-patient collimation to improve the scatter-to-primary ratio; the scintillator material properties, such as light output, afterglow, primary speed, and crosstalk, to improve spectral imaging; the photodiode design characteristics; and the data acquisition system (DAS), optimized for crosstalk, noise, and temporal/spatial resolution.
Keywords: computed tomography, X-ray detector, medical imaging, image quality, artifacts
Procedia PDF Downloads 194
1131 Studies of the Reaction Products Resulted from Glycerol Electrochemical Conversion under Galvanostatic Mode
Authors: Ching Shya Lee, Mohamed Kheireddine Aroua, Wan Mohd Ashri Wan Daud, Patrick Cognet, Yolande Peres, Mohammed Ajeel
Abstract:
In recent years, with the decreasing supply of fossil fuel, demand for renewable energy has increased significantly. Biodiesel, well known as vegetable-oil-based fatty acid methyl ester, is an alternative fuel for diesel. It can be produced from the transesterification of vegetable oils, such as palm oil, sunflower oil, rapeseed oil, etc., with methanol. During the transesterification process, crude glycerol is formed as a by-product, amounting to 10 wt% of the total biodiesel production. To date, due to the fast growth of biodiesel production worldwide, the crude glycerol supply has also increased rapidly, resulting in a significant price drop for glycerol. Therefore, extensive research has been devoted to using glycerol as a feedstock to produce various added-value chemicals, such as tartronic acid, mesoxalic acid, glycolic acid, glyceric acid, propanediol, acrolein, etc. The industrial processes typically involved are selective oxidation, biofermentation, esterification, and hydrolysis. However, the conversion of glycerol into added-value compounds by an electrochemical approach is rarely discussed. Current work mainly focuses on the electro-oxidation of glycerol under potentiostatic mode for cogenerating energy with other chemicals; electro-organic synthesis from glycerol under galvanostatic mode is seldom reviewed. In this study, glycerol was converted into various added-value compounds by an electrochemical method under galvanostatic mode. This work aimed to study the possible compounds produced from glycerol by an electrochemical technique in a one-pot electrolysis cell. The electro-organic synthesis study was carried out in a single-compartment reactor for 8 hours, over platinum cathode and anode electrodes, under acidic conditions. Various parameters such as electric current (1.0 A to 3.0 A) and reaction temperature (27 °C to 80 °C) were evaluated.
The products obtained were characterized using gas chromatography-mass spectrometry equipped with an aqueous-stable polyethylene glycol stationary phase column. Under the optimized reaction conditions, glycerol conversion reached as high as 95%. Glycerol was successfully converted into various added-value chemicals such as ethylene glycol, glycolic acid, glyceric acid, acetaldehyde, formic acid, and glyceraldehyde, with yields of 1%, 45%, 27%, 4%, 0.7% and 5%, respectively. Based on the products obtained, a reaction mechanism for the process is proposed. In conclusion, this study successfully converted glycerol into a wide variety of added-value compounds. These chemicals have high market value and can be used in the pharmaceutical, food and cosmetic industries. This study effectively opens a new approach for the electrochemical conversion of glycerol. For further enhancement of product selectivity, electrode material is an important parameter to consider.
Keywords: biodiesel, glycerol, electrochemical conversion, galvanostatic mode
Procedia PDF Downloads 193
1130 Analyzing Global User Sentiments on Laptop Features: A Comparative Study of Preferences Across Economic Contexts
Authors: Mohammadreza Bakhtiari, Mehrdad Maghsoudi, Hamidreza Bakhtiari
Abstract:
Laptops have become essential to modern lifestyles, supporting work, education, and entertainment. Social media platforms have emerged as key spaces where users share real-time feedback on laptop performance, providing a valuable source of data for understanding consumer preferences. This study leverages aspect-based sentiment analysis (ABSA) on 1.5 million tweets to examine how users from developed and developing countries perceive and prioritize 16 key laptop features. The analysis reveals that consumers in developing countries express higher satisfaction overall, emphasizing affordability, durability, and reliability. Conversely, users in developed countries demonstrate more critical attitudes, especially toward performance-related aspects such as cooling systems, battery life, and chargers. The study employs a mixed-methods approach, combining ABSA using the PyABSA framework with expert insights gathered through a Delphi panel of ten industry professionals. Data preprocessing included cleaning, filtering, and aspect extraction from tweets. Universal issues such as battery efficiency and fan performance were identified, reflecting shared challenges across markets. However, priorities diverge between regions: while users in developed countries demand high-performance models with advanced features, those in developing countries seek products that offer strong value for money and long-term durability. The findings suggest that laptop manufacturers should adopt a market-specific strategy by developing differentiated product lines. For developed markets, the focus should be on cutting-edge technologies, enhanced cooling solutions, and comprehensive warranty services. In developing markets, emphasis should be placed on affordability, versatile port options, and robust designs. Additionally, the study highlights the importance of universal charging solutions and continuous sentiment monitoring to adapt to evolving consumer needs.
This research offers practical insights for manufacturers seeking to optimize product development and marketing strategies for global markets, ensuring enhanced user satisfaction and long-term competitiveness. Future studies could explore multi-source data integration and conduct longitudinal analyses to capture changing trends over time.
Keywords: consumer behavior, durability, laptop industry, sentiment analysis, social media analytics
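The aggregation step of an ABSA pipeline of the kind described above can be sketched as follows. This is a toy illustration with made-up (region, aspect, sentiment) triples; it is not the PyABSA pipeline or the tweet data used in the study:

```python
# Illustrative sketch only: aggregating aspect-level sentiment labels by region.
# The labeled triples are invented examples of what an ABSA step might emit.
from collections import defaultdict

labeled = [
    ("developing", "battery", "negative"),
    ("developing", "price", "positive"),
    ("developed", "battery", "negative"),
    ("developed", "cooling", "negative"),
    ("developing", "durability", "positive"),
]

# Count positive/negative mentions per (region, aspect) pair
scores = defaultdict(lambda: {"positive": 0, "negative": 0})
for region, aspect, sentiment in labeled:
    scores[(region, aspect)][sentiment] += 1

def satisfaction(region: str, aspect: str):
    """Share of positive mentions for an aspect in a region; None if unseen."""
    s = scores[(region, aspect)]
    total = s["positive"] + s["negative"]
    return s["positive"] / total if total else None
```

Comparing `satisfaction` scores across regions for the same aspect is the kind of contrast the study draws, e.g. higher satisfaction with price and durability in developing markets and more critical battery/cooling sentiment in developed ones.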
Procedia PDF Downloads 15
1129 Estimating Estimators: An Empirical Comparison of Non-Invasive Analysis Methods
Authors: Yan Torres, Fernanda Simoes, Francisco Petrucci-Fonseca, Freddie-Jeanne Richard
Abstract:
Non-invasive sampling is an alternative to collecting genetic samples directly. Non-invasive samples are collected without manipulation of the animal (e.g., scats, feathers and hairs). Nevertheless, the use of non-invasive samples has some limitations. The main issue is degraded DNA, leading to poorer extraction efficiency and genotyping. For some years, those errors delayed the widespread use of non-invasive genetic information. Genotyping errors can be limited by using analysis methods that accommodate the errors and singularities of non-invasive samples. Genotype matching and population estimation algorithms can be highlighted as important analysis tools that have been adapted to deal with those errors. Despite this recent development of analysis methods, there is still a lack of empirical performance comparisons among them. A comparison of methods on datasets differing in size and structure can be useful for future studies, since non-invasive samples are a powerful tool for obtaining information, especially for endangered and rare populations. To compare the analysis methods, four datasets obtained from the Dryad digital repository were used. Three matching algorithms (Cervus, Colony and Error Tolerant Likelihood Matching - ETLM) were used for matching genotypes and two for population estimation (Capwire and BayesN). The three matching algorithms showed different patterns of results. ETLM produced fewer unique individuals and recaptures. A similarity in the matched genotypes between Colony and Cervus was observed, which is not surprising given the similarity of their pairwise-likelihood and clustering algorithms. The genotypes matched by ETLM showed almost no similarity with those matched by the other methods.
The different clustering algorithm and error model of ETLM seem to lead to a more stringent selection, although ETLM's processing time and interface friendliness were the worst among the compared methods. The population estimators performed differently across the datasets; there was consensus between the estimators for only one dataset. BayesN produced both higher and lower estimates compared with Capwire. BayesN does not consider the total number of recaptures, as Capwire does, but only the recapture events, which makes the estimator sensitive to data heterogeneity, here meaning different capture rates between individuals. In these examples, the assumption of homogeneity seems to be crucial for BayesN to work properly. Both methods are user-friendly and have reasonable processing times. An expanded analysis with simulated genotype data could clarify the sensitivity of the algorithms. The present comparison of the matching methods indicates that Colony seems more appropriate for general use, considering the balance of time, interface and robustness. The heterogeneity of the recaptures strongly affected the BayesN estimates, leading to over- and underestimation of population numbers. Capwire is therefore advisable for general use, since it performs better in a wide range of situations.
Keywords: algorithms, genetics, matching, population
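The core idea behind error-tolerant genotype matching can be sketched as follows. This toy function is our illustration of the general principle, not ETLM's actual likelihood model: two multilocus genotypes are declared a match when allelic mismatches stay under a threshold, with missing loci skipped, as is common for degraded non-invasive samples:

```python
# Hypothetical sketch of error-tolerant genotype matching. A genotype is a list
# of loci; each locus is an unordered allele pair (tuple) or None for missing data.
def allele_mismatches(g1, g2) -> int:
    """Count loci whose allele pairs differ; missing loci (None) are skipped."""
    mismatches = 0
    for a, b in zip(g1, g2):
        if a is None or b is None:
            continue  # degraded non-invasive samples often drop loci
        if sorted(a) != sorted(b):
            mismatches += 1
    return mismatches

def is_match(g1, g2, max_mismatch: int = 1) -> bool:
    """Tolerant match: allow a small number of mismatches (e.g. allelic dropout)."""
    return allele_mismatches(g1, g2) <= max_mismatch

# Two scat samples from (possibly) the same individual; one locus is missing
# and one shows a single-allele discrepancy, as a dropout error would produce.
scat_a = [(120, 124), (88, 88), (201, 205), None]
scat_b = [(120, 124), (88, 90), (201, 205), (150, 152)]
```

A strict (zero-tolerance) matcher would split these two samples into two individuals, inflating population estimates; the tolerant matcher merges them, which is exactly the trade-off the compared algorithms navigate with their different error models.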
Procedia PDF Downloads 143
1128 Challenges of Blockchain Applications in the Supply Chain Industry: A Regulatory Perspective
Authors: Pardis Moslemzadeh Tehrani
Abstract:
Due to the emergence of blockchain technology and the benefits of cryptocurrencies, intelligent or smart contracts are gaining traction. Artificial intelligence (AI) is transforming our lives and is being embraced by a wide range of sectors. Smart contracts, which are at the heart of blockchains, incorporate AI characteristics. Such contracts are referred to as 'smart' because the underlying technology allows contracting parties to agree on terms expressed in computer code that defines machine-readable instructions for computers to follow under specific conditions. The transaction executes automatically if the conditions are met. Initially utilised for financial transactions, blockchain applications have since expanded to the financial, insurance, and medical sectors, as well as supply networks. Raw material acquisition by suppliers, design and fabrication by manufacturers, delivery of final products to consumers, and even post-sales logistics assistance are all part of supply chains. Managing supply chains involves many issues, from the planning and coordination stages onward, which, despite their complexity, can be implemented in smart contracts on a blockchain. Manufacturing delays and limited third-party supplies of product components have raised concerns about the integrity and accountability of supply chains for food and pharmaceutical items. Other concerns include regulatory compliance in multiple jurisdictions and transportation conditions (for instance, many products must be kept in temperature-controlled environments to ensure their effectiveness). Products are handled by several providers before reaching customers in modern economic systems. Information is sent between suppliers, shippers, distributors, and retailers at every stage of the production and distribution process, and it travels more effectively when intermediaries are eliminated from the equation.
The usage of blockchain technology could be a viable solution to these coordination issues. In blockchains, smart contracts allow for the rapid transmission of production data, logistical data, inventory levels, and sales data. This research investigates the legal and technical advantages and disadvantages of AI-blockchain technology in the supply chain business. It aims to uncover the applicable legal problems and barriers to the use of AI-blockchain technology in supply chains, particularly in the food industry. It also discusses the essential legal and technological issues and impediments to supply chain implementation for stakeholders, as well as methods for overcoming them before releasing the technology to clients. Because little research has been done on this topic, it is difficult for industrial stakeholders to grasp how blockchain technology could be used in their respective operations. As a result, the focus of this research is on building advanced and complex contractual terms in supply chain smart contracts on blockchains to cover unforeseen supply chain challenges.
Keywords: blockchain, supply chain, IoT, smart contract
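A temperature-controlled transport clause of the kind described above can be sketched as a simple state machine: payment is released automatically only if every logged reading stayed inside the agreed range. This is plain Python for illustration (all names and values are invented), not an actual on-chain smart contract:

```python
# Illustrative pseudocontract: a cold-chain compliance clause that settles
# automatically, mimicking the conditional execution of a smart contract.
class ColdChainClause:
    def __init__(self, low_c: float, high_c: float, payment: int):
        self.low_c, self.high_c = low_c, high_c
        self.payment = payment
        self.readings: list[float] = []

    def log(self, temp_c: float) -> None:
        """Record a temperature reading, e.g. from an IoT sensor in transit."""
        self.readings.append(temp_c)

    def settle(self) -> int:
        """Release full payment if every reading was in range, zero otherwise."""
        compliant = all(self.low_c <= t <= self.high_c for t in self.readings)
        return self.payment if compliant else 0

# Example: a pharmaceutical shipment agreed to stay between 2 °C and 8 °C.
clause = ColdChainClause(low_c=2.0, high_c=8.0, payment=10_000)
for t in (4.1, 5.0, 3.8):
    clause.log(t)
```

On an actual blockchain the readings would be written as transactions and `settle` would run as contract code; the sketch only shows the conditional-execution logic that makes such clauses enforceable without an intermediary.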
Procedia PDF Downloads 126
1127 Electrical Degradation of GaN-based p-channel HFETs Under Dynamic Electrical Stress
Authors: Xuerui Niu, Bolin Wang, Xinchuang Zhang, Xiaohua Ma, Bin Hou, Ling Yang
Abstract:
The application of discrete GaN-based power switches requires the collaboration of silicon-based peripheral circuit structures. However, the packages and interconnections between the Si and GaN devices can introduce parasitic effects into the circuit, which have great impacts on GaN power transistors. GaN-based monolithic power integration is an emerging solution that can improve the stability of circuits and allow GaN-based devices to take on more functions. Complementary logic circuits consisting of GaN-based E-mode p-channel heterostructure field-effect transistors (p-HFETs) and E-mode n-channel HEMTs can serve as gate drivers. E-mode p-HFETs with a recessed gate have attracted increasing interest because of their low leakage current and large gate swing. However, they suffer from a poor interface between the gate dielectric and the polarized nitride layers. The reliability of p-HFETs is analyzed and discussed in this work. In circuit applications, the inverter always operates with a dynamic gate voltage (VGS) rather than a constant VGS. Therefore, dynamic electrical stress has been applied to resemble the operating conditions of E-mode p-HFETs. The dynamic electrical stress condition is as follows: VGS is a square waveform switching from -5 V to 0 V, VDS is fixed, and the source is grounded. The frequency of the square waveform is 100 kHz, with rise/fall times of 100 ns and a duty ratio of 50%. The effective stress time is 1000 s. A number of stress tests were carried out, with the stress briefly interrupted to measure the linear and saturation IDS-VGS characteristics. As VGS switches from -5 V to 0 V with VDS = 0 V, the devices are under a negative-bias-instability (NBI) condition. Holes are trapped at the interface of the oxide layer and the GaN channel layer, which results in a reduction of VTH. The negative shift of VTH is serious during the first 10 s and then changes only slightly with further stress time.
However, a different phenomenon is observed when VDS is reduced to -5 V. VTH shifts negatively during the stress condition, and the variation in VTH increases with time, unlike the case when VDS is 0 V. Two mechanisms exist in this condition. On the one hand, the electric field in the gate region is influenced by the drain voltage, so the trapping behavior of holes in the gate region changes and the impact of the gate voltage is weakened. On the other hand, a large drain voltage can induce hot-hole generation and lead to serious hot-carrier-stress (HCS) degradation with time. The poor-quality interface between the oxide layer and the GaN channel layer at the gate region is the major contributor to the high density of interface traps, which greatly influences the reliability of the devices. These results emphasize that improved etching and pretreatment processes need to be developed so that high-performance GaN complementary logic with enhanced stability can be achieved.
Keywords: GaN-based E-mode p-HFETs, dynamic electric stress, threshold voltage, monolithic power integration technology
Procedia PDF Downloads 92