Search results for: three phase partitioning
3272 Structural, Magnetic and Magnetocaloric Properties of Iron-Doped Nd₀.₆Sr₀.₄MnO₃ Perovskite
Authors: Ismail Al-Yahmadi, Abbasher Gismelseed, Fatma Al-Mammari, Ahmed Al-Rawas, Ali Yousif, Imaddin Al-Omari, Hisham Widatallah, Mohamed Elzain
Abstract:
The influence of Fe-doping on the structural, magnetic and magnetocaloric properties of Nd₀.₆Sr₀.₄FeₓMn₁₋ₓO₃ (0 ≤ x ≤ 0.5) was investigated. The samples were synthesized by the auto-combustion sol-gel method. The phase purity, crystallinity, and structural properties of all prepared samples were examined by X-ray diffraction. XRD refinement indicates that the samples crystallize in a single orthorhombic phase with the Pnma space group. Temperature-dependent magnetization measurements under an applied magnetic field of 0.02 T reveal that the samples with x = 0.0, 0.1, 0.2 and 0.3 exhibit a paramagnetic (PM) to ferromagnetic (FM) transition with decreasing temperature. The Curie temperature decreases with increasing Fe content, from 256 K for x = 0.0 to 80 K for x = 0.3, due to the strengthening of antiferromagnetic superexchange (SE) coupling. Moreover, magnetization versus applied magnetic field (M-H) curves were measured at 2 K and 300 K; the results confirm the temperature-dependent magnetization measurements. The magnetic entropy change |∆SM| was evaluated using Maxwell's relation. The maximum values of the magnetic entropy change for x = 0.0, 0.1, 0.2 and 0.3 are found to be 15.35, 5.13, 3.36 and 1.08 J/(kg·K) for an applied magnetic field of 9 T. These magnetocaloric results suggest that the parent sample Nd₀.₆Sr₀.₄MnO₃ could be a good refrigerant for low-temperature magnetic refrigeration.
Keywords: manganite perovskite, magnetocaloric effect, X-ray diffraction, relative cooling power
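The Maxwell-relation evaluation mentioned in this abstract follows a standard textbook form (a general expression, not a formula quoted from the paper): the entropy change at temperature T for a field sweep up to H_max is

```latex
\Delta S_M(T, H_{\max}) \;=\; \mu_0 \int_0^{H_{\max}} \left( \frac{\partial M(T,H)}{\partial T} \right)_{H} \, \mathrm{d}H ,
```

which in practice is evaluated numerically from a discrete set of M(H) isotherms measured at neighbouring temperatures.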
Procedia PDF Downloads 157
3271 Implementation of Algorithm K-Means for Grouping District/City in Central Java Based on Macro Economic Indicators
Authors: Nur Aziza Luxfiati
Abstract:
Clustering partitions a data set into subsets (groups) such that elements within one group share properties with a high level of similarity, while similarity between groups is low. The K-Means algorithm is one of the most widely used clustering algorithms in scientific and industrial applications because its basic idea is very simple. This research applies k-means clustering to the problem of national development imbalances between regions in Central Java Province, based on macroeconomic indicators. The data sample is secondary data obtained from the Central Java Provincial Statistics Agency, consisting of macroeconomic indicator data from the publication of the 2019 National Socio-Economic Survey (Susenas). Outliers were detected using the z-score, and the number of clusters (k) was determined using the elbow method. After the clustering process, the result was validated using the Between-Class Variation (BCV) and Within-Class Variation (WCV) methods. The results showed that outlier detection using z-score normalization revealed no outliers. In addition, the clustering test obtained a ratio value that was not high, namely 0.011%. There are two district/city clusters in Central Java Province with economic similarities based on the variables used: a first cluster with a high economic level consisting of 13 districts/cities, and a second cluster with a low economic level consisting of 22 districts/cities. Within the second (low-economy) cluster, the authors further grouped districts/cities by similarity on individual macroeconomic indicators: 20 districts by Gross Regional Domestic Product, 19 districts by Poverty Depth Index, 5 districts by Human Development Index, and 10 districts by Open Unemployment Rate.
Keywords: clustering, K-Means algorithm, macroeconomic indicators, inequality, national development
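The pipeline described in this abstract (z-score normalization followed by k-means grouping) can be sketched as follows. This is an illustrative sketch on synthetic two-indicator rows, not the Susenas data; the indicator values and district rows are invented, and the initialization is a deterministic farthest-point heuristic rather than the (unstated) scheme used by the author.

```python
import math

def zscore(col):
    # Standardize one indicator column: (x - mean) / std
    m = sum(col) / len(col)
    s = math.sqrt(sum((x - m) ** 2 for x in col) / len(col))
    return [(x - m) / s for x in col]

def dist2(a, b):
    # Squared Euclidean distance between two points
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, k, iters=100):
    # Lloyd's algorithm; deterministic farthest-point initialization so the
    # sketch is reproducible without a random seed.
    centroids = [points[0]]
    while len(centroids) < k:
        centroids.append(max(points, key=lambda p: min(dist2(p, c) for c in centroids)))
    labels = []
    for _ in range(iters):
        # Assign each point to its nearest centroid
        labels = [min(range(k), key=lambda j: dist2(p, centroids[j])) for p in points]
        # Recompute centroids as member means
        new = []
        for j in range(k):
            members = [p for p, lab in zip(points, labels) if lab == j]
            new.append(tuple(sum(cs) / len(members) for cs in zip(*members)) if members else centroids[j])
        if new == centroids:
            break
        centroids = new
    return labels, centroids

# Toy rows: (GRDP, poverty-depth index) for six hypothetical districts --
# three with a high and three with a low economic level.
raw = [(900.0, 5.0), (950.0, 6.0), (870.0, 4.0), (120.0, 18.0), (100.0, 20.0), (140.0, 17.0)]
cols = [zscore(list(c)) for c in zip(*raw)]
data = [tuple(col[i] for col in cols) for i in range(len(raw))]
labels, _ = kmeans(data, 2)
```

On this toy input the two economic tiers separate into the two clusters, mirroring the high-economy/low-economy split reported for the 35 districts/cities.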
Procedia PDF Downloads 157
3270 Task Value and Research Culture of Southern Luzon State University
Authors: Antonio V. Romana, Rizaide A. Salayo, Maria Lavinia E. Fetalino
Abstract:
This study assessed the subjective task value and research culture of SLSU faculty. It used a sequential explanatory mixed-method research design. In the quantitative phase, questionnaires on research culture and task value were used; in the qualitative phase, the data were coded and thematized to interpret the focus group discussion outcomes. Results showed that among the dimensions of subjective task value, intrinsic value ranked highest while utility value ranked lowest. It is worth mentioning that all subjective task values were rated "Agree." The FGD revealed that faculty members valued research and wanted to be involved in this undertaking. However, the limited number of faculty researchers, heavy teaching workloads, inadequate information on the research process, lack of self-confidence, and low incentives received from research hindered their writing and engagement with research. Thus, a policy brief was developed. It is recommended that the institution conduct a series of research seminar-workshops for faculty members, plan regular research idea exchange activities, and revisit the university's research thrusts and agenda to align them with faculty members' specialization and expertise. In addition, the university may lessen workloads and hire additional faculty members so that educators can focus on their research work. Finally, cash incentives may still be considered, given that faculty members have varied experiences in doing research tasks.
Keywords: task value, interest value, attainment value, utility value, research culture
Procedia PDF Downloads 64
3269 The Effects of Electron Trapping by Electron-Acoustic Waves Excited with Electron Beam
Authors: Abid Ali Abid
Abstract:
One-dimensional (1-D) particle-in-cell (PIC) electrostatic simulations are carried out to investigate electrostatic waves whose constituents are hot, cold and beam electrons against a background of motionless positive ions. The electrostatic modes excited are electron acoustic waves, beam-driven waves and Langmuir waves. It is found that the relevant plasma parameters, for example the hot electron temperature, the beam electron drift speed, and the electron beam density, significantly modify the electrostatic wave profiles. In the nonlinear stage, the wave-particle interaction becomes more evident and the waves reach their saturation level. Consequently, electrons become trapped in the waves and trapping vortices are clearly formed. These trapping vortices and the mixing of electrons in phase space eventually lead to electron thermalization. It is observed that for high beam-electron density, solitary waves with a bipolar electric field form appear. These solitons are nonlinear Bernstein-Greene-Kruskal (BGK) wave modes attributed to electron trapping in the potential well of a phase-space hole. These examinations reveal that electrostatic waves are excited in the beam-plasma model, producing waves with broad frequency ranges, which may explain the broadband electrostatic noise (BEN) spectrum observed in the auroral zone.
Keywords: electron acoustic waves, trapping of cold electrons, Langmuir waves, particle-in-cell simulation
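The core machinery of a 1-D electrostatic PIC code can be sketched compactly. The following is a minimal sketch only, far simpler than the beam-plasma setup of the abstract: a single thermal electron population over a uniform, immobile neutralizing ion background, nearest-grid-point charge deposit, a direct integration of dE/dx = ρ on a periodic grid, and a leapfrog-style push. All parameter values are illustrative, in normalized units.

```python
import math
import random

# Normalized units; the uniform background density is 1.
NG, L, N, DT, STEPS = 64, 2 * math.pi, 2000, 0.1, 50
dx = L / NG
qe = -L / N            # electron macroparticle charge; ions contribute +L in total

rng = random.Random(42)
x = [rng.uniform(0, L) for _ in range(N)]     # electron positions
v = [rng.gauss(0.0, 0.1) for _ in range(N)]   # thermal electron velocities

def fields(positions):
    # Nearest-grid-point charge deposit plus the uniform neutralizing ion
    # background, then integrate dE/dx = rho along the periodic grid and
    # subtract the mean so that <E> = 0.
    rho = [1.0] * NG
    for xi in positions:
        rho[int(xi / dx) % NG] += qe / dx
    E, acc = [0.0] * NG, 0.0
    for g in range(NG):
        acc += rho[g] * dx
        E[g] = acc
    mean = sum(E) / NG
    return [e - mean for e in E]

# Push loop: the electron charge-to-mass ratio is -1 in these units.
for _ in range(STEPS):
    E = fields(x)
    for i in range(N):
        v[i] -= DT * E[int(x[i] / dx) % NG]   # acceleration a = (q/m) E = -E
    for i in range(N):
        x[i] = (x[i] + DT * v[i]) % L         # periodic wrap
```

A beam-plasma study such as the one above would add drifting beam and cold populations to the particle loading and diagnose the field spectrum and phase-space vortices.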
Procedia PDF Downloads 204
3268 3-Dimensional Contamination Conceptual Site Model: A Case Study Illustrating the Multiple Applications of Developing and Maintaining a 3D Contamination Model during an Active Remediation Project on a Former Urban Gasworks Site
Authors: Duncan Fraser
Abstract:
A 3-Dimensional (3D) conceptual site model was developed on the Leapfrog Works® platform using a comprehensive historical dataset for a large former gasworks site in Fitzroy, Melbourne. The gasworks had been constructed across two fractured geological units with varying hydraulic conductivities: a Newer Volcanic (basaltic) outcrop covering approximately half of the site, overlying the fractured Melbourne Formation (siltstone) bedrock that outcrops over the remaining portion. During the investigative phase of works, a dense non-aqueous phase liquid (DNAPL) plume (coal tar) was identified within both geological units in the subsurface, originating from multiple sources including gasholders, tar wells, condensers, and leaking pipework. The first stage of model development determined the horizontal and vertical extents of the coal tar in the subsurface and assessed potential causality between sources, plume location, and site geology. Concentrations of key contaminants of interest (COIs) were also interpolated within Leapfrog to refine the distribution of contaminated soils. The model was subsequently used to develop a robust soil remediation strategy and achieve endorsement from an Environmental Auditor. A change in project scope, following the removal and validation of the three former gasholders, necessitated the additional excavation of a significant volume of residual contaminated rock to allow for the future construction of two-story underground basements. To assess the financial liabilities associated with offsite disposal or thermal treatment of material, the 3D model was updated with three years of additional analytical data from the active remediation phase of works.
Chemical concentrations and the residual tar plume within the rock fractures were modelled to pre-classify the in-situ material and enhance separation strategies, preventing unnecessary treatment of material and reducing costs.
Keywords: 3D model, contaminated land, Leapfrog, remediation
Procedia PDF Downloads 130
3267 Inherent Difficulties in Countering Islamophobia
Authors: Imbesat Daudi
Abstract:
Islamophobia, which is a billion-dollar industry, is widespread, especially in the United States, Europe, India, Israel, and countries whose Muslim minorities are at odds with governmental policies. Hatred of Islam in the West did not evolve spontaneously; it was methodically created. Islamophobia's current format has been designed to spread on its own, find a space in the Western psyche, and resist eradication. The hatred has been sustained by neoconservative ideologues and their allies, supported by the mainstream media. Social scientists have evaluated how ideas spread, why an idea can go viral, and where new ideas find space in our brains; this was made possible by advances in the computational power of software and computers. The spread of ideas, including Islamophobia, follows an S-curve with three phases: an initial exploratory phase with a long lag period, an explosive phase if the idea goes viral, and a final phase when the idea finds space in the human psyche. In the initial phase, an idea is quickly examined in a center in the prefrontal lobe. When deemed relevant, it is sent for evaluation to another center of the prefrontal lobe, where it is critically examined. Once it takes its final shape, the idea is sent as a final product to a center in the occipital lobe. This center cannot critically evaluate ideas; it can only defend them from critics. Counterarguments, no matter how scientific, are automatically rejected. Therefore, arguments that could be highly effective in the early phases become counterproductive once ideas are stored in the occipital lobe. Anti-Islamophobic intellectuals have done a very good job of countering Islamophobic arguments. However, they have not been as effective as the neoconservative ideologues who have promoted anti-Muslim rhetoric based on half-truths, misinformation, or outright lies.
The failure is partly due to the support pro-war activists receive from the mainstream media, state institutions, mega-corporations engaged in violent conflicts, and think tanks that supply Islamophobic arguments. However, there are also scientific reasons why anti-Islamophobic thinkers have been less effective: the dynamics of spreading ideas change once they are stored in the occipital lobe. The human brain is incapable of further evaluating ideas it has accepted as its own; therefore, a different strategy is required to be effective. This paper examines 1) why anti-Islamophobic intellectuals have failed to change the minds of non-Muslims and 2) the steps for countering hatred. Simply put, a new strategy is needed that can effectively counteract hatred of Islam and Muslims. Islamophobia is a disease that requires strong measures. Fighting hatred is always a challenge, but if we understand why Islamophobia is taking root in the twenty-first century, we can succeed in challenging Islamophobic arguments. That will need a coordinated effort of intellectuals, writers and the media.
Keywords: Islamophobia, Islam and violence, anti-Islamophobia, demonization of Islam
Procedia PDF Downloads 47
3266 Acceptance and Feasibility of Delivering an Evidence-based Digital Intervention for Palliative Care Education
Authors: Areej Alosimi, Heather Wharrad, Katharine Whittingham
Abstract:
Palliative care is a crucial element of nursing, especially with the steep increase in non-communicable diseases. Education in palliative care can help raise the standards of care and address the growing need for it. However, palliative care has not been introduced into nursing curricula, specifically in Saudi Arabia, as evidenced by students' inadequate understanding of the subject; this study therefore addresses a crucial gap in palliative care education and explores the potential of digital learning, which has been identified as a persuasive and effective method of improving education. The study aims to assess the feasibility and accessibility of implementing digital learning in palliative care education in Saudi Arabia by investigating the potential of delivering palliative care nurse education via distance learning. The study will utilize a sequential exploratory mixed-method approach: phase one will identify needs, phase two will develop a web-based program, and phase three will implement the intervention with a pre-post test. Semi-structured interviews will be conducted to explore participants' perceptions of and thoughts about the intervention. Data collection will incorporate questionnaires and interviews with nursing students. Data analysis will use SPSS for the quantitative measurements and NVivo for the qualitative aspects.
The research investigates whether palliative care nurse education can be effectively delivered through distance learning to improve students' understanding of the subject. The study's findings will lay the groundwork for a larger investigation of the efficacy of e-learning interventions in improving palliative care education among nursing students, and can potentially contribute to the overall advancement of nursing education and to meeting the growing need for palliative care globally.
Keywords: undergraduate nursing students, e-learning, palliative care education, knowledge
Procedia PDF Downloads 72
3265 Unbalanced Distribution Optimal Power Flow to Minimize Losses with Distributed Photovoltaic Plants
Authors: Malinwo Estone Ayikpa
Abstract:
Electric power systems are expected to operate with minimum losses and with voltages meeting international standards. This is generally made possible by control actions provided by automatic voltage regulators, capacitors and transformers with on-load tap changers (OLTC). With the development of photovoltaic (PV) system technology, their integration into distribution networks has increased over recent years, to the extent of replacing the above-mentioned techniques. The conventional analysis and simulation tools used for electrical networks are no longer able to take into account the control actions necessary for studying the impact of distributed PV generation. This paper presents an unbalanced optimal power flow (OPF) model that minimizes losses by combining active power generation with reactive power control of single-phase and three-phase PV systems. Reactive power can be generated or absorbed using the available capacity and the adjustable power factor of the inverter. The unbalanced OPF is formulated with current balance equations and solved by the primal-dual interior point method. Several simulation cases have been carried out, varying the size and location of the PV systems, and the results give a detailed view of the impact of distributed PV generation on distribution systems.
Keywords: distribution system, loss, photovoltaic generation, primal-dual interior point method
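Why inverter reactive-power control reduces losses can be seen on a deliberately tiny example. The sketch below is not the paper's unbalanced OPF or its primal-dual interior point solver: it is a single-feeder toy with invented values, where a brute-force search over the PV inverter's reactive capability finds the injection that minimizes the I²R line loss.

```python
import math

# Toy single-feeder example (illustrative per-unit values, not from the paper):
# a load draws p_load + j*q_load through a line of resistance r at voltage v.
# A PV inverter at the load bus produces p_pv and can inject reactive power
# q_pv, limited by the headroom of its apparent-power rating.
r, v = 0.05, 1.0
p_load, q_load = 1.0, 0.6
p_pv, s_rating = 0.8, 1.0

q_max = math.sqrt(s_rating**2 - p_pv**2)   # available reactive capacity

def line_loss(q_pv):
    # Line current carries the net power drawn from upstream:
    # loss = r * (P^2 + Q^2) / V^2.
    p_net = p_load - p_pv
    q_net = q_load - q_pv
    return r * (p_net**2 + q_net**2) / v**2

# Exhaustive search over the inverter's reactive capability.
candidates = [q_max * i / 1000 for i in range(-1000, 1001)]
best_q = min(candidates, key=line_loss)
```

As expected, the optimum is full local compensation of the load's reactive demand (q_pv = q_load, which here sits exactly at the capability limit), so only the net active power flows through the line resistance.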
Procedia PDF Downloads 331
3264 Drug Delivery Nanoparticles of Amino Acid Based Biodegradable Polymers
Authors: Sophio Kobauri, Tengiz Kantaria, Temur Kantaria, David Tugushi, Nina Kulikova, Ramaz Katsarava
Abstract:
Nanosized environmentally responsive materials are of special interest for various applications, including targeted drug delivery, and hold considerable potential for the treatment of many human diseases. The important technological advantages of nanoparticles (NPs) as drug carriers (nanocontainers) are their high stability, high carrier capacity, the feasibility of encapsulating both hydrophilic and hydrophobic substances, and the high variety of possible administration routes, including oral application and inhalation. NPs can also be designed to allow controlled (sustained) drug release from the matrix. These properties enable improvement of drug bioavailability and might allow drug dosage to be decreased. The targeted and controlled administration of drugs using NPs might also help to overcome drug resistance, which is one of the major obstacles in the control of epidemics. Various degradable and non-degradable polymers of both natural and synthetic origin have been used for NP construction. Among the most promising for the design of NPs are amino acid-based biodegradable polymers (AABBPs), which can be cleared from the body after fulfilling their function. AABBPs are composed of naturally occurring and non-toxic building blocks such as α-amino acids, fatty diols and dicarboxylic acids. Particles designed from these polymers are expected to have improved bioavailability along with high biocompatibility. The present work deals with a systematic study of the preparation of NPs from AABBPs by the cost-effective polymer deposition/solvent displacement method. The influence of the nature and concentration of surfactants, the concentration of the organic phase (polymer solution), the ratio of organic phase to water phase, and other factors on the size of the fabricated NPs has been studied. It was established that, depending on the conditions used, the NP size could be tuned within 40-330 nm.
As the next step of this research, an evaluation of the biocompatibility and bioavailability of the synthesized NPs has been performed using two stable human cell culture lines, HeLa and A549. This part of the study is still in progress.
Keywords: amino acids, biodegradable polymers, nanoparticles (NPs), non-toxic building blocks
Procedia PDF Downloads 431
3263 Managing Information Technology: An Overview of Information Technology Governance
Authors: Mehdi Asgarkhani
Abstract:
Today, investment in Information Technology (IT) solutions is, in most organizations, the largest component of capital expenditure. As capital investment in IT continues to grow, IT managers and strategists are expected to develop and put into practice effective decision-making models (frameworks) that improve decision-making processes for the use of IT in organizations and optimize investment in IT solutions. To be exact, there is an expectation that organizations not only maximize the benefits of adopting IT solutions but also avoid the many pitfalls associated with the rapid introduction of technological change. Different organizations, depending on their size, the complexity of the solutions required, and the processes used for financial management and budgeting, may use different techniques for managing strategic investment in IT solutions. Decision-making processes for the strategic use of IT within organizations are often referred to as IT Governance (or Corporate IT Governance). This paper examines IT governance as a tool for best practice in decision making about IT strategies. The discussion represents phase I of a project initiated to investigate trends in strategic decision making on IT strategies. Phase I is concerned mainly with a review of the literature and a number of case studies, establishing that the practice of IT governance varies significantly depending on the complexity of IT solutions, the organization's size, and its stage of maturity, ranging from informal approaches to sophisticated formal frameworks.
Keywords: IT governance, corporate governance, IT governance frameworks, IT governance components, aligning IT with business strategies
Procedia PDF Downloads 405
3262 Tunable Graphene Metasurface Modeling Using the Method of Moment Combined with Generalised Equivalent Circuit
Authors: Imen Soltani, Takoua Soltani, Taoufik Aguili
Abstract:
Metamaterials cross classical physical boundaries and give rise to new phenomena and applications in the domain of beam steering and shaping, where electromagnetic near- and far-field manipulations are achieved in an accurate manner. In this sense, 3D imaging is one of the beneficiaries, in particular Dennis Gabor's invention: holography. However, the major difficulty there is the lack of a suitable recording medium, so some enhancements were essential; to this end the 2D version of bulk metamaterials, the so-called metasurface, has been introduced. This new class of interfaces simplifies the problem of the recording medium with the capability of tuning the phase, amplitude, and polarization at a given frequency. In order to achieve intelligible wavefront control, the electromagnetic properties of the metasurface should be optimized by solving Maxwell's equations. In this context, integral methods are emerging as an important way to study electromagnetics from microwave to optical frequencies. The method of moments provides an accurate solution and reduces the dimensionality of the problem by writing the boundary conditions in the form of integral equations. However, solving such equations becomes more complicated and time-consuming as structural complexity increases. Here, the equivalent circuits method offers the most scalable route to developing an integral-method formulation. In fact, to ease the resolution of Maxwell's equations, the method of Generalised Equivalent Circuit was proposed to transfer the resolution from the domain of integral equations to the domain of equivalent circuits. This technique consists in creating an electric image of the studied structure using the discontinuity plane paradigm, taking its environment into account, so that the electromagnetic state of the discontinuity plane is described by generalised test functions, modelled by virtual sources that do not store energy.
The environmental effects are included through an impedance or admittance operator. Here, we propose a tunable metasurface composed of graphene-based elements which combine the advantages of the reflectarray concept with graphene as a pillar constituent element at terahertz frequencies. The metasurface's building block consists of a thin gold film, a SiO₂ dielectric spacer and a graphene patch antenna. Our electromagnetic analysis is based on the method of moments combined with the generalised equivalent circuit (MoM-GEC). We begin by restricting our attention to the effects of varying graphene's chemical potential on the unit-cell input impedance. It was found that varying the complex conductivity of graphene allows the phase and amplitude of the reflection coefficient to be controlled at each element of the array. From the results obtained, we determined that phase modulation is realized by adjusting graphene's complex conductivity. This modulation is a viable solution compared with tuning the phase by varying the antenna length, because it offers full 2π control of the reflection phase.
Keywords: graphene, method of moment combined with generalised equivalent circuit, reconfigurable metasurface, reflectarray, terahertz domain
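The tuning knob behind this design, graphene's conductivity as a function of chemical potential, can be illustrated with the standard intraband (Drude-like) term of the Kubo conductivity. This is a textbook expression, not the authors' full MoM-GEC model (the interband contribution is omitted), and the relaxation time and temperature below are assumed illustrative values.

```python
import math

# Physical constants (SI)
E = 1.602176634e-19      # elementary charge, C
KB = 1.380649e-23        # Boltzmann constant, J/K
HBAR = 1.054571817e-34   # reduced Planck constant, J*s

def sigma_intra(omega, mu_c_ev, tau=1e-13, temp=300.0):
    # Intraband Kubo conductivity of graphene:
    #   sigma(w) = (2 e^2 kB T / (pi hbar^2)) * ln[2 cosh(mu_c / 2 kB T)]
    #              * i / (w + i/tau)
    # omega in rad/s, mu_c in eV; tau is a phenomenological relaxation time.
    mu = mu_c_ev * E
    pref = 2 * E**2 * KB * temp / (math.pi * HBAR**2)
    therm = math.log(2 * math.cosh(mu / (2 * KB * temp)))
    return pref * therm * 1j / (omega + 1j / tau)

# Sweep the chemical potential at 1 THz: |sigma| grows monotonically with
# mu_c, which is what makes the reflection phase of each patch tunable.
omega = 2 * math.pi * 1e12
mags = [abs(sigma_intra(omega, mu)) for mu in (0.1, 0.3, 0.5, 0.7)]
```

In a full model this conductivity would enter the patch's surface impedance, from which the unit-cell input impedance and reflection coefficient follow.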
Procedia PDF Downloads 175
3261 Reduced Switch Count Asymmetrical Multilevel Inverter Topology
Authors: Voodi Kalandhar, Veera Reddy, Yuva Tejasree
Abstract:
Researchers have become interested in multilevel inverters (MLIs) because of their potential for medium- and high-power applications. MLIs are becoming more popular as a result of their ability to generate higher voltage levels with minimal power losses, small size, and low price. These inverters are used in high-voltage and high-power applications because the stress on each switch is low. Although many traditional topologies exist, such as the cascaded H-bridge MLI, the flying capacitor MLI, and the diode-clamped MLI, they all have drawbacks. The flying capacitor MLI needs a complicated control system to balance the voltage across its capacitors, and the diode-clamped MLI requires more diodes as the number of levels increases. Although the cascaded H-bridge MLI is popular for its modularity and simple control, it requires a larger number of isolated DC sources. Therefore, a topology with fewer devices has always been sought for greater efficiency and reliability. A new single-phase MLI topology is introduced here to minimize the required switch count and increase the number of output levels: with 3 DC voltage sources and 8 switches, it produces 13 levels at the output. To demonstrate the proposed converter's superiority over the other MLI topologies currently in use, a thorough analysis of the proposed topology is conducted.
Keywords: DC-AC converter, multilevel inverter (MLI), diodes, H-bridge inverter, switches
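How 3 DC sources can yield 13 output levels is easy to verify by enumeration. The abstract does not state the source ratio, so the 1:2:3 ratio below is an assumption: it is one common asymmetric choice for which inserting each source positively, bypassing it, or inserting it negatively produces exactly 13 distinct levels.

```python
from itertools import product

# Hypothetical asymmetric source ratio (in units of Vdc); not taken from the
# paper, which does not specify its ratio.
sources = [1, 2, 3]

# Each source contributes -V, 0, or +V to the output depending on the
# switch states; collect every reachable output level.
levels = sorted({
    sum(s * c for s, c in zip(sources, coeffs))
    for coeffs in product((-1, 0, 1), repeat=len(sources))
})
```

With this ratio every integer step from -6·Vdc to +6·Vdc is reachable, i.e. 13 levels, matching the level count reported for the proposed 3-source, 8-switch topology.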
Procedia PDF Downloads 80
3260 The Effect of "Trait" Variance of Personality on Depression: Application of the Trait-State-Occasion Modeling
Authors: Pei-Chen Wu
Abstract:
Both preexisting cross-sectional and longitudinal studies of the personality-depression relationship have suffered from one main limitation: they ignore that the stability of the constructs of interest (e.g., personality and depression) can be expected to influence the estimate of the association between them. To address this limitation, Trait-State-Occasion (TSO) modeling was adopted to analyze the sources of variance of the focal constructs. A TSO model operates by partitioning state variance into a time-invariant (trait) and a time-variant (occasion) component. Within a TSO framework, it is possible to predict change in the part of a construct that really changes (i.e., the time-variant variance) while controlling for the trait variance. 750 high school students were followed over 4 waves at six-month intervals. The baseline data (T1) were collected in senior high schools (ages 14 to 15 years). Participants completed the Beck Depression Inventory and the Big Five Inventory at each assessment. TSO modeling revealed that 70-78% of the variance in personality (five constructs) was stable over the follow-up period, whereas 57-61% of the variance in depression was stable. For the personality constructs, 7.6% to 8.4% of the total variance came from the autoregressive occasion factors; for the depression construct, 15.2% to 18.1% did. Additionally, results showed that, controlling for initial symptom severity, the time-invariant components of all five dimensions of personality predicted change in depression (Extraversion: B = .32, Openness: B = -.21, Agreeableness: B = -.27, Conscientiousness: B = -.36, Neuroticism: B = .39). Because the five dimensions of personality share some variance, models in which all five dimensions simultaneously predicted change in depression were also investigated.
The time-invariant components of the five dimensions remained significant predictors of change in depression (Extraversion: B = .30, Openness: B = -.24, Agreeableness: B = -.28, Conscientiousness: B = -.35, Neuroticism: B = .42). In sum, the majority of the variability in personality was stable over two years. Individuals with a greater tendency toward Extraversion and Neuroticism have higher degrees of depression; individuals with a greater tendency toward Openness, Agreeableness and Conscientiousness have lower degrees of depression.
Keywords: assessment, depression, personality, trait-state-occasion model
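The trait/occasion partition behind the TSO model can be illustrated with a small simulation. The parameters below are invented for illustration (they are not the paper's estimates, though the resulting "stable" share of about 61% happens to fall in the reported 57-61% range for depression): each observed score is a stable trait plus an AR(1) occasion component, so the covariance between waves decays toward the trait variance as the lag grows.

```python
import random

# Toy Trait-State-Occasion simulation with illustrative parameters.
rng = random.Random(0)
N_SUBJ, WAVES = 20000, 4
A = 0.5                  # autoregressive coefficient of the occasion factor
SD_T, SD_O = 1.0, 0.8    # trait sd and stationary occasion sd

data = []
for _ in range(N_SUBJ):
    trait = rng.gauss(0, SD_T)
    occ = rng.gauss(0, SD_O)          # start the occasion factor at stationarity
    row = []
    for _ in range(WAVES):
        row.append(trait + occ)
        # AR(1) step whose innovation keeps the stationary variance at SD_O**2
        occ = A * occ + rng.gauss(0, SD_O * (1 - A * A) ** 0.5)
    data.append(row)

def cov(i, j):
    # Empirical covariance between wave i and wave j across subjects
    mi = sum(r[i] for r in data) / N_SUBJ
    mj = sum(r[j] for r in data) / N_SUBJ
    return sum((r[i] - mi) * (r[j] - mj) for r in data) / N_SUBJ

# Theoretically Cov(y_t, y_{t+k}) = var_T + A**k * var_O, so distant waves
# mostly reflect the trait; the stable share of total variance is:
trait_share = SD_T**2 / (SD_T**2 + SD_O**2)
```

The simulated wave-1/wave-4 covariance sits close to var_T + A³·var_O, which is the structure a fitted TSO model exploits to separate the trait from the occasion variance.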
Procedia PDF Downloads 174
3259 VISMA: A Method for System Analysis in Early Lifecycle Phases
Authors: Walter Sebron, Hans Tschürtz, Peter Krebs
Abstract:
The choice of applicable analysis methods in safety or systems engineering depends on the depth of knowledge about a system and on the respective lifecycle phase. However, the analysis method chain still shows gaps, as it should support system analysis throughout the lifecycle of a system, from a rough concept in the pre-project phase until end-of-life. This paper's goal is to discuss an analysis method, the VISSE Shell Model Analysis (VISMA) method, which aims at closing the gap in the early system lifecycle phases, such as the conceptual or pre-project phase or the project start phase. It was originally developed to aid in defining the system boundary of electronic system parts, e.g. a control unit for a pump motor; it can also be applied to non-electronic system parts. The VISMA method is a graphical, sketch-like method that stratifies a system and its parts into inner and outer shells, like the layers of an onion. It analyses a system in a two-step approach, from the innermost to the outermost components and then in the reverse direction. To ensure a complete view of a system and its environment, the VISMA should be performed by (multifunctional) development teams. To introduce the method, a set of rules and guidelines has been defined to enable a proper shell build-up. In the first step, the innermost system, named the system under consideration (SUC), is selected; it is the focus of the subsequent analysis. Then, its directly adjacent components, responsible for providing input to and receiving output from the SUC, are identified. These components form the first shell around the SUC. Next, the input and output components of the components in the first shell are identified and form the second shell around the first one. Continuing this way, shell after shell is added, with its respective parts, until the border of the complete system (the external border) is reached.
Last, two external shells are added to complete the system view: the environment shell and the use-case shell. This system view is also stored for future use. In the second step, the shells are examined in the reverse direction (outside to inside) in order to remove superfluous components or subsystems. Input chains to the SUC, as well as output chains from the SUC, are described graphically via arrows to highlight functional chains through the system. As a result, this method offers a clear, graphical description and overview of a system, its main parts and its environment, while the focus remains on a specific SUC. It helps to identify the interfaces and interfacing components of the SUC, as well as important external interfaces of the overall system. It supports the identification of the first internal and external hazard causes and causal chains. Additionally, the method promotes a holistic picture and cross-functional understanding of a system, its contributing parts, internal relationships and possible dangers within a multidisciplinary development team.
Keywords: analysis methods, functional safety, hazard identification, system and safety engineering, system boundary definition, system safety
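The shell build-up described in the first step is, structurally, a breadth-first layering of a component connectivity graph around the SUC. The sketch below illustrates only that layering idea; VISMA itself is a graphical team method, and the component names here are invented for the pump-motor control unit example mentioned in the text.

```python
from collections import deque

# Hypothetical component connectivity for the pump-motor example; names are
# invented for illustration, with the control unit as the SUC.
links = {
    "control_unit": ["motor_driver", "sensor_if"],
    "motor_driver": ["control_unit", "pump_motor"],
    "sensor_if": ["control_unit", "pressure_sensor"],
    "pump_motor": ["motor_driver"],
    "pressure_sensor": ["sensor_if"],
}

def shells(suc, graph):
    # Shell 0 is the SUC itself; shell n holds the components whose shortest
    # input/output chain to the SUC has length n (breadth-first search).
    level = {suc: 0}
    queue = deque([suc])
    while queue:
        node = queue.popleft()
        for nb in graph.get(node, []):
            if nb not in level:
                level[nb] = level[node] + 1
                queue.append(nb)
    return level

layers = shells("control_unit", links)
```

The second VISMA step, pruning from the outside in, would then walk these layers in descending order and drop components that sit on no input or output chain to the SUC.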
Procedia PDF Downloads 223
3258 Dynamic Analysis and Design of Lower Extremity Power-Assisted Exoskeleton
Authors: Song Shengli, Tan Zhitao, Li Qing, Fang Husheng, Ye Qing, Zhang Xinglong
Abstract:
A lower extremity power-assisted exoskeleton (LEPEX) is a wearable, intelligent electromechanical system that walks in synchronization with the wearer and assists walking by means of drivers mounted on each joint of the exoskeleton. In this paper, dynamic analysis and design of the LEPEX are performed. First of all, the human walking process is divided into a single-leg support phase, a double-leg support phase, and a ground collision model. Dynamic models of all three are established using the Lagrange method. Then, joint torques and powers of the lower extremity for flat walking and stair climbing under a 75 kg load are derived from the flat-walking data measured by Stansfield and the stair-climbing data measured by R. Riener, respectively. On this basis, the joint drive scheme in the sagittal plane is determined, and the structure of the LEPEX is designed. Finally, the designed LEPEX is simulated in ADAMS using joint motion data acquired from a person during flat walking and stair climbing. The simulation results verify the correctness of the structure.
Keywords: kinematics, lower extremity exoskeleton, simulation, structure
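As a minimal illustration of the Lagrangian modeling step, the sketch below computes the joint torque of a single rigid link (a simplified leg segment) from a prescribed motion. The inertia and geometry values are hypothetical placeholders; the actual LEPEX models are multi-link and phase-dependent.

```python
import math

def single_link_torque(theta, theta_ddot, m, l_c, i_c, g=9.81):
    """Inverse dynamics of one rigid link pivoting about a joint.

    Lagrange's equation for L = T - V with
      T = 0.5 * (i_c + m*l_c**2) * theta_dot**2,  V = -m*g*l_c*cos(theta)
    gives  tau = (i_c + m*l_c**2) * theta_ddot + m*g*l_c*sin(theta),
    where theta is measured from the vertical and l_c locates the
    centre of mass along the link.
    """
    return (i_c + m * l_c**2) * theta_ddot + m * g * l_c * math.sin(theta)

# hypothetical shank-like segment: 3.5 kg, COM at 0.2 m, i_c = 0.06 kg*m^2,
# held statically (zero acceleration) at 30 degrees from the vertical
tau_static = single_link_torque(math.radians(30.0), 0.0, 3.5, 0.2, 0.06)
```

Given a joint trajectory theta(t), evaluating this torque times the joint velocity yields the joint power curves the abstract refers to.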
Procedia PDF Downloads 424
3257 Production of Hydroxy Marilone C as a Bioactive Compound from Streptomyces badius
Authors: Osama H. Elsayed, Mohsen M. S. Asker, Mahmoud A. Swelim, Ibrahim H. Abbas, Aziza I. Attwa, Mohamed E. El Awady
Abstract:
Hydroxy marilone C is a bioactive metabolite produced from the culture broth of Streptomyces badius isolated from Egyptian soil. Hydroxy marilone C was purified and fractionated on a silica gel column with a gradient dichloromethane (DCM):methanol mobile phase, followed by a Sephadex LH-20 column using methanol as the mobile phase. Its structure was elucidated using infrared (IR), nuclear magnetic resonance (NMR), mass spectrometry (MS), and UV spectroscopy. The compound was evaluated for antioxidant, cytotoxic (against the human alveolar basal epithelial cell line A-549 and the human breast adenocarcinoma cell line MCF-7), and antiviral activities. The maximum antioxidant activity was 78.8% at 3000 µg/ml after 90 min, and the IC50 value against the DPPH radical was about 1500 µg/ml after 60 min. Using the MTT assay, the half-maximal inhibitory concentrations of the pure compound against the proliferation of A-549 cells and MCF-7 cells were 443 µg/ml and 147.9 µg/ml, respectively. For the detection of antiviral activity using Madin-Darby canine kidney (MDCK) cells, the maximum cytotoxicity was 27.9% and the IC50 was 128.1 µg/ml. The maximum protection of virus-infected cells against H1N1 viral cytopathogenicity (EC50) was 33.25% at 80 µg/ml. These results indicate that hydroxy marilone C has potential antitumor and antiviral activities.
Keywords: hydroxy marilone C, production, bioactive compound, Streptomyces badius
Procedia PDF Downloads 252
3256 Critical Success Factors Influencing Construction Project Performance for Different Objectives: Procurement Phase
Authors: Samart Homthong, Wutthipong Moungnoi
Abstract:
Critical success factors (CSFs) and the criteria to measure project success have received much attention over the decades and are among the most widely researched topics in the context of project management. However, although there have been extensive studies on the subject by different researchers, to date there has been little agreement on the CSFs. The aim of this study is to identify the CSFs that influence the performance of construction projects and to determine their relative importance for different objectives across five stages in the project life cycle. An extensive literature review was conducted that resulted in the identification of 179 individual factors. These factors were then grouped into nine major categories. A questionnaire survey was used to collect data from three groups of respondents: client representatives, consultants, and contractors. Out of 164 questionnaires distributed, 93 were returned, yielding a response rate of 56.7%. Using the mean score, the relative importance index, and the weighted average method, the top 10 critical factors for each category were identified. The agreement of survey respondents on those categorised factors was analysed using Spearman’s rank correlation. A one-way analysis of variance was then performed to determine whether the differences in mean scores among the various groups of respondents were statistically significant. The findings indicate that the most critical factors in each category in the procurement phase are: proper procurement programming of materials (time), stability in the price of materials (cost), and determining quality in the construction (quality).
They are followed by safety equipment acquisition and maintenance (health and safety), budgeting allowed in a contractual arrangement for implementing environmental management activities (environment), completeness of drawing documents (productivity), accurate measurement and pricing of bills of quantities (risk management), adequate communication among the project team (human resource), and adequate cost control measures (client satisfaction). An understanding of CSFs would help all interested parties in the construction industry to improve project performance. Furthermore, the results of this study would help construction professionals and practitioners take proactive measures for effective project management.
Keywords: critical success factors, procurement phase, project life cycle, project performance
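For readers unfamiliar with the ranking statistics mentioned above, the sketch below computes a mean score and a relative importance index (RII) from Likert-style ratings, using the common definition RII = ΣW / (A·N), where W are the ratings, A the highest possible rating, and N the number of respondents. The ratings shown are invented for illustration, not survey data from the study.

```python
def mean_score(ratings):
    """Arithmetic mean of the respondents' ratings for one factor."""
    return sum(ratings) / len(ratings)

def relative_importance_index(ratings, max_rating=5):
    """RII = sum(W) / (A * N); ranges 0..1, higher = more important."""
    return sum(ratings) / (max_rating * len(ratings))

# hypothetical 5-point Likert ratings for one factor from six respondents
ratings = [5, 4, 4, 5, 3, 4]
ms = mean_score(ratings)                    # 25 / 6
rii = relative_importance_index(ratings)    # 25 / 30
```

Factors are then ranked by RII within each category; ties are commonly broken by the mean score.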
Procedia PDF Downloads 182
3255 Optoelectronic Hardware Architecture for Recurrent Learning Algorithm in Image Processing
Authors: Abdullah Bal, Sevdenur Bal
Abstract:
This paper proposes a new type of hardware for the training of cellular neural networks (CNN), using an optical joint transform correlation (JTC) architecture for image feature extraction. CNNs require much more computation during the training stage than during the testing stage. Since optoelectronic hardware offers parallel, high-speed processing of 2D data, the CNN training algorithm can be realized using Fourier optics techniques. The JTC employs lenses and CCD cameras with a laser beam to realize 2D matrix multiplication and summation at the speed of light. Therefore, in each training iteration, the JTC inherently carries most of the computational burden, and the remaining mathematical computation is realized digitally. Bipolar data are encoded by phase, and the summation of correlation operations is realized using multi-object joint input images. The overlapping properties of the JTC are then utilized to sum two cross-correlations, which reduces the computation required in the training stage. Phase-only JTC does not require data rearrangement, electronic pre-calculation, or strict system alignment. The proposed system can be incorporated simultaneously with various optical image processing or optical pattern recognition techniques in the same optical system.
Keywords: CNN training, image processing, joint transform correlation, optoelectronic hardware
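A digital analogue of the optical operation can clarify the idea: in a JTC, the reference and target images are placed side by side on the joint input plane, the joint power spectrum |F|² is recorded (optically, by the CCD), and a second Fourier transform yields the correlation plane. The NumPy sketch below mimics these steps; the canvas size and placement offsets are arbitrary choices for illustration, not parameters of the proposed hardware.

```python
import numpy as np

def jtc_output(ref, target, canvas=(128, 128), offset=32):
    """Digitally emulate a joint transform correlator.

    The reference and target are placed on either side of the centre of
    the joint input plane; the inverse FFT of the joint power spectrum
    gives the correlation plane: a zero-order (autocorrelation) term at
    the centre plus two cross-correlation terms displaced by the
    separation of the inputs.
    """
    h, w = canvas
    joint = np.zeros(canvas)
    rh, rw = ref.shape
    th, tw = target.shape
    # reference left of centre, target right of centre
    joint[h//2 - rh//2 : h//2 + (rh + 1)//2,
          w//2 - offset - rw : w//2 - offset] = ref
    joint[h//2 - th//2 : h//2 + (th + 1)//2,
          w//2 + offset : w//2 + offset + tw] = target
    jps = np.abs(np.fft.fft2(joint))**2        # what the CCD records
    corr = np.fft.fftshift(np.fft.ifft2(jps)).real
    return corr

rng = np.random.default_rng(0)
pattern = rng.random((16, 16))
corr = jtc_output(pattern, pattern)
```

The dominant centre peak is the zero-order term; in the optical system the useful cross-correlation peaks appear displaced from it, and the paper's multi-object joint inputs exploit the overlap of several such terms.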
Procedia PDF Downloads 506
3254 Determination of the Structural Parameters of Calcium Phosphate for Biomedical Use
Authors: María Magdalena Méndez-González, Miguel García Rocha, Carlos Manuel Yermo De la Cruz
Abstract:
Calcium phosphate (Ca₅(PO₄)₃X) is widely used in orthopedic applications, typically as powders and granules. However, in bone it is present as nanometric needles about 60 nm in length, in a non-stoichiometric apatite phase containing CO₃²⁻, Na⁺, OH⁻, F⁻, and other ions within a matrix of collagen fibers. Control of crystal size and morphology, and of the interaction with cells, is essential for the development of nanotechnology. The structural results for calcium phosphate synthesized by chemical precipitation, with a crystal size of 22.85 nm, are presented in this paper. The calcium phosphate powders were analyzed by X-ray diffraction, energy dispersive spectroscopy (EDS), Fourier-transform infrared (FT-IR) spectroscopy, and transmission electron microscopy. Lattice parameters, atomic positions, the indexing of the planes, and the FWHM (full width at half maximum) were obtained. The crystal size was also calculated using the Scherrer equation d(hkl) = cλ/(β cos θ), where c is a constant related to the shape of the crystal, λ is the wavelength of the radiation (1.54060 Å for a copper anode), θ is the Bragg diffraction angle, and β is the full width at half maximum of the most intense peak. A diffraction pattern corresponding to the hydroxyapatite phase of calcium phosphate, with a hexagonal crystal system, was obtained. It belongs to the space group P6₃/m with lattice parameters a = 9.4394 Å and c = 6.8861 Å. The most intense peak is at 2θ = 31.55° (FWHM = 0.4798°), with a preferred orientation along (121). The intensity difference between the experimental data and the calculated values is attributable to the temperature at which the sintering was performed. The calculated position of the highest-intensity peak is 2θ = 32.11°. The structure of the calcium phosphate obtained was a hexagonal configuration. The intensity changes in the peaks of the diffraction pattern and in the lattice parameters indicate the possible presence of a dopant.
Infrared spectra showed that each calcium atom is surrounded by a tetrahedron of oxygen and hydrogen. The unit cell corresponds to hydroxyapatite, and transmission electron microscopy revealed a crystal morphology corresponding to the hexagonal phase with preferential growth along the c-axis.
Keywords: structure, nanoparticles, calcium phosphate, metallurgical and materials engineering
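As a quick numerical check of the Scherrer relation quoted above, the sketch below evaluates d = cλ/(β cos θ) for the reported peak. The shape factor c (often written K) is taken as 0.9, a common but assumed value, and no instrumental-broadening correction is applied, so the result is only indicative.

```python
import math

def scherrer_size_nm(two_theta_deg, fwhm_deg, wavelength_nm=0.154060, c=0.9):
    """Crystallite size d(hkl) = c * lambda / (beta * cos(theta)).

    beta is the FWHM of the peak converted to radians; theta is half
    of the diffraction angle 2-theta.
    """
    theta = math.radians(two_theta_deg / 2.0)
    beta = math.radians(fwhm_deg)
    return c * wavelength_nm / (beta * math.cos(theta))

# values reported for the most intense hydroxyapatite peak
d = scherrer_size_nm(31.55, 0.4798)
```

With these inputs and c = 0.9 the formula gives roughly 17 nm; the 22.85 nm reported in the abstract presumably reflects a different choice of shape factor or a broadening correction.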
Procedia PDF Downloads 502
3253 Accounting for Downtime Effects in Resilience-Based Highway Network Restoration Scheduling
Authors: Zhenyu Zhang, Hsi-Hsien Wei
Abstract:
Highway networks play a vital role in post-disaster recovery for disaster-damaged areas. Damaged bridges in such networks can disrupt recovery activities by impeding the transportation of people, cargo, and reconstruction resources. Therefore, rapid restoration of damaged bridges is of paramount importance to long-term disaster recovery. In the post-disaster recovery phase, the key to restoration scheduling for a highway network is the prioritization of bridge-repair tasks. Resilience, the ability of a network to return to its pre-disaster level of functionality, is widely used as a measure of recovery. In practice, highways will be temporarily blocked during the downtime of bridge restoration, leading to a decrease in highway-network functionality. Failure to take downtime effects into account can lead to overestimation of network resilience. Additionally, post-disaster recovery of highway networks is generally divided into emergency bridge repair (EBR) in the response phase and long-term bridge repair (LBR) in the recovery phase, and EBR and LBR differ in restoration objectives, duration, budget, etc. Distinguishing these two phases is important for precisely quantifying highway network resilience and for generating suitable restoration schedules for highway networks in the recovery phase. To address the above issues, this study proposes a novel resilience quantification method for the optimization of long-term bridge repair schedules (LBRS), taking into account the impact of EBR activities and restoration downtime on a highway network’s functionality. A time-dependent integer program with recursive functions is formulated for optimally scheduling LBR activities. Moreover, since uncertainty always exists in the LBRS problem, this paper extends the optimization model from the deterministic case to the stochastic case.
A hybrid genetic algorithm that integrates a heuristic approach into a traditional genetic algorithm to accelerate the evolution process is developed. The proposed methods are tested using data from the 2008 Wenchuan earthquake, based on a regional highway network in Sichuan, China, consisting of 168 highway bridges on 36 highways connecting 25 cities/towns. The results show that, in this case, neglecting the bridge restoration downtime can lead to approximately 15% overestimation of highway network resilience. Moreover, accounting for the impact of EBR on network functionality can help to generate a more specific and reasonable LBRS. The theoretical and practical contributions are as follows. First, the proposed network recovery curve contributes to a comprehensive quantification of highway network resilience by accounting for the impact of both restoration downtime and EBR activities on the recovery curves. Moreover, this study can improve highway network resilience from the organizational dimension by providing bridge managers with optimal LBR strategies.
Keywords: disaster management, highway network, long-term bridge repair schedule, resilience, restoration downtime
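The downtime effect is easy to see in a toy model. Below, resilience is taken as the mean normalized network functionality over a planning horizon; each damaged bridge carries a reduced capacity until its repair finishes and, if downtime is modeled, is fully closed while under repair. All numbers (capacities, schedule, horizon) are invented for illustration and are far simpler than the paper's integer-programming formulation.

```python
def resilience(n_total, repairs, horizon, c_damaged=0.5, model_downtime=True):
    """Mean normalized functionality over the horizon.

    repairs: dict bridge -> (start, duration). A damaged bridge carries
    c_damaged capacity until restored at start+duration; while under
    repair it is fully closed if downtime is modeled.
    """
    area = 0.0
    for t in range(horizon):
        cap = n_total - len(repairs)           # undamaged bridges
        for s, d in repairs.values():
            if t >= s + d:
                cap += 1.0                     # fully restored
            elif model_downtime and s <= t < s + d:
                cap += 0.0                     # blocked during repair downtime
            else:
                cap += c_damaged               # damaged but still passable
        area += cap / n_total
    return area / horizon

# toy network: 4 bridges, 2 damaged, repaired back to back
repairs = {"b1": (0, 2), "b2": (2, 2)}
r_with = resilience(4, repairs, horizon=6, model_downtime=True)
r_no = resilience(4, repairs, horizon=6, model_downtime=False)
```

Even in this toy case, ignoring downtime (`r_no`) overstates resilience relative to the downtime-aware value (`r_with`), which is the overestimation effect the paper quantifies at about 15% for the Wenchuan network.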
Procedia PDF Downloads 150
3252 Solving Transient Conduction and Radiation using Finite Volume Method
Authors: Ashok K. Satapathy, Prerana Nashine
Abstract:
Radiative heat transfer in a participating medium was predicted using the finite volume method. The radiative transfer equations are formulated for an absorbing, emitting, and anisotropically scattering medium. The solution strategy is discussed, and the conditions for computational stability are presented. The equations have been solved for a transient radiative medium and for transient radiation coupled with transient conduction. Results have been obtained for the irradiation and the corresponding heat fluxes in both cases. The solutions can be used to determine the incident energy and the surface heat flux. Transient solutions were obtained for a heat-conducting slab heated by thermal radiation. The effect of heat conduction during the transient phase is to partially equalize the internal temperature distribution. The solution procedure provides accurate temperature distributions in these regions. A finite volume procedure with variable space and time increments is used to solve the transient energy equation. The medium in the enclosure absorbs, emits, and anisotropically scatters radiative energy. The incident radiation and the radiative heat fluxes are presented in graphical form. The phase function anisotropy plays a significant role in the radiation heat transfer when the boundary condition is non-symmetric.
Keywords: participating media, finite volume method, radiation coupled with conduction, heat transfer
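The conduction part of such a scheme can be sketched compactly. The code below advances 1D transient conduction with an explicit finite-volume update and fixed boundary temperatures; the grid, step sizes, and the simplified boundary treatment (wall temperature used directly as the outer neighbour value) are illustrative assumptions, and the radiative source terms of the paper are omitted.

```python
def fvm_step(T, alpha, dx, dt, T_left, T_right):
    """One explicit finite-volume step of dT/dt = alpha * d2T/dx2.

    Stable for the diffusion number r = alpha*dt/dx**2 <= 0.5.
    Boundary temperatures are imposed as the outer neighbour values
    (a simplification of the half-cell boundary treatment).
    """
    r = alpha * dt / dx**2
    new = T[:]
    for i in range(len(T)):
        left = T[i - 1] if i > 0 else T_left
        right = T[i + 1] if i < len(T) - 1 else T_right
        new[i] = T[i] + r * (left - 2.0 * T[i] + right)
    return new

# slab initially at 0 with a hot left wall: march to (near) steady state
T = [0.0] * 5
for _ in range(2000):
    T = fvm_step(T, alpha=1e-5, dx=0.01, dt=4.0, T_left=100.0, T_right=0.0)
```

At steady state the profile is linear between the wall temperatures, as expected for pure conduction; adding the radiative divergence as a source term per cell would couple in the transient radiation of the paper.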
Procedia PDF Downloads 379
3251 Children's Literature with Mathematical Dialogue for Teaching Mathematics at Elementary Level: An Exploratory First Phase about Students’ Difficulties and Teachers’ Needs in Third and Fourth Grade
Authors: Goulet Marie-Pier, Voyer Dominic, Simoneau Victoria
Abstract:
In a previous research project (2011-2019) funded by the Quebec Ministry of Education, an educational approach was developed based on teaching and learning place value through children's literature. Subsequently, the effect of this approach on the conceptual understanding of the concept among first graders (6-7 years old) was studied. The current project aims to create a series of children's books to help older elementary school students (8-10 years old) develop a conceptual understanding of complex mathematical concepts taught at their grade level, rather than a more typical procedural understanding. Since no educational materials or children's books exist to achieve our goals, four stories, accompanied by mathematical activities, will be created to support students, and their teachers, in the learning and teaching of mathematical concepts that can be challenging within their mathematics curriculum. The stories will also introduce mathematical dialogue into the characters' discourse, with the aim of addressing various mathematical foundations about which erroneous statements are common among students and occasionally among teachers. In other words, the stories aim to empower students seeking a real understanding of difficult mathematical concepts, as well as teachers seeking a way to teach these difficult concepts that goes beyond memorizing rules and procedures. In order to choose the concepts that will be part of the stories, it is essential to understand the current landscape regarding the main difficulties experienced by students in third and fourth grade (8-10 years old) and their teachers' needs. From this perspective, the preliminary phase of the study, as discussed in the presentation, will provide critical insight into the mathematical concepts with which the target grade levels struggle the most.
From this data, the research team will select the concepts and develop the stories in the second phase of the study. Two questions are preliminary to the implementation of our approach, namely (1) which mathematical concepts are considered the most “difficult to teach” by teachers in the third and fourth grades? and (2) according to teachers, what are the main difficulties encountered by their students in numeracy? Self-administered online questionnaires using the SimpleSondage software will be sent to all third- and fourth-grade teachers in nine school service centers in the Quebec region, representing approximately 300 schools. The data collected in the fall of 2022 will be used to compare the difficulties identified by the teachers with those prevalent in the scientific literature. Considering that this ensures consistency between the proposed approach and the true needs of the educational community, this preliminary phase is essential to the relevance of the rest of the project. It is also an essential first step in achieving the two ultimate goals of the research project: improving the learning of elementary school students in numeracy and contributing to the professional development of elementary school teachers.
Keywords: children’s literature, conceptual understanding, elementary school, learning and teaching, mathematics
Procedia PDF Downloads 88
3250 Inverse Saturable Absorption in Non-linear Amplifying Loop Mirror Mode-Locked Fiber Laser
Authors: Haobin Zheng, Xiang Zhang, Yong Shen, Hongxin Zou
Abstract:
The research focuses on mode-locked fiber lasers with a non-linear amplifying loop mirror (NALM). Although these lasers have shown potential, they are still limited by low repetition rates. The self-starting of mode-locking in the NALM is influenced by the cross-phase modulation (XPM) effect, which has not been thoroughly studied. The aim of this study is two-fold: first, to overcome the difficulties associated with increasing the repetition rate in mode-locked fiber lasers with NALM; second, to analyze the influence of XPM on the self-starting of mode-locking. The power distributions of the two counterpropagating beams in the NALM and the differential non-linear phase shift (NPS) accumulations are calculated. The analysis is conducted from the perspective of NPS accumulation. The differential NPSs for continuous-wave (CW) light and for pulses in the fiber loop are compared to understand the inverse saturable absorption (ISA) mechanism during pulse formation in the NALM. The study reveals a difference between the differential NPSs of CW light and of pulses in the fiber loop of the NALM. This difference leads to an ISA mechanism, which has not been extensively studied in artificial saturable absorbers. The ISA in the NALM provides an explanation for experimentally observed phenomena, such as mode-locking initiation through tapping the fiber or fine-tuning the light polarization. These findings have important implications for optimizing the design of the NALM and reducing the self-starting threshold of high-repetition-rate mode-locked fiber lasers. This study contributes to the theoretical understanding of NALM mode-locked fiber lasers by exploring the ISA mechanism and its impact on the self-starting of mode-locking. The research fills a gap in the existing knowledge regarding the XPM effect in the NALM and its role in pulse formation. This study provides insights into the ISA mechanism in NALM mode-locked fiber lasers and its role in the self-starting of mode-locking.
The findings contribute to the optimization of NALM design and the reduction of the self-starting threshold, which are essential for achieving high-repetition-rate operation in fiber lasers. Further research in this area can lead to advancements in the field of mode-locked fiber lasers with NALM.
Keywords: inverse saturable absorption, NALM, mode-locking, non-linear phase shift
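The differential nonlinear phase shift at the heart of the argument can be estimated with the standard Kerr relation φ ≈ γ·P·L applied piecewise along the loop. The sketch below compares the two counterpropagating beams of a NALM whose amplifier sits asymmetrically in the loop; all parameter values (γ, gain, lengths, powers) are illustrative assumptions, not values from the study.

```python
def loop_nps(gamma, power_in, gain, l_before_amp, l_after_amp):
    """NPS of one beam: phi = gamma * sum(P_segment * L_segment).

    The beam propagates l_before_amp at the input power, is amplified
    by `gain`, then propagates l_after_amp at the amplified power.
    """
    return gamma * (power_in * l_before_amp + gain * power_in * l_after_amp)

def differential_nps(gamma, power_in, gain, l1, l2):
    """Phase difference between the clockwise beam (amplifier after l1)
    and the counterclockwise beam (amplifier after l2) in the same loop."""
    cw = loop_nps(gamma, power_in, gain, l1, l2)
    ccw = loop_nps(gamma, power_in, gain, l2, l1)
    return cw - ccw

# illustrative values: gamma in 1/(W*m), lengths in m, powers in W
gamma, gain, l1, l2 = 3e-3, 10.0, 1.0, 9.0
dphi_cw_light = differential_nps(gamma, 0.001, gain, l1, l2)  # CW average power
dphi_pulse = differential_nps(gamma, 1.0, gain, l1, l2)       # pulse peak power
```

Since the differential NPS scales with instantaneous power, a pulse accumulates a far larger phase difference than CW light of the same average power, which is the power-dependent transmission asymmetry underlying the saturable-absorber (and, with the loop biasing reversed, ISA) behaviour discussed in the abstract.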
Procedia PDF Downloads 100
3249 Exploration into Bio Inspired Computing Based on Spintronic Energy Efficiency Principles and Neuromorphic Speed Pathways
Authors: Anirudh Lahiri
Abstract:
Neuromorphic computing, inspired by the intricate operations of biological neural networks, offers a revolutionary approach to overcoming the limitations of traditional computing architectures. This research proposes the integration of spintronics with neuromorphic systems, aiming to enhance computational performance, scalability, and energy efficiency. Traditional computing systems, based on the Von Neumann architecture, struggle with scalability and efficiency due to the segregation of memory and processing functions. In contrast, the human brain exemplifies high efficiency and adaptability, processing vast amounts of information with minimal energy consumption. This project explores the use of spintronics, which utilizes the electron's spin rather than its charge, to create more energy-efficient computing systems. Spintronic devices, such as magnetic tunnel junctions (MTJs) manipulated through spin-transfer torque (STT) and spin-orbit torque (SOT), offer a promising pathway to reducing power consumption and enhancing the speed of data processing. The integration of these devices within a neuromorphic framework aims to replicate the efficiency and adaptability of biological systems. The research is structured into three phases: an exhaustive literature review to build a theoretical foundation, laboratory experiments to test and optimize the theoretical models, and iterative refinements based on experimental results to finalize the system. The initial phase focuses on understanding the current state of neuromorphic and spintronic technologies. The second phase involves practical experimentation with spintronic devices and the development of neuromorphic systems that mimic synaptic plasticity and other biological processes. The final phase focuses on refining the systems based on feedback from the testing phase and preparing the findings for publication. The expected contributions of this research are twofold. 
Firstly, it aims to significantly reduce the energy consumption of computational systems while maintaining or increasing processing speed, addressing a critical need in the field of computing. Secondly, it seeks to enhance the learning capabilities of neuromorphic systems, allowing them to adapt more dynamically to changing environmental inputs, thus better mimicking the human brain's functionality. The integration of spintronics with neuromorphic computing could revolutionize how computational systems are designed, making them more efficient, faster, and more adaptable. This research aligns with the ongoing pursuit of energy-efficient and scalable computing solutions, marking a significant step forward in the field of computational technology.
Keywords: material science, biological engineering, mechanical engineering, neuromorphic computing, spintronics, energy efficiency, computational scalability, synaptic plasticity
Procedia PDF Downloads 41
3248 Use of Life Cycle Data for State-Oriented Maintenance
Authors: Maximilian Winkens, Matthias Goerke
Abstract:
State-oriented maintenance enables preventive intervention before the failure of a component and guarantees the avoidance of expensive breakdowns. Because the timing of the maintenance is defined by the component’s state, the remaining service life can be exhausted to the limit. The basic requirement for state-oriented maintenance is the ability to determine the component’s state. New potential for this is offered by gentelligent components, developed at the Collaborative Research Centre 653 of the German Research Foundation (DFG). Because of their sensory ability, they enable the registration of stresses during the component’s use. The data is gathered and evaluated. The methodology developed determines the current state of a gentelligent component based on the gathered data. This article presents this methodology as well as current research. The main focus of the current scientific work is to improve the quality of the state determination based on life-cycle data analysis. The methodology developed so far evaluates data from the usage phase and, based on it, predicts the timing of the gentelligent component’s failure. The real failure timing, though, deviates from the predicted one because effects from the production phase are not considered. The goal of the current research is to develop a methodology for state determination that considers both production and usage data.
Keywords: state-oriented maintenance, life-cycle data, gentelligent component, preventive intervention
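One common way to turn registered stress data into a component state, sketched below, is linear damage accumulation (Miner's rule): the usage data are binned into stress ranges, and the damage fractions n_i/N_i are summed, with failure expected near D = 1. This is offered only as an illustrative approach; the methodology of the Collaborative Research Centre may differ, and the cycle counts and S-N endurance values here are invented.

```python
def miner_damage(cycle_counts, cycles_to_failure):
    """Accumulated damage D = sum(n_i / N_i) over stress-range bins."""
    return sum(n / N for n, N in zip(cycle_counts, cycles_to_failure))

def remaining_life_fraction(damage):
    """Fraction of service life left under the linear damage hypothesis."""
    return max(0.0, 1.0 - damage)

# invented usage data: cycles registered per stress bin vs. S-N endurance
damage = miner_damage([1000, 500], [100000, 10000])
left = remaining_life_fraction(damage)
```

A state-oriented maintenance policy would then schedule intervention once `left` drops below a chosen threshold; incorporating production-phase effects, as the article proposes, would adjust the endurance values per individual component.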
Procedia PDF Downloads 494
3247 Examining the Design of a Scaled Audio Tactile Model for Enhancing Interpretation of Visually Impaired Visitors in Heritage Sites
Authors: A. Kavita Murugkar, B. Anurag Kashyap
Abstract:
With the Rights of Persons with Disabilities Act (RPWD Act) 2016, the Indian government has made it mandatory for all establishments, including heritage sites, to be accessible to people with disabilities. However, recent access audit surveys done under the Accessible India Campaign by the Ministry of Culture indicate that heritage sites provide very few accessibility measures for people with disabilities. Though there are some measures for the mobility impaired, the surveys revealed that there are almost no provisions for people with vision impairment (PwVI) in heritage sites, depriving them of the reasonable physical and intellectual access that facilitates an enjoyable experience and an enriching interpretation of the heritage site. There is a growing need to develop multisensory interpretative tools that can help PwVI perceive heritage sites in the absence of vision. The purpose of this research was to examine the usability of an audio-tactile model as a haptic and sound-based strategy for augmenting the perception and experience of PwVI at a heritage site. The first phase of the project was a multi-stage phenomenological experimental study with visually impaired users to investigate the design parameters for developing an audio-tactile model for PwVI. The findings from this phase included user preferences related to the physical design of the model, such as the size, scale, materials, and details, and the information that it will carry, such as braille, audio output, and tactile text. This was followed by the second phase, in which a working prototype of an audio-tactile model was designed and developed for a heritage site based on the findings from the first phase of the study. A nationally listed heritage site from the author’s city was selected for making the model. The model was lastly tested by visually impaired users for final refinements and validation.
The prototype developed empowers people with vision impairment to navigate heritage sites independently. Such a model, if installed in every heritage site, can serve as a technological guide for a person with vision impairment, providing information on the architecture, details, planning, and scale of the buildings, the entrances, and the location of important features, lifts, staircases, and available accessible facilities. The model was constructed using 3D modeling and digital printing technology. Though designed for the Indian context, this assistive technology for the blind can be explored for wider applications across the globe. Such an accessible solution can change the otherwise “incomplete” perception of the disabled visitor, in this case a visually impaired visitor, and augment the quality of their experience in heritage sites.
Keywords: accessibility, architectural perception, audio tactile model, inclusive heritage, multi-sensory perception, visual impairment, visitor experience
Procedia PDF Downloads 106
3246 Students’ Speech Anxiety in Blended Learning
Authors: Mary Jane B. Suarez
Abstract:
Public speaking anxiety (PSA), also known as speech anxiety, is persistently common in traditional communication classes, especially among students who learn English as a second language. This anxiety intensified when communication skills assessments moved to online or remote modes of learning because of the COVID-19 pandemic. Both teachers and students have faced considerable uncertainty about how to teach and learn speaking skills effectively amidst the pandemic. Communication skills assessments like public speaking, oral presentations, and student reporting have taken on new forms through Google Meet, Zoom, and other online platforms. Though such technologies have paved the way for more creative ways for students to acquire and develop communication skills, the effectiveness of such assessment tools remains in question. This mixed-methods study aimed to determine the factors that affected the public speaking skills of students in a communication class, to probe the gaps in assessing the speaking skills of students attending online classes vis-à-vis the implementation of remote and blended modalities of learning, and to recommend ways to address students’ public speaking anxiety when performing a speaking task online and to bridge the assessment gaps based on the outcome of the study, in order to achieve a smooth segue from online to on-ground instruction towards a better post-pandemic academic milieu. Using a convergent parallel design, both quantitative and qualitative data were reconciled by probing students’ public speaking anxiety and the potential assessment gaps encountered in an online English communication class under remote and blended learning. There were four phases in applying the convergent parallel design.
The first phase was data collection, where both quantitative and qualitative data were collected using document reviews and focus group discussions. The second phase was data analysis, where quantitative data were treated using statistical testing, particularly frequency, percentage, and mean, with the Microsoft Excel application and IBM Statistical Package for the Social Sciences (SPSS) version 19, and qualitative data were examined using thematic analysis. The third phase was the merging of the data analysis results to reconcile comparisons between the desired and the actual learning competencies of students. Finally, the fourth phase was the interpretation of the merged data, which led to the finding that a significantly high percentage of students experienced public speaking anxiety whenever they delivered speaking tasks online. Assessment gaps were also identified by comparing the desired learning competencies of the formative and alternative assessments implemented with the actual speaking performances of students, showing evidence that students’ public speaking anxiety was not properly identified and processed.
Keywords: blended learning, communication skills assessment, public speaking anxiety, speech anxiety
Procedia PDF Downloads 102
3245 A Low-Cost Long-Range 60 GHz Backhaul Wireless Communication System
Authors: Atabak Rashidian
Abstract:
In duplex backhaul wireless communication systems, two separate transmit and receive high-gain antennas are required if an antenna switch is not implemented. Although this avoids the switch loss, which is considerable (on the order of 1.5 dB at 60 GHz), two large separate antenna systems make the design bulky and not cost-effective. To avoid two large reflectors in such a system, transmit and receive antenna feeds with a common phase center are required. The phase center should coincide with the focal point of the reflector to maximize the efficiency and gain. In this work, we present an ultra-compact design in which stacked patch antennas are used as the feeds for a 12-inch reflector. The transmit antenna is a 1 × 2 array, and the receive antenna is a single element located in the middle of the transmit antenna elements. The antenna elements are designed as stacked patches to provide the required impedance bandwidth for four standard channels of WiGigTM applications. The design comprises three metallic layers and three dielectric layers, of which the top dielectric layer is a 100 µm-thick protective layer. The top two metallic layers are dedicated to the main and parasitic patches. The bottom metallic layer is essentially a ground plane with two circular openings (0.7 mm in diameter), each with a central through-via that connects the antennas to a single input/output SiGe BiCMOS transceiver chip. The reflection coefficient of the stacked patch antenna is fully investigated; the -10 dB impedance bandwidth is about 11%. Although the gap between the transmit and receive antennas is very small (g = 0.525 mm), the mutual coupling is less than -12 dB over the desired frequency band. The three-dimensional radiation patterns of the transmit and receive reflector antennas at 60 GHz are investigated over the impedance bandwidth. About 39 dBi of realized gain is achieved.
Considering over 15 dBm of output power from the silicon chip on the transmit side, the EIRP should be over 54 dBm, which is sufficient for multi-Gbps data communication over more than one kilometer. Across the bandwidth, the reflector antenna shows a peak gain of 39 dBi with the 2-element feed and 40 dBi with the single-element feed. This type of system design is cost-effective and efficient. Keywords: antenna, integrated circuit, millimeter-wave, phase center
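The 54 dBm figure follows directly from the link-budget identity EIRP (dBm) = Pout (dBm) + Gtx (dBi) with the quoted 15 dBm chip output and 39 dBi reflector gain. The sketch below also evaluates the standard free-space path loss formula at 60 GHz over 1 km for context; the helper names are ours, not from the paper.

```python
import math

def eirp_dbm(p_out_dbm: float, gain_dbi: float) -> float:
    # Effective isotropic radiated power: chip output plus antenna gain (in dB units)
    return p_out_dbm + gain_dbi

def fspl_db(distance_m: float, freq_hz: float) -> float:
    # Free-space path loss: FSPL = 20*log10(4*pi*d*f/c)
    c = 299_792_458.0
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

print(eirp_dbm(15.0, 39.0))               # 54.0 dBm, matching the abstract
print(round(fspl_db(1_000.0, 60e9), 1))   # ~128.0 dB free-space loss at 1 km
```

With roughly 128 dB of free-space loss and a 39 dBi receive reflector, a 54 dBm EIRP leaves a comfortable received-power margin for multi-Gbps modulation, consistent with the abstract's claim.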
Procedia PDF Downloads 120
3244 Acoustic Emission for Investigation of Processes Occurring at Hydrogenation of Metallic Titanium
Authors: Anatoly A. Kuznetsov, Pavel G. Berezhko, Sergey M. Kunavin, Eugeny V. Zhilkin, Maxim V. Tsarev, Vyacheslav V. Yaroshenko, Valery V. Mokrushin, Olga Y. Yunchina, Sergey A. Mityashin
Abstract:
Acoustic emission is the short-time propagation of elastic waves generated by a rapid release of energy from sources localized inside a material. In particular, the acoustic emission phenomenon lies in the generation of acoustic waves resulting from the rearrangement of the material's internal structure. This phenomenon is observed in various physicochemical transformations, in particular those accompanying the hydrogenation of metals or intermetallic compounds, which makes it possible to study the parameters of these transformations by recording and analyzing the acoustic signals. It is known that during the interaction of metals or intermetallic compounds with hydrogen, the most intense acoustic signals are generated by cracking or crumbling of an initially compact or powder sample as the material's crystal structure changes under hydrogenation. This work is dedicated to the study of the changes occurring in metallic titanium samples during their interaction with hydrogen, as followed by acoustic emission signals. The subjects of investigation were specimens of metallic titanium in two initial forms: titanium sponge and a fine titanium powder made from this sponge. The kinetics of the interaction of these materials with hydrogen, the acoustic emission signals accompanying the hydrogenation processes, and the structure of the materials before and after hydrogenation were investigated. It was determined that in both cases the interaction of metallic titanium with hydrogen is accompanied by acoustic emission signals of high amplitude, generated on reaching a certain value of the atomic ratio [H]/[Ti] in the solid phase because of metal cracking at the macrolevel. The typical sizes of the cracks are comparable with the particle sizes of the hydrogenated specimens.
The reason for cracking is internal stress induced in a sample by the increasing volume of the solid phase as the material's crystal lattice changes under hydrogenation. When titanium powder is used, the atomic ratio [H]/[Ti] in the solid phase corresponding to the maximum amplitude of the acoustic emission signal is, as a rule, higher than when titanium sponge is used. Keywords: acoustic emission signal, cracking, hydrogenation, titanium specimen
Procedia PDF Downloads 385
3243 Liquid-Liquid Extraction of Uranium (VI) from Aqueous Solution Using 1-Hydroxyalkylidene-1,1-Diphosphonic Acids
Authors: Mustapha Bouhoun Ali, Ahmed Yacine Badjah Hadj Ahmed, Mouloud Attou, Abdel Hamid Elias, Mohamed Amine Didi
Abstract:
The extraction of uranium(VI) from aqueous solutions was investigated using 1-hydroxyhexadecylidene-1,1-diphosphonic acid (HHDPA) and 1-hydroxydodecylidene-1,1-diphosphonic acid (HDDPA), which were synthesized and characterized by elemental analysis and by FT-IR, ¹H NMR, and ³¹P NMR spectroscopy. In this paper, we propose a tentative assignment of the shifts for these two ligands and their complexes with uranium(VI). The extraction of uranium(VI) by HHDPA and HDDPA was carried out from [carbon tetrachloride + 2-octanol (v/v: 90%/10%)] solutions. Various factors such as contact time, pH, organic/aqueous phase ratio, and extractant concentration were considered. The optimum conditions obtained were: contact time = 20 min, organic/aqueous phase ratio = 1, pH = 3.0, and extractant concentration = 0.3 M. The extraction yields are higher for HHDPA, which carries a longer hydrocarbon chain than HDDPA. Logarithmic plots of the uranium(VI) distribution ratio vs. pHeq and vs. the extractant concentration showed that the ratio of extractant to extracted uranium(VI) (ligand/metal) is 2:1. The formula of the complex of uranium(VI) with HHDPA and HDDPA is UO₂(H₃L)₂ (HHDPA and HDDPA are denoted as H₄L). Spectroscopic analysis showed that coordination of uranium(VI) takes place via oxygen atoms. Keywords: liquid-liquid extraction, uranium(VI), 1-hydroxyalkylidene-1,1-diphosphonic acids, HHDPA, HDDPA, aqueous solution
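The 2:1 ligand/metal ratio is inferred by slope analysis: for an extraction equilibrium of the form UO₂²⁺ + 2 H₄L ⇌ UO₂(H₃L)₂ + 2 H⁺, log D varies linearly with log[extractant] at fixed pH, with a slope equal to the number of extractant molecules per metal ion. The sketch below uses synthetic distribution ratios (not the paper's measurements) purely to illustrate how the slope recovers the stoichiometry.

```python
import math

# Illustrative extractant concentrations (mol/L) and distribution ratios;
# the D values are synthetic, constructed with a second-order dependence
# to mimic a 2:1 ligand/metal stoichiometry.
conc = [0.05, 0.1, 0.2, 0.3]
D = [c**2 * 40.0 for c in conc]

x = [math.log10(c) for c in conc]
y = [math.log10(d) for d in D]

# Least-squares slope of log D vs. log[extractant]
n = len(x)
mx, my = sum(x) / n, sum(y) / n
slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
        / sum((xi - mx) ** 2 for xi in x)
print(round(slope, 2))  # 2.0 -> two extractant molecules per uranium(VI)
```

In the paper, the same reasoning applied to the measured log D vs. pHeq and log D vs. extractant-concentration plots yields the UO₂(H₃L)₂ formulation.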
Procedia PDF Downloads 527