Search results for: phase space
3265 Approach on Conceptual Design and Dimensional Synthesis of the Linear Delta Robot for Additive Manufacturing
Authors: Efrain Rodriguez, Cristhian Riano, Alberto Alvares
Abstract:
In recent years, robot manipulators with parallel architectures have been used in additive manufacturing (3D printing) processes. These robots have advantages such as speed and low moving mass that make them well suited to improving the efficiency and productivity of these processes. Consequently, interest in the development of parallel robots for additive manufacturing applications has increased. This article deals with the conceptual design and dimensional synthesis of a linear delta robot for additive manufacturing. Firstly, a methodology based on structured product development processes through the phases of informational design, conceptual design and detailed design is adopted: a) In the informational design phase, the Mudge diagram and the QFD matrix are used to derive a set of technical requirements and to define the form, functions and features of the robot. b) In the conceptual design phase, the functional modeling of the system through an IDEF0 diagram is performed, and the solution principles for the requirements are formulated using a morphological matrix. This phase includes the description of the mechanical, electro-electronic and computational subsystems that constitute the general architecture of the robot. c) In the detailed design phase, a digital model of the robot is created in CAD software. A list of commercial and manufactured parts is detailed. Tolerances and adjustments are defined for some parts of the robot structure. The necessary manufacturing processes and tools are also listed, including milling, turning and 3D printing. Secondly, a dimensional synthesis method applied to the design of the linear delta robot is presented. One of the key factors in the design of a parallel robot is the useful workspace, which strongly depends on the joint space, the dimensions of the mechanism bodies and the possible interferences between these bodies. The objective function is based on the verification of the kinematic model over a prescribed cylindrical workspace, considering geometric constraints that could lead to singularities of the mechanism. The aim is to determine the minimum dimensional parameters of the mechanism bodies for the proposed workspace. A method based on genetic algorithms was used to solve this problem. The method uses a cloud of points with the cylindrical shape of the workspace and checks the kinematic model for each of the points within the cloud. The evolution of the population provides the optimal parameters for the design of the delta robot. The development process of the linear delta robot with optimal dimensions for additive manufacturing is presented. The dimensional synthesis made it possible to design the delta robot mechanism as a function of the prescribed workspace. Finally, the implementation of the developed robotic platform, based on the linear delta robot, in an additive manufacturing application using the Fused Deposition Modeling (FDM) technique is presented.
Keywords: additive manufacturing, delta parallel robot, dimensional synthesis, genetic algorithms
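As an illustration of the workspace-verification step that such a genetic algorithm relies on, the sketch below checks the inverse kinematics of a linear delta robot (vertical rails at 120°) over a cylindrical point cloud and reports the reachable fraction. The geometry values (R, r, L, rail travel) and the cloud density are assumptions for illustration, not the authors' optimized parameters.

```python
import numpy as np

def delta_ik_feasible(p, R=0.18, r=0.05, L=0.30, travel=0.40):
    """Check linear-delta inverse kinematics for effector position p = (x, y, z).
    R: rail circle radius, r: effector joint radius, L: arm length,
    travel: usable carriage stroke along the vertical rails (all in metres)."""
    for theta in (0.0, 2*np.pi/3, 4*np.pi/3):
        dx = p[0] - (R - r)*np.cos(theta)
        dy = p[1] - (R - r)*np.sin(theta)
        disc = L**2 - dx**2 - dy**2
        if disc < 0:                      # arm cannot reach: unreachable/singular point
            return False
        q = p[2] + np.sqrt(disc)          # carriage height on this rail
        if not (0.0 <= q <= travel):      # carriage would run off the rail
            return False
    return True

def workspace_coverage(radius=0.10, height=0.15, n=2000, **geom):
    """Fraction of a prescribed cylindrical workspace reachable by the mechanism.
    A genetic algorithm can use (1 - coverage) plus a size penalty as its cost."""
    rng = np.random.default_rng(0)
    rr = radius*np.sqrt(rng.random(n))    # uniform over the disc
    th = 2*np.pi*rng.random(n)
    zz = height*rng.random(n)
    pts = np.column_stack((rr*np.cos(th), rr*np.sin(th), zz))
    ok = sum(delta_ik_feasible(p, **geom) for p in pts)
    return ok / n

print(f"coverage = {workspace_coverage():.2%}")
```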
Procedia PDF Downloads 189
3264 Analysis of Interpolation Factor in Pulse Shaping Filter on MRC for CDMA 2000 Systems
Authors: Pankaj Verma, Gagandeep Singh Walia, Padma Devi, H. P. Singh
Abstract:
Code Division Multiple Access 2000 (CDMA2000) operates at chip rates of 1.2288 or 3.6864 Mcps. CDMA offers high bandwidth and wireless broadband services, but its efficiency is degraded by many interfering factors such as fading, interference, scattering, diffraction, refraction and reflection. Reducing the occupied spectral bandwidth is one of the major concerns in modern technology, and this is achieved with a pulse shaping filter. This paper investigates the effect of maximal ratio combining (MRC) diversity and of the interpolation factor of the Root Raised Cosine (RRC) filter for the QPSK and BPSK modulation schemes. With a proper pulse shaping technique, information can be sent with minimal inter-symbol interference within a limited bandwidth. Bit error rate (BER) performance is analyzed by applying the diversity technique and varying the interpolation factor for Binary Phase Shift Keying (BPSK) and Quadrature Phase Shift Keying (QPSK). The interpolation factor increases the original sampling rate of a sequence to a higher rate and reduces interference, while diversity reduces fading.
Keywords: CDMA2000, root raised cosine, roll off factor, ISI, diversity, interference, fading
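A compact sketch of the pulse-shaping step described here is shown below: RRC filter taps are generated for a unit symbol period, and a BPSK symbol stream is upsampled by the interpolation factor before filtering. The roll-off, span and interpolation factor are illustrative values, not the paper's settings.

```python
import numpy as np

def rrc_taps(beta, sps, span):
    """Root raised cosine impulse response (unit energy), `sps` samples per symbol,
    `span` symbols on each side of the peak; `beta` is the roll-off factor."""
    t = np.arange(-span*sps, span*sps + 1) / sps
    h = np.empty_like(t)
    for i, ti in enumerate(t):
        if np.isclose(ti, 0.0):
            h[i] = 1.0 + beta*(4/np.pi - 1.0)
        elif np.isclose(abs(ti), 1.0/(4*beta)):
            h[i] = (beta/np.sqrt(2))*((1 + 2/np.pi)*np.sin(np.pi/(4*beta))
                                      + (1 - 2/np.pi)*np.cos(np.pi/(4*beta)))
        else:
            num = np.sin(np.pi*ti*(1 - beta)) + 4*beta*ti*np.cos(np.pi*ti*(1 + beta))
            h[i] = num / (np.pi*ti*(1 - (4*beta*ti)**2))
    return h / np.sqrt(np.sum(h**2))

# BPSK symbols, upsampled by the interpolation factor, then RRC-shaped.
sps = 8                                    # interpolation factor (samples per symbol)
bits = np.random.randint(0, 2, 200)
symbols = 2*bits - 1
upsampled = np.zeros(len(symbols)*sps)
upsampled[::sps] = symbols                 # zero-stuffing raises the sampling rate
tx = np.convolve(upsampled, rrc_taps(beta=0.25, sps=sps, span=6))
print(tx.shape)
```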
Procedia PDF Downloads 473
3263 Channel Sounding and PAPR Reduction in OFDM for WiMAX Using Software Defined Radio
Authors: B. Siva Kumar Reddy, B. Lakshmi
Abstract:
WiMAX is a high-speed broadband wireless access technology that adopts OFDM/OFDMA techniques to supply high data rates with high spectral efficiency. However, OFDM suffers from a high Peak to Average Power Ratio (PAPR) and a high sensitivity to synchronization errors. In this paper, the high PAPR problem is solved by using phase modulation to obtain Constant Envelope Orthogonal Frequency Division Multiplexing (CE-OFDM). The synchronization failures are reduced by employing a frequency lock loop, a polyphase clock synchronizer, a Costas loop and blind equalizers such as the Constant Modulus Algorithm (CMA) equalizer and the Sign Kurtosis Maximization Adaptive Algorithm (SKMAA) equalizer. The WiMAX physical layer is implemented on a Software Defined Radio (SDR) prototype using a USRP N210 as the hardware and GNU Radio as the software platform. An SNR estimation is performed on the signal received through the USRP N210. To characterize wireless propagation in specific environments, a sliding correlator wireless channel sounding system is designed on the SDR testbed.
Keywords: BER, CMA equalizer, Kurtosis equalizer, GNU Radio, OFDM/OFDMA, USRP N210
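The PAPR benefit of the phase-modulation approach can be illustrated with a toy comparison: a conventional OFDM symbol is generated by an IFFT, and the same real-valued message then phase-modulates a constant-envelope carrier. The subcarrier count and modulation index below are assumptions for illustration, not the paper's system parameters.

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    p = np.abs(x)**2
    return 10*np.log10(p.max() / p.mean())

rng = np.random.default_rng(1)
N = 256                                            # subcarriers (illustrative)
qpsk = (rng.choice([-1, 1], N) + 1j*rng.choice([-1, 1], N)) / np.sqrt(2)
ofdm = np.fft.ifft(qpsk) * np.sqrt(N)              # conventional OFDM symbol

# CE-OFDM: the real-valued OFDM message phase-modulates the carrier,
# s = exp(j*2*pi*h*m); the envelope is constant by construction.
m = np.real(ofdm)
m = m / np.std(m)                                  # normalise the message power
h = 0.5                                            # modulation index (assumed)
ce_ofdm = np.exp(1j*2*np.pi*h*m)

print(f"OFDM PAPR    : {papr_db(ofdm):5.2f} dB")
print(f"CE-OFDM PAPR : {papr_db(ce_ofdm):5.2f} dB")  # ~0 dB by construction
```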
Procedia PDF Downloads 347
3262 Induction Heating and Electromagnetic Stirring of Bi-Phasic Metal/Glass Molten Bath for Mixed Nuclear Waste Treatment
Authors: P. Charvin, R. Bourrou, F. Lemont, C. Lafon, A. Russello
Abstract:
For nuclear waste treatment and confinement, a specific IN-CAN melting module based on low-frequency induction heating has been designed. A frequency of 50 Hz has been chosen to increase the penetration depth in the metal. In this design, the liquid metal, strongly stirred by electromagnetic effects, takes the shape of a dome caused by the strong Laplace forces developing in the bulk of the bath. Because of its lower density, the glass phase is located above the metal phase and is heated and stirred by the metal through the interface. Electrical parameters (current intensity, frequency) give valuable information about the metal load and composition (alloy resistivity) through the change in impedance. The power supply can then be adapted to the energy transfer efficiency for suitable process supervision. Modeling of this system allows prediction of the metal dome shape (in agreement with experimental measurements made with a dedicated device), the glass and metal velocities, and the heat and momentum transfer through the interface. MHD modeling is achieved with COMSOL and Fluent. First, a simplified model is used to obtain the shape of the metal dome; the shape is then fixed to calculate the fluid flow and the thermal field.
Keywords: electromagnetic stirring, induction heating, interface modeling, metal load
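The choice of a low frequency can be illustrated with the standard skin-depth formula δ = sqrt(2ρ/(ωμ)), which shows how the electromagnetic penetration depth grows as the frequency drops. The resistivity below is a generic order-of-magnitude value for a liquid metal and is an assumption, not the authors' melt data.

```python
import numpy as np

def skin_depth(rho, f, mu_r=1.0):
    """Electromagnetic penetration depth delta = sqrt(2*rho/(omega*mu)) in metres.
    rho: electrical resistivity (ohm*m), f: frequency (Hz), mu_r: relative permeability."""
    mu0 = 4e-7*np.pi
    omega = 2*np.pi*f
    return np.sqrt(2*rho / (omega*mu_r*mu0))

rho_melt = 1.4e-6   # ohm*m, assumed order of magnitude for a liquid metal
for f in (50, 1e3, 10e3):
    print(f"f = {f:7.0f} Hz  ->  delta = {skin_depth(rho_melt, f)*1000:6.1f} mm")
```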
Procedia PDF Downloads 264
3261 Effects of Cerium Oxide Nanoparticle Addition in Diesel and Diesel-Biodiesel Blends on the Performance Characteristics of a CI Engine
Authors: Abbas Ali Taghipoor Bafghi, Hosein Bakhoda, Fateme Khodaei Chegeni
Abstract:
An experimental investigation is carried out to establish the performance characteristics of a compression ignition engine using cerium oxide nanoparticles as an additive in neat diesel and diesel-biodiesel blends. In the first phase of the experiments, the stability of neat diesel and diesel-biodiesel fuel blends with the addition of cerium oxide nanoparticles is analyzed. After a series of experiments, it is found that subjecting the blends to high-speed blending followed by ultrasonic bath stabilization improves their stability. In the second phase, performance characteristics are studied using the stable fuel blends in a single-cylinder four-stroke engine coupled with an electrical dynamometer and a data acquisition system. The cerium oxide acts as an oxygen-donating catalyst and provides oxygen for combustion. The activation energy of cerium oxide helps burn off carbon deposits within the engine cylinder at the wall temperature and prevents the deposition of non-polar compounds on the cylinder wall, resulting in a reduction in HC emissions. The tests revealed that cerium oxide nanoparticles can be used as an additive in diesel and diesel-biodiesel blends to significantly improve complete combustion of the fuel.
Keywords: engine, cerium oxide, biodiesel, deposit
Procedia PDF Downloads 343
3260 Influence of Gravity on the Performance of Closed Loop Pulsating Heat Pipe
Authors: Vipul M. Patel, H. B. Mehta
Abstract:
The Closed Loop Pulsating Heat Pipe (CLPHP) is a passive two-phase heat transfer device with the potential to achieve higher heat transfer rates than conventional cooling techniques. It is used in electronics cooling due to its outstanding characteristics such as excellent heat transfer performance, simplicity, reliability, cost effectiveness, compact structure and no requirement for external mechanical power. A comprehensive understanding of the thermo-hydrodynamic mechanism of the CLPHP is still lacking due to the contradictory results available in the literature. The present paper discusses an experimental study on a 9-turn CLPHP. The inner and outer diameters of the copper tube are 2 mm and 4 mm respectively. The lengths of the evaporator, adiabatic and condenser sections are 40 mm, 100 mm and 50 mm respectively. Water is used as the working fluid. The Filling Ratio (FR) is kept at 50% throughout the investigations. The gravitational effect is studied by placing the evaporator heater at different orientations: horizontal (90 degrees), vertical top (180 degrees) and bottom (0 degrees), as well as inclined top (135 degrees) and bottom (45 degrees). Heat input is supplied in the range of 10-50 W. The heat transfer mechanism in the condenser section is natural convection. A vacuum pump is used to evacuate the system down to 10⁻⁵ bar. The results demonstrate the influence of input heat flux and gravity on the thermal performance of the CLPHP.
Keywords: CLPHP, gravity effect, start up, two-phase flow
Procedia PDF Downloads 261
3259 Microfluidized Fiber Based Oleogels for Encapsulation of Lycopene
Authors: Behic Mert
Abstract:
This study reports a facile approach to structuring soft solids from microfluidized lycopene-rich plant material and oil. First, the carotenoid-rich plant material (pumpkin was used in this study) was processed with a high-pressure microfluidizer to release lycopene molecules; an emulsion was then formed by mixing the processed plant material with oil. While in the emulsion state the lipid-soluble carotenoid molecules were allowed to dissolve in the oil phase, the fiber of the plant material provided the network required for emulsion stabilization. Additional hydrocolloids (gelatin, xanthan, and pectin) up to 0.5% were also used to reinforce the emulsion stability, and their impact on the final product properties was evaluated via rheological, textural and oxidation studies. Finally, water was removed from the emulsion phase by drying in a tray dryer at 40°C for 36 hours, and subsequent shearing resulted in soft solid (oleogel) structures. The microstructure of these systems was revealed by cryo-scanning electron microscopy. The effect of hydrocolloids on total lycopene and surface lycopene contents was also evaluated. The surface lycopene was lowest in gelatin-containing oleogels and highest in pectin-containing oleogels. This study outlines a novel emulsion-based structuring method that can be used to encapsulate lycopene without the need for a separate extraction step.
Keywords: lycopene, encapsulation, fiber, oleogel
Procedia PDF Downloads 265
3258 Air–Water Two-Phase Flow Patterns in PEMFC Microchannels
Authors: Ibrahim Rassoul, A. Serir, E-K. Si Ahmed, J. Legrand
Abstract:
The acronym PEM refers to Proton Exchange Membrane or, alternatively, Polymer Electrolyte Membrane. Due to their high efficiency, low operating temperature (30–80 °C), and rapid evolution over the past decade, PEMFCs are increasingly emerging as a viable alternative clean power source for automobile and stationary applications. Before PEMFCs can be employed to power automobiles and homes, several key technical challenges must be properly addressed. One technical challenge is elucidating the mechanisms underlying water transport in and removal from PEMFCs. On one hand, sufficient water is needed in the polymer electrolyte membrane (PEM) to maintain sufficiently high proton conductivity. On the other hand, too much liquid water present in the cathode can cause “flooding” (that is, pore space filled with excessive liquid water) and hinder the transport of the oxygen reactant from the gas flow channel (GFC) to the three-phase reaction sites. The experimental transparent fuel cell used in this work was designed to represent the actual full-scale fuel cell geometry. Depending on the operating conditions, a number of flow regimes may appear in the microchannel: droplet flow, blocking liquid water bridges/plugs (concave and convex forms), slug/plug flow and film flow. Some of these flow patterns are new, while others have already been observed in PEMFC microchannels. An algorithm in MATLAB was developed to automatically determine the flow structure (e.g. slug, droplet, plug, and film) of the detected liquid water in the test microchannels and to yield information pertaining to the distribution of water among the different flow structures. A video processing algorithm was developed to automatically detect dynamic and static liquid water present in the gas channels and generate relevant quantitative information. This software allows the user to obtain measurements from images of small objects in a more precise and systematic way. The void fractions are also determined based on image analysis. The aim of this work is to provide a comprehensive characterization of two-phase flow in an operating fuel cell, which can be used for the optimization of water management, informs design guidelines for gas delivery microchannels for fuel cells, and is essential in the design and control of diverse applications. The approach combines numerical modeling with experimental visualization and measurements.
Keywords: polymer electrolyte fuel cell, air-water two phase flow, gas diffusion layer, microchannels, advancing contact angle, receding contact angle, void fraction, surface tension, image processing
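The authors' image-processing algorithm was written in MATLAB; the sketch below reproduces the core idea in Python with scikit-image on a tiny synthetic frame: liquid regions are labelled, sorted into droplet / slug-plug / film classes from simple geometric descriptors, and the void fraction is taken as the gas area fraction. The classification thresholds are assumptions for illustration and would be calibrated against real channel images.

```python
import numpy as np
from skimage.measure import label, regionprops

def classify_liquid_structures(binary_mask, channel_width_px):
    """Label liquid-water regions in a binary channel image (rows = across the
    channel, columns = along the flow) and classify each region."""
    report = []
    for region in regionprops(label(binary_mask)):
        minr, minc, maxr, maxc = region.bbox
        height = maxr - minr            # extent across the channel
        length = maxc - minc            # extent along the flow direction
        if height >= 0.9*channel_width_px:
            kind = "slug/plug"          # liquid bridges the channel cross-section
        elif length > 3*height:
            kind = "film"               # long, thin layer along the wall
        else:
            kind = "droplet"
        report.append((kind, region.area))
    void_fraction = 1.0 - binary_mask.mean()   # gas (void) area fraction
    return report, void_fraction

# Tiny synthetic example: a 40-px-wide channel with one droplet and one slug.
img = np.zeros((40, 200), dtype=bool)
img[15:22, 30:38] = True        # droplet
img[0:40, 120:150] = True       # slug spanning the channel
structures, alpha = classify_liquid_structures(img, channel_width_px=40)
print(structures, f"void fraction = {alpha:.2f}")
```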
Procedia PDF Downloads 311
3257 Structural, Magnetic and Magnetocaloric Properties of Iron-Doped Nd₀.₆Sr₀.₄MnO₃ Perovskite
Authors: Ismail Al-Yahmadi, Abbasher Gismelseed, Fatma Al-Mammari, Ahmed Al-Rawas, Ali Yousif, Imaddin Al-Omari, Hisham Widatallah, Mohamed Elzain
Abstract:
The influence of Fe doping on the structural, magnetic and magnetocaloric properties of Nd₀.₆Sr₀.₄FeₓMn₁₋ₓO₃ (0 ≤ x ≤ 0.5) was investigated. The samples were synthesized by the auto-combustion sol-gel method. The phase purity, crystallinity, and structural properties of all prepared samples were examined by X-ray diffraction. XRD refinement indicates that the samples crystallize in a single orthorhombic phase with the Pnma space group. Temperature-dependent magnetization measurements under an applied magnetic field of 0.02 T reveal that the samples with x = 0.0, 0.1, 0.2 and 0.3 exhibit a paramagnetic (PM) to ferromagnetic (FM) transition with decreasing temperature. The Curie temperature decreased with increasing Fe content, from 256 K for x = 0.0 to 80 K for x = 0.3, due to the increasing antiferromagnetic superexchange (SE) coupling. Moreover, magnetization as a function of applied magnetic field (M-H curves) was measured at 2 K and 300 K; the results of these measurements confirm the temperature-dependent magnetization data. The magnetic entropy change |∆SM| was evaluated using Maxwell's relation. The maximum values of the magnetic entropy change for x = 0.0, 0.1, 0.2 and 0.3 are found to be 15.35, 5.13, 3.36 and 1.08 J/kg·K for an applied magnetic field of 9 T. Our results on the magnetocaloric properties suggest that the parent sample Nd₀.₆Sr₀.₄MnO₃ could be a good refrigerant for low-temperature magnetic refrigeration.
Keywords: manganite perovskite, magnetocaloric effect, X-ray diffraction, relative cooling power
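The entropy-change evaluation via Maxwell's relation, ΔS_M(T, H_max) = ∫₀^{H_max} (∂M/∂T)_H dH, can be computed numerically from a grid of magnetization isotherms as sketched below. The magnetization surface used here is a smooth synthetic stand-in (not the measured data), included only to show the numerical procedure.

```python
import numpy as np

def delta_S(T, H, M):
    """Magnetic entropy change from the Maxwell relation.
    T: (nT,) temperatures in K; H: (nH,) fields mu0*H in T;
    M: (nT, nH) magnetization in A*m^2/kg. Returns dS (nT,) in J/(kg*K)."""
    dMdT = np.gradient(M, T, axis=0)    # (dM/dT) at constant field
    return np.trapz(dMdT, H, axis=1)    # integrate over the field axis

# Synthetic magnetization surface around Tc = 256 K, for illustration only.
T = np.linspace(200, 320, 61)
H = np.linspace(0.0, 9.0, 46)
Tc, Ms = 256.0, 90.0
M = Ms*(0.5 - np.arctan((T[:, None] - Tc - 2.0*H[None, :]) / 8.0)/np.pi)

dS = delta_S(T, H, M)
print(f"|dS| peaks at T = {T[np.argmin(dS)]:.0f} K with {abs(dS.min()):.2f} J/(kg*K)")
```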
Procedia PDF Downloads 157
3256 Task Value and Research Culture of Southern Luzon State University
Authors: Antonio V. Romana, Rizaide A. Salayo, Maria Lavinia E. Fetalino
Abstract:
This study assessed the subjective task value and research culture of SLSU faculty. It used a sequential explanatory mixed-method research design. For the quantitative phase, a questionnaire on research culture and task value was used, while in the qualitative phase, the data were coded and thematized to interpret the focus group discussion outcomes. Results showed that among the dimensions of subjective task value, intrinsic value ranked highest while utility value ranked lowest. It is worth mentioning that all subjective task values were rated "Agreed." From the FGD, faculty members valued research and wanted to be involved in this undertaking. However, the limited number of faculty researchers, heavy teaching workloads, inadequate information on the research process, lack of self-confidence, and low incentives received from research hindered their writing and engagement with research. Thus, a policy brief was developed. It is recommended that the institution conduct a series of research seminar workshops for the faculty members, plan regular research idea exchange activities, and revisit the university's research thrust and agenda to align them with faculty members' specialization and expertise. In addition, the university may also lessen the workload and hire additional faculty members so that educators may focus on their research work. Finally, cash incentives may still be considered, knowing that the faculty members have varied experiences in doing research tasks.
Keywords: task value, interest value, attainment value, utility value, research culture
Procedia PDF Downloads 64
3255 The Effects of Electron Trapping by Electron-Acoustic Waves Excited with Electron Beam
Authors: Abid Ali Abid
Abstract:
One-dimensional (1-D) particle-in-cell (PIC) electrostatic simulations are carried out to investigate electrostatic waves whose constituents are hot, cold and beam electrons against a background of motionless positive ions. The electrostatic modes excited are electron acoustic waves, beam-driven waves and Langmuir waves. It is found that the relevant plasma parameters, for example the hot electron temperature, the beam electron drift speed, and the electron beam density, significantly modify the electrostatic wave profiles. In the nonlinear stage, the wave-particle interaction becomes more evident and the waves reach their saturation level. Consequently, electrons become trapped in the waves and trapping vortices are clearly formed. These trapping vortices and the mixing of electrons in phase space finally lead to electron thermalization. It is observed that for high beam-electron densities, solitary waves with a bipolar electric field form appear. These solitons are nonlinear Bernstein-Greene-Kruskal wave modes attributed to the trapping of electrons in the potential well of a phase-space hole. These examinations reveal that electrostatic waves are excited in the beam-plasma model over broad frequency ranges, which may explain the broadband electrostatic noise (BEN) spectrum observed in the auroral zone.
Keywords: electron acoustic waves, trapping of cold electron, Langmuir waves, particle-in-cell simulation
Procedia PDF Downloads 204
3254 3-Dimensional Contamination Conceptual Site Model: A Case Study Illustrating the Multiple Applications of Developing and Maintaining a 3D Contamination Model during an Active Remediation Project on a Former Urban Gasworks Site
Authors: Duncan Fraser
Abstract:
A 3-Dimensional (3D) conceptual site model was developed on the Leapfrog Works® platform utilising a comprehensive historical dataset for a large former gasworks site in Fitzroy, Melbourne. The gasworks had been constructed across two fractured geological units with varying hydraulic conductivities: a Newer Volcanics (basaltic) outcrop covering approximately half of the site overlay a fractured Melbourne Formation (siltstone) bedrock outcropping over the remaining portion. During the investigative phase of works, a dense non-aqueous phase liquid (DNAPL) plume (coal tar) was identified within both geological units in the subsurface, originating from multiple sources including gasholders, tar wells, condensers, and leaking pipework. The first stage of model development was undertaken to determine the horizontal and vertical extents of the coal tar in the subsurface and to assess the potential causal links between sources, plume location, and site geology. Concentrations of key contaminants of interest (COIs) were also interpolated within Leapfrog to refine the distribution of contaminated soils. The model was subsequently used to develop a robust soil remediation strategy and achieve endorsement from an Environmental Auditor. A change in project scope, following the removal and validation of the three former gasholders, necessitated the additional excavation of a significant volume of residual contaminated rock to allow for the future construction of two-storey underground basements. To assess the financial liabilities associated with the offsite disposal or thermal treatment of material, the 3D model was updated with three years of additional analytical data from the active remediation phase of works. Chemical concentrations and the residual tar plume within the rock fractures were modelled to pre-classify the in-situ material and enhance separation strategies, preventing the unnecessary treatment of material and reducing costs.
Keywords: 3D model, contaminated land, Leapfrog, remediation
Procedia PDF Downloads 130
3253 Inherent Difficulties in Countering Islamophobia
Authors: Imbesat Daudi
Abstract:
Islamophobia, which is a billion-dollar industry, is widespread, especially in the United States, Europe, India, Israel, and countries that have Muslim minorities at odds with their governmental policies. Hatred of Islam in the West did not evolve spontaneously; it was methodically created. Islamophobia's current format has been designed to spread on its own, find a space in the Western psyche, and resist its eradication. Hatred has been sustained by neoconservative ideologues and their allies, who are supported by the mainstream media. Social scientists have evaluated how ideas spread, why any idea can go viral, and where new ideas find space in our brains. This was possible because of the advances in the computational power of software and computers. The spreading of ideas, including Islamophobia, follows a sine curve; it has three phases: an initial exploratory phase with a long lag period, an explosive phase if ideas go viral, and a final phase when ideas find space in the human psyche. In the initial phase, ideas are quickly examined in a center in the prefrontal lobe. When an idea is deemed relevant, it is sent for evaluation to another center of the prefrontal lobe; there, it is critically examined. Once it takes a final shape, the idea is sent as a final product to a center in the occipital lobe. This center cannot critically evaluate ideas; it can only defend them from its critics. Counterarguments, no matter how scientific, are automatically rejected. Therefore, arguments that could be highly effective in the early phases are counterproductive once ideas are stored in the occipital lobe. Anti-Islamophobic intellectuals have done a very good job of countering Islamophobic arguments. However, they have not been as effective as the neoconservative ideologues who have promoted anti-Muslim rhetoric based on half-truths, misinformation, or outright lies. The failure is partly due to the support pro-war activists receive from the mainstream media, state institutions, mega-corporations engaged in violent conflicts, and think tanks that provide Islamophobic arguments. However, there are also scientific reasons why anti-Islamophobic thinkers have been less effective. The dynamics of spreading ideas are different once the ideas are stored in the occipital lobe. The human brain is incapable of further evaluation once it accepts ideas as its own; therefore, a different strategy is required to be effective. This paper examines 1) why anti-Islamophobic intellectuals have failed in changing the minds of non-Muslims and 2) the steps of countering hatred. Simply put, a new strategy is needed that can effectively counteract hatred of Islam and Muslims. Islamophobia is a disease that requires strong measures. Fighting hatred is always a challenge, but if we understand why Islamophobia is taking root in the twenty-first century, we can succeed in challenging Islamophobic arguments. That will need a coordinated effort of intellectuals, writers and the media.
Keywords: islamophobia, Islam and violence, anti-islamophobia, demonization of Islam
Procedia PDF Downloads 47
3252 Acceptance and Feasibility of Delivering an Evidence-based Digital Intervention for Palliative Care Education
Authors: Areej Alosimi, Heather Wharrad, Katharine Whittingham
Abstract:
Palliative care is a crucial element in nursing, especially with the steep increase in non-communicable diseases. Providing education in palliative care can help elevate the standards of care and address the growing need for it. However, palliative care has not been introduced into nursing curricula, specifically in Saudi Arabia, as evidenced by students' inadequate understanding of the subject. Digital learning has been identified as a persuasive and effective method to improve education. The study aims to assess the feasibility and acceptability of implementing digital learning in palliative care education in Saudi Arabia by investigating the potential of delivering palliative care nurse education via distance learning. The study will utilize a sequential exploratory mixed-method approach: phase one will entail identifying needs, phase two developing a web-based program, and phase three implementing the intervention with a pre-post-test. Semi-structured interviews will be conducted to explore participant perceptions and thoughts regarding the intervention. Data collection will incorporate questionnaires and interviews with nursing students. Data analysis will use SPSS for the quantitative measurements and NVivo for the qualitative aspects. The study aims to provide insights into the feasibility of implementing digital learning in palliative care education. The results will serve as a foundation for investigating the effectiveness of e-learning interventions in palliative care education among nursing students. This study addresses a crucial gap in palliative care education, especially in nursing curricula, and explores the potential of digital learning to improve education. The results have broad implications for nursing education and the growing need for palliative care globally. The study assesses the feasibility and acceptability of implementing digital learning in palliative care education in Saudi Arabia, investigating whether palliative care nurse education can be effectively delivered through distance learning to improve students' understanding of the subject. The study's findings will lay the groundwork for a larger investigation of the efficacy of e-learning interventions in improving palliative care education among nursing students and can potentially contribute to the overall advancement of nursing education and the growing need for palliative care.
Keywords: undergraduate nursing students, e-learning, palliative care education, knowledge
Procedia PDF Downloads 72
3251 Unbalanced Distribution Optimal Power Flow to Minimize Losses with Distributed Photovoltaic Plants
Authors: Malinwo Estone Ayikpa
Abstract:
Electric power systems are expected to operate with minimum losses and with voltages meeting international standards. This is generally made possible by control actions provided by automatic voltage regulators, capacitors and transformers with on-load tap changers (OLTC). With the development of photovoltaic (PV) system technology, their integration in distribution networks has increased over the last years to the extent of complementing or replacing the above-mentioned techniques. The conventional analysis and simulation tools used for electrical networks are no longer able to take into account the control actions necessary for studying the impact of distributed PV generation. This paper presents an unbalanced optimal power flow (OPF) model that minimizes losses by combining active power generation with reactive power control of single-phase and three-phase PV systems. Reactive power can be generated or absorbed using the available capacity and the adjustable power factor of the inverter. The unbalanced OPF is formulated with current balance equations and solved by the primal-dual interior point method. Several simulation cases have been carried out varying the size and location of the PV systems, and the results show a detailed view of the impact of distributed PV generation on distribution systems.
Keywords: distribution system, loss, photovoltaic generation, primal-dual interior point method
Procedia PDF Downloads 331
3250 Drug Delivery Nanoparticles of Amino Acid Based Biodegradable Polymers
Authors: Sophio Kobauri, Tengiz Kantaria, Temur Kantaria, David Tugushi, Nina Kulikova, Ramaz Katsarava
Abstract:
Nanosized environmentally responsive materials are of special interest for various applications, including targeted drug delivery, with considerable potential for the treatment of many human diseases. The important technological advantages of nanoparticles (NPs) used as drug carriers (nanocontainers) are their high stability, high carrier capacity, feasibility of encapsulating both hydrophilic and hydrophobic substances, and a high variety of possible administration routes, including oral application and inhalation. NPs can also be designed to allow controlled (sustained) drug release from the matrix. These properties of NPs enable improvement of drug bioavailability and might allow a decrease in drug dosage. The targeted and controlled administration of drugs using NPs might also help to overcome drug resistance, which is one of the major obstacles in the control of epidemics. Various degradable and non-degradable polymers of both natural and synthetic origin have been used for NP construction. One of the most promising classes for the design of NPs is amino acid-based biodegradable polymers (AABBPs), which can be cleared from the body after fulfilling their function. The AABBPs are composed of naturally occurring and non-toxic building blocks such as α-amino acids, fatty diols and dicarboxylic acids. The particles designed from these polymers are expected to have improved bioavailability along with high biocompatibility. The present work deals with a systematic study of the preparation of NPs from AABBPs by the cost-effective polymer deposition/solvent displacement method. The influence of the nature and concentration of surfactants, the concentration of the organic phase (polymer solution), the organic phase/water phase ratio, and some other factors on the size of the fabricated NPs has been studied. It was established that, depending on the conditions used, the NP size could be tuned within 40-330 nm. As the next step of this research, an evaluation of the biocompatibility and bioavailability of the synthesized NPs has been performed using two stable human cell culture lines, HeLa and A549. This part of the study is still in progress.
Keywords: amino acids, biodegradable polymers, nanoparticles (NPs), non-toxic building blocks
Procedia PDF Downloads 431
3249 Managing Information Technology: An Overview of Information Technology Governance
Authors: Mehdi Asgarkhani
Abstract:
Today, investment in Information Technology (IT) solutions is, in most organizations, the largest component of capital expenditure. As capital investment in IT continues to grow, IT managers and strategists are expected to develop and put into practice effective decision-making models (frameworks) that improve decision-making processes for the use of IT in organizations and optimize the investment in IT solutions. To be exact, there is an expectation that organizations not only maximize the benefits of adopting IT solutions but also avoid the many pitfalls that are associated with the rapid introduction of technological change. Different organizations, depending on their size, the complexity of the solutions required, and the processes used for financial management and budgeting, may use different techniques for managing strategic investment in IT solutions. Decision-making processes for the strategic use of IT within organizations are often referred to as IT Governance (or Corporate IT Governance). This paper examines IT governance as a tool for best practice in decision making about IT strategies. Discussions in this paper represent phase I of a project which was initiated to investigate trends in strategic decision making on IT strategies. Phase I is concerned mainly with a review of the literature and a number of case studies, establishing that the practice of IT governance, depending on the complexity of the IT solutions, the organization's size and the organization's stage of maturity, varies significantly, from informal approaches to sophisticated formal frameworks.
Keywords: IT governance, corporate governance, IT governance frameworks, IT governance components, aligning IT with business strategies
Procedia PDF Downloads 405
3248 Tunable Graphene Metasurface Modeling Using the Method of Moment Combined with Generalised Equivalent Circuit
Authors: Imen Soltani, Takoua Soltani, Taoufik Aguili
Abstract:
Metamaterials cross over classical physical boundaries and give rise to new phenomena and applications in the domain of beam steering and shaping, where electromagnetic near- and far-field manipulations are achieved in an accurate manner. In this sense, 3D imaging is one of the beneficiaries, and in particular Dennis Gabor's invention: holography. However, the major difficulty here is the lack of a suitable recording medium, so some enhancements were essential, and the 2D version of bulk metamaterials, the so-called metasurface, has been introduced. This new class of interfaces simplifies the recording-medium problem with the capability of tuning the phase, amplitude, and polarization at a given frequency. In order to achieve intelligible wavefront control, the electromagnetic properties of the metasurface should be optimized by solving Maxwell's equations. In this context, integral methods are emerging as an important approach to study electromagnetics from microwave to optical frequencies. The method of moments provides an accurate solution and reduces the dimensionality of the problem by writing its boundary conditions in the form of integral equations. However, solving this kind of equation tends to become more complicated and time-consuming as the structural complexity increases. Here, the equivalent circuit method offers a scalable way to develop an integral-method formulation. In fact, to ease the resolution of Maxwell's equations, the method of Generalised Equivalent Circuits was proposed to transfer the resolution from the domain of integral equations to the domain of equivalent circuits. This technique consists in creating an electrical image of the studied structure using the discontinuity plane paradigm, taking into account its environment. The electromagnetic state of the discontinuity plane is described by generalised test functions, which are modelled by virtual sources that do not store energy. The environmental effects are included by the use of an impedance or admittance operator. Here, we propose a tunable metasurface composed of graphene-based elements which combines the advantages of the reflectarray concept with graphene as a pillar constituent element at terahertz frequencies. The metasurface's building block consists of a thin gold film, a SiO₂ dielectric spacer and a graphene patch antenna. Our electromagnetic analysis is based on the method of moments combined with the generalised equivalent circuit (MoM-GEC). We begin by restricting our attention to the effects of varying graphene's chemical potential on the unit-cell input impedance. It was found that varying the complex conductivity of graphene allows the phase and amplitude of the reflection coefficient to be controlled at each element of the array. From the results obtained here, we were able to determine that phase modulation is realized by adjusting graphene's complex conductivity. This modulation is a viable solution compared to tuning the phase by varying the antenna length, because it offers full 2π reflection phase control.
Keywords: graphene, method of moment combined with generalised equivalent circuit, reconfigurable metasurface, reflectarray, terahertz domain
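The dependence of graphene's complex conductivity on the chemical potential, which underlies the tuning described above, can be sketched with the standard intraband (Drude-like) term of the Kubo formula, which dominates at terahertz frequencies. The relaxation time, temperature and operating frequency below are illustrative assumptions, not the values used in the paper.

```python
import numpy as np

e, hbar, kB = 1.602e-19, 1.055e-34, 1.381e-23

def sigma_intra(omega, mu_c, tau=1e-13, T=300.0):
    """Intraband surface conductivity of graphene (Kubo formula, intraband term):
    sigma = (2j e^2 kB T / (pi hbar^2 (omega + j/tau))) * ln[2 cosh(mu_c / 2 kB T)].
    omega: angular frequency (rad/s), mu_c: chemical potential (J). Returns siemens."""
    pref = 2j * e**2 * kB * T / (np.pi * hbar**2 * (omega + 1j/tau))
    return pref * np.log(2*np.cosh(mu_c / (2*kB*T)))

f = 1e12                                    # 1 THz operating point (assumed)
for mu_eV in (0.1, 0.3, 0.5, 0.7):
    s = sigma_intra(2*np.pi*f, mu_eV*e)
    print(f"mu_c = {mu_eV:.1f} eV -> sigma = {s.real*1e3:6.2f} + {s.imag*1e3:6.2f}j mS")
```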
Procedia PDF Downloads 175
3247 Reduced Switch Count Asymmetrical Multilevel Inverter Topology
Authors: Voodi Kalandhar, Veera Reddy, Yuva Tejasree
Abstract:
Researchers have become interested in multilevel inverters (MLI) because of their potential for medium- and high-power applications. MLIs are becoming more popular as a result of their ability to generate higher voltage levels, minimal power losses, small size, and low price. These inverters are used in high-voltage and high-power applications because the stress on each switch is low. Even though many traditional topologies exist, such as the cascaded H-bridge MLI, the flying capacitor MLI, and the diode clamped MLI, they all have some drawbacks. A complicated control system is needed for the flying capacitor MLI to balance the capacitor voltages, and the diode clamped MLI requires more diodes as the number of levels increases. Even though the cascaded H-bridge MLI is popular for its modularity and simple control, it requires a larger number of isolated DC sources. Therefore, a topology with fewer devices has always been necessary for greater efficiency and reliability. A new single-phase MLI topology has been introduced to minimize the required switch count and increase the number of output levels: it uses 3 DC voltage sources and 8 switches to produce 13 levels at the output. To demonstrate the proposed converter's superiority over the other MLI topologies currently in use, a thorough analysis of the proposed topology will be conducted.
Keywords: DC-AC converter, multi-level inverter (MLI), diodes, H-bridge inverter, switches
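The shape and harmonic content of a 13-level output can be illustrated with a generic nearest-level-control sketch: a sine reference is rounded to the closest of 13 steps. This only shows the staircase waveform and its THD; it is independent of, and makes no assumption about, the specific 8-switch arrangement or the asymmetrical source ratios of the proposed topology.

```python
import numpy as np

def nearest_level_waveform(steps, f=50.0, fs=10e3):
    """Staircase output of a (2*steps+1)-level inverter over one fundamental cycle,
    using nearest-level control: the sine reference is rounded to the closest step."""
    t = np.arange(0, 1.0/f, 1.0/fs)
    ref = steps*np.sin(2*np.pi*f*t)        # reference scaled to +/- `steps` levels
    return t, np.round(ref)                # each unit step corresponds to one Vdc

t, v = nearest_level_waveform(steps=6)     # 6 positive + 6 negative + zero = 13 levels
spectrum = 2*np.abs(np.fft.rfft(v))/len(v) # amplitude spectrum (cycle-aligned window)
fund = spectrum[1]
thd = np.sqrt(np.sum(spectrum[2:40]**2)) / fund
print(f"levels used: {sorted(set(v.astype(int)))}")
print(f"THD of the 13-level staircase ~ {100*thd:.1f} %")
```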
Procedia PDF Downloads 80
3246 VISMA: A Method for System Analysis in Early Lifecycle Phases
Authors: Walter Sebron, Hans Tschürtz, Peter Krebs
Abstract:
The choice of applicable analysis methods in safety or systems engineering depends on the depth of knowledge about a system and on the respective lifecycle phase. However, the analysis method chain still shows gaps, as it should support system analysis during the lifecycle of a system from a rough concept in the pre-project phase until end-of-life. This paper's goal is to discuss an analysis method, the VISSE Shell Model Analysis (VISMA) method, which aims at closing the gap in the early system lifecycle phases, such as the conceptual or pre-project phase, or the project start phase. It was originally developed to aid in the definition of the system boundary of electronic system parts, such as a control unit for a pump motor. Furthermore, it can also be applied to non-electronic system parts. The VISMA method is a graphical, sketch-like method that stratifies a system and its parts in inner and outer shells, like the layers of an onion. It analyses a system in a two-step approach, from the innermost to the outermost components, followed by the reverse direction. To ensure a complete view of a system and its environment, the VISMA should be performed by (multifunctional) development teams. To introduce the method, a set of rules and guidelines has been defined in order to enable a proper shell build-up. In the first step, the innermost system, named the system under consideration (SUC), is selected, which is the focus of the subsequent analysis. Then, its directly adjacent components, responsible for providing input to and receiving output from the SUC, are identified. These components are the content of the first shell around the SUC. Next, the input and output components to the components in the first shell are identified and form the second shell around the first one. Continuing in this way, shells with their respective parts are added one by one until the border of the complete system (external border) is reached. Last, two external shells are added to complete the system view: the environment shell and the use case shell. This system view is also stored for future use. In the second step, the shells are examined in the reverse direction (outside to inside) in order to remove superfluous components or subsystems. Input chains to the SUC, as well as output chains from the SUC, are described graphically via arrows to highlight functional chains through the system. As a result, this method offers a clear and graphical description and overview of a system, its main parts and environment; however, the focus still remains on a specific SUC. It helps to identify the interfaces and interfacing components of the SUC, as well as important external interfaces of the overall system. It supports the identification of the first internal and external hazard causes and causal chains. Additionally, the method promotes a holistic picture and cross-functional understanding of a system, its contributing parts, internal relationships and possible dangers within a multidisciplinary development team.
Keywords: analysis methods, functional safety, hazard identification, system and safety engineering, system boundary definition, system safety
Procedia PDF Downloads 223
3245 Dynamic Analysis and Design of Lower Extremity Power-Assisted Exoskeleton
Authors: Song Shengli, Tan Zhitao, Li Qing, Fang Husheng, Ye Qing, Zhang Xinglong
Abstract:
The lower extremity power-assisted exoskeleton (LEPEX) is a wearable, intelligent electromechanical system that walks in synchronization with the wearer and assists walking by means of drives mounted on each exoskeleton joint. In this paper, dynamic analysis and design of the LEPEX are performed. First, the human walking process is divided into a single-leg support phase, a double-leg support phase and a ground collision model. Dynamic models for the three cases are established using the Lagrange method. Then, dynamic information such as the torque and power of the lower extremity joints during flat walking and stair climbing is derived for a 75 kg load, based on the flat-walking data measured by Stansfield and the stair-climbing data measured by R. Riener, respectively. On this basis, the joint actuation scheme in the sagittal plane is determined, and the structure of the LEPEX is designed. Finally, the designed LEPEX is simulated in ADAMS using a person's joint motion data acquired during flat walking and stair climbing. The simulation results effectively verified the correctness of the structure.
Keywords: kinematics, lower extremity exoskeleton, simulation, structure
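The Lagrange-based dynamics step can be illustrated with a minimal planar two-link (thigh-shank) inverse-dynamics sketch: given joint angles, velocities and accelerations, the required hip and knee torques follow from tau = M(q)q̈ + C(q, q̇)q̇ + G(q). The segment masses, lengths and inertias below are assumed round numbers, not the paper's 75 kg load case or its full single/double-support and collision models.

```python
import numpy as np

def two_link_torques(q, dq, ddq, m=(7.0, 3.5), l=(0.45, 0.43),
                     lc=(0.20, 0.18), I=(0.15, 0.05), g=9.81):
    """Inverse dynamics of a planar two-link chain (e.g. thigh + shank).
    q[0] is the first link angle from the horizontal, q[1] the relative knee angle."""
    m1, m2 = m; l1, _ = l; lc1, lc2 = lc; I1, I2 = I
    c2, s2 = np.cos(q[1]), np.sin(q[1])
    # Inertia matrix
    M11 = m1*lc1**2 + I1 + m2*(l1**2 + lc2**2 + 2*l1*lc2*c2) + I2
    M12 = m2*(lc2**2 + l1*lc2*c2) + I2
    M22 = m2*lc2**2 + I2
    Mq = np.array([[M11, M12], [M12, M22]])
    # Coriolis/centrifugal matrix
    h = -m2*l1*lc2*s2
    Cq = np.array([[h*dq[1], h*(dq[0] + dq[1])],
                   [-h*dq[0], 0.0]])
    # Gravity vector
    G = np.array([(m1*lc1 + m2*l1)*g*np.cos(q[0]) + m2*lc2*g*np.cos(q[0] + q[1]),
                  m2*lc2*g*np.cos(q[0] + q[1])])
    return Mq @ ddq + Cq @ dq + G

tau = two_link_torques(q=np.array([1.3, -0.4]),
                       dq=np.array([1.0, -2.0]),
                       ddq=np.array([0.5, 3.0]))
print("hip / knee torque [N*m]:", np.round(tau, 2))
```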
Procedia PDF Downloads 424
3244 Production of Hydroxy Marilone C as a Bioactive Compound from Streptomyces badius
Authors: Osama H. Elsayed, Mohsen M. S. Asker, Mahmoud A. Swelim, Ibrahim H. Abbas, Aziza I. Attwa, Mohamed E. El Awady
Abstract:
Hydroxy marilone C is a bioactive metabolite produced from the culture broth of Streptomyces badius isolated from Egyptian soil. Hydroxy marilone C was purified and fractionated on a silica gel column with a dichloromethane (DCM):methanol gradient mobile phase, followed by a Sephadex LH-20 column using methanol as the mobile phase. Its structure was elucidated using infrared (IR) spectroscopy, nuclear magnetic resonance (NMR), mass spectrometry (MS) and UV spectroscopy. It was evaluated for antioxidant activity, cytotoxicity against the human alveolar basal epithelial cell line (A-549) and the human breast adenocarcinoma cell line (MCF-7), and antiviral activity. The maximum antioxidant activity was 78.8% at 3000 µg/ml after 90 min, and the IC50 value against the DPPH radical was found to be about 1500 µg/ml after 60 min. Using the MTT assay, the IC50 values of the pure compound against the proliferation of A-549 cells and MCF-7 cells were 443 µg/ml and 147.9 µg/ml, respectively. For the antiviral evaluation using Madin-Darby canine kidney (MDCK) cells, the maximum cytotoxicity was 27.9% and the IC50 was 128.1 µg/ml, while the maximum protection of virus-infected cells against H1N1 viral cytopathogenicity was 33.25% at 80 µg/ml. These results indicate that hydroxy marilone C has potential antitumor and antiviral activities.
Keywords: hydroxy marilone C, production, bioactive compound, Streptomyces badius
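IC50 values such as those quoted from the MTT assay are typically obtained by fitting a dose-response curve to the viability data; a four-parameter logistic fit is sketched below. The concentration-viability numbers are invented for illustration and are not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(c, bottom, top, ic50, hill):
    """Four-parameter logistic dose-response: viability (%) vs concentration."""
    return bottom + (top - bottom) / (1.0 + (c/ic50)**hill)

# Hypothetical viability data (%) vs concentration (ug/ml) -- illustrative only.
conc = np.array([10, 30, 60, 100, 150, 250, 400, 800], float)
viab = np.array([97, 92, 80, 63, 49, 33, 22, 12], float)

popt, _ = curve_fit(four_pl, conc, viab, p0=[0.0, 100.0, 150.0, 1.0], maxfev=10000)
print(f"fitted IC50 ~ {popt[2]:.1f} ug/ml, Hill slope ~ {popt[3]:.2f}")
```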
Procedia PDF Downloads 252
3243 Critical Success Factors Influencing Construction Project Performance for Different Objectives: Procurement Phase
Authors: Samart Homthong, Wutthipong Moungnoi
Abstract:
Critical success factors (CSFs) and the criteria used to measure project success have received much attention over the decades and are among the most widely researched topics in the context of project management. However, although there have been extensive studies on the subject by different researchers, to date there has been little agreement on the CSFs. The aim of this study is to identify the CSFs that influence the performance of construction projects and to determine their relative importance for different objectives across five stages of the project life cycle. An extensive literature review was conducted, resulting in the identification of 179 individual factors. These factors were then grouped into nine major categories. A questionnaire survey was used to collect data from three groups of respondents: client representatives, consultants, and contractors. Out of 164 questionnaires distributed, 93 were returned, yielding a response rate of 56.7%. Using the mean score, the relative importance index, and the weighted average method, the top 10 critical factors for each category were identified. The agreement of survey respondents on those categorised factors was analysed using Spearman's rank correlation. A one-way analysis of variance was then performed to determine whether the mean scores differed significantly among the various groups of respondents. The findings indicate that the most critical factors in each category in the procurement phase are: proper procurement programming of materials (time), stability in the price of materials (cost), and determining quality in the construction (quality). They are followed by safety equipment acquisition and maintenance (health and safety), budgeting allowed in a contractual arrangement for implementing environmental management activities (environment), completeness of drawing documents (productivity), accurate measurement and pricing of the bill of quantities (risk management), adequate communication among the project team (human resources), and adequate cost control measures (client satisfaction). An understanding of CSFs would help all interested parties in the construction industry to improve project performance. Furthermore, the results of this study would help construction professionals and practitioners take proactive measures for effective project management.
Keywords: critical success factors, procurement phase, project life cycle, project performance
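The relative importance index used for ranking factors is commonly computed as RII = ΣW / (A·N), where W are the Likert ratings, A the highest possible rating and N the number of respondents. The sketch below applies this formula to invented ratings for three of the factors named above; the numbers are illustrative only.

```python
import numpy as np

def relative_importance_index(ratings, highest=5):
    """RII = sum(W) / (A * N) for one factor, with W the Likert ratings (1..A)."""
    ratings = np.asarray(ratings)
    return ratings.sum() / (highest * ratings.size)

# Invented ratings from N = 10 respondents for three procurement-phase factors.
factors = {
    "proper procurement programming of materials": [5, 5, 4, 5, 4, 5, 5, 4, 5, 5],
    "stability in the price of materials":          [4, 5, 4, 4, 5, 4, 3, 5, 4, 4],
    "completeness of drawing documents":            [4, 4, 3, 4, 5, 3, 4, 4, 4, 3],
}
ranked = sorted(((relative_importance_index(w), f) for f, w in factors.items()),
                reverse=True)
for rii, name in ranked:
    print(f"RII = {rii:.3f}  {name}")
```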
Procedia PDF Downloads 182
3242 Optoelectronic Hardware Architecture for Recurrent Learning Algorithm in Image Processing
Authors: Abdullah Bal, Sevdenur Bal
Abstract:
This paper proposes a new type of hardware application for the training of cellular neural networks (CNN) using an optical joint transform correlation (JTC) architecture for image feature extraction. CNNs require much more computation during the training stage compared to the test process. Since optoelectronic hardware offers parallel high-speed processing capability for 2D data, the CNN training algorithm can be realized using Fourier optics techniques. JTC employs lenses and CCD cameras with a laser beam to realize 2D matrix multiplication and summation at the speed of light. Therefore, in each iteration of training, JTC inherently carries most of the computational burden, and the rest of the mathematical computation is realized digitally. The bipolar data are encoded by phase, and the summation of correlation operations is realized using multi-object input joint images. The overlapping properties of JTC are then utilized for the summation of two cross-correlations, which reduces the computation required in the training stage. Phase-only JTC does not require data rearrangement, electronic pre-calculation or strict system alignment. The proposed system can be combined with various optical image processing or optical pattern recognition techniques within the same optical system.
Keywords: CNN training, image processing, joint transform correlation, optoelectronic hardware
Procedia PDF Downloads 506
3241 Determination of the Structural Parameters of Calcium Phosphate for Biomedical Use
Authors: María Magdalena Méndez-González, Miguel García Rocha, Carlos Manuel Yermo De la Cruz
Abstract:
Calcium phosphate (Ca5(PO4)3X) is widely used in orthopedic applications, typically as powders and granules. However, in bone it is present as nanometric needles about 60 nm in length, a non-stoichiometric apatite phase containing CO3-2, Na+, OH-, F-, and other ions in a matrix of collagen fibers. Control of the crystal size and morphology and of the interaction with cells is essential for the development of nanotechnology. The structural results for calcium phosphate synthesized by chemical precipitation, with a crystal size of 22.85 nm, are presented in this paper. The calcium phosphate powders were analyzed by X-ray diffraction, energy dispersive spectroscopy (EDS), infrared (FT-IR) spectroscopy and transmission electron microscopy. Lattice parameters, atomic positions, the indexing of the planes and the FWHM (full width at half maximum) were obtained. The crystal size was also calculated using the Scherrer equation d(hkl) = cλ/(β cos θ), where c is a constant related to the shape of the crystal, λ is the wavelength of the radiation (1.54060 Å for a copper anode), θ is the Bragg diffraction angle, and β is the FWHM of the most intense peak. A diffraction pattern corresponding to the hydroxyapatite phase of calcium phosphate, with a hexagonal crystal system, was obtained. It belongs to the space group P63m with lattice parameters a = 9.4394 Å and c = 6.8861 Å. The most intense peak is obtained at 2θ = 31.55° (FWHM = 0.4798°), with a preferred orientation along (121). The intensity difference between the experimental data and the calculated values is attributable to the temperature at which the sintering was performed. The intensity of the highest peak is at an angle of 2θ = 32.11°. The structure of the calcium phosphate obtained was a hexagonal configuration. The changes in the peak intensities of the diffraction pattern and in the lattice parameters indicate the possible presence of a dopant. The infrared spectra show that each calcium atom is surrounded by a tetrahedron of oxygen and hydrogen. The unit cell corresponds to hydroxyapatite, and transmission electron microscopy revealed a crystal morphology corresponding to the hexagonal phase with preferential growth along the c-plane.
Keywords: structure, nanoparticles, calcium phosphate, metallurgical and materials engineering
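The crystallite-size step can be reproduced directly from the Scherrer equation quoted above, with the FWHM converted from degrees to radians. Note that the shape constant (0.9 assumed here) and any instrumental-broadening correction affect the result, so the number printed below is only indicative and is not expected to match the reported 22.85 nm exactly.

```python
import numpy as np

def scherrer_size(fwhm_deg, two_theta_deg, wavelength=1.54060, shape_c=0.9):
    """Crystallite size d = c*lambda / (beta*cos(theta)), in the units of `wavelength`
    (angstroms here). `fwhm_deg` is the peak FWHM in degrees of 2-theta."""
    beta = np.deg2rad(fwhm_deg)                  # FWHM in radians
    theta = np.deg2rad(two_theta_deg / 2.0)      # Bragg angle
    return shape_c * wavelength / (beta * np.cos(theta))

d_angstrom = scherrer_size(fwhm_deg=0.4798, two_theta_deg=31.55)
print(f"crystallite size ~ {d_angstrom/10:.1f} nm (no instrumental-broadening correction)")
```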
Procedia PDF Downloads 502
3240 Accounting for Downtime Effects in Resilience-Based Highway Network Restoration Scheduling
Authors: Zhenyu Zhang, Hsi-Hsien Wei
Abstract:
Highway networks play a vital role in post-disaster recovery for disaster-damaged areas. Damaged bridges in such networks can disrupt recovery activities by impeding the transportation of people, cargo, and reconstruction resources. Therefore, rapid restoration of damaged bridges is of paramount importance to long-term disaster recovery. In the post-disaster recovery phase, the key to restoration scheduling for a highway network is the prioritization of bridge-repair tasks. Resilience is widely used as a measure of the ability of a network to return to its pre-disaster level of functionality. In practice, highways are temporarily blocked during the downtime of bridge restoration, leading to a decrease in highway-network functionality. Failure to take downtime effects into account can lead to overestimation of network resilience. Additionally, post-disaster recovery of highway networks is generally divided into emergency bridge repair (EBR) in the response phase and long-term bridge repair (LBR) in the recovery phase, and EBR and LBR differ in terms of restoration objectives, restoration duration, budget, etc. Distinguishing these two phases is important to precisely quantify highway network resilience and to generate suitable restoration schedules for highway networks in the recovery phase. To address the above issues, this study proposes a novel resilience quantification method for the optimization of long-term bridge repair schedules (LBRS), taking into account the impact of EBR activities and restoration downtime on a highway network's functionality. A time-dependent integer program with recursive functions is formulated for optimally scheduling LBR activities. Moreover, since uncertainty always exists in the LBRS problem, this paper extends the optimization model from the deterministic case to the stochastic case. A hybrid genetic algorithm that integrates a heuristic approach into a traditional genetic algorithm to accelerate the evolution process is developed. The proposed methods are tested using data from the 2008 Wenchuan earthquake, based on a regional highway network in Sichuan, China, consisting of 168 highway bridges on 36 highways connecting 25 cities/towns. The results show that, in this case, neglecting the bridge restoration downtime can lead to approximately 15% overestimation of highway network resilience. Moreover, accounting for the impact of EBR on network functionality can help to generate a more specific and reasonable LBRS. The theoretical and practical contributions are as follows. First, the proposed network recovery curve contributes to a comprehensive quantification of highway network resilience by accounting for the impact of both restoration downtime and EBR activities on the recovery curves. Moreover, this study can improve highway network resilience from the organizational dimension by providing bridge managers with optimal LBR strategies.
Keywords: disaster management, highway network, long-term bridge repair schedule, resilience, restoration downtime
Procedia PDF Downloads 150
3239 Solving Transient Conduction and Radiation using Finite Volume Method
Authors: Ashok K. Satapathy, Prerana Nashine
Abstract:
Radiative heat transfer in a participating medium was predicted using the finite volume method. The radiative transfer equations are formulated for an absorbing, emitting, and anisotropically scattering medium. The solution strategy is discussed and the conditions for computational stability are presented. The equations have been solved for a transient radiative medium and for transient radiation coupled with transient conduction. Results have been obtained for the irradiation and the corresponding heat fluxes for both cases. The solutions can be used to determine the incident energy and the surface heat flux. Transient solutions were obtained for a conducting slab heated by thermal radiation. The effect of heat conduction during the transient phase is to partially equalize the internal temperature distribution. The solution procedure provides accurate temperature distributions in these regions. A finite volume procedure with variable space and time increments is used to solve the transient energy equation. The medium in the enclosure absorbs, emits, and anisotropically scatters radiative energy. The incident radiation and the radiative heat fluxes are presented in graphical form. The phase function anisotropy plays a significant role in the radiative heat transfer when the boundary condition is non-symmetric.
Keywords: participating media, finite volume method, radiation coupled with conduction, heat transfer
Procedia PDF Downloads 3793238 Children's Literature with Mathematical Dialogue for Teaching Mathematics at Elementary Level: An Exploratory First Phase about Students’ Difficulties and Teachers’ Needs in Third and Fourth Grade
Authors: Goulet Marie-Pier, Voyer Dominic, Simoneau Victoria
Abstract:
In a previous research project (2011-2019) funded by the Quebec Ministry of Education, an educational approach was developed based on the teaching and learning of place value through children's literature. Subsequently, the effect of this approach on the conceptual understanding of the concept among first graders (6-7 years old) was studied. The current project aims to create a series of children's books to help older elementary school students (8-10 years old) develop a conceptual understanding of complex mathematical concepts taught at their grade level, rather than the more typical procedural understanding. Since no educational materials or children's books currently exist to achieve these goals, four stories, accompanied by mathematical activities, will be created to support students, and their teachers, in the learning and teaching of mathematical concepts that can be challenging within their mathematics curriculum. The stories will also introduce mathematical dialogue into the characters' discourse, with the aim of addressing mathematical foundations about which erroneous statements are common among students and occasionally among teachers. In other words, the stories aim to empower students seeking a real understanding of difficult mathematical concepts, as well as teachers seeking a way to teach these concepts that goes beyond memorizing rules and procedures. In order to choose the concepts that will be part of the stories, it is essential to understand the current landscape regarding the main difficulties experienced by students in third and fourth grade (8-10 years old) and their teachers' needs. From this perspective, the preliminary phase of the study, as discussed in the presentation, will provide critical insight into the mathematical concepts with which the target grade levels struggle the most. From these data, the research team will select the concepts and develop the stories in the second phase of the study. Two questions are preliminary to the implementation of our approach, namely (1) which mathematical concepts are considered the most “difficult to teach” by teachers in the third and fourth grades? and (2) according to teachers, what are the main difficulties encountered by their students in numeracy? Self-administered online questionnaires created with the SimpleSondage software will be sent to all third- and fourth-grade teachers in nine school service centers in the Quebec region, representing approximately 300 schools. The data collected in the fall of 2022 will be used to compare the difficulties identified by the teachers with those prevalent in the scientific literature. Because this ensures consistency between the proposed approach and the true needs of the educational community, the preliminary phase is essential to the relevance of the rest of the project. It is also an essential first step toward the two ultimate goals of the research project: improving the learning of elementary school students in numeracy and contributing to the professional development of elementary school teachers.Keywords: children’s literature, conceptual understanding, elementary school, learning and teaching, mathematics
Procedia PDF Downloads 883237 Inverse Saturable Absorption in Non-linear Amplifying Loop Mirror Mode-Locked Fiber Laser
Authors: Haobin Zheng, Xiang Zhang, Yong Shen, Hongxin Zou
Abstract:
The research focuses on mode-locked fiber lasers with a non-linear amplifying loop mirror (NALM). Although these lasers have shown great potential, they are still limited by low repetition rates. The self-starting of mode-locking in NALM is influenced by the cross-phase modulation (XPM) effect, which has not been thoroughly studied. The aim of this study is two-fold: first, to overcome the difficulties associated with increasing the repetition rate in mode-locked fiber lasers with NALM; second, to analyze the influence of XPM on the self-starting of mode-locking. The power distributions of the two counter-propagating beams in the NALM and the resulting differential non-linear phase shift (NPS) accumulations are calculated, and the analysis is conducted from the perspective of NPS accumulation. The differential NPSs for continuous-wave (CW) light and for pulses in the fiber loop are compared to understand the inverse saturable absorption (ISA) mechanism during pulse formation in NALM. The study reveals a difference in differential NPS between CW light and pulses in the fiber loop, which gives rise to an ISA mechanism that has not been extensively studied in artificial saturable absorbers. The ISA in NALM explains experimentally observed phenomena such as the initiation of mode-locking by tapping the fiber or by fine-tuning the light polarization. These findings have important implications for optimizing the design of NALM and reducing the self-starting threshold of high-repetition-rate mode-locked fiber lasers. This study contributes to the theoretical understanding of NALM mode-locked fiber lasers by exploring the ISA mechanism and its impact on the self-starting of mode-locking, filling a gap in the existing knowledge regarding the XPM effect in NALM and its role in pulse formation. The findings support the optimization of NALM design and the reduction of the self-starting threshold, which are essential for achieving high-repetition-rate operation in fiber lasers. Further research in this area can lead to advancements in the field of mode-locked fiber lasers with NALM.Keywords: inverse saturable absorption, NALM, mode-locking, non-linear phase shift
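A back-of-the-envelope sketch of the mechanism (assumed fiber parameters, self-phase modulation only, neglecting the XPM subtleties analyzed in the paper): because the gain segment sits at one end of the loop, one beam traverses the passive fiber amplified and the other unamplified, so the differential NPS scales with the instantaneous power; a pulse, with its much higher peak power, therefore accumulates a far larger differential NPS than CW light of the same average power.

```python
# Simplified numerical sketch (assumed parameters, SPM only): differential
# non-linear phase shift (NPS) between the two counter-propagating beams in a
# NALM whose gain segment sits at one end of the loop. The pulse case uses peak
# power, so it accumulates a larger differential NPS than CW light at the same
# average power -- the artificial-saturable-absorber action discussed above.
import math

gamma = 3e-3   # fiber non-linear coefficient [1/(W*m)]
L_loop = 10.0  # passive fiber length between gain segment and coupler [m]
G = 20.0       # amplifier power gain

def differential_nps(power_in):
    """delta_phi = gamma * L * (G - 1) * P: one beam is amplified before the
    passive fiber, the other only after it."""
    return gamma * L_loop * (G - 1.0) * power_in

P_avg = 0.01                       # average input power [W]
rep_rate, tau = 100e6, 1e-12       # repetition rate [Hz], pulse duration [s]
P_peak = P_avg / (rep_rate * tau)  # rough peak power of the pulse train

print("CW differential NPS   :", round(differential_nps(P_avg), 4), "rad")
print("pulse differential NPS:", round(differential_nps(P_peak), 2), "rad")
```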
Procedia PDF Downloads 1003236 Exploration into Bio Inspired Computing Based on Spintronic Energy Efficiency Principles and Neuromorphic Speed Pathways
Authors: Anirudh Lahiri
Abstract:
Neuromorphic computing, inspired by the intricate operations of biological neural networks, offers a revolutionary approach to overcoming the limitations of traditional computing architectures. This research proposes the integration of spintronics with neuromorphic systems, aiming to enhance computational performance, scalability, and energy efficiency. Traditional computing systems, based on the Von Neumann architecture, struggle with scalability and efficiency due to the segregation of memory and processing functions. In contrast, the human brain exemplifies high efficiency and adaptability, processing vast amounts of information with minimal energy consumption. This project explores the use of spintronics, which utilizes the electron's spin rather than its charge, to create more energy-efficient computing systems. Spintronic devices, such as magnetic tunnel junctions (MTJs) manipulated through spin-transfer torque (STT) and spin-orbit torque (SOT), offer a promising pathway to reducing power consumption and enhancing the speed of data processing. The integration of these devices within a neuromorphic framework aims to replicate the efficiency and adaptability of biological systems. The research is structured into three phases: an exhaustive literature review to build a theoretical foundation, laboratory experiments to test and optimize the theoretical models, and iterative refinements based on experimental results to finalize the system. The initial phase focuses on understanding the current state of neuromorphic and spintronic technologies. The second phase involves practical experimentation with spintronic devices and the development of neuromorphic systems that mimic synaptic plasticity and other biological processes. The final phase focuses on refining the systems based on feedback from the testing phase and preparing the findings for publication. The expected contributions of this research are twofold. Firstly, it aims to significantly reduce the energy consumption of computational systems while maintaining or increasing processing speed, addressing a critical need in the field of computing. Secondly, it seeks to enhance the learning capabilities of neuromorphic systems, allowing them to adapt more dynamically to changing environmental inputs, thus better mimicking the human brain's functionality. The integration of spintronics with neuromorphic computing could revolutionize how computational systems are designed, making them more efficient, faster, and more adaptable. This research aligns with the ongoing pursuit of energy-efficient and scalable computing solutions, marking a significant step forward in the field of computational technology.Keywords: material science, biological engineering, mechanical engineering, neuromorphic computing, spintronics, energy efficiency, computational scalability, synaptic plasticity.
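As a purely conceptual illustration of the direction described above (hypothetical parameters, not a device-level model), the sketch below treats an MTJ-like element as a binary synapse whose state flips stochastically under spin-transfer-torque write pulses, with the flip probability shaped by a pair-based, STDP-style spike-timing rule.

```python
# Conceptual sketch (hypothetical parameters, not a device model): a binary
# MTJ-like synapse whose state flips stochastically under spin-transfer-torque
# "write" pulses, driven by a pair-based STDP-style timing rule. Illustrates how
# probabilistic switching could stand in for analogue synaptic plasticity.
import math
import random

class MTJSynapse:
    def __init__(self):
        self.state = 0  # 0 = high resistance (weak), 1 = low resistance (strong)

    def switching_probability(self, dt_spike, tau=20e-3, p_max=0.8):
        """Probability of an STT-induced flip; decays with the |pre-post| spike gap."""
        return p_max * math.exp(-abs(dt_spike) / tau)

    def apply_stdp(self, t_pre, t_post):
        dt = t_post - t_pre
        if random.random() < self.switching_probability(dt):
            # potentiate if the pre-synaptic spike precedes the post-synaptic
            # spike (causal pairing), depress otherwise
            self.state = 1 if dt > 0 else 0

# toy usage: repeated causal pairings should tend to potentiate the synapse
random.seed(0)
syn = MTJSynapse()
for trial in range(20):
    t_pre = trial * 0.1
    t_post = t_pre + 0.005  # post-synaptic spike 5 ms after the pre-synaptic one
    syn.apply_stdp(t_pre, t_post)
print("synapse state after causal pairings:", syn.state)
```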
Procedia PDF Downloads 41