Search results for: resolution down converter
545 Heterogeneous Photocatalytic Degradation of Ibuprofen in Ultrapure Water, Municipal and Pharmaceutical Industry Wastewaters Using a TiO2/UV-LED System
Authors: Nabil Jallouli, Luisa M. Pastrana-Martínez, Ana R. Ribeiro, Nuno F. F. Moreira, Joaquim L. Faria, Olfa Hentati, Adrián M. T. Silva, Mohamed Ksibi
Abstract:
Degradation and mineralization of ibuprofen (IBU) were investigated using ultraviolet (UV) light-emitting diodes (LEDs) in TiO2 photocatalysis. Samples of ultrapure water (UP) and a secondary treated effluent of a municipal wastewater treatment plant (WWTP), both spiked with IBU, as well as a highly concentrated IBU (230 mg/L) pharmaceutical industry wastewater (PIWW), were tested in the TiO2/UV-LED system. Three operating parameters, namely pH, catalyst load, and number of LEDs, were optimized. The process efficiency was evaluated in terms of IBU removal using high-performance liquid chromatography (HPLC) and ultra-high-performance liquid chromatography coupled to tandem mass spectrometry (UHPLC-MS/MS). Additionally, mineralization was investigated by determining the dissolved organic carbon (DOC) content. The chemical structures of transformation products were proposed based on data obtained using liquid chromatography with a high-resolution ion trap/time-of-flight mass spectrometer (LC-MS-IT-TOF). A possible pathway of IBU degradation was accordingly proposed. Bioassays were performed using the marine bacterium Vibrio fischeri to evaluate the potential acute toxicity of the original and treated wastewaters. TiO2 heterogeneous photocatalysis was efficient in removing IBU from UP and from PIWW, but less efficient in treating the wastewater from the municipal WWTP. The acute toxicity decreased by ca. 40% after treatment, regardless of the studied matrix.
Keywords: acute toxicity, Ibuprofen, UV-LEDs, wastewaters
Procedia PDF Downloads 253
544 Evaluating the Validity of CFD Model of Dispersion in a Complex Urban Geometry Using Two Sets of Experimental Measurements
Authors: Mohammad R. Kavian Nezhad, Carlos F. Lange, Brian A. Fleck
Abstract:
This research presents the validation study of a computational fluid dynamics (CFD) model developed to simulate the scalar dispersion emitted from rooftop sources around the buildings at the University of Alberta North Campus. The ANSYS CFX code was used to perform the numerical simulation of the wind regime and pollutant dispersion by solving the 3D steady Reynolds-averaged Navier-Stokes (RANS) equations on a building-scale high-resolution grid. The validation study was performed in two steps. First, the CFD model performance in 24 cases (eight wind directions and three wind speeds) was evaluated by comparing the predicted flow fields with the available data from the previous measurement campaign designed at the North Campus, using the standard deviation method (SDM). The numerical model showed maximum average percent errors of approximately 53% and 37% for wind incidents from the North and Northwest, respectively. Good agreement with the measurements was observed for the other six directions, with an average error of less than 30%. In the second step, the reliability of the implemented turbulence model, numerical algorithm, modeling techniques, and grid generation scheme was further evaluated using the Mock Urban Setting Test (MUST) dispersion dataset. Different statistical measures, including the fractional bias (FB), the geometric mean bias (MG), and the normalized mean square error (NMSE), were used to assess the accuracy of the predicted dispersion field. The CFD results are in very good agreement with the field measurements.
Keywords: CFD, plume dispersion, complex urban geometry, validation study, wind flow
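The three statistical measures named in this abstract (FB, MG, NMSE) have standard definitions in the dispersion-model evaluation literature; a minimal sketch of how they could be computed from paired observed and predicted concentrations (function and variable names are illustrative, not from the paper):

```python
import math

def validation_metrics(obs, pred):
    """Standard dispersion-model evaluation metrics for paired samples.

    FB   = 2 (Co_mean - Cp_mean) / (Co_mean + Cp_mean)   fractional bias
    MG   = exp(mean(ln Co) - mean(ln Cp))                geometric mean bias
    NMSE = mean((Co - Cp)^2) / (Co_mean * Cp_mean)       normalized MSE
    """
    n = len(obs)
    mo = sum(obs) / n                     # mean observed concentration
    mp = sum(pred) / n                    # mean predicted concentration
    fb = 2.0 * (mo - mp) / (mo + mp)
    mg = math.exp(sum(math.log(o) for o in obs) / n
                  - sum(math.log(p) for p in pred) / n)
    nmse = sum((o - p) ** 2 for o, p in zip(obs, pred)) / (n * mo * mp)
    return fb, mg, nmse
```

A perfect model gives FB = 0, MG = 1, and NMSE = 0; systematic over-prediction drives FB negative and MG below 1.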
Procedia PDF Downloads 133
543 Optimal Parameters of Two-Color Ionizing Laser Pulses for Terahertz Generation
Authors: I. D. Laryushin, V. A. Kostin, A. A. Silaev, N. V. Vvedenskii
Abstract:
Generation of broadband intense terahertz (THz) radiation attracts considerable interest due to various applications, such as THz time-domain spectroscopy, the probing and control of various ultrafast processes, THz imaging with subwavelength resolution, and many others. One of the most promising methods for generating powerful and broadband terahertz pulses is based on focusing two-color femtosecond ionizing laser pulses in gases, including ambient air. For this method, the amplitudes of the terahertz pulses are determined by the free-electron current density remaining in the formed plasma after the passage of the laser pulse. The excitation of this residual current density can be treated as multi-wave mixing: an effective generation of terahertz radiation is possible only when the frequency ratio of the one-color components in the two-color pulse is close to an irreducible rational fraction a/b with a small odd sum a + b. This work focuses on the optimal parameters (polarizations and intensities) of the laser components for the strongest THz generation. The optimal values of the parameters are found numerically and analytically with the use of a semiclassical approach for calculating the residual current density. For frequency ratios close to a/(a ± 1) with natural a, the strongest THz generation is shown to take place when both laser components have circular polarizations and equal intensities. For this optimal case, an analytical formula for the residual current density was derived. For frequency ratios such as 2/5, two-color ionizing pulses with circularly polarized components practically do not excite the residual current density. However, the optimal parameters generally correspond to specific elliptical (not linear) polarizations of the components and intensity ratios close to unity.
Keywords: broadband terahertz radiation, ionization, laser plasma, ultrashort two-color pulses
Procedia PDF Downloads 207
542 Next Generation UK Storm Surge Model for the Insurance Market: The London Case
Authors: Iacopo Carnacina, Mohammad Keshtpoor, Richard Yablonsky
Abstract:
Non-structural protection measures against flooding are becoming increasingly popular flood risk mitigation strategies. In particular, coastal flood insurance impacts not only private citizens but also insurance and reinsurance companies, who may require it to retain solvency and to better understand the risks they face from a catastrophic coastal flood event. In this context, a framework is presented here to assess the risk of coastal flooding across the UK. The area has a long history of catastrophic flood events, including the Great Flood of 1953 and the 2013 storm Cyclone Xaver, both of which led to significant loss of life and property. The current framework leverages a hydrodynamic model (Delft3D Flexible Mesh). This flexible mesh technology, coupled with a calibration technique, allows for better utilisation of computational resources, leading to higher resolution and more detailed results. The generation of a stochastic set of extratropical cyclone (ETC) events supports the evaluation of the financial losses for the whole area, also accounting for correlations between different locations in different scenarios. Finally, the solution shows a detailed analysis for the Thames River, leveraging the information available on flood barriers and levees. Two realistic disaster scenarios for the Greater London area are simulated: in the first scenario, the storm surge intensity is not high enough to fail London's flood defences, but in the second scenario, London's flood defences fail, highlighting the potential losses from a catastrophic coastal flood event.
Keywords: storm surge, stochastic model, levee failure, Thames River
Procedia PDF Downloads 230
541 Political Transition in Nepal: Challenges and Limitations to Post-Conflict Peace-Building
Authors: Sourina Bej
Abstract:
Since the process of decolonization in the 1940s, several countries in South Asia have witnessed intra-state conflicts owing to ineffective political governance. The conflicts have remained protracted as the countries have failed to make a holistic transition to a democratic state. Nepal is one such South Asian country, facing a tumultuous journey from monarchy to republicanism. The paper aims to focus on the democratic transition in the context of Nepal's political, legal and economic institutions. The presence of an autocratic, feudalistic and centralised state structure with entrenched socio-economic inequalities has resulted in mass uprisings, only for the country to slip back into the old order. Even a violent civil war led by the Maoists could not overhaul political relations or stabilize the democratic space. The paper aims to analyse the multiple political, institutional and operational challenges in the implementation of the peace agreement with the Maoists. Looking at the historical background, the paper will examine the problematic nation-building that lies at the heart of the fragile peace process in Nepal. Regional dynamics have played a big role in convoluting the peace-building. The new constitution, aimed at conflict resolution, brought into the open deep-seated hatred among different ethnic groups in Nepal. Apart from studying the challenges to the peace process and the role of external players like India and China in the political reconstruction, the paper will debate a viable federal solution to the ethnic conflict in Nepal. If the current government fails to pass a constitution accepted by most ethnic groups, Nepal will remain on the brink of new conflict outbreaks.
Keywords: democratisation, ethnic conflict, Nepal, peace process
Procedia PDF Downloads 276
540 Fuel Cells Not Only for Cars: Technological Development in Railways
Authors: Marita Pigłowska, Beata Kurc, Paweł Daszkiewicz
Abstract:
Railway vehicles are divided into two groups: traction (powered) vehicles and wagons. The traction vehicles include locomotives (line and shunting), railcars (sometimes referred to as railbuses), and multiple units (electric and diesel), consisting of several or a dozen carriages. In vehicles with diesel traction, fuel energy (petrol, diesel, or compressed gas) is converted into mechanical energy directly in the internal combustion engine or via electricity. In the latter case, the combustion engine generator produces electricity that is then used to drive the vehicle (diesel-electric drive or electric transmission). In Poland, such a solution dominates both in heavy line and shunting locomotives. The classic diesel drive is available for the lightest shunting locomotives, railcars, and passenger diesel multiple units. Vehicles with electric traction do not have their own source of energy: they use pantographs to obtain electricity from the traction network. To determine the competitiveness of the hydrogen propulsion system, it is essential to understand how it works. The basic elements of the drive system of a railway vehicle that uses hydrogen as a source of traction force are fuel cells, batteries, fuel tanks, traction motors, and main and auxiliary converters. The compressed hydrogen is stored in tanks usually located on the roof of the vehicle. This resource is replenished using specialized infrastructure while the vehicle is stationary. Hydrogen is supplied to the fuel cell, where it oxidizes. The products of this chemical reaction are electricity and water (in two forms: liquid and water vapor). Electricity is stored in batteries (so far, lithium-ion batteries are used). Electricity stored in this way is used to drive the traction motors and supply onboard equipment.
The current generated by the fuel cell passes through the main converter, whose task is to adjust it to the values required by the consumers, i.e., the batteries and the traction motor. The work will attempt to construct a fuel cell with unique electrodes. This research is a trend that connects industry with science. The first goal will be to obtain hydrogen on a large scale in tube furnaces, to thoroughly analyze the obtained structures (IR), and to apply the method in fuel cells. The second goal is to create a low-energy storage and distribution station for hydrogen and electric vehicles. The scope of the research includes obtaining a carbon variety and obtaining oxide systems on a large scale using a tubular furnace and then supplying vehicles. Acknowledgments: This work is supported by the Polish Ministry of Science and Education, project "The best of the best! 4.0", number 0911/MNSW/4968 (M.P.) and grant 0911/SBAD/2102 (B.K.).
Keywords: railway, hydrogen, fuel cells, hybrid vehicles
Procedia PDF Downloads 187
539 Review of Downscaling Methods in Climate Change and Their Role in Hydrological Studies
Authors: Nishi Bhuvandas, P. V. Timbadiya, P. L. Patel, P. D. Porey
Abstract:
Recent perceived climate variability raises concerns about unprecedented hydrological phenomena and extremes. The distribution and circulation of the waters of the Earth become increasingly difficult to determine because of additional uncertainty related to anthropogenic emissions. According to the sixth Intergovernmental Panel on Climate Change (IPCC) Technical Paper, on Climate Change and Water, changes in the large-scale hydrological cycle have been related to an increase in the observed temperature over several decades. Although much previous research on the effect of climate change on hydrology provides a general picture of possible hydrological global change, new tools and frameworks for modelling hydrological series with nonstationary characteristics at finer scales are required for assessing climate change impacts. Of the downscaling techniques, dynamic downscaling is usually based on the use of Regional Climate Models (RCMs), which generate finer-resolution output based on atmospheric physics over a region using General Circulation Model (GCM) fields as boundary conditions. However, RCMs are not expected to capture the observed spatial precipitation extremes at a fine cell scale or at a basin scale. Statistical downscaling derives a statistical or empirical relationship between the variables simulated by the GCMs, called predictors, and station-scale hydrologic variables, called predictands. The main focus of the paper is on the need for using statistical downscaling techniques for the projection of local hydrometeorological variables under climate change scenarios. The projections can then serve as an input source to various hydrologic models to obtain streamflow, evapotranspiration, soil moisture and other hydrological variables of interest.
Keywords: climate change, downscaling, GCM, RCM
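As an illustration of the predictor–predictand idea described above, a toy statistical downscaling could fit an empirical linear relationship over a calibration period and then apply it to GCM scenario output (all names and numbers below are hypothetical, not from the paper):

```python
# Toy statistical downscaling: fit an empirical linear relationship between a
# GCM-scale predictor (e.g., a pressure anomaly) and a station-scale
# predictand (e.g., precipitation) on a calibration period, then apply it to
# scenario-period predictor values. Real studies use many predictors and more
# sophisticated regression or weather-typing schemes.

def fit_linear(x, y):
    """Ordinary least-squares fit y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    return a, b

# calibration period: paired predictor/predictand values (synthetic numbers)
predictor = [0.1, 0.4, 0.5, 0.9, 1.2]
predictand = [2.0, 3.1, 3.4, 5.0, 6.1]
a, b = fit_linear(predictor, predictand)

# scenario period: predictor values simulated by the GCM
scenario = [0.7, 1.5]
downscaled = [a + b * x for x in scenario]   # projected station-scale values
```

The downscaled series would then feed a hydrologic model as described in the abstract.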
Procedia PDF Downloads 404
538 Accessibility Assessment of School Facilities Using Geospatial Technologies: A Case Study of District Sheikhupura
Authors: Hira Jabbar
Abstract:
Education is vital for the inclusive growth of an economy and a critical contributor to investment in human capital. Like other developing countries, Pakistan is facing enormous challenges regarding the provision of public facilities, improper infrastructure planning, an accelerating population growth rate and poor accessibility. Rapid advancements and innovations in GIS and RS techniques have proved to be useful tools for better planning and decision making to encounter these challenges. Therefore, the present study incorporates GIS and RS techniques to investigate the spatial distribution of school facilities, identifies settlements with served and unserved populations, finds potential areas for new schools based on population and develops an accessibility index to evaluate the higher accessibility for schools. For this purpose, high-resolution WorldView imagery was used to develop the road network, settlements and school facilities and to generate school accessibility for each level. Landsat 8 imagery was utilized to extract the built-up area by applying pre- and post-processing models, and LandScan 2015 was used to analyze population statistics. Service area analysis was performed using the Network Analyst extension in ArcGIS 10.3, and the results were evaluated for served and underserved areas and populations. An accessibility tool was used to evaluate a set of potential destinations to determine which is the most accessible with the given population distribution. The findings of the study may contribute to facilitating town planners and education authorities in understanding the existing patterns of school facilities. It is concluded that GIS and remote sensing can be effectively used in urban transport and facility planning.
Keywords: accessibility, geographic information system, LandScan, WorldView
Procedia PDF Downloads 324
537 Rare-Earth Ions Doped Lithium Niobate Crystals: Luminescence and Raman Spectroscopy
Authors: Ninel Kokanyan, Edvard Kokanyan, Anush Movsesyan, Marc D. Fontana
Abstract:
Lithium niobate (LN) is one of the widely used ferroelectrics, with a wide range of applications such as phase conjugation, holographic storage, frequency doubling, and SAW sensors. Furthermore, the possibility of doping with rare-earth ions leads to new laser applications. Ho and Tm dopants seem interesting due to the laser emission obtained at around 2 µm. Raman spectroscopy is a powerful spectroscopic technique providing a wealth of information about the physicochemical and also optical properties of a given material. Polarized Raman measurements were carried out on Ho- and Tm-doped LN crystals with excitation wavelengths of 532 nm and 785 nm. In the obtained anti-Stokes Raman spectra, we detect the expected modes according to the Raman selection rules. In contrast, the Stokes Raman spectra differ significantly from what is expected by the selection rules: additional forbidden lines are detected. These lines have quite high intensity and are well defined. Moreover, the intensity of the mentioned additional lines increases with an increase of the Ho or Tm concentration in the crystal. These additional lines are attributed to emission lines reflecting the photoluminescence spectra of these crystals. It means that in our case we were able to detect, with very good resolution, in the same Stokes spectrum, the transitions between the electronic states as well as the vibrational states. The analysis of these data is reported as a function of Ho and Tm content, for different polarizations and wavelengths of the incident laser beam. The results also highlight additional information about the π and σ polarizations of the crystals under study.
Keywords: lithium niobate, Raman spectroscopy, luminescence, rare-earth ions doped lithium niobate
Procedia PDF Downloads 219
536 Designing Agricultural Irrigation Systems Using Drone Technology and Geospatial Analysis
Authors: Yongqin Zhang, John Lett
Abstract:
Geospatial technologies have been increasingly used in agriculture for various applications and purposes in recent years. Unmanned aerial vehicles (drones) fit the needs of farmers in farming operations, from field spraying to monitoring growth cycles and crop health. In this research, we conducted a practical research project that used drone technology to design and map optimal locations and layouts of irrigation systems for agriculture farms. We flew a DJI Mavic 2 Pro drone to acquire aerial remote sensing images over two agriculture fields in Forest, Mississippi, in 2022. Flight plans were first designed to capture multiple high-resolution images via a 20-megapixel RGB camera mounted on the drone over the agriculture fields. The DroneDeploy web application was then utilized to develop flight plans and to perform subsequent image processing and measurements. The images were orthorectified and processed to estimate the area of the fields and measure the locations of the water line and sprinkler heads. Field measurements were conducted to measure the ground targets and validate the aerial measurements. Geospatial analysis and photogrammetric measurements were performed for the study area to determine the optimal layout and quantitative estimates for irrigation systems. We created maps and tabular estimates to demonstrate the locations, spacing, amount, and layout of sprinkler heads and water lines to cover the agricultural fields. This research project provides scientific guidance to Mississippi farmers for a precision agricultural irrigation practice.
Keywords: drone images, agriculture, irrigation, geospatial analysis, photogrammetric measurements
Procedia PDF Downloads 73
535 Development of Alpha Spectroscopy Method with Solid State Nuclear Track Detector Using Aluminium Thin Films
Authors: Nidal Dwaikat
Abstract:
This work presents the development of an alpha spectroscopy method with solid-state nuclear track detectors using aluminum thin films. The resolution of this method is high, and it is able to discriminate between alpha particles at different incident energies. It can measure the exact number of alpha particles at a specific energy without needing a calibration of alpha track diameter versus alpha energy. This method was tested using a Cf-252 standard alpha source at energies of 5.11 MeV, 3.86 MeV and 2.7 MeV, which were produced by varying the distance between the detector and the standard source. On the front side, two detectors were covered with two aluminum thin films, and the third detector was kept uncovered. The thickness of the aluminum thin films was selected carefully (using SRIM 2013) such that one of the films blocks the two lower-energy alpha particles (3.86 MeV and 2.7 MeV) while the alpha particles at higher energy (5.11 MeV) can penetrate the film and reach the detector's surface. The second thin film blocks alpha particles at the lower energy of 2.7 MeV and allows alpha particles at the two higher energies (5.11 MeV and 3.86 MeV) to penetrate and produce tracks. On the uncovered detector, alpha particles at all three energies can produce tracks. For quality assurance and accuracy, the detectors were mounted on sufficiently thick copper substrates to block exposure from the backside. The tracks on the first detector are due to alpha particles at an energy of 5.11 MeV. The difference between the number of tracks on the first detector and the number of tracks on the second detector is due to alpha particles at an energy of 3.86 MeV. Finally, by subtracting the number of tracks on the second detector from the number of tracks on the third (uncovered) detector, we can find the number of tracks due to alpha particles at an energy of 2.7 MeV. Knowing the efficiency calibration factor, we can then calculate the exact activity of the standard source.
Keywords: aluminium thin film, alpha particles, copper substrate, CR-39 detector
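The subtraction logic described in this abstract can be sketched directly; the track counts below are illustrative numbers, not measured values:

```python
# Detector 1 (thicker Al film):  records only 5.11 MeV tracks.
# Detector 2 (thinner Al film):  records 5.11 MeV and 3.86 MeV tracks.
# Detector 3 (uncovered):        records tracks at all three energies.

def tracks_per_energy(n_det1, n_det2, n_det3):
    """Recover the number of tracks at each alpha energy by subtraction."""
    n_511 = n_det1            # 5.11 MeV alphas reach every detector
    n_386 = n_det2 - n_det1   # 3.86 MeV alphas stopped by the thicker film
    n_270 = n_det3 - n_det2   # 2.7 MeV alphas stopped by both films
    return n_511, n_386, n_270

# illustrative counts only
print(tracks_per_energy(120, 280, 400))
```

With the efficiency calibration factor mentioned at the end of the abstract, each count would then convert to an activity.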
Procedia PDF Downloads 364
534 A Comparative Analysis of the Good Faith Principle in Construction Contracts
Authors: Nadine Rashed, A. Samer Ezeldin, Engy Serag
Abstract:
The principle of good faith plays a critical role in shaping contractual relationships, yet its application varies significantly across different types of construction contracts and legal systems. This paper presents a comparative analysis of how various construction contracts perceive the principle of good faith, a fundamental aspect that influences contractual relationships and project outcomes. The primary objective of this analysis is to examine the differences in the application and interpretation of good faith across key construction contracts, including JCT (Joint Contracts Tribunal), FIDIC (Fédération Internationale des Ingénieurs-Conseils), NEC (New Engineering Contract), and ICE (Institution of Civil Engineers) Contracts. To accomplish this, a mixed-methods approach will be employed, integrating a thorough literature review of current legal frameworks and academic publications with primary data gathered from a structured questionnaire aimed at industry professionals such as contract managers, legal advisors, and project stakeholders. This combined strategy will enable a holistic understanding of the theoretical foundations of good faith in construction contracts and its practical effects in real-world contexts. The findings of this analysis are expected to yield valuable insights into how varying interpretations of good faith can impact project performance, dispute resolution, and collaborative practices within the construction industry. This paper contributes to a deeper understanding of how the principle of good faith is evolving in the construction industry, providing insights for contract drafters, legal practitioners, and project managers seeking to navigate the complexities of contractual obligations across different legal systems.
Keywords: construction contracts, contractual obligations, ethical practices, good faith
Procedia PDF Downloads 19
533 Enhancing Embedded System Efficiency with Digital Signal Processing Cores
Authors: Anil H. Dhanawade, Akshay S., Harshal M. Lakesar
Abstract:
This paper presents a comprehensive analysis of the performance advantages offered by DSP (Digital Signal Processing) cores compared to traditional MCU (Microcontroller Unit) cores in the execution of various functions critical to real-time applications. The focus is on the integration of DSP functionalities, specifically in the context of motor control applications such as Field-Oriented Control (FOC), trigonometric calculations, back-EMF estimation, digital filtering, and high-resolution PWM generation. Through comparative analysis, it is demonstrated that DSP cores significantly enhance processing efficiency, achieving faster execution times for complex mathematical operations essential for precise torque and speed control. The study highlights the capabilities of DSP cores, including single-cycle Multiply-Accumulate (MAC) operations and optimized hardware for trigonometric functions, which collectively reduce latency and improve real-time performance. In contrast, MCU cores, while capable of performing similar tasks, typically exhibit longer execution times due to reliance on software-based solutions and lack of dedicated hardware acceleration. The findings underscore the critical role of DSP cores in applications requiring high-speed processing and low-latency response, making them indispensable in the automotive, industrial, and robotics sectors. This work serves as a reference for future developments in embedded systems, emphasizing the importance of architecture choice in achieving optimal performance in demanding computational tasks.
Keywords: CPU core, DSP, assembly code, motor control
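The multiply-accumulate (MAC) operation highlighted in this abstract is the kernel a DSP core accelerates: each output sample of an N-tap FIR filter, for example, is N MACs. A plain-Python sketch of that arithmetic (illustrative only; a DSP core would retire one MAC per cycle in dedicated hardware, which is the source of the speedup over an MCU software loop):

```python
def fir_filter(coeffs, samples):
    """Direct-form FIR filter: each output sample is len(coeffs) MACs."""
    out = []
    history = [0.0] * len(coeffs)      # delay line, newest sample first
    for x in samples:
        history = [x] + history[:-1]   # shift the delay line
        acc = 0.0
        for c, h in zip(coeffs, history):
            acc += c * h               # one multiply-accumulate per tap
        out.append(acc)
    return out

# two-tap moving average as a minimal example
print(fir_filter([0.5, 0.5], [1.0, 1.0, 1.0]))   # [0.5, 1.0, 1.0]
```

On a DSP core the inner loop collapses to a hardware MAC chain; on an MCU without a MAC unit, each tap costs a multiply, an add, and loop overhead.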
Procedia PDF Downloads 14
532 Digital Athena – Contemporary Commentaries and Greek Mythology Explored through 3D Printing
Authors: Rose Lastovicka, Bernard Guy, Diana Burton
Abstract:
Greek myth and art acted as tools to think with, and a lens through which to explore complex topics, functioning in some ways as a form of social media. In particular, coins were a form of propaganda used to communicate the wealth and power of the city-states they originated from as they circulated from person to person. From this starting point, how can the application of 3D printing technologies explore the infusion of ancient forms with contemporary commentaries to promote discussion? The digital reconstruction of artifacts is a topic that has been researched by various groups all over the globe. Yet the exploration of Greek myth through artifacts infused with contemporary issues remains unexplored in this medium. Using the Stratasys J750 3D printer (a multi-material, full-colour 3D printer), a series of coins inspired by ancient Greek currency and myth was created to present commentaries on the adversities surrounding individuals in the LGBT+ community. Using the J750 as the medium for expression allows for complete control and precision over the models, creating complex high-resolution iconography. The coins are printed in a hard, translucent material with coloured 3D visuals embedded into the coin, to be viewed in close contact by the audience. These coins as commentaries present an avenue for wider understanding by drawing perspectives not only from sources concerned with the contemporary LGBT+ community but also from sources exploring ancient homosexuality and its perception and regulation in antiquity. By displaying what are usually points of contention between anti- and pro-LGBT+ parties, this visual medium opens up a discussion to both parties, suggesting heritage can play a vital interpretative role in the contemporary world.
Keywords: 3D printing, design, Greek mythology, LGBT+ community
Procedia PDF Downloads 113
531 Digitizing Masterpieces in Italian Museums: Techniques, Challenges and Consequences from Giotto to Caravaggio
Authors: Ginevra Addis
Abstract:
The possibility of reproducing physical artifacts in a digital format is one of the opportunities offered by the technological advancements in information and communication most frequently promoted by museums. Indeed, the study and conservation of our cultural heritage have seen significant advancement due to three-dimensional acquisition and modeling technology. A variety of laser scanning systems have been developed, based either on optical triangulation or on time-of-flight measurement, capable of producing digital 3D images of complex structures with high resolution and accuracy. It is necessary, however, to explore the challenges and opportunities that this practice brings within museums. The purpose of this paper is to understand what change is introduced by digital techniques in those museums that are hosting digital masterpieces. The methodology used will investigate three distinguished Italian exhibitions, related to the territory of Milan, analyzing the following issues about museum practices: 1) how digitizing art masterpieces increases the number of visitors; 2) what needs call for the digitization of artworks; 3) which techniques are most used; 4) what the setting is; 5) the consequences of not publishing hard copies of catalogues; 6) how these practices may evolve in the future. The findings will show, first, how interconnection plays an important role in rebuilding a collection spread all over the world; second, how digital artwork duplication and the extension of reality entail new forms of accessibility; third, that collection and preservation through the digitization of images have both a social and an educational mission; and fourth, that convergence of the properties of different media (such as the web and radio) is key to encouraging people to get actively involved in digital exhibitions.
The present analysis will suggest further research that should create museum models and interaction spaces that act as catalysts for innovation.
Keywords: digital masterpieces, education, interconnection, Italian museums, preservation
Procedia PDF Downloads 174
530 Quartz Crystal Microbalance Based Hydrophobic Nanosensor for Lysozyme Detection
Authors: F. Yılmaz, Y. Saylan, A. Derazshamshir, S. Atay, A. Denizli
Abstract:
Quartz crystal microbalance (QCM), a high-resolution mass-sensing technique, measures changes in mass on an oscillating quartz crystal surface by measuring changes in the oscillation frequency of the crystal in real time. Protein adsorption techniques based on hydrophobic interaction between a protein and a solid support, called hydrophobic interaction chromatography (HIC), can be favorable in many cases, and some nanoparticles can be effectively applied for HIC. HIC takes advantage of the hydrophobicity of proteins by promoting their separation on the basis of hydrophobic interactions between immobilized hydrophobic ligands and nonpolar regions on the surface of the proteins. Lysozyme is found in a variety of vertebrate cells and secretions, such as spleen, milk, tears, and egg white. Its common applications are as a cell-disrupting agent for the extraction of bacterial intracellular products, as an antibacterial agent in ophthalmologic preparations, as a food additive in milk products and as a drug for the treatment of ulcers and infections. Lysozyme has also been used in cancer chemotherapy. The aim of this study is the synthesis of hydrophobic nanoparticles for lysozyme detection. For this purpose, methacryloyl-L-phenylalanine was chosen as the hydrophobic matrix. The hydrophobic nanoparticles were synthesized by the micro-emulsion polymerization method. The hydrophobic QCM nanosensor was then characterized by attenuated total reflection Fourier transform infrared (ATR-FTIR) spectroscopy, atomic force microscopy (AFM) and zeta size analysis. The hydrophobic QCM nanosensor was tested for real-time detection of lysozyme from aqueous solution. The kinetic and affinity studies were determined using lysozyme solutions of different concentrations. The responses related to the mass (Δm) and frequency (Δf) shifts were used to evaluate the adsorption properties.
Keywords: nanosensor, HIC, lysozyme, QCM
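The conversion between the frequency shift (Δf) and mass shift (Δm) mentioned in this abstract is commonly done with the Sauerbrey equation, which assumes a thin, rigid, evenly distributed film. A sketch using the standard quartz material constants (this is the general QCM relation, not necessarily the exact treatment used in the study):

```python
import math

# Sauerbrey relation for an AT-cut quartz crystal:
#   delta_f = -2 * f0**2 * delta_m / (A * sqrt(rho_q * mu_q))
# valid for thin, rigid, evenly distributed films.
RHO_Q = 2.648        # quartz density, g/cm^3
MU_Q = 2.947e11      # quartz shear modulus, g/(cm s^2)

def mass_from_frequency_shift(delta_f_hz, f0_hz, area_cm2):
    """Return adsorbed mass in grams for a measured frequency shift."""
    sensitivity = 2.0 * f0_hz ** 2 / (area_cm2 * math.sqrt(RHO_Q * MU_Q))
    return -delta_f_hz / sensitivity   # negative shift -> positive mass

# a 1 Hz decrease on a 5 MHz crystal of 1 cm^2 corresponds to ~17.7 ng
print(mass_from_frequency_shift(-1.0, 5.0e6, 1.0))
```

In practice the adsorbed protein layer is often viscoelastic and hydrated, so Sauerbrey gives only an approximate mass; the abstract's kinetic analysis relies on the relative Δf response rather than an absolute mass.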
Procedia PDF Downloads 347
529 Linking Metabolism, Pluripotency and Epigenetic Changes during Early Differentiation of Embryonic Stem Cells
Authors: Arieh Moussaieff, Bénédicte Elena-Herrmann, Yaakov Nahmias, Daniel Aberdam
Abstract:
Differentiation of pluripotent stem cells is a slow process, marked by the gradual loss of pluripotency factors over days in culture. While the first few days of differentiation show minor changes in the cellular transcriptome, the intracellular signaling pathways involved remain largely unknown. Recently, several groups demonstrated that the metabolism of pluripotent mouse and human cells is different from that of somatic cells, showing a marked increase in glycolysis previously identified in cancer as the Warburg effect. Here, we sought to identify the earliest metabolic changes induced in the first hours of differentiation. High-resolution NMR analysis identified 35 metabolites and a distinct, gradual transition in metabolism during early differentiation. Metabolic and transcriptional analyses showed the induction of glycolysis toward acetate and acetyl-CoA in pluripotent cells, and an increase in cholesterol biosynthesis during early differentiation. Importantly, this metabolic pathway regulated differentiation of human and mouse embryonic stem cells. Acetate delayed differentiation, preventing differentiation-induced histone deacetylation in a dose-dependent manner. Glycolytic inhibitors upstream of acetate caused differentiation of pluripotent cells, while those downstream delayed differentiation. Our data suggest that a rapid loss of glycolysis in early differentiation down-regulates acetate and acetyl-CoA production, causing a loss of histone acetylation and a concomitant loss of pluripotency. They demonstrate that pluripotent stem cells utilize a novel metabolic pathway to maintain pluripotency through acetate/acetyl-CoA, and highlight the important role metabolism plays in pluripotency and early differentiation of stem cells.
Keywords: pluripotency, metabolomics, epigenetics, acetyl-CoA
Procedia PDF Downloads 468
528 Numerical Investigation of the Needle Opening Process in a High Pressure Gas Injector
Authors: Matthias Banholzer, Hagen Müller, Michael Pfitzner
Abstract:
Gas internal combustion engines are widely used as propulsion systems or in power plants to generate heat and electricity. While there are different types of injection methods, including manifold port fuel injection and direct injection, the latter has more potential to increase the specific power by avoiding air displacement in the intake and to reduce combustion anomalies such as backfire or pre-ignition. During the opening process of the injector, multiple flow regimes occur: subsonic, transonic and supersonic. To cover this wide range of Mach numbers, a compressible pressure-based solver is used. While the standard Pressure Implicit with Splitting of Operators (PISO) method is used for the coupling between velocity and pressure, a high-resolution non-oscillatory central scheme established by Kurganov and Tadmor calculates the convective fluxes. A blending function based on the local Mach and CFL numbers switches between the compressible and incompressible regimes of the developed model. As the considered operating points are well above the critical state of the fluids used, the ideal-gas assumption is no longer valid. For the real-gas thermodynamics, models based on the Soave-Redlich-Kwong equation of state were implemented. The caloric properties are corrected using a departure formalism; for the viscosity and the thermal conductivity, the empirical correlation of Chung is used. For the injector geometry, the dimensions of a diesel injector were adapted. Simulations were performed using different nozzle and needle geometries and opening curves. It can be clearly seen that there is a significant influence of all three parameters.
Keywords: high pressure gas injection, hybrid solver, hydrogen injection, needle opening process, real-gas thermodynamics
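The pressure-explicit form of the Soave-Redlich-Kwong equation of state mentioned above can be sketched in a few lines. This is a generic textbook implementation, not the authors' solver code; the hydrogen critical constants used in the example are standard literature values:

```python
import math

R = 8.314  # universal gas constant, J/(mol*K)

def srk_pressure(temp_k, v_molar, tc_k, pc_pa, omega):
    """Soave-Redlich-Kwong pressure p(T, v) for a pure fluid.

    p = R*T/(v - b) - a*alpha(T) / (v*(v + b))
    with the standard SRK parameters a, b and the Soave alpha-function.
    """
    a = 0.42748 * R**2 * tc_k**2 / pc_pa
    b = 0.08664 * R * tc_k / pc_pa
    m = 0.480 + 1.574 * omega - 0.176 * omega**2
    alpha = (1.0 + m * (1.0 - math.sqrt(temp_k / tc_k)))**2
    return R * temp_k / (v_molar - b) - a * alpha / (v_molar * (v_molar + b))

# Hydrogen (Tc = 33.2 K, pc = 1.30 MPa, omega = -0.22) at ambient conditions:
# far from the critical point at low density, SRK approaches the ideal-gas law.
p = srk_pressure(300.0, 0.0249, 33.2, 1.30e6, -0.22)
print(p)  # close to 1 bar
```

At injection pressures well above the critical pressure the departure from the ideal-gas value becomes substantial, which is why the departure-function correction of the caloric properties is needed.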
Procedia PDF Downloads 459
527 Development of Liquefaction-Induced Ground Damage Maps for the Wairau Plains, New Zealand
Authors: Omer Altaf, Liam Wotherspoon, Rolando Orense
Abstract:
The Wairau Plains are located in the north-east of the South Island of New Zealand, in the region of Marlborough. The region is cut by many active crustal faults, such as the Wairau, Awatere, and Clarence faults, which give rise to frequent seismic events. This paper presents the preliminary results of the overall project, in which liquefaction-induced ground damage maps are developed for the Wairau Plains based on the Ministry of Business, Innovation and Employment NZ guidance. A suite of maps has been developed in relation to the level of detail that was available to inform the liquefaction hazard mapping. Maps at the coarsest level of detail make use of regional geologic information, applying semi-quantitative criteria based on geological age, design peak ground accelerations and depth to the water table. The next level of detail incorporates higher-resolution surface geomorphologic characteristics to better delineate potentially liquefiable and non-liquefiable deposits across the region. The most detailed assessment utilised CPT sounding data to develop ground damage response curves for areas across the region and provide a finer level of categorisation of liquefaction vulnerability. Linking these with design-level earthquakes defined through NZGS guidelines will enable detailed classification to be carried out at CPT investigation locations, from very low through to high liquefaction vulnerability. To extend classifications to these detailed levels, CPT investigations in geomorphic regions are grouped together to provide an indication of the representative performance of the soils in these areas, making use of the geomorphic mapping outlined above.
Keywords: hazard, liquefaction, mapping, seismicity
Procedia PDF Downloads 137
526 Study Employed a Computer Model and Satellite Remote Sensing to Evaluate the Temporal and Spatial Distribution of Snow in the Western Hindu Kush Region of Afghanistan
Authors: Noori Shafiqullah
Abstract:
Millions of people reside downstream of river basins that heavily rely on snowmelt originating from the Hindu Kush (HK) region. Snowmelt plays a critical role as a primary water source in these areas. This study aimed to evaluate snowfall and snowmelt characteristics in the HK region across altitudes ranging from 2019 m to 4533 m. To achieve this, the study employed a combination of remote sensing techniques and a snow model (SM) to analyze the spatial and temporal distribution of snow water equivalent (SWE). By integrating the simulated snow-cover area (SCA) with data from the Moderate Resolution Imaging Spectroradiometer (MODIS), the study optimized the precipitation gradient (PG) for snowfall assessment and the degree-day factor (DDF) for snowmelt distribution. Ground-observed data from various elevations were used to calculate a temperature lapse rate of -7.0 °C km-1. Consequently, the DDF value was determined as 3 mm °C-1 d-1 for altitudes below 3000 m and 3 to 4 mm °C-1 d-1 for altitudes above 3000 m. Moreover, the distribution of precipitation varies with elevation, with the PG being 0.001 m-1 at elevations below 4000 m and 0 m-1 at elevations above 4000 m. The study then utilized the SM to assess SCA and SWE by incorporating the two optimized parameters. Comparison of the simulated SCA against MODIS data yielded coefficient of determination (R²) values of 0.95 to 0.97 for the years 2014-2015, 2015-2016, and 2016-2017. These results demonstrate that the SM is a valuable tool for managing water resources in mountainous watersheds such as the HK, where data scarcity poses a challenge.
Keywords: improved MODIS, experiment, snow water equivalent, snowmelt
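The degree-day melt computation described above can be sketched in a few lines. The lapse rate and the elevation-dependent DDF values are those reported in the abstract (taking the upper value, 4 mm/°C/day, above 3000 m); the station temperature and elevations are made-up example values:

```python
def melt_mm_per_day(t_station_c, z_station_m, z_m,
                    lapse_c_per_km=-7.0, t_melt_c=0.0):
    """Daily snowmelt (mm water equivalent) from a degree-day model.

    Air temperature is extrapolated to elevation z with a linear lapse
    rate, then melt = DDF * max(T - T_melt, 0), where the DDF follows
    the elevation bands reported for the Hindu Kush region.
    """
    t = t_station_c + lapse_c_per_km * (z_m - z_station_m) / 1000.0
    ddf = 3.0 if z_m < 3000.0 else 4.0  # mm per degree-day
    return ddf * max(t - t_melt_c, 0.0)

# Station at 2019 m reading 10 C: melt at 2500 m, but none at 3500 m,
# where the extrapolated temperature is already below freezing.
print(melt_mm_per_day(10.0, 2019.0, 2500.0))  # ~19.9 mm/day
print(melt_mm_per_day(10.0, 2019.0, 3500.0))  # 0.0
```

The same temperature extrapolation drives the snowfall/rain split when the precipitation gradient distributes gauge data over the basin.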
Procedia PDF Downloads 67
525 Resolution Method for Unforeseen Ground Condition Problem Case in Coal Fired Steam Power Plant Project Location Adipala, Indonesia
Authors: Andi Fallahi, Bona Ryan Situmeang
Abstract:
The construction industry is notoriously risky. Much of the preparatory paperwork that precedes a construction project can be viewed as the formulation of risk allocation between the Owner and the Contractor. The Owner is taking the risk that his project will not get built on schedule, that it will not get built for what he has budgeted, and that it will not be of the quality he expected. The Contractor faces a multitude of risks. One of them is an unforeseen condition at the construction site. The Owner usually has the upper hand here if an unforeseen condition occurs. Site data contained in the ground investigation report are often of significant contractual importance in disputes related to unforeseen ground conditions. A ground investigation can never fully disclose all the details of the underground condition (the risk of an unknown ground condition can never be 100% eliminated). The Adipala Coal Fired Steam Power Plant (CFSPP) 1 x 660 project is one of the large CFSPP projects in Indonesia based on an Engineering, Procurement, and Construction (EPC) contract. Responsibility for unforeseen ground conditions lies with the Contractor, as stipulated in the clauses of the contract. During implementation, an unforeseen ground condition was identified at the Circulating Water Pump House (CWPH) area, which forced the Contractor to change the method of work, with a large impact on the time of completion and project cost. This paper tries to analyze the best way of allocating the risk between the Owner and the Contractor. Allocating and sharing risk fairly can ultimately save time and money for all parties and get the job done on schedule for the least overall cost.
Keywords: unforeseen ground condition, coal fired steam power plant, circulating water pump house, Indonesia
Procedia PDF Downloads 325
524 Potassium-Phosphorus-Nitrogen Detection and Spectral Segmentation Analysis Using Polarized Hyperspectral Imagery and Machine Learning
Authors: Nicholas V. Scott, Jack McCarthy
Abstract:
Military, law enforcement, and counter-terrorism organizations are often tasked with target detection and image characterization of scenes containing explosive materials in various types of environments where light scattering intensity is high. Mitigation of this photonic noise using classical digital filtration and signal processing can be difficult, partially due to the lack of robust image processing methods for photonic noise removal, which strongly influences high-resolution target detection and machine learning-based pattern recognition. Such analysis is crucial to the delivery of reliable intelligence. Polarization filters are a possible method for ambient glare reduction, allowing only certain modes of the electromagnetic field to be captured and providing strong scene contrast. An experiment was carried out utilizing a polarization lens attached to a hyperspectral imaging camera for the purpose of exploring the degree to which an imaged polarized scene of a potassium, phosphorus, and nitrogen mixture allows for improved target detection and image segmentation. Preliminary imagery results based on the application of machine learning algorithms, including competitive leaky learning and distance metric analysis, to polarized hyperspectral imagery suggest that polarization filters provide a slight advantage in image segmentation. The results of this work have implications for understanding the presence of explosive material in dry, desert areas where reflective glare is a significant impediment to scene characterization.
Keywords: explosive material, hyperspectral imagery, image segmentation, machine learning, polarization
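The glare-suppression principle behind the polarization lens is Malus's law: a linear polarizer transmits only the component of polarized light aligned with its axis. A minimal numerical sketch, purely illustrative (intensities in arbitrary units):

```python
import math

def transmitted_intensity(i0, theta_deg):
    """Malus's law: I = I0 * cos^2(theta) for polarized light passing
    through an ideal linear polarizer at angle theta to the
    polarization axis."""
    return i0 * math.cos(math.radians(theta_deg))**2

# Specular glare off a surface is strongly polarized, so crossing the
# filter (90 deg) suppresses it almost entirely, while unpolarized
# scene light loses only half its intensity on average.
print(transmitted_intensity(100.0, 0.0))   # ~100 (aligned)
print(transmitted_intensity(100.0, 60.0))  # ~25
print(transmitted_intensity(100.0, 90.0))  # ~0 (crossed)
```

This asymmetry between polarized glare and unpolarized scene radiance is what boosts the contrast available to the segmentation algorithms.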
Procedia PDF Downloads 137
523 A Clinical Study of Tracheobronchopathia Osteochondroplastica: Findings from a Large Chinese Cohort
Authors: Ying Zhu, Ning Wu, Hai-Dong Huang, Yu-Chao Dong, Qin-Ying Sun, Wei Zhang, Qin Wang, Qiang Li
Abstract:
Background and study aims: Tracheobronchopathia osteochondroplastica (TO) is an uncommon disease of the tracheobronchial system that leads to narrowing of the airway lumen from cartilaginous and/or osseous submucosal nodules. The aim of this study is to perform a detailed review of this rare disease in a large cohort of patients with TO proven by fiberoptic bronchoscopy in China. Patients and methods: A retrospective chart review was performed on 41,600 patients who underwent bronchoscopy in the Department of Respiratory Medicine of Changhai Hospital between January 2005 and December 2012. Cases of TO were identified based on characteristic features during bronchoscopic examination. Results: 22 cases of bronchoscopic TO were identified, of whom one-half were male; the mean age was 47.45 ± 10.91 years. The most frequent symptoms at presentation were chronic cough (n=14) and increased sputum production (n=10). Radiographic abnormalities were observed in 3/18 patients, and findings on computed tomography consistent with TO, such as beaded intraluminal calcifications and/or increased luminal thickenings, were observed in 18/22 patients. Patients were classified into the following categories based on the severity of bronchoscopic findings: Stage I (n=2), Stage II (n=6) and Stage III (n=14). Bronchoscopic improvement observed in 2 patients administered inhaled corticosteroids suggests that resolution of this disease is possible. Conclusions: TO is a benign disease with slow progression, which can be roughly divided into 3 stages on the basis of the characteristic endoscopic features and histopathologic findings. Chronic inflammation is thought to be more important than the other existing plausible hypotheses in the course of TO. Inhaled corticosteroids might have some impact on patients at Stage I/II.
Keywords: airway obstruction, bronchoscopy, etiology, Tracheobronchopathia osteochondroplastica (TO), treatment
Procedia PDF Downloads 462
522 Remote Sensing and GIS-Based Environmental Monitoring by Extracting Land Surface Temperature of Abbottabad, Pakistan
Authors: Malik Abid Hussain Khokhar, Muhammad Adnan Tahir, Hisham Bin Hafeez Awan
Abstract:
Continuous environmental change across the entire globe due to increasing land surface temperature (LST) has become a vital phenomenon nowadays. LST is accelerating because of increasing greenhouse gases in the environment, which results in the melting of ice caps, ice sheets and glaciers. It not only has adverse effects on the vegetation and water bodies of a region but also has severe impacts on monsoon areas in the form of capricious rainfall, monsoon failure and extensive precipitation. The environment can be monitored with the help of various geographic information system (GIS) based algorithms, i.e. SC (single channel), DA (dual angle), Mao, Sobrino and SW (split window). Estimation of LST is very much possible from digital image processing of satellite imagery. This paper encompasses the extraction of the LST of Abbottabad over the last ten years using the SW technique of GIS and remote sensing, by means of Landsat 7 ETM+ (Enhanced Thematic Mapper Plus) and Landsat 8, via their Thermal Infrared (TIR) sensor and Operational Land Imager (OLI, not carried by Landsat 7 ETM+), having 100 m TIR and 30 m optical resolutions. Their emissivity and spectral radiance will be used as input statistics in the SW algorithm for LST extraction. Emissivity will be derived from Normalized Difference Vegetation Index (NDVI) threshold methods using bands 2-5 of OLI with the help of eCognition software, and spectral radiance will be extracted from the TIR bands (bands 10-11 of Landsat 8 and band 6 of Landsat 7 ETM+). The accuracy of the results will be evaluated against weather data as well. The ensuing research will have a significant role for all tiers of governing bodies related to climate change departments.
Keywords: environment, Landsat 8, SW Algorithm, TIR
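The NDVI-threshold emissivity step and a split-window combination can be sketched as below. The soil/vegetation emissivities and NDVI thresholds are typical literature values, and the split-window coefficients c0-c2 are illustrative placeholders, not the coefficients of this study (operational SW algorithms also carry quadratic and water-vapour terms):

```python
def emissivity_from_ndvi(ndvi, ndvi_soil=0.2, ndvi_veg=0.5,
                         eps_soil=0.97, eps_veg=0.99):
    """Land-surface emissivity from the NDVI threshold method: mix soil
    and vegetation emissivities by fractional vegetation cover
    Pv = ((NDVI - NDVI_soil) / (NDVI_veg - NDVI_soil))^2, clamped to [0, 1]."""
    pv = (ndvi - ndvi_soil) / (ndvi_veg - ndvi_soil)
    pv = min(max(pv, 0.0), 1.0)**2
    return eps_veg * pv + eps_soil * (1.0 - pv)

def split_window_lst(t10_k, t11_k, eps, c0=0.3, c1=1.4, c2=45.0):
    """Generic split-window form using the two TIR brightness
    temperatures: LST = T10 + c1*(T10 - T11) + c0 + c2*(1 - eps)."""
    return t10_k + c1 * (t10_k - t11_k) + c0 + c2 * (1.0 - eps)

eps = emissivity_from_ndvi(0.35)           # a mixed soil/vegetation pixel
lst = split_window_lst(300.0, 299.0, eps)  # Kelvin
print(eps, lst)
```

The brightness-temperature difference between the two TIR bands carries the atmospheric correction, while the emissivity term corrects for the surface deviating from a blackbody.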
Procedia PDF Downloads 354
521 Investigation of Detectability of Orbital Objects/Debris in Geostationary Earth Orbit by Microwave Kinetic Inductance Detectors
Authors: Saeed Vahedikamal, Ian Hepburn
Abstract:
Microwave kinetic inductance detectors (MKIDs) are considered one of the most promising photon detectors of the future in many astronomical applications such as exoplanet detection. The MKID advantages stem from their single-photon sensitivity (ranging from UV to optical and near infrared), photon energy resolution and high temporal capability (~microseconds). There has been substantial progress in the development of these detectors, and MKIDs with megapixel arrays are now possible. The unique capability of recording an incident photon and its energy (or wavelength), while also registering its time of arrival to within a microsecond, enables an array of MKIDs to produce a four-dimensional data block comprising the x, y spatial axes, a per-pixel spectral axis z, and a per-pixel temporal axis t. This offers the possibility that the spectrum and brightness variation of any detected piece of space debris as a function of time might offer a unique identifier or fingerprint. Such a fingerprint signal from any object identified in multiple detections by different observers has the potential to determine the orbital features of the object and be used for tracking. Modelling performed so far shows that with a 20 cm telescope located at an astronomical observatory (e.g. La Palma, Canary Islands) we could detect sub-cm objects at GEO. By considering a Lambertian sphere with a 10% reflectivity (the albedo of the Moon), we anticipate the following for a GEO object: a 10 cm object imaged in a 1 second capture; a 1.2 cm object for a 70 second integration; or a 0.65 cm object for a 4 minute integration. We present details of our modelling and the potential instrument for a dedicated GEO surveillance system.
Keywords: space debris, orbital debris, detection system, observation, microwave kinetic inductance detectors, MKID
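The detectability estimate can be reproduced to order of magnitude with a simple Lambertian-sphere photometric model. This sketch is not the authors' model: it assumes observation at zero phase angle (where the geometric albedo of a Lambertian sphere is 2/3 of its Bond albedo) and uses the standard solar apparent magnitude of -26.74:

```python
import math

SUN_MAG = -26.74  # apparent V magnitude of the Sun

def debris_apparent_mag(diameter_m, albedo, range_m):
    """Apparent magnitude of a Lambertian sphere seen in reflected
    sunlight at zero phase angle from the given range.

    Flux ratio relative to the Sun: (2/3) * albedo * (r / R)^2.
    """
    r = diameter_m / 2.0
    flux_ratio = (2.0 / 3.0) * albedo * (r / range_m)**2
    return SUN_MAG - 2.5 * math.log10(flux_ratio)

# A 10 cm object with lunar albedo (0.1) at GEO range (~36,000 km)
# comes out around magnitude 20.5 -- faint, but within reach of a
# photon-counting detector on a small telescope with short integrations.
print(debris_apparent_mag(0.10, 0.1, 3.6e7))
```

Halving the diameter costs about 1.5 magnitudes, which is why the sub-cm cases in the abstract require integrations of minutes rather than seconds.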
Procedia PDF Downloads 94
520 Late Presentation of Pseudophakic Macula Edema from Oral Kinase Inhibitors: A Case and Literature Review
Authors: Christolyn Raj, Lewis Levitz
Abstract:
Introduction: Two cases of late presentation (more than five years) of bilateral pseudophakic macula edema related to oral tyrosine kinase inhibitors are described. These cases are the first of their type in the published literature. A review of ocular inflammatory complications of tyrosine kinase inhibitors in the current literature is also provided. Case presentations: Case 1 is an 83-year-old female who had been stable on ibrutinib (Imbruvica®) for chronic lymphocytic leukemia (CLL). She presented with bilateral blurred vision from severe cystoid macula edema seven years after routine cataract surgery. She was treated with intravitreal steroids, with complete resolution and no relapse. Case 2 is a 76-year-old female on therapy for polycythemia vera with ruxolitinib (Jakafi®). She presented with bilateral blurred vision from mild cystoid macula edema six years after routine cataract surgery. She responded well to topical steroids without relapse. In both cases, oral tyrosine kinase inhibitor agents were presumed to be the underlying cause and were ceased. Over the last five years, there have been increasing reports in the literature of the inflammatory effects of tyrosine kinase inhibitors on the retina, uvea and optic nerve. Conclusion: Late presentation of pseudophakic macula edema following routine cataract surgery is rare. Such presentations should prompt investigation of the chronic use of systemic medications, especially oral kinase inhibitors. Patients who must remain on these agents require ongoing ophthalmologic assessment in view of their long-term inflammatory side effects.
Keywords: macula edema, oral kinase inhibitors, retinal toxicity, pseudo-phakia
Procedia PDF Downloads 93
519 Development of National Scale Hydropower Resource Assessment Scheme Using SWAT and Geospatial Techniques
Authors: Rowane May A. Fesalbon, Greyland C. Agno, Jodel L. Cuasay, Dindo A. Malonzo, Ma. Rosario Concepcion O. Ang
Abstract:
The Department of Energy of the Republic of the Philippines estimates that the country's energy reserves for 2015 are dwindling, as observed in the rotating power outages in several localities. To aid in the energy crisis, a national hydropower resource assessment scheme is developed. Hydropower is a resource derived from flowing water and a difference in elevation. It is a renewable energy resource deemed abundant in the Philippines, an archipelagic country rich in bodies of water and water resources. The objective of this study is to develop a methodology for a national hydropower resource assessment using hydrologic modeling and geospatial techniques in order to generate resource maps for future reference and use by the government and other stakeholders. The methodology developed for this purpose is focused on two components: the implementation of the Soil and Water Assessment Tool (SWAT) for the river discharge, and the use of geospatial techniques to analyze the topography, obtain the head, and generate the theoretical hydropower potential sites. The methodology is tightly coupled with geographic information systems to maximize the use of geodatabases and the spatial significance of the determined sites. The hydrologic model used in this workflow is SWAT integrated in the GIS software ArcGIS. The head is determined by a developed algorithm that utilizes a Synthetic Aperture Radar (SAR)-derived digital elevation model (DEM) with a resolution of 10 meters. The initial results of the developed workflow indicate hydropower potential in the river reaches ranging from pico (less than 5 kW) to mini (1-3 MW) theoretical potential.
Keywords: ArcSWAT, renewable energy, hydrologic model, hydropower, GIS
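The theoretical potential at each candidate site follows directly from the SWAT-simulated discharge and the DEM-derived head. A minimal sketch of that calculation; the discharge, head and efficiency below are made-up example values, not results from the study:

```python
RHO_WATER = 1000.0  # density of water, kg/m^3
G = 9.81            # gravitational acceleration, m/s^2

def hydropower_watts(discharge_m3s, head_m, efficiency=1.0):
    """Hydropower potential P = rho * g * Q * H * eta.

    efficiency = 1.0 gives the theoretical potential used for resource
    mapping; real turbine/generator chains are typically ~0.8-0.9.
    """
    return RHO_WATER * G * discharge_m3s * head_m * efficiency

# A reach with 10 m^3/s of discharge over a 50 m head at 90% efficiency
# yields roughly 4.4 MW -- a "mini" site on the abstract's scale,
# whereas 0.05 m^3/s over 10 m (~4.9 kW) would fall in the pico class.
print(hydropower_watts(10.0, 50.0, 0.9))
```

Running this per river reach over the SWAT discharge network, with H taken from the DEM, produces the classed potential map described above.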
Procedia PDF Downloads 311
518 Compact LWIR Borescope Sensor for Thermal Imaging of 2D Surface Temperature in Gas-Turbine Engines
Authors: Andy Zhang, Awnik Roy, Trevor B. Chen, Bibik Oleksandar, Subodh Adhikari, Paul S. Hsu
Abstract:
The durability of a combustor in gas-turbine engines is a strong function of its component temperatures and requires good control of these temperatures. Since the temperature of combustion gases frequently exceeds the melting point of the combustion liner walls, an efficient air-cooling system with optimized flow rates of cooling air is significantly important to extend the lifetime of liner walls. To determine the effectiveness of the air-cooling system, accurate two-dimensional (2D) surface temperature measurement of combustor liner walls is crucial for advanced engine development. Traditional diagnostic techniques for temperature measurement in this application include thermocouples, thermal wall paints, pyrometry, and phosphors. They have shown some disadvantages, including being intrusive and affecting local flame/flow dynamics, potential flame quenching, physical damage to instrumentation due to the harsh environment inside the combustor, and strong optical interference from combustion emission in the UV to mid-IR wavelengths. To overcome these drawbacks, a compact and small borescope long-wave infrared (LWIR) sensor is developed to achieve high-spatial-resolution, high-fidelity thermal imaging of 2D surface temperature in gas-turbine engines, providing the desired engine component temperature distribution. The compact LWIR borescope sensor makes it feasible to promote the durability of a combustor in gas-turbine engines and, furthermore, to develop more advanced gas-turbine engines.
Keywords: borescope, engine, long-wave-infrared, sensor
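LWIR thermography rests on Planck's law: the spectral radiance leaving the liner wall at temperature T fixes the signal reaching the detector. A minimal blackbody sketch (illustrative only; real surfaces need an emissivity factor, which is omitted here):

```python
import math

H = 6.62607015e-34   # Planck constant, J*s
C = 2.99792458e8     # speed of light, m/s
KB = 1.380649e-23    # Boltzmann constant, J/K

def planck_radiance(wavelength_m, temp_k):
    """Blackbody spectral radiance B(lambda, T) in W m^-3 sr^-1
    (i.e. per metre of wavelength): 2hc^2 / lambda^5 / (exp(hc/(lambda k T)) - 1)."""
    x = H * C / (wavelength_m * KB * temp_k)
    return 2.0 * H * C**2 / wavelength_m**5 / (math.exp(x) - 1.0)

# At 10 um (mid-LWIR), a 300 K surface radiates ~10 W m^-2 sr^-1 um^-1;
# a hot liner wall at 1200 K radiates orders of magnitude more, which
# is the contrast the LWIR borescope images.
b300 = planck_radiance(10e-6, 300.0) * 1e-6   # converted to per-um
b1200 = planck_radiance(10e-6, 1200.0) * 1e-6
print(b300, b1200)
```

Choosing the LWIR band also helps sidestep the strong combustion emission in the UV to mid-IR cited above, since that interference falls largely outside the 8-14 um window.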
Procedia PDF Downloads 131
517 Photocaged Carbohydrates: Versatile Tools for Biotechnological Applications
Authors: Claus Bier, Dennis Binder, Alexander Gruenberger, Dagmar Drobietz, Dietrich Kohlheyer, Anita Loeschcke, Karl Erich Jaeger, Thomas Drepper, Joerg Pietruszka
Abstract:
Light-absorbing chromophoric systems are important optogenetic tools for biotechnological and biophysical investigations. Processes such as fluorescence or photolysis can be triggered by the light absorption of chromophores; such processes play a central role in the life sciences. Photocaged compounds belong to this class of chromophoric systems. Their photo-labile protecting groups enable them to release biologically active substances with high temporal and spatial resolution. The properties of photocaged compounds are specified by the characteristics of the caging group as well as the characteristics of the linked effector molecule. In our research, we work with different types of photo-labile protecting groups and various effector molecules, giving us potential access to a large library of caged compounds. Depending on the caged effector molecule, a nearly limitless number of biological systems can be directed. Our main interest focuses on photocaging carbohydrates (e.g. arabinose) and their derivatives as effector molecules. Based on the resulting photocaged compounds, precisely controlled photoinduced gene expression will give us access to studies of numerous biotechnological and synthetic biology applications. It could be shown that the regulation of gene expression via light is possible with photocaged carbohydrates, achieving higher-order control over these processes. With the one-step-cleavable photocaged carbohydrate, homogeneous expression was achieved in comparison to free carbohydrates.
Keywords: bacterial gene expression, biotechnology, caged compounds, carbohydrates, optogenetics, photo-removable protecting group
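Uncaging under continuous illumination is commonly modelled as a first-order photolysis process, which is what lets the released effector concentration be scheduled by light dose. A generic kinetic sketch; the rate constant is an illustrative value, not a measurement for these compounds:

```python
import math

def fraction_released(t_s, k_per_s):
    """Fraction of a caged compound photolysed after time t under
    constant illumination, assuming first-order kinetics:
    released(t) = 1 - exp(-k * t)."""
    return 1.0 - math.exp(-k_per_s * t_s)

# The effective rate is k = sigma * phi * I (absorption cross-section
# x photolysis quantum yield x photon flux). Illustrative k = 0.01 1/s:
k = 0.01
half_time = math.log(2) / k  # ~69.3 s to uncage half the compound
print(fraction_released(half_time, k))
```

Because release is exponential in the light dose, both the onset time and the final inducer concentration can be tuned by the illumination intensity and duration, matching the temporal control described above.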
Procedia PDF Downloads 225
516 Roof and Road Network Detection through Object Oriented SVM Approach Using Low Density LiDAR and Optical Imagery in Misamis Oriental, Philippines
Authors: Jigg L. Pelayo, Ricardo G. Villar, Einstine M. Opiso
Abstract:
The advances of aerial laser scanning in the Philippines have opened up entire fields of research in remote sensing and machine vision that aspire to provide accurate and timely information for the government and the public. Rapid mapping of polygonal road and roof boundaries is one such use, offering applications in disaster risk reduction, mitigation and development. The study uses low-density LiDAR data and high-resolution aerial imagery through an object-oriented approach, applying the theoretical concepts of data analysis to a machine learning algorithm in order to minimize the constraints of feature extraction. Since separating one class from another in distinct regions of a multi-dimensional feature space requires non-trivial computation, distribution fitting was implemented to formulate the learned ideal hyperplane. Customized hybrid features were generated and then used to improve the classifier findings. Supplemental algorithms for filtering and reshaping object features were developed in the rule set to enhance the final product. Several advantages in terms of simplicity, applicability, and process transferability are noticeable in the methodology. The algorithm was tested in different random locations of Misamis Oriental province in the Philippines, demonstrating robust performance with an overall accuracy greater than 89% and potential for semi-automation. The extracted results will become a vital input for decision makers, urban planners and even the commercial sector in various assessment processes.
Keywords: feature extraction, machine learning, OBIA, remote sensing
Procedia PDF Downloads 360