Search results for: gripper optimization
387 Development of a Systematic Approach to Assess the Applicability of Silver Coated Conductive Yarn
Authors: Y. T. Chui, W. M. Au, L. Li
Abstract:
Wearable electronic textiles have recently emerged in the market and developed rapidly: beyond clothing for leisure, fashion wear, and personal protection, there is high demand for garments that function in the electronic age, serving as interactive interfaces, tangible touch surfaces, social fabrics, material witnesses, and so on. As wearable electronic textiles are required to be more comfortable, durable, and easy to care for, conductive yarn has become one of the most important fundamental elements within them, used for interconnection between different functional units or for creating a functional unit. The properties of conductive yarns from different suppliers can vary to a large extent, and there are vitally important criteria for selecting them which may directly affect the optimization, prospects, applicability, and performance of the final garment. However, according to the literature review, few studies of commercially available conductive yarns focus on assessment methods for selecting the material systematically under different conditions. Therefore, this study gives direction for selecting high-quality conductive yarns: testing their stability and reliability against the problems industrialists experience with the yarns during each manufacturing process. The assessment system is classified into four stages: 1) yarn stage, 2) fabric stage, 3) apparel stage, and 4) end-user stage. Several tests with clear experimental procedures and parameters are suggested for each stage. The assessment method suggests that optimal conductive yarns should be stable in their properties and resistant to various corrosive conditions at every production stage and during use. 
It is expected that this demonstration of the assessment method can serve as a pilot study that systematically assesses the stability of Ag/nylon yarns under various conditions, i.e., during mass production with textile industry procedures and from the consumer perspective. It aims to assist industrialists in understanding the qualities and properties of conductive yarns and suggests a few important parameters to keep in mind for a higher level of suitability, precision, and controllability. Keywords: applicability, assessment method, conductive yarn, wearable electronics
Procedia PDF Downloads 535
386 Hydraulic Characteristics of Mine Tailings by Metaheuristics Approach
Authors: Akhila Vasudev, Himanshu Kaushik, Tadikonda Venkata Bharat
Abstract:
A large number of mine tailings are produced every year as part of the extraction of phosphates, gold, copper, and other materials. Mine tailings are high in water content and dewater very slowly. Efficient design of tailings dams and economical disposal of these slurries require knowledge of tailings consolidation behavior. Large-strain consolidation theory closely predicts the self-weight consolidation of such slurries, as it enforces conservation of mass and momentum and treats hydraulic conductivity as a function of void ratio. Classical laboratory techniques, such as the settling column test and the seepage consolidation test, are expensive and time-consuming for estimating the variation of hydraulic conductivity with void ratio. Instead, inverse estimation of the constitutive relationships from measured settlement-versus-time curves is explored. In this work, inverse analysis based on metaheuristic techniques is used to predict the hydraulic conductivity parameters of mine tailings from the base excess pore water pressure dissipation curve and the initial conditions of the tailings. The proposed inverse model uses the particle swarm optimization (PSO) algorithm, which is inspired by the social behavior of animals searching for food sources. The finite-difference numerical solution of the forward analytical model is integrated with the PSO algorithm to solve the inverse problem. The method is first tested on synthetic base excess pore pressure dissipation curves generated using the finite difference method. Its effectiveness is then verified using a base excess pore pressure dissipation curve obtained from a settling column experiment and further ensured through comparison with available predicted hydraulic conductivity parameters. Keywords: base excess pore pressure, hydraulic conductivity, large strain consolidation, mine tailings
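The inverse step described above reduces to a standard PSO loop minimizing the misfit between a measured dissipation curve and the forward model's prediction. A minimal sketch, assuming a hypothetical two-parameter exponential decay as a stand-in for the finite-difference forward model (the parameter bounds, synthetic data, and forward model are illustrative, not the study's):

```python
import math
import random

random.seed(0)

def pso(objective, bounds, n_particles=30, n_iter=100, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer over box-constrained parameters."""
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    gbest_val = min(pbest_val)
    gbest = pbest[pbest_val.index(gbest_val)][:]
    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # Keep particles inside the physical parameter bounds
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Hypothetical stand-in for the forward model: a two-parameter exponential
# decay of base excess pore pressure u(t) with "true" values u0=10, k=0.3.
t_obs = [0.0, 1.0, 2.0, 4.0, 8.0]
u_obs = [10.0 * math.exp(-0.3 * t) for t in t_obs]  # synthetic "measured" curve

def misfit(params):
    u0, k = params
    return sum((u0 * math.exp(-k * t) - u) ** 2 for t, u in zip(t_obs, u_obs))

best, err = pso(misfit, bounds=[(1.0, 20.0), (0.01, 1.0)])
```

In the paper's setting, `misfit` would instead call the finite-difference solution of the large-strain consolidation equation with candidate hydraulic conductivity parameters.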
Procedia PDF Downloads 136
385 A Comparative Study of the Techno-Economic Performance of the Linear Fresnel Reflector Using Direct and Indirect Steam Generation: A Case Study under High Direct Normal Irradiance
Authors: Ahmed Aljudaya, Derek Ingham, Lin Ma, Kevin Hughes, Mohammed Pourkashanian
Abstract:
Researchers, power companies, and state politicians have given concentrated solar power (CSP) much attention due to its capacity to generate large amounts of electricity while overcoming the intermittent nature of solar resources. The Linear Fresnel Reflector (LFR) is a well-known CSP technology, inexpensive and with a low land use factor, though it suffers from low optical efficiency. The LFR is considered a cost-effective alternative to the Parabolic Trough Collector (PTC) because of its simple design, which often outweighs its lower efficiency. The LFR has been found promising for producing steam directly for a thermal cycle to generate low-cost electricity, but it has also been shown to be promising for indirect steam generation. The purpose of this analysis is to compare the annual performance of Direct Steam Generation (DSG) and Indirect Steam Generation (ISG) LFR power plants using molten salt and other Heat Transfer Fluids (HTFs) to investigate their technical and economic effects. A 50 MWe solar-only system is examined as a case study for both steam production methods under extreme weather conditions. In addition, a parametric analysis is carried out to determine the optimal solar field size that provides the lowest Levelized Cost of Electricity (LCOE) while achieving the highest technical performance. Optimizing the solar field size yields a solar multiple (SM) between 1.2 and 1.5, achieving an LCOE as low as 9 cents/kWh for direct steam generation with the linear Fresnel reflector. The plant is capable of producing around 141 GWh annually with a capacity factor of up to 36%, whereas the ISG produces less energy at a higher cost. 
The optimization results show that the DSG outperforms the ISG, producing around 3% more annual energy at 2% lower LCOE and 28% less capital cost. Keywords: concentrated solar power, levelized cost of electricity, linear Fresnel reflectors, steam generation
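The LCOE comparison above rests on the standard levelized-cost formula: annualized capital plus O&M divided by annual generation. A minimal sketch, where only the 141 GWh annual energy comes from the abstract; the capital cost, O&M, discount rate, and plant life below are assumed placeholders, not the study's inputs:

```python
# Illustrative LCOE check: annualized capital + O&M over annual generation.
def capital_recovery_factor(rate, years):
    """CRF = r(1+r)^n / ((1+r)^n - 1), spreading capital over the plant life."""
    return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

capex_usd = 200e6          # assumed total installed cost
om_usd_per_year = 5e6      # assumed fixed O&M
annual_energy_kwh = 141e6  # 141 GWh/yr, from the case study
crf = capital_recovery_factor(0.07, 25)  # assumed 7% discount rate, 25-yr life

lcoe_usd_per_kwh = (capex_usd * crf + om_usd_per_year) / annual_energy_kwh
```

With these placeholder costs the LCOE comes out near 16 cents/kWh; the study's 9 cents/kWh figure implies different (lower) cost assumptions.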
Procedia PDF Downloads 111
384 Caregiver Training Results in Accurate Reporting of Stool Frequency
Authors: Matthew Heidman, Susan Dallabrida, Analice Costa
Abstract:
Background: Accuracy of caregiver-reported outcomes is essential for the success of infant growth and tolerability studies. Crying/fussiness, stool consistency, and other gastrointestinal characteristics are important tolerability parameters, and inter-caregiver reporting can be highly subjective and vary greatly within a study, compromising data. This study sought to elucidate how caregiver-reported questions on stool frequency are answered before and after a short training, and how training impacts caregivers' understanding and responses. Methods: A digital survey was issued for 90 days in the US (n=121) and 30 days in Mexico (n=88), targeting respondents with children ≤4 years of age. Respondents were asked a question in two formats, first without a line of training text and second with one. The question set was as follows: "If your baby had stool in his/her diaper and you changed the diaper and 10 min later there was more stool in the diaper, how many stools would you report this as?", followed by the same question beginning with "If you were given the instruction that IF there are at least 5 minutes in between stools, then it counts as two (2) stools…". Four response items were provided for both questions: 1) 2 stools, 2) 1 stool, 3) it depends on how much stool was in the first versus the second diaper, 4) there is not enough information to answer the question. Response frequencies between questions were compared. Results: Responses to the question without training varied in the US, with 69% selecting "2 stools", 11% selecting "1 stool", 14% selecting "it depends on how much stool was in the first versus the second diaper", and 7% selecting "there is not enough information to answer the question"; in Mexico, respondents selected these options at 9%, 78%, 13%, and 0%, respectively. 
After training, however, responses consolidated: in the US, 85% of respondents selected "2 stools", an increase in those selecting the correct answer, and in Mexico, 84% of respondents selected the correct response, likewise an increase over the pre-training result. Conclusions: Caregiver-reported outcomes are critical for infant growth and tolerability studies; however, they can be highly subjective, with high response variability in the absence of guidance. Training is critical to standardize caregivers' understanding of how to answer questions accurately and thereby provide an accurate dataset. Keywords: infant nutrition, clinical trial optimization, stool reporting, decentralized clinical trials
Procedia PDF Downloads 96
383 Preparation of Indium Tin Oxide Nanoparticle-Modified 3-Aminopropyltrimethoxysilane-Functionalized Indium Tin Oxide Electrode for Electrochemical Sulfide Detection
Authors: Md. Abdul Aziz
Abstract:
The sulfide ion is water-soluble, highly corrosive, toxic, and harmful to human beings, so knowing the exact concentration of sulfide in water is very important. However, the existing detection and quantification methods have several shortcomings, such as high cost, low sensitivity, and massive instrumentation; consequently, the development of a novel sulfide sensor is relevant. Electrochemical methods have gained enormous popularity due to vast improvements in technique and instrumentation, portability, low cost, rapid analysis, and simplicity of design. Successful field application of electrochemical devices still requires substantial improvement, which depends on the physical, chemical, and electrochemical properties of the working electrode. Working electrodes made of bulk gold (Au) and platinum (Pt) are quite common, being very robust and endowed with good electrocatalytic properties; high cost and electrode poisoning, however, have so far hindered their practical application in many industries. To overcome these obstacles, we developed a sulfide sensor based on an indium tin oxide nanoparticle (ITONP)-modified ITO electrode. Various methods of preparing ITONP-modified ITO were tested. Drop-drying of aqueous ITONPs on 3-aminopropyltrimethoxysilane-functionalized ITO (APTMS/ITO) was found to be the best method on the basis of voltammetric analysis of the sulfide ion. ITONP-modified APTMS/ITO (ITONP/APTMS/ITO) yielded much better electrocatalytic properties toward sulfide electro-oxidation than did bare ITO or APTMS/ITO electrodes. The ITONPs and the ITONP-modified ITO were characterized using transmission electron microscopy and field emission scanning electron microscopy, respectively. After optimization of the inert electrolyte type and pH, the ITONP/APTMS/ITO detector's amperometrically and chronocoulometrically determined limits of detection for sulfide in aqueous solution were 3.0 µM and 0.90 µM, respectively. 
ITONP/APTMS/ITO electrodes displayed reproducible performance, were highly stable, and were not susceptible to interference by common contaminants. Thus, the developed electrode can be considered a promising tool for sensing sulfide. Keywords: amperometry, chronocoulometry, electrocatalytic properties, ITO-nanoparticle-modified ITO, sulfide sensor
Procedia PDF Downloads 131
382 Ligandless Extraction and Determination of Trace Amounts of Lead in Pomegranate, Zucchini and Lettuce Samples after Dispersive Liquid-Liquid Microextraction with Ultrasonic Bath and Optimization of Extraction Condition with RSM Design
Authors: Fariba Tadayon, Elmira Hassanlou, Hasan Bagheri, Mostafa Jafarian
Abstract:
Heavy metals are released into water, plants, soil, and food by natural and human activities. Lead is toxic to the human body and may cause serious problems even at low concentrations, since it can have several adverse effects on human health. Therefore, the determination of lead in different samples is an important procedure in studies of environmental pollution. In this work, an ultrasonic-assisted, ionic-liquid-based dispersive liquid-liquid microextraction (UA-IL-DLLME) procedure for the determination of lead in zucchini, pomegranate, and lettuce was established and developed using a flame atomic absorption spectrometer (FAAS). For the UA-IL-DLLME procedure, 10 mL of sample solution containing Pb2+ was adjusted to pH=5 in a glass test tube with a conical bottom; then, 120 μL of 1-hexyl-3-methylimidazolium hexafluorophosphate (CMIM)(PF6) was rapidly injected into the sample solution with a microsyringe. The resulting cloudy mixture was treated ultrasonically for 5 min, the two phases were separated by centrifugation for 5 min at 3000 rpm, the IL phase was diluted with 1 mL of ethanol, and the analytes were determined by FAAS. The effects of different experimental parameters in the extraction step, including ionic liquid volume, sonication time, and pH, were studied and optimized simultaneously using Response Surface Methodology (RSM) with a central composite design (CCD). The optimal conditions were an ionic liquid volume of 120 μL, a sonication time of 5 min, and pH=5. The calibration curve for the FAAS determination of lead was linear over 0.1-4 ppm with R2=0.992. Under optimized conditions, the limit of detection (LOD) for lead was 0.062 μg/mL, the enrichment factor (EF) was 93, and the relative standard deviation (RSD) for lead was 2.29%. The lead levels in pomegranate, zucchini, and lettuce were 2.88 μg/g, 1.54 μg/g, and 2.18 μg/g, respectively. 
Therefore, this method has been successfully applied to the analysis of lead content in different food samples by FAAS. Keywords: dispersive liquid-liquid microextraction, central composite design, food samples, flame atomic absorption spectrometry
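The figures of merit quoted above (LOD, EF, RSD) follow standard analytical-chemistry definitions. A minimal sketch, assuming the common 3σ convention for the LOD and the slope-ratio definition of the enrichment factor; all numeric inputs below are hypothetical, chosen only to show the shapes of the calculations:

```python
import statistics

def limit_of_detection(slope, blank_sd):
    # Widely used 3-sigma convention: LOD = 3 * s_blank / calibration slope
    return 3 * blank_sd / slope

def enrichment_factor(slope_with_preconc, slope_without):
    # Ratio of calibration slopes with and without the microextraction step
    return slope_with_preconc / slope_without

def rsd_percent(replicates):
    # Relative standard deviation of replicate measurements, in percent
    return 100 * statistics.stdev(replicates) / statistics.mean(replicates)

# Hypothetical numbers (not from the study)
lod = limit_of_detection(slope=0.5, blank_sd=0.0103)                # about 0.062
ef = enrichment_factor(slope_with_preconc=46.5, slope_without=0.5)  # 93.0
rsd = rsd_percent([1.00, 1.02, 0.98])                               # about 2.0 %
```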
Procedia PDF Downloads 283
381 Fire Safety Assessment of At-Risk Groups
Authors: Naser Kazemi Eilaki, Carolyn Ahmer, Ilona Heldal, Bjarne Christian Hagen
Abstract:
Older people and people with disabilities are recognized as at-risk groups when it comes to egress from a hazard zone to a safe place. A disability can negatively influence escape time, which becomes even more important when people from this target group live alone. This research addresses the fire safety of such people's buildings by means of probabilistic methods, modeling the egress of the target group from a hazardous zone to a safe zone. A common type of detached house with a prevalent floor plan was chosen for the safety analysis, and a limit state function was developed from a timeline evacuation model based on a two-zone smoke development model. An analytical computer model (B-RISK) is used to simulate smoke development. Since most of the parameters in the fire development model are uncertain, an appropriate probability distribution function was assigned to each nondeterministic variable. To assess the safety and reliability of the at-risk groups, the fire safety index method was chosen to define the probability of failure (casualties) and the safety index (beta index). An improved harmony search metaheuristic optimization algorithm was used to determine the beta index. A sensitivity analysis identified the most important and effective parameters for the fire safety of the at-risk group. Results showed that the area of openings and the distances to egress exits are the most important building parameters, and that safety improves with increasing dimensions of the occupant space. Fire growth is more critical than other parameters in a home without detection and extinguishing systems, but in a home equipped with these facilities, it is less important. 
The type of disability has a great effect on the safety level of people living in the same home layout, and people with visual impairment face a higher risk of being trapped than those with movement disabilities. Keywords: fire safety, at-risk groups, zone model, egress time, uncertainty
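The probability of failure and beta index used above can be illustrated with a crude Monte Carlo version of the limit state g = ASET − RSET (available minus required safe egress time). The distributions below are assumptions for illustration only; the study itself derives times from B-RISK smoke modeling and computes the beta index with an improved harmony search, not plain sampling:

```python
import random
from statistics import NormalDist

random.seed(1)

# Assumed egress-time distributions (seconds), for illustration only
def sample_aset():
    return random.gauss(300.0, 40.0)   # available safe egress time from smoke model

def sample_rset():
    return random.gauss(220.0, 50.0)   # required egress time for at-risk occupants

# Limit state g = ASET - RSET; failure (possible casualty) when g < 0
N = 100_000
failures = sum(1 for _ in range(N) if sample_aset() - sample_rset() < 0)
pf = failures / N
beta = -NormalDist().inv_cdf(pf)  # safety (beta) index from the failure probability
```

With these assumed normals, g is itself normal with mean 80 s, so pf lands near 0.11 and beta near 1.25.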
Procedia PDF Downloads 103
380 Regional Flood Frequency Analysis in Narmada Basin: A Case Study
Authors: Ankit Shah, R. K. Shrivastava
Abstract:
Floods and droughts are the two main features of hydrology that affect human life. Floods are natural disasters that cause millions of rupees' worth of damage each year in India and across the world, destroying life and property. An accurate estimate of flood damage potential is a key element of an effective, nationwide flood damage abatement program. Moreover, the increase in water demand due to population, industrial, and agricultural growth shows that water, though a renewable resource, cannot be taken for granted: its use must be optimized according to circumstances and conditions, and it must be harnessed through the construction of hydraulic structures. For the safe and proper functioning of hydraulic structures, the flood magnitude and its impact must be predicted. Hydraulic structures play a key role in harnessing flood water, which in turn allows the safe and maximal use of the water available. Hydraulic structures are mainly constructed at ungauged sites. Floods can be estimated by two approaches: generation of unit hydrographs and flood frequency analysis. In this study, regional flood frequency analysis is employed. There are many methods for regional flood frequency analysis, such as the Index Flood Method, the Natural Environment Research Council (NERC) methods, and the Multiple Regression Method; however, none of them can be considered universal for every situation and location. The Narmada basin, located in Central India, is drained by many tributaries, most of which are ungauged, so it is very difficult to estimate floods on these tributaries and in the main river. In this study, Artificial Neural Networks (ANNs) and the Multiple Regression Method are used to determine regional flood frequency. 
The annual peak flood data of 20 gauging sites in the Narmada Basin are used in the present study to determine the regional flood relationships. Homogeneity of the considered sites is determined using the Index Flood Method. The flood relationships obtained by the two methods are compared, and the ANN is found to be more reliable than the Multiple Regression Method for the present study area. Keywords: artificial neural network, index flood method, multi-layer perceptrons, multiple regression, Narmada basin, regional flood frequency
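The Index Flood Method referenced above scales a site's mean annual flood (the "index flood") by a dimensionless regional growth factor shared by all homogeneous sites. A minimal sketch with a hypothetical growth curve; the factors and the 850 m3/s index flood are illustrative, not values from the study:

```python
# Hypothetical dimensionless regional growth curve q(T); under the index-flood
# assumption, all sites in a homogeneous region share it.
growth_curve = {2: 0.90, 10: 1.60, 50: 2.30, 100: 2.70}  # assumed values

def design_flood(index_flood_m3s, return_period_years):
    """T-year flood = site index flood (mean annual flood) x regional factor q(T)."""
    return index_flood_m3s * growth_curve[return_period_years]

# e.g. a gauged tributary with a mean annual flood of 850 m3/s (illustrative)
q100 = design_flood(850.0, 100)
```

The ANN and multiple-regression approaches in the study instead relate catchment characteristics to flood quantiles directly; the index-flood computation is the homogeneity baseline they are compared against.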
Procedia PDF Downloads 419
379 Digital Transformation: Actionable Insights to Optimize the Building Performance
Authors: Jovian Cheung, Thomas Kwok, Victor Wong
Abstract:
Buildings are entwined with smart city developments. Building performance relies heavily on electrical and mechanical (E&M) systems and services, which account for about 40 percent of global energy use. By bringing technological advances and energy- and operation-efficiency initiatives into buildings, people can raise building performance and enhance the sustainability of the built environment in their daily lives. Digital transformation in buildings is a profound development that allows a city to leverage the changes and opportunities of digital technologies. To optimize building performance, an intelligent power quality and energy management system was developed to transform data into actions. The system is formed by interfacing and integrating legacy metering and Internet of Things technologies in the building and applying big data techniques. It provides the operation and energy profile of a building together with actionable insights, enabling building performance optimization by raising awareness of E&M services and energy consumption, predicting the operation of E&M systems, benchmarking building performance, and prioritizing asset and energy management opportunities. The intelligent power quality and energy management system comprises four elements: the Integrated Building Performance Map, the Building Performance Dashboard, Power Quality Analysis, and Energy Performance Analysis. It predicts the operation sequence of E&M systems in response to the built environment and building activities. The system collects the live operating conditions of E&M systems over time to identify abnormal system performance, predict failure trends, and alert users before system failure occurs. The actionable insights collected can also be used to enhance future system designs. 
This paper illustrates how the intelligent power quality and energy management system provides an operation and energy profile to optimize building performance, along with actionable insights to revitalize an existing building into a smart building. The system drives building performance optimization and supports Hong Kong's development into an admirable smart city. Keywords: intelligent buildings, internet of things technologies, big data analytics, predictive operation and maintenance, building performance
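Identifying abnormal E&M system performance from live meter streams, as described above, can be sketched with a simple trailing-window z-score rule. This is an illustrative stand-in; the paper does not specify its detection algorithm, and the readings below are hypothetical:

```python
from collections import deque
from statistics import mean, stdev

def detect_anomalies(readings, window=24, threshold=3.0):
    """Flag readings deviating more than `threshold` sigmas from the trailing window."""
    history = deque(maxlen=window)
    flagged = []
    for i, x in enumerate(readings):
        if len(history) == window:
            mu, sd = mean(history), stdev(history)
            if sd > 0 and abs(x - mu) / sd > threshold:
                flagged.append(i)
        history.append(x)  # the reading joins the baseline after being checked
    return flagged

# Hypothetical hourly kWh readings: stable load, then a sudden spike
meter = [100, 101] * 12 + [150]
alerts = detect_anomalies(meter)  # flags the spike at index 24
```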
Procedia PDF Downloads 157
378 Real-Time Data Stream Partitioning over a Sliding Window in Real-Time Spatial Big Data
Authors: Sana Hamdi, Emna Bouazizi, Sami Faiz
Abstract:
In recent years, real-time spatial applications, like location-aware services and traffic monitoring, have become more and more important. Such applications result in dynamic environments where data as well as queries are continuously moving, generating a tremendous amount of real-time spatial data every day. The growth of data volume seems to outpace the advance of our computing infrastructure. For instance, in real-time spatial Big Data, users expect to receive the results of each query within a short time period regardless of the system's load, but with a huge amount of real-time spatial data generated, system performance degrades rapidly, especially in overload situations. To solve this problem, we propose data partitioning as an optimization technique. Traditional horizontal and vertical partitioning can increase system performance and simplify data management, but they remain insufficient for real-time spatial Big Data: they cannot handle real-time and stream queries efficiently. Thus, in this paper, we propose a novel data partitioning approach for real-time spatial Big Data named VPA-RTSBD (Vertical Partitioning Approach for Real-Time Spatial Big Data). This contribution is an implementation of the Matching algorithm for traditional vertical partitioning. First, the optimal attribute sequence is found using the Matching algorithm. Then, we propose a new cost model for database partitioning that keeps the data amount of each partition balanced and provides parallel execution guarantees for the most frequent queries. VPA-RTSBD aims to obtain a real-time partitioning scheme that can deal with stream data. It improves query execution performance by maximizing the degree of parallel execution, which improves QoS (Quality of Service) in real-time spatial Big Data, especially with a huge volume of stream data. 
The performance of our contribution is evaluated via simulation experiments. The results show that the proposed algorithm is both efficient and scalable and that it outperforms comparable algorithms. Keywords: real-time spatial big data, quality of service, vertical partitioning, horizontal partitioning, matching algorithm, hamming distance, stream query
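The role of Hamming distance in vertical partitioning can be illustrated by grouping attributes whose query-usage vectors are close, so that attributes read together land in the same fragment. A minimal sketch with a hypothetical usage matrix and a greedy grouping rule; this is the underlying idea, not the paper's Matching algorithm or cost model:

```python
# Hypothetical usage matrix: for each attribute, which of four workload
# queries read it (1 = read, 0 = not read).
usage = {
    "id":    [1, 1, 1, 1],
    "x":     [1, 1, 0, 0],
    "y":     [1, 1, 0, 0],
    "speed": [0, 0, 1, 1],
    "ts":    [0, 0, 1, 1],
}

def hamming(u, v):
    """Number of positions where two usage vectors differ."""
    return sum(a != b for a, b in zip(u, v))

def cluster_attributes(usage, max_dist=1):
    # Greedy grouping: an attribute joins a fragment if its usage vector is
    # within max_dist of every current member; otherwise it starts a new one.
    groups = []
    for attr in usage:
        for g in groups:
            if all(hamming(usage[attr], usage[m]) <= max_dist for m in g):
                g.append(attr)
                break
        else:
            groups.append([attr])
    return groups

fragments = cluster_attributes(usage)  # [['id'], ['x', 'y'], ['speed', 'ts']]
```

Here `x`/`y` and `speed`/`ts` are co-accessed and end up co-located, so the two frequent query groups each touch only one non-key fragment.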
Procedia PDF Downloads 157
377 Exploration of Slow-Traffic System Strategies for New Urban Areas Under the Integration of Industry and City - Taking Qianfeng District of Guang'an City as an Example
Authors: Qikai Guan
Abstract:
With the deepening of China's urbanization, urban industrial development has entered a new period. As urban industrial functions become increasingly compound and diversified, urban planning has shifted from single-purpose industrial space arrangement and functional design toward upgrading the urban structure and serving people's diversified needs. As an important part of urban activity space, slow-traffic (pedestrian and cycling) space is of great significance in alleviating urban traffic congestion, improving residents' travel experience, and enhancing urban ecological space. This paper therefore takes the slow-traffic system, viewed through the lens of industry-city integration, as its starting point. It sorts out the development needs of the city in the process of industry-city integration, analyzes the characteristics of the site, examines the compatibility between the layout of the new industrial zone and the urban slow-traffic system, and integrates the design concepts. Drawing on an analysis and summary of domestic and international experience, construction ideas are proposed. Finally, planning strategy optimizations are proposed in the following aspects: industrial layout, urban vitality, ecological pattern, regional characteristics, and landscape image. In terms of specific design, on the one hand, a regional slow-traffic network is built, with a diversified design strategy for the industry-oriented, multi-functional composite central area that realizes the coexistence of pedestrian-oriented and multiple transportation modes, essentially covers public facilities, and enhances urban vitality. 
On the other hand, the design improves the landscape ecosystem, creating a healthy, diversified, and livable "superline" landscape system that supports the construction of the "green core" of the central city and improves residents' travel experience. Keywords: industry-city integration, slow-moving system, public space, functional integration
Procedia PDF Downloads 10
376 Chemical Synthesis, Characterization and Dose Optimization of Chitosan-Based Nanoparticles of MCPA for Management of Broad-Leaved Weeds (Chenopodium album, Lathyrus aphaca, Angalis arvensis and Melilotus indica) of Wheat
Authors: Muhammad Ather Nadeem, Bilal Ahmad Khan, Tasawer Abbas
Abstract:
Nanoherbicides use nanotechnology to enhance the delivery of biological or chemical herbicides through combinations of nanomaterials. The aim of this research was to examine the efficacy of chitosan nanoparticles containing the herbicide MCPA as a potential eco-friendly alternative for weed control in wheat crops. Scanning electron microscopy (SEM), X-ray diffraction (XRD), Fourier transform infrared spectroscopy (FT-IR), and ultraviolet absorbance were used to analyze the developed nanoparticles. SEM analysis indicated an average particle size of 35 nm, with particles forming clusters with a porous structure. Both fluroxypyr + MCPA nanoparticle formulations exhibited maximal absorption peaks at a wavelength of 320 nm. The fluroxypyr + MCPA compound has a strong XRD peak at a 2θ value of 30.55°, corresponding to the 78 plane of the anatase phase. The weeds Chenopodium album, Lathyrus aphaca, Anagallis arvensis, and Melilotus indica were sprayed with the nanoparticles at the three- to four-leaf stage. Seven doses were used: D0 (untreated check), D1 (recommended dose of conventional herbicide), D2 (recommended dose of the nano-herbicide, NPs-H), D3 (NPs-H at a 5-fold lower dose), D4 (NPs-H at a 10-fold lower dose), D5 (NPs-H at a 15-fold lower dose), and D6 (NPs-H at a 20-fold lower dose). The chitosan-based MCPA nanoparticles applied at the recommended dose of the conventional herbicide resulted in complete kill and visual injury, with 100% mortality. The 5-fold lower dose produced the lowest plant height (3.95 cm), chlorophyll content (5.63%), dry biomass (0.10 g), and fresh biomass (0.33 g) in the broad-leaved weeds of wheat. At a 10-fold lower dose, the herbicide nanoparticles had an impact comparable to the recommended dose of conventional herbicide. 
Nano-herbicides have the potential to improve the efficiency of standard herbicides by increasing stability and lowering toxicity. Keywords: mortality, visual injury, chlorophyll contents, chitosan-based nanoparticles
Procedia PDF Downloads 65
375 The Analysis of Drill Bit Optimization by the Application of New Electric Impulse Technology in Shallow Water Absheron Peninsula
Authors: Ayshan Gurbanova
Abstract:
Although the drill bit is the smallest part of the bottom hole assembly and accounts for only 10% to 15% of total expenses, it is the first piece of equipment in contact with the formation itself. It is therefore consequential to choose the appropriate type and dimension of drill bit, which prevents many problems by avoiding repeated tripping procedures. Advances in technology now offer benefits in terms of operating time, energy, expenditure, and power. With the intention of applying the method in Azerbaijan, the Shallow Water Absheron Peninsula field is suggested, where the mainland lies 15 km from the wildcat well named "NKX01", at a water depth of 22 m. In 2015 and 2016, 2D and 3D seismic surveys were conducted in the contract area as well as at onshore shallow-water locations. To provide a clear picture, soil stability, possible submersible hazard scenarios, geohazard, and bathymetry surveys were also carried out. From the seismic analysis results, the exact locations of the exploration wells were determined, and measurement decisions were made to divide the area into three productive zones. As for the method itself, Electric Impulse Technology (EIT) is based on discharging electrical energy to break down the rock. Put simply, very high voltages are generated over nanosecond timescales and sent into the rock through electrodes. The two electrodes, one high-voltage and one grounded, are placed on the formation, which may be submerged in liquid. With this design, it is easier to drill a horizontal well owing to the loose contact with the formation. 
There is also no bit wear, since no combustion or mechanical power is involved. In terms of energy, conventional drilling consumes about 1000 J/cm3, whereas EIT requires between 100 and 200 J/cm3. Finally, test analysis showed that EIT achieves a rate of penetration (ROP) of more than 2 m/hr sustained over 15 days. Taking everything into consideration, comparison of the data analysis shows that this method is highly applicable to the fields of Azerbaijan. Keywords: drilling, drill bit cost, efficiency, cost
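The quoted specific energies translate directly into energy per metre drilled for a given bit diameter. A quick check, where only the J/cm3 figures come from the abstract; the 8.5-inch (0.2159 m) bit diameter and the use of the EIT range's midpoint are assumptions:

```python
import math

# Specific energies from the text: ~1000 J/cm3 for conventional rotary
# drilling versus 100-200 J/cm3 for electric impulse technology (EIT).
def energy_per_metre(bit_diameter_m, specific_energy_j_per_cm3):
    area_cm2 = math.pi * (bit_diameter_m * 100.0 / 2.0) ** 2  # borehole cross-section
    return area_cm2 * 100.0 * specific_energy_j_per_cm3       # J per metre drilled

conventional = energy_per_metre(0.2159, 1000.0)
eit = energy_per_metre(0.2159, 150.0)  # midpoint of the quoted EIT range
ratio = conventional / eit             # EIT uses several times less energy per metre
```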
Procedia PDF Downloads 73
374 Anaerobic Co-digestion of the Halophyte Salicornia Ramosissima and Pig Manure in Lab-Scale Batch and Semi-continuous Stirred Tank Reactors: Biomethane Production and Reactor Performance
Authors: Aadila Cayenne, Hinrich Uellendahl
Abstract:
Optimization of the anaerobic digestion (AD) process of halophytic plants is essential, as the biomass contains a high salt content that can inhibit the AD process. Anaerobic co-digestion together with manure can mitigate the inhibitory effects of saline biomass by diluting the salt concentration, establishing favorable conditions for the microbial consortia of the AD process. The present laboratory study investigated the co-digestion of S. ramosissima (Sram) and pig manure (PM) in batch and semi-continuous stirred tank reactors (CSTR) under mesophilic (38 °C) conditions. The 0.5 L batch reactor experiments covered mono- and co-digestion of Sram:PM at different volatile solid (VS)-based ratios (0:100, 15:85, 25:75, 35:65, 50:50, 100:0) with an inoculum-to-substrate (I/S) ratio of 2. Two 5 L CSTR systems (R1 and R2) were operated for 133 days, with a feed of PM in the control reactor (R1) and a co-digestion feed with increasing Sram VS ratios (Sram:PM of 15:85, 25:75, 35:65) in reactor R2, at an organic loading rate (OLR) of 2 gVS/L/d and a hydraulic retention time (HRT) of 20 days. After a start-up phase of 8 weeks for both reactors with PM feed alone, the halophyte biomass Sram was added to the feed of R2 in an increasing ratio of 15 – 35 %VS Sram over an 11-week period. Process performance was monitored by pH, total solids (TS), VS, total nitrogen (TN), ammonium-nitrogen (NH4-N), volatile fatty acids (VFA), and biomethane production. In the batch experiments, biomethane yields of 423, 418, 392, 365, 315, and 214 mL-CH4/gVS were achieved for mixtures of 0:100, 15:85, 25:75, 35:65, 50:50, and 100:0 %VS Sram:PM, respectively. In the semi-continuous reactor processes, the average biomethane yields were 235, 387, and 365 mL-CH4/gVS for the co-digestion feed ratios in R2 of 15:85, 25:75, and 35:65 %VS Sram:PM, respectively.
The methane yield of PM alone in R1 averaged 260, 388, and 446 mL-CH4/gVS over the corresponding phases. Accordingly, in the continuous AD process, the methane yield attributable to the halophyte Sram was highest, at 386 mL-CH4/gVS, in the co-digestion ratio of 25:75 %VS Sram:PM, and significantly lower at 15:85 %VS Sram:PM (100 mL-CH4/gVS) and at 35:65 %VS Sram:PM (214 mL-CH4/gVS). The co-digestion process showed no signs of inhibition at 2 – 4 g/L NH4-N, 3.5 – 4.5 g/L TN, and total VFA of 0.45 – 2.6 g/L (based on acetic, propionic, butyric, and valeric acid). This study demonstrates that a stable co-digestion process of S. ramosissima and pig manure can be achieved with a feed of 25 %VS Sram at an HRT of 20 d and an OLR of 2 gVS/L/d.
Keywords: anaerobic co-digestion, biomethane production, halophytes, pig manure, salicornia ramosissima
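The Sram-specific yields quoted above are consistent, to within rounding of the reported averages, with assuming that the mixture yield combines the two substrates linearly by VS fraction. A minimal sketch of that back-calculation (the linear-mixing assumption is ours, not stated in the abstract):

```python
def substrate_specific_yield(y_mix: float, y_manure: float, f_substrate: float) -> float:
    """Back-calculate the methane yield attributable to the co-substrate from the
    mixture yield, assuming yields combine linearly by VS fraction:
        y_mix = f_substrate * y_substrate + (1 - f_substrate) * y_manure
    """
    return (y_mix - (1 - f_substrate) * y_manure) / f_substrate

# Reported averages (mL-CH4/gVS): R2 = co-digestion mixture, R1 = manure-only control
phases = {0.15: (235, 260), 0.25: (387, 388), 0.35: (365, 446)}
for f, (y_mix, y_pm) in phases.items():
    y_sram = substrate_specific_yield(y_mix, y_pm, f)
    print(f"{int(f * 100)} %VS Sram -> {y_sram:.0f} mL-CH4/gVS attributable to Sram")
```

The 25:75 phase gives 384 mL-CH4/gVS against the reported 386, the small discrepancy presumably reflecting rounding in the reported phase averages.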
Procedia PDF Downloads 152
373 Impact of CYP3A5 Polymorphism on Tacrolimus to Predict the Optimal Initial Dose Requirements in South Indian Renal Transplant Recipients
Authors: S. Sreeja, Radhakrishnan R. Nair, Noble Gracious, Sreeja S. Nair, M. Radhakrishna Pillai
Abstract:
Background: Tacrolimus is a potent immunosuppressant used clinically for the long-term prevention of rejection in liver and kidney transplant recipients, though dose optimization remains poorly managed. So far, no such study has been carried out on South Indian kidney transplant patients. The objective of this study is to evaluate the potential influence of a functional polymorphism in the CYP3A5*3 gene on the tacrolimus concentration/dose ratio in South Indian renal transplant patients. Materials and Methods: Twenty-five renal transplant recipients receiving tacrolimus were enrolled in this study. Their body weight, drug dosage, and therapeutic tacrolimus concentration were recorded. All patients were on a standard immunosuppressive regime of tacrolimus-mycophenolate mofetil along with steroids, at a starting dose of tacrolimus 0.1 mg/kg/day. CYP3A5 genotyping was performed by PCR followed by RFLP. Confirmation of the RFLP analysis and of the nucleotide sequence variation in the CYP3A5*3 gene was obtained by direct sequencing using a validated automated genetic analyzer. Results: A significant association was found between the tacrolimus concentration per dose/kg/day and the CYP3A5 (A6986G) polymorphism in the study population. The CYP3A5 *1/*1, *1/*3, and *3/*3 genotypes were detected in 5 (20%), 5 (20%), and 15 (60%) of the 25 graft recipients, respectively. The CYP3A5*3 genotype was found to be a good predictor of the tacrolimus concentration/dose ratio in kidney transplant recipients. A significantly higher level/dose (L/D) ratio was observed among non-expressors, 9.483 ng/mL (4.5-14.1), compared with expressors, 5.154 ng/mL (4.42-6.5), of CYP3A5. Acute rejection episodes were significantly more frequent in CYP3A5*1 homozygotes than in patients with CYP3A5*1/*3 and CYP3A5*3/*3 genotypes (40% versus 20% and 13%, respectively). The dose-normalized tacrolimus concentration (ng/mL per mg/kg) was significantly lower in patients carrying the CYP3A5*1/*3 polymorphism.
Conclusion: This is the first study to extensively determine the effect of the CYP3A5*3 genetic polymorphism on tacrolimus pharmacokinetics in South Indian renal transplant recipients; it also shows that the majority of our patients carry the mutant allele (A6986G) of the CYP3A5*3 gene. Identification of the CYP3A5 polymorphism prior to transplantation could help determine the appropriate initial dosage of tacrolimus for each patient.
Keywords: kidney transplant patients, CYP3A5 genotype, tacrolimus, RFLP
Procedia PDF Downloads 301
372 Computational Pipeline for Lynch Syndrome Detection: Integrating Alignment, Variant Calling, and Annotations
Authors: Rofida Gamal, Mostafa Mohammed, Mariam Adel, Marwa Gamal, Marwa kamal, Ayat Saber, Maha Mamdouh, Amira Emad, Mai Ramadan
Abstract:
Lynch Syndrome is an inherited genetic condition associated with an increased risk of colorectal and other cancers. Detecting Lynch Syndrome in individuals is crucial for early intervention and preventive measures. This study proposes a computational pipeline for Lynch Syndrome detection that integrates alignment, variant calling, and annotation. The pipeline leverages popular tools such as FastQC, Trimmomatic, BWA, bcftools, and ANNOVAR to process the input FASTQ file, perform quality trimming, align reads to the reference genome, call variants, and annotate them. The computational pipeline was applied to a dataset of Lynch Syndrome cases, and its performance was evaluated. The quality-check step ensured the integrity of the sequencing data, while the trimming process removed low-quality bases and adaptors. In the alignment step, the reads were mapped to the reference genome, and the subsequent variant calling step identified potential genetic variants. The annotation step provided functional insights into the detected variants, including their effects on known Lynch Syndrome-associated genes. The results obtained from the pipeline revealed Lynch Syndrome-related positions in the genome, providing valuable information for further investigation and clinical decision-making. The pipeline's effectiveness was demonstrated through its ability to streamline the analysis workflow and identify potential genetic markers associated with Lynch Syndrome. The computational pipeline thus presents a comprehensive and efficient approach to Lynch Syndrome detection, contributing to early diagnosis and intervention. The modularity and flexibility of the pipeline enable customization and adaptation to various datasets and research settings.
Further optimization and validation will be necessary to enhance performance and applicability across diverse populations.
Keywords: Lynch Syndrome, computational pipeline, alignment, variant calling, annotation, genetic markers
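The workflow described above can be sketched as a sequence of shell command strings. This is an illustrative sketch only: the file names are invented, the tool flags shown are plausible but should be checked against each tool's documentation, and a samtools sort step is added as the implied SAM-to-BAM intermediate the abstract does not mention.

```python
def build_pipeline_commands(sample: str, ref: str = "hg38.fa") -> list[str]:
    """Return the pipeline as shell command strings (not executed here).
    Flags are illustrative; consult each tool's docs before real use."""
    r1, r2 = f"{sample}_R1.fastq.gz", f"{sample}_R2.fastq.gz"
    t1, t2 = f"{sample}_R1.trim.fastq.gz", f"{sample}_R2.trim.fastq.gz"
    return [
        f"fastqc {r1} {r2}",                                          # 1. quality check
        f"trimmomatic PE {r1} {r2} {t1} /dev/null {t2} /dev/null "
        "SLIDINGWINDOW:4:20 MINLEN:36",                               # 2. quality trimming
        f"bwa mem {ref} {t1} {t2} > {sample}.sam",                    # 3. alignment
        f"samtools sort -o {sample}.bam {sample}.sam",                # implied intermediate
        f"bcftools mpileup -f {ref} {sample}.bam | "
        f"bcftools call -mv -o {sample}.vcf",                         # 4. variant calling
        f"table_annovar.pl {sample}.vcf humandb/ -buildver hg38 "
        f"-out {sample} -protocol refGene -operation g -vcfinput",    # 5. annotation
    ]

for cmd in build_pipeline_commands("patient01"):
    print(cmd)
```

Building commands as data, rather than running them inline, keeps the pipeline inspectable and easy to adapt to other references or cohorts.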
Procedia PDF Downloads 76
371 Studies on Biojetfuel Obtained from Vegetable Oil: Process Characteristics, Engine Performance and Their Comparison with Mineral Jetfuel
Authors: F. Murilo T. Luna, Vanessa F. Oliveira, Alysson Rocha, Expedito J. S. Parente, Andre V. Bueno, Matheus C. M. Farias, Celio L. Cavalcante Jr.
Abstract:
Aviation jetfuel used in aircraft gas-turbine engines is customarily obtained from the kerosene distillation fraction of petroleum (150-275 °C). Mineral jetfuel consists of a hydrocarbon mixture containing paraffins, naphthenes, and aromatics, with low olefin content. In order to ensure safety, jetfuels must meet several stringent requirements, such as high energy density, low risk of explosion, physicochemical stability, and low pour point. In this context, aviation fuels obtained from biofeedstocks (which have been coined 'biojetfuels') must be usable as 'drop-in' fuels, since adaptations to aircraft engines are undesirable and would raise concerns about operational reliability. Thus, potential aviation biofuels must present the same composition and physicochemical properties as conventional jetfuel. Among the potential feedstocks for aviation biofuel, babaçu oil, extracted from a palm tree found extensively in some regions of Brazil, contains substantial quantities of short-chain saturated fatty acids and may be an interesting choice for biojetfuel production. In this study, biojetfuel was synthesized through homogeneous transesterification of babaçu oil with methanol, and its properties were compared with those of petroleum-based jetfuel through measurements of oxidative stability, physicochemical properties, and low-temperature properties. After the transesterification reactions and the subsequent decantation/washing procedures, the methyl esters were purified by molecular distillation under high vacuum at different temperatures. The results indicate a significant improvement in the oxidative stability and pour point of the products compared to the fresh oil. After optimization of the operational conditions, potential biojetfuel samples were obtained, consisting mainly of C8 esters and showing a low pour point and high oxidative stability.
Jet engine tests are being conducted in an automated test bed equipped with pollutant emission analysers to study the operational performance of the biojetfuel obtained and to compare it with a commercial mineral jetfuel.
Keywords: biojetfuel, babaçu oil, oxidative stability, engine tests
Procedia PDF Downloads 259
370 Treatment of Municipal Wastewater by Means of Uv-Assisted Irradiation Technologies: Fouling Studies and Optimization of Operational Parameters
Authors: Tooba Aslam, Efthalia Chatzisymeon
Abstract:
UV-assisted irradiation technologies are well established for water and wastewater treatment. UVC treatments are widely used at large scale, while UVA irradiation has more often been applied in combination with a catalyst (e.g. TiO₂ or FeSO₄) in smaller-scale systems. A technical issue of these systems is the formation of fouling on the quartz sleeves that house the lamps. This fouling can prevent complete irradiation, reducing the efficiency of the process. This paper investigates the effects of operational parameters, such as the type of wastewater, irradiation source, H₂O₂ addition, and water pH, on fouling formation and, ultimately, on the treatment of municipal wastewater. Batch experiments were performed at lab scale while monitoring water quality parameters including COD, TS, TSS, TDS, temperature, pH, hardness, alkalinity, turbidity, TOC, UV transmission, UV₂₅₄ absorbance, and metal concentrations. The residence time of the wastewater in the reactor was 5 days, in order to observe any fouling formation on the quartz surface. Over this period, chemical oxygen demand (COD) decreased by 30% during photolysis (UVA) and by 59% during photo-catalysis (UVA/Fe/H₂O₂). More fouling formed with iron-rich and phosphorus-rich wastewater, the highest fouling rate developing with the phosphorus-rich wastewater, followed by the iron-rich wastewater. Photo-catalysis (UVA/Fe/H₂O₂) achieved a higher removal efficiency than photolysis (UVA); this was attributed to the Photo-Fenton reaction initiated under these operational conditions. Scanning electron microscope (SEM) measurements of the fouling formed on the quartz sleeves showed that the particles vary in size, shape, and structure: some have more distinct structures, are generally larger, and are less compact than the others.
Energy-dispersive X-ray spectroscopy (EDX) results showed that the major elements present in the fouling cake were iron, phosphorus, and calcium. In conclusion, iron-rich wastewaters are more suitable for UV-assisted treatment, since fouling formation on quartz sleeves can be minimized by the formation of oxidizing agents, such as hydroxyl radicals, during treatment.
Keywords: advanced oxidation processes, Photo-Fenton treatment, photo-catalysis, wastewater treatment
Procedia PDF Downloads 77
369 An Electrochemical Enzymatic Biosensor Based on Multi-Walled Carbon Nanotubes and Poly (3,4 Ethylenedioxythiophene) Nanocomposites for Organophosphate Detection
Authors: Navpreet Kaur, Himkusha Thakur, Nirmal Prabhakar
Abstract:
One of the most controversial issues in crop production is the use of organophosphate insecticides. Many reports show that, among the broad range of pesticides, organophosphate (OP) insecticides are the ones mainly involved in acute and chronic poisoning cases. OP detection is therefore of crucial importance for health protection and for food and environmental safety. In our study, a nanocomposite of poly(3,4-ethylenedioxythiophene) (PEDOT) and multi-walled carbon nanotubes (MWCNTs) was deposited electrochemically onto the surface of fluorine-doped tin oxide (FTO) sheets for the analysis of the OP malathion. The MWCNTs were -COOH functionalized for covalent binding with the amino groups of the AChE enzyme. The PEDOT-MWCNT films exhibited excellent conductivity, enabled fast transfer kinetics, and provided a favourable, biocompatible microenvironment for AChE, allowing significant malathion detection. The prepared biosensors were characterized by Fourier-transform infrared spectrometry (FTIR), field emission scanning electron microscopy (FE-SEM), and electrochemical studies. Optimization studies were carried out for several parameters, including pH (7.5), AChE concentration (50 mU), substrate concentration (0.3 mM), and inhibition time (10 min). Substrate kinetics were studied to determine the Michaelis-Menten constant. The detection limit for malathion was calculated to be 1 fM within the linear range of 1 fM to 1 µM. The activity of the inhibited AChE enzyme was restored to 98% of its original value by treatment with 2-pyridine aldoxime methiodide (2-PAM) (5 mM) for 11 min. The oxime 2-PAM is able to remove malathion from the active site of AChE by means of a trans-esterification reaction. The storage stability and reusability of the prepared biosensor were observed to be 30 days and seven uses, respectively. The application of the developed biosensor was also evaluated on a spiked lettuce sample.
Recoveries of malathion from the spiked lettuce sample ranged between 96% and 98%. The low detection limit makes the developed biosensor a reliable, sensitive, and low-cost platform.
Keywords: PEDOT-MWCNT, malathion, organophosphates, acetylcholinesterase, biosensor, oxime (2-PAM)
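The Michaelis-Menten constant mentioned above is commonly estimated from a substrate-rate series. A minimal sketch using the classical Lineweaver-Burk linearization on synthetic, noise-free data (the Km and Vmax values are invented for illustration, not taken from the study):

```python
def lineweaver_burk_fit(substrate, rate):
    """Estimate Vmax and Km by least squares on the double-reciprocal plot:
        1/v = (Km/Vmax) * (1/S) + 1/Vmax
    """
    xs = [1 / s for s in substrate]
    ys = [1 / v for v in rate]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    intercept = my - slope * mx
    vmax = 1 / intercept
    km = slope * vmax
    return km, vmax

# Synthetic data from a hypothetical enzyme with Km = 0.15 mM, Vmax = 1.0
km_true, vmax_true = 0.15, 1.0
S = [0.05, 0.1, 0.2, 0.4, 0.8]                      # substrate concentrations (mM)
v = [vmax_true * s / (km_true + s) for s in S]      # Michaelis-Menten rates
km_est, vmax_est = lineweaver_burk_fit(S, v)
print(f"Km ~ {km_est:.3f} mM, Vmax ~ {vmax_est:.3f}")
```

On noise-free data the linearization recovers the true parameters exactly; with real measurements, nonlinear regression is usually preferred because the reciprocal transform amplifies error at low substrate concentrations.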
Procedia PDF Downloads 445
368 Research on Configuration of Large-Scale Linear Array Feeder Truss Parabolic Cylindrical Antenna of Satellite
Authors: Chen Chuanzhi, Guo Yunyun
Abstract:
The large linear-array-fed parabolic cylindrical antenna of a satellite is capable of large-area line focusing; it can form multi-directional beam clusters simultaneously in a given azimuth and elevation plane, respond quickly to different orientations and directions over a wide frequency range, aim in both frequency and direction, and combine power in space. The large-diameter parabolic cylindrical antenna has therefore become one of the new development directions for spaceborne antennas. Limited by the size of the rocket fairing, a large-diameter spaceborne antenna must have low mass and a deployment function: once in orbit, the antenna is expanded, deployed, and stabilized. However, few existing structural types can be used to construct large cylindrical shell structures, which greatly limits the development and application of such antennas. Aiming at high structural efficiency, the geometrical characteristics of parabolic cylinders and the topological mapping law from mechanism to expandable truss are studied, and the basic configuration of a deployable truss with a cylindrical shell is constructed. A modular truss parabolic cylindrical antenna is then designed in this paper. The antenna has the characteristics of a stable structure, high reflecting-surface forming precision, a controllable deployment process, a high storage rate, and light weight. On the basis of the overall configuration theory and optimization method, the structural stiffness of the modular truss parabolic cylindrical antenna is improved, and the bearing density and impact resistance of the support structure are improved based on the optimal internal tension distribution method for reflector forming.
Finally, a truss-type cylindrical deployable support structure with a high stowed-to-deployed ratio, high stiffness, controllable deployment, and low mass is successfully developed, laying the foundation for the application of large-diameter parabolic cylindrical antennas on satellites.
Keywords: linear array feed antenna, truss type, parabolic cylindrical antenna, spaceborne antenna
Procedia PDF Downloads 158
367 Assessment of Radiation Protection Measures in Diagnosis and Treatment: A Critical Review
Authors: Buhari Samaila, Buhari Maidamma
Abstract:
Background: The use of ionizing radiation in medical diagnostics and treatment is indispensable for accurate imaging and effective cancer therapies. However, radiation exposure carries inherent risks, necessitating strict protection measures to safeguard both patients and healthcare workers. This review critically examines existing radiation protection measures in diagnostic radiology and radiotherapy, highlighting technological advancements, regulatory frameworks, and challenges. Objective: The objective of this review is to critically evaluate the effectiveness of current radiation protection measures in diagnostic and therapeutic radiology, focusing on minimizing patient and staff exposure to ionizing radiation while ensuring optimal clinical outcomes, and to propose future directions for improvement. Method: A comprehensive literature review was conducted, covering scientific studies, regulatory guidelines, and international standards on radiation protection in both diagnostic radiology and radiotherapy. Emphasis was placed on ALARA principles, dose optimization techniques, and protective measures for both patients and healthcare workers. Results: Radiation protection measures in diagnostic radiology include the use of shielding devices, minimizing exposure times, and employing advanced imaging technologies to reduce dose. In radiotherapy, accurate treatment planning and image-guided techniques enhance patient safety, while shielding and dose monitoring safeguard healthcare personnel. Challenges such as limited infrastructure in low-income settings and gaps in healthcare worker training persist, impacting the overall efficacy of protection strategies. Conclusion: While significant advancements have been made in radiation protection, challenges remain in optimizing safety, especially in resource-limited settings.
Future efforts should focus on enhancing training, investing in advanced technologies, and strengthening regulatory compliance to ensure continuous improvement in radiation safety practices.
Keywords: radiation protection, diagnostic radiology, radiotherapy, ALARA, patient safety, healthcare worker safety
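The shielding measures discussed above rest on exponential attenuation of the beam through the shield. A minimal sketch of that calculation (the attenuation coefficient and dose-rate values are illustrative placeholders, and buildup factors are neglected):

```python
import math

def transmitted_dose_rate(d0: float, mu_per_cm: float, thickness_cm: float) -> float:
    """Narrow-beam attenuation: D = D0 * exp(-mu * x). Buildup is neglected,
    so this underestimates the transmitted dose for broad beams."""
    return d0 * math.exp(-mu_per_cm * thickness_cm)

def required_thickness(d0: float, target: float, mu_per_cm: float) -> float:
    """Shield thickness that reduces the dose rate from d0 to the target value."""
    return math.log(d0 / target) / mu_per_cm

# Illustrative numbers only: mu ~ 24 /cm is in the right ballpark for lead
# at diagnostic X-ray energies (~80 keV)
behind_2mm_pb = transmitted_dose_rate(100.0, 24.0, 0.2)
print(f"{behind_2mm_pb:.2f} dose-rate units remain behind 2 mm of Pb")
```

Time and distance are the other two ALARA levers; the inverse-square law handles distance in the same back-of-the-envelope fashion.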
Procedia PDF Downloads 24
366 Energy Reclamation in Micro Cavitating Flow
Authors: Morteza Ghorbani, Reza Ghorbani
Abstract:
The cavitation phenomenon has attracted much attention in mechanical and biomedical technologies. Despite the simplicity and mostly low cost of the devices generating cavitation bubbles, the physics behind the generation and collapse of these bubbles, particularly at the micro/nano scale, is still not well understood. In the chemical industry, micro/nano bubble generation is expected to be applicable to the development of porous materials such as microcellular plastic foams. Moreover, it has been demonstrated that the presence of micro/nano bubbles on a surface reduces the adsorption of proteins; the bubbles could thus act as antifouling agents. Micro and nano bubbles have also been employed in water purification, froth flotation, and even in sonofusion, which has not been completely validated. Small bubbles can also be generated using micro-scale hydrodynamic cavitation. In this study, in contrast to the studies available in the literature, we propose a novel micro-scale approach utilizing the energy produced during the interaction between a thin aluminum plate and the spray driven by a hydrodynamically cavitating flow. As size decreases, cavitation effects become significant. It is clearly shown that, with the aid of hydrodynamic cavitation generated inside micro/mini-channels, and by optimizing the distance between the tip of the microchannel configuration and the solid surface, surface temperatures can be increased up to 50 °C under the conditions of this study. The temperature rise on surfaces near the collapsing small bubbles was exploited for small-scale energy harvesting, such that miniature, cost-effective, and environmentally friendly energy-harvesting devices can be developed. Such devices require no external power or moving parts, in contrast to common energy-harvesting devices such as those involving piezoelectric materials and micro engines.
Energy harvesting from thermal energy has been widely exploited to achieve energy savings and clean technologies. We propose a cost-effective and environmentally friendly solution for growing individual energy needs through the energy application of cavitating flows. The necessary power for consumer devices, such as cell phones and laptops, could be provided using this approach. Thus, this approach has the potential to solve personal energy needs in an inexpensive and environmentally friendly manner and could trigger a paradigm shift in energy harvesting.
Keywords: cavitation, energy, harvesting, micro scale
Procedia PDF Downloads 191
365 Dynamic Analysis and Clutch Adaptive Prefill in Dual Clutch Transmission
Authors: Bin Zhou, Tongli Lu, Jianwu Zhang, Hongtao Hao
Abstract:
Dual clutch transmissions (DCT) offer high gearshift comfort. Hydraulic multi-disk clutches are the key components of a DCT; their engagement determines shifting comfort. The prefill of the clutches establishes an initial engagement at which the clutches just contact each other but do not yet transmit substantial torque from the engine; this initial engagement point is called the touch point. Open-loop control is typically implemented for the clutch prefill, but many uncertainties, such as oil temperature and clutch wear, significantly affect the prefill, potentially resulting in an inappropriate touch point. Underfill causes engine flare during gearshifts, while overfill causes clutch tie-up, both of which deteriorate the shifting comfort of the DCT. It is therefore important to give the clutch prefill an adaptive capability with respect to these uncertainties. In this paper, a dynamic model of the hydraulic actuator system, including the variable force solenoid and the clutch piston, is presented and validated by a test. Subsequently, the open-loop clutch prefill is simulated based on the proposed model. Two control parameters of the prefill, the fast fill time and the stable fill pressure, are analyzed with regard to their impact on the prefill. The former strongly affects the pressure transients, while the latter directly influences the touch point. Finally, an adaptive method is proposed for the clutch prefill during gear shifting, in which the clutch fill control parameters are adjusted adaptively and continually. The adaptive strategy changes the stable fill pressure according to the clutch slip observed during a gearshift, improving the next prefill process: the stable fill pressure is increased as a function of the clutch slip in case of underfill, and decreased by a constant value in case of overfill. The entire strategy is designed in Simulink/Stateflow and implemented, with optimization, in the transmission control unit.
Road vehicle test results have shown that the strategy realizes its adaptive capability and improves shifting comfort.
Keywords: clutch prefill, clutch slip, dual clutch transmission, touch point, variable force solenoid
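The adaptation rule described above — raise the stable fill pressure in proportion to observed slip on underfill, back off by a fixed step on overfill — can be sketched as a single update function. All numeric values (gains, thresholds, pressure limits, units) are hypothetical placeholders, not the paper's calibration:

```python
def update_stable_fill_pressure(p_fill: float, slip_rpm: float, tied_up: bool,
                                slip_target: float = 50.0, gain: float = 0.002,
                                overfill_step: float = 5.0,
                                p_min: float = 80.0, p_max: float = 300.0) -> float:
    """One adaptation step after a gearshift (illustrative values, kPa / rpm).

    Underfill shows up as excess clutch slip (engine flare) -> raise the
    stable fill pressure in proportion to the slip error. Overfill shows up
    as clutch tie-up (torque transmitted too early) -> back off by a fixed step.
    """
    if tied_up:                       # overfill detected
        p_fill -= overfill_step
    elif slip_rpm > slip_target:      # underfill detected (engine flare)
        p_fill += gain * (slip_rpm - slip_target)
    return min(max(p_fill, p_min), p_max)   # stay within actuator limits

p = 150.0
p = update_stable_fill_pressure(p, slip_rpm=400.0, tied_up=False)
print(p)  # 150.7 -> pressure nudged up after an underfill event
```

Running the update once per shift, rather than closing a fast loop, matches the paper's idea of improving the *next* prefill from the evidence of the last one.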
Procedia PDF Downloads 308
364 An Approach to Determine Proper Daylighting Design Solution Considering Visual Comfort and Lighting Energy Efficiency in High-Rise Residential Building
Authors: Zehra Aybike Kılıç, Alpin Köknel Yener
Abstract:
Daylight is a powerful driver for improving human health, enhancing productivity, and creating sustainable solutions by minimizing energy demand. A proper daylighting system not only provides a pleasant and attractive visual and thermal environment but also reduces lighting energy consumption and heating/cooling loads through the optimization of aperture size, glazing type, and solar control strategy, which are the major design parameters of daylighting system design. Particularly in high-rise buildings, where large openings that allow maximum daylight and view out are preferred, evaluating daylight performance with respect to the major building-envelope design parameters becomes crucial for ensuring occupants' comfort and improving energy efficiency. Moreover, examining the daylighting design of high-rise residential buildings is increasingly necessary, considering the share of residential buildings in the construction sector, the duration of occupation, and changing space requirements. This study aims to identify a proper daylighting design solution, in terms of window area, glazing type, and solar control strategy, for a high-rise residential building with regard to visual comfort and lighting energy efficiency. Dynamic simulations are conducted with DIVA for Rhino, version 4.1.0.12. The results are evaluated using Daylight Autonomy (DA) to quantify daylight availability in the space and Daylight Glare Probability (DGP) to describe the visual comfort conditions related to glare. Furthermore, the lighting energy consumption of each scenario is analyzed to determine the optimum solution that reduces lighting energy consumption while optimizing daylight performance.
The results revealed that reducing lighting energy consumption while providing visual comfort conditions in buildings is only possible with proper daylighting design decisions regarding glazing type, transparency ratio, and solar control devices.
Keywords: daylighting, glazing type, lighting energy efficiency, residential building, solar control strategy, visual comfort
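The Daylight Autonomy metric used above is simply the fraction of occupied hours in which daylight alone meets a target illuminance (300 lux is a common threshold). A minimal sketch on hypothetical sensor-point data (in practice tools such as DIVA evaluate this over annual occupied hours, and DGP is computed separately from rendered luminance fields):

```python
def daylight_autonomy(illuminance_lux, threshold: float = 300.0) -> float:
    """DA: fraction of occupied hours where daylight alone meets the target illuminance."""
    met = sum(1 for e in illuminance_lux if e >= threshold)
    return met / len(illuminance_lux)

# Hypothetical hourly daylight illuminance at one sensor point, one occupied day
hours = [120, 250, 310, 480, 620, 700, 650, 500, 340, 210]
print(f"DA300 = {daylight_autonomy(hours):.0%}")  # 7 of 10 hours meet 300 lux -> 70%
```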
Procedia PDF Downloads 176
363 Astronomical Object Classification
Authors: Alina Muradyan, Lina Babayan, Arsen Nanyan, Gohar Galstyan, Vigen Khachatryan
Abstract:
We present a photometric method for identifying stars, galaxies, and quasars in multi-color surveys, which uses a library of more than 65,000 color templates for comparison with observed objects. The method aims to extract the information content of object colors in a statistically correct way, and performs classification as well as redshift estimation for galaxies and quasars in a unified approach based on the same probability density functions. For the redshift estimation, we employ an advanced version of the Minimum Error Variance estimator, which determines the redshift error from the redshift-dependent probability density function itself. The method was originally developed for the Calar Alto Deep Imaging Survey (CADIS), but is now used in a wide variety of survey projects. We checked its performance by spectroscopy of CADIS objects, where the method provides high reliability (6 errors among 151 objects with R < 24), especially for quasar selection, and redshifts accurate within σz ≈ 0.03 for galaxies and σz ≈ 0.1 for quasars. For an optimization of future survey efforts, a few model surveys are compared, designed to use the same total amount of telescope time but different sets of broad-band and medium-band filters. Their performance is investigated by Monte Carlo simulations as well as by analytic evaluation in terms of classification and redshift estimation. If photon noise were the only error source, broad-band and medium-band surveys should perform equally well, as long as they provide the same spectral coverage. In practice, medium-band surveys show superior performance due to their higher tolerance for calibration errors and cosmic variance. Finally, we discuss the relevance of color calibration and derive important conclusions for the issues of library design and choice of filters.
The calibration accuracy poses strong constraints on an accurate classification, which are most critical for surveys with few, broad, and deeply exposed filters, but less severe for surveys with many, narrow, and less deep filters.
Keywords: VO, ArVO, DFBS, FITS, image processing, data analysis
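The core of template-based classification can be illustrated with a deliberately simplified version of the scheme described above: pick the library template that best matches the observed colors. The real method works with full probability density functions rather than a single minimum-chi-square match, and the toy three-color templates below are invented for illustration:

```python
def classify(colors, errors, templates):
    """Pick the template class minimizing chi-square against the observed colors.

    colors/errors: observed color indices and their 1-sigma uncertainties;
    templates: mapping of class name -> template colors (toy values here).
    """
    def chi2(model):
        return sum(((c - m) / e) ** 2 for c, m, e in zip(colors, model, errors))
    return min(templates, key=lambda name: chi2(templates[name]))

templates = {              # toy 3-color library entries (hypothetical)
    "star":   [0.6, 0.3, 0.1],
    "galaxy": [1.2, 0.8, 0.5],
    "quasar": [0.2, 0.4, 0.9],
}
obs, err = [1.1, 0.75, 0.55], [0.05, 0.05, 0.05]
print(classify(obs, err, templates))  # nearest template in chi-square wins
```

Extending this sketch toward the paper's approach would mean turning chi-square values into likelihoods, marginalizing over redshifted template families, and reading the redshift error off the resulting probability density.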
Procedia PDF Downloads 80
362 Fuzzy Decision Making to the Construction Project Management: Glass Facade Selection
Authors: Katarina Rogulj, Ivana Racetin, Jelena Kilic
Abstract:
In this study, a fuzzy logic approach (FLA) was developed for construction project management (CPM) under uncertainty and duality. The focus was on decision making in selecting the type of glass facade for a residential-commercial building in the main design. The adoption of fuzzy sets is capable of reflecting construction managers' degree of reliance on subjective judgments, so that the robustness of the system can be achieved. An α-cut method was utilized for discretizing the fuzzy sets in the FLA. This method can communicate all uncertain information in the optimization process, taking into account the values of this information. Furthermore, the FLA provides in-depth analyses of diverse policy scenarios related to the various economic aspects of valid decision making in construction projects. The developed approach is applied to CPM to demonstrate its applicability. By analyzing the materials of glass facades, variants were defined. The development of the FLA for CPM involved the relevant construction project stakeholders in defining the criteria used to evaluate each variant. A comparison of the glass facade variants was conducted using the fuzzy Decision-Making Trial and Evaluation Laboratory method (DEMATEL). In this way, a ranking of the variants according to their priority for inclusion in the main design is obtained. The concept was tested on a residential-commercial building in the city of Rijeka, Croatia. The newly developed methodology was then compared with the existing one. The aim of the research was to define an approach that will improve current judgments and decisions regarding the material selection of building facades, one of the most important architectural and engineering tasks in the main design. The advantage of the new methodology compared to the old one is that it includes the subjective side of managers' decisions, an inevitable factor in any decision making.
The proposed approach can help construction project managers to identify the desired type of glass facade according to their preferences and practical conditions, as well as facilitate in-depth analyses of tradeoffs between economic efficiency and architectural design.
Keywords: construction projects management, DEMATEL, fuzzy logic approach, glass façade selection
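As a small illustration of the α-cuts discretization mentioned in the abstract, the sketch below cuts a triangular fuzzy judgment into crisp intervals at several membership levels. The membership parameters and cut levels are illustrative assumptions, not values from the study.

```python
# Sketch: discretizing a triangular fuzzy judgment by the alpha-cuts method.
# The parameters (l, m, u) and the cut levels below are illustrative, not
# values taken from the study.

def alpha_cut(l, m, u, alpha):
    """Return the closed interval of a triangular fuzzy number (l, m, u)
    at membership level alpha in [0, 1]."""
    lower = l + alpha * (m - l)
    upper = u - alpha * (u - m)
    return (lower, upper)

# Discretize a manager's fuzzy cost judgment at several alpha levels.
cost = (2.0, 3.0, 5.0)          # pessimistic, most likely, optimistic
levels = [0.0, 0.25, 0.5, 0.75, 1.0]
cuts = {a: alpha_cut(*cost, a) for a in levels}

for a, (lo, hi) in cuts.items():
    print(f"alpha={a:.2f}: [{lo:.2f}, {hi:.2f}]")
# At alpha = 1.0 the interval collapses to the most likely value;
# at alpha = 0.0 it spans the full support of the fuzzy number.
```

Each interval carries the uncertain information at one reliability level, which is how the α-cuts method feeds the fuzzy judgments into the optimization.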
Procedia PDF Downloads 137
361 Inverse Saturable Absorption in Non-linear Amplifying Loop Mirror Mode-Locked Fiber Laser
Authors: Haobin Zheng, Xiang Zhang, Yong Shen, Hongxin Zou
Abstract:
The research focuses on mode-locked fiber lasers with a non-linear amplifying loop mirror (NALM). Although these lasers have shown potential, they are still limited to low repetition rates. The self-starting of mode-locking in NALM is influenced by the cross-phase modulation (XPM) effect, which has not been thoroughly studied. The aim of this study is two-fold: first, to overcome the difficulties associated with increasing the repetition rate in mode-locked fiber lasers with NALM; second, to analyze the influence of XPM on the self-starting of mode-locking. The power distributions of the two counterpropagating beams in the NALM and the differential non-linear phase shift (NPS) accumulations are calculated. The analysis is conducted from the perspective of NPS accumulation. The differential NPSs for continuous wave (CW) light and pulses in the fiber loop are compared to understand the inverse saturable absorption (ISA) mechanism during pulse formation in NALM. The study reveals a difference in differential NPSs between CW light and pulses in the fiber loop in NALM. This difference leads to an ISA mechanism, which has not been extensively studied in artificial saturable absorbers. The ISA in NALM provides an explanation for experimentally observed phenomena, such as active mode-locking initiation through tapping the fiber or fine-tuning the light polarization. These findings have important implications for optimizing the design of NALM and reducing the self-starting threshold of high-repetition-rate mode-locked fiber lasers. This study contributes to the theoretical understanding of NALM mode-locked fiber lasers by exploring the ISA mechanism and its impact on the self-starting of mode-locking. The research fills a gap in the existing knowledge regarding the XPM effect in NALM and its role in pulse formation, and provides insights into the ISA mechanism in NALM mode-locked fiber lasers and its role in the self-starting of mode-locking. 
The findings contribute to the optimization of NALM design and the reduction of the self-starting threshold, which are essential for achieving high-repetition-rate operation in fiber lasers. Further research in this area can lead to advancements in the field of mode-locked fiber lasers with NALM.
Keywords: inverse saturable absorption, NALM, mode-locking, non-linear phase shift
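The differential NPS picture described in the abstract can be sketched numerically. The following toy model (all parameter values are assumptions, not the paper's) integrates γ·P(z) for the two counterpropagating beams of an idealized lumped-gain NALM loop and reports their phase-shift difference.

```python
# Toy sketch of the differential nonlinear phase shift (NPS) between the two
# counterpropagating beams in a NALM. The nonlinear coefficient, loop length,
# gain, and input power are illustrative assumptions, not the paper's values.
import numpy as np

gamma = 3e-3     # nonlinear coefficient, 1/(W*m) (assumed)
L = 10.0         # loop length, m (assumed)
G = 10.0         # lumped amplifier power gain (assumed)
P_in = 0.01      # power entering the loop coupler, W (assumed)

n = 1000
dz = L / n       # midpoint-rule segments along the loop

# Clockwise beam: amplified at the loop entrance, so it carries G*P_in/2 over
# the whole loop. Counter-clockwise beam: amplified just before the exit, so
# it carries P_in/2 over the loop (lumped-gain idealization).
P_cw = np.full(n, G * P_in / 2)
P_ccw = np.full(n, P_in / 2)

# NPS of each beam is the path integral of gamma * P(z).
nps_cw = np.sum(gamma * P_cw) * dz
nps_ccw = np.sum(gamma * P_ccw) * dz
delta_nps = nps_cw - nps_ccw
print(f"differential NPS = {delta_nps:.2e} rad")
# A pulse with the same average power but a higher peak power accumulates a
# proportionally larger differential NPS than CW light, which is the
# asymmetry the NALM's artificial-saturable-absorber action relies on.
```

In this lumped model the difference reduces to γ·(G − 1)·(P_in/2)·L, so the gain asymmetry alone sets the scale of the differential phase shift.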
Procedia PDF Downloads 101
360 Bulk-Density and Lignocellulose Composition: Influence of Changing Lignocellulosic Composition on Bulk-Density during Anaerobic Digestion and Implication of Compacted Lignocellulose Bed on Mass Transfer
Authors: Aastha Paliwal, H. N. Chanakya, S. Dasappa
Abstract:
Lignocellulose, as an alternate feedstock for biogas production, has been an active area of research. However, lignocellulose poses many operational difficulties: widespread variation in the structural organization of the lignocellulosic matrix, variable amenability to degradation, and low bulk density, to name a few. Among these, the low bulk density of the lignocellulosic feedstock is crucial to process operation and optimization. Low bulk densities render the feedstock floating in conventional liquid/wet digesters. Low bulk densities also restrict the maximum achievable organic loading rate (OLR) in the reactor, decreasing the power density of the reactor. However, during digestion, lignocellulose undergoes very high compaction (up to 26 times the feeding density). The low feeding density first reduces the achievable OLR; the compaction during digestion then renders the reactor space underutilized and also imposes significant mass transfer limitations. The objective of this paper was to understand the effects of compacting lignocellulose on mass transfer and the influence of the loss of different components on the bulk density, and hence the structural integrity, of the digesting lignocellulosic feedstock. Ten different lignocellulosic feedstocks (monocots and dicots) were digested anaerobically in a fed-batch leach bed reactor, the solid-state stratified bed reactor (SSBR). Percolation rates of the recycled bio-digester liquid (BDL) were also measured during the reactor run period to understand the implication of compaction on mass transfer. After 95 days, in a destructive sampling, lignocellulosic feedstocks digested at different SRTs were investigated to quantify the weekly changes in bulk density and lignocellulosic composition. Further, the percolation rate data were also compared to the bulk density data. 
Results from the study indicate that losses of hemicellulose (r²=0.76), hot water extractives (r²=0.68), and oxalate extractives (r²=0.64) had the dominant influence on changing the structural integrity of the studied lignocellulose during anaerobic digestion. Further, the feeding bulk density of the lignocellulose can be maintained between 300-400 kg/m³ to achieve a higher OLR, while bulk densities of 440-500 kg/m³ incur significant mass transfer limitations in highly compacted beds of dicots.
Keywords: anaerobic digestion, bulk density, feed compaction, lignocellulose, lignocellulosic matrix, cellulose, hemicellulose, lignin, extractives, mass transfer
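The reported correlations (e.g. r² = 0.76 for hemicellulose loss) come from regressing the change in bulk density on the loss of each component. A minimal sketch of that computation, using made-up placeholder data rather than the study's measurements, might look like:

```python
# Sketch of the correlation analysis described above: fit a simple linear
# regression of bulk density on component loss and report r². The data
# points below are illustrative placeholders, not measurements from the study.
import numpy as np

def r_squared(x, y):
    """Coefficient of determination of a simple linear fit of y on x."""
    slope, intercept = np.polyfit(x, y, 1)
    y_hat = slope * x + intercept
    ss_res = float(np.sum((y - y_hat) ** 2))
    ss_tot = float(np.sum((y - np.mean(y)) ** 2))
    return 1.0 - ss_res / ss_tot

hemicellulose_loss = np.array([0.0, 5.0, 12.0, 18.0, 25.0])   # % of initial (assumed)
bulk_density = np.array([350.0, 390.0, 430.0, 470.0, 500.0])  # kg/m^3 (assumed)

print(f"r^2 = {r_squared(hemicellulose_loss, bulk_density):.2f}")
```

Repeating the fit for each component (hemicellulose, hot water extractives, oxalate extractives) and comparing the r² values identifies which loss dominates the structural change.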
Procedia PDF Downloads 168
359 Backwash Optimization for Drinking Water Treatment Biological Filters
Authors: Sarra K. Ikhlef, Onita Basu
Abstract:
Natural organic matter (NOM) removal efficiency in drinking water treatment biological filters can be highly influenced by backwashing conditions. Backwashing removes the accumulated biomass and particles in order to regenerate the biological filters' removal capacity and prevent excessive headloss buildup. A lab-scale system consisting of 3 biological filters was used in this study to examine the implications of different backwash strategies on biological filtration performance. The backwash procedures were evaluated based on their impacts on dissolved organic carbon (DOC) removal, biological filter biomass, backwash water volume usage, and particle removal. Results showed that under nutrient-limited conditions, the simultaneous use of air and water under collapse-pulsing conditions led to a DOC removal of 22%, which was significantly higher (p<0.05) than the 12% removal observed under water-only backwash conditions. Employing a bed expansion of 20% under nutrient-supplemented conditions, compared to a 30% reference bed expansion using the same water volume, led to similar DOC removals. On the other hand, utilizing a higher bed expansion (40%) led to significantly lower DOC removals (23%). Also, a backwash strategy that reduced the backwash water volume usage by about 20% resulted in DOC removals similar to those observed with the reference backwash. The backwash procedures investigated in this study showed no consistent impact on biological filter biomass concentrations as measured by the phospholipid and adenosine tri-phosphate (ATP) methods. Moreover, neither of these two analyses showed a direct correlation with DOC removal. On the other hand, dissolved oxygen (DO) uptake showed a direct correlation with DOC removal. The addition of the extended terminal subfluidization wash (ETSW) demonstrated no apparent impact on DOC removals. ETSW also successfully eliminated the filter ripening sequence (FRS). 
As a result, the additional water usage from implementing ETSW was compensated for by water savings after restart. Results from this study provide insight to researchers and water treatment utilities on how to better optimize the backwashing procedure with the goal of optimizing the overall biological filtration process.
Keywords: biological filtration, backwashing, collapse pulsing, ETSW
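The DOC removal percentages quoted above follow from the usual influent/effluent balance across a filter. A minimal helper, with illustrative concentrations rather than the study's data:

```python
# Helper mirroring how the DOC removal percentages above are computed:
# fractional removal of dissolved organic carbon across a biological filter.
# The influent/effluent values below are illustrative, not the study's data.

def doc_removal_percent(doc_in, doc_out):
    """Percent removal of DOC across a filter, given influent and effluent
    concentrations (e.g. in mg/L)."""
    if doc_in <= 0:
        raise ValueError("influent DOC must be positive")
    return 100.0 * (doc_in - doc_out) / doc_in

# e.g. an assumed influent of 3.2 mg/L and effluent of 2.5 mg/L
print(f"{doc_removal_percent(3.2, 2.5):.0f}% DOC removal")  # about 22% here
```

Comparing this percentage across backwash strategies (and pairing it with a significance test) is the basis of the evaluations reported in the abstract.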
Procedia PDF Downloads 273
358 Modelling of Solidification in a Latent Thermal Energy Storage with a Finned Tube Bundle Heat Exchanger Unit
Authors: Remo Waser, Simon Maranda, Anastasia Stamatiou, Ludger J. Fischer, Joerg Worlitschek
Abstract:
In latent heat storage, a phase change material (PCM) is used to store thermal energy. The heat transfer rate during solidification is limited and is considered a key challenge in the development of latent heat storage. Thus, finned heat exchangers (HEX) are often utilized to increase the heat transfer rate of the storage system. In this study, a new modeling approach for calculating the heat transfer rate in latent thermal energy storages with complex HEX geometries is presented. This model allows for an optimization of the HEX design in terms of cost and thermal performance of the system. Modeling solidification processes requires the calculation of time-dependent heat conduction with moving boundaries. Commonly used computational fluid dynamics (CFD) methods enable the analysis of heat transfer in complex HEX geometries. If applied to the entire storage, the drawback of this approach is the high computational effort due to the small time steps and fine computational grids required for accurate solutions. An alternative way to describe the process of solidification is the so-called temperature-based approach. In order to minimize the computational effort, a quasi-stationary assumption can be applied. This approach provides highly accurate predictions for tube heat exchangers. However, it shows unsatisfactory results for more complex geometries such as finned tube heat exchangers. The presented simulation model uses a temporal and spatial discretization of the heat exchanger tube. The spatial discretization is based on the smallest possible symmetric segment of the HEX. The heat flow in each segment is calculated using the finite volume method. Since the heat transfer fluid temperature can be derived using energy conservation equations, the boundary conditions at the inner tube wall are dynamically updated for each time step and segment. 
The model allows a prediction of the thermal performance of latent thermal energy storage systems using complex HEX geometries with considerably low computational effort.
Keywords: modelling of solidification, finned tube heat exchanger, latent thermal energy storage
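A hedged sketch of the kind of temperature-based finite-volume segment model described above, here using an enthalpy formulation of solidification in a single symmetric 1D segment with a cold wall on one side and a symmetry plane on the other. All material properties, grid settings, and boundary values are illustrative assumptions, not the paper's parameters.

```python
# Sketch: explicit finite-volume solidification of a PCM slab segment via the
# enthalpy method. Properties, grid, and boundary temperatures are assumed
# placeholder values, not taken from the study.
import numpy as np

rho, cp, k = 800.0, 2000.0, 0.2        # density, heat capacity, conductivity (assumed)
L_f = 200e3                            # latent heat of fusion, J/kg (assumed)
T_m, T_wall, T_init = 0.0, -10.0, 5.0  # melting, wall, initial temperatures (assumed)

N, length = 50, 0.01                   # finite volumes, segment thickness in m
dx = length / N
dt = 0.9 * rho * cp * dx ** 2 / (2.0 * k)  # explicit stability limit

# State: volumetric enthalpy per cell, sensible part referenced to T_m.
H = rho * cp * (T_init - T_m) * np.ones(N) + rho * L_f

def temperature(H):
    """Invert enthalpy to temperature; cells inside the melting range sit at T_m."""
    T = np.where(H < 0.0, T_m + H / (rho * cp), T_m)                      # fully solid
    return np.where(H > rho * L_f, T_m + (H - rho * L_f) / (rho * cp), T)  # fully liquid

for _ in range(20000):
    T = temperature(H)
    Tw = np.concatenate(([T_wall], T))
    q = -k * np.diff(Tw) / dx          # heat flux (+x direction) at cell left faces
    q = np.concatenate((q, [0.0]))     # adiabatic symmetry plane at the far face
    H += dt * (q[:-1] - q[1:]) / dx    # finite-volume energy balance per cell

solid_fraction = float(np.clip(1.0 - H / (rho * L_f), 0.0, 1.0).mean())
print(f"mean solid fraction after {20000 * dt:.0f} s: {solid_fraction:.2f}")
```

In the full segment model the wall temperature would not be fixed: it is updated each time step from the heat transfer fluid's energy balance, which is the dynamic boundary coupling the abstract describes.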
Procedia PDF Downloads 268