Search results for: adomian decomposition
78 Evaluation of Natural Waste Materials for Ammonia Removal in Biofilters
Authors: R. F. Vieira, D. Lopes, I. Baptista, S. A. Figueiredo, V. F. Domingues, R. Jorge, C. Delerue-Matos, O. M. Freitas
Abstract:
Odours are generated in municipal solid waste management plants as a result of the decomposition of organic matter, especially when anaerobic degradation occurs. Information was collected about the substances present, and their concentrations, in the atmosphere surrounding some management plants. The main components associated with these unpleasant odours were identified: ammonia, hydrogen sulfide and mercaptans. The first is the most common and the one that presents the highest concentrations, reaching values of 700 mg/m3. Biofiltration, which simultaneously involves biodegradation, absorption and adsorption processes, is a sustainable technology for the treatment of these odour emissions when a natural packing material is used. The packing material should ideally be cheap and durable, and allow maximum microbiological activity and adsorption/absorption. The presence of nutrients and water is required for biodegradation processes. Adsorption and absorption are enhanced by high specific surface area, high porosity and low density. The main purpose of this work is the exploitation of locally available natural waste materials as packing media: heather (Erica lusitanica), chestnut bur (from Castanea sativa), peach pits (from Prunus persica) and eucalyptus bark (from Eucalyptus globulus). Preliminary batch tests of ammonia removal were performed in order to select the most interesting materials for biofiltration, which were then characterized. The following physical and chemical parameters were evaluated: density, moisture, pH, buffer capacity and water retention capacity. Equilibrium isotherms were also determined and fitted to the Langmuir and Freundlich models. Both models fit the experimental results. Based both on the material's performance as an adsorbent and on its physical and chemical characteristics, eucalyptus bark was considered the best material. It presents a maximum adsorption capacity of 0.78±0.45 mol/kg for ammonia.
The results from its characterization are: 121 kg/m3 density, 9.8% moisture, pH equal to 5.7, buffer capacity of 0.370 mmol H+/kg of dry matter and water retention capacity of 1.4 g H2O/g of dry matter. The application of locally available natural materials, with little processing, in biofiltration is an economical and sustainable alternative that should be explored. Keywords: ammonia removal, biofiltration, natural materials, odour control
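The Langmuir fit mentioned in the abstract above can be sketched with the model's linearised form, Ce/qe = Ce/qmax + 1/(KL·qmax), where a linear regression of Ce/qe on Ce yields qmax from the slope and KL from the intercept. The data below are hypothetical values generated from the model itself (not the study's measurements), used only to check that the fit recovers the assumed parameters:

```python
# Linearised Langmuir isotherm fit: Ce/qe = Ce/qmax + 1/(KL*qmax).
# A least-squares line through (Ce, Ce/qe) gives qmax = 1/slope and
# KL = 1/(intercept*qmax). Data below are hypothetical, not from the study.

def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx
    return slope, my - slope * mx

def langmuir_fit(ce, qe):
    """Return (qmax, KL) from equilibrium concentrations and uptakes."""
    slope, intercept = linear_fit(ce, [c / q for c, q in zip(ce, qe)])
    qmax = 1.0 / slope
    kl = 1.0 / (intercept * qmax)
    return qmax, kl

# Synthetic equilibrium data generated from qmax = 0.8 mol/kg, KL = 2.0,
# so the fit should recover those two parameters.
qmax_true, kl_true = 0.8, 2.0
ce = [0.1, 0.5, 1.0, 2.0, 5.0]
qe = [qmax_true * kl_true * c / (1 + kl_true * c) for c in ce]
qmax, kl = langmuir_fit(ce, qe)
print(round(qmax, 3), round(kl, 3))  # -> 0.8 2.0
```

With noise-free synthetic data the linearised fit is exact; with real batch-test data one would compare this against a Freundlich fit (log qe vs. log Ce) to decide which model describes the material better.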
Procedia PDF Downloads 369
77 Composition Dependence of Ni 2p Core Level Shift in Fe1-xNix Alloys
Authors: Shakti S. Acharya, V. R. R. Medicherla, Rajeev Rawat, Komal Bapna, Deepnarayan Biswas, Khadija Ali, K. Maiti
Abstract:
The discovery of the invar effect in the Fe1-xNix alloy with 35% Ni concentration has stimulated enormous experimental and theoretical research. Elemental Fe and low-Ni-concentration Fe1-xNix alloys, which possess a body-centred cubic (bcc) crystal structure at ambient temperature and pressure, transform to a hexagonally close-packed (hcp) phase at around 13 GPa. Magnetic order was found to be absent at 11 K for the Fe92Ni8 alloy when subjected to a high pressure of 26 GPa. Density functional theory calculations predicted substantial hyperfine magnetic fields, but these were not observed in Mössbauer spectroscopy. The bulk modulus of fcc Fe1-xNix alloys with Ni concentrations above 35% is found to be independent of pressure. The magnetic moment of Fe is also found to be almost the same in these alloys from 4 to 10 GPa pressure. Fe1-xNix alloys exhibit a complex microstructure formed by a series of complex phase transformations, such as martensitic transformation, spinodal decomposition, ordering, mono-tectoid reaction and eutectoid reaction, at temperatures below 400°C. Despite the existence of several theoretical models, the field is still in its infancy, lacking full knowledge about the anomalous properties exhibited by these alloys. Fe1-xNix alloys have been prepared by arc melting the high-purity constituent metals in an argon ambient. These alloys were annealed at around 300°C in a vacuum-sealed quartz tube for two days to make the samples homogeneous. They were structurally characterized by x-ray diffraction and were found to exhibit a transition from bcc to fcc for x > 0.3. Ni 2p core levels of the alloys have been measured using high-resolution (0.45 eV) x-ray photoelectron spectroscopy. The Ni 2p core level shifts to lower binding energy with respect to that of pure Ni metal, giving rise to negative core level shifts (CLSs). Measured CLSs exhibit a linear dependence in the fcc region (x > 0.3) and were found to deviate slightly in the bcc region (x < 0.3).
The ESCA potential model fails to correlate CLSs with site potentials or charges in metallic alloys. CLSs in these alloys occur mainly due to a shift in the valence bands with composition, caused by intra-atomic charge redistribution. Keywords: arc melting, core level shift, ESCA potential model, valence band
Procedia PDF Downloads 380
76 Application of Recycled Paper Mill Sludge on the Growth of Khaya Senegalensis and Its Effect on Soil Properties, Nutrients and Heavy Metals
Authors: A. Rosazlin Abdullah, I. Che Fauziah, K. Wan Rasidah, A. B. Rosenani
Abstract:
The paper industry performs an essential role in the global economy. A study was conducted on paper mill sludge applied to Khaya senegalensis over a 1-year planting period at the University Agriculture Park, Puchong, Selangor, Malaysia, to determine the growth of Khaya senegalensis, soil properties and nutrient concentrations, and effects on the status of heavy metals. Paper mill sludge (PMS) and composted recycled paper mill sludge (RPMS) were used with different rates of nitrogen (0, 150, 300 and 600 kg ha-1) at a 1:1 ratio of recycled paper mill sludge (RPMS) to empty fruit bunch (EFB). The growth parameters were measured twice a month for 1 year. Plant nutrient and heavy metal uptake were determined. The paper mill sludge has the potential to be a supplementary N fertilizer as well as a soil amendment. The application of RPMS with N significantly contributed to the improvement in plant growth parameters such as plant height (4.24 m), basal diameter (10.30 cm) and total plant biomass, and improved soil physical and chemical properties. The pH, EC, available P and total C in soil varied among the treatments during the planting period. The treatments with raw and composted RPMS had higher pH values than those with inorganic fertilizer and the control. Nevertheless, no salinity problem was recorded during the planting period, and available P in soil treated with raw and composted RPMS was higher than in the control plots, which reflects the mineralization of organic P from the decomposition of pulp sludge. The weight of the free and occluded light fractions of carbon was significantly higher in the soils treated with raw and composted RPMS. The application of raw and composted RPMS gave significantly higher concentrations of heavy metals, but the total concentrations of heavy metals in the soils were below the critical values.
Hence, paper mill sludge can be successfully used as a soil amendment in acidic soil without any serious threat. The use of paper mill sludge to improve soil fertility through land application signifies a unique opportunity to recycle sludge back to the land and alleviate a potential waste management problem. Keywords: growth, heavy metals, nutrients uptake, production, waste management
Procedia PDF Downloads 368
75 Comparison of Iodine Density Quantification through Three Material Decomposition between Philips iQon Dual Layer Spectral CT Scanner and Siemens Somatom Force Dual Source Dual Energy CT Scanner: An in vitro Study
Authors: Jitendra Pratap, Jonathan Sivyer
Abstract:
Introduction: Dual energy/spectral CT scanning permits simultaneous acquisition of two x-ray spectral datasets and can complement radiological diagnosis by allowing tissue characterisation (e.g., uric acid vs. non-uric acid renal stones), enhancing structures (e.g. boosting the iodine signal to improve contrast resolution), and quantifying substances (e.g. iodine density). However, the latter has shown inconsistent results between the two main modes of dual energy scanning (i.e. dual source vs. dual layer). Therefore, the present study aimed to determine which technology is more accurate in quantifying iodine density. Methods: Twenty vials with known concentrations of iodine solution were made using Optiray 350 contrast media diluted in sterile water. The concentrations of iodine utilised ranged from 0.1 mg/ml to 1.0 mg/ml in 0.1 mg/ml increments and from 1.5 mg/ml to 4.5 mg/ml in 0.5 mg/ml increments, followed by further concentrations of 5.0 mg/ml, 7 mg/ml, 10 mg/ml and 15 mg/ml. The vials were scanned using the Dual Energy scan mode on a Siemens Somatom Force at 80kV/Sn150kV and 100kV/Sn150kV kilovoltage pairings. The same vials were scanned using the Spectral scan mode on a Philips iQon at 120 kVp and 140 kVp. The images were reconstructed at 5 mm thickness and 5 mm increment using the Br40 kernel on the Siemens Force and the B filter on the Philips iQon. Post-processing was performed on vendor-specific software: Siemens Syngo VIA (VB40) for the Dual Energy data and Philips Intellispace Portal (Ver. 12) for the Spectral data. For each vial and scan mode, the iodine concentration was measured by placing an ROI in the coronal plane. Intraclass correlation analysis was performed on both datasets. Results: The iodine concentrations were reproduced with a high degree of accuracy on the dual layer CT scanner. Although the dual source images showed a greater degree of deviation in measured iodine density for all vials, the dataset acquired at 80kV/Sn150kV had higher accuracy.
Conclusion: Spectral CT scanning by the dual layer technique has higher accuracy for quantitative measurements of iodine density compared to the dual source technique. Keywords: CT, iodine density, spectral, dual-energy
Procedia PDF Downloads 120
74 Cover Layer Evaluation in Soil Organic Matter of Mixing and Compressed Unsaturated
Authors: Nayara Torres B. Acioli, José Fernando T. Jucá
Abstract:
The uncontrolled emission of gases from urban waste embankments located near urban areas is a social and environmental problem common in Brazilian cities. Several environmental impacts, on local and global scales, may be generated by the contamination of atmospheric air with the biogas resulting from the decomposition of solid urban waste. In Brazil, small cities, with populations under 50,000 inhabitants, make up about 90% of all municipalities according to the 2011 IBGE census, and most landfill cover layers there are composed of pure clayey soil. Embankments covered with pure soil may retain up to 60% of the methane; the other 40% may be dispersed into the atmosphere. In the face of these figures, the oxidative cover layer deserves study, with a view to reducing the percentage released into the atmosphere by emitting, instead of methane, carbon dioxide, which is roughly 20 times less potent a greenhouse gas than methane. This paper presents the results of studies on the characteristics of the soil used for the oxidative cover layer of the experimental Solid Urban Residues (SUR) embankment built in Muribeca-PE, Brazil, supported by the Group of Solid Residues (GSR) at the Federal University of Pernambuco. Laboratory suction tests (determining the characteristic curve), granulometry and permeability tests, based on the existing Brazilian standard for these procedures, showed that soil with saturation above 85% exhibits dramatic drops in air permeability with small increments of water. Suction, as in the other tests, was studied by dividing the 60 cm oxidative cover layer into an upper half (0.1 m to 0.3 m) and a lower half (0.4 m to 0.6 m). The results also show the consequences of the leaching of fine materials in the five years since the completion of the embankment, which increased its permeability.
Concerning humidity, moisture is mostly retained in the upper half, which comprises the mixture, with a difference on the order of 8 percent between the upper and lower halves, and suction is lowest near the surface. These results reveal the efficiency of the oxidative cover layer in retaining rainwater. It also has a lower cost than other types of layer, making it more widely available as an alternative solution for the appropriate disposal of waste. Keywords: oxidative coverage layer, permeability, suction, saturation
Procedia PDF Downloads 290
73 Investigation of the EEG Signal Parameters during Epileptic Seizure Phases in Consequence to the Application of External Healing Therapy on Subjects
Authors: Karan Sharma, Ajay Kumar
Abstract:
Epileptic seizure is a disease in which electrical charge in the brain flows abruptly, resulting in abnormal activity by the subject. One percent of the total world population gets epileptic seizure attacks. Due to the abrupt flow of charge, EEG (electroencephalogram) waveforms change, and many spikes and sharp waves appear in the EEG signals. Detection of epileptic seizure by conventional methods is time-consuming, and many methods have evolved that detect it automatically. The initial part of this paper provides a review of techniques used to detect epileptic seizure automatically. The automatic detection is based on feature extraction and classification patterns. For better accuracy, decomposition of the signal is required before feature extraction. A number of parameters are calculated by researchers using different techniques, e.g. approximate entropy, sample entropy, fuzzy approximate entropy, intrinsic mode functions, cross-correlation etc., to discriminate between a normal signal and an epileptic seizure signal. The main objective of this review paper is to present the variations in the EEG signals at both stages, (i) interictal (recording between epileptic seizure attacks) and (ii) ictal (recording during the epileptic seizure), using the most appropriate methods of analysis to provide better healthcare diagnosis. This research paper then investigates the effects of a noninvasive healing therapy on subjects by studying their EEG signals using the latest signal processing techniques. The study has been conducted with Reiki as the healing technique, beneficial for restoring balance in cases of body-mind alterations associated with an epileptic seizure. Reiki is practiced around the world and is recommended in different health services as a treatment approach. Reiki is an energy medicine, specifically a biofield therapy developed in Japan in the early 20th century.
It is a system involving the laying on of hands to stimulate the body's natural energetic system. Earlier studies have shown an apparent connection between Reiki and the autonomic nervous system. The Reiki sessions are applied by an experienced therapist. EEG signals are measured at baseline, during the session and post-intervention to bring about effective epileptic seizure control or its elimination altogether. Keywords: EEG signal, Reiki, time consuming, epileptic seizure
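Among the entropy features the review above lists for discriminating normal from ictal EEG, sample entropy is the most common. A minimal pure-Python sketch on synthetic series (real EEG would first be decomposed into sub-bands; the two test signals here are illustrative, not recordings):

```python
# Sketch of sample entropy, SampEn(m, r) = -ln(A/B), where B counts pairs of
# m-length templates within tolerance r (Chebyshev distance) and A counts
# (m+1)-length pairs; self-matches are excluded. A regular signal scores lower
# (more predictable) than an irregular one. Signals below are synthetic.

import math

def sample_entropy(series, m=2, r=0.2):
    n = len(series)

    def count_matches(length):
        templates = [series[i:i + length] for i in range(n - length + 1)]
        hits = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(a - b) for a, b in zip(templates[i], templates[j])) <= r:
                    hits += 1
        return hits

    b = count_matches(m)
    a = count_matches(m + 1)
    if a == 0 or b == 0:
        return float("inf")  # undefined when too few template matches
    return -math.log(a / b)

regular = [0.0, 1.0] * 50                              # periodic "normal-like" toy
irregular = [(i * 7919 % 101) / 101 for i in range(100)]  # pseudo-random toy
print(sample_entropy(regular) < sample_entropy(irregular))  # -> True
```

In a seizure-detection pipeline, values like these would be computed per decomposed sub-band and passed to a classifier; the point of the sketch is only the feature itself.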
Procedia PDF Downloads 407
72 Historical Tree Height Growth Associated with Climate Change in Western North America
Authors: Yassine Messaoud, Gordon Nigh, Faouzi Messaoud, Han Chen
Abstract:
The effect of climate change on tree growth in boreal and temperate forests has received increased interest in the context of global warming. However, most studies were conducted in small areas and with a limited number of tree species. Here, we examined the height growth responses of seventeen tree species to climate change in western North America. 37009 stands from forest inventory databases in Canada and the USA, with varying establishment dates, were selected. Dominant and co-dominant trees from each stand were sampled to determine top tree height at a breast-height age of 50 years. Height was related to historical mean annual and summer temperatures, annual and summer Palmer Drought Severity Index, tree establishment date, slope, aspect, soil fertility as indicated by the rate of organic matter decomposition (carbon/nitrogen ratio), geographic location (latitude, longitude, and elevation), species range (coastal, interior, and both ranges), shade tolerance and leaf form (needle leaves, deciduous needle leaves, and broadleaves). Climate change had a mostly positive effect on tree height growth, and the model explained 62.4% of the height growth variance. Since 1880, the height growth increase was greater for coastal, highly shade-tolerant, and broadleaf species. Height growth increased more on steep slopes and on soils of high fertility. Greater height growth was mostly observed at the leading edge of species ranges and at higher elevations. Conversely, some species showed the opposite pattern, probably due to increased drought (coastal Mediterranean-climate areas), increased precipitation and cloudiness (Alaska and British Columbia), and the peculiar topography of western North America (higher latitudes paired with lower elevations and vice versa).
This study highlights the role of species ecological amplitude and traits, and of geographic location, as the main factors determining the growth response and its magnitude under recent global climate change. Keywords: height growth, global climate change, species range, species characteristics, species ecological amplitude, geographic locations, western North America
Procedia PDF Downloads 187
71 Multiscale Modelling of Textile Reinforced Concrete: A Literature Review
Authors: Anicet Dansou
Abstract:
Textile reinforced concrete (TRC) is increasingly used nowadays in various fields, in particular civil engineering, where it is mainly used for the reinforcement of damaged reinforced concrete structures. TRC is a composite material composed of multi- or uni-axial textile reinforcements coupled with a fine-grained cementitious matrix. The TRC composite is an alternative to the traditional fiber reinforced polymer (FRP) composite. It has good mechanical performance and better temperature stability, and it also better meets the criteria of sustainable development. TRCs are highly anisotropic composite materials with nonlinear hardening behavior; their macroscopic behavior depends on multi-scale mechanisms. The characterization of these materials through numerical simulation has been the subject of many studies. Since TRCs are multiscale materials by definition, numerical multi-scale approaches have emerged as one of the most suitable methods for their simulation. These aim to incorporate information pertaining to microscale constituent behavior, mesoscale behavior, and macroscale structural response within a unified model that enables rapid simulation of structures. The computational costs are hence significantly reduced compared to standard simulation at a fine scale. The fine-scale information can be implicitly introduced into the macroscale model: approaches of this type are called non-classical. A representative volume element is defined, and the fine-scale information is homogenized over it. Analytical and computational homogenization and nested mesh methods belong to these approaches. On the other hand, in classical approaches, the fine-scale information is explicitly introduced into the macroscale model. Such approaches include adaptive mesh refinement strategies, sub-modelling, domain decomposition, and multigrid methods. This research presents the main principles of numerical multiscale approaches.
Advantages and limitations are identified according to several criteria: the assumptions made (fidelity), the number of input parameters required, the calculation costs (efficiency), etc. A bibliographic study of recent results and advances, and of the scientific obstacles to be overcome in order to achieve an effective simulation of textile reinforced concrete in civil engineering, is presented. A comparative study is further carried out between several methods for the simulation of TRCs used for the structural reinforcement of reinforced concrete structures. Keywords: composites structures, multiscale methods, numerical modeling, textile reinforced concrete
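The simplest instance of the analytical homogenization mentioned among the non-classical approaches above is the pair of rule-of-mixtures bounds on the effective stiffness of a reinforcement/matrix representative volume element. A minimal sketch, with illustrative moduli that are assumptions, not TRC measurements:

```python
# Voigt (iso-strain, phases in parallel) and Reuss (iso-stress, phases in
# series) bounds on the effective axial modulus of a two-phase RVE. The true
# homogenized modulus of the composite lies between the two bounds.
# Numbers below are illustrative, not textile-reinforced-concrete data.

def voigt(e_f, e_m, v_f):
    """Upper bound: rule of mixtures."""
    return v_f * e_f + (1.0 - v_f) * e_m

def reuss(e_f, e_m, v_f):
    """Lower bound: inverse rule of mixtures."""
    return 1.0 / (v_f / e_f + (1.0 - v_f) / e_m)

e_fibre, e_matrix, vf = 72.0, 30.0, 0.05  # GPa, GPa, reinforcement volume fraction
upper = voigt(e_fibre, e_matrix, vf)
lower = reuss(e_fibre, e_matrix, vf)
print(f"{lower:.2f} <= E_eff <= {upper:.2f}")  # bounds bracket the true modulus
```

Computational homogenization replaces these closed-form bounds with a finite-element solve on the RVE, which is exactly the cost/fidelity trade-off the review's criteria address.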
Procedia PDF Downloads 109
70 The Extent of Virgin Olive-Oil Prices' Distribution Revealing the Behavior of Market Speculators
Authors: Fathi Abid, Bilel Kaffel
Abstract:
The olive tree, the olive harvest during the winter season and the production of olive oil, better known to professionals as the crushing operation, have long interested institutional traders such as olive-oil offices, private companies in the food industry refining and extracting pomace olive oil, and public and private export-import companies specializing in olive oil. The major problem facing producers of olive oil each winter campaign, contrary to what might be expected, is not whether the harvest will be good but whether the sale price will allow them to cover production costs and achieve a reasonable profit margin. These questions are entirely legitimate judging by the importance of the issue and the heavy complexity of the uncertainty and competition, made tougher by high levels of indebtedness and by the experience and expertise of speculators and producers whose objectives are sometimes conflicting. The aim of this paper is to study the formation mechanism of olive oil prices in order to learn about speculators' behavior and expectations in the market, how they contribute through their industry knowledge and financial alliances, and the size of the financial challenge that may be involved for them in building private information channels globally to take advantage. The methodology used in this paper is based on two stages: in the first stage, we study econometrically the formation mechanisms of the olive oil price in order to understand market participant behavior by implementing ARMA, SARMA, GARCH and stochastic diffusion process models; the second stage is devoted to prediction purposes, where we use a combined wavelet-ANN approach. Our main findings indicate that olive oil market participants interact with each other in a way that promotes the formation of stylized facts. The unstable behavior of participants creates the volatility clustering, nonlinear dependence and cyclicity phenomena.
By imitating each other in some periods of the campaign, different participants contribute to the fat tails observed in the olive oil price distribution. The best prediction model for the olive oil price is based on a back-propagation artificial neural network approach with input information based on wavelet decomposition and recent past history. Keywords: olive oil price, stylized facts, ARMA model, SARMA model, GARCH model, combined wavelet-artificial neural network, continuous-time stochastic volatility model
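The wavelet pre-processing feeding the neural network above can be sketched with a one-level Haar decomposition, which splits a price series into a smooth approximation (trend) and detail (fluctuation) coefficients. The series below is hypothetical, not olive-oil data:

```python
# One-level Haar wavelet decomposition and its exact inverse. The
# approximation/detail coefficients are the kind of inputs a wavelet-ANN
# predictor would use. The price series is illustrative only.

import math

def haar_level1(series):
    """Pairwise sums and differences, scaled by sqrt(2) (orthonormal Haar)."""
    assert len(series) % 2 == 0, "even length required"
    s = math.sqrt(2.0)
    approx = [(a + b) / s for a, b in zip(series[::2], series[1::2])]
    detail = [(a - b) / s for a, b in zip(series[::2], series[1::2])]
    return approx, detail

def haar_inverse(approx, detail):
    """Perfect reconstruction from one Haar level."""
    s = math.sqrt(2.0)
    out = []
    for a, d in zip(approx, detail):
        out += [(a + d) / s, (a - d) / s]
    return out

prices = [3.1, 3.3, 3.0, 3.4, 3.6, 3.5, 3.2, 3.1]  # hypothetical price series
approx, detail = haar_level1(prices)
restored = haar_inverse(approx, detail)
print(all(abs(p - r) < 1e-12 for p, r in zip(prices, restored)))  # -> True
```

Repeating the transform on the approximation coefficients gives a multi-level decomposition; the perfect-reconstruction check confirms no information is lost in the split.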
Procedia PDF Downloads 340
69 Using Structured Analysis and Design Technique Method for Unmanned Aerial Vehicle Components
Authors: Najeh Lakhoua
Abstract:
Introduction: Scientific developments and techniques for the systemic approach have generated several names for it: systems analysis, structural analysis. The main purpose of these reflections is to find a multi-disciplinary approach which organizes knowledge, creates a universal design language and controls complex systems. In fact, system analysis is structured sequentially by steps: the observation of the system by various observers in various aspects, the analysis of interactions and regulatory chains, the modeling that takes into account the evolution of the system, and the simulation and real tests carried out in order to obtain consensus. Thus the systems approach allows two types of analysis, according to the structure and the function of the system. The purpose of this paper is to present an application of system analysis to Unmanned Aerial Vehicle (UAV) components in order to represent the architecture of this system. Method: Various analysis methods are proposed in the literature to carry out global analysis from different points of view, such as the SADT method (Structured Analysis and Design Technique) and Petri nets. The methodology adopted in order to contribute to the system analysis of an Unmanned Aerial Vehicle is proposed in this paper and is based on the use of SADT. In fact, we present a functional analysis, based on the SADT method, of the UAV components (body, power supply and platform, computing, sensors, actuators, software, loop principles, flight controls and communications). Results: In this part, we present the application of the SADT method to the functional analysis of the UAV components. This SADT model is composed exclusively of actigrams. It starts with the main function 'To analyse the UAV components'. Then, this function is broken into sub-functions, and this process is repeated until the last decomposition level has been reached (levels A1, A2, A3 and A4).
Recall that SADT techniques are semi-formal; for the same subject, different correct models can be built without knowing with certitude which model is the right one or, at least, the best. In fact, this kind of model allows users sufficient freedom in its construction, and so the subjective factor introduces a supplementary dimension for its validation. That is why the validation step as a whole necessitates the confrontation of different points of view. Conclusion: In this paper, we presented an application of system analysis to Unmanned Aerial Vehicle components. This application of system analysis is based on the SADT method (Structured Analysis and Design Technique). The functional analysis demonstrated the usefulness of the SADT method and its ability to describe complex dynamic systems. Keywords: system analysis, unmanned aerial vehicle, functional analysis, architecture
Procedia PDF Downloads 204
68 Efficiency and Equity in Italian Secondary School
Authors: Giorgia Zotti
Abstract:
This research comprehensively investigates the multifaceted interplay among school performance, individual backgrounds, and regional disparities within the landscape of Italian secondary education. Leveraging data gleaned from the INVALSI 2021-2022 database, the analysis scrutinizes two fundamental distributions of educational achievement: standardized Invalsi test scores and official grades in Italian and Mathematics, focusing specifically on final-year secondary school students in Italy. The study initially employs Data Envelopment Analysis (DEA) to assess school performance. This involves constructing a production function encompassing inputs (hours spent at school) and outputs (Invalsi scores in Italian and Mathematics, along with official grades in Italian and Math). The DEA approach is applied in both of its versions, traditional and conditional; the latter incorporates environmental variables such as school type, size, demographics, technological resources, and socio-economic indicators. Additionally, the analysis delves into regional disparities by leveraging the Theil index, providing insights into disparities within and between regions. Moreover, within the framework of inequality-of-opportunity theory, the study quantifies the inequality of opportunity in students' educational achievements. The methodology applied is the parametric approach in its ex-ante version, considering diverse circumstances such as parental education and occupation, gender, school region, birthplace, and language spoken at home. A Shapley decomposition is then applied to understand how much each circumstance affects the outcomes. The outcomes of this investigation unveil pivotal determinants of school performance, notably highlighting the influence of school type (Liceo) and socioeconomic status.
The research unveils regional disparities, elucidating instances where specific schools outperform others in official grades compared to Invalsi scores, shedding light on the intricate nature of regional educational inequalities. Furthermore, it finds greater inequality of opportunity within the distribution of Invalsi test scores than in official grades, underscoring pronounced disparities at the student level. This analysis provides insights for policymakers, educators, and stakeholders, fostering a nuanced understanding of the complexities within Italian secondary education. Keywords: inequality, education, efficiency, DEA approach
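The Theil index used above for regional disparities decomposes exactly into a between-group and a within-group term, which is what makes it suited to "within and between regions" analysis. A minimal sketch with hypothetical scores for two made-up regions (not INVALSI data):

```python
# Theil T index and its exact between/within decomposition:
# T = T_between + sum_g (n_g/n)(mu_g/mu) * T_g.
# The two score lists are hypothetical, not INVALSI data.

import math

def theil(xs):
    mu = sum(xs) / len(xs)
    return sum((x / mu) * math.log(x / mu) for x in xs) / len(xs)

def theil_decomposition(groups):
    """Return (total, between, within) for a list of score lists."""
    all_x = [x for g in groups for x in g]
    n, mu = len(all_x), sum(all_x) / len(all_x)
    between = within = 0.0
    for g in groups:
        share = len(g) / n
        mu_g = sum(g) / len(g)
        between += share * (mu_g / mu) * math.log(mu_g / mu)
        within += share * (mu_g / mu) * theil(g)
    return theil(all_x), between, within

north = [220.0, 210.0, 200.0, 230.0]  # hypothetical Invalsi-like scores
south = [180.0, 170.0, 190.0, 175.0]
total, between, within = theil_decomposition([north, south])
print(abs(total - (between + within)) < 1e-12)  # decomposition is exact -> True
```

A large between term relative to within would indicate that regional mean differences, rather than dispersion inside each region, drive the overall inequality.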
Procedia PDF Downloads 76
67 A Study on Inverse Determination of Impact Force on a Honeycomb Composite Panel
Authors: Hamed Kalhori, Lin Ye
Abstract:
In this study, an inverse method was developed to reconstruct the magnitude and duration of impact forces exerted on a rectangular carbon fibre-epoxy composite honeycomb sandwich panel. The dynamic signals captured by piezoelectric (PZT) sensors installed on the panel, remote from the impact locations, were utilized to reconstruct the impact force generated by an instrumented hammer through an extended deconvolution approach. Two discretized forms of the convolution integral are considered: the traditional one with an explicit transfer function and a modified one without an explicit transfer function. Deconvolution, usually applied to reconstruct the time history (e.g. magnitude) of a stochastic force at a defined location, is extended to identify both the location and the magnitude of the impact force among a number of potential impact locations. It is assumed that impact forces are simultaneously exerted at all potential locations, but that the magnitude of all forces except one is zero, implying that the impact occurs only at one location. The extended deconvolution is then applied to determine the magnitude as well as the location (among the potential ones), incorporating the linear superposition of responses resulting from impact at each potential location. The problem can be categorized as under-determined (the number of sensors is less than that of impact locations), even-determined (the number of sensors equals that of impact locations), or over-determined (the number of sensors is greater than that of impact locations). The under-determined case considered here comprises three potential impact locations and one PZT sensor on the rectangular carbon fibre-epoxy composite honeycomb sandwich panel. Assessments are conducted to evaluate the factors affecting the precision of the reconstructed force.
Truncated Singular Value Decomposition (TSVD) and Tikhonov regularization are independently chosen to regularize the problem, in order to find the most suitable method for this system. The selection of the optimal value of the regularization parameter is investigated through the L-curve and Generalized Cross Validation (GCV) methods. In addition, the effect of different widths of signal windows on the reconstructed force is examined. It is observed that the impact force generated by the instrumented impact hammer is sensitive to the impact location on the structure, having a shape ranging from a simple half-sine to a complicated one. The accuracy of the reconstructed impact force is evaluated using the correlation coefficient between the reconstructed force and the actual one. Based on this criterion, it is concluded that the forces reconstructed using the extended deconvolution without an explicit transfer function, together with Tikhonov regularization, match well with the actual forces in terms of magnitude and duration. Keywords: honeycomb composite panel, deconvolution, impact localization, force reconstruction
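The Tikhonov-regularized deconvolution at the core of the study above can be sketched on a toy problem: with the discretized convolution y = A f (A built from an impulse response h), the force is recovered as f = (AᵀA + λI)⁻¹ Aᵀ y. The half-sine force and the impulse response below are made up for illustration, not the panel's measured transfer function:

```python
# Toy Tikhonov-regularised deconvolution. A is the lower-triangular Toeplitz
# convolution matrix of a short impulse response h; the "measured" response y
# is forward-convolved from a known half-sine force, then the force is
# recovered by solving the regularised normal equations. All data synthetic.

import math

def conv_matrix(h, n):
    """Lower-triangular Toeplitz matrix so that A f equals the convolution h*f."""
    return [[h[i - j] if 0 <= i - j < len(h) else 0.0 for j in range(n)]
            for i in range(n)]

def solve(a, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(b)
    m = [row[:] + [bi] for row, bi in zip(a, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            k = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= k * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

def tikhonov_deconvolve(a, y, lam):
    """Solve (A^T A + lam I) f = A^T y."""
    n = len(a[0])
    at_a = [[sum(a[k][i] * a[k][j] for k in range(len(a))) + (lam if i == j else 0.0)
             for j in range(n)] for i in range(n)]
    at_y = [sum(a[k][i] * y[k] for k in range(len(a))) for i in range(n)]
    return solve(at_a, at_y)

n = 16
h = [0.5, 0.3, 0.1]                                   # toy impulse response
force = [math.sin(math.pi * i / 7) if i < 8 else 0.0  # half-sine impact
         for i in range(n)]
a = conv_matrix(h, n)
y = [sum(a[i][j] * force[j] for j in range(n)) for i in range(n)]
rec = tikhonov_deconvolve(a, y, lam=1e-8)
err = max(abs(f - r) for f, r in zip(force, rec))
print(err < 1e-3)  # near-exact recovery in the noise-free case -> True
```

With noisy measurements, λ would be chosen by the L-curve or GCV criteria named in the abstract rather than fixed at a tiny value; TSVD would instead truncate the small singular values of A.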
Procedia PDF Downloads 536
66 Computational Characterization of Electronic Charge Transfer in Interfacial Phospholipid-Water Layers
Authors: Samira Baghbanbari, A. B. P. Lever, Payam S. Shabestari, Donald Weaver
Abstract:
Existing signal transmission models, although undoubtedly useful, have proven insufficient to explain the full complexity of information transfer within the central nervous system. The development of transformative models will necessitate a more comprehensive understanding of neuronal lipid membrane electrophysiology. Pursuant to this goal, the role of highly organized interfacial phospholipid-water layers emerges as a promising case study. A series of phospholipids in neural-glial gap junction interfaces as well as cholesterol molecules have been computationally modelled using high-performance density functional theory (DFT) calculations. Subsequent 'charge decomposition analysis' calculations have revealed a net transfer of charge from phospholipid orbitals through the organized interfacial water layer before ultimately finding its way to cholesterol acceptor molecules. The specific pathway of charge transfer from phospholipid via water layers towards cholesterol has been mapped in detail. Cholesterol is an essential membrane component that is overrepresented in neuronal membranes as compared to other mammalian cells; given this relative abundance, its apparent role as an electronic acceptor may prove to be a relevant factor in further signal transmission studies of the central nervous system. The timescales over which this electronic charge transfer occurs have also been evaluated by utilizing a system design that systematically increases the number of water molecules separating lipids and cholesterol. Memory loss through hydrogen-bonded networks in water can occur at femtosecond timescales, whereas existing action potential-based models are limited to micro or nanosecond scales. As such, the development of future models that attempt to explain faster timescale signal transmission in the central nervous system may benefit from our work, which provides additional information regarding fast timescale energy transfer mechanisms occurring through interfacial water. 
The study employs a dataset comprising six distinct phospholipids and a set of cholesterol molecules. Ten optimized geometric characteristics (features) were used to perform binary classification with an artificial neural network (ANN), differentiating cholesterol from the various phospholipids. This stems from our understanding that all lipids within the first group function as electronic charge donors, while cholesterol serves as an electronic charge acceptor.
Keywords: charge transfer, signal transmission, phospholipids, water layers, ANN
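The binary classification step can be sketched with a minimal one-hidden-layer network on synthetic stand-in data; the real optimized geometric features and their values are not public, so the feature distributions below are invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in data: 10 "geometric features" per molecule (illustrative
# values only). Class 0 = phospholipid (charge donor), class 1 = cholesterol
# (charge acceptor).
n_per_class = 60
X = np.vstack([rng.normal(0.0, 1.0, (n_per_class, 10)),
               rng.normal(1.5, 1.0, (n_per_class, 10))])
y = np.array([0.0] * n_per_class + [1.0] * n_per_class)

# Minimal one-hidden-layer ANN trained by full-batch gradient descent
W1 = rng.normal(0, 0.5, (10, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1));  b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 0.1
for _ in range(500):
    H = np.tanh(X @ W1 + b1)                 # hidden activations
    p = sigmoid(H @ W2 + b2).ravel()         # predicted P(cholesterol)
    g = ((p - y) / len(y))[:, None]          # d(cross-entropy)/d(logits)
    gH = (g @ W2.T) * (1 - H ** 2)           # backprop through tanh
    W2 -= lr * (H.T @ g); b2 -= lr * g.sum(0)
    W1 -= lr * (X.T @ gH); b1 -= lr * gH.sum(0)

p = sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2).ravel()
train_acc = float(((p > 0.5) == (y == 1)).mean())
```

The network size, learning rate, and iteration count are placeholders; the study's actual ANN architecture is not described in the abstract.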
Procedia PDF Downloads 74
65 An Improved Adaptive Dot-Shape Beamforming Algorithm Research on Frequency Diverse Array
Authors: Yanping Liao, Zenan Wu, Ruigang Zhao
Abstract:
Frequency diverse array (FDA) beamforming is a technology developed in recent years whose antenna pattern has a unique angle-distance-dependent characteristic. However, the beam is required to have strong concentration, high resolution and a low sidelobe level in order to form point-to-point interference in the target area. In order to eliminate the angle-distance coupling of the traditional FDA and to concentrate the beam energy, this paper adopts a multi-carrier FDA structure based on a proposed power-exponential frequency offset, improving the array structure and frequency offset of the traditional FDA. The simulation results show that the beam pattern of the array forms a dot-shape beam with more concentrated energy, with improved resolution and sidelobe level performance. However, in the traditional adaptive beamforming algorithm the covariance matrix of the signal is estimated from finite-time snapshot data. When the number of snapshots is limited, this estimate is biased, and the resulting estimation error of the covariance matrix causes beam distortion, so that the output pattern cannot form a dot-shape beam; main-lobe deviation and high sidelobe levels also arise in the limited-snapshot case. To address these problems, an adaptive beamforming technique based on exponential correction for the multi-carrier FDA is proposed to improve beamforming robustness. The steps are as follows: first, the beamforming of the multi-carrier FDA is formed under the linearly constrained minimum variance (LCMV) criterion. Then eigenvalue decomposition of the covariance matrix is performed to obtain the diagonal matrix composed of the interference subspace, the noise subspace and the corresponding eigenvalues. 
Finally, a correction index is introduced to exponentially correct the small eigenvalues of the noise subspace, reducing their divergence and improving the performance of beamforming. The theoretical analysis and simulation results show that the proposed algorithm enables the multi-carrier FDA to form a dot-shape beam under limited snapshots, reduces the sidelobe level, improves the robustness of beamforming, and achieves better overall performance.
Keywords: adaptive beamforming, correction index, limited snapshot, multi-carrier frequency diverse array, robust
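The eigenvalue-correction idea can be sketched as follows. This is one plausible reading (compressing the noise-subspace eigenvalues toward their geometric mean with a fractional exponent), not the paper's exact correction index, which is not given in the abstract.

```python
import numpy as np

def exponential_eigenvalue_correction(R, n_interf, alpha=0.5):
    """Compress the noise-subspace eigenvalues of a sample covariance matrix R
    toward their geometric mean by raising them to the power alpha (0<alpha<1),
    leaving the n_interf largest (interference) eigenvalues untouched."""
    w, V = np.linalg.eigh(R)                       # eigenvalues in ascending order
    noise = w[:-n_interf] if n_interf else w
    gmean = np.exp(np.mean(np.log(noise)))
    corrected = gmean * (noise / gmean) ** alpha   # spread shrinks by factor alpha
    w_new = np.concatenate([corrected, w[-n_interf:]]) if n_interf else corrected
    return (V * w_new) @ V.conj().T

# Example: 8-element array covariance estimated from only 12 snapshots; the
# true covariance is the identity, so all eigenvalue spread is estimation error.
rng = np.random.default_rng(2)
n, snaps = 8, 12
X = (rng.standard_normal((n, snaps)) + 1j * rng.standard_normal((n, snaps))) / np.sqrt(2)
R_hat = X @ X.conj().T / snaps

R_corr = exponential_eigenvalue_correction(R_hat, n_interf=0, alpha=0.5)
spread_before = np.linalg.cond(R_hat)
spread_after = np.linalg.cond(R_corr)   # equals spread_before ** alpha here
```

Reducing the eigenvalue spread of the estimated covariance is what stabilizes the LCMV weights at low snapshot counts; the exponent plays the role of the correction index.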
Procedia PDF Downloads 130
64 Study of Biofouling Wastewater Treatment Technology
Authors: Sangho Park, Mansoo Kim, Kyujung Chae, Junhyuk Yang
Abstract:
The International Maritime Organization (IMO) recognized the problem of invasive species and adopted the "International Convention for the Control and Management of Ships' Ballast Water and Sediments" in 2004, which came into force on September 8, 2017. In 2011, the IMO approved the "Guidelines for the Control and Management of Ships' Biofouling to Minimize the Transfer of Invasive Aquatic Species" to minimize the movement of invasive species by hull-attached organisms and required ships to manage the organisms attached to their hulls. Invasive species enter new environments through ships' ballast water and hull attachment. However, several obstacles to implementing these guidelines have been identified, including a lack of underwater cleaning equipment, regulations on underwater cleaning activities in ports, and difficulty accessing crevices in underwater areas. The shipping industry, the party responsible for implementing these guidelines, is willing to do so for the fuel cost savings that result from removing the organisms attached to the hull, but it anticipates significant difficulties due to the obstacles mentioned above. Robots or divers remove the organisms attached to the hull underwater, and the resulting wastewater includes various species of organisms as well as particles of paint and other pollutants. Currently, there is no technology available to sterilize the organisms in this wastewater or to stabilize the heavy metals in the paint particles. In this study, we aim to analyze the characteristics of the wastewater generated from the removal of hull-attached organisms and to select the optimal treatment technology. The organisms in the wastewater are sterilized to meet the biological treatment standard (D-2) using the sterilization technology applied in ships' ballast water treatment systems. 
The heavy metals and other pollutants in the paint particles generated during removal are treated using stabilization technologies such as thermal decomposition. The wastewater generated is treated using a two-step process: 1) development of sterilization technology through pretreatment filtration equipment and electrolytic sterilization treatment, and 2) development of technology for removing particulate pollutants such as heavy metals and dissolved inorganic substances. Through this study, we will develop a biological removal technology and an environmentally friendly processing system for the waste generated after removal that meets the requirements of the government and the shipping industry and lays the groundwork for future treatment standards.
Keywords: biofouling, ballast water treatment system, filtration, sterilization, wastewater
Procedia PDF Downloads 109
63 Transport Properties of Alkali Nitrites
Authors: Y. Mateyshina, A. Ulihin, N. Uvarov
Abstract:
Electrolytes with different types of charge carriers find wide application in devices such as sensors, electrochemical equipment and batteries. One important component ensuring the stable functioning of such equipment is the electrolyte, which has to be characterized by high conductivity, thermal stability and a wide electrochemical window. In addition to many characteristics advantageous in liquid electrolytes, solid-state electrolytes offer good mechanical stability and a wide working temperature range. Thus the search for new solid electrolyte systems with high conductivity is a topical task of solid state chemistry. Families of alkali perchlorates and nitrates have been investigated by us earlier, whereas literature data on the transport properties of alkali nitrites are absent. Nevertheless, alkali nitrites MeNO2 (Me = Li+, Na+, K+, Rb+ and Cs+), except for the lithium salt, have high-temperature phases with a crystal structure of the NaCl type. The high-temperature phases of the nitrites are orientationally disordered, i.e. the non-spherical anions are reoriented over several equivalent directions in the crystal lattice. Pure lithium nitrite LiNO2 is characterized by an ionic conductivity near 10⁻⁴ S/cm at 180°C, is more stable than lithium nitrate, and can be used as a component for the synthesis of composite electrolytes. In this work composite solid electrolytes in the binary system LiNO2 - A (A = MgO, -Al2O3, Fe2O3, CeO2, SnO2, SiO2) were synthesized and their structural, thermodynamic and electrical properties investigated. The alkali nitrite was obtained by an exchange reaction from water solutions of barium nitrite and alkali sulfate. The synthesized salt was characterized by the X-ray powder diffraction technique using a D8 Advance X-ray diffractometer with Cu Kα radiation. Using thermal analysis, the temperatures of dehydration and thermal decomposition of the salt were determined. 
The conductivity was measured using a two-electrode scheme in a fore-vacuum (6.7 Pa) with an HP 4284A Precision LCR meter in the frequency range 20 Hz < ν < 1 MHz. Solid composite electrolytes LiNO2 - A (A = MgO, -Al2O3, Fe2O3, CeO2, SnO2, SiO2) were synthesized by mixing the preliminarily dehydrated components followed by sintering at 250°C. In the series of alkali metal nitrites from Li+ to Cs+, the conductivity varies non-monotonically with increasing cation radius: the minimum conductivity is observed for KNO2, but with further increase in the cation radius the conductivity tends to increase. The work was supported by the Russian Foundation for Basic Research, grant #14-03-31442.
Keywords: conductivity, alkali nitrites, composite electrolytes, transport properties
Procedia PDF Downloads 322
62 Cardiothoracic Ratio in Postmortem Computed Tomography: A Tool for the Diagnosis of Cardiomegaly
Authors: Alex Eldo Simon, Abhishek Yadav
Abstract:
This study aimed to evaluate the utility of postmortem computed tomography (CT) and heart weight measurements in the assessment of cardiomegaly in cases of sudden death of cardiac origin by comparing the results of these two diagnostic methods. The study retrospectively analyzed postmortem computed tomography (PMCT) data from 54 cases of sudden natural death and compared the findings with those of the autopsy. The study involved measuring the cardiothoracic ratio (CTR) from coronal CT images and determining the actual cardiac weight by weighing the heart during the autopsy. The inclusion criteria for the study were cases of sudden death suspected to be caused by cardiac pathology, while the exclusion criteria included death due to unnatural causes such as trauma or poisoning, diagnosed natural causes of death related to organs other than the heart, and cases of decomposition. Sensitivity, specificity, and diagnostic accuracy were calculated, and receiver operating characteristic (ROC) curves were generated to evaluate the accuracy of using the CTR to detect an enlarged heart. The CTR is a radiological tool used to assess cardiomegaly by measuring the maximum cardiac diameter in relation to the maximum transverse diameter of the chest wall. The clinically used CTR criterion has been modified from 0.50 to 0.57 for use in postmortem settings, where abnormalities can be detected by comparing CTR values to this threshold. A CTR value of 0.57 or higher is suggestive of hypertrophy but not conclusive. Similarly, heart weight is measured during the traditional autopsy, and a cardiac weight greater than 450 grams is defined as hypertrophy. Of the 54 cases evaluated, 22 (40.7%) had a CTR above 0.50 and up to 0.57, and 12 cases (22.2%) had a CTR greater than 0.57, which was defined as hypertrophy. The mean CTR was 0.52 ± 0.06. 
Among the 54 cases evaluated, the mean heart weight was 369.4 ± 99.9 grams. Twelve cases were found to have hypertrophy as defined by PMCT, while only 9 cases were identified with hypertrophy at traditional autopsy. The sensitivity of the hypertrophy test was 55.56% (95% CI: 26.66, 81.12), the specificity was 84.44% (95% CI: 71.22, 92.25), and the diagnostic accuracy was 79.63% (95% CI: 67.1, 88.23). A limitation of the study was the low sample size of only 54 cases, which may limit the generalizability of the findings. The comparison of the cardiothoracic ratio with heart weight in this study suggests that PMCT may serve as a screening tool for medico-legal autopsies when performed by forensic pathologists. However, it should be noted that the low sensitivity of the test (55.56%) may limit its diagnostic accuracy, and therefore further studies with larger sample sizes and more diverse populations are needed to validate these findings.
Keywords: PMCT, virtopsy, CTR, cardiothoracic ratio
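The reported diagnostic figures are consistent with the following confusion-matrix counts, which are inferred here from the quoted percentages rather than stated explicitly in the abstract:

```python
# Confusion-matrix counts inferred from the reported percentages (12 PMCT-positive,
# 9 autopsy-positive, 54 cases in total); the abstract does not state them directly.
# Reference standard: autopsy heart weight > 450 g; test: PMCT CTR > 0.57.
TP, FP, FN, TN = 5, 7, 4, 38

sensitivity = TP / (TP + FN)                  # 5/9   = 55.56%
specificity = TN / (TN + FP)                  # 38/45 = 84.44%
accuracy = (TP + TN) / (TP + FP + FN + TN)    # 43/54 = 79.63%
```

These counts reproduce all three reported statistics exactly, which supports the internal consistency of the abstract's figures.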
Procedia PDF Downloads 81
61 The Impact of Monetary Policy on Aggregate Market Liquidity: Evidence from Indian Stock Market
Authors: Byomakesh Debata, Jitendra Mahakud
Abstract:
The recent financial crisis was characterized by massive monetary policy interventions by central banks, which amplified the importance of liquidity for the stability of the stock market. This paper empirically elucidates the actual impact of monetary policy interventions on stock market liquidity, covering all National Stock Exchange (NSE) stocks traded continuously from 2002 to 2015. The present study employs a multivariate VAR model along with the VAR-Granger causality test, impulse response functions, a block exogeneity test, and variance decomposition to analyze the direction as well as the magnitude of the relationship between monetary policy and market liquidity. Our analysis posits a unidirectional relationship between monetary policy (call money rate, base money growth rate) and aggregate market liquidity (traded value, turnover ratio, Amihud illiquidity ratio, turnover price impact, high-low spread). The impulse response function analysis clearly depicts the influence of monetary policy on stock liquidity for every unit innovation in the monetary policy variables. Our results suggest that an expansionary monetary policy increases aggregate stock market liquidity, and the reverse is documented during monetary tightening. To ascertain whether our findings are consistent across all periods, we divided the period of study into a pre-crisis period (2002 to 2007) and a post-crisis period (2007 to 2015) and ran the same set of models. Interestingly, all liquidity variables are highly significant in the post-crisis period, whereas the pre-crisis period witnessed only moderate predictability of monetary policy. To check the robustness of our results, we ran the same set of VAR models with different monetary policy variables and found similar results. Unlike previous studies, we found most of the liquidity variables to be significant throughout the sample period, which reveals the predictability of monetary policy for aggregate market liquidity. 
This study contributes to the existing body of literature by documenting strong predictability of monetary policy for stock liquidity in an emerging economy with an order-driven market-making system like India. Most previous studies have been carried out in economies with quote-driven or hybrid market-making systems, and their results are ambiguous across different periods. More broadly, this study may be considered a baseline for further work on the macroeconomic determinants of stock liquidity at the individual as well as the aggregate level.
Keywords: market liquidity, monetary policy, order driven market, VAR, vector autoregressive model
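The VAR machinery behind these results can be sketched on a toy two-variable system. The coefficients, lag order, and series below are invented for illustration; the paper's actual specification (variables, lags, deterministic terms) is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy 2-variable system: z = [policy, liquidity]. Policy (e.g. the call money
# rate) follows its own past; a tightening shock lowers liquidity with a lag,
# mimicking the unidirectional relationship reported. Coefficients are invented.
A_true = np.array([[0.8, 0.0],
                   [-0.4, 0.5]])
T = 500
z = np.zeros((T, 2))
for t in range(1, T):
    z[t] = A_true @ z[t - 1] + rng.normal(0, 0.1, 2)

# OLS estimation of the VAR(1) coefficient matrix in z_t = A z_{t-1} + e_t
A_hat = np.linalg.lstsq(z[:-1], z[1:], rcond=None)[0].T

# Impulse response of liquidity to a unit policy shock, h steps ahead:
# the (liquidity, policy) entry of A_hat raised to the power h
irf = [np.linalg.matrix_power(A_hat, h)[1, 0] for h in range(6)]
```

The zero upper-right entry of the estimated matrix (liquidity does not feed back into policy) is the VAR analogue of the unidirectional Granger causality the paper documents; the negative impulse response entries mirror the finding that tightening reduces liquidity.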
Procedia PDF Downloads 375
60 Bioincision of Gmelina Arborea Roxb. Heartwood with Inonotus Dryophilus (Berk.) Murr. for Improved Chemical Uptake and Penetration
Authors: A. O. Adenaiya, S. F. Curling, O. Y. Ogunsanwo, G. A. Ormondroyd
Abstract:
Treatment of wood with chemicals in order to prolong its service life may prove difficult in some refractory wood species. This impermeability is usually due to biochemical changes which occur during heartwood formation. Bioincision, a short-term, controlled microbial decomposition of wood, is one of the promising approaches capable of improving the amenability of refractory wood to chemical treatments. Gmelina arborea, a mainstay timber species in Nigeria, has impermeable heartwood due to the excessive tyloses which occlude its vessels. Therefore, the chemical uptake and penetration in Gmelina arborea heartwood bioincised with the fungus Inonotus dryophilus were investigated. Five mature Gmelina arborea trees were harvested from the departmental plantation in Ajibode, Ibadan, Nigeria, and a bolt of 300 cm was obtained from the basal portion of each tree. The heartwood portion of the bolts was extracted and converted into samples of dimensions 20 mm x 20 mm x 60 mm, which were subsequently conditioned (20°C at 65% relative humidity). Twenty wood samples each were bioincised with the white-rot fungus Inonotus dryophilus (ID, 999) for 3, 5, 7 and 9 weeks using a standard procedure, while a set of sterile control samples was prepared. Ten of each of the bioincised and control samples were pressure-treated with 5% Tanalith preservative, while the other ten of each were pressure-treated with a liquid dye for easy traceability of the chemical in the wood, both using a full-cell treatment process. The bioincised and control samples were evaluated for their Weight Loss before chemical treatment (WL, %), Preservative Absorption (PA, Kg/m3), Preservative Retention (PR, Kg/m3), Axial Absorption (AA, Kg/m3), Lateral Absorption (LA, Kg/m3), Axial Penetration Depth (APD, mm), Radial Penetration Depth (RPD, mm), and Tangential Penetration Depth (TPD, mm). The data obtained were analyzed using ANOVA at α = 0.05. 
Results show that the weight loss was least in the samples bioincised for three weeks (0.09%) and highest after 7 weeks of bioincision (0.48%). The samples bioincised for 3 weeks had the least PA (106.72 Kg/m3) and PR (5.87 Kg/m3), while the highest PA (134.9 Kg/m3) and PR (7.42 Kg/m3) were observed after 7 weeks of bioincision. The AA ranged from 27.28 Kg/m3 (3 weeks) to 67.05 Kg/m3 (5 weeks), while the LA was least after 5 weeks of incubation (28.1 Kg/m3) and highest after 9 weeks (71.74 Kg/m3). A significantly lower APD was observed in the control samples (6.97 mm) than in the samples bioincised for 9 weeks (19.22 mm). The RPD increased from 0.08 mm (control samples) to 3.48 mm (5 weeks), while the TPD ranged from 0.38 mm (control samples) to 0.63 mm (9 weeks), implying that liquid flow in the wood was predominantly through the axial pathway. Bioincising G. arborea heartwood with I. dryophilus for 9 weeks is capable of enhancing chemical uptake and deeper penetration of chemicals in the wood through degradation of the tyloses occluding the vessels, accompanied by only minimal degradation of the polymeric wood constituents.
Keywords: bioincision, chemical uptake, penetration depth, refractory wood, tyloses
Procedia PDF Downloads 106
59 An Energy Integration Study While Utilizing Heat of Flue Gas: Sponge Iron Process
Authors: Venkata Ramanaiah, Shabina Khanam
Abstract:
Enormous potential for saving energy is available in coal-based sponge iron plants, as these are associated with a high percentage of energy wastage per unit of sponge iron produced. In the present paper, an energy integration option is proposed for a coal-based sponge iron plant of 100 tonnes per day production capacity operated in India using the SL/RN (Stelco-Lurgi/Republic Steel-National Lead) process. The plant consists of the rotary kiln, rotary cooler, dust settling chamber, after burning chamber, evaporating cooler, electrostatic precipitator (ESP), wet scrapper and chimney as important equipment. Principles of process integration are used in the proposed option, which accounts for preheating kiln inlet streams such as kiln feed and slinger coal up to 170°C using the waste gas exiting the ESP; further, the kiln outlet stream is cooled from 1020°C to 110°C using kiln air. The working areas in the plant where energy is being lost and can be conserved are identified. Detailed material and energy balances are carried out around the sponge iron plant, and a modified model is developed to find the coal requirement of the proposed option, based on hot utility, heats of reaction, kiln feed and air preheating, radiation losses, dolomite decomposition, the heat required to vaporize the coal volatiles, etc. As coal is used both as utility and as process stream, an iterative approach is used in the solution methodology to compute coal consumption. Further, water consumption, operating cost, capital investment, waste gas generation, profit, and payback period of the modification are computed, and operational aspects of the proposed design are discussed. To recover and integrate the waste heat available in the plant, three gas-solid heat exchangers and four insulated ducts, each with an FD fan, are installed additionally. Thus, the proposed option requires a total capital investment of $0.84 million. 
Preheating the kiln feed, slinger coal and kiln air streams reduces coal consumption by 24.63%, which in turn reduces waste gas generation by 25.2% in comparison to the existing process. Moreover, a 96% reduction in water consumption is also observed, which is an added advantage of the modification. Consequently, the total profit is found to be $2.06 million/year, with a payback period of only 4.97 months. The energy efficient factor (EEF), which is the percentage of the maximum energy that can be saved through the design, is found to be 56.7%. Results of the proposed option are also compared with the literature and found to be in good agreement.
Keywords: coal consumption, energy conservation, process integration, sponge iron plant
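As a quick arithmetic check, the payback period follows directly from the capital investment and annual profit quoted above:

```python
capital_investment = 0.84e6   # USD, additional heat exchangers, ducts and fans
annual_profit = 2.06e6        # USD per year

payback_months = capital_investment / annual_profit * 12
# ~4.9 months with these rounded inputs, in line with the reported 4.97 months
# (the small difference presumably comes from rounding of the two inputs).
```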
Procedia PDF Downloads 144
58 A First-Principles Investigation of Magnesium-Hydrogen System: From Bulk to Nano
Authors: Paramita Banerjee, K. R. S. Chandrakumar, G. P. Das
Abstract:
Bulk MgH2 has drawn much attention for the purpose of hydrogen storage because of its high hydrogen storage capacity (~7.7 wt %) as well as its low cost and abundant availability. However, its practical usage has been hindered by its high hydrogen desorption enthalpy (~0.8 eV/H2 molecule), which results in an undesirable desorption temperature of 300°C at 1 bar H2 pressure. To surmount the limitations of bulk MgH2 for the purpose of hydrogen storage, a detailed first-principles density functional theory (DFT) based study on the structure and stability of neutral (Mgm) and positively charged (Mgm+) Mg nanoclusters of different sizes (m = 2, 4, 8 and 12), as well as their interaction with molecular hydrogen (H2), is reported here. It has been found that, due to the absence of d-electrons in the Mg atoms, hydrogen remains in molecular form even after its interaction with the neutral and charged Mg nanoclusters. Interestingly, the H2 molecules do not enter the interstitial positions of the nanoclusters; rather, they remain on the surface, ornamenting these nanoclusters and forming new structures with a gravimetric density higher than 15 wt %. Our observation is that the inclusion of Grimme's DFT-D3 dispersion correction in this weakly interacting system has a significant effect on the binding of the H2 molecules to these nanoclusters. The dispersion-corrected interaction energy (IE) values (0.1-0.14 eV/H2 molecule) fall in the right energy window that is ideal for hydrogen storage. These IE values are further verified using high-level coupled-cluster calculations with non-iterative triples corrections, i.e. CCSD(T), which is considered a highly accurate quantum chemical method, thereby confirming the accuracy of our dispersion-corrected DFT calculations. The significance of the polarization and dispersion energies in the binding of the H2 molecules is confirmed by energy decomposition analysis (EDA). 
A total of 16, 24, 32 and 36 H2 molecules can be attached to the neutral and charged nanoclusters of size m = 2, 4, 8 and 12, respectively. Ab-initio molecular dynamics (AIMD) simulation shows that the outermost H2 molecules are desorbed at a rather low temperature, viz. 150 K (−123°C), which is expected. However, complete dehydrogenation of these nanoclusters occurs at around 100°C. Most importantly, the host nanoclusters remain stable up to ~500 K (227°C). All these results on the adsorption and desorption of molecular hydrogen with neutral and charged Mg nanocluster systems indicate the possibility of reducing the dehydrogenation temperature of bulk MgH2 by designing new Mg-based nanomaterials that adsorb molecular hydrogen via this weak Mg-H2 interaction, rather than the strong Mg-H bonding. Notwithstanding the fact that in practical applications these interactions will be further complicated by the effect of substrates as well as interactions with other clusters, the present study has implications for our fundamental understanding of this problem.
Keywords: density functional theory, DFT, hydrogen storage, molecular dynamics, molecular hydrogen adsorption, nanoclusters, physisorption
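The quoted desorption temperature of bulk MgH2 can be cross-checked with a rough van 't Hoff estimate. The entropy value used below is an assumption (the standard entropy of H2 gas, a common approximation for metal hydrides), not a figure from the abstract:

```python
# Van 't Hoff estimate of the 1-bar desorption temperature implied by the quoted
# enthalpy of ~0.8 eV per H2. Assumption (not from the abstract): the entropy
# change is dominated by the standard entropy of H2 gas, ~130.7 J/(mol K).
EV_TO_J_PER_MOL = 96485.0
dH = 0.8 * EV_TO_J_PER_MOL        # J per mol H2
dS = 130.7                        # J/(mol K)

T_des_K = dH / dS                 # temperature where dG = dH - T*dS = 0
T_des_C = T_des_K - 273.15        # ~317 degC, close to the quoted ~300 degC
```

The estimate lands within about 20 degrees of the quoted 300°C, which is the expected accuracy of this one-line approximation.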
Procedia PDF Downloads 415
57 Prediction of Live Birth in a Matched Cohort of Elective Single Embryo Transfers
Authors: Mohsen Bahrami, Banafsheh Nikmehr, Yueqiang Song, Anuradha Koduru, Ayse K. Vuruskan, Hongkun Lu, Tamer M. Yalcinkaya
Abstract:
In recent years, we have witnessed an explosion of studies aimed at using a combination of artificial intelligence (AI) and time-lapse imaging data on embryos to improve IVF outcomes. However, despite promising results, no study has used a matched cohort of transferred embryos which differ only in pregnancy outcome, i.e., embryos from a single clinic which are similar in parameters such as morphokinetic condition, patient age, and overall clinic and lab performance. Here, we used time-lapse data on embryos with known pregnancy outcomes to see if the rich spatiotemporal information embedded in this data would allow prediction of the pregnancy outcome regardless of such critical parameters. Methodology—We did a retrospective analysis of time-lapse data from our IVF clinic, which uses the Embryoscope exclusively for embryo culture to the blastocyst stage, with known clinical outcomes of live birth vs nonpregnant (embryos with spontaneous abortion outcomes were excluded). We used time-lapse data from 200 elective single-transfer embryos randomly selected from January 2019 to June 2021. Our sample included 100 embryos in each group, with no significant difference in patient age (P=0.9550) or morphokinetic scores (P=0.4032). Data from all patients were combined to make a 4th-order tensor, and feature extraction was subsequently carried out by a tensor decomposition methodology. The features were then used in a machine learning classifier to classify the two groups. Major Findings—The performance of the model was evaluated using 100 random subsampling cross-validations (train 80% - test 20%). The prediction accuracy, averaged across 100 permutations, exceeded 80%. We also did a random grouping analysis, in which labels (live birth, nonpregnant) were randomly assigned to embryos, which yielded 50% accuracy. 
Conclusion—The high accuracy in the main analysis and the chance-level accuracy in the random grouping analysis suggest a consistent spatiotemporal pattern associated with pregnancy outcomes, regardless of patient age and embryo morphokinetic condition, and beyond already known parameters such as early cleavage or early blastulation. Despite the small sample size, this ongoing analysis is the first to show the potential of AI methods in capturing the complex morphokinetic changes embedded in embryo time-lapse data that contribute to successful pregnancy outcomes, regardless of already known parameters. Results on a larger sample size, with complementary analysis on the prediction of other key outcomes such as embryo euploidy and aneuploidy, will be presented at the meeting.
Keywords: IVF, embryo, machine learning, time-lapse imaging data
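The pipeline described above (tensor of embryos, feature extraction by decomposition, then 100x random-subsampling cross-validation) can be sketched on synthetic data. The tensor dimensions, planted signal, SVD-based feature extraction, and nearest-centroid classifier below are all illustrative stand-ins, not the paper's actual decomposition or classifier.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic stand-in for the 4th-order tensor: (embryos, x, y, time). A weak
# class-dependent spatiotemporal pattern is planted so the pipeline has
# something to find. Dimensions and signal strength are illustrative only.
n_emb, nx, ny, nt = 200, 8, 8, 20
labels = np.array([0] * 100 + [1] * 100)           # 0 = nonpregnant, 1 = live birth
pattern = rng.standard_normal((nx, ny, nt))
tensor = rng.standard_normal((n_emb, nx, ny, nt))
tensor[labels == 1] += 0.3 * pattern               # planted class difference

# Feature extraction via SVD of the mode-1 unfolding (a simple stand-in for
# the tensor-decomposition step in the paper)
unfolded = tensor.reshape(n_emb, -1)
U, s, Vt = np.linalg.svd(unfolded - unfolded.mean(0), full_matrices=False)
features = U[:, :10] * s[:10]                      # 10 leading components

# 100x random-subsampling cross-validation (80% train / 20% test) with a
# nearest-centroid classifier
accs = []
for _ in range(100):
    idx = rng.permutation(n_emb)
    tr, te = idx[:160], idx[160:]
    c0 = features[tr][labels[tr] == 0].mean(0)
    c1 = features[tr][labels[tr] == 1].mean(0)
    d0 = np.linalg.norm(features[te] - c0, axis=1)
    d1 = np.linalg.norm(features[te] - c1, axis=1)
    accs.append(((d1 < d0) == (labels[te] == 1)).mean())
mean_acc = float(np.mean(accs))
```

Shuffling `labels` before the loop reproduces the paper's random-grouping control, which should drive the mean accuracy back to chance.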
Procedia PDF Downloads 93
56 Electrodeposition of Silicon Nanoparticles Using Ionic Liquid for Energy Storage Application
Authors: Anjali Vanpariya, Priyanka Marathey, Sakshum Khanna, Roma Patel, Indrajit Mukhopadhyay
Abstract:
Silicon (Si) is a promising negative electrode material for lithium-ion batteries (LiBs) due to its low cost, non-toxicity, and high theoretical capacity of 4200 mAhg⁻¹. The primary challenge for the application of Si-based LiBs is the large volume expansion (~300%) during the charge-discharge process. Incorporation of graphene, carbon nanotubes (CNTs), morphological control, and nanoparticles have been utilized as effective strategies to tackle the volume expansion issue. Molten salt methods can also address the issue, but their high-temperature requirement limits their application; for a sustainable and practical approach, room-temperature (RT) methods are essential. The use of ionic liquids (ILs) for the electrodeposition of Si nanostructures can resolve the temperature issue while providing a greener medium. In this work, electrodeposition of Si nanoparticles on a gold substrate was successfully carried out at room temperature in the IL medium 1-butyl-3-methylimidazolium bis(trifluoromethylsulfonyl)imide (BMImTf₂N). Cyclic voltammetry (CV) suggests the sequential reduction of Si⁴⁺ to Si²⁺ and then to Si nanoparticles (SiNs). The structure and morphology of the electrodeposited SiNs were investigated by FE-SEM, which showed interconnected Si nanoparticles with an average particle size of ~100-200 nm. XRD and XPS data confirm the deposition of Si on Au (111). The first discharge and charge capacities of the Si anode material were found to be 1857 and 422 mAhg⁻¹, respectively, at a current density of 7.8 Ag⁻¹. The irreversible capacity of the first discharge-charge process can be attributed to solid electrolyte interface (SEI) formation via electrolyte decomposition and to Li⁺ trapped in the inner pores of the Si. Pulverization of the SiNs creates new active sites, which facilitates the formation of new SEI in subsequent cycles, leading to fading of the specific capacity. 
After 20 cycles, the charge-discharge profiles stabilize, and a reversible capacity of 150 mAhg⁻¹ is retained. Electrochemical impedance spectroscopy (EIS) data show a decrease in the Rct value from 94.7 to 47.6 kΩ after 50 cycles of charge-discharge, which demonstrates an improvement in the interfacial charge transfer kinetics. The decrease in the Warburg impedance after 50 charge-discharge cycles indicates facile diffusion in the fragmented, smaller Si nanoparticles. In summary, Si nanoparticles were deposited on a gold substrate using an IL medium and characterized with different analytical techniques. The synthesized material was successfully utilized for LiB application, which is well supported by the CV and EIS data.
Keywords: silicon nanoparticles, ionic liquid, electrodeposition, cyclic voltammetry, Li-ion battery
Procedia PDF Downloads 125
55 Smart BIM Documents - the Development of the Ontology-Based Tool for Employer Information Requirements (OntEIR), and its Transformation into SmartEIR
Authors: Shadan Dwairi
Abstract:
Defining proper requirements is one of the key factors for a successful construction project. Although many attempts have been made to assist in identifying requirements, this area remains underdeveloped. In Building Information Modelling (BIM) projects, the Employer Information Requirements (EIR) is the fundamental requirements document and a necessary ingredient in achieving a successful BIM project. The provision of a full and clear EIR is essential to achieving BIM Level 2. As defined by PAS 1192-2, the EIR is a “pre-tender document that sets out the information to be delivered and the standards and processes to be adopted by the supplier as part of the project delivery process”. It also notes that the “EIR should be incorporated into tender documentation to enable suppliers to produce an initial BIM Execution Plan (BEP)”. The importance of an effective definition of the EIR lies in its contribution to better productivity during the construction process in terms of cost and time, in addition to improving the quality of the built asset. Proper and clear information is a key aspect of the EIR, both in terms of the information it contains and, more importantly, the information the client receives at the end of the project, which will enable the effective management and operation of the asset, where typically about 60%-80% of the cost is spent. This paper reports on the research done in developing the Ontology-based tool for Employer Information Requirements (OntEIR). OntEIR has demonstrated the ability to produce a full and complete set of EIRs, ensuring that the client's information needs for the final model delivered by BIM are clearly defined from the beginning of the process. It also reports on the work being done to transform OntEIR into a smart tool for defining Employer Information Requirements (smartEIR).
smartEIR extends OntEIR to develop custom EIRs tailored to the project type, project requirements, and client capabilities. The initial idea behind smartEIR is to move away from the notion that “one EIR fits all”. smartEIR utilizes the links made in OntEIR, creating a 3D matrix that transforms it into a smart tool. The OntEIR tool is based on the OntEIR framework, which utilizes both ontology and the decomposition of goals to elicit and extract the complete set of requirements needed for a full and comprehensive EIR. A new categorisation system for requirements is also introduced in the framework and tool, which facilitates understanding and enhances the clarification of requirements, especially for novice clients. Findings of an evaluation of the tool conducted with industry experts showed that OntEIR contributes towards the effective and efficient development of EIRs that provide a better understanding of the information requirements as requested by BIM, and supports the production of a complete BIM Execution Plan (BEP) and a Master Information Delivery Plan (MIDP).
Keywords: building information modelling, employer information requirements, ontology, web-based, tool
Procedia PDF Downloads 127
54 Synthesized Doped TiO2 Photocatalysts for Mineralization of Quinalphos from Aqueous Streams
Authors: Nidhi Sharotri, Dhiraj Sud
Abstract:
Water pollution by pesticides constitutes a serious ecological problem due to their potential toxicity and bioaccumulation. The widespread use of pesticides in industry and agriculture, along with their resistance to natural decomposition, biodegradation, and chemical and photochemical degradation under typical environmental conditions, has resulted in the emergence of these chemicals and their transformation products in natural water. Among advanced oxidation processes (AOPs), heterogeneous photocatalysis using TiO2 as photocatalyst appears to be the most promising destructive technology for mineralization of pollutants in aquatic streams. Among the various semiconductors (TiO2, ZnO, CdS, FeTiO3, MnTiO3, SrTiO3 and SnO2), TiO2 has proven to be the most efficient photocatalyst for environmental applications due to its biological and chemical inertness, high photoreactivity, non-toxicity, and photostability. Semiconductor photocatalysts are characterized by an electronic band structure in which the valence band and conduction band are separated by a band gap, i.e. a region of forbidden energy. Semiconductor-based photocatalysts produce e⁻/h⁺ pairs, which have been employed for the degradation of organic pollutants. The present paper focuses on modifying the TiO2 photocatalyst to shift its absorption edge towards longer wavelengths, making it active under natural light. Semiconductor TiO2 photocatalysts were prepared by doping with an anion (N), a cation (Mn), and both dopants together (Mn, N) using a greener approach. Titanium isopropoxide was used as the titania precursor, with ethanedithiol, hydroxylamine hydrochloride, and manganous chloride as the sulphur, nitrogen, and manganese precursors, respectively. The synthesized doped TiO2 nanomaterials were characterized for surface morphology (SEM, TEM), crystallinity (XRD), and optical properties (absorption spectra and band gap). EPR data confirm the substitutional incorporation of Mn²⁺ into the TiO2 lattice.
The doping influences the anatase-rutile phase transformation, and corresponding changes in the absorption spectrum were observed. The effects of varying reaction parameters such as solvent, reaction time, and calcination temperature on the yield, surface morphology, and optical properties were also investigated. TEM studies show that the particle size of the nanomaterials varies from 10 to 50 nm. The calculated band gaps of the nanomaterials vary from 2.30 to 2.60 eV. The photocatalytic degradation of the organophosphate pesticide quinalphos was investigated by studying changes in the UV absorption spectrum, and promising results were obtained under visible light. Complete mineralization of quinalphos occurred, as HPLC studies confirmed that no intermediates remained after 8 hours of degradation.
Keywords: quinalphos, doped-TiO2, mineralization, EPR
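The red shift of the absorption edge described above maps directly to the reported band-gap range via the standard relation E(eV) = hc/λ ≈ 1239.84/λ(nm). The following is a minimal sketch of that conversion; the edge wavelengths are illustrative assumptions, not measured values from this study.

```python
# Rough estimate of an optical band gap from an absorption-edge wavelength,
# using E (eV) = h*c / lambda ~= 1239.84 / lambda (nm).
# The wavelengths below are illustrative, not data from the study.

PLANCK_EV_NM = 1239.84  # h*c expressed in eV*nm

def band_gap_ev(edge_nm: float) -> float:
    """Convert an absorption-edge wavelength (nm) to an energy (eV)."""
    return PLANCK_EV_NM / edge_nm

# Undoped anatase TiO2 absorbs near ~387 nm (band gap ~3.2 eV);
# red-shifting the edge to roughly 480-540 nm gives the reported 2.30-2.60 eV range.
for edge in (387.0, 480.0, 540.0):
    print(f"{edge:5.0f} nm -> {band_gap_ev(edge):.2f} eV")
```

This illustrates why a shift of the edge by roughly 100-150 nm into the visible is enough to bring the band gap into the visible-light-active range reported.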
Procedia PDF Downloads 328
53 Validating Quantitative Stormwater Simulations in Edmonton Using MIKE URBAN
Authors: Mohamed Gaafar, Evan Davies
Abstract:
Many municipalities in Canada and abroad use chloramination to disinfect drinking water so as to avert the production of the disinfection by-products (DBPs) that result from conventional chlorination processes and their consequent public health risks. However, the long-lasting monochloramine disinfectant (NH2Cl) can pose a significant risk to the environment, as it can be introduced into stormwater sewers from different water uses and thus reach freshwater sources. Little research has been undertaken to monitor and characterize the decay of NH2Cl and to study the parameters affecting its decomposition in stormwater networks. The current study was therefore intended to investigate this decay, starting by building a stormwater model and validating its hydraulic and hydrologic computations, then modelling water quality in the storm sewers and examining the effects of different parameters on chloramine decay. The work presented here is only the first stage of this study. The 30th Avenue basin in southern Edmonton was chosen as a case study because this well-developed basin has various land-use types, including commercial, industrial, residential, park, and recreational areas. The City of Edmonton had already built a MIKE URBAN stormwater model for flood modelling. Nevertheless, that model was built only to the trunk level, meaning that only the main drainage features were represented. Additionally, the model was not calibrated and was known to consistently compute pipe flows higher than the observed values, which is unsuitable for studying water quality. The first goal was therefore to complete the modelling and update all stormwater network components. Available GIS data were then used to calculate catchment properties such as slope, length, and imperviousness.
To calibrate and validate the model, data from two temporary pipe-flow monitoring stations, collected during the previous summer, were used along with records from two permanent stations available for eight consecutive summer seasons. The effect of various hydrological parameters on model results was investigated. Model results were found to be sensitive to the ratio of impervious area. The catchment length, although calculated, was also tested, because it is only an approximate representation of the catchment shape. Surface roughness coefficients were also calibrated. Consequently, computed flows at the two temporary locations had correlation coefficients of 0.846 and 0.815, where the lower value pertained to the larger attached catchment area. Other statistical measures, such as a peak error of 0.65%, a volume error of 5.6%, and maximum positive and negative differences of 2.17 and -1.63 respectively, were all found to be within acceptable ranges.
Keywords: stormwater, urban drainage, simulation, validation, MIKE URBAN
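The validation metrics quoted above (correlation coefficient, peak error, volume error) can be sketched as follows. This is a minimal illustration on synthetic stand-in series, not the Edmonton monitoring data, and the metric definitions are common hydrological conventions assumed here rather than taken from the study.

```python
import numpy as np

# Illustrative validation metrics for simulated vs. observed pipe flows.
# The series below are synthetic stand-ins, not the Edmonton monitoring data.

def validation_stats(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    r = np.corrcoef(obs, sim)[0, 1]                       # Pearson correlation
    peak_err = (sim.max() - obs.max()) / obs.max() * 100  # peak flow error, %
    vol_err = (sim.sum() - obs.sum()) / obs.sum() * 100   # runoff volume error, %
    return r, peak_err, vol_err

observed  = [0.2, 0.5, 1.8, 3.1, 2.4, 1.1, 0.6, 0.3]      # hypothetical hydrograph
simulated = [0.25, 0.55, 1.7, 3.0, 2.5, 1.2, 0.65, 0.35]
r, pe, ve = validation_stats(observed, simulated)
print(f"r = {r:.3f}, peak error = {pe:.1f}%, volume error = {ve:.1f}%")
```

In practice such metrics would be computed per event or per season at each monitoring station, as the abstract describes for the two temporary locations.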
Procedia PDF Downloads 300
52 Social Problems and Gender Wage Gap Faced by Working Women in Readymade Garment Sector of Pakistan
Authors: Narjis Kahtoon
Abstract:
The issue of wage discrimination on the basis of gender, together with associated social problems, has been a significant research topic for several decades. While many studies have explored reasons for the persistence of inequality between male and female wages, none has fully explained the entire differential. Gender-based wage discrimination and the social problems of working women are global issues. Despite differences in the political, economic, and social make-up of countries all over the world, gender wage discrimination and social constraints are present. The aim of this research is to examine gender wage discrimination and social constraints from an international perspective and to determine whether any pattern exists between the cultural dimensions of a country and the male-female remuneration gap in the readymade garment sector of Pakistan. Population growth rate is a significant indicator used to explain population change and plays a crucial role in the economic development of a country. In Pakistan, the readymade garment sector consists of small, medium, and large firms, with an estimated 30 percent of the textile-garment workforce being female. The readymade garment industry is labour-intensive, relies on the skills of individual workers, and provides the highest value addition in the textile sector. In the garment sector, female workers are concentrated in poorly paid, labour-intensive downstream production (readymade garments, linen, towels, etc.), while male workers dominate the capital-intensive processes (ginning, spinning, and weaving). Gender wage discrimination and social constraints are a reality in the Pakistani labour market. This research allows us not only to properly measure the size of gender wage discrimination and social constraints but also to fully understand their consequences in the readymade garment sector of Pakistan. Furthermore, the research evaluates this measure for the three main clusters of Lahore, Karachi, and Faisalabad.
These data contain complete details of male and female workers and supervisors in the readymade garment sector of Pakistan. These sources of information provide a unique opportunity to reanalyze previous findings in the literature. The regression analysis focuses on the standard 'Mincerian' earnings equation and estimates it separately by gender; the research also applies the cultural dimensions developed by Hofstede (2001) to profile a country's cultural status and compares those dimensions to the wage inequalities. The readymade garment sector is one of Pakistan's important sectors, since its products are in huge demand at home and abroad. This research will inform measures undertaken to design public policy regarding wage discrimination and social constraints in the readymade garment sector of Pakistan.
Keywords: gender wage differentials, decomposition, garment, cultural
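The standard Mincerian earnings equation referenced above can be sketched as a simple OLS regression of log wages on schooling, experience, and experience squared (here with a gender dummy added for illustration; the abstract instead estimates the equation separately by gender). The data, coefficients, and the -0.15 log-wage gap below are entirely simulated assumptions, not estimates from the study.

```python
import numpy as np

# Minimal sketch: ln(wage) = b0 + b1*educ + b2*exper + b3*exper^2 + b4*female
# on simulated data; all parameter values are illustrative assumptions.

rng = np.random.default_rng(0)
n = 2000
educ = rng.integers(0, 16, n).astype(float)   # years of schooling
exper = rng.uniform(0, 30, n)                 # years of experience
female = rng.integers(0, 2, n).astype(float)  # hypothetical gender dummy

# "True" simulated parameters, including a hypothetical -0.15 log-wage gap.
log_wage = (1.0 + 0.08 * educ + 0.04 * exper - 0.0006 * exper**2
            - 0.15 * female + rng.normal(0, 0.05, n))

X = np.column_stack([np.ones(n), educ, exper, exper**2, female])
beta, *_ = np.linalg.lstsq(X, log_wage, rcond=None)
print("estimated coefficients:", np.round(beta, 4))
```

An Oaxaca-Blinder-style decomposition (consistent with the "decomposition" keyword) would instead fit this equation separately for men and women and split the mean wage gap into explained and unexplained components.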
Procedia PDF Downloads 210
51 Factors Affecting Air Surface Temperature Variations in the Philippines
Authors: John Christian Lequiron, Gerry Bagtasa, Olivia Cabrera, Leoncio Amadore, Tolentino Moya
Abstract:
Changes in air surface temperature play an important role in the Philippines' economy, industry, health, and food production. While the increase in global mean temperature over recent decades has prompted a number of climate change and variability studies in the Philippines, most studies still focus on rainfall and tropical cyclones. This study aims to investigate the trend and variability of observed air surface temperature and to determine its major influencing factors in the Philippines. A non-parametric Mann-Kendall trend test was applied to the monthly mean temperatures of 17 synoptic stations covering the 56 years from 1960 to 2015, and a mean change of 0.58 °C, corresponding to a positive trend of 0.0105 °C/year (p < 0.05), was found. In addition, wavelet decomposition was used to determine the frequencies of temperature variability, revealing 12-month, 30-80-month, and longer-than-120-month cycles. These indicate strong annual variations, interannual variations that coincide with ENSO events, and interdecadal variations attributed to the PDO and CO2 concentrations. Air surface temperature was also correlated with smoothed sunspot number and galactic cosmic rays; the results show little to no effect. The influence of the ENSO teleconnection on temperature, wind pattern, cloud cover, and outgoing longwave radiation during different ENSO phases had significant effects on regional temperature variability. In particular, an anomalous anticyclonic (cyclonic) flow east of the Philippines during the peak and decay phases of El Niño (La Niña) events leads to the advection of warm southeasterly (cold northeasterly) air masses over the country. Furthermore, an apparent increasing cloud cover trend is observed over the West Philippine Sea, including portions of the Philippines, and this is believed to lessen the effect of the increasing air surface temperature.
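The non-parametric Mann-Kendall test applied above can be sketched as follows. This is a minimal implementation that ignores the tie correction in the variance term for brevity, applied to a synthetic warming-like series (the 0.0105 °C/year slope mirrors the reported trend, but the series is not the station data).

```python
import math

# Minimal Mann-Kendall trend test (no tie correction), applied to a
# synthetic series; not the observed Philippine station temperatures.

def mann_kendall(series):
    n = len(series)
    # S statistic: net count of increasing vs. decreasing pairs.
    s = sum(
        (series[j] > series[i]) - (series[j] < series[i])
        for i in range(n - 1)
        for j in range(i + 1, n)
    )
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    return s, z  # |z| > 1.96 -> significant trend at p < 0.05 (two-sided)

# Warming-like series: 0.0105 deg C/year trend plus a small oscillation.
temps = [25.0 + 0.0105 * t + 0.1 * math.sin(t) for t in range(56)]
s, z = mann_kendall(temps)
print(f"S = {s}, Z = {z:.2f}")
```

Because the test uses only the signs of pairwise differences, it is robust to outliers and does not assume normally distributed temperatures, which is why it is a standard choice for climate trend detection.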
However, relative humidity was also found to be increasing, especially in the central part of the country, which results in a strongly positive trend in the heat index, exacerbating human discomfort. Finally, an assessment of gridded temperature datasets was carried out to examine the viability of using three high-resolution datasets in future climate analysis and in model calibration and verification. Several error statistics (i.e. Pearson correlation, bias, MAE, and RMSE) were used for this validation. The results show that the gridded temperature datasets generally follow the observed surface temperature changes and anomalies. They are, however, more representative of regional temperature than a substitute for station-observed air temperature.
Keywords: air surface temperature, carbon dioxide, ENSO, galactic cosmic rays, smoothed sunspot number
Procedia PDF Downloads 324
50 A Hybrid LES-RANS Approach to Analyse Coupled Heat Transfer and Vortex Structures in Separated and Reattached Turbulent Flows
Authors: C. D. Ellis, H. Xia, X. Chen
Abstract:
Experimental and computational studies investigating heat transfer in separated flows have been of increasing importance over the last 60 years, as efforts are made to understand and improve the efficiency of components such as combustors, turbines, heat exchangers, nuclear reactors, and cooling channels. Understanding not only the time-mean heat transfer properties but also the unsteady properties is vital for the design of these components. As computational power increases, more sophisticated methods of modelling these flows become available. A hybrid LES-RANS approach has been applied to a blunt-leading-edge flat plate, using a structured grid at a moderate Reynolds number of 20300 based on the plate thickness. In the region close to the wall, the RANS method is implemented with two turbulence models: the one-equation Spalart-Allmaras model and Menter's two-equation SST k-ω model. The LES region occupies the flow away from the wall and is formulated without any explicit subgrid-scale LES model. Hybridisation between the two methods is achieved by blending based on the nearest wall distance. The flow was validated by assessing the mean velocity profiles against similar studies. The vortex structures of the flow were identified by applying the λ2 criterion to locate vortex cores. The qualitative structure of the flow compared well with experiments at similar Reynolds numbers. This identified the 2D roll-up of the shear layer, which breaks down via the Kelvin-Helmholtz instability. Through this instability the flow develops into hairpin-like structures that elongate as they advance downstream. Proper Orthogonal Decomposition (POD) analysis has been performed on the full flow field and on the surface temperature of the plate. As expected, the POD mode energies of the full field decay relatively slowly compared to those of the surface temperature field.
Both POD fields identified that the most energetic fluctuations occur in the separated and recirculating region of the flow. The later modes of the surface temperature showed these fluctuations to dominate the time-mean region of maximum heat transfer and flow reattachment. In addition to the current research, work will be conducted on tracking the movement of the vortex cores and the location and magnitude of temperature hot spots on the plate. This information will support the POD and statistical analyses performed, further identifying qualitative relationships between the vortex dynamics and the response of the surface heat transfer.
Keywords: heat transfer, hybrid LES-RANS, separated and reattached flow, vortex dynamics
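The snapshot POD analysis described above can be sketched via a singular value decomposition: snapshots are stacked as columns, the temporal mean is removed, and the singular values give the energy content of each mode. The two-mode "flow field" below is entirely synthetic, standing in for the LES-RANS data.

```python
import numpy as np

# Minimal snapshot-POD sketch on a synthetic two-mode signal plus weak noise;
# not the actual LES-RANS flow or surface-temperature fields.

rng = np.random.default_rng(1)
nx, nt = 200, 80                      # spatial points, number of snapshots
x = np.linspace(0, 2 * np.pi, nx)
t = np.linspace(0, 10, nt)

# Two coherent structures plus weak noise, snapshots as columns.
snapshots = (np.outer(np.sin(x), np.cos(2 * t))
             + 0.3 * np.outer(np.sin(3 * x), np.sin(5 * t))
             + 0.01 * rng.standard_normal((nx, nt)))

# Remove the temporal mean, then SVD: columns of `modes` are spatial POD modes.
fluct = snapshots - snapshots.mean(axis=1, keepdims=True)
modes, sing_vals, _ = np.linalg.svd(fluct, full_matrices=False)

# Squared singular values give each mode's share of fluctuation energy.
energy = sing_vals**2 / np.sum(sing_vals**2)
print("energy fraction of first two modes:", round(float(energy[:2].sum()), 3))
```

A slow decay of this energy spectrum (as reported for the full flow field) means many modes are needed to capture the dynamics, whereas the surface-temperature field is dominated by its first few modes.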
Procedia PDF Downloads 231
49 Environmental Degradation and Sustainable Measures: A Case Study in Nepal
Authors: Megha Raj Regmi
Abstract:
Water supply and sanitation coverage in Nepal is not satisfactory by South Asian standards. Achievements in sanitation have fallen far short of Nepal's SDG targets. Long queues of buckets for fetching water can be seen even in the heart of the capital city, Kathmandu. In Kathmandu Valley, daily water demand is 400 million litres, but the supply is only 200 million litres. Over-exploitation of groundwater and traditional water sources is causing water levels to drop alarmingly, while most of the traditional water spouts are also drying up. While about 40% of the world's population is deprived of drinking water, the urban populace uses excessive quantities of fresh water to flush excreta. Water supply and basic sanitation coverage in Nepal is 86% and 92% of the total population, respectively. This research work deals with more than one thousand dry toilets constructed in peri-urban areas. It has used appropriate technology and studied their performance in the context of Nepal, based on complete laboratory analyses and regular monitoring. Dry toilets were found to have a clear advantage in NPK recovery over traditional water-borne sanitation technology. This paper also deals with the effect of temperature on the decomposition process in dry toilets and focuses on the distinct technologies employed in Kathmandu Valley. It suggests the modifications needed in implementation, examines the effect of human urine in composting and its application in agriculture, and draws on the experience of more than one thousand dry toilets in Kathmandu Valley. It also deals with the practices of biogas generation and community-led total sanitation to cope with the challenges of sanitation and hygiene in Nepal.
The paper also describes in depth the different types of biomass energy production from human and cattle manure units, including biogas generation from kitchen waste produced by a student hostel mixed with toilet waste. The use of decomposed faeces as a soil conditioner is described, along with the challenges and prospects of using urine in agriculture as an eco-friendly fertilizer in the context of Nepal. Finally, the paper presents a comparative study of dry toilet developments in developed and developing countries such as Australia, South Korea, Malaysia, China, India, Ukraine, and Nepal. Community groups with our financial assistance have built many models of public toilets with biogas, which have been very successful at elevations from 600 m up to 2000 m above mean sea level. In conclusion, the paper makes a plea for the acceptance of these toilets by planners and decision makers, with a set of pragmatic recommendations.
Keywords: biogas public toilet, dry toilet, low-cost technology, sustainable sanitation, total sanitation
Procedia PDF Downloads 13