Search results for: singular value decomposition (SVD)
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 744

84 Adaptive Programming for Indigenous Early Learning: The Early Years Model

Authors: Rachel Buchanan, Rebecca LaRiviere

Abstract:

Context: The ongoing effects of colonialism continue to be experienced through paternalistic policies and funding processes that cause disjuncture between and across Indigenous early childhood programming on-reserve and in urban and Northern settings in Canada. While various educational organizations and social service providers have risen to address these challenges in the short, medium and long term, there continues to be a lack of nationwide, cohesive, culturally grounded, and meaningful early learning programming for Indigenous children in Canada. Indigenous-centered early learning programs tend to face one of two scaling dilemmas: their program goals are too prescriptive to enable the program to be meaningfully replicated in different cultural/community settings, or their program goals are too broad to be meaningfully adapted to the unique cultural and contextual needs and desires of Indigenous communities (the “franchise approach”). There are over 600 First Nations communities in Canada representing more than 50 Nations and languages. Consequently, Indigenous early learning programming cannot be applied with a universal or “one size fits all” approach. Sustainable and comprehensive programming must be responsive to each community context, building upon existing strengths and assets to avoid program duplication and irrelevance. Thesis: Community-driven and culturally adapted early childhood programming is critical but cannot be achieved on a large scale within traditional program models that are constrained by prescriptive overarching program goals. Principles, rather than goals, are an effective way to navigate and evaluate complex and dynamic systems. Principles guide an intervention to be adaptable, flexible and scalable. The Martin Family Initiative's (MFI) Early Years program engages a principles-based approach to programming. As will be discussed in this paper, this approach enables the program to catalyze existing community-based strengths and organizational assets toward bridging gaps across and disjuncture between Indigenous early learning programs, as well as to scale programming in sustainable, context-responsive and dynamic ways. This paper argues that, using a principles-driven and adaptive scaling approach, the Early Years model establishes important learnings for culturally adapted Indigenous early learning programming in Canada. Methodology: The Early Years has leveraged this approach to develop an array of programming with partner organizations and communities across the country. The Early Years began as a singular pilot project in one First Nation. In just three years, it has expanded to five different regions and community organizations. In each context, the program supports the partner organization through different means and to different ends, the extent of which is determined in partnership with each community-based organization: in some cases, this means supporting the organization to build home visiting programming from the ground up; in others, it means offering organization-specific, culturally adapted early learning resources to support the programming that already exists in communities. Principles underpin but do not define the practices of the program in each of these relationships. This paper will explore numerous examples of principles-based adaptability within the context of the Early Years, concluding that the program model offers the adaptability and dynamism necessary to respond to the unique and ever-evolving community contexts and needs of Indigenous children today.

Keywords: culturally adapted programming, indigenous early learning, principles-based approach, program scaling

Procedia PDF Downloads 186
83 Computer-Assisted Management of Building Climate and Microgrid with Model Predictive Control

Authors: Vinko Lešić, Mario Vašak, Anita Martinčević, Marko Gulin, Antonio Starčić, Hrvoje Novak

Abstract:

Accounting for 40% of total world energy consumption, building systems are developing into technically complex, large energy consumers suitable for the application of sophisticated power management approaches that can greatly increase energy efficiency and even make buildings active energy market participants. A centralized control system for building heating and cooling managed by economically optimal model predictive control shows promising results, with an estimated 30% increase in energy efficiency. The research is focused on the implementation of such a method in a case study performed on two floors of our faculty building, with wireless sensor data acquisition, remote heating/cooling units and a central climate controller. Building walls are mathematically modeled with their corresponding material types, surface shapes and sizes. The models are then exploited to predict thermal characteristics and changes in different building zones. Exterior influences such as environmental conditions and weather forecasts, occupant behavior and comfort demands are all taken into account in deriving price-optimal climate control. Finally, a DC microgrid with photovoltaics, a wind turbine, a supercapacitor, batteries and fuel cell stacks is added to make the building a unit capable of active participation in a price-varying energy market. The computational burden of applying model predictive control to such a complex system is relaxed through a hierarchical decomposition of the microgrid and climate control, where the former is designed as the higher hierarchical level with pre-calculated price-optimal power flow control, and the latter as the lower-level control responsible for ensuring thermal comfort and exploiting the optimal supply conditions enabled by microgrid energy flow management. Such an approach is expected to enable the inclusion of more complex building subsystems in order to further increase energy efficiency.
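
As an illustration of the economically optimal model predictive control idea described above, the following is a minimal sketch for a single thermal zone, assuming a first-order thermal model; the dynamics coefficients, tariff, and comfort bounds are made-up illustrative values, not parameters from the paper.

```python
import cvxpy as cp
import numpy as np

N = 24                        # prediction horizon (hours)
a, b = 0.9, 0.5               # assumed first-order zone dynamics coefficients
T_out = 5.0                   # assumed constant outdoor temperature forecast (degC)
price = np.ones(N)            # assumed energy tariff: peak hours cost 3x
price[8:20] = 3.0

T = cp.Variable(N + 1)        # zone temperature trajectory (degC)
u = cp.Variable(N)            # heating power (kW)

constraints = [T[0] == 18.0, u >= 0, u <= 10]
for k in range(N):
    # Thermal model: next temperature from current temperature, heating, outdoors
    constraints.append(T[k + 1] == a * T[k] + b * u[k] + (1 - a) * T_out)
constraints += [T[1:] >= 20, T[1:] <= 24]   # comfort band

# Economically optimal MPC: minimize energy cost over the horizon
problem = cp.Problem(cp.Minimize(cp.sum(cp.multiply(price, u))), constraints)
problem.solve()
print("optimal cost:", round(problem.value, 2))
print("heating profile (kW):", np.round(u.value, 2))
```

In the hierarchical scheme described in the abstract, a price profile like the one assumed here would instead come from the higher-level microgrid power flow optimization.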

Keywords: price-optimal building climate control, microgrid power flow optimisation, hierarchical model predictive control, energy efficient buildings, energy market participation

Procedia PDF Downloads 465
82 Reduction of Nitrogen Monoxide with Carbon Monoxide from Gas Streams by 10% wt. Cu-Ce-Fe-Co/Activated Carbon

Authors: K. L. Pan, M. B. Chang

Abstract:

Nitrogen oxides (NOₓ) are regarded as among the most important air pollutants. They not only cause adverse environmental effects but also harm the human lungs and respiratory system. As a post-combustion treatment, selective catalytic reduction (SCR) possesses the highest NO removal efficiency (≥ 85%) and is considered the most effective technique for removing NO from gas streams. However, injection of a reducing agent such as NH₃ is required, which is costly and may cause secondary pollution. Reduction of NO with carbon monoxide (CO) as the reducing agent has been previously investigated. In this process, the key step involves NO adsorption and dissociation. High performance relies mainly on the amount of oxygen vacancies on the catalyst surface and on the redox ability of the catalyst, because oxygen vacancies can activate the N-O bond to promote its dissociation, while good redox ability promotes the adsorption of NO and the oxidation of CO. Typically, noble metals such as iridium (Ir), platinum (Pt), and palladium (Pd) are used as catalysts for the reduction of NO with CO; however, high cost has limited their applications. Recently, transition metal oxides have been investigated for the reduction of NO with CO; CuₓOy, CoₓOy, Fe₂O₃, and MnOₓ in particular are considered effective catalysts. However, deactivation is inevitable when oxygen (O₂) is present in the gas stream because the active sites (oxygen vacancies) of the catalyst are occupied by O₂. In this study, Cu-Ce-Fe-Co is prepared and supported on activated carbon by the impregnation method to form a 10% wt. Cu-Ce-Fe-Co/activated carbon catalyst. Generally, an activated carbon support brings several advantages: (1) NO can be effectively adsorbed through the interaction between catalyst and activated carbon, improving NO removal; (2) direct NO decomposition may be achieved over carbon associated with the catalyst; and (3) reduction of NO can be enhanced by a reducing agent over a carbon-supported catalyst. Therefore, 10% wt. Cu-Ce-Fe-Co/activated carbon may perform better for the reduction of NO with CO. Experimental results indicate that the NO conversion achieved with 10% wt. Cu-Ce-Fe-Co/activated carbon reaches 83% at 150°C with 300 ppm NO and 10,000 ppm CO. As the temperature is further increased to 200°C, 100% NO conversion is achieved, implying that the 10% wt. Cu-Ce-Fe-Co/activated carbon prepared has good activity for the reduction of NO with CO. In order to investigate the effect of O₂ on the reduction of NO with CO, 1-5% O₂ was introduced into the system. The results indicate that NO conversions are still maintained at ≥ 90% under 1-5% O₂ at 200°C. It is worth noting that the adverse effect of O₂ on the reduction of NO with CO is significantly alleviated when carbon is used as the support; it is inferred that the carbon support reacts with O₂ to produce CO₂ when O₂ is present in the gas stream. Overall, 10% wt. Cu-Ce-Fe-Co/activated carbon is demonstrated to have good potential for the reduction of NO with CO, and possible mechanisms will be elucidated in this paper.

Keywords: nitrogen oxides (NOₓ), carbon monoxide (CO), reduction of NO with CO, carbon material, catalysis

Procedia PDF Downloads 256
81 Desulphurization of Waste Tire Pyrolytic Oil (TPO) Using Photodegradation and Adsorption Techniques

Authors: Moshe Mello, Hilary Rutto, Tumisang Seodigeng

Abstract:

The nature of tires makes them extremely challenging to recycle: because of their chemically cross-linked polymer structure, they are neither fusible nor soluble and consequently cannot be remolded into other shapes without serious degradation. Open dumping of tires pollutes the soil, contaminates underground water and provides ideal breeding grounds for disease-carrying vermin. The thermal decomposition of tires by pyrolysis produces char, gases and oil. The composition of oils derived from waste tires has properties in common with commercial diesel fuel. The problem associated with the light oil derived from pyrolysis of waste tires is its high sulfur content (> 1.0 wt.%), which leads to emissions of harmful sulfur oxide (SOₓ) gases to the atmosphere when the oil is combusted in diesel engines. Desulphurization of TPO is necessary due to increasingly stringent environmental regulations worldwide. Hydrodesulphurization (HDS) is the commonly practiced technique for the removal of sulfur species from liquid hydrocarbons. However, the HDS technique fails in the presence of complex sulfur species such as dibenzothiophene (DBT) present in TPO. This study aims to investigate the viability of photodegradation (photocatalytic oxidative desulphurization) and adsorptive desulphurization technologies for efficient removal of complex and non-complex sulfur species from TPO. The study focuses on optimizing the cleaning (removal of impurities and asphaltenes) process by varying process parameters: temperature, stirring speed, acid/oil ratio and time. The treated TPO will then be sent for vacuum distillation to attain the desired diesel-like fuel. The effects of temperature, pressure and time will be determined for vacuum distillation of both raw TPO and the acid-treated oil for comparison purposes. Polycyclic sulfides present in the distilled (diesel-like) light oil will be oxidized predominantly to the corresponding sulfoxides and sulfones via a photocatalyzed system using TiO₂ as catalyst and hydrogen peroxide as oxidizing agent, and finally acetonitrile will be used as an extraction solvent. Adsorptive desulphurization will be used to adsorb traces of sulfurous compounds remaining after the photocatalytic desulphurization step. This desulphurization sequence is expected to give high desulphurization efficiency with reasonable oil recovery.

Keywords: adsorption, asphaltenes, photocatalytic oxidation, pyrolysis

Procedia PDF Downloads 272
80 Evaluation of Natural Waste Materials for Ammonia Removal in Biofilters

Authors: R. F. Vieira, D. Lopes, I. Baptista, S. A. Figueiredo, V. F. Domingues, R. Jorge, C. Delerue-matos, O. M. Freitas

Abstract:

Odours are generated in municipal solid waste management plants as a result of the decomposition of organic matter, especially when anaerobic degradation occurs. Information was collected about the substances and their respective concentrations in the atmosphere surrounding some management plants. The main components associated with these unpleasant odours were identified: ammonia, hydrogen sulfide and mercaptans. The first is the most common and the one that presents the highest concentrations, reaching values of 700 mg/m³. Biofiltration, which simultaneously involves biodegradation, absorption and adsorption processes, is a sustainable technology for the treatment of these odour emissions when a natural packing material is used. The packing material should ideally be cheap, durable, and allow maximum microbiological activity and adsorption/absorption. The presence of nutrients and water is required for biodegradation processes; adsorption and absorption are enhanced by high specific surface area, high porosity and low density. The main purpose of this work is the exploitation of natural waste materials, locally available, as packing media: heather (Erica lusitanica), chestnut bur (from Castanea sativa), peach pits (from Prunus persica) and eucalyptus bark (from Eucalyptus globulus). Preliminary batch tests of ammonia removal were performed in order to select the most interesting materials for biofiltration, which were then characterized. The following physical and chemical parameters were evaluated: density, moisture, pH, buffer capacity and water retention capacity. Equilibrium isotherms were also determined and fitted to the Langmuir and Freundlich models; both models can fit the experimental results. Based both on the material's performance as an adsorbent and on its physical and chemical characteristics, eucalyptus bark was considered the best material. It presents a maximum adsorption capacity of 0.78 ± 0.45 mol/kg for ammonia. The results from its characterization are: 121 kg/m³ density, 9.8% moisture, pH equal to 5.7, buffer capacity of 0.370 mmol H⁺/kg of dry matter and water retention capacity of 1.4 g H₂O/g of dry matter. The application of locally available natural materials, with little processing, in biofiltration is an economical and sustainable alternative that should be explored.
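
The isotherm fitting mentioned above can be sketched as follows. The (C, q) data points are synthetic placeholders rather than the measured eucalyptus-bark data; only the reported maximum capacity (0.78 mol/kg) informs the initial guess.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(C, q_max, K):
    """Langmuir isotherm: adsorbed amount q versus equilibrium concentration C."""
    return q_max * K * C / (1 + K * C)

C = np.array([0.05, 0.1, 0.2, 0.5, 1.0, 2.0])       # equilibrium concentration (assumed units)
q = np.array([0.10, 0.18, 0.30, 0.48, 0.60, 0.70])  # adsorbed amount (mol/kg), synthetic

(q_max, K), cov = curve_fit(langmuir, C, q, p0=[0.8, 1.0])
print(f"q_max = {q_max:.2f} mol/kg, K = {K:.2f}")
```

A Freundlich fit (q = K_F * C**(1/n)) would follow the same pattern with a different model function.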

Keywords: ammonia removal, biofiltration, natural materials, odour control

Procedia PDF Downloads 369
79 Composition Dependence of Ni 2p Core Level Shift in Fe1-xNix Alloys

Authors: Shakti S. Acharya, V. R. R. Medicherla, Rajeev Rawat, Komal Bapna, Deepnarayan Biswas, Khadija Ali, K. Maiti

Abstract:

The discovery of the invar effect in the Fe1-xNix alloy with 35% Ni concentration has stimulated enormous experimental and theoretical research. Elemental Fe and low-Ni-concentration Fe1-xNix alloys, which possess a body-centred cubic (bcc) crystal structure at ambient temperature and pressure, transform to a hexagonally close-packed (hcp) phase at around 13 GPa. Magnetic order was found to be absent at 11 K for the Fe92Ni8 alloy when subjected to a high pressure of 26 GPa. Density functional theoretical calculations predicted substantial hyperfine magnetic fields, but these were not observed in Mössbauer spectroscopy. The bulk modulus of fcc Fe1-xNix alloys with Ni concentration above 35% is found to be independent of pressure. The magnetic moment of Fe is also found to be almost the same in these alloys from 4 to 10 GPa. Fe1-xNix alloys exhibit a complex microstructure which is formed by a series of complex phase transformations such as martensitic transformation, spinodal decomposition, ordering, monotectoid reaction and eutectoid reaction at temperatures below 400°C. Despite the existence of several theoretical models, the field is still in its infancy, lacking full knowledge of the anomalous properties exhibited by these alloys. Fe1-xNix alloys were prepared by arc melting the high-purity constituent metals in an argon ambient. These alloys were annealed at around 300°C in a vacuum-sealed quartz tube for two days to make the samples homogeneous. The alloys were structurally characterized by x-ray diffraction and were found to exhibit a transition from bcc to fcc for x > 0.3. Ni 2p core levels of the alloys were measured using high-resolution (0.45 eV) x-ray photoelectron spectroscopy. The Ni 2p core level shifts to lower binding energy with respect to that of pure Ni metal, giving rise to negative core level shifts (CLSs). The measured CLSs exhibit a linear dependence in the fcc region (x > 0.3) and were found to deviate slightly in the bcc region (x < 0.3). The ESCA potential model fails to correlate CLSs with site potentials or charges in metallic alloys. CLSs in these alloys occur mainly due to a shift in the valence bands with composition arising from intra-atomic charge redistribution.
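
The linear composition dependence of the CLSs in the fcc region can be sketched with a simple least-squares fit; the CLS values below are synthetic negative shifts for illustration (zero for pure Ni by definition), not the measured data.

```python
import numpy as np

x = np.array([0.35, 0.5, 0.65, 0.8, 1.0])           # Ni fraction, fcc region (x > 0.3)
cls = np.array([-0.42, -0.31, -0.22, -0.12, 0.0])   # Ni 2p CLS in eV, synthetic

# Linear fit of CLS versus composition
slope, intercept = np.polyfit(x, cls, 1)
print(f"CLS ~ {slope:.2f} * x + {intercept:.2f} eV")
```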

Keywords: arc melting, core level shift, ESCA potential model, valence band

Procedia PDF Downloads 380
78 Application of Recycled Paper Mill Sludge on the Growth of Khaya Senegalensis and Its Effect on Soil Properties, Nutrients and Heavy Metals

Authors: A. Rosazlin Abdullah, I. Che Fauziah, K. Wan Rasidah, A. B. Rosenani

Abstract:

The paper industry plays an essential role in the global economy. A study was conducted in which paper mill sludge was applied to Khaya senegalensis over a 1-year planting period at the University Agriculture Park, Puchong, Selangor, Malaysia, to determine the growth of Khaya senegalensis, soil properties, nutrient concentrations and effects on the status of heavy metals. Paper mill sludge (PMS) and composted recycled paper mill sludge (RPMS) were used with different rates of nitrogen (0, 150, 300 and 600 kg ha⁻¹) at a 1:1 ratio of RPMS to empty fruit bunch (EFB). The growth parameters were measured twice a month for 1 year, and plant nutrient and heavy metal uptake were determined. The paper mill sludge has the potential to be a supplementary N fertilizer as well as a soil amendment. The application of RPMS with N significantly contributed to improvements in plant growth parameters such as plant height (4.24 m), basal diameter (10.30 cm) and total plant biomass, and improved soil physical and chemical properties. The pH, EC, available P and total C in the soil varied among the treatments during the planting period. The treatments with raw and composted RPMS had higher pH values than those receiving inorganic fertilizer and the control. Nevertheless, no salinity problem was recorded during the planting period, and available P in soil treated with raw and composted RPMS was higher than in the control plots, which reflects the mineralization of organic P from the decomposition of pulp sludge. The carbon concentrations in the free and occluded light fractions were significantly higher in the soils treated with raw and composted RPMS. The application of raw and composted RPMS gave significantly higher concentrations of heavy metals, but the total concentrations of heavy metals in the soils remained below the critical values. Hence, paper mill sludge can be used as a soil amendment in acidic soil without any serious threat. The use of paper mill sludge to improve soil fertility through land application signifies a unique opportunity to recycle sludge back to the land and alleviate a potential waste management problem.

Keywords: growth, heavy metals, nutrients uptake, production, waste management

Procedia PDF Downloads 368
77 Comparison of Iodine Density Quantification through Three Material Decomposition between Philips iQon Dual Layer Spectral CT Scanner and Siemens Somatom Force Dual Source Dual Energy CT Scanner: An in vitro Study

Authors: Jitendra Pratap, Jonathan Sivyer

Abstract:

Introduction: Dual energy/spectral CT scanning permits simultaneous acquisition of two x-ray spectral datasets and can complement radiological diagnosis by allowing tissue characterisation (e.g., uric acid vs. non-uric acid renal stones), enhancing structures (e.g., boosting the iodine signal to improve contrast resolution), and quantifying substances (e.g., iodine density). However, the latter has shown inconsistent results between the two main modes of dual energy scanning (dual source vs. dual layer). Therefore, the present study aimed to determine which technology is more accurate in quantifying iodine density. Methods: Twenty vials with known concentrations of iodine solution were made using Optiray 350 contrast media diluted in sterile water. The iodine concentrations ranged from 0.1 mg/ml to 1.0 mg/ml in 0.1 mg/ml increments and from 1.5 mg/ml to 4.5 mg/ml in 0.5 mg/ml increments, followed by further concentrations of 5.0 mg/ml, 7 mg/ml, 10 mg/ml and 15 mg/ml. The vials were scanned using the dual energy scan mode on a Siemens Somatom Force at the 80 kV/Sn150 kV and 100 kV/Sn150 kV kilovoltage pairings. The same vials were scanned using the spectral scan mode on a Philips iQon at 120 kVp and 140 kVp. The images were reconstructed at 5 mm thickness and 5 mm increment using the Br40 kernel on the Siemens Force and the B filter on the Philips iQon. Post-processing was performed on vendor-specific software: Siemens Syngo VIA (VB40) for the dual energy data and Philips Intellispace Portal (Ver. 12) for the spectral data. For each vial and scan mode, the iodine concentration was measured by placing an ROI in the coronal plane. Intraclass correlation analysis was performed on both datasets. Results: The iodine concentrations were reproduced with a high degree of accuracy by the dual layer CT scanner. Although the dual source images showed a greater degree of deviation in measured iodine density for all vials, the dataset acquired at 80 kV/Sn150 kV had higher accuracy. Conclusion: Spectral CT scanning with the dual layer technique has higher accuracy for quantitative measurements of iodine density than the dual source technique.
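
A minimal sketch of the accuracy comparison described in the Results: measured ROI iodine densities are compared against the known vial concentrations. All measured values below are synthetic placeholders; only the known concentrations follow the dilution series in the abstract.

```python
import numpy as np

known = np.array([0.1, 0.5, 1.0, 2.5, 5.0, 10.0, 15.0])            # mg/ml, subset of the series
dual_layer = np.array([0.11, 0.52, 0.98, 2.47, 5.05, 9.90, 15.10])  # synthetic ROI measurements
dual_source = np.array([0.20, 0.60, 1.20, 2.80, 5.60, 10.90, 16.20])  # synthetic ROI measurements

for name, measured in [("dual layer", dual_layer), ("dual source", dual_source)]:
    bias = measured - known
    # Summarize agreement with the ground-truth concentrations
    print(f"{name}: mean absolute error = {np.mean(np.abs(bias)):.2f} mg/ml, "
          f"mean bias = {np.mean(bias):+.2f} mg/ml")
```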

Keywords: CT, iodine density, spectral, dual-energy

Procedia PDF Downloads 120
76 Cover Layer Evaluation in Soil Organic Matter of Mixing and Compressed Unsaturated

Authors: Nayara Torres B. Acioli, José Fernando T. Jucá

Abstract:

The uncontrolled emission of gases from urban waste embankments located near urban areas is a social and environmental problem common in Brazilian cities. Several environmental impacts at the local and global scale may be generated by atmospheric contamination from the biogas resulting from the decomposition of municipal solid waste. In Brazil, small cities with populations below 50,000 inhabitants account for about 90% of all cities, according to the 2011 IBGE census, and most landfill cover layers there are composed of pure clayey soil. Cover layers built with pure soil may retain up to 60% of the methane; the remaining 40% may be dispersed into the atmosphere. In view of these figures, the oxidative cover layer merits study, with the aim of reducing the percentage released to the atmosphere by converting methane into carbon dioxide, which is roughly 20 times less polluting than methane. This paper presents the results of studies on the characteristics of the soil used for the oxidative cover layer of the experimental Solid Urban Residues (SUR) embankment built in Muribeca-PE, Brazil, supported by the Group of Solid Residues (GSR) at the Federal University of Pernambuco. The soil was studied through laboratory suction experiments (determining the characteristic curve), granulometry and permeability tests; in soil with saturation above 85%, air permeability drops dramatically with small increments of water, in accordance with the existing Brazilian standard for this procedure. Suction, as in the other tests, was studied by dividing the 60 cm oxidative cover layer into an upper half (0.1 m to 0.3 m) and a lower half (0.4 m to 0.6 m). The results also show the consequences of the leaching of fine materials in the 5 years since completion of the embankment, which increased the layer's permeability. As for moisture, it is mostly retained in the upper half, with a difference on the order of 8 percent between the upper and lower halves, the lowest suction being found near the surface. These results reveal the efficiency of the oxidative cover layer in retaining rainwater; it also has a lower cost than other types of layers, making it a readily available alternative for the appropriate disposal of residues.

Keywords: oxidative coverage layer, permeability, suction, saturation

Procedia PDF Downloads 289
75 Investigation of the EEG Signal Parameters during Epileptic Seizure Phases in Consequence to the Application of External Healing Therapy on Subjects

Authors: Karan Sharma, Ajay Kumar

Abstract:

Epileptic seizure is a disorder in which electrical charge in the brain flows abruptly, resulting in abnormal activity by the subject. One percent of the total world population experiences epileptic seizure attacks. Due to the abrupt flow of charge, EEG (electroencephalogram) waveforms change, and many spikes and sharp waves appear in the EEG signals. Detection of epileptic seizures by conventional methods is time-consuming, and many methods have evolved to detect them automatically. The initial part of this paper provides a review of techniques used to detect epileptic seizures automatically. Automatic detection is based on feature extraction and classification patterns; for better accuracy, decomposition of the signal is required before feature extraction. A number of parameters are calculated by researchers using different techniques, e.g., approximate entropy, sample entropy, fuzzy approximate entropy, intrinsic mode functions, cross-correlation, etc., to discriminate between a normal signal and an epileptic seizure signal. The main objective of this review paper is to present the variations in the EEG signals at both stages: (i) interictal (recordings between epileptic seizure attacks) and (ii) ictal (recordings during an epileptic seizure), using the most appropriate methods of analysis to provide better healthcare diagnosis. The paper then investigates the effects of a noninvasive healing therapy on subjects by studying the EEG signals using the latest signal processing techniques. The study has been conducted with Reiki as the healing technique, beneficial for restoring balance in cases of body-mind alterations associated with an epileptic seizure. Reiki is practiced around the world and is recommended in different health services as a treatment approach. Reiki is an energy medicine, specifically a biofield therapy developed in Japan in the early 20th century. It is a system involving the laying on of hands to stimulate the body's natural energetic system. Earlier studies have shown an apparent connection between Reiki and the autonomic nervous system. The Reiki sessions are applied by an experienced therapist. EEG signals are measured at baseline, during the session and post-intervention to assess whether effective epileptic seizure control, or its elimination altogether, is brought about.
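
Of the entropy features listed above, sample entropy can be sketched as follows. This is a plain O(N²) reference implementation for illustration (one of several common variants), not an optimized or clinically validated tool; the test signal is random noise.

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """SampEn(m, r) = -ln(A/B), with tolerance r = r_factor * std(x)."""
    x = np.asarray(x, dtype=float)
    r = r_factor * np.std(x)

    def count_matches(length):
        # All overlapping templates of the given length
        templates = np.array([x[i:i + length] for i in range(len(x) - length + 1)])
        count = 0
        for i in range(len(templates) - 1):
            # Chebyshev distance between template i and all later templates
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(dist <= r)
        return count

    B = count_matches(m)      # matches of length m
    A = count_matches(m + 1)  # matches of length m + 1
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

rng = np.random.default_rng(0)
print(sample_entropy(rng.standard_normal(500)))  # higher values indicate a more irregular signal
```

Lower sample entropy in an EEG segment indicates more regular, self-similar activity, which is one reason such features can separate ictal from interictal recordings.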

Keywords: EEG signal, Reiki, time consuming, epileptic seizure

Procedia PDF Downloads 406
74 Historical Tree Height Growth Associated with Climate Change in Western North America

Authors: Yassine Messaoud, Gordon Nigh, Faouzi Messaoud, Han Chen

Abstract:

The effect of climate change on tree growth in boreal and temperate forests has received increased interest in the context of global warming. However, most studies have been conducted in small areas and with a limited number of tree species. Here, we examined the height growth responses of seventeen tree species to climate change in western North America. 37,009 stands with varying establishment dates were selected from forest inventory databases in Canada and the USA. Dominant and co-dominant trees from each stand were sampled to determine top tree height at 50 years breast-height age. Height was related to historical mean annual and summer temperatures, annual and summer Palmer Drought Severity Index, tree establishment date, slope, aspect, soil fertility as determined by the rate of organic carbon decomposition (carbon/nitrogen), geographic location (latitude, longitude, and elevation), species range (coastal, interior, and both ranges), shade tolerance and leaf form (needle leaves, deciduous needle leaves, and broadleaves). Climate change had a mostly positive effect on tree height growth, and the model explained 62.4% of the height growth variance. Since 1880, the height growth increase has been greater for coastal, highly shade-tolerant, and broadleaf species. Height growth increased more on steep slopes and in highly fertile soils. Greater height growth was mostly observed at the leading edge of species ranges and upward in elevation. Conversely, some species showed the opposite pattern, probably due to increased drought (coastal Mediterranean area), increased precipitation and cloudiness (Alaska and British Columbia), and the peculiarity of western North American topography (higher latitudes paired with lower elevations and vice versa). This study highlights the role of species ecological amplitude and traits, and of geographic location, as the main factors determining the growth response and its magnitude under recent global climate change.

Keywords: height growth, global climate change, species range, species characteristics, species ecological amplitude, geographic locations, western North America

Procedia PDF Downloads 185
73 Multiscale Modelling of Textile Reinforced Concrete: A Literature Review

Authors: Anicet Dansou

Abstract:

Textile reinforced concrete (TRC) is increasingly used nowadays in various fields, in particular civil engineering, where it is mainly used for the reinforcement of damaged reinforced concrete structures. TRC is a composite material composed of multi- or uni-axial textile reinforcements coupled with a fine-grained cementitious matrix. The TRC composite is an alternative solution to the traditional fiber reinforced polymer (FRP) composite. It has good mechanical performance and better temperature stability, and it also makes it possible to better meet the criteria of sustainable development. TRCs are highly anisotropic composite materials with nonlinear hardening behavior; their macroscopic behavior depends on multi-scale mechanisms. The characterization of these materials through numerical simulation has been the subject of many studies. Since TRCs are multiscale materials by definition, numerical multi-scale approaches have emerged as one of the most suitable methods for the simulation of TRCs. They aim to incorporate information pertaining to microscale constituent behavior, mesoscale behavior, and macro-scale structural response within a unified model that enables rapid simulation of structures. The computational costs are hence significantly reduced compared to standard simulation at a fine scale. The fine-scale information can be introduced implicitly into the macro-scale model: approaches of this type are called non-classical. A representative volume element is defined, and the fine-scale information is homogenized over it; analytical and computational homogenization and nested mesh methods belong to these approaches. In classical approaches, on the other hand, the fine-scale information is introduced explicitly into the macro-scale model; such approaches include adaptive mesh refinement strategies, sub-modelling, domain decomposition, and multigrid methods. This research presents the main principles of numerical multiscale approaches. Advantages and limitations are identified according to several criteria: the assumptions made (fidelity), the number of input parameters required, the calculation costs (efficiency), etc. A bibliographic study of recent results and advances, and of the scientific obstacles to be overcome in order to achieve an effective simulation of textile reinforced concrete in civil engineering, is presented. A comparative study is further carried out between several methods for the simulation of TRCs used for the structural reinforcement of reinforced concrete structures.
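
The analytical homogenization mentioned above can be illustrated in its simplest form by a Voigt (rule-of-mixtures) estimate of effective stiffness over a representative volume element; the moduli and volume fraction below are illustrative assumptions, not values from the reviewed studies.

```python
# Assumed constituent properties for a TRC layer loaded along the textile direction
E_textile = 70_000.0   # MPa, e.g. an AR-glass textile (assumed)
E_matrix = 25_000.0    # MPa, fine-grained cementitious matrix (assumed)
v_f = 0.03             # textile volume fraction in the loading direction (assumed)

# Voigt bound: constituents strained equally, so stiffnesses add by volume fraction
E_eff = v_f * E_textile + (1 - v_f) * E_matrix
print(f"effective stiffness (Voigt estimate): {E_eff:.0f} MPa")
```

Full computational homogenization replaces this closed-form bound with a boundary value problem solved on the representative volume element.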

Keywords: composites structures, multiscale methods, numerical modeling, textile reinforced concrete

Procedia PDF Downloads 108
72 The Extent of Virgin Olive-Oil Prices' Distribution Revealing the Behavior of Market Speculators

Authors: Fathi Abid, Bilel Kaffel

Abstract:

The olive tree, the winter olive harvest and the production of olive oil, better known to professionals as the crushing operation, have long interested institutional traders such as olive-oil offices, private companies in the food industry refining and extracting pomace olive oil, and public and private export-import companies specializing in olive oil. Contrary to what might be expected, the major problem facing producers of olive oil each winter campaign is not whether the harvest will be good, but whether the sale price will allow them to cover production costs and achieve a reasonable margin of profit. These questions are entirely legitimate judging by the importance of the issue and the heavy complexity of the uncertainty, and by competition made tougher by a high level of indebtedness and by the experience and expertise of speculators and producers whose objectives are sometimes conflicting. The aim of this paper is to study the formation mechanism of olive oil prices in order to learn about speculators' behavior and expectations in the market, how they contribute through their industry knowledge and their financial alliances, and the size of the financial challenge that may be involved for them in building global private information networks to take advantage. The methodology used in this paper is based on two stages: in the first stage, we study econometrically the formation mechanisms of the olive oil price in order to understand market participant behavior by implementing ARMA, SARMA and GARCH models and stochastic diffusion processes; the second stage is devoted to prediction purposes, using a combined wavelet-ANN approach. Our main findings indicate that olive oil market participants interact with each other in a way that promotes the formation of stylized facts. Unstable participant behavior creates the volatility clustering, nonlinear dependence and cyclicity phenomena. By imitating each other in some periods of the campaign, different participants contribute to the fat tails observed in the olive oil price distribution. The best prediction model for the olive oil price is based on a back-propagation artificial neural network approach with input information based on wavelet decomposition and recent past history.
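
A minimal sketch of the first-stage econometric modelling: fitting an ARMA-type model with statsmodels. The AR(1) price series is simulated and the ARMA(1,1) order is an illustrative choice; neither reproduces the paper's data or estimates.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(42)

# Synthetic AR(1) series standing in for an olive oil price series
eps = rng.standard_normal(300)
price = np.zeros(300)
for t in range(1, 300):
    price[t] = 0.8 * price[t - 1] + eps[t]
price += 10.0                            # shift to a positive price level

model = ARIMA(price, order=(1, 0, 1))    # ARMA(1,1) on levels
result = model.fit()
print(result.summary())

forecast = result.forecast(steps=12)     # the paper's second stage refines prediction with wavelet-ANN
print(forecast)
```

A GARCH specification for the volatility clustering noted in the findings would be fitted in the same spirit on the model residuals.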

Keywords: olive oil price, stylized facts, ARMA model, SARMA model, GARCH model, combined wavelet-artificial neural network, continuous-time stochastic volatility model

Procedia PDF Downloads 339
71 Using Structured Analysis and Design Technique Method for Unmanned Aerial Vehicle Components

Authors: Najeh Lakhoua

Abstract:

Introduction: Scientific developments and techniques for the systemic approach have generated several names for it: systems analysis, structural analysis. The main purpose of these reflections is to find a multi-disciplinary approach which organizes knowledge, creates a universal design language and controls complex systems. In fact, system analysis is structured sequentially in steps: the observation of the system by various observers in various aspects, the analysis of interactions and regulatory chains, the modeling that takes into account the evolution of the system, and the simulation and real tests in order to obtain consensus. Thus the system approach allows two types of analysis, according to the structure and the function of the system. The purpose of this paper is to present an application of system analysis to Unmanned Aerial Vehicle (UAV) components in order to represent the architecture of this system. Method: Various analysis methods have been proposed in the literature to carry out global analysis from different points of view, such as the SADT method (Structured Analysis and Design Technique) and Petri nets. The methodology adopted in this paper in order to contribute to the system analysis of an Unmanned Aerial Vehicle is based on the use of SADT. We present a functional analysis, based on the SADT method, of the UAV components (body, power supply and platform, computing, sensors, actuators, software, loop principles, flight controls and communications). Results: In this part, we present the application of the SADT method to the functional analysis of the UAV components. This SADT model is composed exclusively of actigrams. It starts with the main function, 'To analyse the UAV components'. This function is then broken into sub-functions, and the process is developed until the last decomposition level has been reached (levels A1, A2, A3 and A4). Recall that SADT techniques are semi-formal; for the same subject, different correct models can be built without knowing with certitude which model is the good one or, at least, the best. In fact, this kind of model allows users sufficient freedom in its construction, and so the subjective factor introduces a supplementary dimension for its validation. That is why the validation step as a whole necessitates the confrontation of different points of view. Conclusion: In this paper, we presented an application of system analysis to Unmanned Aerial Vehicle components based on the SADT method (Structured Analysis and Design Technique). This functional analysis proved the usefulness of the SADT method and its ability to describe complex dynamic systems.

Keywords: system analysis, unmanned aerial vehicle, functional analysis, architecture

Procedia PDF Downloads 204
70 Efficiency and Equity in Italian Secondary School

Authors: Giorgia Zotti

Abstract:

This research comprehensively investigates the multifaceted interplay among school performance, individual backgrounds, and regional disparities within the landscape of Italian secondary education. Leveraging data gleaned from the INVALSI 2021-2022 database, the analysis scrutinizes two fundamental distributions of educational achievement: standardized Invalsi test scores and official grades in Italian and Mathematics, focusing specifically on final-year secondary school students in Italy. The study initially employs Data Envelopment Analysis (DEA) to assess school performance. This involves constructing a production function encompassing inputs (hours spent at school) and outputs (Invalsi scores in Italian and Mathematics, along with official grades in Italian and Math). The DEA approach is applied in both of its versions: traditional and conditional. The latter incorporates environmental variables such as school type, size, demographics, technological resources, and socio-economic indicators. Additionally, the analysis delves into regional disparities by leveraging the Theil index, providing insights into disparities within and between regions. Moreover, within the framework of inequality-of-opportunity theory, the study quantifies the inequality of opportunity in students' educational achievements. The methodology applied is the parametric approach in its ex-ante version, considering diverse circumstances such as parental education and occupation, gender, school region, birthplace, and language spoken at home. A Shapley decomposition is then applied to understand how much each circumstance affects the outcomes. The outcomes of this investigation unveil pivotal determinants of school performance, notably highlighting the influence of school type (Liceo) and socioeconomic status. The research unveils regional disparities, elucidating instances where specific schools outperform others in official grades compared to Invalsi scores, shedding light on the intricate nature of regional educational inequalities. Furthermore, it emphasizes a heightened inequality of opportunity within the distribution of Invalsi test scores in contrast to official grades, underscoring pronounced disparities at the student level. This analysis provides insights for policymakers, educators, and stakeholders, fostering a nuanced understanding of the complexities within Italian secondary education.
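
A minimal sketch of the input-oriented CCR DEA model underlying the school-efficiency assessment, solved as a linear program with scipy; the four schools, their single input (hours at school) and two outputs are toy stand-ins for the study's data, not INVALSI figures.

```python
import numpy as np
from scipy.optimize import linprog

X = np.array([[30.0], [32.0], [28.0], [35.0]])                   # inputs: hours at school
Y = np.array([[200, 6.5], [210, 7.0], [190, 6.0], [205, 6.4]])   # outputs: test score, grade
n, m, s = X.shape[0], X.shape[1], Y.shape[1]

def ccr_efficiency(j0):
    """Solve min theta s.t. X'lambda <= theta*x0, Y'lambda >= y0, lambda >= 0."""
    # Decision vector z = [theta, lambda_1 ... lambda_n]
    c = np.concatenate(([1.0], np.zeros(n)))
    # Input constraints: sum_j lambda_j * x_ij - theta * x_i0 <= 0
    A_in = np.hstack((-X[j0].reshape(m, 1), X.T))
    # Output constraints: -sum_j lambda_j * y_rj <= -y_r0
    A_out = np.hstack((np.zeros((s, 1)), -Y.T))
    A_ub = np.vstack((A_in, A_out))
    b_ub = np.concatenate((np.zeros(m), -Y[j0]))
    bounds = [(None, None)] + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.fun

for j in range(n):
    print(f"school {j}: efficiency = {ccr_efficiency(j):.3f}")
```

The conditional DEA version described in the abstract additionally restricts each school's comparison set to peers with similar environmental variables.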

Keywords: inequality, education, efficiency, DEA approach

Procedia PDF Downloads 75
69 Computational Characterization of Electronic Charge Transfer in Interfacial Phospholipid-Water Layers

Authors: Samira Baghbanbari, A. B. P. Lever, Payam S. Shabestari, Donald Weaver

Abstract:

Existing signal transmission models, although undoubtedly useful, have proven insufficient to explain the full complexity of information transfer within the central nervous system. The development of transformative models will necessitate a more comprehensive understanding of neuronal lipid membrane electrophysiology. Pursuant to this goal, the role of highly organized interfacial phospholipid-water layers emerges as a promising case study. A series of phospholipids in neural-glial gap junction interfaces, as well as cholesterol molecules, have been computationally modelled using high-performance density functional theory (DFT) calculations. Subsequent 'charge decomposition analysis' calculations have revealed a net transfer of charge from phospholipid orbitals through the organized interfacial water layer before ultimately finding its way to cholesterol acceptor molecules. The specific pathway of charge transfer from phospholipid via water layers towards cholesterol has been mapped in detail. Cholesterol is an essential membrane component that is overrepresented in neuronal membranes as compared to other mammalian cells; given this relative abundance, its apparent role as an electronic acceptor may prove to be a relevant factor in further signal transmission studies of the central nervous system. The timescales over which this electronic charge transfer occurs have also been evaluated by utilizing a system design that systematically increases the number of water molecules separating lipids and cholesterol. Memory loss through hydrogen-bonded networks in water can occur at femtosecond timescales, whereas existing action potential-based models are limited to micro- or nanosecond scales. As such, the development of future models that attempt to explain faster-timescale signal transmission in the central nervous system may benefit from our work, which provides additional information regarding fast-timescale energy transfer mechanisms occurring through interfacial water. The study's dataset includes six distinct phospholipids and cholesterol. Ten optimized geometric characteristics (features) were employed to conduct binary classification through an artificial neural network (ANN), differentiating cholesterol from the various phospholipids. This stems from our understanding that all lipids within the first group function as electronic charge donors, while cholesterol serves as an electronic charge acceptor.
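
A minimal sketch of the binary classification step described above: an MLP trained on ten features to separate cholesterol (charge acceptor) from phospholipids (charge donors). The feature matrix here is synthetic; the study's actual descriptors come from DFT-optimized geometries, and the network size and split are illustrative choices.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
# 60 synthetic samples x 10 geometric features; label 1 = cholesterol
X = np.vstack((rng.normal(0.0, 1.0, (30, 10)), rng.normal(1.5, 1.0, (30, 10))))
y = np.array([0] * 30 + [1] * 30)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```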

Keywords: charge transfer, signal transmission, phospholipids, water layers, ANN

Procedia PDF Downloads 73
68 An Improved Adaptive Dot-Shape Beamforming Algorithm Research on Frequency Diverse Array

Authors: Yanping Liao, Zenan Wu, Ruigang Zhao

Abstract:

Frequency diverse array (FDA) beamforming is a technology developed in recent years, and its antenna pattern has a unique angle-distance-dependent characteristic. The beam is required to have strong concentration, high resolution and a low sidelobe level in order to form point-to-point interference in the concentrated set. In order to eliminate the angle-distance coupling of the traditional FDA and to make the beam energy more concentrated, this paper adopts a multi-carrier FDA structure based on a proposed power-exponential frequency offset to improve the array structure and frequency offset of the traditional FDA. The simulation results show that the beam pattern of the array can form a dot-shape beam with more concentrated energy, and its resolution and sidelobe-level performance are improved. However, the covariance matrix of the signal in the traditional adaptive beamforming algorithm is estimated from finite-time snapshot data. When the number of snapshots is limited, the algorithm suffers from an underestimation problem, and the resulting covariance matrix estimation error causes beam distortion, so that the output pattern cannot form a dot-shape beam; main-lobe deviation and high sidelobe levels also arise in the limited-snapshot case. Aiming at these problems, an adaptive beamforming technique based on exponential correction for the multi-carrier FDA is proposed to improve beamforming robustness. The steps are as follows: first, the beamforming of the multi-carrier FDA is formed under the linearly constrained minimum variance (LCMV) criterion. Then the eigenvalue decomposition of the covariance matrix is performed to obtain the diagonal matrix composed of the eigenvalues corresponding to the interference subspace and the noise subspace. Finally, a correction index is introduced to exponentially correct the small eigenvalues of the noise subspace, reducing their divergence and improving the performance of beamforming. The theoretical analysis and simulation results show that the proposed algorithm can make the multi-carrier FDA form a dot-shape beam with limited snapshots, reduce the sidelobe level, improve the robustness of beamforming, and achieve better overall performance.
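
A minimal sketch of the eigenvalue-correction idea: estimate a sample covariance from a limited number of snapshots, exponentially correct the small (noise-subspace) eigenvalues, and form LCMV/MVDR weights. The array, snapshot count, correction index and subspace threshold are illustrative assumptions, not the paper's parameters, and the multi-carrier FDA steering structure is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(1)
M, K = 10, 20                 # sensors, limited number of snapshots
d = np.arange(M)
a = np.exp(1j * np.pi * d * np.sin(np.deg2rad(10)))   # steering vector, 10 deg look direction

# Simulated snapshots: desired signal plus noise (interference omitted for brevity)
s = rng.standard_normal(K) + 1j * rng.standard_normal(K)
noise = 0.5 * (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K)))
X = np.outer(a, s) + noise
R = X @ X.conj().T / K        # sample covariance, poorly estimated at small K

# Eigendecomposition and exponential correction of the small eigenvalues
w_eig, V = np.linalg.eigh(R)
p = 0.5                       # assumed correction index
floor = w_eig.max() * 0.05    # assumed heuristic threshold for the noise subspace
corrected = np.where(w_eig < floor, floor * (w_eig / floor) ** p, w_eig)
R_corr = V @ np.diag(corrected) @ V.conj().T

# LCMV/MVDR weights toward the look direction: w = R^-1 a / (a^H R^-1 a)
Ri_a = np.linalg.solve(R_corr, a)
w = Ri_a / (a.conj() @ Ri_a)
print("output power:", np.real(w.conj() @ R @ w))
```

Raising each small eigenvalue to a power p < 1 (relative to the threshold) lifts it toward the threshold, shrinking the spread of the noise-subspace eigenvalues that would otherwise distort the adaptive weights.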

Keywords: adaptive beamforming, correction index, limited snapshot, multi-carrier frequency diverse array, robust

Procedia PDF Downloads 130
67 Study of Biofouling Wastewater Treatment Technology

Authors: Sangho Park, Mansoo Kim, Kyujung Chae, Junhyuk Yang

Abstract:

The International Maritime Organization (IMO) recognized the problem of invasive species and adopted the "International Convention for the Control and Management of Ships' Ballast Water and Sediments" in 2004, which came into force on September 8, 2017. In 2011, the IMO approved the "Guidelines for the Control and Management of Ships' Biofouling to Minimize the Transfer of Invasive Aquatic Species" to minimize the movement of invasive species by hull-attached organisms, and it required ships to manage the organisms attached to their hulls. Invasive species enter new environments through ships' ballast water and hull attachment. However, several obstacles to implementing these guidelines have been identified, including a lack of underwater cleaning equipment, regulations on underwater cleaning activities in ports, and difficulty accessing crevices in underwater areas. The shipping industry, which is the party responsible for implementing these guidelines, wants to apply them for the fuel cost savings resulting from the removal of organisms attached to the hull, but it anticipates significant difficulties due to the obstacles mentioned above. Robots or divers remove the organisms attached to the hull underwater, and the resulting wastewater includes various species of organisms as well as particles of paint and other pollutants. Currently, there is no technology available to sterilize the organisms in this wastewater or stabilize the heavy metals in the paint particles. In this study, we aim to analyze the characteristics of the wastewater generated from the removal of hull-attached organisms and select the optimal treatment technology. The organisms in the wastewater are treated to meet the biological treatment standard (D-2) using the sterilization technology applied in ships' ballast water treatment systems. The heavy metals and other pollutants in the paint particles generated during removal are treated using stabilization technologies such as thermal decomposition. The wastewater is treated using a two-step process: 1) development of sterilization technology through pretreatment filtration equipment and electrolytic sterilization treatment, and 2) development of technology for removing particulate pollutants such as heavy metals and dissolved inorganic substances. Through this study, we will develop a biofouling removal technology and an environmentally friendly processing system for the waste generated after removal that meets the requirements of the government and the shipping industry and lays the groundwork for future treatment standards.

Keywords: biofouling, ballast water treatment system, filtration, sterilization, wastewater

Procedia PDF Downloads 109
66 Transport Properties of Alkali Nitrites

Authors: Y. Mateyshina, A. Ulihin, N. Uvarov

Abstract:

Electrolytes with different types of charge carriers find wide application, e.g., in sensors, electrochemical equipment, batteries and other devices. One of the important components ensuring stable functioning of such equipment is the electrolyte, which has to be characterized by high conductivity, thermal stability, and a wide electrochemical window. In addition to many of the advantageous characteristics of liquid electrolytes, solid-state electrolytes offer good mechanical stability and a wide working temperature range. Thus the search for new solid electrolyte systems with high conductivity is a pressing task of solid-state chemistry. We investigated families of alkali perchlorates and nitrates earlier; literature data on the transport properties of alkali nitrites are absent. Nevertheless, alkali nitrites MeNO2 (Me = Li+, Na+, K+, Rb+ and Cs+), except for the lithium salt, have high-temperature phases with an NaCl-type crystal structure. The high-temperature phases of the nitrites are orientationally disordered, i.e., the non-spherical anions are reoriented over several equivalent directions in the crystal lattice. Pure lithium nitrite LiNO2 is characterized by an ionic conductivity near 10⁻⁴ S/cm at 180°C, is more stable than lithium nitrate, and can be used as a component for the synthesis of composite electrolytes. In this work, composite solid electrolytes in the binary systems LiNO2 - A (A = MgO, -Al2O3, Fe2O3, CeO2, SnO2, SiO2) were synthesized and their structural, thermodynamic and electrical properties investigated. The alkali nitrite was obtained by an exchange reaction from water solutions of barium nitrite and alkali sulfate. The synthesized salt was characterized by the X-ray powder diffraction technique using a D8 Advance X-ray diffractometer with Cu K radiation. Using thermal analysis, the temperatures of dehydration and thermal decomposition of the salt were determined. The conductivity was measured using a two-electrode scheme in a forevacuum (6.7 Pa) with an HP 4284A precision LCR meter in the frequency range 20 Hz < ν < 1 MHz. The solid composite electrolytes LiNO2 - A (A = MgO, -Al2O3, Fe2O3, CeO2, SnO2, SiO2) were synthesized by mixing preliminarily dehydrated components followed by sintering at 250°C. In the series of alkali metal nitrites from Li+ to Cs+, the conductivity varies non-monotonically with increasing cation radius. The minimum conductivity is observed for KNO2; however, with further increase in the cation radius along the series, the conductivity tends to increase. The work was supported by the Russian Foundation for Basic Research, grant #14-03-31442.

Keywords: conductivity, alkali nitrites, composite electrolytes, transport properties

Procedia PDF Downloads 319
65 An Explorative Analysis of Effective Project Management of Research and Research-Related Projects within a recently Formed Multi-Campus Technology University

Authors: Àidan Higgins

Abstract:

Higher education will be crucial in the coming decades in helping to make Ireland a nation known for innovation, competitive enterprise, and ongoing academic success, as well as a desirable location to live and work, with a high quality of life, vibrant culture, and inclusive social structures. Higher education institutions will actively connect with each student community, with society, and with business; they will help students develop a sense of place and identity in Ireland and provide the tools they need to contribute significantly to the global community. Higher education will also serve as a catalyst for novel ideas through research, many of which will become the foundation for long-lasting, inventive businesses. The 2030 National Strategy on Education focuses on change and on developing our education system, with particular attention to how we carry out research; the emphasis is on knowledge transfer and a consistent research framework, on exploiting opportunities, and on having the necessary expertise. The newly formed Technological Universities (TUs) in Ireland are based on a government initiative to create a new type of higher education institution that focuses on applied and industry-focused research and education. The basis of the TU is to bring together two or more existing institutes of technology to create a larger and more comprehensive institution that offers a wider range of programs and services to students and industry partners. The TU model aims to promote collaboration between academia, industry, and community organizations to foster innovation, research, and economic development. It also aims to enhance the student experience by providing a more seamless pathway from undergraduate to postgraduate studies, as well as greater opportunities for work placements and engagement with industry partners. Additionally, the TUs are designed to place greater emphasis on applied research, technology transfer, and entrepreneurship, with the goal of fostering innovation and contributing to economic growth. A project is a collection of organised tasks carried out precisely to produce a singular output (product or service) within a given time frame, and project management is a set of activities that facilitates the successful implementation of a project. The significant differences between research and development projects are the (lack of) precise requirements and (the inability to) plan an outcome from the beginning of the project. The evaluation criteria for a research project must consider these and other "particularities" of such work; for instance, proving that something cannot be done may be a successful outcome. This study intends to explore how a newly established multi-campus technological university manages research projects effectively. The study will identify the potential and difficulties of managing research projects, the tools, resources and processes available in a multi-campus technological university context, and the methods and approaches employed to deal with these difficulties. Key stakeholders such as project managers, academics, and administrators will be surveyed as part of the study, which will also involve an explorative investigation of current literature and data. The findings of this study will contribute significantly to creating best practices for project management in this setting and offer insightful information about the efficient management of research projects within a multi-campus technological university.

Keywords: project management, research and research-related projects, multi-campus technology university, processes

Procedia PDF Downloads 60
64 Shakespeare's Hamlet in Ballet: Transformation of an Archival Recording of a Neoclassical Ballet Performance into a Contemporary Transmodern Dance Video Applying Postmodern Concepts and Techniques

Authors: Svebor Secak

Abstract:

This four-year artistic research project, hosted by the University of New England, Australia, set out to experiment with non-conventional ways of presenting a language-based narrative in dance using the insights of recent theoretical writing on performance, addressing the research question: how can an archival recording of a neoclassical ballet performance be transformed into a new artistic dance video by implementing postmodern philosophical concepts? The creative practice component takes the form of a dance video, Hamlet Revisited, which is a reworking of the archival recording of the neoclassical ballet Hamlet, augmented by new material produced using the resources, technicians and dancers of the Croatian National Theatre in Zagreb. The methodology for the creation of Hamlet Revisited consisted of extensive field and desk research, after which three dancers were shown the recording of the original Hamlet and then created their artistic responses to it based on their reception and appreciation of it. The dancers responded differently, based upon their diverse dancing backgrounds and life experiences. They began in the role of the audience observing the video of the original ballet and transformed into the role of the choreographer-performer. Their newly recorded material was edited and juxtaposed with the archival recording of Hamlet and other relevant footage, allowing for postmodern features such as aleatoric content, synchronicity, eclecticism and serendipity, thereby establishing communication on a receptive reader-response basis, blending the roles of the choreographer, performer and spectator, and creating an original work of art whose significance lies in the relationship and communication between styles, old and new choreographic approaches, artists and audiences, and the transformation of their traditional roles and relationships. In editing and collating, the following techniques were used with the intention of avoiding a singular narrative: fragmentation, repetition, reverse motion, multiplication of images, split screen, overlaid X-rays, image scratching, slow motion, freeze-frame and simultaneity. Key postmodern concepts considered were: deconstruction, diffuse authorship, supplementation, simulacrum, self-reflexivity, questioning of the role of the author, intertextuality and incredulity toward grand narratives - departing from the original story and thus personalising its ontological themes. From a broad brush of diverse concepts and techniques applied in an almost prescriptive manner, the project focuses on intertextuality, which proves to be valid on at least two levels. The first is the possibility of a more objective analysis in combination with a semiotic structuralist approach, moving from strict relationships between signs to a multiplication of signifiers, considering the dance text as an open construction containing the elusive and enigmatic quality of art that leaves the interpretive position open. The second is the creation of the new work, where the author functions as the editor, aware and conscious of the interplay of the disparate texts and sources which co-act in the mind during the creative process. It is argued here that the eclectic combination of old and new material, through constant oscillations of different discourses upon the same topic, resulted in a transmodern, integrationist work of art that might be applied as a model for reconsidering existing choreographic creations.

Keywords: Ballet Hamlet, intertextuality, transformation, transmodern dance video

Procedia PDF Downloads 257
63 Cardiothoracic Ratio in Postmortem Computed Tomography: A Tool for the Diagnosis of Cardiomegaly

Authors: Alex Eldo Simon, Abhishek Yadav

Abstract:

This study aimed to evaluate the utility of postmortem computed tomography (CT) and heart weight measurements in the assessment of cardiomegaly in cases of sudden death of cardiac origin by comparing the results of these two diagnostic methods. The study retrospectively analyzed postmortem computed tomography (PMCT) data from 54 cases of sudden natural death and compared the findings with those of the autopsy. The cardiothoracic ratio (CTR) was measured from coronal CT images, and the actual cardiac weight was determined by weighing the heart during the autopsy. The inclusion criteria for the study were cases of sudden death suspected to be caused by cardiac pathology, while the exclusion criteria included death due to unnatural causes such as trauma or poisoning, diagnosed natural causes of death related to organs other than the heart, and cases of decomposition. Sensitivity, specificity, and diagnostic accuracy were calculated, and receiver operating characteristic (ROC) curves were generated to evaluate the accuracy of using the CTR to detect an enlarged heart. The CTR is a radiological tool used to assess cardiomegaly by measuring the maximum cardiac diameter in relation to the maximum transverse diameter of the chest wall. The clinically used criterion for the CTR has been modified from 0.50 to 0.57 for use in postmortem settings, where abnormalities are detected by comparing CTR values to this threshold. A CTR value of 0.57 or higher is suggestive of hypertrophy but not conclusive. Similarly, heart weight is measured during the traditional autopsy, and a cardiac weight greater than 450 grams is defined as hypertrophy. Of the 54 cases evaluated, 22 (40.7%) had a CTR above 0.50 and up to 0.57, and 12 cases (22.2%) had a CTR greater than 0.57, which was defined as hypertrophy. The mean CTR was 0.52 ± 0.06, and the mean heart weight was 369.4 ± 99.9 grams. Twelve cases were found to have hypertrophy as defined by PMCT, while only 9 cases were identified with hypertrophy at traditional autopsy. The sensitivity of the hypertrophy test was 55.56% (95% CI: 26.66-81.12), the specificity was 84.44% (95% CI: 71.22-92.25), and the diagnostic accuracy was 79.63% (95% CI: 67.1-88.23). A limitation of the study was the small sample size of only 54 cases, which may limit the generalizability of the findings. The comparison of the cardiothoracic ratio with heart weight in this study suggests that PMCT may serve as a screening tool in medico-legal autopsies when performed by forensic pathologists. However, it should be noted that the low sensitivity of the test (55.56%) may limit its diagnostic accuracy, and therefore further studies with larger sample sizes and more diverse populations are needed to validate these findings.
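
As a quick illustration of how these diagnostic metrics follow from the paired PMCT/autopsy classifications, the sketch below recomputes sensitivity, specificity, and accuracy from a 2x2 contingency table. The underlying cell counts are not stated in the abstract; the values used here (TP=5, FP=7, FN=4, TN=38) are inferred from the reported totals and percentages and should be treated as an assumption.

```python
# Minimal sketch: diagnostic metrics for PMCT-based hypertrophy detection,
# using autopsy heart weight (> 450 g) as the reference standard.
# NOTE: the 2x2 counts below are inferred from the abstract's reported
# figures (12 PMCT-positive, 9 autopsy-positive, n = 54), not stated in it.

tp, fp, fn, tn = 5, 7, 4, 38  # assumed cell counts; tp + fp = 12, tp + fn = 9

sensitivity = tp / (tp + fn)                 # 5 / 9   = 55.56%
specificity = tn / (tn + fp)                 # 38 / 45 = 84.44%
accuracy = (tp + tn) / (tp + fp + fn + tn)   # 43 / 54 = 79.63%

print(f"sensitivity = {sensitivity:.2%}")
print(f"specificity = {specificity:.2%}")
print(f"accuracy    = {accuracy:.2%}")
```

These inferred counts reproduce all three reported percentages exactly, which is why they are a plausible reconstruction of the study's contingency table.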

Keywords: PMCT, virtopsy, CTR, cardiothoracic ratio

Procedia PDF Downloads 81
62 The Impact of Monetary Policy on Aggregate Market Liquidity: Evidence from Indian Stock Market

Authors: Byomakesh Debata, Jitendra Mahakud

Abstract:

The recent financial crisis was characterized by massive monetary policy interventions by central banks, and it has amplified the importance of liquidity for the stability of the stock market. This paper empirically elucidates the actual impact of monetary policy interventions on stock market liquidity, covering all National Stock Exchange (NSE) stocks that were traded continuously from 2002 to 2015. The present study employs a multivariate VAR model along with a VAR Granger causality test, impulse response functions, a block exogeneity test, and variance decomposition to analyze the direction as well as the magnitude of the relationship between monetary policy and market liquidity. Our analysis finds a unidirectional relationship between monetary policy (call money rate, base money growth rate) and aggregate market liquidity (traded value, turnover ratio, Amihud illiquidity ratio, turnover price impact, high-low spread). The impulse response function analysis clearly depicts the influence of monetary policy on stock liquidity for every unit innovation in the monetary policy variables. Our results suggest that an expansionary monetary policy increases aggregate stock market liquidity, and the reverse is documented during the tightening of monetary policy. To ascertain whether our findings are consistent across all periods, we divided the period of study into a pre-crisis period (2002 to 2007) and a post-crisis period (2007 to 2015) and ran the same set of models. Interestingly, all liquidity variables are highly significant in the post-crisis period, whereas the pre-crisis period witnessed only moderate predictability of monetary policy. To check the robustness of our results, we ran the same set of VAR models with different monetary policy variables and found similar results. Unlike previous studies, we found most of the liquidity variables to be significant throughout the sample period, which reveals the predictability of monetary policy for aggregate market liquidity. This study contributes to the existing body of literature by documenting a strong predictability of monetary policy on stock liquidity in an emerging economy with an order-driven market-making system, like India. Most previous studies have been carried out in developed economies with quote-driven or hybrid market-making systems, and their results are ambiguous across different periods. More broadly, this study may be considered a baseline study for further work on the macroeconomic determinants of stock liquidity at the individual as well as the aggregate level.
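
For readers unfamiliar with this toolkit, a minimal sketch of the estimation pipeline described above is shown here using statsmodels; the column names (call_rate, base_money_growth, amihud, turnover_ratio), data file, and lag settings are illustrative assumptions, not the authors' actual specification.

```python
# Sketch of a VAR analysis of monetary policy vs. market liquidity,
# in the spirit of the methodology described above (illustrative only).
import pandas as pd
from statsmodels.tsa.api import VAR

# Hypothetical monthly series: a policy rate and aggregate liquidity proxies.
df = pd.read_csv("nse_liquidity.csv", index_col=0, parse_dates=True)
data = df[["call_rate", "base_money_growth", "amihud", "turnover_ratio"]]

model = VAR(data)
res = model.fit(maxlags=12, ic="aic")   # lag order selected by AIC

# Granger causality: does the call money rate help predict illiquidity?
gc = res.test_causality("amihud", ["call_rate"], kind="f")
print(gc.summary())

# Impulse responses and forecast error variance decomposition, 10 steps ahead.
res.irf(10).plot(orth=True)
print(res.fevd(10).summary())
```

The same loop would be rerun on the pre-crisis and post-crisis subsamples to reproduce the period-split comparison the abstract describes.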

Keywords: market liquidity, monetary policy, order driven market, VAR, vector autoregressive model

Procedia PDF Downloads 374
61 Bioincision of Gmelina arborea Roxb. Heartwood with Inonotus dryophilus (Berk.) Murr. for Improved Chemical Uptake and Penetration

Authors: A. O. Adenaiya, S. F. Curling, O. Y. Ogunsanwo, G. A. Ormondroyd

Abstract:

Treatment of wood with chemicals in order to prolong its service life may prove difficult in some refractory wood species. This impermeability is usually due to biochemical changes that occur during heartwood formation. Bioincision, a short-term, controlled microbial decomposition of wood, is one of the promising approaches capable of improving the amenability of refractory wood to chemical treatments. Gmelina arborea, a mainstay timber species in Nigeria, has impermeable heartwood due to the excessive tyloses that occlude its vessels. Therefore, chemical uptake and penetration were investigated in Gmelina arborea heartwood bioincised with the fungus Inonotus dryophilus. Five mature Gmelina arborea trees were harvested from the departmental plantation in Ajibode, Ibadan, Nigeria, and a 300 cm bolt was obtained from the basal portion of each tree. The heartwood portion of the bolts was extracted and converted into samples of 20 mm x 20 mm x 60 mm, which were subsequently conditioned (20°C at 65% relative humidity). Twenty wood samples each were bioincised with the white-rot fungus Inonotus dryophilus (ID 999) for 3, 5, 7 and 9 weeks using a standard procedure, while a set of sterile control samples was prepared. Ten of each set of bioincised and control samples were pressure-treated with 5% Tanalith preservative, while the other ten were pressure-treated with a liquid dye for easy traceability of the chemical in the wood, both using a full-cell treatment process. The bioincised and control samples were evaluated for weight loss before chemical treatment (WL, %), preservative absorption (PA, kg/m³), preservative retention (PR, kg/m³), axial absorption (AA, kg/m³), lateral absorption (LA, kg/m³), axial penetration depth (APD, mm), radial penetration depth (RPD, mm), and tangential penetration depth (TPD, mm). The data obtained were analyzed using ANOVA at α = 0.05. Results show that weight loss was least in the samples bioincised for 3 weeks (0.09%) and highest after 7 weeks of bioincision (0.48%). The samples bioincised for 3 weeks had the least PA (106.72 kg/m³) and PR (5.87 kg/m³), while the highest PA (134.9 kg/m³) and PR (7.42 kg/m³) were observed after 7 weeks of bioincision. The AA ranged from 27.28 kg/m³ (3 weeks) to 67.05 kg/m³ (5 weeks), while the LA was least after 5 weeks of incubation (28.1 kg/m³) and highest after 9 weeks (71.74 kg/m³). A significantly lower APD was observed in the control samples (6.97 mm) than in the samples bioincised for 9 weeks (19.22 mm). The RPD increased from 0.08 mm (control samples) to 3.48 mm (5 weeks), while the TPD ranged from 0.38 mm (control samples) to 0.63 mm (9 weeks), implying that liquid flow in the wood was predominantly through the axial pathway. Bioincising G. arborea heartwood with I. dryophilus for 9 weeks is capable of enhancing chemical uptake and deeper penetration of chemicals in the wood through degradation of the occluding vessel tyloses, accompanied by only minimal degradation of the polymeric wood constituents.
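
The ANOVA step mentioned above can be sketched as follows with scipy; the per-group absorption readings are synthetic placeholders generated for illustration, not the study's measurements.

```python
# Minimal sketch of the one-way ANOVA (alpha = 0.05) comparing preservative
# absorption across bioincision durations; the sample values are synthetic
# placeholders, not the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical PA readings (kg/m^3) for 10 replicates per treatment group.
groups = {
    "control": rng.normal(100, 8, 10),
    "3_weeks": rng.normal(107, 8, 10),
    "5_weeks": rng.normal(120, 8, 10),
    "7_weeks": rng.normal(135, 8, 10),
    "9_weeks": rng.normal(130, 8, 10),
}

f_stat, p_value = stats.f_oneway(*groups.values())
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Reject H0: mean absorption differs across incubation durations.")
```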

Keywords: bioincision, chemical uptake, penetration depth, refractory wood, tyloses

Procedia PDF Downloads 106
60 Simulation, Design, and 3D Print of Novel Highly Integrated TEG Device with Improved Thermal Energy Harvest Efficiency

Authors: Jaden Lu, Olivia Lu

Abstract:

Despite the remarkable advancement of solar cell technology, the challenge of optimizing total solar energy harvest efficiency persists, primarily due to significant heat loss. This excess heat not only diminishes solar panel output efficiency but also curtails the panel's operational lifespan. A promising approach to addressing this issue is the conversion of surplus heat into electricity. In recent years, there has been growing interest in the use of thermoelectric generators (TEGs) as a potential solution. The integration of efficient TEG devices holds the promise of augmenting overall energy harvest efficiency while prolonging the longevity of solar panels. While certain research groups have proposed the integration of solar cells and TEG devices, a substantial gap between conceptualization and practical implementation remains, largely attributable to the low thermal energy conversion efficiency of TEG devices. To bridge this gap and meet the requisites of practical application, a feasible strategy involves incorporating a substantial number of p-n junctions within a confined unit volume. However, the manufacturing of high-density TEG p-n junctions presents a formidable challenge. The prevalent solution often leads to large device sizes to accommodate enough p-n junctions, consequently complicating integration with solar cells. Recently, the adoption of 3D printing technology has emerged as a promising way to address this challenge by fabricating high-density p-n arrays, although further development is necessary. Presently, the primary focus is on the 3D printing of vertically layered TEG devices, wherein p-n junction density remains constrained by spatial limitations and the constraints of 3D printing techniques. This study proposes a novel device configuration featuring horizontally arrayed p-n junctions of Bi2Te3. The structural design of the device is simulated with the Finite Element Method (FEM) in COMSOL Multiphysics, and various device configurations are compared to identify the optimal device structure. Based on the simulation results, a new TEG device is fabricated using selective laser melting (SLM) 3D printing technology, with Fusion 360 used to translate the COMSOL device structure into a 3D print file. The horizontal design offers a unique advantage, enabling the fabrication of densely packed, three-dimensional p-n junction arrays. The fabrication process entails printing a single row of horizontal p-n junctions in one layer using SLM; successive rows of p-n junction arrays are then printed within the same layer, interconnected by thermally conductive copper, and this sequence is replicated across multiple layers separated by thermally insulating glass. The result is a highly compact three-dimensional TEG device with high-density p-n junctions. The fabricated TEG device is then attached to the bottom of the solar cell using thermal glue, and the whole device is characterized, with output data closely matching the COMSOL simulation results. Future research will encompass the refinement of thermoelectric materials, including the advancement of high-resolution 3D printing techniques tailored to diverse thermoelectric materials, along with the optimization of material microstructures such as porosity and doping. The objective is to achieve an optimal and highly integrated PV-TEG device that can substantially increase solar energy harvest efficiency.
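
As a rough companion to the FEM simulation, the open-circuit voltage and matched-load power of a series-connected p-n junction array can be estimated from standard thermoelectric relations; the junction count, Seebeck coefficient, temperature difference, and internal resistance below are illustrative Bi2Te3-like assumptions, not the device's measured parameters.

```python
# Back-of-envelope TEG output estimate for a series array of N p-n junctions:
# V_oc = N * S_pn * dT, and P_max = V_oc^2 / (4 * R_int) at matched load.
# All parameter values are illustrative assumptions, not measured device data.

N = 200          # number of p-n junctions in series (assumed)
S_pn = 400e-6    # combined Seebeck coefficient per junction, V/K (Bi2Te3-like)
dT = 30.0        # temperature difference across the device, K (assumed)
R_int = 2.0      # total internal electrical resistance, ohm (assumed)

V_oc = N * S_pn * dT                 # open-circuit voltage, V
P_max = V_oc**2 / (4.0 * R_int)      # power at matched external load, W

print(f"V_oc  = {V_oc:.2f} V")          # 200 * 400e-6 * 30 = 2.40 V
print(f"P_max = {P_max * 1e3:.0f} mW")  # 2.4^2 / 8 = 0.72 W -> 720 mW
```

The quadratic dependence of P_max on V_oc is what makes packing more junctions per unit volume, the central goal of the horizontal array design, so consequential for output power.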

Keywords: thermoelectric, finite element method, 3D print, energy conversion

Procedia PDF Downloads 62
59 An Energy Integration Study While Utilizing Heat of Flue Gas: Sponge Iron Process

Authors: Venkata Ramanaiah, Shabina Khanam

Abstract:

Enormous potential for saving energy exists in coal-based sponge iron plants, as these plants waste a high percentage of energy per unit of sponge iron produced. In the present paper, an energy integration option is proposed for a coal-based sponge iron plant of 100 tonnes per day production capacity operated in India using the SL/RN (Stelco-Lurgi/Republic Steel-National Lead) process. The plant consists of a rotary kiln, rotary cooler, dust settling chamber, after-burning chamber, evaporating cooler, electrostatic precipitator (ESP), wet scrapper and chimney as its important equipment. Principles of process integration are used in the proposed option, which preheats the kiln inlet streams, such as kiln feed and slinger coal, up to 170°C using the waste gas exiting the ESP. Further, the kiln outlet stream is cooled from 1020°C to 110°C using kiln air. The working areas in the plant where energy is being lost and can be conserved are identified. Detailed material and energy balances are carried out around the sponge iron plant, and a modified model is developed to find the coal requirement of the proposed option, based on hot utility, heats of reaction, kiln feed and air preheating, radiation losses, dolomite decomposition, the heat required to vaporize the coal volatiles, etc. As coal is used both as a utility and as a process stream, an iterative approach is used in the solution methodology to compute coal consumption. Further, the water consumption, operating cost, capital investment, waste gas generation, profit, and payback period of the modification are computed, and operational aspects of the proposed design are discussed. To recover and integrate the waste heat available in the plant, three gas-solid heat exchangers and four insulated ducts, each with one FD fan, are installed additionally. The proposed option thus requires a total capital investment of $0.84 million. Preheating the kiln feed, slinger coal and kiln air streams reduces coal consumption by 24.63%, which in turn reduces waste gas generation by 25.2% in comparison to the existing process. Moreover, a 96% reduction in water consumption is also observed, which is an added advantage of the modification. Consequently, the total profit is found to be $2.06 million/year, with a payback period of only 4.97 months. The energy efficient factor (EEF), which is the percentage of the maximum energy that can be saved through the design, is found to be 56.7%. Results of the proposed option are also compared with the literature and found to be in good agreement.
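
Because coal enters the balance both as the utility and as a process stream, the coal requirement appears on both sides of the energy balance and must be solved iteratively, as the abstract notes. A minimal fixed-point sketch of that idea is shown below; the balance function, its coefficients, and the calorific value are hypothetical stand-ins for the paper's detailed model.

```python
# Fixed-point iteration for coal consumption when coal is both utility and
# process stream. The balance function below is a hypothetical stand-in for
# the detailed material/energy balance model described in the abstract.

def coal_required(coal_feed_kg_h: float) -> float:
    """Return the coal demand (kg/h) implied by the energy balance for a given
    assumed coal feed. Coefficients are illustrative, not from the paper."""
    heat_duty = 45e6 + 1200.0 * coal_feed_kg_h  # kJ/h: process duty + heating
    cv_coal = 25_000.0                          # kJ/kg, assumed calorific value
    return heat_duty / cv_coal

coal = 3000.0  # kg/h, initial guess
for it in range(100):
    new_coal = coal_required(coal)
    if abs(new_coal - coal) < 1e-6:  # converged: feed equals implied demand
        break
    coal = new_coal

print(f"converged after {it} iterations: coal = {coal:.1f} kg/h")
```

Since each extra kilogram of coal adds far less heat demand than it supplies, the mapping is a strong contraction and the iteration converges in a handful of steps.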

Keywords: coal consumption, energy conservation, process integration, sponge iron plant

Procedia PDF Downloads 144
58 Finite Element Method (FEM) Simulation, Design, and 3D Print of Novel Highly Integrated PV-TEG Device with Improved Solar Energy Harvest Efficiency

Authors: Jaden Lu, Olivia Lu

Abstract:

Despite the remarkable advancement of solar cell technology, the challenge of optimizing total solar energy harvest efficiency persists, primarily due to significant heat loss. This excess heat not only diminishes solar panel output efficiency but also curtails the panel's operational lifespan. A promising approach to addressing this issue is the conversion of surplus heat into electricity. In recent years, there has been growing interest in the use of thermoelectric generators (TEGs) as a potential solution. The integration of efficient TEG devices holds the promise of augmenting overall energy harvest efficiency while prolonging the longevity of solar panels. While certain research groups have proposed the integration of solar cells and TEG devices, a substantial gap between conceptualization and practical implementation remains, largely attributable to the low thermal energy conversion efficiency of TEG devices. To bridge this gap and meet the requisites of practical application, a feasible strategy involves incorporating a substantial number of p-n junctions within a confined unit volume. However, the manufacturing of high-density TEG p-n junctions presents a formidable challenge. The prevalent solution often leads to large device sizes to accommodate enough p-n junctions, consequently complicating integration with solar cells. Recently, the adoption of 3D printing technology has emerged as a promising way to address this challenge by fabricating high-density p-n arrays, although further development is necessary. Presently, the primary focus is on the 3D printing of vertically layered TEG devices, wherein p-n junction density remains constrained by spatial limitations and the constraints of 3D printing techniques. This study proposes a novel device configuration featuring horizontally arrayed p-n junctions of Bi2Te3. The structural design of the device is simulated with the Finite Element Method (FEM) in COMSOL Multiphysics, and various device configurations are compared to identify the optimal device structure. Based on the simulation results, a new TEG device is fabricated using selective laser melting (SLM) 3D printing technology, with Fusion 360 used to translate the COMSOL device structure into a 3D print file. The horizontal design offers a unique advantage, enabling the fabrication of densely packed, three-dimensional p-n junction arrays. The fabrication process entails printing a single row of horizontal p-n junctions in one layer using SLM; successive rows of p-n junction arrays are then printed within the same layer, interconnected by thermally conductive copper, and this sequence is replicated across multiple layers separated by thermally insulating glass. The result is a highly compact three-dimensional TEG device with high-density p-n junctions. The fabricated TEG device is then attached to the bottom of the solar cell using thermal glue, and the whole device is characterized, with output data closely matching the COMSOL simulation results. Future research will encompass the refinement of thermoelectric materials, including the advancement of high-resolution 3D printing techniques tailored to diverse thermoelectric materials, along with the optimization of material microstructures such as porosity and doping. The objective is to achieve an optimal and highly integrated PV-TEG device that can substantially increase solar energy harvest efficiency.

Keywords: thermoelectric, finite element method, 3D print, energy conversion

Procedia PDF Downloads 67
57 A First-Principles Investigation of Magnesium-Hydrogen System: From Bulk to Nano

Authors: Paramita Banerjee, K. R. S. Chandrakumar, G. P. Das

Abstract:

Bulk MgH2 has drawn much attention for the purpose of hydrogen storage because of its high hydrogen storage capacity (~7.7 wt %) as well as its low cost and abundant availability. However, its practical usage has been hindered by its high hydrogen desorption enthalpy (~0.8 eV/H2 molecule), which results in an undesirably high desorption temperature of 300°C at 1 bar H2 pressure. To surmount the limitations of bulk MgH2 for hydrogen storage, a detailed first-principles density functional theory (DFT) study is reported here on the structure and stability of neutral (Mgm) and positively charged (Mgm+) Mg nanoclusters of different sizes (m = 2, 4, 8 and 12), as well as their interaction with molecular hydrogen (H2). It has been found that, due to the absence of d-electrons in the Mg atoms, hydrogen remains in molecular form even after its interaction with neutral and charged Mg nanoclusters. Interestingly, the H2 molecules do not enter the interstitial positions of the nanoclusters; rather, they remain on the surface, ornamenting these nanoclusters and forming new structures with a gravimetric density higher than 15 wt %. Our observation is that the inclusion of Grimme's DFT-D3 dispersion correction in this weakly interacting system has a significant effect on the binding of the H2 molecules to these nanoclusters. The dispersion-corrected interaction energy (IE) values (0.1-0.14 eV/H2 molecule) fall in the energy window that is ideal for hydrogen storage. These IE values are further verified using high-level coupled-cluster calculations with non-iterative triples corrections, i.e., CCSD(T), which is considered a highly accurate quantum chemical method, thereby confirming the accuracy of our dispersion-corrected DFT calculations. The significance of the polarization and dispersion energies in the binding of the H2 molecules is confirmed by energy decomposition analysis (EDA). A total of 16, 24, 32 and 36 H2 molecules can be attached to the neutral and charged nanoclusters of size m = 2, 4, 8 and 12, respectively. Ab initio molecular dynamics (AIMD) simulation shows that the outermost H2 molecules are desorbed at a rather low temperature, viz. 150 K (-123°C), which is expected; however, complete dehydrogenation of these nanoclusters occurs at around 100°C. Most importantly, the host nanoclusters remain stable up to ~500 K (227°C). All these results on the adsorption and desorption of molecular hydrogen on neutral and charged Mg nanocluster systems point to the possibility of reducing the dehydrogenation temperature of bulk MgH2 by designing new Mg-based nanomaterials that adsorb molecular hydrogen via this weak Mg-H2 interaction rather than the strong Mg-H bond. Notwithstanding the fact that, in practical applications, these interactions will be further complicated by the effect of substrates as well as interactions with other clusters, the present study has implications for our fundamental understanding of this problem.
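
The quoted gravimetric density can be checked with a simple mass-ratio estimate for the Mgm(H2)n complexes, using the H2 uptakes reported above; this arithmetic is ours, not taken from the paper.

```python
# Quick check of gravimetric H2 density (wt %) for Mg_m(H2)_n complexes,
# using the uptakes reported in the abstract as (m, n) pairs.
M_MG, M_H2 = 24.305, 2.016  # atomic/molecular masses in g/mol

for m, n in [(2, 16), (4, 24), (8, 32), (12, 36)]:
    wt_pct = 100.0 * n * M_H2 / (m * M_MG + n * M_H2)
    print(f"Mg{m}(H2){n}: {wt_pct:.1f} wt %")

# Every cluster exceeds 15 wt %, consistent with the abstract's claim
# (e.g., Mg12(H2)36 gives ~19.9 wt %; smaller clusters give even more).
```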

Keywords: density functional theory, DFT, hydrogen storage, molecular dynamics, molecular hydrogen adsorption, nanoclusters, physisorption

Procedia PDF Downloads 415
56 Prediction of Live Birth in a Matched Cohort of Elective Single Embryo Transfers

Authors: Mohsen Bahrami, Banafsheh Nikmehr, Yueqiang Song, Anuradha Koduru, Ayse K. Vuruskan, Hongkun Lu, Tamer M. Yalcinkaya

Abstract:

In recent years, we have witnessed an explosion of studies aimed at using a combination of artificial intelligence (AI) and time-lapse imaging data on embryos to improve IVF outcomes. However, despite promising results, no study has used a matched cohort of transferred embryos that differ only in pregnancy outcome, i.e., embryos from a single clinic that are similar in parameters such as morphokinetic condition, patient age, and overall clinic and lab performance. Here, we used time-lapse data on embryos with known pregnancy outcomes to see if the rich spatiotemporal information embedded in this data would allow the prediction of the pregnancy outcome regardless of such critical parameters. Methodology: We performed a retrospective analysis of time-lapse data from our IVF clinic, which uses the EmbryoScope for all embryo culture to the blastocyst stage, with known clinical outcomes of live birth vs. nonpregnant (embryos with spontaneous abortion outcomes were excluded). We used time-lapse data from 200 elective single-transfer embryos randomly selected from January 2019 to June 2021. Our sample included 100 embryos in each group, with no significant difference in patient age (P=0.9550) or morphokinetic scores (P=0.4032). Data from all patients were combined into a fourth-order tensor, and feature extraction was subsequently carried out by a tensor decomposition methodology. The features were then used in a machine learning classifier to classify the two groups. Major findings: The performance of the model was evaluated using 100 random subsampling cross-validations (80% train, 20% test). The prediction accuracy, averaged across the 100 permutations, exceeded 80%. We also performed a random grouping analysis, in which the labels (live birth, nonpregnant) were randomly assigned to embryos, which yielded 50% accuracy. Conclusion: The high accuracy in the main analysis and the low accuracy in the random grouping analysis suggest a consistent spatiotemporal pattern that is associated with pregnancy outcomes, regardless of patient age and embryo morphokinetic condition, and beyond already known parameters such as early cleavage or early blastulation. Despite the small sample size, this ongoing analysis is the first to show the potential of AI methods in capturing the complex morphokinetic changes embedded in embryo time-lapse data that contribute to successful pregnancy outcomes, regardless of already known parameters. Results on a larger sample size, with complementary analysis on the prediction of other key outcomes such as embryo euploidy and aneuploidy, will be presented at the meeting.
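
A minimal sketch of the described pipeline is given below, pairing a CP tensor decomposition (via tensorly) with a simple classifier and random subsampling validation; the tensor shape, rank, and classifier choice are our assumptions, since the abstract does not specify them.

```python
# Sketch: tensor-decomposition features + classifier for embryo outcomes.
# The tensor dimensions, rank, and classifier are illustrative assumptions.
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical 4th-order tensor: embryos x time points x height x width.
X = tl.tensor(np.random.rand(200, 50, 32, 32))
y = np.array([1] * 100 + [0] * 100)  # 1 = live birth, 0 = nonpregnant

# CP decomposition; the embryo-mode factor matrix serves as the feature set.
weights, factors = parafac(X, rank=8)
features = tl.to_numpy(factors[0])   # shape: (200, 8), one row per embryo

# 100 random subsampling splits (80% train / 20% test), as in the abstract.
accs = []
for seed in range(100):
    Xtr, Xte, ytr, yte = train_test_split(
        features, y, test_size=0.2, stratify=y, random_state=seed)
    accs.append(LogisticRegression(max_iter=1000).fit(Xtr, ytr).score(Xte, yte))

print(f"mean accuracy over 100 splits: {np.mean(accs):.2%}")
```

On the random tensor above the accuracy hovers near chance; the abstract's point is that on real time-lapse data the same pipeline exceeds 80%, while randomly shuffled labels fall back to 50%.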

Keywords: IVF, embryo, machine learning, time-lapse imaging data

Procedia PDF Downloads 92
55 Electrodeposition of Silicon Nanoparticles Using Ionic Liquid for Energy Storage Application

Authors: Anjali Vanpariya, Priyanka Marathey, Sakshum Khanna, Roma Patel, Indrajit Mukhopadhyay

Abstract:

Silicon (Si) is a promising negative electrode material for lithium-ion batteries (LiBs) due to its low cost, non-toxicity, and high theoretical capacity of 4200 mAhg⁻¹. The primary challenge in the application of Si-based LiBs is the large volume expansion (~300%) during the charge-discharge process. The incorporation of graphene and carbon nanotubes (CNTs), morphological control, and the use of nanoparticles have been employed as strategies to tackle the volume expansion issue. Molten salt methods can also resolve the issue, but their high-temperature requirement limits their application; for a sustainable and practical approach, room-temperature (RT) methods are essential. The use of ionic liquids (ILs) for the electrodeposition of Si nanostructures can resolve the temperature issue while also providing a greener medium. In this work, the electrodeposition of Si nanoparticles on a gold substrate was successfully carried out in an IL medium, 1-butyl-3-methylimidazolium bis(trifluoromethylsulfonyl)imide (BMImTf₂N), at room temperature. Cyclic voltammetry (CV) suggests the sequential reduction of Si⁴⁺ to Si²⁺ and then to Si nanoparticles (SiNs). The structure and morphology of the electrodeposited SiNs were investigated by FE-SEM, which revealed interconnected Si nanoparticles with an average particle size of ~100-200 nm. XRD and XPS data confirm the deposition of Si on Au (111). The first discharge and charge capacities of the Si anode material were found to be 1857 and 422 mAhg⁻¹, respectively, at a current density of 7.8 Ag⁻¹. The irreversible capacity of the first discharge-charge process can be attributed to solid electrolyte interface (SEI) formation via electrolyte decomposition and to trapped Li⁺ inserted into the inner pores of Si. Pulverization of the SiNs results in the creation of new active sites, which facilitates the formation of new SEI in subsequent cycles, leading to fading in specific capacity. After 20 cycles, the charge-discharge profiles stabilized, and a reversible capacity of 150 mAhg⁻¹ was retained. Electrochemical impedance spectroscopy (EIS) data show a decrease in the Rct value from 94.7 to 47.6 kΩ after 50 cycles of charge-discharge, which demonstrates an improvement in the interfacial charge transfer kinetics. The decrease in the Warburg impedance after 50 charge-discharge cycles indicates facile diffusion in the fragmented, smaller Si nanoparticles. In summary, Si nanoparticles were deposited on a gold substrate using an IL medium and characterized with different analytical techniques. The synthesized material was successfully utilized for a LiB application, which is well supported by the CV and EIS data.
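
The scale of the first-cycle irreversible loss implied by these numbers can be made explicit with a short calculation; the arithmetic below is ours, derived only from the capacities quoted in the abstract.

```python
# First-cycle coulombic efficiency and irreversible capacity loss for the
# Si anode, computed from the capacities quoted in the abstract.
q_discharge_1 = 1857.0  # mAh/g, first discharge (lithiation)
q_charge_1 = 422.0      # mAh/g, first charge (delithiation)
q_stable = 150.0        # mAh/g, reversible capacity after ~20 cycles

ice = 100.0 * q_charge_1 / q_discharge_1    # initial coulombic efficiency
irreversible = q_discharge_1 - q_charge_1   # lost to SEI / trapped Li+
retention = 100.0 * q_stable / q_charge_1   # retention vs. first charge

print(f"initial coulombic efficiency: {ice:.1f} %")                  # ~22.7 %
print(f"first-cycle irreversible loss: {irreversible:.0f} mAh/g")    # 1435 mAh/g
print(f"capacity retention after stabilization: {retention:.1f} %")  # ~35.5 %
```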

Keywords: silicon nanoparticles, ionic liquid, electrodeposition, cyclic voltammetry, Li-ion battery

Procedia PDF Downloads 125