Search results for: typical (repetitive) large scale projects

85 A Retrospective Study of Vaginal Stenosis Following Treatment of Cervical Cancers and the Effectiveness of Rehabilitation Interventions

Authors: Manjusha R. Vagal, Shyam K. Shrivastava, Umesh Mahantshetty, Sudeep Gupta, Supriya Chopra, Reena Engineer, Amita Maheshwari, Atul Buduk

Abstract:

Vaginal stenosis is a common side effect of pelvic radiotherapy in cervical cancer patients; it affects women’s health negatively and prevents adequate vaginal/cervical examination. Vaginal dilation with a dilator is routine practice and is internationally advocated as a prophylactic measure to preserve vaginal patency. This retrospective study was carried out to assess the usefulness of vaginal dilation following pelvic radiation therapy in cervical cancer patients in India. Data were collected from the medical records of 183 cervical cancer patients who met the study criteria, covering the stage of the disease, the treatment received, the time at which dilation commenced after radiation therapy, sexual status and side effects associated with dilation practice. Data on vaginal dimensions, measured as the insertion length of a small, medium and large dilator, were collected at regular follow-ups up to 36 months and beyond. Vaginal dimensions measured with the medium dilator were used to analyse the results of dilation therapy using a paired t-test. Patients who underwent vaginal dilation with a dilator maintained vaginal patency, and the mean vaginal length increased significantly from 8.02 ± 2.69 cm to 9.96 ± 2.89 cm (p < 0.001). No significant difference in vaginal patency was found between different intervals of initiation of dilation therapy. At three years and beyond following dilation therapy, a significant increase in vaginal length was observed (p = 0.0001) in both sexually active and inactive patients. Records of vaginal dose during brachytherapy were incomplete, and hence the secondary objective of the study, to determine the effect of radiotherapy on the outcome of the rehabilitation intervention, could not be studied in detail. This retrospective study found that dilation therapy with vaginal dilators after pelvic radiotherapy is effective in preventing vaginal stenosis and improving vaginal patency, and cannot be substituted by vaginal intercourse. Sexual quality of life assessment in the Indian population needs much more attention.
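
The gain in mean vaginal length reported above (8.02 ± 2.69 cm to 9.96 ± 2.89 cm, p < 0.001) comes from a paired t-test on repeated measurements per patient. The snippet below is a minimal sketch of that test using a small set of hypothetical paired measurements, not the study's data.

```python
import numpy as np
from scipy import stats

# Hypothetical paired vaginal-length measurements (cm) at baseline and after
# dilation therapy; the real study analysed records of 183 patients.
baseline = np.array([7.5, 8.0, 6.8, 9.1, 8.4, 7.9, 8.6, 7.2])
followup = np.array([9.0, 9.8, 8.5, 11.0, 10.2, 9.5, 10.4, 9.1])

t_stat, p_value = stats.ttest_rel(followup, baseline)   # paired t-test
mean_gain = (followup - baseline).mean()
print(f"mean gain = {mean_gain:.2f} cm, t = {t_stat:.2f}, p = {p_value:.4f}")
```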

Keywords: Dilator, sexually active, vaginal dilation, vaginal stenosis.

84 Effects of Test Environment on the Sliding Wear Behaviour of Cast Iron, Zinc-Aluminium Alloy and Its Composite

Authors: Mohammad M. Khan, Gajendra Dixit

Abstract:

Partially lubricated sliding wear behaviour of a zinc-based alloy reinforced with 10 wt% SiC particles has been studied as a function of applied load and solid lubricant particle size, and has been compared with that of the matrix alloy and conventionally used grey cast iron. The wear tests were conducted at a sliding velocity of 2.1 m/s under various partially lubricated conditions using a pin-on-disc machine as per ASTM G99-05. Base oil (SAE 20W-40) or mixtures of the base oil with 5 wt% graphite of particle size 7-10 µm or 100 µm were used to create the lubricated conditions. The matrix alloy revealed primary α dendrites with eutectoid α + η and ε phases in the interdendritic regions. A similar microstructure was shown by the composite, with the additional presence of the dispersed SiC particles. In the case of cast iron, graphite flakes were observed in the matrix, which comprised mostly pearlite with a limited quantity of ferrite. Results show a large improvement in wear resistance of the zinc-based alloy after reinforcement with SiC particles. The cast iron shows an intermediate response between the matrix alloy and the composite. Solid lubrication improved the wear resistance and friction behaviour of both the reinforced and the base alloy. Moreover, the minimum wear rate was obtained in the oil + 5 wt% graphite (7-10 µm) lubricated environment for the matrix alloy and the composite, while for cast iron the addition of solid lubricant increased the wear rate and the minimum wear rate was obtained in the oil-lubricated environment. The cast iron experienced higher frictional heating than the matrix alloy and composite in all cases, especially at higher loads. As far as the friction coefficient is concerned, a mixed trend of behaviour was noted. The wear rate and frictional heating increased with load, while the friction coefficient was affected in the opposite manner. Test duration influenced the frictional heating and friction coefficient of the samples in a mixed manner.

Keywords: Solid lubricant, sliding wear, grey cast iron, zinc-based metal matrix composites.

83 Performance Study of Neodymium Extraction by Carbon Nanotubes Assisted Emulsion Liquid Membrane Using Response Surface Methodology

Authors: Payman Davoodi-Nasab, Ahmad Rahbar-Kelishami, Jaber Safdari, Hossein Abolghasemi

Abstract:

High-purity rare earth elements (REEs) have been widely used in chemical engineering, metallurgy, nuclear energy, optical, magnetic, luminescence and laser materials, superconductors, ceramics, alloys, catalysts, etc. Neodymium is one of the most abundant rare earths. With the development of the neodymium–iron–boron (Nd–Fe–B) permanent magnet, the importance of neodymium has dramatically increased. Solvent extraction processes have many operational limitations, such as a large inventory of extractants, loss of solvent due to organic solubility in aqueous solutions, volatilization of diluents, etc. One promising liquid membrane process is the emulsion liquid membrane (ELM), which offers an alternative to solvent extraction. In this work, a study of Nd extraction through a multi-walled carbon nanotube (MWCNT) assisted ELM using response surface methodology (RSM) has been performed. The ELM was composed of diisooctylphosphinic acid (CYANEX 272) as carrier, MWCNTs as nanoparticles, Span-85 (sorbitan trioleate) as surfactant, kerosene as organic diluent and nitric acid as the internal phase. The effects of important operating variables, namely surfactant concentration, MWCNT concentration, and treatment ratio, were investigated. Results were optimized using a central composite design (CCD) and a regression model for extraction percentage was developed. The 3D response surfaces of Nd(III) extraction efficiency were obtained, and the significance of the three important variables and of their interactions on the Nd extraction efficiency was determined. Results indicated that introducing MWCNTs to the ELM process increased Nd extraction due to higher membrane stability and enhanced mass transfer. A MWCNT concentration of 407 ppm, a Span-85 concentration of 2.1% v/v and a treatment ratio of 10 were obtained as the optimum conditions. At the optimum conditions, the extraction of Nd(III) reached a maximum of 99.03%.
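
As a sketch of the RSM step described above, the snippet below fits a full quadratic response surface to a small, entirely hypothetical central-composite-style data set by ordinary least squares. The factor ranges, responses and predicted optimum are illustrative assumptions, not the study's data or its CCD.

```python
import numpy as np

# Hypothetical CCD-style observations: columns = [Span-85 %v/v, MWCNT ppm,
# treatment ratio]; y = extraction %. All values are invented for illustration.
X = np.array([[1.5, 200,  5], [2.5, 200,  5], [1.5, 600,  5], [2.5, 600,  5],
              [1.5, 200, 15], [2.5, 200, 15], [1.5, 600, 15], [2.5, 600, 15],
              [2.0, 400, 10], [2.0, 400, 10], [1.2, 400, 10], [2.8, 400, 10],
              [2.0,  70, 10], [2.0, 730, 10], [2.0, 400,  2], [2.0, 400, 18]])
y = np.array([80, 85, 88, 90, 82, 87, 91, 93, 97, 96, 84, 88, 83, 92, 86, 89])

def design_matrix(X):
    x1, x2, x3 = X.T
    # full quadratic model: intercept, linear, interaction and squared terms
    return np.column_stack([np.ones(len(X)), x1, x2, x3,
                            x1*x2, x1*x3, x2*x3, x1**2, x2**2, x3**2])

beta, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)
# predicted response near the reported optimum (2.1 %v/v, 407 ppm, ratio 10)
x_opt = np.array([[2.1, 407, 10]])
print("coefficients:", np.round(beta, 4))
print("predicted extraction at optimum:", (design_matrix(x_opt) @ beta)[0])
```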

Keywords: Emulsion liquid membrane, extraction of neodymium, multi-walled carbon nanotubes, response surface method.

82 Behavioral Mapping and Post-Occupancy Evaluation of Meeting-Point Design in an International Airport

Authors: Meng-Cong Zheng, Yu-Sheng Chen

Abstract:

Meeting behaviour is a pervasive kind of interaction that occurs between arriving passengers and the people picking them up. However, the meeting point set up at Taoyuan International Airport is too far from the arrival exit, often causing passengers to stop and search near the exit. When the number of people waiting increases at rush hour, this often results in chaos in the waiting area. This study tried to find out the key factors that help passengers and pick-ups find each other quickly. We then implemented several design proposals to improve the meeting behaviour of passengers and pick-ups, based on behavioural mapping and post-occupancy evaluation, to enhance their meeting efficiency in unfamiliar environments. The research site is the reception hall of the second terminal of Taoyuan International Airport. Behavioural observation and mapping were carried out on the entry of inbound passengers into the welcome space, including the crowd distribution of people leaning on the separation wall in the waiting area, the meeting behaviour, and the interaction between inbound passengers and pick-ups. We then redesigned the space planning and signage, and used post-occupancy evaluation to verify the effectiveness of the space plan and signage design. This study found that passengers ignore the existing meeting-point designs, which are placed on distant pillars at both ends. The position of the arrival screen affects the area where the pick-ups linger, causing them to block the passengers' path. The pick-ups prefer to wait where it is easy to watch incoming passengers and where it is closest to the mode of transport they will take when leaving. Large groups tend to gather next to landmarks, while smaller groups spread over a wide waiting area in the lobby. The location of the meeting point chosen by the pick-ups is related to the incoming passenger's line of sight. Finally, this study proposes an improved design of the meeting point that places traffic information within it, so that most passengers can see the traffic information as they enter the arrival hall. At the same time, we redesigned the pick-up desk to improve the efficiency of passenger meetings.

Keywords: Meeting point design, post-occupancy evaluation, behavioral mapping, international airport.

81 Photocatalytic Active Surface of LWSCC Architectural Concretes

Authors: P. Novosad, L. Osuska, M. Tazky, T. Tazky

Abstract:

Current trends in the building industry are oriented towards reducing maintenance costs and improving the ecological benefits of buildings and building materials. Surface treatment of building materials with photocatalytically active titanium dioxide added to concrete can offer a good solution in this context. Architectural concrete has one disadvantage: dust and fouling keep settling on its surface, diminishing its aesthetic value and increasing maintenance costs. The concrete surface, a silicate material with open porosity, fulfils the conditions for effective photocatalysis, in particular the self-cleaning properties of surfaces. This modern material is advantageous in particular for direct finishing and architectural concrete applications. If photoactive titanium dioxide is part of the top layers of road concrete on busy roads and of the facades of the buildings surrounding these roads, exhaust fumes can be degraded with the aid of sunshine and the environmental load will therefore decrease. It is clear that options for removing pollutants like nitrogen oxides (NOx) must be found. Not only do these gases present a health risk, they also cause the degradation of the surfaces of concrete structures. The photocatalytic properties of titanium dioxide can in the long term contribute to the enhanced appearance of surface layers, eliminate harmful pollutants dispersed in the air, and facilitate the conversion of pollutants into less toxic forms (e.g., NOx to HNO3). This paper describes verification of the photocatalytic properties of titanium dioxide and presents the results of mechanical and physical tests on samples of architectural lightweight self-compacting concretes (LWSCC). The very essence of the use of LWSCC is its rheological ability to flow into otherwise extremely hard-to-access or inaccessible construction areas, or sections where compacting the concrete would be a problem or where vibration is completely excluded. It is also able to create a solid monolithic element with a large variety of shapes, while the concrete at the same time meets the requirements for resistance to chemical aggression and to the influences of the surrounding environment. Due to their viscosity, LWSCCs are able to imprint the formwork elements into their structure and thus create high-quality lightweight architectural concretes.

Keywords: Photocatalytic concretes, titanium dioxide, architectural concretes, LWSCC.

80 Morphological Interaction of Porcine Oocyte and Cumulus Cells Study on in vitro Oocyte Maturation Using Electron Microscopy

Authors: M. Areekijseree, W. Pongsawat, M. Pumipaiboon, C. Thepsithar, S. Sengsai, T. Chuen-Im

Abstract:

The morphological interaction of porcine cumulus-oocyte complexes (pCOCs) was investigated in vitro using electron microscopy (SEM and TEM). A total of 1,923 oocytes were round in shape, surrounded by a zona pellucida with layers of cumulus cells, and ranged between 59.29-202.14 μm in size. They were classified into intact-, multi-, and partial-cumulus-cell-layer oocytes and completely denuded oocytes, at percentages of 22.80%, 32.70%, 18.60%, and 25.90%, respectively. The pCOCs classified as intact- and multi-cumulus-cell-layer oocytes were further cultured at 37°C with 5% CO2, 95% air atmosphere and high humidity for 44 h in M199 with Earle's salts supplemented with 10% HTFCS, 2.2 mg/mL NaHCO3, 1 M Hepes, 0.25 mM pyruvate, 15 μg/mL porcine follicle-stimulating hormone, 1 μg/mL LH, 1 μg/mL estradiol in ethanol, and 50 μg/mL gentamycin sulfate. In the electron microscopy study, at the beginning of the incubation period the cumulus cells were found to attach their processes to the oocyte and secrete substances from their sac-shaped ends into the zona pellucida, and also to communicate with neighbouring cells through their microvilli. It is believed that the cumulus cells communicate with the oocyte by inserting microvilli through this gap into the oocyte cytoplasm before secreting substances, through the sac-shaped ends of the microvilli, that inhibit primary oocyte development at prophase I. Morphological changes of the complexes were observed after culturing for 24-44 h. One hundred percent of the cumulus layers had expanded and cumulus cells were peeling off from the oocyte surface. In addition, the round-shaped cumulus cells transformed into either an elongated or a columnar shape, and no communication between neighbouring cumulus cells was observed. After 44 h of incubation, the diameter of oocytes surrounded by cumulus cells was larger than at 0 h of incubation. The effect of the hormones in the culture medium is exerted through their receptors present in the porcine oocyte. It is likely that all morphological changes of the complexes after hormone treatment served to allow maturation of the oocyte. This study demonstrated that the combination of hormones in M199 could promote porcine follicle activation over 44 h in vitro. This culture system should be useful for studying the regulation of early follicular growth and development, especially because these follicles represent a large source of oocytes that could be used in vitro for cell technology.

Keywords: Cumulus cells, electron microscopy (SEM and TEM), in vitro, porcine oocyte.

79 Smart Sustainable Cities: An Integrated Planning Approach towards Sustainable Urban Energy Systems, India

Authors: Adinarayanane Ramamurthy, Monsingh D. Devadas

Abstract:

Cities represent both a challenge and an opportunity for climate change policy. Cities are where most energy services are needed, because urbanization is closely linked to high population densities and to the concentration of economic activities and production (urban energy demand). Consequently, it is critical to explain the role of cities within the world's energy systems and their correlation with the climate change issue. With more than half of the world's population already living in urban areas, and that percentage expected to rise to 75 per cent by 2050, it is clear that the path to sustainable development must pass through cities. Cities expanding in size and population pose increased challenges to the environment, of which energy is part as a natural resource, and to the quality of life. Nowadays, most cities have already understood the importance of sustainability, both at their local scale and in terms of their contribution to sustainability at higher geographical scales. This requires perceiving a city as a complex and dynamic ecosystem, an open system or cluster of systems, in which energy, like the other natural resources, is transformed to satisfy the needs of the different urban activities. In fact, buildings and transportation generally represent most of a city's direct energy demand, i.e., between 60 and 80 per cent of the overall consumption. Buildings, both residential and services, are usually influenced by the local physical and social conditions. In terms of transport, energy demand is also strongly linked with the specific characteristics of a city (urban mobility). The concept of a "smart city" builds on statistics along seven key axes of a city's success in moving towards a common platform for sustainable urban energy systems. With this knowledge, the authors suggest a framework for the role of cities as energy actors in smart city management. The authors discuss the potential elements needed for energy in smart cities and also identify potential energy actions and relevant barriers. Furthermore, three levels of city smartness in cities' actions to overcome market and institutional failures with a local approach are distinguished. The authors attempt to conceive and implement concepts of city smartness by adopting the city or local government as the nerve centre, through an integrated planning approach. The paper concludes with recommendations for the organization of smart sustainable cities to bring about positive change in urban India.

Keywords: Urbanization, Urban Energy Demand, Sustainable Urban Energy Systems, Integrated Planning Approach, Smart Sustainable City.

78 Modelling for Roof Failure Analysis in an Underground Cave

Authors: M. Belén Prendes-Gero, Celestino González-Nicieza, M. Inmaculada Alvarez-Fernández

Abstract:

Roof collapse is still one of the most frequent problems in mines in all countries. There are many reasons that may cause a roof to collapse, namely the stress activities in the mining process, a lack of vigilance and carelessness, or the complexity of the geological structure and irregular operations. This work is the result of the analysis of an accident at the "Mary" coal exploitation located in northern Spain. In this accident, the roof of a crossroads of galleries excavated to exploit the "Morena" layer, 700 m deep, collapsed. The paper presents the work done by the forensic team to determine the causes of the incident, together with its conclusions and recommendations. Initially, the available documentation (geology, geotechnics, mining, etc.) and the accident area were reviewed. After that, laboratory and on-site tests were carried out to characterize the behaviour of the rock materials and of the support used (metal frames and shotcrete). With this information, different failure hypotheses were simulated to find the one that best fits reality. For this work, the three-dimensional finite difference software FLAC 3D was employed. The results of the study confirmed that the detachment originated as a consequence of sliding in the layer wall, due to the large roof span present at the place of the accident, and was probably triggered by an insufficient protection pillar. The results allowed some corrective measures to be established to avoid future risks, for example, the dimensions of the protection zones that must remain unexploited and their interaction with the crossing areas between galleries, or the use of supports more suitable for these conditions, in which the significant deformations may discourage rigid supports such as shotcrete. Finally, a grid of seismic monitoring was proposed as a predictive system. Its efficiency was tested over the investigation period using three monitoring units, which detected new (although smaller) incidents in other similar areas of the mine. These new incidents show that the use of explosives produces vibrations that are an additional risk factor to analyse in the near future.

Keywords: Forensic analysis, hypothesis modelling, roof failure, seismic monitoring.

77 Experimental Investigation of Hydrogen Addition in the Intake Air of Compressed Engines Running on Biodiesel Blend

Authors: Hendrick Maxil Zárate Rocha, Ricardo da Silva Pereira, Manoel Fernandes Martins Nogueira, Carlos R. Pereira Belchior, Maria Emilia de Lima Tostes

Abstract:

This study investigates experimentally the effects of hydrogen addition in the intake manifold of a diesel generator operating with a 7% biodiesel-diesel oil blend (B7). An experimental apparatus was set up to conduct performance and emissions tests on a single-cylinder, air-cooled diesel engine. The setup consisted of a generator set connected to a wire-wound resistor load bank that was used to vary engine load. In addition, a flowmeter was used to determine the hydrogen volumetric flow rate, and a digital anemometer coupled with an air box was used to measure the air flow rate. Furthermore, a digital precision electronic scale was used to measure engine fuel consumption, and a gas analyzer was used to determine exhaust gas composition and exhaust gas temperature. A thermocouple was installed near the exhaust collector to measure cylinder temperature. In-cylinder pressure was measured using an AVL Indumicro data acquisition system with a piezoelectric pressure sensor. An AVL optical encoder was installed on the crankshaft and synchronized with the in-cylinder pressure in real time. The experimental procedure consisted of injecting hydrogen into the engine intake manifold at mass concentrations of 2, 6, 8 and 10% of the total fuel mass (B7 + hydrogen), which represented energy fractions of 5, 15, 20 and 24% of the total fuel energy, respectively. Due to the hydrogen addition, the total amount of fuel energy introduced increased, and the generator's fuel injection governor prevented any increase in engine speed. Several conclusions can be drawn from the test results. A reduction in specific fuel consumption with increasing hydrogen concentration was noted. Likewise, carbon dioxide (CO2), carbon monoxide (CO) and unburned hydrocarbon (HC) emissions decreased as hydrogen concentration increased. On the other hand, nitrogen oxide (NOx) emissions increased because average temperatures inside the cylinder were higher. There was also an increase in peak cylinder pressure and in the heat release rate inside the cylinder, since the fuel ignition delay was smaller due to the higher hydrogen content. All this indicates that hydrogen promotes faster combustion and higher heat release rates and can be an important additive to all kinds of fuels used in diesel generators.
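
The correspondence between the hydrogen mass fractions (2, 6, 8 and 10%) and the quoted energy fractions (5, 15, 20 and 24%) follows directly from the heating values of the two fuels. The short sketch below reproduces that arithmetic with typical lower heating values, which are assumptions rather than values reported in the paper.

```python
# Convert hydrogen mass fraction of total fuel to an energy fraction,
# using assumed lower heating values: H2 ~120 MJ/kg, B7 blend ~42.5 MJ/kg.
LHV_H2, LHV_B7 = 120.0, 42.5  # MJ/kg

for mass_frac in (0.02, 0.06, 0.08, 0.10):
    e_h2 = mass_frac * LHV_H2          # energy contributed by hydrogen
    e_b7 = (1 - mass_frac) * LHV_B7    # energy contributed by the B7 blend
    energy_frac = e_h2 / (e_h2 + e_b7)
    print(f"{mass_frac:.0%} by mass -> {energy_frac:.0%} by energy")
# prints approximately 5%, 15%, 20% and 24%, matching the abstract
```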

Keywords: Diesel engine, hydrogen, dual fuel, combustion analysis, performance, emissions.

76 Performance Assessment of the Gold Coast Desalination Plant Offshore Multiport Brine Diffuser during ‘Hot Standby’ Operation

Authors: M. J. Baum, B. Gibbes, A. Grinham, S. Albert, D. Gale, P. Fisher

Abstract:

Alongside the rapid expansion of seawater reverse osmosis technologies there is a concurrent increase in the production of hypersaline brine by-products. To minimize environmental impact, these by-products are commonly disposed of in open coastal environments via submerged diffuser systems as inclined dense jet outfalls. Despite the widespread implementation of this process, diffuser designs are typically based on small-scale laboratory experiments under idealized quiescent conditions, and studies concerning diffuser performance in the field are limited. A set of experiments was conducted to assess the near-field characteristics of brine disposal at the Gold Coast Desalination Plant offshore multiport diffuser. The aim of the field experiments was to determine the trajectory and dilution characteristics of the plume under various discharge configurations, with production ranging from 66 to 100% of the plant's operative capacity. The field monitoring system employed an unprecedented static array of temperature and electrical conductivity sensors in a three-dimensional grid surrounding a single diffuser port. Complementing these measurements, acoustic Doppler current profilers were also deployed to record current variability over the depth of the water column and wave characteristics. The recorded data suggested the open coastal environment was highly active over the experimental duration, with ambient velocities ranging from 0.0 to 0.5 m·s⁻¹ and considerable variability over the depth of the water column. Variations in background electrical conductivity corresponding to salinity fluctuations of ±1.7 g·kg⁻¹ were also observed. Increases in salinity were detected during plant operation and appeared to be most pronounced 10-30 m from the diffuser, consistent with trajectory predictions described by the existing literature. Plume trajectories and respective dilutions extrapolated from the salinity data are compared with empirical scaling arguments. The discharge properties were found to correlate adequately with the modelling projections. The temporal and spatial variation of background processes and their influence on discharge outcomes are discussed, with a view to incorporating the influence of waves and ambient currents in the design of future brine outfalls.
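
The empirical scaling arguments mentioned above for inclined dense jets are usually expressed in terms of the port densimetric Froude number. The snippet below is a minimal sketch of that scaling; the port geometry, densities, and the dilution and distance coefficients are illustrative assumptions of the order reported in laboratory studies, not parameters of the Gold Coast diffuser.

```python
import numpy as np

# Near-field scaling sketch for an inclined dense jet (all inputs assumed).
g = 9.81
d = 0.22            # port diameter (m), assumed
u0 = 3.0            # port exit velocity (m/s), assumed
rho_brine = 1052.0  # brine density (kg/m3), assumed
rho_sea = 1025.0    # ambient seawater density (kg/m3), assumed

g_prime = g * (rho_brine - rho_sea) / rho_sea   # reduced gravity
Fr = u0 / np.sqrt(g_prime * d)                  # densimetric Froude number

# Near-field relations for ~60-degree dense jets are often written as
# S ~ k_s * Fr and x ~ k_x * Fr * d; the coefficients here are placeholders.
k_s, k_x = 1.6, 2.4
print(f"Fr = {Fr:.1f}, return-point dilution ~ {k_s * Fr:.0f}, "
      f"horizontal impact distance ~ {k_x * Fr * d:.1f} m")
```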

Keywords: Brine disposal, desalination, field study, inclined dense jets, negatively buoyant discharge.

75 Computer-Assisted Management of Building Climate and Microgrid with Model Predictive Control

Authors: Vinko Lešić, Mario Vašak, Anita Martinčević, Marko Gulin, Antonio Starčić, Hrvoje Novak

Abstract:

With 40% of total world energy consumption, building systems are developing into technically complex large energy consumers suitable for the application of sophisticated power management approaches that can greatly increase energy efficiency and even make them active energy market participants. A centralized control system for building heating and cooling managed by economically optimal model predictive control shows promising results, with an estimated 30% increase in energy efficiency. The research is focused on the implementation of such a method in a case study performed on two floors of our faculty building, with corresponding wireless sensor data acquisition, remote heating/cooling units and a central climate controller. The building walls are mathematically modelled with their corresponding material types, surface shapes and sizes. The models are then used to predict thermal characteristics and changes in different building zones. Exterior influences such as environmental conditions and weather forecasts, occupant behaviour and comfort demands are all taken into account to derive price-optimal climate control. Finally, a DC microgrid with photovoltaics, a wind turbine, a supercapacitor, batteries and fuel cell stacks is added to make the building a unit capable of active participation in a price-varying energy market. The computational burden of applying model predictive control to such a complex system is relaxed through a hierarchical decomposition of the microgrid and climate control, where the former is designed as the higher hierarchical level with pre-calculated price-optimal power flow control, and the latter as the lower-level control responsible for ensuring thermal comfort and exploiting the optimal supply conditions enabled by the microgrid energy flow management. Such an approach is expected to enable the inclusion of more complex building subsystems in order to further increase energy efficiency.
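
To make the climate-control layer concrete, the sketch below sets up a toy price-weighted MPC problem for a single thermal zone over a 24-step horizon using cvxpy. The one-zone model coefficients, the price profile and the comfort weight are invented for illustration and are not the building model or cost structure used by the authors.

```python
import numpy as np
import cvxpy as cp

# Toy one-zone thermal model: T[k+1] = a*T[k] + b*u[k] + c*T_out[k]
a, b, c = 0.9, 0.05, 0.1      # assumed discrete-time coefficients
N = 24                        # prediction horizon (hours)
T_out = 10 + 5 * np.sin(np.linspace(0, 2 * np.pi, N))   # assumed outdoor temp
price = np.where(np.arange(N) < 8, 0.08, 0.20)          # assumed tariff profile
T_ref = 21.0                  # comfort setpoint (deg C)

T = cp.Variable(N + 1)        # zone temperature trajectory
u = cp.Variable(N, nonneg=True)   # heating power (bounded below by 0)

constraints = [T[0] == 18.0]
for k in range(N):
    constraints += [T[k + 1] == a * T[k] + b * u[k] + c * T_out[k], u[k] <= 100]

# trade off energy cost against squared comfort deviation
cost = cp.sum(cp.multiply(price, u)) + 10 * cp.sum_squares(T[1:] - T_ref)
cp.Problem(cp.Minimize(cost), constraints).solve()

print("first heating moves:", np.round(u.value[:4], 1))
print("predicted temperatures:", np.round(T.value[:5], 2))
```

In a receding-horizon implementation, only the first computed move would be applied before the problem is re-solved with updated measurements and forecasts.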

Keywords: Energy-efficient buildings, Hierarchical model predictive control, Microgrid power flow optimization, Price-optimal building climate control.

74 Model-Driven and Data-Driven Approaches for Crop Yield Prediction: Analysis and Comparison

Authors: Xiangtuo Chen, Paul-Henry Cournéde

Abstract:

Crop yield prediction is a paramount issue in agriculture. The main idea of this paper is to find an efficient way to predict corn yield based on meteorological records. The prediction models used in this paper can be classified into model-driven approaches and data-driven approaches, according to their modeling methodologies. The model-driven approaches are based on mechanistic crop modeling: they describe crop growth in interaction with the environment as dynamical systems. However, the calibration of the dynamical system is difficult, because it turns out to be a multidimensional non-convex optimization problem. An original contribution of this paper is to propose a statistical methodology, Multi-Scenarios Parameters Estimation (MSPE), for the parametrization of potentially complex mechanistic models from a new type of dataset (climatic data and final yield in many situations). It is tested with CORNFLO, a crop model for maize growth. On the other hand, the data-driven approach to yield prediction is free of the complex biophysical process, but it has strict requirements on the dataset. A second contribution of the paper is the comparison of these model-driven methods with classical data-driven methods. For this purpose, we consider two classes of regression methods: methods derived from linear regression (Ridge and Lasso regression, principal components regression and partial least squares regression) and machine learning methods (random forest, k-nearest neighbor, artificial neural network and SVM regression). The dataset consists of 720 records of corn yield at county scale provided by the United States Department of Agriculture (USDA) and the associated climatic data. A 5-fold cross-validation process and two accuracy metrics, the root mean square error of prediction (RMSEP) and the mean absolute error of prediction (MAEP), were used to evaluate the prediction capacity. The results show that, among the data-driven approaches, random forest is the most robust and generally achieves the best prediction error (MAEP 4.27%). It also outperforms our model-driven approach (MAEP 6.11%). However, the method for calibrating the mechanistic model from easily accessible data offers several additional perspectives: the mechanistic model can potentially help to identify the stresses suffered by the crop or the biological parameters of interest for breeding purposes. For this reason, an interesting perspective is to combine these two types of approaches.
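
The data-driven baseline described above, a random forest evaluated with 5-fold cross-validation and scored by MAEP and RMSEP, can be sketched as follows. The feature matrix and yields here are synthetic stand-ins, since the USDA dataset itself is not reproduced.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold
from sklearn.metrics import mean_absolute_error, mean_squared_error

# Synthetic stand-in: 720 county-year records with 10 climate features.
rng = np.random.default_rng(0)
X = rng.normal(size=(720, 10))
y = 100 + 5 * X[:, 0] - 3 * X[:, 1] + rng.normal(scale=4, size=720)

kf = KFold(n_splits=5, shuffle=True, random_state=0)
maep, rmsep = [], []
for train, test in kf.split(X):
    model = RandomForestRegressor(n_estimators=300, random_state=0)
    model.fit(X[train], y[train])
    pred = model.predict(X[test])
    maep.append(mean_absolute_error(y[test], pred) / y[test].mean() * 100)
    rmsep.append(np.sqrt(mean_squared_error(y[test], pred)))

print(f"MAEP = {np.mean(maep):.2f} %, RMSEP = {np.mean(rmsep):.2f}")
```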

Keywords: Crop yield prediction, crop model, sensitivity analysis, parameter estimation, particle swarm optimization, random forest.

73 Strong Adhesion and High Wettability at Polyetheretherketone-Resin/Titanium-Dioxide Interface Obtained with Crystal-Orientation Control

Authors: Tomio Iwasaki, Yosuke Kawahito

Abstract:

The adhesion strength and wettability at the interfaces between polyetheretherketone (PEEK) resin and titanium dioxide (TiO2) have become increasingly important because direct joining of PEEK resin and titanium (Ti), whose surface usually carries the oxide (TiO2), is needed not only in vehicles such as airplanes, automobiles and space vehicles, but also in medical devices such as implants. To realize a strong joint between the PEEK resin and TiO2, the dependence of the adhesion strength and wettability on the crystal orientation of rutile TiO2 was investigated using molecular simulations. Molecular dynamics simulations were conducted by combining the quantum mechanical equations for the electrons with Newton's equations of motion for the nuclear (atomic) coordinates. By placing a PEEK-resin sphere on a rutile TiO2 surface and heating the system to 650 K, the contact angles at the interfaces were calculated to evaluate the wettability. After the system was cooled from 650 K to 300 K, the adhesive fracture energy was calculated, to evaluate the adhesion strength, as the difference between the energy of the PEEK-TiO2 attached state and that of the PEEK-TiO2 detached state. The contact angle results showed that PEEK resin on the TiO2(100) and TiO2(001) surfaces has low wettability, with large contact angles, whereas PEEK resin on the TiO2(110) surface has high wettability, with a small contact angle. The adhesive fracture energies showed that adhesion at the PEEK-resin/TiO2(100) and PEEK-resin/TiO2(001) interfaces is weak, while adhesion at the PEEK-resin/TiO2(110) interface is strong. To clarify why higher wettability and stronger adhesion are obtained at the PEEK/TiO2(110) interface than at the PEEK/TiO2(100) and PEEK/TiO2(001) interfaces, the atomic configurations at the interfaces were visualized. The atomic configuration at the PEEK/TiO2(110) interface showed that a lattice-matched coherent interface is realized and that the atomic density is high. In contrast, the atomic configuration at the PEEK/TiO2(001) interface showed a lattice-unmatched incoherent interface, and the configuration at the PEEK/TiO2(100) interface showed a very low atomic density although a lattice-matched interface is realized. Therefore, the lattice matching and the high atomic density at the PEEK/TiO2(110) interface are considered to be the dominant factors in the high wettability and strong adhesion.

Keywords: Adhesion, direct joining, PEEK, TiO2, wettability.

72 Influence of Deficient Materials on the Reliability of Reinforced Concrete Members

Authors: Sami W. Tabsh

Abstract:

The strength of reinforced concrete depends on the member dimensions and material properties. The properties of concrete and steel materials are not constant but random variables. The variability of concrete strength is due to batching errors, variations in mixing, cement quality uncertainties, differences in the degree of compaction and disparity in curing. Similarly, the variability of steel strength is attributed to the manufacturing process, rolling conditions, characteristics of the base material, uncertainties in chemical composition, and the microstructure-property relationships. To account for such uncertainties, codes of practice for reinforced concrete design impose resistance factors to ensure structural reliability over the useful life of the structure. In this investigation, the effects of reductions in concrete and reinforcing steel strengths from the nominal values, beyond those accounted for in the structural design codes, on the structural reliability are assessed. The considered limit states are flexure, shear and axial compression, based on the ACI 318-11 structural concrete building code. Structural safety is measured in terms of a reliability index. Probabilistic resistance and load models are compiled from the available literature. The study showed that there is a wide variation in the reliability index for reinforced concrete members designed for flexure, shear or axial compression, especially when the live-to-dead load ratio is low. Furthermore, variations in concrete strength have a minor effect on the reliability of beams in flexure, a moderate effect on the reliability of beams in shear, and a severe effect on the reliability of columns in axial compression. On the other hand, changes in steel yield strength have a great effect on the reliability of beams in flexure, a moderate effect on the reliability of beams in shear, and a mild effect on the reliability of columns in axial compression. Based on the outcome, it can be concluded that the reliability of beams is sensitive to changes in the yield strength of the steel reinforcement, whereas the reliability of columns is sensitive to variations in the concrete strength. Since the embedded target reliability in structural design codes results in lower structural safety in beams than in columns, large reductions in material strengths compromise the structural safety of beams much more than they affect columns.
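
For readers unfamiliar with the reliability index used above, the sketch below evaluates the simplest first-order form for a linear limit state g = R − Q with normally distributed resistance and load effect. The bias factors and coefficients of variation are illustrative assumptions, not the probabilistic models compiled in the study.

```python
import numpy as np

# First-order reliability index for a linear limit state g = R - Q with
# independent normal R (resistance) and Q (load effect):
# beta = (mu_R - mu_Q) / sqrt(sigma_R^2 + sigma_Q^2)
def beta_index(mu_R, cov_R, mu_Q, cov_Q):
    sig_R, sig_Q = mu_R * cov_R, mu_Q * cov_Q
    return (mu_R - mu_Q) / np.hypot(sig_R, sig_Q)

# Assumed statistics, normalized to the nominal resistance: resistance bias
# 1.10 with COV 0.12, mean load effect 0.60 of nominal with COV 0.15.
beta = beta_index(mu_R=1.10, cov_R=0.12, mu_Q=0.60, cov_Q=0.15)
print(f"beta = {beta:.2f}")   # about 3.1 for these assumed values
```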

Keywords: Code, flexure, limit states, random variables, reinforced concrete, reliability, reliability index, shear, structural safety.

71 Self-Sensing Concrete Nanocomposites for Smart Structures

Authors: A. D'Alessandro, F. Ubertini, A. L. Materazzi

Abstract:

In the field of civil engineering, structural health monitoring is a topic of growing interest. Effective monitoring instruments permit the control of the working conditions of structures and infrastructures through the identification of behavioural anomalies due to incipient damage, especially in areas of high environmental hazard such as earthquake-prone regions. While traditional sensors can be applied only at a limited number of points, providing partial information for a structural diagnosis, novel transducers may allow diffuse sensing. Thanks to the new tools and materials provided by nanotechnology, new types of multifunctional sensors are emerging in the scientific panorama. In particular, cement-matrix composite materials capable of diagnosing their own state of strain and stress can be obtained by the addition of specific conductive nanofillers. Because of the nature of the material they are made of, these new cementitious nano-modified transducers can be inserted within concrete elements, transforming the structures themselves into sets of distributed sensors. This paper presents the results of research on a new self-sensing nanocomposite and on the implementation of smart sensors for structural health monitoring. The nanocomposite was obtained by dispersing multi-walled carbon nanotubes within a cementitious matrix. The insertion of such conductive carbon nanofillers provides the base material with piezoresistive characteristics and a peculiar sensitivity to mechanical modifications. The self-sensing ability is achieved by correlating the variation of the external stress or strain with the variation of some electrical properties, such as the electrical resistance or conductivity. Through the measurement of such electrical characteristics, the performance and the working conditions of an element or a structure can be monitored. Among conductive carbon nanofillers, carbon nanotubes seem particularly promising for the realization of self-sensing cement-matrix materials. Some issues related to nanofiller dispersion and to the influence of the amount of nano-inclusions in the cement matrix need to be carefully investigated, since the strain sensitivity of the resulting sensors is influenced by such factors. This work analyzes the dispersion of the carbon nanofillers, the physical properties of the fresh dough, the electrical properties of the hardened composites and the sensing properties of the realized sensors. The experimental campaign focuses specifically on their dynamic characterization and their applicability to the monitoring of full-scale elements. The results of the electromechanical tests with both slowly varying and dynamic loads show that the developed nanocomposite sensors can be effectively used for the health monitoring of structures.
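
The correlation between strain and fractional resistance change mentioned above is commonly summarized by a gauge factor. The snippet below shows that calculation on a few hypothetical resistance readings; the numbers are not measurements from this experimental campaign.

```python
import numpy as np

# Gauge factor GF = (dR/R0) / strain, estimated from assumed readings
# of a CNT-cement sensor under increasing compressive strain.
R0 = 1.200e5                                           # unloaded resistance (ohm)
R = np.array([1.200e5, 1.206e5, 1.212e5, 1.218e5])     # measured resistance (ohm)
strain = np.array([0.0, 50e-6, 100e-6, 150e-6])        # applied strain

dR_over_R0 = (R - R0) / R0
# least-squares slope of dR/R0 versus strain gives the gauge factor
GF = np.polyfit(strain, dR_over_R0, 1)[0]
print(f"gauge factor ~ {GF:.0f}")   # ~100 for these assumed readings
```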

Keywords: Carbon nanotubes, self-sensing nanocomposites, smart cement-matrix sensors, structural health monitoring.

70 Leveraging xAPI in a Corporate e-Learning Environment to Facilitate the Tracking, Modelling, and Predictive Analysis of Learner Behaviour

Authors: Libor Zachoval, Daire O Broin, Oisin Cawley

Abstract:

E-learning platforms, such as Blackboard have two major shortcomings: limited data capture as a result of the limitations of SCORM (Shareable Content Object Reference Model), and lack of incorporation of Artificial Intelligence (AI) and machine learning algorithms which could lead to better course adaptations. With the recent development of Experience Application Programming Interface (xAPI), a large amount of additional types of data can be captured and that opens a window of possibilities from which online education can benefit. In a corporate setting, where companies invest billions on the learning and development of their employees, some learner behaviours can be troublesome for they can hinder the knowledge development of a learner. Behaviours that hinder the knowledge development also raise ambiguity about learner’s knowledge mastery, specifically those related to gaming the system. Furthermore, a company receives little benefit from their investment if employees are passing courses without possessing the required knowledge and potential compliance risks may arise. Using xAPI and rules derived from a state-of-the-art review, we identified three learner behaviours, primarily related to guessing, in a corporate compliance course. The identified behaviours are: trying each option for a question, specifically for multiple-choice questions; selecting a single option for all the questions on the test; and continuously repeating tests upon failing as opposed to going over the learning material. These behaviours were detected on learners who repeated the test at least 4 times before passing the course. These findings suggest that gauging the mastery of a learner from multiple-choice questions test scores alone is a naive approach. Thus, next steps will consider the incorporation of additional data points, knowledge estimation models to model knowledge mastery of a learner more accurately, and analysis of the data for correlations between knowledge development and identified learner behaviours. Additional work could explore how learner behaviours could be utilised to make changes to a course. For example, course content may require modifications (certain sections of learning material may be shown to not be helpful to many learners to master the learning outcomes aimed at) or course design (such as the type and duration of feedback).
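
A minimal sketch of how the guessing-related behaviours described above could be flagged from xAPI-style statements is given below. The statement structure, verbs and thresholds are simplified assumptions and do not reproduce the platform's actual xAPI vocabulary or the rules derived in the study.

```python
from collections import defaultdict

# Simplified xAPI-like records (assumed schema, illustration only).
statements = [
    {"actor": "u1", "verb": "answered", "question": "q1", "choice": "A"},
    {"actor": "u1", "verb": "answered", "question": "q1", "choice": "B"},
    {"actor": "u1", "verb": "answered", "question": "q1", "choice": "C"},
    {"actor": "u1", "verb": "failed", "test": "t1"},
    {"actor": "u1", "verb": "failed", "test": "t1"},
    {"actor": "u1", "verb": "failed", "test": "t1"},
    {"actor": "u1", "verb": "failed", "test": "t1"},
    {"actor": "u2", "verb": "answered", "question": "q1", "choice": "B"},
]

def flag_learners(statements, choice_threshold=3, retry_threshold=4):
    attempts = defaultdict(set)   # (actor, question) -> distinct choices tried
    failures = defaultdict(int)   # actor -> number of failed test attempts
    for s in statements:
        if s["verb"] == "answered":
            attempts[(s["actor"], s["question"])].add(s["choice"])
        elif s["verb"] == "failed":
            failures[s["actor"]] += 1
    flagged = {a for (a, _), c in attempts.items() if len(c) >= choice_threshold}
    flagged |= {a for a, n in failures.items() if n >= retry_threshold}
    return flagged

print(flag_learners(statements))   # {'u1'} is flagged for guessing-like behaviour
```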

Keywords: Compliance Course, Corporate Training, Learner Behaviours, xAPI.

69 Gassing Tendency of Natural Ester Based Transformer Oils: Low Ethane Generation in Stray Gassing Behavior

Authors: Banti Sidhiwala, T. C. S. M. Gupta

Abstract:

Mineral oils of naphthenic and paraffinic type are used as insulating liquids in transformer applications to protect the solid insulation from moisture and to ensure effective heat transfer and cooling. The performance of these types of oils has been proven in the field over many decades, and transformer condition has been successfully monitored and diagnosed through oil properties and dissolved gas analysis. Different types of gases can represent the various types of faults that may occur due to faulty components or unfavourable operating conditions. A large database has been generated in the industry for dissolved gas analysis in mineral-oil-based transformer oils, and various models have been developed to predict faults and analyse the data. Additionally, oil specifications and standards have been updated to include stray gassing limits that cover low-temperature faults. This modification has become an effective preventative maintenance tool that can help greatly in understanding the reasons for breakdowns of electrical insulating materials and related components. Natural esters have seen a rise in popularity in recent years due to their "green" credentials. Their benefits include biodegradability, a higher fire point, improved transformer load capability and longer solid insulation life than mineral oils. However, in stray gassing tests, hydrogen and hydrocarbons such as methane (CH4) and ethane (C2H6) show very high values, much higher than the limits of the mineral oil standards. Though standards for these types of esters are yet to evolve, the high hydrocarbon gas values of the products available in the market are of concern, since they might be interpreted as a fault in transformer operation. The current paper focuses on developing a class of natural esters with low levels of stray gassing, as measured by American Society for Testing and Materials (ASTM) and International Electrotechnical Commission (IEC) methods, with values much lower than those of the natural-ester-based products reported in the literature. The experimental results for these products are presented and explained.

Keywords: Biodegradability, fire point, dissolved gas analysis, natural ester, stray gassing.

68 The Development and Testing of a Small Scale Dry Electrostatic Precipitator for the Removal of Particulate Matter

Authors: Derek Wardle, Tarik Al-Shemmeri, Neil Packer

Abstract:

This paper presents a small tube/wire type electrostatic precipitator (ESP). In the ESP's present form, particle charging and collecting voltages and airflow rates were individually varied throughout 200 ambient-temperature test runs, ranging from 10 to 30 kV in increments of 5 kV and from 0.5 m/s to 1.5 m/s, respectively. It was repeatedly observed that, at input air velocities of between 0.5 and 0.9 m/s and voltage settings of 20 kV to 30 kV, the collection efficiency remained above 95%. The outcomes of preliminary tests at combustion flue temperatures are, at present, inconclusive, although indications are that there is little or no drop in comparable performance under ideal test conditions. A limited set of similar tests was carried out in which the collecting electrode was grounded, having been disconnected from the static generator; the collection efficiency fell significantly, and for that reason this approach was not pursued further. The collection efficiencies during ambient-temperature tests were determined by mass balance between incoming and outgoing dry PM. The efficiencies of the combustion-temperature runs were determined by analysing the difference in opacity of the flue gas at inlet and outlet compared to a reference light source. In addition, an array of Leit tabs (carbon-coated, electrically conductive adhesive discs) was placed at inlet and outlet for a number of four-day continuous ambient-temperature runs. Analysis of the discs' contamination was carried out using scanning electron microscopy and ImageJ computer software, which confirmed collection efficiencies of over 99% and gave unequivocal support to all the previous tests. The average efficiency for these runs was 99.409%. Emissions collected from a woody biomass combustion unit, classified to a diameter of 100 µm, were used in all ambient-temperature trial runs apart from two, which collected airborne dust from within the laboratory. Sawdust and wood pellets were chosen for the laboratory and field combustion trials. Video recordings were made of three ambient-temperature test runs in which the smoke from a wood smoke generator was drawn through the precipitator. Although these runs were visual indicators only, with no objective other than demonstration, they provided a strong argument for the device's claimed efficiency, as no emissions were visible at the exit when it was energised. The theoretical performance of ESPs, when applied to the geometry and configuration of the tested model, was compared to the actual performance and was shown to be in good agreement with it.
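
The mass-balance efficiency used for the ambient-temperature runs reduces to a one-line calculation, sketched below with made-up inlet and outlet masses chosen only to illustrate the formula; they are not the paper's measurements.

```python
# Collection efficiency by mass balance between incoming and outgoing dry PM.
def collection_efficiency(mass_in_mg: float, mass_out_mg: float) -> float:
    return (mass_in_mg - mass_out_mg) / mass_in_mg * 100.0

# Illustrative masses (assumed), giving an efficiency in the reported >99% range.
print(f"{collection_efficiency(mass_in_mg=250.0, mass_out_mg=1.5):.3f} %")
```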

Keywords: Electrostatic precipitators, air quality, particulates emissions, electron microscopy, ImageJ.

67 Physiological and Psychological Influence on Office Workers during Demand Response

Authors: Megumi Nishida, Naoya Motegi, Takurou Kikuchi, Tomoko Tokumura

Abstract:

In recent years, the power system has been changing, and flexible power pricing systems such as demand response have been sought in Japan. The demand response system works simply in the household sector, where the owner, as the decision-maker, can benefit from power saving. On the other hand, the execution of demand response in an office building is more complex than in a household because various people, such as owners, building administrators and occupants, are involved in the decision-making process. While the owners benefit from demand saving, the occupants are exposed to the restrictions of a demand-saving environment with little of the benefit. One of the reasons is that building systems are usually under centralized management, so each occupant cannot freely choose whether to participate in demand response or not. In addition, it is unclear whether incentives give occupants the motivation to participate. However, the recent development of IT and building systems enables personalized control of the office environment, in which each occupant can control the lighting level or temperature individually. Therefore, it becomes possible to have a system in which each occupant can decide whether or not to participate in demand response in the office building. This study investigates personal responses to demand response requests under the condition that each occupant can adjust the brightness individually in their workspace. Once workers participate in the demand response, their desk lights are automatically turned off. The participation rates in the demand response events are compared among four groups, which are divided by different motivations: the presence or absence of incentives and the method of participation. The results show significant differences in participation rates in demand response events between the four groups. The method of participation has a large effect on the participation rate: the "opt-out" groups, in which occupants are automatically enrolled in a demand response event unless they express non-participation, have the highest participation rate of the four groups. Incentives also have an effect on the participation rate. This study also reports on the impact of a low-illumination office environment on the occupants, such as stress or fatigue. An electrocardiogram and a questionnaire were used to investigate the autonomic nervous activity and subjective fatigue symptoms of the occupants. No large difference in autonomic nervous activity or fatigue was found between the dimmed workspace during demand response events and the bright workspace.

Keywords: Demand response, illumination, questionnaire, electrocardiograph.

66 Design and Development of Graphene Oxide Modified by Chitosan Nanosheets Showing pH-Sensitive Surface as a Smart Drug Delivery System for Controlled Release of Doxorubicin

Authors: Parisa Shirzadeh

Abstract:

Traditional drug delivery systems, in which drugs are taken by patients in multiple doses at specified intervals, do not meet the needs of modern drug delivery. In today's world we are dealing with a huge number of recombinant peptide and protein drugs and analogues of the body's hormones, most of which are made with genetic engineering techniques. Most of these drugs are used to treat critical diseases such as cancer. Due to the limitations of the traditional method, researchers have sought ways to solve its problems to a large extent. Following these efforts, controlled drug release systems were introduced, which have many advantages. With controlled release, the drug concentration in the body is kept at a certain level, and the desired level can be reached at a higher rate within a short time. Graphene is a biodegradable, non-toxic, natural material, and the wide surface of graphene sheets makes graphene more effective to modify than carbon nanotubes. Graphene oxide is often synthesized using concentrated oxidizers such as sulfuric acid, nitric acid, and potassium permanganate, based on the Hummers method. Graphene oxide is very hydrophilic, dissolves easily in water and forms a stable solution. In this work, graphene oxide (GO) covalently modified by chitosan (CS) was developed for the controlled release of doxorubicin (DOX). GO was produced by the Hummers method under acidic conditions. It was then chlorinated with oxalyl chloride to increase its reactivity towards amines. After that, in the presence of CS, the amidation reaction was performed to form amide linkages, and DOX was attached to the carrier surface by π-π interaction in phosphate buffer. GO, GO-CS, and GO-CS-DOX were characterized by FT-IR and TGA to identify the new functional groups showing the bonding of CS to GO, by Raman spectroscopy and SEM to determine the changes in the size and number of layers, and by UV-Visible spectroscopy to determine the loading and release capacity. The loading results showed a high DOX absorption capacity (99%), and pH-dependent release of DOX from the GO-CS nanosheets was identified at pH 5.3 and 7.4, with a fast release rate under acidic conditions.
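
The ~99% loading value quoted above is the kind of figure obtained by comparing the DOX added to the suspension with the unbound DOX remaining in the supernatant, as measured by UV-Vis. The sketch below shows that calculation with assumed concentrations; it is not the paper's calibration or data.

```python
# Loading efficiency from the free DOX left in solution after binding to GO-CS,
# with concentrations obtained from a UV-Vis (Beer-Lambert) calibration.
def loading_efficiency(c_initial_mg_ml: float, c_unbound_mg_ml: float) -> float:
    return (c_initial_mg_ml - c_unbound_mg_ml) / c_initial_mg_ml * 100.0

c_initial = 0.50    # DOX added to the GO-CS suspension (mg/mL), assumed
c_unbound = 0.005   # free DOX measured in the supernatant (mg/mL), assumed
print(f"loading efficiency ~ {loading_efficiency(c_initial, c_unbound):.1f} %")
```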

Keywords: Graphene oxide, chitosan, nanosheet, controlled drug release, doxorubicin.

65 An Exploratory Study in Nursing Education: Factors Influencing Nursing Students’ Acceptance of Mobile Learning

Authors: R. Abdulrahman, A. Eardley, A. Soliman

Abstract:

The proliferation in the development of mobile learning (m-learning) has played a vital role in the rapidly growing electronic learning market. This relatively new technology can help to encourage the development of learning and to aid knowledge transfer in a number of areas by familiarizing students with innovative information and communications technologies (ICT). M-learning plays a substantial role in the deployment of learning methods for nursing students by using the Internet and portable devices to access learning resources 'anytime and anywhere'. However, acceptance of m-learning by students is critical to the successful use of m-learning systems, so there is a need to study the factors that influence students' intention to use m-learning. This paper addresses this issue. It outlines the outcomes of a study that evaluates the unified theory of acceptance and use of technology (UTAUT) model as applied to user acceptance of m-learning activity in nurse education. The model integrates the significant components of eight prominent user acceptance models and therefore introduces a standard measure with core determinants of behavioural intention. The research model extends the UTAUT in the context of m-learning acceptance by modifying the original structure of the UTAUT and adding individual innovativeness (II) and quality of service (QoS). The paper goes on to add the factors of previous experience (of using mobile devices in similar applications) and the nursing students' readiness (to use the technology) as influences on their behavioural intention to use m-learning. The study uses convenience sampling, with student volunteers as participants, to collect numerical data. A quantitative method of data collection was selected, involving an online survey with a questionnaire of 33 questions measuring the six constructs on a 5-point Likert scale. A total of 42 respondents participated, all from the Nursing Institute at the Armed Forces Hospital in Saudi Arabia. The gathered data were then tested using a research model that employs structural equation modelling (SEM), including confirmatory factor analysis (CFA). The results of the CFA show that the UTAUT model has the ability to predict student behavioural intention and to adapt m-learning activity to specific learning activities. It also demonstrates satisfactory, dependable and valid scales for the model constructs. This suggests further analysis to confirm the model as a valuable instrument for evaluating user acceptance of m-learning activity.

Keywords: Mobile learning, nursing institute, unified theory of acceptance and use of technology model.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1206
64 Effect of Non-Metallic Inclusion from the Continuous Casting Process on the Multi-Stage Forging Process and the Tensile Strength of the Bolt: A Case Study

Authors: Tomasz Dubiel, Tadeusz Balawender, Mirosław Osetek

Abstract:

The paper presents the influence of non-metallic inclusions on the multi-stage forging process and the mechanical properties of the dodecagon socket bolt used in the automotive industry. The detected metallurgical defect was so large that it directly influenced the mechanical properties of the bolt and resulted in failure to meet the requirements of the mechanical property class. In order to assess the defect, an X-ray examination and a metallographic examination of the defective bolt were performed, revealing an exogenous non-metallic inclusion. The size of the defect on the cross-section was 0.531 mm in width and 1.523 mm in length; the defect was continuous along the entire axis of the bolt. For the analysis, a finite element method (FEM) simulation of the multi-stage forging process was built, taking into account a non-metallic inclusion parallel to the sample axis, reflecting the studied case. The process of defect propagation due to material upset in the head area was analyzed. The final forging stage, shaping the dodecagonal socket and filling the flange area, was studied in particular. The defect was observed to significantly reduce the effective cross-section as a result of its expansion perpendicular to the axis of the bolt. The mechanical properties of products with and without the defect were analyzed. In the first step, the hardness test confirmed that the value required for mechanical property class 8.8 was obtained for both bolt types. In the second step, the bolts were subjected to a static tensile test. The bolts without the defect gave a positive result, while all 10 bolts with the defect gave a negative result, achieving a tensile strength below the requirements. The tensile test results were confirmed by metallographic examination and by the FEM simulation showing perpendicular spread of the inclusion in the head area. The bolts were damaged directly under the bolt head, which is inconsistent with the requirements of ISO 898-1. It has been shown that non-metallic inclusions oriented along the axis of the bolt can directly cause loss of functionality, and such defects should be detected before the bolt is assembled into a machine element.
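
A back-of-the-envelope check helps to see why such an inclusion can push a bolt below its property class. The sketch below is illustrative only: it assumes an M10 bolt and an arbitrary lateral expansion factor for the defect, while the inclusion cross-section comes from the metallographic measurement and 800 MPa is the nominal ISO 898-1 minimum tensile strength for class 8.8.

```python
# Sketch: how an inclusion that expands perpendicular to the bolt axis erodes the
# load-bearing cross-section. The M10 stress area and the expansion factor are
# assumptions; the inclusion dimensions (0.531 mm x 1.523 mm) are from the study.
stress_area_mm2 = 58.0      # nominal tensile stress area of an M10 bolt (assumed size)
rm_min_mpa = 800.0          # minimum tensile strength for property class 8.8 (d <= 16 mm)

defect_area_mm2 = 0.531 * 1.523      # inclusion cross-section before forging
expansion_factor = 5.0               # illustrative lateral spread during head upsetting
expanded_defect_area_mm2 = defect_area_mm2 * expansion_factor

effective_area_mm2 = stress_area_mm2 - expanded_defect_area_mm2
# If the failure load scales with the remaining sound cross-section:
apparent_strength_mpa = rm_min_mpa * effective_area_mm2 / stress_area_mm2

print(f"Effective area: {effective_area_mm2:.1f} mm^2 of {stress_area_mm2:.1f} mm^2")
print(f"Apparent strength: {apparent_strength_mpa:.0f} MPa "
      f"-> {'below' if apparent_strength_mpa < rm_min_mpa else 'meets'} the 800 MPa requirement")
```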

Keywords: Continuous casting, multi-stage forging, non-metallic inclusion, upset bolt head.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 559
63 Design of Identification Based Adaptive Control for Fermentation Process in Bioreactor

Authors: J. Ritonja

Abstract:

Biochemical technology has been developing extremely fast since the middle of the last century. The main reason for this development is the demand for large-scale production of high-quality, biologically manufactured products such as pharmaceuticals, foods, and beverages. The impact of the biochemical industry on the world economy is enormous. The great importance of this industry also drives intensive development in the scientific disciplines relevant to biochemical technology. In addition to developments in the fields of biology and chemistry, which make it possible to understand complex biochemical processes, development in the field of control theory and applications is also very important. In this paper, control of a biochemical reactor for milk fermentation was studied. During the fermentation process, the biophysical quantities must be precisely controlled to obtain a high-quality product. To control these quantities, the bioreactor's stirring drive and/or heating system can be used. Available commercial biochemical reactors are equipped with open-loop or conventional linear closed-loop control systems. Due to the considerable parameter variations and the partial nonlinearity of the biochemical process, the results obtained with these control systems are not satisfactory. To improve the fermentation process, a self-tuning adaptive control system was proposed. Self-tuning adaptive control is suggested because the parameter variations of the studied biochemical process are very slow in most cases. To determine a linearized mathematical model of the fermentation process, the recursive least squares identification method was used. Based on the obtained mathematical model, the linear quadratic regulator was tuned. Parameter identification and controller synthesis are executed on-line, adapting the controller's parameters to the dynamics of the fermentation process during operation. The proposed combination represents an original solution for the control of the milk fermentation process. The purpose of the paper is to contribute to the progress of control systems for biochemical reactors. The proposed adaptive control system was tested thoroughly. The obtained results show that the proposed adaptive control system assures much better tracking of the reference signal than a conventional linear control system with fixed controller parameters.
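
A minimal sketch of such an identification-based adaptive loop is given below, assuming a first-order discrete-time ARX model of the fermentation dynamics; the plant parameters, forgetting factor, and LQR weights are illustrative and not those of the study.

```python
# Sketch: self-tuning adaptive control = recursive least squares (RLS) identification
# of a first-order ARX model + discrete LQR redesign at every step.
# Model structure, plant parameters and weights are illustrative assumptions.
import numpy as np
from scipy.linalg import solve_discrete_are

rng = np.random.default_rng(1)

# "True" plant: y(k+1) = a*y(k) + b*u(k), with slowly drifting parameters.
a_true, b_true = 0.95, 0.08

theta = np.array([0.5, 0.5])   # initial estimates of [a, b]
P = np.eye(2) * 100.0          # initial RLS covariance
lam = 0.995                    # forgetting factor (tracks slow parameter drift)

y, u = 0.0, 0.0
reference = 1.0
Q, R = np.array([[10.0]]), np.array([[1.0]])

for k in range(300):
    # --- plant step (with slow drift and small measurement noise) ---
    a_true += 1e-4
    y_next = a_true * y + b_true * u + rng.normal(scale=0.005)

    # --- RLS update of the ARX parameter estimates ---
    phi = np.array([y, u])
    K = P @ phi / (lam + phi @ P @ phi)
    theta = theta + K * (y_next - phi @ theta)
    P = (P - np.outer(K, phi @ P)) / lam
    a_hat, b_hat = theta

    # --- LQR redesign on the identified model ---
    A, B = np.array([[a_hat]]), np.array([[b_hat]])
    X = solve_discrete_are(A, B, Q, R)
    Klqr = np.linalg.solve(R + B.T @ X @ B, B.T @ X @ A)

    # Regulate the tracking error around the reference (simple servo form).
    y = y_next
    u = float(-Klqr @ np.array([y - reference]))

print(f"Identified a={theta[0]:.3f}, b={theta[1]:.3f}; final output y={y:.3f}")
```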

Keywords: Adaptive control, biochemical reactor, linear quadratic regulator, recursive least square identification.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 894
62 Retrieval Augmented Generation against the Machine: Merging Human Cyber Security Expertise with Generative AI

Authors: Brennan Lodge

Abstract:

Amidst a complex regulatory landscape, Retrieval Augmented Generation (RAG) emerges as a transformative tool for Governance Risk and Compliance (GRC) officers. This paper details the application of RAG in synthesizing Large Language Models (LLMs) with external knowledge bases, offering GRC professionals an advanced means to adapt to rapid changes in compliance requirements. While the development of standalone LLMs is exciting, such models have their downsides: they cannot easily expand or revise their memory, cannot straightforwardly provide insight into their predictions, and may produce "hallucinations." Leveraging a pre-trained seq2seq transformer and a dense vector index of domain-specific data, this approach integrates real-time data retrieval into the generative process, enabling gap analysis and the dynamic generation of compliance and risk management content. We delve into the mechanics of RAG, focusing on its dual structure that pairs parametric knowledge contained within the transformer model with non-parametric data extracted from an updatable corpus. This hybrid model enhances decision-making through context-rich insights, drawing from the most current and relevant information, thereby enabling GRC officers to maintain a proactive compliance stance. Our methodology aligns with the latest advances in neural network fine-tuning, providing a granular, token-level application of retrieved information to inform and generate compliance narratives. By employing RAG, we exhibit a scalable solution that can adapt to novel regulatory challenges and cybersecurity threats, offering GRC officers a robust, predictive tool that augments their expertise. The granular application of RAG's dual structure not only improves compliance and risk management protocols but also informs the development of compliance narratives with pinpoint accuracy. It underscores AI's emerging role in strategic risk mitigation and proactive policy formation, positioning GRC officers to anticipate and navigate the complexities of regulatory evolution confidently.
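
As a rough illustration of the retrieve-then-generate pattern (not the authors' seq2seq implementation), the sketch below builds a small dense index over a toy compliance corpus, retrieves passages by cosine similarity, and passes the retrieved context to a placeholder generator; the embedding checkpoint name is a common public model and `generate_with_llm` is a stand-in for whatever LLM backend is used.

```python
# Sketch of a minimal RAG loop for compliance Q&A. The corpus and question are toy
# examples; `generate_with_llm` is a placeholder, not a real API.
import numpy as np
from sentence_transformers import SentenceTransformer

corpus = [
    "Access to production systems must be reviewed quarterly by the control owner.",
    "Encryption of personal data at rest is mandatory under the data protection policy.",
    "Incident response plans must be tested at least annually and results documented.",
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")
corpus_emb = encoder.encode(corpus, normalize_embeddings=True)  # dense vector index

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k passages most similar to the query (cosine similarity on unit vectors)."""
    q = encoder.encode([query], normalize_embeddings=True)[0]
    scores = corpus_emb @ q
    return [corpus[i] for i in np.argsort(scores)[::-1][:k]]

def generate_with_llm(prompt: str) -> str:
    """Placeholder: call whichever seq2seq/LLM backend is available."""
    return f"[LLM answer conditioned on a prompt of {len(prompt)} characters]"

question = "How often must access reviews be performed?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {question}"
print(generate_with_llm(prompt))
```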

Keywords: Retrieval Augmented Generation, Governance Risk and Compliance, Cybersecurity, AI-driven Compliance, Risk Management, Generative AI.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 128
61 Crash and Injury Characteristics of Riders in Motorcycle-Passenger Vehicle Crashes

Authors: Z. A. Ahmad Noor Syukri, A. J. Nawal Aswan, S. V. Wong

Abstract:

The motorcycle has become one of the most common types of vehicle used on the road, particularly in the Asia region, including Malaysia, due to its convenient size and affordable price. This study focuses only on crashes between motorcycles and passenger vehicles, consisting of 43 real-world crashes obtained through an in-depth crash investigation process from June 2016 to July 2017. The study collected and analyzed vehicle and site parameters obtained during crash investigation and injury information acquired from the patient-treating hospital. The investigation team, consisting of two personnel, was stationed at the Emergency Department of the treatment facility and was dispatched to the crash scene once notified of a related crash. The injury information retrieved was coded according to the level of severity using the Abbreviated Injury Scale (AIS) and classified into different body regions. The data revealed that weekend crashes were significantly higher during the night-time period, while on weekdays crash occurrence was highest during the morning hours (the commuting-to-work period). Bad weather conditions had a minimal effect on the occurrence of motorcycle-passenger vehicle crashes, and nearly 90% involved motorcycles with single riders. Riders up to 25 years old were heavily involved in crashes with passenger vehicles (60%), followed by the 26-55 year age group with 35%. Male riders were dominant in each of the age segments. The majority of the crashes involved side impacts, followed by rear impacts, and cars outnumbered the rest of the passenger vehicle types in terms of crash involvement with motorcycles. The investigation data also revealed that passenger vehicles were the most at-fault counterpart (62%) when involved in crashes with motorcycles, and most of the crashes involved situations where both vehicles were travelling in the same direction and one of them was in a turning maneuver. More than 80% of the involved motorcycle riders were assigned a yellow severity level during the triage process. The study also found that nearly 30% of the riders sustained injuries to the lower extremities, while MAIS level 3 injuries were recorded for all body regions except the thorax region. The results showed that crashes in which the motorcycles were at fault were more likely to occur at night and in rainy conditions. These crashes were also more likely to involve passenger vehicle types other than cars and to result in a higher ISS (>6) for the involved rider. To reduce motorcycle fatalities, the characteristics concerned must first be understood, and focus may be given to crashes involving passenger vehicles as the most dominant crash partner on Malaysian roads.
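
For readers unfamiliar with the coding scheme, the ISS value referenced in the results is conventionally derived from the AIS severities: the squares of the highest AIS scores in the three most severely injured of six body regions are summed. A minimal sketch with illustrative injuries (not cases from the study):

```python
# Sketch: computing the Injury Severity Score (ISS) from AIS-coded injuries.
# Injury data below are illustrative, not cases from the study.

ISS_BODY_REGIONS = ("head_neck", "face", "chest", "abdomen", "extremities", "external")

def injury_severity_score(injuries: dict[str, list[int]]) -> int:
    """injuries maps each ISS body region to the AIS severities recorded for it (1-6)."""
    worst = {region: max(scores, default=0) for region, scores in injuries.items()}
    if any(score == 6 for score in worst.values()):
        return 75  # unsurvivable injury, ISS set to the maximum by convention
    top_three = sorted(worst.values(), reverse=True)[:3]
    return sum(score ** 2 for score in top_three)

# Example rider: MAIS 3 lower-extremity fracture, AIS 2 abdominal injury, AIS 1 abrasions.
example = {"extremities": [3, 2], "abdomen": [2], "external": [1]}
print(injury_severity_score(example))  # 3^2 + 2^2 + 1^2 = 14, i.e. ISS > 6
```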

Keywords: Motorcycle crash, passenger vehicle, in-depth crash investigation, injury mechanism.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1126
60 Trimmed Mean as an Adaptive Robust Estimator of a Location Parameter for Weibull Distribution

Authors: Carolina B. Baguio

Abstract:

One of the purposes of robust estimation is to reduce the influence of outliers in the data on the estimates. The outliers arise from gross errors or contamination from distributions with long tails. The trimmed mean is a robust estimate, meaning that it is not sensitive to violations of the distributional assumptions of the data. It is called an adaptive estimate when the trimming proportion is determined from the data rather than being fixed a priori. The main objective of this study is to examine the robustness properties of adaptive trimmed means in terms of efficiency, high breakdown point and influence function. Specifically, it seeks to find the magnitude of the trimming proportion of the adaptive trimmed mean which will yield efficient and robust estimates of the parameter for data which follow a modified Weibull distribution with parameter λ = 1/2, where the trimming proportion is determined by a ratio of two trimmed means defined as the tail length. Secondly, the asymptotic properties of the tail length and the trimmed means are also investigated. Finally, a comparison is made of the efficiency of the adaptive trimmed means, in terms of the standard deviation, for data-determined trimming proportions and for proportions fixed a priori. The asymptotic tail lengths, defined as the ratio of two trimmed means, and the asymptotic variances were computed using the derived formulas. The values of the standard deviations of the derived tail lengths, for data of size 40 simulated from a Weibull distribution, were computed over 100 iterations using a computer program written in Pascal. The findings of the study revealed that the tail lengths of the Weibull distribution increase in magnitude as the trimming proportions increase, that the measure of the tail length and the adaptive trimmed mean are asymptotically independent as the number of observations n approaches infinity, that the tail length is asymptotically distributed as the ratio of two independent normal random variables, and that the asymptotic variances decrease as the trimming proportions increase. The simulation study revealed empirically that the standard error of the adaptive trimmed mean based on the ratio of tail lengths is relatively smaller, for different values of trimming proportions, than its counterpart with trimming proportions fixed a priori.
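
A simulation in the spirit of the one described can be sketched as below; the tail-length statistic (a ratio of two trimmed means) and the trimming rule are simplified stand-ins for the paper's definitions, while the sample size and iteration count follow the reported set-up.

```python
# Sketch: an adaptive trimmed mean for heavy-tailed Weibull data. The tail-length
# statistic below (a ratio of two trimmed means) is a simplified stand-in for the
# paper's definition, and the trimming rule is illustrative.
import numpy as np
from scipy.stats import trim_mean

rng = np.random.default_rng(7)

def tail_length(x: np.ndarray) -> float:
    """Ratio of a lightly trimmed mean to a heavily trimmed mean; larger => heavier tails."""
    return trim_mean(x, 0.05) / trim_mean(x, 0.25)

def adaptive_trimmed_mean(x: np.ndarray) -> tuple[float, float]:
    """Pick the trimming proportion from the data via the tail-length statistic."""
    q = tail_length(x)
    alpha = 0.05 if q < 1.5 else 0.15 if q < 2.5 else 0.25
    return trim_mean(x, alpha), alpha

# 100 simulated samples of size 40 from a Weibull distribution with shape parameter 1/2.
estimates = []
for _ in range(100):
    sample = rng.weibull(0.5, size=40)
    estimate, alpha = adaptive_trimmed_mean(sample)
    estimates.append(estimate)

print(f"Mean estimate: {np.mean(estimates):.3f}, "
      f"standard error: {np.std(estimates, ddof=1):.3f}")
```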

Keywords: Adaptive robust estimate, asymptotic efficiency, breakdown point, influence function, L-estimates, location parameter, tail length, Weibull distribution.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 2074
59 Digital Twins: Towards an Overarching Framework for the Built Environment

Authors: Astrid Bagireanu, Julio Bros-Williamson, Mila Duncheva, John Currie

Abstract:

Digital Twins (DTs) have entered the built environment from more established industries like aviation and manufacturing, although there has never been a common goal for utilising DTs at scale. Their assimilation into the built environment has lacked its own handover documentation: how should DTs be implemented in a project, and what responsibilities should each project stakeholder hold in realising a DT vision? What is needed is an approach to translate these requirements into actionable DT dimensions. This paper presents a foundation for an overarching framework specific to the built environment. For the purposes of this research, the project timeline is established by referencing the Royal Institute of British Architects (RIBA) Plan of Work 2020, providing a foundation for delineating project stages. The RIBA Plan of Work consists of eight stages designed to inform the definition, briefing, design, coordination, construction, handover, and use of a built asset. Similar project stages are utilised in other countries; therefore, the recommendations from the interviews presented in this paper are applicable internationally. At the same time, there is no single mainstream software resource that leverages DT capabilities. This ambiguity meets an unparalleled ambition from governments and industries worldwide to achieve a national grid of interconnected DTs. For the construction industry to access these benefits, it needs a defined starting point. This research aims to provide a comprehensive understanding of the potential applications and ramifications of DTs in the context of the built environment. This paper is an integral part of a larger research project aimed at developing a conceptual framework for the Architecture, Engineering, and Construction (AEC) sector following a conventional project timeline. Therefore, this paper plays a pivotal role in providing practical insights and a tangible foundation for developing a stage-by-stage approach to assimilating the potential of DTs within the built environment. First, the research focuses on a review of relevant literature, albeit acknowledging the inherent constraint of limited sources available. Secondly, a qualitative study compiling the views of 14 DT experts is presented, concluding with an inductive analysis of the interview findings, ultimately highlighting the barriers and strengths of DTs in the context of framework development. As parallel developments aim to progress net-zero-centred design and improve project efficiencies across the built environment, the limited resources available to support DTs should be leveraged to propel the industry into its digitalisation era, and AEC stakeholders have a fundamental role in understanding this from the earliest stages of a project.

Keywords: Digital twins, decision making, design, net-zero, built environment.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 527
58 Towards End-To-End Disease Prediction from Raw Metagenomic Data

Authors: Maxence Queyrel, Edi Prifti, Alexandre Templier, Jean-Daniel Zucker

Abstract:

Analysis of the human microbiome using metagenomic sequencing data has demonstrated a high ability to discriminate between various human diseases. Raw metagenomic sequencing data require multiple complex and computationally heavy bioinformatics steps prior to data analysis. Such data contain millions of short sequence reads from fragmented DNA, stored as fastq files. Conventional processing pipelines consist of multiple steps, including quality control, filtering, and alignment of sequences against genomic catalogs (genes, species, taxonomic levels, functional pathways, etc.). These pipelines are complex to use, time-consuming, and rely on a large number of parameters that often introduce variability and affect the estimation of the microbiome elements. Training deep neural networks directly from raw sequencing data is a promising approach to bypass some of the challenges associated with mainstream bioinformatics pipelines. Most of these methods use the concept of word and sentence embeddings, which create a meaningful numerical representation of DNA sequences while extracting features and reducing the dimensionality of the data. In this paper, we present an end-to-end approach, metagenome2vec, that classifies patients into disease groups directly from raw metagenomic reads. This approach is composed of four steps: (i) generating a vocabulary of k-mers and learning their numerical embeddings; (ii) learning DNA sequence (read) embeddings; (iii) identifying the genome from which each sequence is most likely to come; and (iv) training a multiple instance learning classifier which predicts the phenotype based on the vector representation of the raw data. An attention mechanism is applied in the network so that the model can be interpreted, assigning a weight to the influence of each genome on the prediction. Using two public real-life data sets as well as a simulated one, we demonstrated that this original approach reaches high performance, comparable with state-of-the-art methods applied directly to data processed through mainstream bioinformatics workflows. These results are encouraging for this proof-of-concept work. We believe that with further dedication, DNN models have the potential to surpass mainstream bioinformatics workflows in disease classification tasks.
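
The front end of such a pipeline, turning raw reads into k-mer tokens, embedding them, and pooling read vectors into a sample-level representation with attention, can be illustrated with a toy sketch; the reads, k-mer size, and random embedding table below are placeholders for the learned components described above.

```python
# Sketch: k-mer tokenization of reads, read embeddings, and toy attention pooling over
# reads (multiple instance learning). Reads, k-mer size and the random embedding table
# are illustrative; the paper learns these embeddings end to end.
import numpy as np

rng = np.random.default_rng(0)
K = 4  # k-mer size (assumption; real pipelines typically use larger k)

reads = ["ACGTACGGTAC", "TTGACGTACGA", "GGCATTACGTT"]  # toy reads, not real fastq data

def kmers(read: str, k: int = K) -> list[str]:
    return [read[i:i + k] for i in range(len(read) - k + 1)]

# Build the k-mer vocabulary and a random embedding table (stand-in for learned embeddings).
vocab = sorted({km for r in reads for km in kmers(r)})
index = {km: i for i, km in enumerate(vocab)}
embed_dim = 8
embeddings = rng.normal(size=(len(vocab), embed_dim))

# Read embedding = mean of its k-mer embeddings.
read_vecs = np.stack([embeddings[[index[km] for km in kmers(r)]].mean(axis=0) for r in reads])

# Toy attention pooling over reads: weights sum to 1, and the weighted sum is the
# sample-level vector a phenotype classifier would act on.
w = rng.normal(size=embed_dim)          # attention parameters (untrained here)
scores = read_vecs @ w
attn = np.exp(scores - scores.max())
attn /= attn.sum()
sample_vec = attn @ read_vecs

print("Attention over reads:", np.round(attn, 3))
print("Sample representation shape:", sample_vec.shape)
```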

Keywords: Metagenomics, phenotype prediction, deep learning, embeddings, multiple instance learning.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 911
57 Domestic Violence against Children and Trafficking in Human Beings: Two Worrying Phenomena in Kosovo

Authors: Adile Shaqiri, Arjeta Shaqiri Latifi

Abstract:

Domestic violence and trafficking in human beings, especially violence against children, are worldwide problems. Domestic violence remains one of the most widespread forms of violence in Kosovo and often continues to be described as a "closed door issue". Recognition, acceptance and prioritization of cases of domestic violence require much greater awareness among individuals in institutions of the risks, consequences and costs that the lack of a well-coordinated response brings to the country. Considering that children are the future and the wealth of the country, violence and neglect against them should be treated as carefully as possible. The purpose of this paper is to identify steps towards the prevention of domestic violence and trafficking in human beings, so that their consequences and psychological effects do not spread widely through society. This study examines how the phenomenon of domestic violence is related to trafficking in human beings. The methods used are historical, comparative and qualitative. Data obtained from the relevant institutions, i.e., from the actors who are the first responders as well as the policy makers, are presented. Although these phenomena are present in all countries of the world, Kosovo is no exception, and therefore comparisons of the development of child abuse have also been made with other countries in the region. Since Kosovo is a country in transition, with a relatively high level of education, low economic development, high unemployment, political instability and a dysfunctional legal infrastructure, it can be concluded that the potential for the development of negative phenomena is present and inevitable. Thus, during the research, the stages of development of these phenomena are analyzed, determining the causes and consequences of child abuse and neglect and their impact on trafficking in human beings. The Kosovar family (parental responsibility), culture and religion, social services, the dignity of the abused child, etc., were analyzed. A review was also made of the legislation, educational institutions (curricula), and governmental and non-governmental institutions, their responsibilities and their cooperation in combating child abuse and trafficking. During the work on the paper, recommendations and conclusions were drawn; it is concluded that we need an environment with educational reforms, stability in the political environment, economic development, a review of social policies, greater awareness in society and more adequate information through the media, so that information and awareness can reach even the most remote parts of Kosovar society.

Keywords: Awareness, education, information, society, violence.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 685
56 Utilizing Fly Ash Cenosphere and Aerogel for Lightweight Thermal Insulating Cement-Based Composites

Authors: Asad Hanif, Pavithra Parthasarathy, Zongjin Li

Abstract:

Thermal insulating composites help to reduce the total power consumption in a building by creating a barrier between the external and internal environments. Such composites can be used in roofing tiles or wall panels for exterior surfaces. This study aims to develop lightweight cement-based composites for thermal insulating applications. Waste materials such as silica fume (an industrial by-product) and fly ash cenosphere (FAC) (hollow micro-spherical shells obtained as a waste residue from coal-fired power plants) were used as partial replacement of cement and as lightweight filler, respectively. Moreover, aerogel, a nano-porous material made of silica, was also used in different dosages for improved thermal insulating behavior, while polyvinyl alcohol (PVA) fibers were added for enhanced toughness. The raw materials, including binders and fillers, were characterized by X-Ray Diffraction (XRD), X-Ray Fluorescence spectroscopy (XRF), and Brunauer-Emmett-Teller (BET) analysis, through which various physical and chemical properties of the raw materials were evaluated, such as specific surface area, chemical composition (oxide form), and pore size distribution (if any). Ultra-lightweight cementitious composites were developed by varying the amounts of FAC and aerogel, with 28-day unit weights ranging from 1551.28 kg/m³ to 1027.85 kg/m³. Excellent mechanical and thermal insulating properties of the resulting composites were obtained, ranging from 53.62 MPa to 8.66 MPa in compressive strength, 9.77 MPa to 3.98 MPa in flexural strength, and 0.3025 W/m-K to 0.2009 W/m-K in thermal conductivity coefficient (QTM-500). The composites were also tested for the peak temperature difference between the outer and inner surfaces when subjected to heating by a 275 W infrared lamp in a specially designed experimental set-up. A temperature difference of up to 16.78 °C was achieved, which indicates the outstanding ability of the developed composites to act as a thermal barrier for building envelopes. Microstructural studies were carried out by Scanning Electron Microscopy (SEM) and Energy Dispersive X-ray Spectroscopy (EDS) to characterize the inner structure of the composite specimens. Also, the hydration products were quantified using surface area mapping and the line scan technique in EDS. The microstructural analyses indicated excellent bonding of FAC and aerogel in the cementitious system. The selective reactivity of FAC was also ascertained from the SEM imagery, where partially consumed FAC shells were observed. All in all, the lightweight fillers FAC and aerogel helped to produce the lightweight composites due to their physical characteristics, while exceptional mechanical properties were achieved owing to the partial reactivity of FAC.
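
To put the reported conductivities in context, a simple steady-state conduction estimate for a wall panel is sketched below; the panel thickness and temperature difference are assumptions, while the two conductivity values bound the reported range.

```python
# Sketch: steady-state heat flux through a wall panel made of the developed composite.
# Panel thickness and temperature difference are assumed; the conductivities are the
# upper and lower bounds of the reported range (QTM-500 measurements).
panel_thickness_m = 0.05   # 50 mm panel (assumption)
delta_t_k = 20.0           # indoor/outdoor temperature difference in kelvin (assumption)

for label, k in [("highest reported k", 0.3025), ("lowest reported k", 0.2009)]:
    r_value = panel_thickness_m / k    # thermal resistance, m^2.K/W
    heat_flux = delta_t_k / r_value    # conductive heat flux, W/m^2
    print(f"{label}: R = {r_value:.3f} m^2.K/W, heat flux = {heat_flux:.1f} W/m^2")
```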

Keywords: Sustainable development, fly ash cenosphere, aerogel, lightweight, cement, composite.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 2211