Search results for: nonlinear optimization with constraints
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5312

782 Exploring the Intrinsic Ecology and Suitable Density of Historic Districts Through a Comparative Analysis of Ancient and Modern Ecological Smart Practices

Authors: HU Changjuan, Gong Cong, Long Hao

Abstract:

Although urban ecological policies and the public's aspiration for livable environments have expedited the pace of ecological revitalization, historic districts that have evolved through natural ecological processes often become obsolete and less habitable amid rapid urbanization. This raises a critical question: are historic districts inherently incapable of being ecological and livable? The thriving concept of ‘intrinsic ecology,’ characterized by its ability to transform city-district systems into healthy ecosystems with diverse environments, stable functions, and rapid restoration capabilities, holds potential for guiding the integration of ancient and modern ecological wisdom while supporting the dynamic involvement of cultures. This study explores the intrinsic ecology of historic districts from three aspects: 1) Population Density: by comparing the population density before urban population expansion to that of the present day, determine a reasonable population density for historic districts. 2) Building Density: using the ‘Space-mate’ tool for comparative analysis, form a spatial matrix to explore the intrinsic ecology of building density in Chinese historic districts. 3) Green Capacity Ratio: using ecological districts as control samples, conduct dual comparative analyses (related comparison and upgraded comparison) to determine the intrinsic ecological advantages of two-dimensional and three-dimensional green volume in historic districts. The study informs a density optimization strategy that supports cultural, social, natural, and economic ecology, contributing to the creation of eco-historic districts.

Keywords: eco-historic districts, intrinsic ecology, suitable density, green capacity ratio

Procedia PDF Downloads 7
781 Optimization Method of the Number of Berth at Bus Rapid Transit Stations Based on Passenger Flow Demand

Authors: Wei Kunkun, Cao Wanyang, Xu Yujie, Qiao Yuzhi, Liu Yingning

Abstract:

The reasonable design of bus berths can improve the traffic capacity of a station and reduce traffic congestion. In order to reasonably determine the number of berths at BRT (Bus Rapid Transit) stops, this paper draws on actual BRT station observation data, scheduling data, and passenger flow data to optimize the number of station berths from the perspective of balancing supply and demand at the site. Combined with the classical capacity calculation model, the paper first analyzes the important factors affecting the traffic capacity of BRT stops, namely the distribution of bus arrivals at the stops and the distribution of stop (dwell) times, using SPSS PRO and MATLAB. Secondly, the berth-number calculation of the classic Highway Capacity Manual (HCM) model is optimized based on the actual passenger demand of the station, and a method applicable to the actual number of station berths is proposed. Taking Gangding Station of the Zhongshan Avenue BRT corridor in Guangzhou as an example, the proposed method yields 2 berths for each of sub-stations 1, 2 and 3, which reduces the road space of the station by 33.3% compared with the previous 3 berths per sub-station and returns the saved space to general traffic. Therefore, while the passenger flow demand of the BRT station is still met, the road space of the station is reduced, road space is returned to general traffic, the capacity available to general traffic is improved, and the capacity and efficiency of the BRT corridor system are improved as a whole.
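
For context, a minimal sketch of an HCM/TCQSM-style loading-area (berth) capacity calculation of the kind the paper optimizes; the formula structure is standard, but every number below (dwell time, clearance time, failure rate, demand) is an illustrative assumption, not the Gangding Station data.

```python
import math

def berth_capacity(dwell_s: float, clearance_s: float,
                   z: float = 1.28, cv: float = 0.6) -> float:
    """Buses per hour one loading area can serve (HCM/TCQSM-style).

    dwell_s     : mean dwell time at the stop (s)
    clearance_s : time for one bus to pull out and the next to pull in (s)
    z           : one-tail normal variate for the design failure rate
                  (1.28 ~ 10% chance a bus must queue behind the stop)
    cv          : coefficient of variation of dwell times
    """
    margin = z * cv * dwell_s          # operating margin against queuing
    return 3600.0 / (clearance_s + dwell_s + margin)

def required_berths(peak_buses_per_h: float, dwell_s: float,
                    clearance_s: float) -> int:
    """Smallest berth count whose combined capacity meets peak demand."""
    per_berth = berth_capacity(dwell_s, clearance_s)
    return math.ceil(peak_buses_per_h / per_berth)

# Illustrative numbers only:
print(required_berths(peak_buses_per_h=90, dwell_s=25, clearance_s=10))  # -> 2
```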

Keywords: urban transportation, bus rapid transit station, HCM model, capacity, number of berths

Procedia PDF Downloads 90
780 Rheological Study of Chitosan/Montmorillonite Nanocomposites: The Effect of Chemical Crosslinking

Authors: K. Khouzami, J. Brassinne, C. Branca, E. Van Ruymbeke, B. Nysten, G. D’Angelo

Abstract:

The development of hybrid organic-inorganic nanocomposites has recently attracted great interest. Typically, polymer silicates represent an emerging class of polymeric nanocomposites that offer superior material properties compared to each compound alone. Among these materials, complexes based on silicate clay and polysaccharides are some of the most promising nanocomposites. The strong electrostatic interaction between chitosan and montmorillonite can induce a so-called physical hydrogel, in which the coordination bonds or physical crosslinks may associate and dissociate reversibly and in a short time. These mechanisms could be the main origin of their unique rheological behavior. However, owing to their intrinsically heterogeneous structure and/or the lack of dissipated energy, such hydrogels are usually brittle, possess poor toughness and may not have sufficient mechanical strength. Consequently, the properties of these nanocomposites cannot meet the requirements of many applications in several fields. To address the issue of weak mechanical properties, covalent chemical crosslinks can be introduced into the physical hydrogel. In this way, quite homogeneous dually crosslinked microstructures with high dissipated energy and enhanced mechanical strength can be engineered. In this work, we have prepared a series of chitosan-montmorillonite nanocomposites chemically crosslinked by the addition of poly(ethylene glycol) diglycidyl ether. This study aims to provide a better understanding of the mechanical behavior of dually crosslinked chitosan-based nanocomposites by relating it to their microstructures. In these systems, a variety of microstructures is obtained by modifying the number of crosslinks. Subsequently, distinctive rheological properties of the chemically crosslinked chitosan-montmorillonite nanocomposites are achieved, especially at the highest percentage of clay. Their rheological behavior depends on the clay/chitosan ratio and the crosslinking. All specimens exhibit a viscous rheological behavior over the frequency range investigated. The flow curves of the nanocomposites show a Newtonian plateau at very low shear rates, followed by a rather complicated nonlinear decrease with increasing shear rate. Crosslinking induces a shear-thinning behavior, revealing the formation of network-like structures. Fitting the shear viscosity curves with the Ostwald-de Waele equation disclosed that crosslinking and clay addition strongly affect the pseudoplasticity of the nanocomposites for shear rates γ̇ > 20.
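
As a sketch of the fitting step mentioned above, the Ostwald-de Waele (power-law) model η = K·γ̇^(n−1) can be fitted as a straight line in log-log space; the viscosity data below are synthetic stand-ins, not the measured curves.

```python
import numpy as np

# Ostwald-de Waele (power-law) model: eta = K * gamma_dot**(n - 1).
# Synthetic illustrative data; gamma_dot in 1/s, eta in Pa.s.
gamma_dot = np.array([25., 50., 100., 200., 400.])
eta = np.array([1.8, 1.2, 0.8, 0.55, 0.37])

# Linear fit in log-log space: log eta = log K + (n - 1) * log gamma_dot
slope, intercept = np.polyfit(np.log(gamma_dot), np.log(eta), 1)
K, n = np.exp(intercept), slope + 1.0

print(f"K = {K:.3f} Pa.s^n, n = {n:.3f}")  # n < 1 indicates shear thinning
```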

Keywords: chitosan, crosslinking, nanocomposites, rheological properties

Procedia PDF Downloads 139
779 Application of Response Surface Methodology to Optimize the Factor Influencing the Wax Deposition of Malaysian Crude Oil

Authors: Basem Elarbe, Ibrahim Elganidi, Norida Ridzuan, Norhyati Abdullah

Abstract:

Wax deposition in production pipelines and transportation tubing from offshore to onshore is a critical problem in the oil and gas industry due to low-temperature conditions. It may lead to reduced production, shut-ins, plugging of pipelines and increased fluid viscosity. The most popular approach to solve this issue is the injection of a wax inhibitor into the pipeline. This research aims to determine the amount of wax deposition of Malaysian crude oil by estimating the effective parameters using Design-Expert (version 7.1.6) software and the response surface methodology (RSM). Important parameters affecting wax deposition, such as cold finger temperature, inhibitor concentration and experimental duration, were investigated. It can be concluded that the SA-co-BA copolymer had a high capability of reducing wax under different conditions: the minimum wax deposition was found at 300 rpm, 14 °C, 1 h and 1200 ppm, where the amount of wax collected was 0.12 g. The RSM approach was applied using a rotatable central composite design (CCD) to minimize the wax deposit amount. The analysis of variance (ANOVA) of the regression model revealed an R² value of 0.9906, indicating that the model explains 99.06% of the variation in the data, with only 0.94% of the total variation left unexplained. This indicates that the model is highly significant, confirming close agreement between the experimental and predicted values. In addition, the results show that the amount of wax deposit decreased significantly with increasing temperature and concentration of poly(stearyl acrylate-co-behenyl acrylate) (SABA), which were set at 14 °C and 1200 ppm, respectively. The amount of wax deposit was successfully reduced to a minimum value of 0.01 g after the optimization.
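
For illustration, a central composite design can be fitted with a full quadratic response-surface model by ordinary least squares, which is essentially what RSM software does internally; the coded runs and responses below are invented, not the paper's data.

```python
import numpy as np

# Hypothetical (coded) CCD runs for two factors: x1 = cold-finger
# temperature, x2 = inhibitor concentration; y = wax deposit mass (g).
X_raw = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
                  [-1.414, 0], [1.414, 0], [0, -1.414], [0, 1.414],
                  [0, 0], [0, 0], [0, 0]])
y = np.array([0.32, 0.18, 0.20, 0.05, 0.30, 0.08, 0.27, 0.10, 0.12, 0.13, 0.11])

x1, x2 = X_raw[:, 0], X_raw[:, 1]
# Full quadratic model: y = b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2
A = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)

ss_res = np.sum((y - A @ beta)**2)
ss_tot = np.sum((y - y.mean())**2)
print("coefficients:", np.round(beta, 3))
print("R^2 =", round(1 - ss_res / ss_tot, 4))
```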

Keywords: wax deposition, SABA inhibitor, RSM, operation factors

Procedia PDF Downloads 274
778 Seismic Fragility Assessment of Continuous Integral Bridge Frames with Variable Expansion Joint Clearances

Authors: P. Mounnarath, U. Schmitz, Ch. Zhang

Abstract:

Fragility analysis has become an effective tool for the seismic vulnerability assessment of civil structures over the last several years. The design of expansion joints according to various bridge design codes is largely inconsistent, and only a few studies have focused on this problem so far. In this study, the influence of the expansion joint clearances between the girder ends and the abutment backwalls on the seismic fragility assessment of continuous integral bridge frames is investigated. The gaps (60 mm, 150 mm, 250 mm and 350 mm) are designed by following two different bridge design code specifications, namely Caltrans and Eurocode 8-2. Five bridge models are analyzed and compared. The first bridge model serves as a reference. This model uses three-dimensional reinforced concrete fiber beam-column elements with simplified supports at both ends of the girder. The other four models also employ reinforced concrete fiber beam-column elements but include the abutment backfill stiffness and the four different gap values. Nonlinear time history analysis is performed. Artificial ground motion sets with peak ground accelerations (PGAs) ranging from 0.1 g to 1.0 g, in increments of 0.05 g, are taken as input. Soil-structure interaction and P-Δ effects are also included in the analysis. Component fragility curves, in terms of the curvature ductility demand-to-capacity ratio of the piers and the displacement demand-to-capacity ratio of the abutment sliding bearings, are established and compared. The system fragility curves are then obtained by combining the component fragility curves. Our results show that in the component fragility analysis, the reference bridge model exhibits more severe vulnerability than the other, more sophisticated bridge models for all damage states. In the system fragility analysis, the reference curves show a smaller damage probability in the lower PGA ranges for the first three damage states, but a higher fragility than the other curves at larger PGA levels. In the fourth damage state, the reference curve has the smallest vulnerability. In both the component and the system fragility analyses, the same trend is found: bridge models with smaller clearances exhibit lower fragility than those with larger openings. Nevertheless, the bridge model with the maximum clearance still induces the minimum pounding force effect.
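
A minimal sketch of how component fragility curves are commonly parameterized (a lognormal CDF of PGA) and combined into a system curve; the medians, dispersions, and the series-system independence assumption below are illustrative, not the paper's fitted values.

```python
import numpy as np
from scipy.stats import norm

def fragility(pga, theta, beta):
    """Lognormal fragility: P(demand/capacity >= 1 | PGA).

    theta : median PGA (g) at which the damage state is reached
    beta  : lognormal dispersion of the demand/capacity ratio
    """
    return norm.cdf(np.log(pga / theta) / beta)

# Illustrative component medians/dispersions (not the paper's values):
pga = np.arange(0.1, 1.05, 0.05)
p_pier    = fragility(pga, theta=0.45, beta=0.5)   # pier curvature ductility
p_bearing = fragility(pga, theta=0.60, beta=0.6)   # abutment sliding bearing

# Series-system fragility assuming independent component failures:
p_system = 1.0 - (1.0 - p_pier) * (1.0 - p_bearing)
print(np.round(p_system, 3))
```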

Keywords: expansion joint clearance, fiber beam-column element, fragility assessment, time history analysis

Procedia PDF Downloads 428
777 Optimization, Characterization and Stability of Trachyspermum copticum Essential Oil Loaded in Niosome Nanocarriers

Authors: Mohadese Hashemi, Elham Akhoundi Kharanaghi, Fatemeh Haghiralsadat, Mojgan Yazdani, Omid Javani, Mahboobe Sharafodini, Davood Rajabi

Abstract:

Niosomes are non-ionic surfactant vesicles that form closed bilayer structures in aqueous media and can be used as carriers of hydrophilic and hydrophobic compounds. The use of niosomes for the encapsulation of essential oils (EOs) is an attractive new approach to overcoming their physicochemical stability concerns, which include sensitivity to oxygen, light and temperature, volatility, and reduced bioavailability due to low solubility in water. EOs are unstable and fragile volatile compounds of strong pharmaceutical interest due to medicinal properties such as antiviral, anti-inflammatory, antifungal, and antioxidant activities without side effects. Trachyspermum copticum (ajwain) is an annual aromatic plant with important medicinal properties that grows widely around the Mediterranean region and in south-west Asian countries. The major components of ajwain oil have been reported as thymol, γ-terpinene, p-cymene, and carvacrol, which provide antimicrobial and antioxidant activity. The aim of this work was to formulate ajwain essential oil-loaded niosomes to improve the water solubility of the natural product and to evaluate their physico-chemical features and stability. Ajwain oil was obtained through steam distillation using a Clevenger-type apparatus, and GC/MS was applied to identify the main components of the essential oil. Niosomes were prepared using the thin film hydration method, and the nanoparticles were characterized for particle size, dispersity index, zeta potential, encapsulation efficiency, in vitro release, and morphology.

Keywords: trachyspermum copticum, ajwain, niosome, essential oil, encapsulation

Procedia PDF Downloads 475
776 Evaluation of Microbial Community, Biochemical and Physiological Properties of Korean Black Raspberry (Rubus coreanus Miquel) Vinegar Manufacturing Process

Authors: Nho-Eul Song, Sang-Ho Baik

Abstract:

The fermentation characteristics of black raspberry vinegar produced using static cultures without any additives have been investigated in order to establish vinegar manufacturing conditions and improve vinegar quality by optimizing the manufacturing process. Two vinegar manufacturing conditions were prepared: a one-step fermentation using only mother vinegar (naturally occurring black raspberry vinegar) without starter yeast for alcohol fermentation (the traditional method), and a two-step fermentation using commercial wine yeast followed by mother vinegar for acetic acid fermentation. Approximately 12% ethanol was produced after 35 days of fermentation, with a yeast population of log 7.6 CFU/mL, in the one-step fermentation, with sugar reduced from 14 to 6 °Brix; in the two-step fermentation, the ethanol concentration reached 8% after 27 days, with the yeast population increasing continuously to log 7.0 CFU/mL. Yeast counts and ethanol decreased after day 60, accompanied by the proliferation of acetic acid bacteria (log 5.8 CFU/mL) and rising titratable acidity: 4.4% in the traditional method and 6% in the two-step method. DGGE analysis showed that S. cerevisiae was detected until day 77 of the traditional fermentation, with the community gradually shifting to acetic acid bacteria, Acetobacter pasteurianus as the dominant species and Komagataeibacter xylinus at the end of the fermentation. In the two-step fermentation process, however, S. cerevisiae and A. pasteurianus were dominant. The two-step fermentation showed significantly enhanced total polyphenol and flavonoid contents, resulting in higher radical-scavenging activity. Our study is the first to reveal the microbial community change together with the chemical changes, and it demonstrates a suitable fermentation system for black raspberry vinegar by the static surface method.

Keywords: bacteria, black raspberry, vinegar fermentation, yeast

Procedia PDF Downloads 443
775 Parallel Fuzzy Rough Support Vector Machine for Data Classification in Cloud Environment

Authors: Arindam Chaudhuri

Abstract:

Classification of data has been actively used as one of the most effective and efficient means of conveying knowledge and information to users. The primary focus has always been on techniques for extracting useful knowledge from data such that returns are maximized. With the emergence of huge datasets, existing classification techniques often fail to produce desirable results. The challenge lies in analyzing and understanding the characteristics of massive data sets by retrieving useful geometric and statistical patterns. We propose a supervised parallel fuzzy rough support vector machine (PFRSVM) for data classification in a cloud environment. The classification is performed by PFRSVM using a hyperbolic tangent kernel. The fuzzy rough set model takes care of the sensitivity to noisy samples and handles impreciseness in training samples, bringing robustness to the results. The membership function is a function of the center and radius of each class in feature space and is represented with the kernel. It plays an important role in sampling the decision surface. The success of PFRSVM is governed by choosing appropriate parameter values. The training samples are either linearly or nonlinearly separable. Different input points make unique contributions to the decision surface. The algorithm is parallelized with a view to reducing training times. The system is built on a support vector machine library using the Hadoop implementation of MapReduce. The algorithm is tested on large data sets to check its feasibility and convergence. The performance of the classifier is also assessed in terms of the number of support vectors. The challenges encountered in implementing big data classification in machine learning frameworks are also discussed. The experiments are done on the cloud environment available at the University of Technology and Management, India. The results are illustrated for Gaussian RBF and Bayesian kernels. The effect of variability in prediction and generalization of PFRSVM is examined with respect to values of the parameter C. It effectively resolves outlier effects and imbalanced and overlapping class problems, generalizes to unseen data, and relaxes the dependency between features and labels. The average classification accuracy for PFRSVM is better than that of other classifiers for both Gaussian RBF and Bayesian kernels. The experimental results on both synthetic and real data sets clearly demonstrate the superiority of the proposed technique.
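
As a simplified stand-in for PFRSVM, the sketch below trains a plain SVM with the hyperbolic tangent (sigmoid) kernel on an imbalanced toy set and reports the support vector count; the fuzzy rough membership weighting and the Hadoop/MapReduce parallelization of the actual system are omitted.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Toy imbalanced data set standing in for a large cloud-hosted one.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=8,
                           weights=[0.8, 0.2], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Hyperbolic tangent kernel: K(x, y) = tanh(gamma * <x, y> + coef0).
clf = make_pipeline(StandardScaler(),
                    SVC(kernel="sigmoid", gamma="scale", coef0=0.0, C=1.0))
clf.fit(X_tr, y_tr)

print("accuracy:", round(clf.score(X_te, y_te), 3))
print("support vectors:", clf[-1].n_support_.sum())  # assessed in the paper
```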

Keywords: FRSVM, Hadoop, MapReduce, PFRSVM

Procedia PDF Downloads 484
774 Optimization of a Bioremediation Strategy for an Urban Stream of Matanza-Riachuelo Basin

Authors: María D. Groppa, Andrea Trentini, Myriam Zawoznik, Roxana Bigi, Carlos Nadra, Patricia L. Marconi

Abstract:

In the present work, a remediation bioprocess based on the use of a local isolate of the microalgae Chlorella vulgaris immobilized in alginate beads is proposed. This process was shown to be effective for the reduction of several chemical and microbial contaminants present in Cildáñez stream, a water course that is part of the Matanza-Riachuelo Basin (Buenos Aires, Argentina). The bioprocess, involving the culture of the microalga in autotrophic conditions in a stirred-tank bioreactor supplied with a marine propeller for 6 days, allowed a significant reduction of Escherichia coli and total coliform numbers (over 95%), as well as of ammoniacal nitrogen (96%), nitrates (86%), nitrites (98%), and total phosphorus (53%) contents. Pb content was also significantly diminished after the bioprocess (95%). Standardized cytotoxicity tests using Allium cepa seeds and Cildáñez water pre- and post-remediation were also performed. Germination rate and mitotic index of onion seeds imbibed in Cildáñez water subjected to the bioprocess was similar to that observed in seeds imbibed in distilled water and significantly superior to that registered when untreated Cildáñez water was used for imbibition. Our results demonstrate the potential of this simple and cost-effective technology to remove urban-water contaminants, offering as an additional advantage the possibility of an easy biomass recovery, which may become a source of alternative energy.

Keywords: bioreactor, bioremediation, Chlorella vulgaris, Matanza-Riachuelo Basin, microalgae

Procedia PDF Downloads 241
773 On-Ice Force-Velocity Modeling Technical Considerations

Authors: Dan Geneau, Mary Claire Geneau, Seth Lenetsky, Ming-Chang Tsai, Marc Klimstra

Abstract:

Introduction: Horizontal force-velocity profiling (HFVP) involves modeling an athlete's linear sprint kinematics to estimate valuable maximum force and velocity metrics. This approach to performance modeling has been used in field-based team sports and has recently been introduced to ice hockey as a forward skating performance assessment. While preliminary data have been collected on ice, the distance constraints of the on-ice test restrict the athletes' ability to reach their maximal velocity, which limits the model's ability to effectively estimate athlete performance. This is especially true of more elite athletes. This report explores whether athletes on ice are able to reach a velocity plateau similar to what has been seen in overground trials. Fourteen male Major Junior ice hockey players (body mass = 83.87 ± 7.30 kg, height = 188 ± 3.4 cm, age = 18 ± 1.2 years) were recruited. For on-ice sprints, participants completed a standardized warm-up consisting of skating and dynamic stretching and a progression of three skating efforts from 50% to 95%. Following the warm-up, participants completed three on-ice 45 m sprints, with three minutes of rest between trials. For overground sprints, participants completed a dynamic warm-up similar to that of the on-ice trials, followed by three 40 m overground sprint trials. For each trial (on-ice and overground), a radar device (Stalker ATS II, Texas, USA) aimed at the participant's waist was used to collect instantaneous velocity. Sprint velocities were modeled using a custom Python (version 3.2) script with a mono-exponential function, similar to previous work. To determine whether on-ice trials achieved a maximum velocity (plateau), the minimum acceleration values of the modeled data at the end of the sprint were compared between on-ice and overground trials using a paired t-test. Significant differences (P < 0.001) between overground and on-ice minimum accelerations were observed. On-ice trials consistently reported higher final acceleration values, indicating that a maximum maintained velocity (plateau) had not been reached. Based on these preliminary findings, it is suggested that reliable HFVP metrics cannot yet be collected from all ice hockey populations using current methods. Elite male populations were not able to achieve a velocity plateau similar to that seen in overground trials, indicating the absence of a maximum velocity measure. With current velocity and acceleration modeling techniques, which depend on reaching a velocity plateau, these results indicate the potential for error in on-ice HFVP measures. These findings therefore suggest that a greater on-ice sprint distance may be required, or that other velocity modeling techniques are needed in which maximal velocity is not required for a complete profile.
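
A sketch of the mono-exponential fit described above, assuming the common model v(t) = v_max·(1 − e^(−t/τ)); the radar trace is synthetic, and the end-of-sprint acceleration is the quantity the authors compare between conditions (near zero only if a plateau is reached).

```python
import numpy as np
from scipy.optimize import curve_fit

def mono_exp(t, v_max, tau):
    """Mono-exponential sprint velocity model: v(t) = v_max*(1 - exp(-t/tau))."""
    return v_max * (1.0 - np.exp(-t / tau))

# Synthetic radar trace standing in for one 45 m on-ice trial:
rng = np.random.default_rng(0)
t = np.linspace(0.1, 6.0, 60)
v_obs = mono_exp(t, v_max=9.2, tau=1.3) + rng.normal(0, 0.15, t.size)

(v_max, tau), _ = curve_fit(mono_exp, t, v_obs, p0=[8.0, 1.0])
a_final = (v_max / tau) * np.exp(-t[-1] / tau)  # modeled acceleration at sprint end

print(f"v_max = {v_max:.2f} m/s, tau = {tau:.2f} s")
print(f"end-of-sprint acceleration = {a_final:.3f} m/s^2")  # ~0 only at a plateau
```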

Keywords: ice-hockey, sprint, skating, power

Procedia PDF Downloads 94
772 Teaching–Learning-Based Optimization: An Efficient Method for Chinese as a Second Language

Authors: Qi Wang

Abstract:

In the classroom, teachers are trained to complete the target task within the limited lecture time, while learners need to take in a great deal of new knowledge; however, most of the time learners arrive without the proper pre-class preparation needed to efficiently absorb the content taught in class. Under these circumstances, teachers have no time to check whether the learners fully understand the content, or how the learners communicate in different contexts, until the learners are tested. Over the past decade, the teaching of Chinese has shifted: teaching focuses less on the use of proper grammatical terms and punctuation and places a heavier focus on materials from real-life contexts. As a result, it has become a greater challenge for teachers, as this requires them to fully understand and prepare what they teach and to explain the content to learners in simple, understandable words. The same challenge also applies to the learners, who come from different countries, as they have to use what they have learnt, based on their personal understanding of the material, to communicate effectively with others in the classroom and in day-to-day communication. To reach this win-win stage, Feynman's Technique plays a very important role. This practical report presents how Feynman's Technique has been applied to Chinese courses, both written and oral, to motivate learners to practice more writing, reading and speaking over the past few years. Part 1 analyzes different teaching styles and different types of learners to find the most efficient approach for both teachers and learners. Part 2 shows, based on the theory of Feynman's Technique, how to let learners build knowledge, from knowing the name of something to knowing something, via differently designed target tasks. Part 3 presents the outcomes, which show that Feynman's Technique is the interaction of learning style and teaching style, the double-edged sword of teaching and learning Chinese as a second language.

Keywords: Chinese, Feynman’s technique, learners, teachers

Procedia PDF Downloads 146
771 Comparison of Methodologies to Compute the Probabilistic Seismic Hazard Involving Faults and Associated Uncertainties

Authors: Aude Gounelle, Gloria Senfaute, Ludivine Saint-Mard, Thomas Chartier

Abstract:

The long-term deformation rates of faults are not fully captured by probabilistic seismic hazard assessment (PSHA). PSHA approaches that use catalogues to develop area or smoothed-seismicity sources are limited by the data available to constrain future earthquake activity rates. The integration of faults in PSHA can at least partially address long-term deformation. However, careful treatment of fault sources is required, particularly in low strain rate regions, where estimated seismic hazard levels are highly sensitive to assumptions concerning fault geometry, segmentation and slip rate. When integrating faults in PSHA, various constraints on earthquake rates from geologic and seismologic data have to be satisfied, which is especially challenging in low strain rate regions where such data are scarce. Integrating faults in PSHA requires converting the geologic and seismologic data into fault geometries and slip rates, and then into earthquake activity rates. Several approaches exist for translating slip rates into earthquake activity rates. In the most frequently used approach, background earthquakes are handled with a truncated model, in which earthquakes with a magnitude lower than or equal to a threshold magnitude (Mw) occur in the background zone, at a rate defined by the earthquake catalogue, while magnitudes higher than the threshold are located on the fault, at a rate defined using the average slip rate of the fault. As highlighted by several studies, seismic events with magnitudes stronger than the selected threshold may potentially occur in the background and not only on the fault, especially in regions of slow tectonic deformation. It is also known that several sections of a fault, or several faults, can rupture during a single fault-to-fault rupture. It is therefore essential to apply a consistent modelling procedure that allows a large set of possible fault-to-fault ruptures to occur aleatorily in the hazard model while reflecting the individual slip rate of each section of the fault. In 2019, a tool named SHERIFS (Seismic Hazard and Earthquake Rates in Fault Systems) was published. The tool uses a methodology that converts the slip-rate budget of each fault into rupture rates for all possible single-fault and fault-to-fault ruptures. The objective of this paper is to compare the SHERIFS method with another frequently used model, to analyse the impact on the seismic hazard, and, through sensitivity studies, to better understand the influence of key parameters and assumptions. For this application, a simplified but realistic case study was selected, located in an area of moderate to high seismicity (southeast France) where the fault is assumed to have a low strain rate.
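
To make the slip-rate budget idea concrete, a toy sketch converts a fault's slip rate into a moment rate (Ṁ₀ = μAṡ) and spends it on a set of candidate ruptures via the Hanks-Kanamori relation; the fault dimensions, slip rate, and budget shares below are assumptions, and SHERIFS itself distributes the budget far more carefully across single and fault-to-fault ruptures.

```python
MU = 3.0e10  # crustal shear modulus (Pa), a standard assumption

def moment_rate(area_km2: float, slip_mm_yr: float) -> float:
    """Seismic moment rate M0_dot = mu * A * s, in N.m per year."""
    return MU * (area_km2 * 1e6) * (slip_mm_yr * 1e-3)

def m0(mw: float) -> float:
    """Hanks-Kanamori relation: M0 = 10**(1.5*Mw + 9.1), in N.m."""
    return 10 ** (1.5 * mw + 9.1)

# Illustrative fault area (km2) and slip rate (mm/yr):
budget = moment_rate(600.0, 0.5)
ruptures = {6.5: 0.6, 7.0: 0.3, 7.5: 0.1}   # Mw: assumed share of the budget

for mw, share in ruptures.items():
    rate = share * budget / m0(mw)           # events per year
    print(f"Mw {mw}: {rate:.2e} /yr (return period ~{1/rate:,.0f} yr)")
```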

Keywords: deformation rates, faults, probabilistic seismic hazard, PSHA

Procedia PDF Downloads 53
770 Mobile Crowdsensing Scheme by Predicting Vehicle Mobility Using Deep Learning Algorithm

Authors: Monojit Manna, Arpan Adhikary

Abstract:

Mobile crowdsensing is an emerging paradigm in which users across the globe are selected to perform sensing tasks. In today's urban cities, mobile vehicles are well suited to the tasks of data sensing and data collection because of their ubiquity and mobility. In this work, we focus on selecting the optimal set of mobile nodes so as to collect the maximum amount of data from urban areas and fulfill the required data for a future period within a couple of minutes. We formulate the vehicle recruitment requirement as a data-maximization problem under a budget constraint. The application implementation is set up to emulate a realistic online platform in which vehicles move continuously in real time, and the data center has the authority to select a set of vehicles immediately. A deep learning-based scheme with the help of mobile vehicles (DLMV) is proposed to collect sensing data from the urban environment. To look ahead in time, this work proposes a deep learning-based offline algorithm to predict mobility. We then propose a greedy approach, applied as an online algorithm that steps through a subset of vehicles, for this NP-complete problem with a limited budget. Extensive experimental evaluations are conducted on a real mobility dataset from Rome. The experimental results not only confirm the efficiency of our proposed solution but also prove the validity of DLMV, which improves the quantity of collected sensing data compared with other algorithms.
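
A hedged sketch of the greedy budgeted-selection step: vehicles are picked by predicted new coverage per unit cost until the budget runs out. The coverage sets would come from the deep learning mobility predictor; here they are simply given, and all names and numbers are illustrative.

```python
def greedy_recruit(vehicles, budget):
    """Greedy budgeted coverage: repeatedly pick the vehicle with the best
    predicted-new-cells-covered per unit cost until the budget is spent.

    vehicles: dict name -> (cost, set of grid cells the mobility model
              predicts the vehicle will visit)
    """
    chosen, covered = [], set()
    remaining = dict(vehicles)
    while remaining and budget > 0:
        best, best_ratio = None, 0.0
        for name, (cost, cells) in remaining.items():
            gain = len(cells - covered)           # cells not yet covered
            if cost <= budget and gain / cost > best_ratio:
                best, best_ratio = name, gain / cost
        if best is None:                          # nothing affordable or useful
            break
        cost, cells = remaining.pop(best)
        chosen.append(best)
        covered |= cells
        budget -= cost
    return chosen, covered

vehicles = {"v1": (3, {1, 2, 3, 4}), "v2": (2, {3, 4, 5}),
            "v3": (4, {6, 7, 8, 9, 10}), "v4": (1, {1, 10})}
print(greedy_recruit(vehicles, budget=6))
```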

Keywords: mobile crowdsensing, deep learning, vehicle recruitment, sensing coverage, data collection

Procedia PDF Downloads 67
769 The Role of Leadership in Enhancing Health Information Systems to Improve Patient Outcomes in China

Authors: Nisar Ahmad, Xuyi, Ali Akbar

Abstract:

As healthcare systems worldwide strive for improvement, the integration of advanced health information systems (HIS) has emerged as a pivotal strategy. This study aims to investigate the critical role of leadership in the implementation and enhancement of HIS in Chinese hospitals and how such leadership can drive improvements in patient outcomes and overall healthcare satisfaction. We propose a comprehensive study to be conducted across various hospitals in China, targeting healthcare professionals as the primary population. The research will leverage established theories of transformational leadership and technology acceptance to underpin the analysis. In our approach, data will be meticulously gathered through surveys and interviews, focusing on the experiences and perceptions of healthcare professionals regarding HIS implementation and its impact on patient care. The study will utilize SPSS and SmartPLS software for robust data analysis, ensuring precise and comprehensive insights into the correlation between leadership effectiveness and HIS success. We hypothesize that strong, visionary leadership is essential for the successful adoption and optimization of HIS, leading to enhanced patient outcomes and increased satisfaction with healthcare services. By applying advanced statistical methods, we aim to identify key leadership traits and practices that significantly contribute to these improvements. Our research will provide actionable insights for policymakers and healthcare administrators in China, offering evidence-based recommendations to foster leadership that champions HIS and drives continuous improvement in healthcare delivery. This study will contribute to the global discourse on health information systems, emphasizing the future role of leadership in transforming healthcare environments and outcomes.

Keywords: health information systems, leadership, patient outcomes, healthcare satisfaction

Procedia PDF Downloads 17
768 Performance Analysis of Search Medical Imaging Service on Cloud Storage Using Decision Trees

Authors: González A. Julio, Ramírez L. Leonardo, Puerta A. Gabriel

Abstract:

Telemedicine services use a large amount of data, most of which consists of diagnostic images in the Digital Imaging and Communications in Medicine (DICOM) and Health Level Seven (HL7) formats. Metadata is generated for each image to support its identification. This study presents the use of decision trees for the optimization of search processes for diagnostic images hosted on a cloud server. To analyze server performance, the following quality of service (QoS) metrics are evaluated: delay, bandwidth, jitter, latency and throughput, in five test scenarios for a total of 26 experiments during the uploading and downloading of DICOM images hosted by the telemedicine group server of the Universidad Militar Nueva Granada, Bogotá, Colombia. By applying decision trees as a data mining technique and comparing them with sequential search, it was possible to evaluate the search times for diagnostic images on the server. The results show that by using the metadata in decision trees, search times are substantially improved, computational resources are optimized and the request management of the telemedicine image service is improved. Based on the experiments carried out, search efficiency increased by 45% relative to sequential search, because false positives are avoided in the management and acquisition of the information when downloading a diagnostic image. It is concluded that, for diagnostic image services in telemedicine, the decision tree technique guarantees accessibility and robustness in the acquisition and manipulation of medical images, improving diagnoses and medical procedures for patients.
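
An illustrative sketch, not the study's actual pipeline: a decision tree trained on DICOM metadata (the fields and storage partitions below are hypothetical) can route a query to one partition instead of scanning every record sequentially.

```python
from sklearn.preprocessing import OrdinalEncoder
from sklearn.tree import DecisionTreeClassifier

# Hypothetical DICOM metadata rows: (modality, body part, study year)
# and the storage partition where each image lives.
records = [("CT", "HEAD", 2016), ("CT", "CHEST", 2016), ("MR", "HEAD", 2017),
           ("MR", "SPINE", 2017), ("US", "ABDOMEN", 2016), ("CT", "HEAD", 2017),
           ("US", "ABDOMEN", 2017), ("MR", "SPINE", 2016)]
partitions = ["p1", "p2", "p3", "p3", "p4", "p1", "p4", "p3"]

enc = OrdinalEncoder()
X = enc.fit_transform([(m, b, str(y)) for m, b, y in records])

tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, partitions)

# Route a new query to a single partition rather than scanning all of them:
query = enc.transform([("MR", "HEAD", "2017")])
print("search partition:", tree.predict(query)[0])   # -> 'p3'
```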

Keywords: cloud storage, decision trees, diagnostic image, search, telemedicine

Procedia PDF Downloads 195
767 Synthesis and Characterization of Cassava Starch-Zinc Nanocomposite Film for Food Packaging Application

Authors: Adeshina Fadeyibi

Abstract:

The application of pure thermoplastic film in food packaging is greatly limited by its poor service performance, which is often enhanced by the addition of organic or inorganic particles in the 1–100 nm range. This study was therefore conducted to develop cassava starch zinc-nanocomposite films for applications in food packaging. Three blending ratios of 1000 g cassava starch, 45–55% (w/w) glycerol and 0–2% (w/w) zinc nanoparticles were formulated, mixed and mechanically homogenized to form the nanocomposite. Thermoplastic films were prepared from a dispersed mixture of 24 g of the nanocomposite and 600 ml of distilled water, heated to 90 °C for 30 minutes. Plastic molds of 350 × 180 mm and 8, 10 and 12 mm depths were used for film casting and drying at 60 °C and 80% RH for 24 hours. The average thicknesses of the dried films were found to be 15, 16 and 17 µm. The films were characterized based on their barrier, thermal, mechanical and structural properties. The results show that the oxygen and water vapor barrier properties increased with glycerol concentration and decreased with thickness, while the full width at half maximum (FWHM) and d-spacing increased with thickness. The higher d-spacing obtained is a consequence of greater polymer intercalation and exfoliation. Also, only 2% weight degradation was observed when the films were exposed to temperatures between 30 and 60 °C, indicating that they are thermally stable and can be used for packaging applications in the tropics. The mechanical properties of the films were higher than those of the pure thermoplastic and comparable with LDPE films. The information on the characterized attributes and the optimization of the cassava starch zinc-nanocomposite films justifies their application as an alternative to pure thermoplastic and conventional films for food packaging.
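
For reference, the d-spacing reported from XRD follows from Bragg's law; the sketch below assumes Cu Kα radiation and invented peak positions to show how a shift to lower 2θ (larger d-spacing) indicates greater intercalation.

```python
import numpy as np

def d_spacing_nm(two_theta_deg: float, wavelength_nm: float = 0.15406) -> float:
    """Bragg's law with n = 1: d = lambda / (2*sin(theta)). Cu K-alpha assumed."""
    return wavelength_nm / (2.0 * np.sin(np.radians(two_theta_deg / 2.0)))

# Illustrative peak positions (degrees 2-theta), not measured values:
for thickness_um, two_theta in [(15, 19.8), (16, 19.2), (17, 18.6)]:
    print(f"{thickness_um} um film: d = {d_spacing_nm(two_theta):.4f} nm")
```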

Keywords: synthesis, characterization, cassava starch, nanocomposite film, packaging

Procedia PDF Downloads 111
766 Thermolysin Entrapment in a Gold Nanoparticles/Polymer Composite: Construction of an Efficient Biosensor for Ochratoxin a Detection

Authors: Fatma Dridi, Mouna Marrakchi, Mohammed Gargouri, Alvaro Garcia Cruz, Sergei V. Dzyadevych, Francis Vocanson, Joëlle Saulnier, Nicole Jaffrezic-Renault, Florence Lagarde

Abstract:

An original method has been successfully developed for the immobilization of thermolysin onto gold interdigitated electrodes for the detection of ochratoxin A (OTA) in olive oil samples. A mix of polyvinyl alcohol (PVA), polyethylenimine (PEI) and gold nanoparticles (AuNPs) was used. The sensor chips were cross-linked in a saturated glutaraldehyde (GA) vapor atmosphere in order to render the two polymers water-stable. The performance of the AuNPs/(PVA/PEI)-modified electrode was compared to a traditional enzyme immobilization method using bovine serum albumin (BSA). Atomic force microscopy (AFM) experiments were employed to provide useful insight into the structure and morphology of the immobilized thermolysin composite membranes. The enzyme immobilization method influences the topography and texture of the deposited layer. Biosensor optimization and analytical characteristics were studied. Under optimal conditions, the AuNPs/(PVA/PEI)-modified electrode showed a higher increase in sensitivity: an enhancement factor of 700 could be achieved, with a detection limit of 1 nM. The newly designed OTA biosensors showed long-term stability and good reproducibility. The relevance of the method was evaluated using commercially available spiked olive oil samples. No pretreatment of the sample was needed for testing, and no matrix effect was observed. Recovery values were close to 100%, demonstrating the suitability of the proposed method for OTA screening in olive oil.

Keywords: thermolysin, ochratoxin A, polyvinyl alcohol, polyethylenimine, gold nanoparticles, olive oil

Procedia PDF Downloads 581
765 Integrating Computational Modeling and Analysis with in Vivo Observations for Enhanced Hemodynamics Diagnostics and Prognosis

Authors: Shreyas S. Hegde, Anindya Deb, Suresh Nagesh

Abstract:

Computational biomechanics is developing rapidly as a non-invasive tool to assist the medical community in both the diagnosis and prognosis of human body related issues such as injuries, cardiovascular dysfunction, and atherosclerotic plaque. Any system that helps to properly diagnose such problems or assists prognosis would be a boon to doctors and the medical community in general. Recently, a lot of work has been focused in this direction, including, but not limited to, finite element analyses related to dental implants, skull injuries, and orthopedic problems involving bones and joints. Such numerical solutions are helping medical practitioners to come up with alternative solutions for these problems and in most cases have also reduced the trauma on patients. Some work has also been done in the use of computational fluid mechanics to understand the flow of blood through the human body, the area of hemodynamics. Since cardiovascular diseases are one of the main causes of loss of human life, understanding blood flow with and without constraints (such as blockages), and providing alternative methods of prognosis and solutions for issues related to blood flow, would help save the valuable lives of such patients. This project is an attempt to use computational fluid dynamics (CFD) to solve specific problems related to hemodynamics. The hemodynamics simulation is used to gain a better understanding of functional, diagnostic and theoretical aspects of blood flow. Because many fundamental issues of blood flow, like phenomena associated with pressure and viscous force fields, are still not fully understood or entirely described through mathematical formulations, the characterization of blood flow remains a challenging task. Computational modeling of the blood flow, and of the mechanical interactions that strongly affect the blood flow patterns, based on medical data and imaging, represents the most accurate analysis of the complex behavior of blood flow. In this project, the mathematical modeling of blood flow in arteries in the presence of successive blockages has been analyzed using the CFD technique. Different cases of blockages, in terms of percentages, have been modeled using the commercial software CATIA V5R20 and simulated using the commercial software ANSYS 15.0 to study the effect of varying wall shear stress (WSS) values and of other parameters such as an increase in Reynolds number. The concept of fluid-structure interaction (FSI) has been used to solve such problems. The model simulation results were validated using in vivo measurement data from the existing literature.
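
A back-of-the-envelope sketch, under idealized Poiseuille assumptions, of the two quantities studied (Reynolds number and WSS) and how both rise as a blockage narrows the lumen; the vessel dimensions and flow rate below are illustrative, not the simulated geometry.

```python
import numpy as np

# Idealized Poiseuille estimates for one arterial segment; a stenosis
# (blockage) of x% by diameter shrinks D and raises both Re and WSS.
rho, mu = 1060.0, 3.5e-3          # blood density (kg/m^3) and viscosity (Pa.s)
Q = 5.0e-6                        # volumetric flow rate (m^3/s), ~300 mL/min

for blockage in (0.0, 0.3, 0.5):  # fraction of diameter lost, illustrative
    D = 4.0e-3 * (1.0 - blockage) # healthy lumen diameter of 4 mm assumed
    v = Q / (np.pi * D**2 / 4.0)  # mean velocity
    Re = rho * v * D / mu         # Reynolds number
    wss = 4.0 * mu * Q / (np.pi * (D / 2.0)**3)  # Poiseuille wall shear stress
    print(f"{blockage:.0%} blockage: v={v:.2f} m/s, Re={Re:.0f}, WSS={wss:.1f} Pa")
```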

Keywords: computational fluid dynamics, hemodynamics, blood flow, results validation, arteries

Procedia PDF Downloads 397
764 Modeling and Simulating Productivity Loss Due to Project Changes

Authors: Robert Pellerin, Michel Gamache, Remi Trudeau, Nathalie Perrier

Abstract:

The context of large engineering projects is particularly favorable to the appearance of engineering changes and contractual modifications. These elements are potential causes of claims. In this paper, we investigate one of the critical components of the claim management process: the calculation of the impacts of changes in terms of productivity losses due to the need to accelerate some project activities. When project changes are initiated, delays can arise. Indeed, project activities are often executed in fast-tracking mode in an attempt to respect the completion date. But the acceleration of project execution and the resulting rework can entail important costs as well as induce productivity losses. In the past, numerous methods have been proposed to quantify the duration of delays, the gains achieved by project acceleration, and the loss of productivity. The calculations related to those changes can be divided into two categories: direct costs and indirect costs. Direct costs are easily quantifiable, as opposed to indirect costs, which are rarely taken into account when calculating the cost of an engineering change or contract modification, despite several research projects on this subject. However, the proposed models have not yet been accepted by companies, nor have they been accepted in court. Those models require extensive data and are often seen as too specific to be used for all projects. These techniques also ignore resource constraints and the interdependencies between the causes of delays and the delays themselves. To resolve this issue, this research proposes a simulation model that mimics how major engineering changes or contract modifications are handled in large construction projects. The model replicates the use of overtime in a reactive scheduling mode in order to simulate the loss of productivity present when a project change occurs. Multiple tests were conducted to compare the results of the proposed simulation model with statistical analyses conducted by other researchers. Different scenarios were also run to determine the impact of the number of activities, the time of occurrence of the change, the availability of resources, and the type of project change on productivity loss. Our results demonstrate that the number of activities in the project is a critical variable influencing the productivity of a project. When changes occur, the presence of a large number of activities leads to a much lower productivity loss than a small number of activities. The speed of productivity reduction for 30-job projects is about 25 percent faster than for 120-job projects. The moment of occurrence of a change also shows a significant impact on productivity: the sooner the change occurs, the lower the productivity of the labor force. The availability of resources also impacts the productivity of a project when a change is implemented; there is a higher loss of productivity when the amount of resources is restricted.
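
A toy illustration of the reactive-overtime idea, not the authors' simulation model: a change adds rework to the remaining jobs, absorbed through overtime worked at reduced efficiency, so productivity falls more the earlier the change occurs. All rates and durations below are assumptions.

```python
import random

def simulate(n_jobs, change_at, overtime_eff=0.7, rework_frac=0.3, seed=1):
    """Toy reactive-scheduling run: a change at job `change_at` adds rework
    to every remaining job, absorbed through overtime at reduced efficiency.
    Returns productive hours / total hours worked (illustrative only).
    """
    rng = random.Random(seed)
    productive = total = 0.0
    for j in range(n_jobs):
        base = rng.uniform(8, 16)              # nominal job duration (h)
        productive += base
        if j >= change_at:                     # job affected by the change
            rework = rework_frac * base        # extra work triggered
            total += base + rework / overtime_eff
        else:
            total += base
    return productive / total

for n in (30, 120):
    for when in (0.1, 0.5, 0.9):               # change early / mid / late
        p = simulate(n, int(when * n))
        print(f"{n} jobs, change at {when:.0%}: productivity = {p:.2f}")
```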

Keywords: engineering changes, indirect costs overtime, productivity, scheduling, simulation

Procedia PDF Downloads 234
763 Influence of Biochar Application on Growth, Dry Matter Yield and Nutrition of Corn (Zea mays L.) Grown on Sandy Loam Soils of Gujarat, India

Authors: Pravinchandra Patel

Abstract:

Sustainable agriculture in sandy loam soil generally faces large constraints due to low water holding and nutrient retention capacity and accelerated mineralization of soil organic matter. There is a need to increase soil organic carbon for higher crop productivity and soil sustainability. Recently, biochar has come to be regarded as a 'sixth element' that works as a catalyst for increasing crop yield, soil fertility and soil sustainability, and for mitigating climate change. Biochar was generated at the Sansoli Farm of Anand Agricultural University, Gujarat, India by slow pyrolysis at 250-400 °C in the absence of oxygen (using two kilns) from corn stover (Zea mays L.), cluster bean stover (Cyamopsis tetragonoloba) and Prosopis juliflora wood. There were 16 treatments: four organic sources (three biochars, namely corn stover (MS), cluster bean stover (CB) and Prosopis juliflora wood (PJ), plus farmyard manure (FYM)) at two application rates (5 and 10 metric tons/ha), giving eight organic-source treatments. Eight of these were applied with the recommended dose of fertilizers (RDF) (80-40-0 kg/ha N-P-K), while the remaining eight were applied without RDF. Application of corn stover biochar at 10 metric tons/ha along with RDF (RDF+MS) increased dry matter (DM) yield, crude protein (CP) yield, chlorophyll content and plant height (at 30 and 60 days after sowing) more than the CB and PJ biochars and FYM. Nutrient uptake of P, K, Ca, Mg, S and Cu increased significantly with the application of RDF + corn stover biochar at 10 metric tons/ha, while uptake of N and Mn increased significantly with RDF + corn stover biochar at 5 metric tons/ha. It was found that soil application of corn stover biochar at 10 metric tons/ha along with the recommended dose of chemical fertilizers (RDF+MS) exhibited the highest impact, giving significantly higher dry matter and crude protein yields and larger removal of nutrients from the soil, and it was also beneficial for the build-up of nutrients in the soil. It also showed significantly higher organic carbon content and cation exchange capacity in the sandy loam soil. The lower dose of corn stover biochar at 5 metric tons/ha (RDF+MS) remained second best for increasing the dry matter and crude protein yields of the forage corn crop, which ultimately resulted in larger removals of nutrients from the soil. This study highlights the synergistic effect of mixing biochar with the recommended dose of fertilizers on sandy loam soil nutrient retention, organic carbon content and water holding capacity, and hence the amendment value of biochar in sandy loam soil.

Keywords: biochar, corn yield, plant nutrient, fertility status

Procedia PDF Downloads 140
762 Clean Sky 2 Project LiBAT: Light Battery Pack for High Power Applications in Aviation – Simulation Methods in Early Stage Design

Authors: Jan Dahlhaus, Alejandro Cardenas Miranda, Frederik Scholer, Maximilian Leonhardt, Matthias Moullion, Frank Beutenmuller, Julia Eckhardt, Josef Wasner, Frank Nittel, Sebastian Stoll, Devin Atukalp, Daniel Folgmann, Tobias Mayer, Obrad Dordevic, Paul Riley, Jean-Marc Le Peuvedic

Abstract:

Electrical and hybrid aerospace technologies pose very challenging demands on the battery pack, especially with respect to weight and power. In the Clean Sky 2 research project LiBAT (funded by the EU), the consortium is currently building an ambitious prototype with state-of-the-art cells that shows the potential of an intelligent pack design with a high level of integration, especially with respect to thermal management and power electronics. For the latter, innovative multi-level inverter technology is used to realize the required power-converting functions with reduced equipment. In this talk, the key approaches and methods of the LiBAT project will be presented and central results shown. Special focus will be placed on the simulation methods used to support the early design and development stages from an overall system perspective. The applied methods can efficiently handle multiple domains and deal with different time and length scales, thus allowing the analysis and optimization of overall and sub-system behavior. It will be shown how these simulations provide valuable information and insights for the efficient evaluation of concepts. As a result, the construction and iteration of hardware prototypes has been reduced and development cycles shortened.

Keywords: electric aircraft, battery, Li-ion, multi-level-inverter, Novec

Procedia PDF Downloads 159
761 Micro-Transformation Strategy Of Residential Transportation Space Based On The Demand Of Residents: Taking A Residential District In Wuhan, China As An Example

Authors: Hong Geng, Zaiyu Fan

Abstract:

With the acceleration of urbanization and motorization in China, the scale of cities and the travel distances of residents are constantly expanding, and the number of cars is continuously increasing, so urban traffic problems are becoming more and more serious. Traffic congestion, environmental pollution, energy consumption, travel safety and direct interference between traffic and other urban activities are increasingly prominent problems brought about by motorized development. This not only has a serious impact on the lives of residents but also on the healthy development of the city. The paper finds that urban planning and the traffic planning and design of residential areas often accommodate the development of motorized traffic but neglect the demand for street life. This kind of planning has resulted in the destruction of the traditional communication space of the residential area, noise and exhaust gas pollution, and potential safety risks, disturbing the previously quiet and comfortable life of the residential area, inconveniencing residents and draining street vitality. Based on these findings, this paper takes a residential area in Wuhan as the research object and, through field investigation and research, from the perspective of micro-transformation and combined with the concept of traffic micro-reconstruction governance, puts forward residential traffic optimization strategies such as strengthening the interaction and connection between the residential area and the urban street system and classifying and organizing street traffic.

Keywords: micro-transformation, residential traffic, residents demand, traffic microcirculation

Procedia PDF Downloads 111
760 Cost-Benefit Analysis for the Optimization of Noise Abatement Treatments at the Workplace

Authors: Paolo Lenzuni

Abstract:

The cost-effectiveness of noise abatement treatments at the workplace has not yet received adequate consideration. Furthermore, most of the published work is focused on productivity, despite the poor correlation of this quantity with noise levels. There is currently no tool to estimate the social benefit associated with a specific noise abatement treatment, and no comparison among different options is accordingly possible. In this paper, we present an algorithm which has been developed to predict the cost-effectiveness of any planned noise control treatment in a workplace. This algorithm is based on the estimates of hearing threshold shifts included in ISO 1999 and on the compensations that workers are entitled to once their work-related hearing impairments have been certified. The benefits of a noise abatement treatment are estimated by means of the lower compensation costs which are paid to the impaired workers. Although such benefits have no real meaning in strictly monetary terms, they allow a reliable comparison between different treatments, since actual social costs can be assumed to be proportional to compensation costs. The existing European legislation on occupational exposure to noise mandates that the noise exposure level be reduced below the upper action limit (85 dBA). There is accordingly little or no motivation for employers to sustain the extra costs required to lower the noise exposure below the lower action limit (80 dBA). In order to make this goal more appealing for employers, the algorithm proposed in this work also includes an ad-hoc element that promotes actions which bring the noise exposure below 80 dBA. The algorithm has a twofold potential: 1) it can be used as a quality index to promote cost-effective practices; 2) it can be added to the existing criteria used by workers' compensation authorities to evaluate the cost-effectiveness of technical actions, and so support dedicated employers.
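
Purely as an illustration of the kind of comparison such an algorithm enables, not the paper's actual formulation (which rests on ISO 1999 hearing threshold shifts): a score proxying avoided compensation per unit treatment cost, with an ad-hoc bonus for dropping below the 80 dBA lower action limit.

```python
def treatment_score(leq_before: float, leq_after: float,
                    cost: float, comp_per_db: float = 1500.0,
                    bonus_below_80: float = 1.3) -> float:
    """Illustrative cost-effectiveness index, not the paper's algorithm.

    benefit = avoided compensation, proxied here as linear in the dB
    reduction, boosted when the treatment brings exposure below the
    80 dBA lower action limit; score = benefit / cost (higher is better).
    """
    benefit = comp_per_db * max(leq_before - leq_after, 0.0)
    if leq_after < 80.0:
        benefit *= bonus_below_80      # ad-hoc incentive term
    return benefit / cost

# Compare two candidate treatments for the same workplace (made-up data):
print(round(treatment_score(88, 84, cost=20000), 3))   # -> 0.3
print(round(treatment_score(88, 79, cost=45000), 3))   # -> 0.39, deeper cut wins
```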

Keywords: cost-effectiveness, noise, occupational exposure, treatment

Procedia PDF Downloads 312
759 Design of an Innovative Geothermal Heat Pump with a PCM Thermal Storage

Authors: Emanuele Bonamente, Andrea Aquino

Abstract:

This study presents an innovative design for geothermal heat pumps with the goal of maximizing the system efficiency (COP - Coefficient of Performance), reducing the soil use (e.g. length/depth of geothermal boreholes) and initial investment costs. Based on experimental data obtained from a two-year monitoring of a working prototype implemented for a commercial building in the city of Perugia, Italy, an upgrade of the system is proposed and the performance is evaluated via CFD simulations. The prototype was designed to include a thermal heat storage (i.e. water), positioned between the boreholes and the heat pump, acting as a flywheel. Results from the monitoring campaign show that the system is still capable of providing the required heating and cooling energy with a reduced geothermal installation (approx. 30% of the standard length). In this paper, an optimization of the system is proposed, re-designing the heat storage to include phase change materials (PCMs). Two stacks of PCMs, characterized by melting temperatures equal to those needed to maximize the system COP for heating and cooling, are disposed within the storage. During the working cycle, the latent heat of the PCMs is used to heat (cool) the water used by the heat pump while the boreholes independently cool (heat) the storage. The new storage is approximately 10 times smaller and can be easily placed close to the heat pump in the technical room. First, a validation of the CFD simulation of the storage is performed against experimental data. The simulation is then used to test possible alternatives of the original design and it is finally exploited to evaluate the PCM-storage performance for two different configurations (i.e. single- and double-loop systems).

Keywords: geothermal heat pump, phase change materials (PCM), energy storage, renewable energies

Procedia PDF Downloads 305
758 Spectroscopic Study of Tb³⁺ Doped Calcium Aluminozincate Phosphor for Display and Solid-State Lighting Applications

Authors: Sumandeep Kaur, Allam Srinivasa Rao, Mula Jayasimhadri

Abstract:

In recent years, rare earth (RE) ion-doped inorganic luminescent materials have been attracting great attention due to their excellent physical and chemical properties. These materials offer high thermal and chemical stability and exhibit good luminescence properties due to the presence of RE ions. The luminescent properties of these materials are attributed to the intra-configurational f-f transitions of the RE ions. A series of Tb³⁺ doped calcium aluminozincate phosphors has been synthesized via the sol-gel method. The structural and morphological studies have been carried out by recording X-ray diffraction patterns and an SEM image. The luminescence spectra have been recorded for a comprehensive study of the luminescence properties. The XRD profile reveals a single-phase orthorhombic crystal structure with an average crystallite size of 65 nm, as calculated using the Debye-Scherrer equation. The SEM image exhibits a completely random, irregular morphology of micron-sized particles in the prepared samples. The optimization of luminescence has been carried out by varying the dopant Tb³⁺ concentration within the range of 0.5 to 2.0 mol%. The as-synthesized phosphors exhibit intense emission at 544 nm when pumped at a 478 nm excitation wavelength. The optimal Tb³⁺ concentration has been found to be 1.0 mol% in the present host lattice. The decay curves show bi-exponential fitting for the as-synthesized phosphor. The colorimetric studies show green emission with CIE coordinates (0.334, 0.647) lying in the green region for the optimized Tb³⁺ concentration. This report reveals the potential utility of Tb³⁺ doped calcium aluminozincate phosphors for display and solid-state lighting devices.
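
For reference, the Debye-Scherrer estimate mentioned above is D = Kλ/(β·cosθ); the peak position and FWHM below are illustrative assumptions (Cu Kα radiation, K = 0.9), not the paper's diffraction data.

```python
import numpy as np

def scherrer_size(two_theta_deg: float, fwhm_deg: float,
                  wavelength_nm: float = 0.15406, K: float = 0.9) -> float:
    """Crystallite size D = K*lambda / (beta*cos(theta)), in nm.

    two_theta_deg : peak position 2-theta (degrees)
    fwhm_deg      : peak FWHM beta (degrees), instrument-corrected
    wavelength_nm : X-ray wavelength (Cu K-alpha assumed)
    K             : shape factor (0.9 is the usual assumption)
    """
    theta = np.radians(two_theta_deg / 2.0)
    beta = np.radians(fwhm_deg)
    return K * wavelength_nm / (beta * np.cos(theta))

# Illustrative peak, not the paper's data:
print(f"D = {scherrer_size(31.8, 0.13):.1f} nm")
```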

Keywords: concentration quenching, phosphor, photoluminescence, XRD

Procedia PDF Downloads 143
757 A Case Study of Remote Location Viewing, and Its Significance in Mobile Learning

Authors: James Gallagher, Phillip Benachour

Abstract:

As location-aware mobile technologies become ever more omnipresent, the prospect of exploiting their context awareness to reinforce learning approaches grows. Building on the growing acceptance of ubiquitous computing and the steady progress in both accuracy and battery usage of pervasive devices, we present a case study of remote location viewing and how it can be used to support mobile learning in situ within an existing scenario. Through the case study, we introduce an innovative new application, Mobipeek, based around a request/response protocol for viewing a remote location, and explore how it can apply both to teacher-led activities and to informal learning situations. The system developed allows a user to select a point on a map and send a request. Users can attach messages alongside time and distance constraints. Users within the bounds of the request can respond with an image and an accompanying message, providing context to the response. This application can be used alongside a structured learning activity, such as the use of mobile phone cameras outdoors as part of an interactive lesson. One example of a learning activity would be to collect photos of plants, vegetation, and foliage in the wild as part of a geography or environmental science lesson; another would be to photograph architectural buildings and monuments as part of an architecture course. These images can then be uploaded and displayed back in the classroom for students to share their experiences and compare their findings with their peers. This can help foster students' active participation while helping them understand lessons in a more interesting and effective way. Mobipeek can augment the student learning experience by providing further interaction with peers in a remote location. The activity can be part of a wider study between schools in different areas of the country, enabling sharing and interaction between more participants. Remote location viewing can be used to access images in a specific location, chosen to suit the activity and lesson; for example, architectural buildings of a specific period can be compared between two or more cities. The augmentation of the learning experience is manifested in the different contextual and cultural influences as well as in the sharing of images from different locations. In addition to implementing Mobipeek, we analyse this application and a set of other possible solutions targeted towards making learning more engaging. Consideration is given to the benefits of such a system, privacy concerns, and the feasibility of widespread usage. We also propose elements of 'gamification' in an attempt to further the engagement derived from such a tool and encourage usage. We conclude by identifying limitations from both a technical and a mobile learning perspective.
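A minimal sketch of how the core check of such a request/response protocol might look, assuming hypothetical field names and a great-circle distance test (the abstract does not specify Mobipeek's wire format or bounds logic):

import math
import time
from dataclasses import dataclass

@dataclass
class PeekRequest:
    # Hypothetical fields: the abstract states only that a request carries
    # a map point, a message, and time and distance constraints.
    lat: float
    lon: float
    message: str
    max_distance_m: float  # distance constraint
    expires_at: float      # time constraint, Unix timestamp

def haversine_m(lat1, lon1, lat2, lon2):
    # Great-circle distance in metres between two WGS84 points.
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def can_respond(req: PeekRequest, user_lat: float, user_lon: float) -> bool:
    # A user may respond only while the request is live and they are
    # within the requested distance of the target point.
    in_time = time.time() < req.expires_at
    in_bounds = haversine_m(req.lat, req.lon, user_lat, user_lon) <= req.max_distance_m
    return in_time and in_bounds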

Keywords: context aware, location aware, mobile learning, remote viewing

Procedia PDF Downloads 284
756 Review on Recent Dynamics and Constraints of Affordable Housing Provision in Nigeria: A Case of Growing Economic Precarity

Authors: Ikenna Stephen Ezennia, Sebnem Onal Hoscara

Abstract:

Successive governments in Nigeria are faced with the pressing problem of how to house an ever-expanding urban population, usually low-income earners. The question of housing and affordability presents a complex challenge for these governments, as the commodification of housing links it inextricably to markets and capital flows, placing it at the center of the government's agenda. However, the provision of decent and affordable housing for average Nigerians has remained an illusion, despite the numerous schemes, policies, and programs initiated and carried out by successive governments. This phenomenon has also been observed in many countries of Africa, and it is largely a result of economic unpredictability, lack of housing finance, and insecurity, among other factors peculiar to a struggling economy. This study reviews recent dynamics and factors challenging the provision and development of affordable housing for the low-income urban populace of Nigeria. The aim of the study is thus to present a comprehensive approach to understanding recent trends in the provision of affordable housing for Nigerians. The approach is based on a new paradigm of research, transdisciplinarity: a form of inquiry that crosses the boundaries of different disciplines. The review therefore takes a retrospective look at the various housing development programs, schemes, and policies of successive governments of Nigeria within the last few decades and examines recent efforts geared towards eradicating the problems of housing delivery. Sources of data included relevant English-language articles and the results of a literature search of Elsevier ScienceDirect, ISI Web of Knowledge, ProQuest Central, Scopus, and Google Scholar. The findings reveal that factors such as rapid urbanization, inadequate planning and land use control, lack of adequate and favorable finance, high prices of land, high prices of building materials, harassment of developers by youths and touts, poor urban infrastructure, multiple taxation, and risk sharing are the major hindrances to adequate housing delivery. The results show that the majority of Nigeria's affordable housing schemes, programs, and policies are in most cases poorly implemented and abandoned without proper coordination. Consequently, the study concludes that the affordable housing delivery strategies in Nigeria are an epitome of lip-service politics by successive governments, and that the current trend of leaving housing provision to the vagaries of market forces cannot be expected to support affordable housing, especially for the low-income urban populace.

Keywords: affordable housing, housing delivery, national housing policy, urban poor

Procedia PDF Downloads 208
755 Pump-as-Turbine: Testing and Characterization as an Energy Recovery Device, for Use within the Water Distribution Network

Authors: T. Lydon, A. McNabola, P. Coughlan

Abstract:

Energy consumption in the water distribution network (WDN) is a well-established problem, with the industry contributing heavily to carbon emissions: 0.9 kg of CO2 is emitted per m³ of water supplied. It is indicated that 85% of the energy wasted in the WDN can be recovered by installing turbines. The existing potential in networks lies at small-capacity sites (5-10 kW) that are numerous and dispersed across networks. However, traditional turbine technology cannot be scaled down to this size in an economically viable fashion, so alternative approaches are needed. This research aims to realise the energy recovery potential within the WDN by exploring pumps-as-turbines (PATs). PATs are estimated to be ten times cheaper than traditional micro-hydro turbines, and so have the potential to contribute to an economically viable solution. However, a number of technical constraints currently prohibit their widespread use, including the inability of a PAT to control pressure, difficulty in selecting PATs due to a lack of performance data, and a lack of understanding of how PATs can cater for fluctuations as extreme as +/- 50% of the average daily flow, which are characteristic of the WDN. A PAT prototype is undergoing testing in order to identify the capabilities of the technology. Results of preliminary testing, which involved measuring the efficiency and power potential of the PAT under varying flow and pressure conditions in order to develop characteristic and efficiency curves and a baseline understanding of the technology's capabilities, are presented here:
• The limitations of existing selection methods, which convert the best efficiency point (BEP) from pump operation to the BEP in turbine operation, were highlighted by the failure of such methods to reflect the conditions of maximum efficiency of the PAT. A generalised selection method for the WDN may need to be informed by an understanding of the impact of flow variations and pressure control on system power potential, capital cost, maintenance costs, and payback period.
• A clear relationship between flow and the efficiency of the PAT has been established. The rate of efficiency reduction for flows +/- 50% of the BEP is significant, and more extreme for deviations in flow above the BEP than below, but not dissimilar to the efficiency behaviour of other turbines.
• A PAT alone is not sufficient to regulate pressure, yet the relationship of pressure across the PAT is foundational in exploring ways in which PAT energy recovery systems can maintain the required pressure level within the WDN. The efficiencies of PAT energy recovery systems operating under pressure-regulating conditions, which have been conceptualised in the current literature, still need to be established.
Initial results guide the focus of forthcoming testing and exploration of PAT technology towards how PATs can form part of an efficient energy recovery system.
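To make the selection problem in the first bullet concrete, the following minimal sketch applies one widely cited pump-to-turbine BEP conversion heuristic (Sharma's correlation) together with the standard hydraulic power formula. The correlation and the numerical values are illustrative assumptions; the abstract does not state which selection methods were tested:

RHO = 1000.0  # density of water, kg/m^3
G = 9.81      # gravitational acceleration, m/s^2

def pump_bep_to_turbine_bep(q_pump, h_pump, eta_pump):
    """Estimate turbine-mode BEP from pump-mode BEP via Sharma's
    correlation, one common heuristic whose accuracy varies by machine."""
    q_turbine = q_pump / eta_pump ** 0.8  # flow at turbine BEP, m^3/s
    h_turbine = h_pump / eta_pump ** 1.2  # head at turbine BEP, m
    return q_turbine, h_turbine

def recovered_power_kw(q, h, eta):
    """Power recovered from flow q (m^3/s) across head h (m) at overall
    efficiency eta: P = rho * g * q * h * eta."""
    return RHO * G * q * h * eta / 1000.0

# Example: a pump with its BEP at 20 l/s, 15 m head, 65% efficiency.
q_t, h_t = pump_bep_to_turbine_bep(0.020, 15.0, 0.65)
print(f"turbine BEP approx. {q_t * 1000:.1f} l/s at {h_t:.1f} m")
print(f"recoverable power approx. {recovered_power_kw(q_t, h_t, 0.65):.2f} kW")

With these assumed figures the recoverable power is of the order of a few kilowatts, within the 5-10 kW site range mentioned above; the abstract's point is precisely that such correlations failed to match the measured conditions of maximum efficiency, which is why measured characteristic curves are needed.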

Keywords: energy recovery, pump-as-turbine, water distribution network

Procedia PDF Downloads 250
754 Reading Strategies of Generation X and Y: A Survey on Learners' Skills and Preferences

Authors: Kateriina Rannula, Elle Sõrmus, Siret Piirsalu

Abstract:

The mixed-generation classroom is a phenomenon that current higher education establishments are faced with daily as they try to meet the needs of a modern labor market with its emphasis on lifelong learning and retraining. Having representatives of mainly generations X and Y acquiring higher education in one classroom is a challenge to lecturers, considering all the characteristics that distinguish one generation from another. The importance of outlining different strategies and considering the needs of the students lies in the necessity for everyone to acquire the maximum of the provided knowledge, as well as to understand each other, to study together in one classroom, and to cooperate successfully in future workplaces. In addition to different generations, there are also learners with different native languages, which has an impact on reading and understanding texts in third languages, including possible translation. The current research aims to investigate, describe, and compare reading strategies among the representatives of generations X and Y. The hypothesis was that representatives of generations X and Y use different reading strategies, and that strategies also differ between first- and third-year students of these generations. The current study is an empirical, qualitative study. To achieve the aim of the research, relevant literature was analyzed and a semi-structured questionnaire was conducted among the first- and third-year students of Tallinn Health Care College. The questionnaire consisted of 25 statements on text reading strategies, 3 multiple-choice questions on preferences regarding the design and medium of the text, and three open questions on the translation process when working with a text in the student's third language. The results of the questionnaire were categorized, analyzed, and compared. Both generation X and generation Y respondents described their reading strategies as 'scanning' and 'surfing'. Compared to generation X, first-year generation Y learners valued interactivity and nonlinear texts. Students frequently used the strategies of skimming, scanning, translating, and highlighting, together with relevant-thinking and assistance-seeking. Meanwhile, the third-year generation Y students no longer frequently used translating, resourcing, and highlighting, while generation X learners still incorporated these strategies. Knowing about the different needs of the generations currently in classrooms and on the labor market equips us with tools to provide sustainable education and grants society a workforce that is more flexible and able to move between professions. Future research should be conducted to investigate the amount of learning and strategy adoption between generations. As for reading, the main suggestions arising from the research are as follows: make a variety of materials available to students; allow them to select what they want to read; and try to make those materials visually attractive, relevant, and appropriately challenging for learners, considering the differences between generations.

Keywords: generation X, generation Y, learning strategies, reading strategies

Procedia PDF Downloads 175
753 Computational Homogenization of Thin Walled Structures: On the Influence of the Global vs Local Applied Plane Stress Condition

Authors: M. Beusink, E. W. C. Coenen

Abstract:

The increased application of novel structural materials, such as high-grade asphalt, concrete, and laminated composites, has sparked the need for a better understanding of the often complex, non-linear mechanical behavior of such materials. The effective macroscopic mechanical response is generally dependent on the applied load path. Moreover, it is also significantly influenced by the microstructure of the material, e.g. embedded fibers, voids, and/or grain morphology. At present, multiscale techniques are widely adopted to assess micro-macro interactions in a numerically efficient way. Computational homogenization techniques have been successfully applied over a wide range of engineering cases, e.g. cases involving first-order and second-order continua, thin shells, and cohesive zone models. Most of these homogenization methods rely on Representative Volume Elements (RVEs), which model the relevant microstructural details in a confined volume. Imposed through kinematical constraints or boundary conditions, an RVE can be subjected to a microscopic load sequence. This provides the RVE's effective stress-strain response, which can serve as constitutive input for macroscale analyses. Simultaneously, such a study of an RVE gives insight into fine-scale phenomena such as microstructural damage and its evolution. It has been reported by several authors that the type of boundary conditions applied to the RVE affects the resulting homogenized stress-strain response. As a consequence, dedicated boundary conditions have been proposed to deal appropriately with this concern. For the specific case of a planar assumption for the analyzed structure, e.g. plane strain, axisymmetric or plane stress, this assumption needs to be addressed consistently at all considered scales. Although a planar condition has been employed in many multiscale studies, its impact on the multiscale solution has not been explicitly investigated. This work therefore focuses on the influence of the planar assumption on multiscale modeling. In particular, the plane stress case is highlighted by proposing three different implementation strategies that are compatible with a first-order computational homogenization framework. The first method consists of applying classical plane stress theory at the microscale, whereas with the second method a generalized plane stress condition is assumed at the RVE level. For the third method, the plane stress condition is applied at the macroscale by requiring that the resulting macroscopic out-of-plane forces are equal to zero. These strategies are assessed through a numerical study of a thin-walled structure, and the resulting effective macroscale stress-strain responses are compared. It is shown that there is a clear influence of the length scale at which the planar condition is applied.
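The contrast between imposing plane stress at the microscale and at the macroscale can be stated compactly. In standard first-order computational homogenization (the symbols below are ours, chosen for illustration), the macroscopic stress is the volume average of the microscopic stress over the RVE domain \Omega:

\[ \bar{\sigma}_{ij} = \frac{1}{|\Omega|} \int_{\Omega} \sigma_{ij}(\mathbf{x}) \, \mathrm{d}\Omega . \]

Classical plane stress at the microscale then requires \sigma_{i3}(\mathbf{x}) = 0 pointwise for every \mathbf{x} \in \Omega, whereas the macroscale variant only enforces the weaker averaged condition \bar{\sigma}_{i3} = 0, under which the microscopic out-of-plane stresses may be nonzero as long as they cancel in the volume average. This difference between a pointwise and an averaged constraint is one way to see why the length scale at which the planar condition is applied can influence the homogenized response.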

Keywords: first-order computational homogenization, planar analysis, multiscale, microstructures

Procedia PDF Downloads 228