Search results for: efficient score function
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 11271

981 Prospects and Challenges of Sports Culture in India: A Case Study of Gujarat

Authors: Jay Raval

Abstract:

Sports and physical fitness have been a vital component of our civilization. Sport is a force that motivates and inspires individuals, communities, and even countries to attend to their physical and mental health. Although sports play a vital role in the overall development of a nation, in developing countries such as India this sporting culture is yet to be fostered. The lack of a sporting culture in India has held back the growth of a sports industry in the past, despite growing awareness of and interest in various sports besides cricket. As a consequence, corporate investment in Indian sports has traditionally been limited to non-profit corporate social responsibility activities and initiatives. In the past few years, India has launched new initiatives such as the Indian Premier League (cricket), the Hockey India League, the Indian Badminton League, the Pro Kabaddi League, and the Indian Super League (football), which help to boost Indian sports culture and thereby grow the country's economy. Among the 29 states of India, Gujarat is showing a very rapid increase in sports participation. Khel Mahakumbh, a competition conducted for the last six years, has been a giant step in this direction and covers both rural and urban areas of Gujarat. The objective of the research is to address the overall development of the sports system, which includes infrastructure, coaches, resources, and participants. The current system is not disabled-friendly. This research paper highlights adequate steps to improve the sports system and resolve its pressing issues. The education system in Gujarat is highly academic-centric, with a definite trend towards reducing school sports and extracurricular sports. 
This research work attempts to evaluate the framework of the Olympic Charter, the Sports Authority of India, the Indian Olympic Association, and the National Sports Federations. It explores the areas that need to be revamped, rejuvenated, and reoriented to function in an open, democratic, equitable, transparent, and accountable manner. The research is based on a mixed-methods approach: data collection includes personal interviews, document analysis, and news articles. Quality assurance is addressed by assessing the trustworthiness of the paper. The mixed-methods design strengthens the analysis and provides a strong base for the discussion.

Keywords: physical development, sports authority of India, sports policy, women empowerment

Procedia PDF Downloads 135
980 The Significance of Urban Space in Death Trilogy of Alejandro González Iñárritu

Authors: Marta Kaprzyk

Abstract:

The cinema of Alejandro González Iñárritu has not yet been subjected to much detailed analysis, which makes it exceptionally interesting research material. The purpose of this presentation is to discuss the significance of urban space in the three films of this Mexican director that form the Death Trilogy: ‘Amores Perros’ (2000), ‘21 Grams’ (2003), and ‘Babel’ (2006). The fact that in these films the urban space itself becomes an additional protagonist, with its own identity, psychology, and ability to transform and affect other characters, in itself warrants independent research and analysis. Independently, this mode of presenting urban space has another function: it enables the director to complement the rest of the characters. The methodological basis for this description of cinematographic space is to treat its visual layer as a point of departure for detailed analysis. At the same time, the analysis is supported by recognised academic theories concerning spatial issues, which are transformed here into essential tools for describing the world (mise-en-scène) created by González Iñárritu. In ‘Amores Perros’, Mexico City serves as the scenery, a place full of contradictions, depicted as a modern conglomerate and an urban jungle as well as a labyrinth of poverty and violence. In this work, stylistic tropes can be found in an intertextual dialogue of the director with the photographs of Nan Goldin and Mary Ellen Mark. The story recounted in ‘21 Grams’, the most tragic piece of the trilogy, is characterised by an almost hyperrealistic sadism. It takes place in Memphis, which on screen turns into an impersonal formation full of the heterotopias described by Michel Foucault and the non-places defined by Marc Augé in his essay. 
By contrast, the main urban space in ‘Babel’ is Tokyo, which seems to correspond perfectly with the image of places discussed by Juhani Pallasmaa in his works on the reception of architecture through ‘pathological senses’ in the modern (or, more adequately, postmodern) world. It is portrayed as a city full of buildings that look so surreal that they seem completely unsuitable for humans to move between. Ultimately, the aim of this paper is to demonstrate the coherence of the manner in which González Iñárritu designs urban spaces in his Death Trilogy. In particular, the author examines the imperative role of the cities that form the three specific microcosms in which the protagonists of the Mexican director live out their overwhelming tragedies.

Keywords: cinematographic space, Death Trilogy, film studies, González Iñárritu Alejandro, urban space

Procedia PDF Downloads 325
979 Ultra-Tightly Coupled GNSS/INS Based on High Degree Cubature Kalman Filtering

Authors: Hamza Benzerrouk, Alexander Nebylov

Abstract:

In classical GNSS/INS integration designs, the loosely coupled approach uses the GNSS-derived position and velocity as the measurement vector. This design is suboptimal from the standpoint of withstanding GNSS outliers and outages. The tightly coupled GNSS/INS navigation filter mixes the GNSS pseudoranges and the inertial measurements and obtains the vehicle navigation state as the final navigation solution. The ultra-tightly coupled GNSS/INS design combines the I (in-phase) and Q (quadrature) accumulator outputs of the GNSS receiver signal tracking loops and the INS navigation filter function into a single Kalman filter variant (EKF, UKF, SPKF, CKF, or HCKF). The EKF and the UKF are the most widely used nonlinear filters in the literature and are well adapted to inertial navigation state estimation when integrated with GNSS signal outputs. In this paper, it is proposed to move a step forward with more accurate filters and modern approaches, the Cubature and High Degree Cubature Kalman Filtering methods. Building on previous results on state estimation for INS/GNSS integration, the Cubature Kalman Filter (CKF) and the High Degree Cubature Kalman Filter (HCKF) are the references for the recently developed Generalized Cubature rule based Kalman Filter (GCKF). High degree cubature rules are the kernel of the new solution, offering more accurate estimation with less computational complexity than the Gauss-Hermite Quadrature Kalman Filter (GHQKF), which is not selected in this work because of its limited real-time applicability in high-dimensional state spaces. In the ultra-tightly (deeply) coupled GNSS/INS system dynamics, an EKF with transition matrix factorization is used together with GNSS block processing, which is described in the paper; the intermediate frequency (IF) is assumed available, using correlator samples at a rate of 500 Hz in the presented approach. 
GNSS (GPS+GLONASS) measurements are assumed available, and modern SPKF variants together with the Cubature Kalman Filter (CKF) are compared with new versions of the CKF, called high order CKFs, based on spherical-radial cubature rules developed here at the fifth order. The estimation accuracy of the high degree CKF is expected to be comparable to that of the GHKF; the state estimation results are observed and discussed for different initialization parameters. The results show more accurate navigation state estimation and a more robust GNSS receiver when the ultra-tightly coupled approach based on the High Degree Cubature Kalman Filter is applied.
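The third-degree spherical-radial cubature rule underlying the standard CKF can be sketched briefly. The snippet below is a minimal illustration of cubature point generation and mean propagation, not the paper's fifth-order rule or its full GNSS/INS filter; the toy transition function `f` is a hypothetical example.

```python
import numpy as np

def cubature_points(mean, cov):
    """Third-degree spherical-radial cubature points: 2n equally
    weighted points at mean +/- sqrt(n) * (Cholesky factor columns).
    High-degree (e.g. fifth-order) rules use larger, differently
    weighted point sets."""
    n = mean.size
    S = np.linalg.cholesky(cov)                               # matrix square root
    unit = np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])    # n x 2n directions
    return mean[:, None] + S @ unit                           # n x 2n point set

def ckf_predict_mean(f, mean, cov):
    """Propagate each cubature point through the nonlinearity f
    and average the results (the CKF time-update mean)."""
    pts = cubature_points(mean, cov)
    return np.mean([f(pts[:, i]) for i in range(pts.shape[1])], axis=0)

# Hypothetical 2-D state transition, just to exercise the rule
mu = np.array([1.0, 0.0])
P = 0.1 * np.eye(2)
f = lambda x: np.array([x[0] + 0.1 * x[1], 0.9 * x[1]])
print(ckf_predict_mean(f, mu, P))
```

For a linear `f`, as here, the cubature mean coincides with `f(mu)` exactly, which makes the rule easy to sanity-check before applying it to genuinely nonlinear GNSS/INS dynamics.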

Keywords: GNSS, INS, Kalman filtering, ultra tight integration

Procedia PDF Downloads 276
978 Feedback from a Service Evaluation of a Modified Intrauterine Device Inserter: A First Step towards a Change in the Standard IUD Insertion Procedure

Authors: Desjardin, Michaels, Martinez, Ulmann

Abstract:

The copper IUD is one of the most efficient and cost-effective contraceptive methods. However, pain at insertion hampers its use. This is especially unfortunate in nulliparous women, often younger, who are excellent candidates for this contraception, including for emergency contraception. The standard insertion procedure for a copper IUD usually involves measuring the uterine cavity with a hysterometer and using a tenaculum to facilitate device insertion. Both steps cause pain, which often limits uptake of the method. To overcome these issues, we have developed a modified inserter combined with a copper IUD. The singular design of the inserter includes a flexible inflatable-membrane technology allowing easy access to the uterine cavity even in cases of abnormal uterine position or a narrow cervical canal. Moreover, this inserter makes direct IUD insertion possible, with no hysterometry and no need for a tenaculum. To assess device effectiveness and patient-reported pain, a study was conducted at two clinics in France with 31 individuals who wanted to use a copper IUD as a contraceptive method. The IUD insertions were performed by four healthcare providers. The operators completed a questionnaire evaluating the effectiveness of the procedure (including correct fundal placement of the IUD and other usability questions) as well as their satisfaction. Patients also completed a questionnaire, and pain during the procedure was measured on a 10-cm Visual Analogue Scale (VAS). Analysis of the questionnaires indicates that correct IUD placement was achieved in more than 93% of women, which is a standard efficacy rate. It also shows that IUD insertion resulted in no, light, or moderate pain, predominantly in nulliparous women. No insertion resulted in severe pain (none above 6 cm on the 10-cm VAS). This translated into a high level of satisfaction among both patients and practitioners. 
In addition, this modified inserter allowed a simplification of the insertion procedure: correct fundal placement was ensured with no need for hysterometry prior to insertion (100%) nor for a cervical tenaculum to pull on the cervix (90%). Avoiding both steps contributed to the decrease in pain during insertion. Taken together, the results of the study demonstrate that this device constitutes a significant advance in the use of copper IUDs for any woman. It allows a simpler insertion procedure, with no need for pre-insertion hysterometry or traction on the cervix with a tenaculum. The increased comfort during insertion should allow wider use of the method in nulliparous women and for emergency contraception. Moreover, pain is often underestimated by practitioners, yet fear of pain is clearly one of the blocking factors, as indicated by the questionnaire analysis. This evaluation brings interesting information on the use of this modified inserter for a standard copper IUD and promising perspectives for changing the standard IUD insertion procedure.

Keywords: contraception, IUD, innovation, pain

Procedia PDF Downloads 76
977 Effect of Irrigation and Hydrogel on the Water Use Efficiency of a Zero-Tilled Green-Gram Relay System in the Eastern Indo-Gangetic Plain

Authors: Benukar Biswas, S. Banerjee, P. K. Bandhyopadhyaya, S. K. Patra, S. Sarkar

Abstract:

Jute can be sown as a relay crop between the lines of 15-20-day-old green gram for an additional pulse yield without reducing the jute yield. The main problem of this system is water use efficiency (WUE). An increase in water productivity and a reduction in production cost have been reported for zero-tilled crops. Hydrogel can hold water up to 400 times its weight and can release 95% of the retained water. The present field study was carried out during 2015-16 at BCKV (tropical sub-humid, 1560 mm annual rainfall, 22°58′ N, 88°51′ E, 9.75 m AMSL, sandy loam soil, Aeric Haplaquept, pH 6.75, organic carbon 5.4 g kg-1, available N 85 kg ha-1, P2O5 15.3 kg ha-1, and K2O 40 kg ha-1) with four levels of irrigation regime: no irrigation (RF) and irrigation at cumulative pan evaporation of 250 mm (CPE250), 125 mm (CPE125), and 83 mm (CPE83); and three levels of hydrogel: no hydrogel (H0), 2.5 kg ha-1 (H2.5), and 5 kg ha-1 (H5). Throughout the crop-growing period, a positive linear relationship held between the leaf area index (LAI) and the evapotranspiration rate. The strength of the relationship between ETa and LAI increased and peaked at 7 weeks after sowing (WAS) (R2 = 0.78), when the green gram was at maturity and both crops covered nearly the entire base area. The relationship started weakening from 13 WAS due to jute leaf shading. A linear relationship between system yield and ET was also obtained in the present study: 75% of the variation in system yield could be predicted by ET alone. Effective rainfall decreased with increasing irrigation frequency because of the enhanced water supply, in contrast to hydrogel application, owing to the difference in water storage capacity. Irrigation was the major source of variability in ET. Higher irrigation frequency resulted in higher ET, ranging from 574 mm in RF to 764 mm in CPE83. Hydrogel application also increased water storage on a sustained basis and supplied it to the crops, raising ET from 639 mm in H0 to 671 mm in H5. 
WUE ranged between 0.4 kg m-3 (RF) and 0.63 kg m-3 (CPE83 H5). WUE increased with the application of irrigation water, from 0.42 kg m-3 in RF to 0.57 kg m-3 in CPE83. Hydrogel application significantly improved the WUE, from 0.45 kg m-3 in H0 to 0.50 kg m-3 in H2.5 and 0.54 kg m-3 in H5. Under a relatively dry root zone (RF), both evaporation and transpiration remained at suboptimal levels, resulting in lower ET as well as lower system yield. The green gram-jute relay system can be water-use-efficient, giving 38% higher yield with hydrogel applied at 2.5 kg ha-1 under the deficit irrigation regime of CPE125 than the rainfed system without the gel. The gel conditioner improved water storage, checked excess water loss from the system, and met the ET demand of the relay system for a longer time. Hence, irrigation frequency was reduced from five applications at CPE83 to only three at CPE125.
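The WUE figures above follow directly from the yield-to-water ratio, since 1 mm of evapotranspiration over 1 ha corresponds to 10 m³ of water. A minimal sketch of that conversion (the yield value below is a hypothetical illustration, not a figure from the study):

```python
def wue(yield_kg_ha: float, et_mm: float) -> float:
    """Water use efficiency (kg per m^3 of water).
    1 mm of ET over 1 ha = 10 m^3, so WUE = yield / (ET_mm * 10)."""
    return yield_kg_ha / (et_mm * 10.0)

# e.g. a hypothetical system yield of 2400 kg/ha with the 574 mm
# seasonal ET reported for the rainfed (RF) regime
print(round(wue(2400, 574), 2))  # -> 0.42, matching the reported RF WUE
```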

Keywords: zero tillage, deficit irrigation, hydrogel, relay system

Procedia PDF Downloads 229
976 Data-Driven Surrogate Models for Damage Prediction of Steel Liquid Storage Tanks under Seismic Hazard

Authors: Laura Micheli, Majd Hijazi, Mahmoud Faytarouni

Abstract:

The damage reported at oil and gas industrial facilities has revealed the acute vulnerability of steel liquid storage tanks to seismic events. The failure of steel storage tanks may yield devastating and long-lasting consequences for the built and natural environments, including the release of hazardous substances, uncontrolled fires, and soil contamination with hazardous materials. It is, therefore, fundamental to reliably predict the damage that steel liquid storage tanks are likely to experience under future seismic hazard events. The seismic performance of steel liquid storage tanks is usually assessed using vulnerability curves obtained from numerical simulation of a tank under different hazard scenarios. However, the computational demand of high-fidelity numerical simulation models, such as finite element models, makes the vulnerability assessment of liquid storage tanks time-consuming and often impractical. As a solution, this paper presents a surrogate model-based strategy for predicting seismic-induced damage in steel liquid storage tanks, in which the surrogate model is leveraged to reduce the computational demand of time-consuming numerical simulations. To create the data set for training the surrogate model, field damage data are collected from past earthquake reconnaissance surveys and reports. Features representative of steel liquid storage tank characteristics (e.g., diameter, height, liquid level, yield stress) and seismic excitation parameters (e.g., peak ground acceleration, magnitude) are extracted from the field damage data. The collected data are then used to train a data-driven surrogate model that maps the relationship between tank characteristics, seismic hazard parameters, and seismic-induced damage. Different surrogate algorithms, including naïve Bayes, k-nearest neighbors, decision tree, and random forest, are investigated, and their accuracy is reported. 
The model that yields the most accurate predictions is employed to predict future damage as a function of tank characteristics and seismic hazard intensity level. Results show that the proposed approach can be used to estimate the extent of damage in steel liquid storage tanks, where the use of data-driven surrogates represents a viable alternative to computationally expensive numerical simulation models.
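The surrogate-modeling idea can be sketched in a few lines. The snippet below assumes scikit-learn is available and uses synthetic features and a toy damage rule as stand-ins for the reconnaissance data described above; it is an illustration of the workflow, not the paper's trained model.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.uniform(5, 60, n),     # tank diameter (m)       - synthetic
    rng.uniform(5, 20, n),     # tank height (m)         - synthetic
    rng.uniform(0.1, 1.0, n),  # liquid fill fraction    - synthetic
    rng.uniform(0.05, 1.2, n), # peak ground accel. (g)  - synthetic
])
# Toy labeling rule: fuller tanks under stronger shaking -> damaged
damage = (X[:, 2] * X[:, 3] > 0.35).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, damage, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {model.score(X_te, y_te):.2f}")
```

Once trained, `model.predict` gives a near-instant damage estimate for a new tank/hazard combination, which is the speed advantage over rerunning a finite element simulation.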

Keywords: damage prediction , data-driven model, seismic performance, steel liquid storage tanks, surrogate model

Procedia PDF Downloads 141
975 Assessment of the Landscape Biodiversity in the National Park of Tlemcen (Algeria) Using Per-Object Analysis of Landsat Imagery

Authors: Bencherif Kada

Abstract:

In forest management practice, the landscape and the Mediterranean forest are never posed as linked objects. Yet sustainable forestry requires the valorization of the forest landscape, and this aim involves assessing the spatial distribution of biodiversity by mapping forest landscape units and subunits and by monitoring environmental trends. This contribution aims to highlight, through object-oriented classifications, the landscape biodiversity of the National Park of Tlemcen (Algeria). The methodology is based on ground data and on the basic processing units of object-oriented classification: segments, so-called image objects, representing relatively homogeneous units on the ground. The classification of Landsat Enhanced Thematic Mapper Plus (ETM+) imagery is performed on image objects, not on pixels. The advantages of object-oriented classification are the full use of meaningful statistics and texture calculations, uncorrelated shape information (e.g., length-to-width ratio, direction, and area of an object), topological features (neighbor, super-object, etc.), and the close relation between real-world objects and image objects. The results show that per-object classification using the k-nearest neighbors method is more efficient than per-pixel classification. It simplifies the content of the image while preserving spectrally and spatially homogeneous land cover types such as Aleppo pine stands; cork oak groves; mixed groves of cork oak, holm oak, and zen oak; mixed groves of holm oak and thuja; water bodies; dense and open oak shrublands; vegetable crops or orchards; herbaceous plants; and bare soils. Texture attributes seem to provide no useful information, while the spatial attributes of shape and compactness perform well for all the dominant features, such as pure stands of Aleppo pine and/or cork oak and bare soils. Landscape subunits are individualized while conserving the spatial information. 
Continuous dense stands dominant over a large area were grouped into a single class, as were fragmented dense stands mixed with clear stands. Low shrubland formations and high wooded shrublands are well individualized, though with some confusion with enclaves for the former. Overall, a visual evaluation shows that the classification reflects the actual spatial state of the study area at the landscape level.
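The per-object k-NN step described above amounts to describing each segment by its statistics (mean spectral values plus shape attributes such as the length-to-width ratio) and labelling it with the majority class of its nearest training objects. A minimal sketch, assuming scikit-learn and using synthetic segment statistics in place of real Landsat ETM+ objects:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
# Per-object features: mean NIR, mean red, length-to-width ratio (synthetic)
dense_stand = np.column_stack([rng.normal(0.45, 0.03, 40),
                               rng.normal(0.05, 0.01, 40),
                               rng.normal(1.5, 0.3, 40)])
bare_soil = np.column_stack([rng.normal(0.25, 0.03, 40),
                             rng.normal(0.20, 0.02, 40),
                             rng.normal(2.5, 0.5, 40)])
X = np.vstack([dense_stand, bare_soil])
y = np.array([0] * 40 + [1] * 40)   # 0 = dense stand, 1 = bare soil

# Classify each unlabeled image object by its 5 nearest training objects
knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)
new_object = np.array([[0.44, 0.06, 1.4]])  # spectrally close to dense stands
print(knn.predict(new_object))              # prints [0] (dense stand)
```

Working at the object level means one feature vector per segment rather than per pixel, which is what keeps the classified map spatially coherent.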

Keywords: forest, oaks, remote sensing, diversity, shrublands

Procedia PDF Downloads 114
974 Model-Driven and Data-Driven Approaches for Crop Yield Prediction: Analysis and Comparison

Authors: Xiangtuo Chen, Paul-Henry Cournéde

Abstract:

Crop yield prediction is a paramount issue in agriculture. The main idea of this paper is to find an efficient way to predict corn yield based on meteorological records. The prediction models used in this paper can be classified into model-driven and data-driven approaches, according to their modeling methodologies. The model-driven approaches are based on mechanistic crop modeling: they describe crop growth in interaction with the environment as a dynamical system. However, the calibration of the dynamical system is difficult, because it turns out to be a multidimensional non-convex optimization problem. An original contribution of this paper is to propose a statistical methodology, Multi-Scenarios Parameters Estimation (MSPE), for the parametrization of potentially complex mechanistic models from a new type of dataset (climatic data and final yields in many situations). It is tested with CORNFLO, a crop model for maize growth. On the other hand, the data-driven approach to yield prediction is free of the complex biophysical process but has strict requirements on the dataset. A second contribution of the paper is the comparison of this model-driven method with classical data-driven methods. For this purpose, we consider two classes of regression methods: methods derived from linear regression (ridge and lasso regression, principal components regression, and partial least squares regression) and machine learning methods (random forest, k-nearest neighbors, artificial neural networks, and SVM regression). The dataset consists of 720 records of county-scale corn yield provided by the United States Department of Agriculture (USDA) and the associated climatic data. A 5-fold cross-validation process and two accuracy metrics, the root mean square error of prediction (RMSEP) and the mean absolute error of prediction (MAEP), were used to evaluate the crop prediction capacity. 
The results show that among the data-driven approaches, random forest is the most robust and generally achieves the best prediction error (MAEP 4.27%). It also outperforms our model-driven approach (MAEP 6.11%). However, the ability to calibrate the mechanistic model from easily accessible datasets offers several complementary perspectives: the mechanistic model can potentially help to identify the stresses suffered by the crop or the biological parameters of interest for breeding purposes. For this reason, an interesting perspective is to combine the two types of approaches.
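The evaluation protocol above (5-fold cross-validation of a random forest, scored with RMSEP and MAEP) can be sketched as follows. This assumes scikit-learn; the data are synthetic stand-ins for the USDA yield records and climate covariates.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))   # e.g. temperature/rainfall summaries (synthetic)
y = 8.0 + X @ rng.normal(size=6) + rng.normal(scale=0.5, size=200)  # toy yields

rmsep, maep = [], []
for tr, te in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X[tr], y[tr])
    err = model.predict(X[te]) - y[te]
    rmsep.append(np.sqrt(np.mean(err ** 2)))  # root mean square error of prediction
    maep.append(np.mean(np.abs(err)))         # mean absolute error of prediction
print(f"RMSEP={np.mean(rmsep):.2f}  MAEP={np.mean(maep):.2f}")
```

Averaging the per-fold scores, as done here, is the usual way to report a single cross-validated error figure; swapping in ridge, lasso, k-NN, or SVM regressors requires only changing the `model` line.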

Keywords: crop yield prediction, crop model, sensitivity analysis, parameter estimation, particle swarm optimization, random forest

Procedia PDF Downloads 226
973 Successful Excision of Lower Lip Mucocele Using 2780 nm Er,Cr:YSGG Laser

Authors: Lubna M. Al-Otaibi

Abstract:

Mucocele is a common benign lesion of the oral cavity, the most common after fibroma. The lesion develops as a result of retention or extravasation of mucous material from the minor salivary glands. The extravasation type of mucocele results from trauma and mostly occurs in the lower lip of young patients. The various treatment options available for mucocele are associated with a relatively high incidence of recurrence, making surgical intervention necessary for a permanent cure. The conventional surgical procedure, however, arouses apprehension in the patient and is associated with bleeding and postoperative pain. Recently, laser treatment of mucocele has become a viable option. Various types of lasers are being used and are preferable to the conventional surgical procedure as they provide good hemostasis, reduced postoperative swelling and pain, a reduced bacterial population, less need for suturing, faster healing, and low recurrence rates. Er,Cr:YSGG is a solid-state laser with great affinity for the water molecule. Its hydrokinetic cutting action allows it to work effectively on hydrated tissues without thermal damage. To date, however, only a few studies have reported its use in the removal of lip mucoceles, especially in children. In this case, a 6-year-old female patient with a history of trauma to the lower lip presented with a soft, sessile, whitish-bluish 4 mm papule. The lesion had been present for approximately four months and fluctuated in size. The child had developed a habit of biting the lesion, causing injury, bleeding, and discomfort. Surgical excision under local anaesthesia was performed using a 2780 nm Er,Cr:YSGG laser (WaterLase iPlus, Irvine, CA) with a Gold handpiece and MZ6 tip (3.5 W, 50 Hz, 20% H2O, 20% air, S mode). The tip was first applied in contact mode with a focused beam, using the Circumferential Incision Technique (CIT), to excise the tissue, followed by removal of the underlying causative minor salivary gland. 
Bleeding was stopped using the laser dry bandage setting (0.5 W, 50 Hz, 1% H2O, 20% air, S mode), and no suturing was needed. Safety goggles were worn, and high-speed suction was used for smoke evacuation. Mucocele excision using the 2780 nm Er,Cr:YSGG laser was rapid and easy to perform with excellent precision, and it allowed histopathological examination of the excised tissue. The patient was comfortable, there was minimal bleeding, and there were no sutures, postoperative pain, scarring, or recurrence. Laser-assisted mucocele excision appears to be efficient and reliable in young patients and should be considered as an alternative to conventional surgical and non-surgical techniques.

Keywords: erbium, excision, laser, lip, mucocele

Procedia PDF Downloads 229
972 Application of Principal Component Analysis and Ordered Logit Model in Diabetic Kidney Disease Progression in People with Type 2 Diabetes

Authors: Mequanent Wale Mekonen, Edoardo Otranto, Angela Alibrandi

Abstract:

Diabetic kidney disease is one of the main microvascular complications of diabetes. Several clinical and biochemical variables are reported to be associated with diabetic kidney disease in people with type 2 diabetes. However, their interrelations could distort the effect estimates of these variables on the disease's progression. The objective of the study is to determine, through advanced statistical methods, how the biochemical and clinical variables in people with type 2 diabetes are interrelated and what their effects on kidney disease progression are. First, principal component analysis was used to explore how the biochemical and clinical variables intercorrelate, which allowed us to reduce a set of correlated biochemical variables to a smaller number of uncorrelated variables. Then, ordered logit regression models (cumulative, stage, and adjacent) were employed to assess the effect of the biochemical and clinical variables on the ordinal response variable (progression of kidney function), considering the proportionality assumption for more robust effect estimation. This retrospective cross-sectional study retrieved data from a type 2 diabetes cohort at a polyclinic hospital of the University of Messina, Italy. The principal component analysis yielded three uncorrelated components: principal component 1, with negative loadings of glycosylated haemoglobin, glycemia, and creatinine; principal component 2, with negative loadings of total cholesterol and low-density lipoprotein; and principal component 3, with a negative loading of high-density lipoprotein and a positive loading of triglycerides. The ordered logit models (cumulative, stage, and adjacent) showed that the first component (glycosylated haemoglobin, glycemia, and creatinine) had a significant effect on the progression of kidney disease. 
For instance, the cumulative odds model indicated that the first principal component (a linear combination of glycosylated haemoglobin, glycemia, and creatinine) had a strong and significant effect on the progression of kidney disease, with an odds ratio of 0.423 (P < 0.001). However, this effect was inconsistent across the levels of kidney disease because the first principal component did not meet the proportionality assumption. To address the proportionality problem and provide robust effect estimates, alternative ordered logit models, such as the partial cumulative odds model, the partial adjacent-category model, and the partial continuation-ratio model, were used. These models suggested that clinical variables such as age, sex, body mass index, and medication (metformin), and biochemical variables such as glycosylated haemoglobin, glycemia, and creatinine, have a significant effect on the progression of kidney disease.
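The dimensionality-reduction step described above, collapsing correlated biochemical variables into uncorrelated components, can be sketched briefly. This assumes scikit-learn; the variables are simulated with an artificial correlation between glycemia and glycosylated haemoglobin, as a stand-in for the patient records.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n = 300
glycemia = rng.normal(100, 15, n)                 # synthetic values
hba1c = 0.05 * glycemia + rng.normal(0, 0.3, n)   # correlated with glycemia
hdl = rng.normal(50, 8, n)                        # roughly independent
X = np.column_stack([glycemia, hba1c, hdl])

# Standardise, then project onto the principal components
Z = (X - X.mean(axis=0)) / X.std(axis=0)
pca = PCA(n_components=2)
scores = pca.fit_transform(Z)

print(pca.explained_variance_ratio_)              # variance captured per component
print(np.corrcoef(scores[:, 0], scores[:, 1])[0, 1])  # ~0: components uncorrelated
```

The component scores, rather than the raw correlated variables, would then enter the ordered logit models, which is what removes the distortion that intercorrelation causes in the effect estimates.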

Keywords: diabetic kidney disease, ordered logit model, principal component analysis, type 2 diabetes

Procedia PDF Downloads 33
971 Epigenetics and Archeology: A Quest to Re-Read Humanity

Authors: Salma A. Mahmoud

Abstract:

Epigenetics, or alteration in gene expression influenced by extragenetic factors, has emerged as one of the most promising areas for addressing gaps in our current understanding of patterns of human variation. In the last decade, research investigating epigenetic mechanisms in many fields has flourished and witnessed significant progress. It has paved the way for a new era of integrated research, especially between anthropology/archeology and the life sciences. Skeletal remains are considered the most significant source of information for studying human variation across history, and through these valuable remains we can interpret past events, cultures, and populations. In addition to their archeological, historical, and anthropological importance, bones have great implications for other fields such as medicine and science. Bones can also hold the secrets of the future, as they can act as predictive tools for health, societal characteristics, and dietary requirements. Bones in their basic form are composed of cells (osteocytes) that are affected by both genetic and environmental factors, which alone can explain only a small part of their variability. The primary objective of this project is to examine the epigenetic landscape/signature within the bones of archeological remains as a novel marker that could reveal new ways to conceptualize chronological events, gender differences, social status, and ecological variations. We attempt here to address discrepancies in common variants such as the methylome, as well as novel epigenetic regulators such as chromatin remodelers, which to the best of our knowledge have not yet been investigated by anthropologists/paleoepigeneticists, using a plethora of techniques (biological, computational, and statistical). 
Moreover, extracting epigenetic information from bones will highlight the importance of osseous material as a vector for studying human beings in several contexts (social, cultural, and environmental) and strengthen its essential role as a model system for investigating and reconstructing various cultural, political, and economic events. We also address all the steps required to plan and conduct an epigenetic analysis of bone material (modern and ancient), and we discuss the key challenges facing researchers aiming to investigate this field. In conclusion, this project will serve as a primer for bioarcheologists/anthropologists and human biologists interested in incorporating epigenetic data into their research programs. Understanding the roles of epigenetic mechanisms in bone structure and function will be very helpful for a better comprehension of bone biology, highlighting its essentiality as an interdisciplinary vector and a key material in archeological research.

Keywords: epigenetics, archeology, bones, chromatin, methylome

Procedia PDF Downloads 104
970 Virtual Metering and Prediction of Heating, Ventilation, and Air Conditioning Systems Energy Consumption by Using Artificial Intelligence

Authors: Pooria Norouzi, Nicholas Tsang, Adam van der Goes, Joseph Yu, Douglas Zheng, Sirine Maleej

Abstract:

In this study, virtual meters are designed and used for energy balance measurements of an air handling unit (AHU). The method aims to replace traditional physical sensors in heating, ventilation, and air conditioning (HVAC) systems with simulated virtual meters. Because they are difficult to manage and monitor, many HVAC systems operate with a high level of inefficiency and energy waste. Virtual meters were implemented and applied in an actual HVAC system, and the results confirm the practicality of mathematical sensors as an alternative means of energy measurement. Most residential buildings and offices are not equipped with advanced sensors, and adding, operating, and monitoring sensors and measurement devices in existing systems can cost thousands of dollars. The first purpose of this study is to provide an energy consumption rate based on the available sensors, without any physical energy meters, and thereby prove the performance of virtual meters in HVAC systems as reliable measurement devices. To demonstrate this concept, mathematical models were created for AHU-07, located in building NE01 of the British Columbia Institute of Technology (BCIT) Burnaby campus. The models are created and integrated with the system's historical data and physical spot measurements, and the actual measurements are examined to verify the models' accuracy. Based on preliminary analysis, the resulting mathematical models successfully reproduce energy consumption patterns, and it is concluded that the results of the virtual meters will be close to those that physical meters could achieve. In the second part of this study, the use of virtual meters is further assisted by artificial intelligence (AI) in the building's HVAC systems to improve energy management and efficiency.
Using a data mining approach, the virtual meters' output is recorded as historical data, and HVAC energy consumption prediction is implemented in order to realize substantial energy savings and manage the demand and supply chain effectively. Energy prediction can inform energy-saving strategies and open a window for predictive control aimed at lower energy consumption. The energy prediction can thus be used to optimize the HVAC system and automate energy management to capture savings. This study also investigates the possibility of AI solutions for autonomous HVAC efficiency that would allow a quick and efficient response to energy consumption and cost spikes in the energy market.
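The energy balance behind such a virtual meter can be illustrated with a simple sensible-heat calculation over an AHU coil. The sketch below is a generic illustration, not the paper's model; the air density and specific-heat constants and the function name are assumptions:

```python
AIR_DENSITY = 1.2  # kg/m^3, assumed typical indoor air density
AIR_CP = 1.005     # kJ/(kg*K), specific heat of air at constant pressure

def virtual_coil_power(flow_m3_s, t_entering_c, t_leaving_c):
    """Estimate coil heating/cooling power (kW) for an AHU using only
    the airflow and temperature sensors already present -- the idea
    behind replacing a physical energy meter with a virtual one."""
    mass_flow = flow_m3_s * AIR_DENSITY                       # kg/s
    return mass_flow * AIR_CP * (t_leaving_c - t_entering_c)  # kW
```

For example, 2 m3/s of air heated from 15 °C to 25 °C corresponds to roughly 24 kW of coil power; integrating such readings over time yields an energy consumption record without a physical meter.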

Keywords: virtual meters, HVAC, artificial intelligence, energy consumption prediction

Procedia PDF Downloads 100
969 Achieving Net Zero Energy Building in a Hot Climate Using Integrated Photovoltaic and Parabolic Trough Collectors

Authors: Adel A. Ghoneim

Abstract:

In most existing buildings in hot climates, cooling loads lead to high primary energy consumption and consequently high CO2 emissions. These can be substantially decreased with integrated renewable energy systems. Kuwait is characterized by a long, dry, hot summer and a short, warm winter. Kuwait receives annual total radiation of more than 5280 MJ/m2 with approximately 3347 h of sunshine. Solar energy systems consisting of PV modules and parabolic trough collectors are considered to satisfy the electricity consumption, domestic water heating, and cooling loads of an existing building. This paper presents the results of an extensive program of energy conservation and energy generation using integrated photovoltaic (PV) modules and parabolic trough collectors (PTC). The program was conducted on an existing institutional building with the intention of converting it into a Net-Zero Energy Building (NZEB) or near Net-Zero Energy Building (nNZEB). The program consists of two phases: the first phase is concerned with energy auditing and energy conservation measures at minimum cost, and the second phase considers the installation of photovoltaic modules and parabolic trough collectors. The two-storey building under consideration is the Applied Sciences Department at the College of Technological Studies, Kuwait. Single-effect lithium bromide-water absorption chillers are implemented to meet the building's air conditioning load. A numerical model is developed to evaluate the performance of parabolic trough collectors in Kuwait's climate. The transient simulation program TRNSYS is adapted to simulate the performance of the different solar system components. In addition, a numerical model is developed to assess the environmental impacts of building-integrated renewable energy systems. Results indicate that efficient energy conservation can play an important role in converting existing buildings into NZEBs, as it saves a significant portion of the building's annual energy consumption.
The first phase results in an energy saving of about 28% of the building's consumption. In the second phase, the integrated PV completely covers the lighting and equipment loads of the building, while parabolic trough collectors with an optimum area of 765 m2 can satisfy a significant portion of the cooling load, i.e., about 73% of the total building cooling load. The annual avoided CO2 emission is evaluated at the optimum conditions to assess the environmental benefit of the renewable energy systems. The total annual avoided CO2 emission is about 680 metric tons/year, which confirms the environmental benefit of these systems in Kuwait.
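The avoided-emission figure follows from a simple displacement calculation: energy supplied by the solar systems no longer has to be generated by the grid. The sketch below is a generic illustration, not the paper's assessment model, and the grid emission factor shown is an assumed placeholder value:

```python
GRID_EMISSION_FACTOR = 0.9  # metric tons CO2 per MWh, assumed grid value

def avoided_co2_tons(annual_solar_yield_mwh, factor=GRID_EMISSION_FACTOR):
    """CO2 avoided per year by displacing grid electricity with solar output."""
    return annual_solar_yield_mwh * factor
```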

Keywords: building integrated renewable systems, Net-Zero energy building, solar fraction, avoided CO2 emission

Procedia PDF Downloads 600
968 Geostatistical Analysis of Contamination of Soils in an Urban Area in Ghana

Authors: S. K. Appiah, E. N. Aidoo, D. Asamoah Owusu, M. W. Nuonabuor

Abstract:

Urbanization remains one of the predominant factors linked to the destruction of the urban environment and to the associated cases of soil contamination by heavy metals through natural and anthropogenic activities. These activities are important sources of toxic heavy metals such as arsenic (As), cadmium (Cd), chromium (Cr), copper (Cu), iron (Fe), manganese (Mn), nickel (Ni), lead (Pb), and zinc (Zn). These heavy metals often reach elevated levels in some areas due to atmospheric deposition caused by proximity to industrial plants or by the indiscriminate burning of substances. Information on potentially hazardous levels of these heavy metals in soils points to serious implications for health and urban agriculture. However, the characterization of spatial variations in soil contamination by heavy metals in Ghana is limited. Kumasi is a metropolitan city in Ghana, West Africa, and is challenged by a recent spate of deteriorating soil quality due to rapid economic development and other human activities such as "galamsey", illegal mining operations within the metropolis. The paper seeks to use both univariate and multivariate geostatistical techniques to assess the spatial distribution of heavy metals in soils and the potential risks associated with the sources of soil contamination in the metropolis. Geostatistical tools can detect changes in correlation structure, and good knowledge of the study area can help to explain the different scales of variation detected. To achieve this task, point-referenced data on heavy metals, measured from topsoil samples in a previous study, were collected at various locations.
Linear models of regionalisation and coregionalisation were fitted to all experimental semivariograms to describe the spatial dependence between the topsoil heavy metals at different spatial scales, which led to ordinary kriging and cokriging at unsampled locations and the production of risk maps of soil contamination by these heavy metals. Results obtained from both the univariate and multivariate semivariogram models showed strong spatial dependence, with autocorrelation ranges from 100 to 300 meters. The risk maps produced show strong spatial heterogeneity for almost all of the soil heavy metals, with extreme risk of contamination found close to areas with commercial and industrial activities. Hence, ongoing pollution interventions should be geared towards these high-risk areas for efficient management of soil contamination to avert further pollution in the metropolis.
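The experimental semivariograms to which the regionalisation models are fitted can be computed directly from the point-referenced data. The sketch below is a minimal, generic implementation of the classical estimator; the function name and the lag tolerance parameter are assumptions, not taken from the paper:

```python
import numpy as np

def empirical_semivariogram(coords, values, lags, tol=0.5):
    """Classical (Matheron) estimator: for each lag h, average half the
    squared differences of all sample pairs separated by roughly h."""
    coords = np.asarray(coords, dtype=float)
    values = np.asarray(values, dtype=float)
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    sqdiff = (values[:, None] - values[None, :]) ** 2
    gamma = []
    for h in lags:
        pairs = np.triu(np.abs(dist - h) <= tol, k=1)  # count each pair once
        n = pairs.sum()
        gamma.append(sqdiff[pairs].sum() / (2 * n) if n else np.nan)
    return np.array(gamma)
```

Fitting a spherical or exponential model to these gamma(h) values is what reveals the autocorrelation range (100 to 300 m here) subsequently used for kriging and cokriging.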

Keywords: coregionalization, heavy metals, multivariate geostatistical analysis, soil contamination, spatial distribution

Procedia PDF Downloads 292
967 Identification of Natural Liver X Receptor Agonists as the Treatments or Supplements for the Management of Alzheimer and Metabolic Diseases

Authors: Hsiang-Ru Lin

Abstract:

Cholesterol plays an essential role in regulating the progression of numerous important diseases, including atherosclerosis and Alzheimer's disease, so suitable cholesterol-lowering agents urgently need to be developed. Liver X receptor (LXR) is a ligand-activated transcription factor whose natural ligands are cholesterols, oxysterols, and glucose. Once activated, LXR can transactivate the transcription of various genes, including CYP7A1, ABCA1, and SREBP1c, involved in lipid metabolism, glucose metabolism, and the inflammatory pathway. Essentially, the upregulation of ABCA1 facilitates cholesterol efflux from cells and attenuates the production of beta-amyloid (Abeta) 42 in the brain, so LXR is a promising target for developing cholesterol-lowering agents and a preventative treatment for Alzheimer's disease. Engelhardia roxburghiana is a deciduous tree growing in India, China, and Taiwan; to date, however, its chemical constituents have only been reported to exhibit antitubercular and anti-inflammatory effects. In this study, four compounds, engelheptanoxides A and C and engelhardiol A and B, isolated from the root of Engelhardia roxburghiana, were evaluated for their agonistic activity against LXR by transient transfection reporter assays in HepG2 cells. Furthermore, their interaction modes with the LXR ligand-binding pocket were generated by molecular modeling programs. In the cell-based biological assays, engelheptanoxides A and C and engelhardiol A and B, which showed no cytotoxic effect on the proliferation of HepG2 cells, exerted clear LXR agonistic effects with activity similar to that of T0901317, a novel synthetic LXR agonist. Further modeling studies, including docking and SAR (structure-activity relationship) analysis, showed that these compounds can occupy the LXR ligand-binding pocket in a manner similar to T0901317. Thus, LXR is one of the nuclear receptors targeted by the pharmaceutical industry for developing treatments for Alzheimer's disease and atherosclerosis.
Importantly, the cell-based assays, together with molecular modeling studies suggesting a plausible binding mode, demonstrate that engelheptanoxides A, C, engelhardiol A, and B function as LXR agonists. This is the first report to demonstrate that the extract of Engelhardia roxburghiana contains LXR agonists. As such, these active components of Engelhardia roxburghiana or subsequent analogs may show important therapeutic effects through selective modulation of the LXR pathway.

Keywords: Liver X receptor (LXR), Engelhardia roxburghiana, CYP7A1, ABCA1, SREBP1c, HepG2 cells

Procedia PDF Downloads 417
966 Adaptation of the Scenario Test for Greek-speaking People with Aphasia: Reliability and Validity Study

Authors: Marina Charalambous, Phivos Phylactou, Thekla Elriz, Loukia Psychogios, Jean-Marie Annoni

Abstract:

Background: Evidence-based practices for the evaluation and treatment of people with aphasia (PWA) in Greek are mainly impairment-based. Functional and multimodal communication is usually under-assessed and neglected by clinicians. This study explores the adaptation and psychometric testing of the Greek (GR) version of The Scenario Test. The Scenario Test assesses the everyday functional communication of PWA in an interactive multimodal communication setting with the support of an active communication facilitator. Aims: To establish the reliability and validity of The Scenario Test-GR and discuss its clinical value. Methods & Procedures: The Scenario Test-GR was administered to 54 people with chronic stroke (6+ months post-stroke): 32 PWA and 22 people with stroke without aphasia. Participants were recruited from Greece and Cyprus. All measures were administered in an interview format. Standard psychometric criteria were applied to evaluate the reliability (internal consistency, test-retest, and interrater reliability) and validity (construct and known-groups validity) of The Scenario Test-GR. Video analysis was performed for the qualitative examination of the communication modes used. Outcomes & Results: The Scenario Test-GR shows high levels of reliability and validity. High scores were found for internal consistency (Cronbach's α = .95), test-retest reliability (ICC = .99), and interrater reliability (ICC = .99). Interrater agreement on scores for individual items fell between good and excellent levels of agreement. Correlations with a tool measuring language function in aphasia (the Aphasia Severity Rating Scale of the Boston Diagnostic Aphasia Examination), a measure of functional communication (the Communicative Effectiveness Index), and two instruments examining the psychosocial impact of aphasia (the Stroke and Aphasia Quality of Life questionnaire and the Aphasia Impact Questionnaire) revealed good convergent validity (all ps < .05).
Results showed good known-groups validity (Mann-Whitney U = 96.5, p < .001), with significantly higher scores for participants without aphasia than for those with aphasia. Conclusions: The psychometric qualities of The Scenario Test-GR support the reliability and validity of the tool for the assessment of the functional communication of Greek-speaking PWA. The Scenario Test-GR can be used to assess multimodal functional communication, orient aphasia rehabilitation goal setting towards the activity and participation level, and serve as an outcome measure of everyday communication. Future studies will focus on measuring sensitivity to change in PWA with severe non-fluent aphasia.
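The internal-consistency statistic reported above can be reproduced with a few lines of code. This is a generic illustration of how Cronbach's alpha is computed from item-level scores, not the authors' analysis script:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a score matrix: rows = participants,
    columns = test items."""
    x = np.asarray(scores, dtype=float)
    k = x.shape[1]                          # number of items
    item_vars = x.var(axis=0, ddof=1)       # sample variance of each item
    total_var = x.sum(axis=1).var(ddof=1)   # variance of participants' totals
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)
```

Values approaching 1 (such as the .95 reported here) indicate that the test items measure a single underlying construct consistently.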

Keywords: the scenario test GR, functional communication assessment, people with aphasia (PWA), tool validation

Procedia PDF Downloads 123
965 Urban Waste Management for Health and Well-Being in Lagos, Nigeria

Authors: Bolawole F. Ogunbodede, Mokolade Johnson, Adetunji Adejumo

Abstract:

A high population growth rate, reactive infrastructure provision, and the inability of physical planning to cope with the pace of development are responsible for the wastewater crisis in the Lagos metropolis. The septic tank is still the most prevalent wastewater holding system; unfortunately, there is a dearth of septage treatment infrastructure. Public wastewater treatment statistics relative to the 23 million people in Lagos State are worrisome: 1.85 billion cubic meters of wastewater are generated on a daily basis, and only 5% of the 26 million population is connected to the public sewerage system. This is compounded by inadequate budgetary allocation and erratic power supply over the last two decades. This paper explores a community-participatory wastewater management alternative in the Oworonshoki municipality of Lagos. The study is underpinned by decentralized wastewater management systems in built-up areas. The initiative covers the five steps of the wastewater cycle, namely generation, storage, collection, processing, and disposal, through participatory decision-making in two Oworonshoki Community Development Association (CDA) areas. Drone-assisted mapping highlighted building footprints. Structured interviews and focused group discussions with landlord associations in the CDA areas provided a collaborative platform for decision-making. Water stagnation in the primary open drainage channels and in the natural retention ponds of the fringing wetlands is traceable to the more frequent climate-change-induced tidal influences of recent decades. A rise in the water table, resulting in septic tank leakage and water pollution, is reported to be responsible for the increase in water-borne illnesses documented in primary health centers, in addition to the unhealthy dumping of solid waste in the drainage channels. Uncontrolled disposal renders surface waters and underground water systems unsafe for human and recreational use, destroys biotic life, and poisons the fragile sand barrier-lagoon urban ecosystem.
A clustered decentralized system was conceptualized to serve 255 households. Stakeholders agreed on a public-private partnership initiative for efficient wastewater service delivery.

Keywords: health, infrastructure, management, septage, well-being

Procedia PDF Downloads 167
964 Servitization in Machine and Plant Engineering: Leveraging Generative AI for Effective Product Portfolio Management Amidst Disruptive Innovations

Authors: Till Gramberg

Abstract:

In the dynamic world of machine and plant engineering, stagnation in the growth of new product sales compels companies to reconsider their business models. The increasing shift toward service orientation, known as "servitization", along with the challenges posed by digitalization and sustainability, necessitates an adaptation of product portfolio management (PPM). Against this backdrop, this study investigates the current challenges and requirements of PPM in this industrial context and develops a framework for the application of generative artificial intelligence (AI) to enhance agility and efficiency in PPM processes. The research approach of this study is based on a mixed-method design. Initially, qualitative interviews with industry experts were conducted to gain a deep understanding of the specific challenges and requirements in PPM. These interviews were analyzed using the Gioia method, painting a detailed picture of the existing issues and needs within the sector. This was complemented by a quantitative online survey. The combination of qualitative and quantitative research enabled a comprehensive understanding of the current challenges in the practical application of PPM in machine and plant engineering. Based on these insights, a specific framework for the application of generative AI in PPM was developed. This framework aims to assist companies in implementing faster and more agile processes, systematically integrating dynamic requirements from trends such as digitalization and sustainability into their PPM process. Using generative AI technologies, companies can identify and respond to trends and market changes more quickly, allowing for a more efficient and targeted adaptation of the product portfolio. The study emphasizes the importance of an agile and reactive approach to PPM in a rapidly changing environment.
It demonstrates how generative AI can serve as a powerful tool to manage the complexity of a diversified and continually evolving product portfolio. The developed framework offers practical guidelines and strategies for companies to improve their PPM processes by leveraging the latest technological advancements while maintaining ecological and social responsibility. This paper significantly contributes to deepening the understanding of the application of generative AI in PPM and provides a framework for companies to manage their product portfolios more effectively and adapt to changing market conditions. The findings underscore the relevance of continuous adaptation and innovation in PPM strategies and demonstrate the potential of generative AI for proactive and future-oriented business management.

Keywords: servitization, product portfolio management, generative AI, disruptive innovation, machine and plant engineering

Procedia PDF Downloads 70
963 Strategies for Improving and Sustaining Quality in Higher Education

Authors: Anshu Radha Aggarwal

Abstract:

Higher education (HE) in India has experienced a series of remarkable changes over the last fifteen years as successive governments have sought to make the sector more efficient and more accountable for the investment of public funds. Rapid expansion in student numbers and pressure to widen participation amongst non-traditional students are key challenges facing HE. Learning outcomes can act as a benchmark for assuring quality and efficiency in HE, and they also enable universities to describe courses in an unambiguous way so as to demystify (and open up) education to a wider audience. This paper examines how learning outcomes are used in HE and evaluates the implications for curriculum design and student learning. There has been huge expansion in the field of higher education, both technical and non-technical, in India during the last two decades, and this trend is continuing. It is expected that approximately 400 more colleges and 300 more universities will be created by the end of the 13th Plan period. This has led to many concerns about the quality of the education and training of our students. Many studies have brought to light the issues ailing our curricula, delivery, monitoring, and assessment. The Government of India (via MHRD, UGC, NBA, etc.) has initiated several steps to improve the quality of higher education and training, such as the National Skills Qualification Framework and making accreditation of institutions mandatory in order to receive government grants. Moreover, outcome-based education and training (OBET) has been mandated and encouraged in teaching/learning institutions. MHRD, UGC, and NBA have made accreditation of schools, colleges, and universities mandatory with effect from January 2014. The OBET approach is learner-centric, whereas the traditional approach has been teacher-centric.
OBET is a process that involves the re-orientation/restructuring of the curriculum, its implementation, the assessment/measurement of educational goals, and the achievement of higher-order learning, rather than merely clearing/passing university examinations. OBET aims to bring about these desired changes within students by increasing knowledge, developing skills, influencing attitudes, and creating a socially connected mindset. This approach has been adopted by several leading universities and institutions in advanced countries around the world. The objectives of this paper are to highlight the issues concerning quality in higher education and quality frameworks, to deliberate on the various education and training models, to explain outcome-based education and assessment processes, to provide an understanding of the NAAC and outcome-based accreditation criteria and processes, and to share best-practice outcome-based accreditation systems and processes.

Keywords: learning outcomes, curriculum development, pedagogy, outcome based education

Procedia PDF Downloads 517
962 Policy Initiatives That Increase Mass-Market Participation of Fuel Cell Electric Vehicles

Authors: Usman Asif, Klaus Schmidt

Abstract:

In recent years, the development of alternative-fuel vehicles has helped to reduce carbon emissions worldwide. As the number of vehicles continues to increase, energy demand will also increase. Therefore, we must consider automotive technologies that are efficient and less harmful to the environment in the long run. Battery electric vehicles (BEVs) have gained popularity in recent years because of their lower maintenance, lower fuel costs, and lower carbon emissions. Nevertheless, BEVs have several disadvantages, such as slow charging times and lower range than traditional combustion-powered vehicles. These factors keep many people from switching to BEVs. The authors of this research believe that these limitations can be overcome by using fuel cell technology. A fuel cell converts the chemical energy of hydrogen into electrical energy to power the motor, replacing the heavy lithium batteries that are expensive and hard to recycle. Also, in contrast to battery-powered electric vehicle technology, fuel cell electric vehicles (FCEVs) offer longer ranges and shorter fueling times and are therefore competitive with electric vehicles. However, FCEVs have not gained the same popularity as electric vehicles due to stringent legal frameworks, underdeveloped infrastructure, high fuel transport and storage costs, and the expense of fuel cell technology itself. This research focuses on the legal frameworks for hydrogen-powered vehicles and on how a change in these policies may improve hydrogen fueling infrastructure and lower hydrogen transport and storage costs. These policies may also facilitate reductions in the cost of fuel cell technology. In order to attain a better framework, a number of countries have developed conceptual roadmaps. These roadmaps set out a series of objectives to increase the access of FCEVs to their respective markets.
This research will specifically focus on policies in Japan, Europe, and the USA in their attempt to shape the automotive industry of the future. The researchers also suggest additional policies that may help to accelerate the advancement of FCEVs to mass-markets. The approach was to provide a solid literature review using resources from around the globe. After a subsequent analysis and synthesis of this review, the authors concluded that in spite of existing legal challenges that have hindered the advancement of fuel-cell technology in the automobile industry in the past, new initiatives that enhance and advance the very same technology in the future are underway.

Keywords: fuel cell electric vehicles, fuel cell technology, legal frameworks, policies and regulations

Procedia PDF Downloads 109
961 Development of Agomelatine Loaded Proliposomal Powders for Improved Intestinal Permeation: Effect of Surface Charge

Authors: Rajasekhar Reddy Poonuru, Anusha Parnem

Abstract:

Purpose: To formulate a proliposome powder of agomelatine, an antipsychotic drug, and to evaluate its physicochemical and in vitro characteristics and the effect of surface charge on ex vivo intestinal permeation. Methods: A film deposition technique was employed to develop proliposomal powders of agomelatine with varying molar ratios of the lipid Hydro Soy PC L-α-phosphatidylcholine (HSPC) and cholesterol with a fixed amount of drug. With the aim of deriving a free-flowing and stable proliposome powder, the fluid retention potential of various carriers was examined. Liposome formation, the number of vesicles formed per mm3 upon hydration, vesicle size, and entrapment efficiency were assessed to arrive at an optimized formulation. Sodium cholate was added to the optimized formulation to induce a surface charge on the formed vesicles. Solid-state characterization (FTIR, DSC, and XRD) was performed with the intention of assessing the native crystalline and chemical behavior of the drug. In vitro dissolution testing of the optimized formulation along with the pure drug was performed to estimate the dissolution efficiency (DE) and relative dissolution rate (RDR). The effective permeability coefficient in rat (Peff(rat)) and the enhancement ratio (ER) of drug from the formulation and from pure drug dispersion were calculated from ex vivo permeation studies in rat ileum. Results: The proliposomal powder formulated with an equimolar ratio of HSPC and cholesterol resulted in the highest number of vesicles (3.95) with 90% drug entrapment upon hydration. Neusilin UFL2 was selected as the carrier because of its high fluid retention potential (4.5) and good flow properties. The proliposome powder exhibited an improvement in the DE (60.3 ± 3.34) and RDR (21.2 ± 1.02) of agomelatine over the pure drug. Solid-state characterization studies demonstrated the transformation of the native crystalline form of the drug to an amorphous and/or molecular state, in correlation with the results obtained from the in vitro dissolution test.
The elevated Peff(rat) of 46.5 × 10-4 cm/s and ER of 2.65 for the drug from the charge-induced proliposome formulation with respect to pure drug dispersion were determined from ex vivo intestinal permeation studies performed in the ileum of Wistar rats. Conclusion: The improved physicochemical characteristics and ex vivo intestinal permeation of the drug from the charge-induced proliposome powder with Neusilin UFL2 reveal the potential of this system for enhancing the oral delivery of agomelatine.

Keywords: agomelatine, proliposome, sodium cholate, Neusilin

Procedia PDF Downloads 129
960 Mapping Forest Biodiversity Using Remote Sensing and Field Data in the National Park of Tlemcen (Algeria)

Authors: Bencherif Kada

Abstract:

In forest management practice, the landscape and the Mediterranean forest are never treated as linked objects, yet sustainable forestry requires valorization of the forest landscape. This aim involves assessing the spatial distribution of biodiversity by mapping forest landscape units and subunits and by monitoring environmental trends. This contribution aims to highlight, through object-oriented classification, the landscape biodiversity of the National Park of Tlemcen (Algeria). The methodology used is based on ground data and on the basic processing units of object-oriented classification, namely segments, so-called image objects, representing relatively homogeneous units on the ground. The classification of Landsat Enhanced Thematic Mapper Plus (ETM+) imagery is performed on image objects, not on pixels. The advantages of object-oriented classification are that it makes full use of meaningful statistics and texture calculation, uncorrelated shape information (e.g., length-to-width ratio, direction and area of an object, etc.), and topological features (neighbor, super-object, etc.), as well as the close relation between real-world objects and image objects. The results show that per-object classification using the k-nearest neighbors method is more efficient than per-pixel classification. It simplifies the content of the image while preserving spectrally and spatially homogeneous land-cover types such as Aleppo pine stands, cork oak groves, mixed groves of cork oak, holm oak and zen oak, mixed groves of holm oak and thuja, water bodies, dense and open oak shrublands, vegetable crops or orchards, herbaceous plants, and bare soils. Texture attributes seem to provide no useful information, while the spatial attributes of shape and compactness perform well for all the dominant features, such as pure stands of Aleppo pine and/or cork oak and bare soils. Landscape subunits are individualized while conserving the spatial information.
Dense stands that are continuously dominant over a large area were merged into a single class, as were fragmented dense stands with clearings. Low shrubland formations and high wooded shrublands are well individualized, although the former show some confusion with enclaves. Overall, a visual evaluation shows that the classification reflects the actual spatial state of the study area at the landscape level.
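The per-object k-nearest neighbors step can be sketched as follows. Each segment is summarized by a feature vector (e.g., mean spectral values and shape attributes) and labeled with the majority class among its k nearest training segments. The feature values, class names, and function name below are hypothetical, and the distance metric is assumed to be Euclidean:

```python
import numpy as np

def knn_classify(train_feats, train_labels, query, k=3):
    """Assign a query image object the majority class among its k
    nearest training objects in feature space (Euclidean distance)."""
    d = np.linalg.norm(train_feats - query, axis=1)  # distance to each sample
    nearest = train_labels[np.argsort(d)[:k]]        # labels of k nearest
    classes, counts = np.unique(nearest, return_counts=True)
    return classes[np.argmax(counts)]                # majority vote
```

Applying this per object rather than per pixel lets the classifier exploit shape and compactness attributes that individual pixels do not carry.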

Keywords: forest, oaks, remote sensing, biodiversity, shrublands

Procedia PDF Downloads 24
959 From Servicescape to Servicespace: Qualitative Research in a Post-Cartesian Retail Context

Authors: Chris Houliez

Abstract:

This study addresses the complex dynamics of the modern retail environment, focusing on how the ubiquitous nature of mobile communication technologies has reshaped the shopper experience and tested the limits of the conventional "servicescape" concept commonly used to describe retail experiences. The objective is to redefine the conceptualization of retail space by introducing an approach to space that aligns with a retail context where physical and digital interactions are increasingly intertwined. To offer a more shopper-centric understanding of the retail experience, this study draws from phenomenology, particularly Henri Lefebvre’s work on the production of space. The presented protocol differs from traditional methodologies by not making assumptions about what constitutes a retail space. Instead, it adopts a perspective based on Lefebvre’s seminal work, which posits that space is not a three-dimensional container commonly referred to as “servicescape” but is actively produced through shoppers’ spatial practices. This approach allows for an in-depth exploration of the retail experience by capturing the everyday spatial practices of shoppers without preconceived notions of what constitutes a retail space. The designed protocol was tested with eight participants during 209 hours of day-long field trips, immersing the researcher into the shopper's lived experience by combining multiple data collection methods, including participant observation, videography, photography, and both pre-fieldwork and post-fieldwork interviews. By giving equal importance to both locations and connections, this study unpacked various spatial practices that contribute to the production of retail space. These findings highlight the relative inadequacy of some traditional retail space conceptualizations, which often fail to capture the fluid nature of contemporary shopping experiences. 
The study's emphasis on the customization process, through which shoppers optimize their retail experience by producing a “fully lived retail space,” offers a more comprehensive understanding of consumer shopping behavior in the digital age. In conclusion, this research presents a significant shift in the conceptualization of retail space. By employing a phenomenological approach rooted in Lefebvre’s theory, the study provides a more efficient framework to understand the retail experience in the age of mobile communication technologies. Although this research is limited by its small sample size and the demographic profile of participants, it offers valuable insights into the spatial practices of modern shoppers and their implications for retail researchers and retailers alike.

Keywords: shopper behavior, mobile telecommunication technologies, qualitative research, servicescape, servicespace

Procedia PDF Downloads 13
958 An Energy and Economic Comparison of Solar Thermal Collectors for Domestic Hot Water Applications

Authors: F. Ghani, T. S. O’Donovan

Abstract:

Today, the global solar thermal market is dominated by two collector types: the flat plate and the evacuated tube collector. With regard to the number of installations worldwide, the evacuated tube collector is the dominant variant, primarily due to the Chinese market, but the flat plate collector dominates both the Australian and European markets. The market share of the evacuated tube collector is, however, growing in Australia due to a common belief that this collector type is ‘more efficient’ and, therefore, the better choice for hot water applications. In this study, we investigate this issue further to assess the validity of this statement. This was achieved by methodically comparing the performance and economics of several solar thermal systems comprising a low-performance flat plate collector, a high-performance flat plate collector, and an evacuated tube collector, each coupled with a storage tank and pump. All systems were simulated using the commercial software package Polysun for four climate zones in Australia, to take different weather profiles into account, and were subjected to a thermal load equivalent to a household of four people. Our study revealed that the energy savings and payback periods varied significantly for systems operating under specific environmental conditions. Solar fractions ranged between 58 and 100 per cent, while payback periods ranged between 3.8 and 10.1 years. Although the evacuated tube collector was found to operate with a marginally higher thermal efficiency than the selective-surface flat plate collector due to reduced ambient heat loss, the high-performance flat plate collector outperformed the evacuated tube collector on thermal yield. This result was obtained because the flat plate collector possesses a significantly higher absorber-to-gross-collector-area ratio than the evacuated tube collector. 
Furthermore, it was found that for Australian regions with a high average solar radiation intensity and ambient temperature, the lower performance collector is the preferred choice due to favorable economics and reduced stagnation temperature. Our study has provided additional insight into the thermal performance and economics of the two prevalent solar thermal collectors currently available. A computational investigation has been carried out specifically for the Australian climate due to its geographic size and significant variation in weather. For domestic hot water applications where fluid temperatures between 50 and 60 degrees Celsius are sought, the flat plate collector is both technically and economically favorable over the evacuated tube collector. This research will be useful to system design engineers, solar thermal manufacturers, and those involved in policy to encourage the implementation of solar thermal systems into the hot water market.
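The solar fractions and payback periods reported above follow from a simple energy-economics balance. As a minimal illustrative sketch (not the Polysun methodology; the cost, tariff, and yield figures below are hypothetical), the two quantities can be computed as:

```python
def solar_fraction(solar_yield_kwh: float, demand_kwh: float) -> float:
    """Share of the annual hot-water demand met by the solar system."""
    return min(solar_yield_kwh / demand_kwh, 1.0)

def simple_payback_years(system_cost: float, annual_yield_kwh: float,
                         tariff_per_kwh: float) -> float:
    """Undiscounted payback period: capital cost over yearly energy savings."""
    return system_cost / (annual_yield_kwh * tariff_per_kwh)

# Hypothetical system: AUD 4000 installed, 2500 kWh/year solar yield,
# 5000 kWh/year hot-water demand, AUD 0.30/kWh electricity tariff.
fraction = solar_fraction(2500.0, 5000.0)             # 0.5
payback = simple_payback_years(4000.0, 2500.0, 0.30)  # ~5.3 years
```

A discounted cash-flow analysis would shorten or lengthen these figures depending on the assumed discount rate; the undiscounted form is only the simplest comparator.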

Keywords: solar thermal, energy analysis, flat plate, evacuated tube, collector performance

Procedia PDF Downloads 207
957 Biofiltration Odour Removal at Wastewater Treatment Plant Using Natural Materials: Pilot Scale Studies

Authors: D. Lopes, I. I. R. Baptista, R. F. Vieira, J. Vaz, H. Varela, O. M. Freitas, V. F. Domingues, R. Jorge, C. Delerue-Matos, S. A. Figueiredo

Abstract:

Deodorization is nowadays a necessity in wastewater treatment plants. Nitrogen and sulphur compounds, volatile fatty acids, aldehydes and ketones are responsible for the unpleasant odours, with ammonia, hydrogen sulphide and mercaptans being the most common pollutants. Although chemical treatments of the extracted air are efficient, they are more expensive than biological treatments, namely due to the use of chemical reagents (commonly sulphuric acid, sodium hypochlorite and sodium hydroxide). Biofiltration offers the advantage of avoiding the use of reagents (only in some cases are nutrients added in order to increase the treatment efficiency) and can be considered a sustainable process when the packing medium used is of natural origin. In this work, the application of some locally available natural materials was studied both at laboratory and pilot scale, in a real wastewater treatment plant. The materials selected for this study were indigenous Portuguese forest materials derived from eucalyptus and pine wood, such as woodchips and bark; coconut fiber was also used for comparison purposes. Their physico-chemical characterization was performed: density, moisture, pH, buffer and water retention capacity. Laboratory studies involved batch adsorption studies for ammonia and hydrogen sulphide removal and evaluation of microbiological activity. Four pilot-scale biofilters (1 cubic meter volume each) were installed at a local wastewater treatment plant treating odours from the effluent receiving chamber. Each biofilter contained a different packing material consisting of mixtures of eucalyptus bark, pine woodchips and coconut fiber, with added buffering agents and nutrients. The odour treatment efficiency was monitored over time, as well as other operating parameters. The operation at pilot scale suggested that, among the processes involved in biofiltration (adsorption, absorption and biodegradation), the first dominates at the beginning, while the biofilm is developing. 
When the biofilm is completely established, and the adsorption capacity of the material is reached, biodegradation becomes the most relevant odour removal mechanism. High odour and hydrogen sulphide removal efficiencies were achieved throughout the testing period (over 6 months), confirming the suitability of the materials selected, and mixtures thereof prepared, for biofiltration applications.
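The removal efficiencies monitored in the pilot study are conventionally computed from inlet and outlet pollutant concentrations, and a common companion design parameter is the empty bed residence time (EBRT). A minimal sketch, with hypothetical concentrations and airflow not taken from the study:

```python
def removal_efficiency_pct(c_in: float, c_out: float) -> float:
    """Percent removal of a pollutant (e.g. H2S) across the biofilter."""
    return 100.0 * (c_in - c_out) / c_in

def ebrt_seconds(bed_volume_m3: float, airflow_m3_per_h: float) -> float:
    """Empty bed residence time: packing volume over volumetric airflow."""
    return bed_volume_m3 * 3600.0 / airflow_m3_per_h

# Hypothetical operating point for a 1 m3 pilot biofilter.
print(removal_efficiency_pct(50.0, 2.5))  # 95.0 (% removal)
print(ebrt_seconds(1.0, 60.0))            # 60.0 (seconds)
```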

Keywords: ammonia and hydrogen sulphide removal, biofiltration, natural materials, odour control in wastewater treatment plants

Procedia PDF Downloads 299
956 Biosorption of Nickel by Penicillium simplicissimum SAU203 Isolated from Indian Metalliferous Mining Overburden

Authors: Suchhanda Ghosh, A. K. Paul

Abstract:

Nickel, an industrially important metal, is not mined in India due to the lack of primary mining resources. However, the chromite deposits occurring in the Sukinda and Baula-Nuasahi regions of Odisha, India, are reported to contain around 0.99% nickel entrapped in the goethite matrix of the lateritic iron-rich ore. Weathering of the dumped chromite mining overburden often leads to contamination of the ground as well as the surface water with toxic nickel. Microbes inherent to this metal-contaminated environment are reported to be capable of removal as well as detoxification of various metals, including nickel. Nickel-resistant fungal isolates obtained in pure form from the metal-rich overburden were evaluated for their potential to biosorb nickel using their dried biomass. Penicillium simplicissimum SAU203 was the best nickel biosorbent among the 20 fungi tested and was capable of sorbing 16.85 mg Ni/g biomass from a solution containing 50 mg/l of Ni. The identity of the isolate was confirmed using 18S rRNA gene analysis. The sorption capacity of the isolate was further characterized following the Langmuir and Freundlich adsorption isotherm models, and the results reflected energy-efficient sorption. Fourier-transform infrared spectroscopy studies comparing nickel-loaded and control biomass revealed the involvement of hydroxyl, amine and carboxylic groups in Ni binding. The sorption process was also optimized for several standard parameters, such as initial metal ion concentration, initial sorbent concentration, incubation temperature and pH, presence of additional cations, and pre-treatment of the biomass with different chemicals. Optimization led to significant improvements in nickel biosorption onto the fungal biomass. P. simplicissimum SAU203 could sorb 54.73 mg Ni/g biomass with an initial Ni concentration of 200 mg/l in solution, and 21.8 mg Ni/g biomass with an initial biomass concentration of 1 g/l of solution. 
The optimum temperature and pH for biosorption were recorded to be 30°C and 6.5, respectively. The presence of Zn and Fe ions improved the sorption of Ni(II), whereas cobalt had a negative impact. Pre-treatment of the biomass with various chemical and physical agents affected the proficiency of Ni sorption by P. simplicissimum SAU203 biomass: autoclaving, as well as treatment with 0.5 M sulfuric acid or acetic acid, reduced sorption compared to the untreated biomass, whereas biomass treated with NaOH, Na₂CO₃ or Tween 80 (0.5 M) showed augmented metal sorption. Hence, on the basis of the present study, it can be concluded that P. simplicissimum SAU203 has the potential for the removal as well as detoxification of nickel from contaminated environments in general, and particularly from the chromite mining areas of Odisha, India.
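The Langmuir and Freundlich models used above to characterize sorption capacity have standard closed forms. A minimal sketch of both follows; the parameter values are hypothetical, chosen only to illustrate the shape of each isotherm, and are not fitted to the SAU203 data:

```python
def langmuir_q(c_eq: float, q_max: float, b: float) -> float:
    """Langmuir isotherm: uptake q (mg/g) at equilibrium concentration c_eq (mg/l).

    q_max is the monolayer capacity; b is the affinity constant (l/mg).
    """
    return q_max * b * c_eq / (1.0 + b * c_eq)

def freundlich_q(c_eq: float, k_f: float, n: float) -> float:
    """Freundlich isotherm: q = K_f * C^(1/n) (empirical, multilayer sorption)."""
    return k_f * c_eq ** (1.0 / n)

# Hypothetical parameters: q_max = 60 mg/g, b = 0.02 l/mg, K_f = 2.0, n = 2.0.
print(langmuir_q(50.0, 60.0, 0.02))  # ~30.0 mg/g at 50 mg/l
print(freundlich_q(16.0, 2.0, 2.0))  # ~8.0 mg/g at 16 mg/l
```

In practice the parameters are obtained by fitting measured (c_eq, q) pairs, e.g. by non-linear least squares, and the better-fitting model indicates the dominant sorption mechanism.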

Keywords: nickel, fungal biosorption, Penicillium simplicissimum SAU203, Indian chromite mines, mining overburden

Procedia PDF Downloads 186
955 Analyzing Transit Network Design versus Urban Dispersion

Authors: Hugo Badia

Abstract:

This research answers which transit network structure is most suitable to serve specific demand requirements in an increasing urban dispersion process. Two main approaches to network design are found in the literature. On the one hand, a traditional answer, widespread in our cities, that develops a high number of lines to connect most origin-destination pairs by direct trips; an approach based on the idea that users are averse to transfers. On the other hand, some authors advocate an alternative design characterized by simple networks where transfers are essential to complete most trips. To answer which of them is the best option, we use a two-step methodology. First, by means of an analytical model, three basic network structures are compared: a radial scheme, the starting point for the other two structures; a direct trip-based network; and a transfer-based one, the latter two representing the alternative transit network designs. The model optimizes the network configuration with regard to the total cost for each structure. For a given scenario of dispersion, the best alternative is the structure with the minimum cost. This dispersion degree is defined in a simple way by considering that only a central area attracts all trips. If this area is small, we have a highly concentrated mobility pattern; if this area is very large, the city is highly decentralized. In this first step, we can determine the area of applicability for each structure as a function of that urban dispersion degree. The analytical results show that a radial structure is suitable when demand is highly centralized; however, when this demand starts to scatter, new transit lines should be implemented to avoid transfers. If urban dispersion advances further, the introduction of more lines is no longer a good alternative; in this case, the best solution is a change of structure, from direct trips to a network based on transfers. 
The area of applicability of each network strategy is not constant; it depends on the characteristics of demand, the city, and transport technology. In the second step, we translate the analytical results to a real case study through the relationship between the dispersion parameters of the model and direct measures of dispersion in a real city. Two dimensions of the urban sprawl process are considered: concentration, measured by the Gini coefficient, and centralization, measured by an area-based centralization index. Once the real dispersion degree is estimated, we are able to identify in which area of applicability the city is located. In summary, from a strategic point of view, this methodology allows us to determine the best network design approach for a city by comparing the theoretical results with the real dispersion degree.
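The Gini coefficient used above to measure concentration can be computed directly from per-zone counts (e.g. trip ends or population). A minimal sketch of the standard formula; the zone counts below are hypothetical:

```python
def gini(values):
    """Gini coefficient: 0 for a uniform distribution, approaching 1 for
    full concentration in a single zone. Uses the sorted-rank formula."""
    xs = sorted(values)
    n, total = len(xs), sum(xs)
    weighted = sum(i * x for i, x in enumerate(xs, start=1))
    return 2.0 * weighted / (n * total) - (n + 1.0) / n

print(gini([100, 100, 100, 100]))  # 0.0  -> demand evenly spread across zones
print(gini([0, 0, 0, 400]))        # 0.75 -> demand highly concentrated
```

A high Gini value would place a city toward the concentrated end of the model's applicability map, where the analysis favors radial or direct-trip structures.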

Keywords: analytical network design model, network structure, public transport, urban dispersion

Procedia PDF Downloads 227
954 Investigating Best Practice Energy Efficiency Policies and Programs, and Their Replication Potential for Residential Sector of Saudi Arabia

Authors: Habib Alshuwaikhat, Nahid Hossain

Abstract:

The residential sector consumes more than half of the electricity produced in Saudi Arabia, and fossil fuel is the main source of energy to meet the growing household electricity demand in the Kingdom. Several studies have forecasted, and expressed concern, that unless domestic energy demand growth is controlled, it will reduce Saudi Arabia’s crude oil export capacity within a decade, and the Kingdom is likely to be incapable of exporting crude oil within the next three decades. Though the Saudi government has started to address the domestic energy demand growth issue, the demand-side energy management policies and programs are focused on the industrial and commercial sectors. It is apparent that there is an urgent need to develop a comprehensive energy efficiency strategy addressing efficient energy use in the residential sector of the Kingdom. Then again, as Saudi Arabia is at an early stage in addressing energy efficiency issues in its residential sector, there is scope for the Kingdom to learn from global energy efficiency practices and design its own energy efficiency policies and programs. However, to do so sustainably, it is essential to address the local contexts of energy efficiency, and it is necessary to find the policies and programs that fit those local contexts. Thus, the objective of this study was to identify globally best practice energy efficiency policies and programs in the residential sector that have replication potential in Saudi Arabia. In this regard, two sets of multi-criteria decision analysis matrices were developed to evaluate the energy efficiency policies and programs. The first matrix was used to evaluate the global energy efficiency policies and programs, and the second was used to evaluate the replication potential of global best practice energy efficiency policies and programs for Saudi Arabia. 
The Wuppertal Institute’s guidelines for energy efficiency policy evaluation were used to develop the matrices, and the attributes of the matrices were set through a review of the available literature. The study reveals that the best practice energy efficiency policies and programs with good replication potential for Saudi Arabia are those which have multiple components addressing energy efficiency and are diversified in their characteristics. The study also indicates that the more diversified components a policy or program includes, the more replication potential it has for the Kingdom. This finding is consistent with other studies, where it is observed that successful energy efficiency practice requires introducing multiple policy components as a cluster rather than concentrating on a single policy measure. The developed multi-criteria decision analysis matrices for energy efficiency policy and program evaluation could be utilized to assess the replication potential of other globally best practice energy efficiency policies and programs for the residential sector of the Kingdom. In addition, they have the potential to guide Saudi policy makers in adopting and formulating energy efficiency policies and programs of their own.
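A multi-criteria decision analysis matrix of the kind described reduces, in its simplest weighted-sum form, to scoring each policy against each criterion and aggregating. A minimal sketch follows; the criteria, weights, policy names, and scores are all hypothetical and are not the Wuppertal Institute attributes used in the study:

```python
def weighted_score(scores, weights):
    """Weighted-sum MCDA score for one policy, normalized by total weight."""
    return sum(s * w for s, w in zip(scores, weights)) / sum(weights)

# Hypothetical criteria: [energy savings, cost effectiveness, replicability]
weights = [0.5, 0.3, 0.2]
policies = {
    "appliance labelling": [4, 5, 4],
    "building code":       [5, 3, 2],
}
ranked = sorted(policies, key=lambda p: weighted_score(policies[p], weights),
                reverse=True)
print(ranked[0])  # "appliance labelling" (score 4.3 vs 3.8)
```

Real MCDA frameworks add criterion normalization and sensitivity analysis of the weights; the weighted sum is only the aggregation core.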

Keywords: Saudi Arabia, residential sector, energy efficiency, policy evaluation

Procedia PDF Downloads 493
953 Management of Caverno-Venous Leakage: A Series of 133 Patients with Symptoms, Hemodynamic Workup, and Results of Surgery

Authors: Allaire Eric, Hauet Pascal, Floresco Jean, Beley Sebastien, Sussman Helene, Virag Ronald

Abstract:

Background: Caverno-venous leakage (CVL) is a devastating, although barely known, disease: it is the first cause of major physical impairment in men under 25 and is responsible for 50% of resistance to phosphodiesterase 5 inhibitors (PDE5-I), affecting 30 to 40% of users of this medication class. In this condition, too-early blood drainage from the corpora cavernosa prevents penile rigidity and penetration during sexual intercourse. The role of conservative surgery in this disease remains controversial. Aim: To assess the complications and results of combined open surgery and embolization for CVL. Method: Between June 2016 and September 2021, 133 consecutive patients underwent surgery in our institution for CVL causing severe erectile dysfunction (ED) resistant to oral medical treatment. Procedures combined vein embolization and ligation with microsurgical techniques. We performed pre- and post-operative clinical (Erection Hardness Scale: EHS) and hemodynamic evaluation by duplex sonography in all patients. Before surgery, the CVL network was visualized by computed tomography cavernography. Penile EMG was performed in case of diabetes or other suspected neurological conditions. All patients were optimized for hormonal status, and data were prospectively recorded. Results: Clinical signs suggesting CVL were ED since an age lower than 25, loss of erection when changing position, and penile rigidity varying according to position. The main complications were minor pulmonary embolism in 2 patients (one after airline travel, one with a heterozygous Factor V Leiden mutation), one infection, three hematomas requiring reoperation, and one decrease in glans sensitivity lasting for more than one year. Mean pre-operative pharmacologic EHS was 2.37+/-0.64 and mean post-operative pharmacologic EHS was 3.21+/-0.60, p<0.0001 (paired t-test). The mean EHS variation was 0.87+/-0.74. After surgery, 81.5% of patients had a pharmacologic EHS equal to or over 3, allowing for intercourse with penetration. 
Three patients (2.2%) experienced a lower post-operative EHS. The main cause of failure was leakage from the deep dorsal aspect of the corpora cavernosa. At 14 months of follow-up, 83.2% of patients had a clinical EHS equal to or over 3, allowing for sexual intercourse with penetration, one-third of them without any medication. Five patients had a penile implant after unsuccessful conservative surgery. Conclusion: Open surgery combined with embolization is an efficient approach to CVL causing severe erectile dysfunction.

Keywords: erectile dysfunction, cavernovenous leakage, surgery, embolization, treatment, result, complications, penile duplex sonography

Procedia PDF Downloads 143
952 Downward Vertical Evacuation for Disabilities People from Tsunami Using Escape Bunker Technology

Authors: Febrian Tegar Wicaksana, Niqmatul Kurniati, Surya Nandika

Abstract:

Indonesia is among the countries with the greatest number of disaster occurrences and threats because it is located not only at the junction of three tectonic plates (the Eurasian, Indo-Australian and Pacific plates) but also on the Ring of Fire, exposing it to earthquakes, tsunamis, volcanic eruptions and more. Recent research shows that there are potential areas on the southern coast of Java that could be devastated by a tsunami. A tsunami is a series of waves in a body of water caused by the displacement of a large volume of water, generally in an ocean. When the waves enter shallow water, they may rise to several feet or, in rare cases, tens of feet, striking the coast with devastating force. Reference parameters include the magnitude, the depth of the epicentre, the distance between the epicentre and land, the depth at each point as the wave approaches the shore, and the growth of the waves; the interaction between these parameters produces large variance in the tsunami wave. Based on this, we can formulate the preparations needed for disaster mitigation strategies. Mitigation strategies play an important role in efforts to reduce the number of victims and the damage in an affected area. Such reduction should be directed at the casualties who are most difficult to mobilize in a tsunami disaster area: the elderly, the sick, and people with disabilities. Until now, the method used for rescuing people from a tsunami has been basic horizontal evacuation. This evacuation system is not optimal because it takes a long time and cannot be used by people with disabilities. The writers propose a vertical evacuation model with an escape bunker system. A bunker system is chosen because downward vertical evacuation is considered more efficient and faster, especially in coastal areas without any surrounding highlands. 
A downward evacuation system is better than upward evacuation because it avoids the risk of erosion of the ground around the structure, which can affect the building. The structure of the bunker and the evacuation process during, and even after, the disaster are the main priorities to be considered. The bunker must be earthquake resistant, durable against water streams, able to withstand varied interactions with the ground, and waterproof in design. When the situation is back to normal, victims and casualties can move to a safer place. The bunker will be located near hospitals and public places, and will have a wide entrance supported by a large slide to ease access for people with disabilities. The escape bunker technology is expected to reduce the number of victims with low mobility in a tsunami.

Keywords: escape bunker, tsunami, vertical evacuation, mitigation, disaster management

Procedia PDF Downloads 490