Search results for: multiple linear regression
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3550

220 Recycling in Bogotá: A SWOT Analysis of Three Associations to Evaluate the Integration of the Informal Sector into Solid Waste Management

Authors: Clara Inés Pardo Martínez, William H. Alfonso Piña

Abstract:

In emerging economies, recycling is an opportunity for cities to extend the lifespan of sanitary landfills, reduce the costs of solid waste management, lessen the environmental problems of waste treatment by reincorporating waste into the productive cycle, and protect and develop the livelihoods of informal waste pickers. However, few studies have analysed the possibilities and strategies for integrating the formal and informal sectors in solid waste management for the benefit of both. This study carries out a strength, weakness, opportunity, and threat (SWOT) analysis of three recycling associations in Bogotá with the aim of understanding the situation of recycling from the perspective of the informal sector in its transition to becoming authorized waste providers. Data used in the analysis are derived from multiple sources, including a literature review, Bogotá's recycling database, focus group meetings, governmental reports, national laws and regulations, and interviews with key stakeholders. The results show how the main stakeholders of the formal and informal waste management sectors can identify the internal and external conditions of recycling in Bogotá. Several strategies designed on the basis of the SWOT factors identified could be useful for Bogotá in advancing and promoting recycling as a key strategy for integrated, sustainable waste management in the city.

Keywords: Bogotá, recycling, solid waste management, SWOT analysis.

219 A Study of the Built Environment Design Elements Embedded into the Multiple Criteria Strategic Planning Model for an Urban Renewal

Authors: Wann-Ming Wey

Abstract:

The link between urban planning and design principles and the built environment of an urban renewal area is of interest to the field of urban studies. During the past decade, there has been increasing interest in urban planning and design, motivated by the possibility that design policies associated with the built environment can be used to control, manage, and shape individual activity and behavior. However, direct assessments and design techniques for the links between urban planning and design policies and individual behavior are still rare in the field. Recent research efforts in urban design have focused on the idea that land use and design policies can be used to increase the quality of design projects for an urban renewal area's built environment. The development of appropriate design techniques for the built environment is an essential element of this research. Quality function deployment (QFD) is a powerful tool for improving alternative urban designs and quality for urban renewal areas, and for procuring a citizen-driven quality system. In this research, we propose an integrated framework based on QFD and an Analytic Network Process (ANP) approach to determine the Alternative Technical Requirements (ATRs) to be considered in designing an urban renewal planning and design alternative. We also identify the research designs and methodologies that can be used to evaluate the performance of urban built environment projects. An application to the evaluation of an urban renewal built environment planning and design project is presented to illustrate the proposed framework.
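
As an illustration of the priority-derivation step that ANP shares with AHP (not the authors' implementation), the sketch below computes eigenvector priority weights and a consistency ratio for a hypothetical 3x3 pairwise comparison matrix; a full ANP model would assemble such local priorities into a supermatrix.

```python
import numpy as np

# Hypothetical pairwise comparison matrix for three design criteria
# (values are illustrative, not taken from the paper).
A = np.array([
    [1.0,  3.0, 5.0],
    [1/3., 1.0, 2.0],
    [1/5., 1/2., 1.0],
])

# Principal eigenvector -> priority weights (Saaty's eigenvalue method).
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()

# Consistency check: CI = (lambda_max - n) / (n - 1), CR = CI / RI.
n = A.shape[0]
lambda_max = eigvals[k].real
ci = (lambda_max - n) / (n - 1)
ri = 0.58            # Saaty's random index for n = 3
cr = ci / ri

print("priority weights:", np.round(w, 3))
print("consistency ratio:", round(cr, 3))   # < 0.1 is usually considered acceptable
```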

Keywords: Analytic Network Process, Built Environment, Quality Function Deployment, Urban Design, Urban Renewal.

218 A Case Study on the Efficacy of Technical Laboratory Safety in Polytechnic

Authors: Zulhisyam Salleh, Erita M. Mazlan, Saiful A. Mazlan, Norzainariah A. Hassan, Fizatul A. Patakor

Abstract:

Technical laboratories are typically considered highly hazardous places in polytechnic institutions, given the problems of high incident and fatality rates. In conjunction with several topics covered in the technical curriculum, safety and health precautions should be highlighted in order to connect a few key ideas about being safe. Therefore, an assessment of safety awareness regarding hazards and risks in laboratories is needed and has to be incorporated into technical education and other training programmes. The purpose of this study was to determine the efficacy of technical laboratory safety in one of the polytechnics in the northern region. The study examined three related issues: the availability of safety material and equipment, the safety practices adopted by technical teachers, and administrators' attitudes in enforcing safety among the students. A model of technical laboratory safety efficacy was developed to test the linear relationship between existing safety material and equipment, teachers' safety practices and administrators' attitudes in enforcing safety, and to identify which of the technical laboratory safety issues was the most pertinent factor in realizing safety in the technical laboratory. This was done by analyzing survey-based data sets, particularly those obtained from a sample of 210 students in the polytechnic. The Pearson correlation was used to measure the association between the variables and to test the research hypotheses. The study found a significant correlation between existing safety material and equipment, the safety practices adopted by teachers and administrators' attitudes. There was also a significant relationship between technical laboratory safety and the safety practices adopted by teachers, and between technical laboratory safety and administrators' attitudes. Hence, teachers' safety practices and administrators' attitudes are vital in realizing technical laboratory safety.
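
A minimal sketch of the Pearson correlation test used above is given below; the variable names and synthetic scores are hypothetical stand-ins for the survey data.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

# Hypothetical Likert-style scores for 210 students (stand-ins for the survey data).
safety_equipment = rng.integers(1, 6, size=210).astype(float)
teacher_practice = np.clip(safety_equipment + rng.normal(0, 1.2, size=210), 1, 5)
lab_safety       = np.clip(0.5 * safety_equipment + 0.5 * teacher_practice
                           + rng.normal(0, 1.0, size=210), 1, 5)

# Pearson correlation coefficient and two-sided p-value for each hypothesized pair.
for name, x in [("equipment vs lab safety", safety_equipment),
                ("teacher practice vs lab safety", teacher_practice)]:
    r, p = pearsonr(x, lab_safety)
    print(f"{name}: r = {r:.2f}, p = {p:.4f}")
```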

Keywords: Polytechnic, Safety attitudes, Safety practices, Technical laboratory

217 Controlled Release of Glucosamine from Pluronic-Based Hydrogels for the Treatment of Osteoarthritis

Authors: Papon Thamvasupong, Kwanchanok Viravaidya-Pasuwat

Abstract:

Osteoarthritis affects many people worldwide. Local injection of glucosamine is one of the alternative treatments used to replenish the natural lubrication of cartilage. However, multiple injections can potentially lead to bacterial infection. Therefore, a drug delivery system is desired to reduce the frequency of injections. A hydrogel is one of the delivery systems that can control the release of drugs. Thermo-reversible hydrogels are especially beneficial for the local injection route because the formulation changes from liquid to gel after entering the human body. Once the gel is in the body, it slowly releases the drug in a controlled manner. In this study, various formulations of Pluronic-based hydrogels were synthesized for the controlled release of glucosamine. One of the challenges of the Pluronic controlled release system is its fast dissolution rate. To overcome this problem, alginate and calcium sulfate (CaSO4) were added to the polymer solution. The characteristics of the hydrogels were investigated, including the gelation temperature, gelation time, hydrogel dissolution and glucosamine release mechanism. Finally, a mathematical model of glucosamine release from the Pluronic-alginate-hyaluronic acid hydrogel was developed. Our results show that crosslinking the Pluronic gel with alginate did not significantly prolong the dissolution of the gel. Moreover, the gel dissolution profiles and the glucosamine release mechanisms were best described by a zero-order kinetic model, indicating that the release of glucosamine was primarily governed by gel dissolution.
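
Zero-order kinetics means the cumulative amount released grows linearly with time, Q(t) = k0·t. The sketch below fits such a model to a hypothetical release profile; the data points are illustrative, not the study's measurements.

```python
import numpy as np

# Hypothetical cumulative glucosamine release (% of loading) vs time (h).
t = np.array([0, 2, 4, 6, 8, 12, 24], dtype=float)
q = np.array([0, 9, 17, 26, 33, 51, 98], dtype=float)

# Zero-order model: Q(t) = k0 * t (release governed by gel dissolution).
# Least-squares estimate of k0 with the fit forced through the origin.
k0 = np.sum(t * q) / np.sum(t * t)

q_pred = k0 * t
ss_res = np.sum((q - q_pred) ** 2)
ss_tot = np.sum((q - q.mean()) ** 2)
r2 = 1 - ss_res / ss_tot

print(f"k0 = {k0:.2f} % per hour, R^2 = {r2:.3f}")
```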

Keywords: Controlled release, drug delivery system, glucosamine, Pluronic® F-127, thermoreversible hydrogel.

216 Liquidity Risk of Banks in Light of a Dominant Share of Foreign Capital in the Polish Banking Sector

Authors: Karolina Patora

Abstract:

This article investigates liquidity risk management by banks, which has gained significant importance since the global financial crisis of 2008. The issue is of particular interest for countries like Poland, in which foreign capital plays a dominant role. Such an ownership structure poses certain risks to the local banking sector, which faces an increased probability of the withdrawal of funding or asset transfers abroad in case of a crisis. Both of these factors can have a detrimental influence on the liquidity position of foreign-owned banks and hence negatively affect the financial stability of the whole banking sector. The aim of this study is to evaluate the impact of the dominant share of foreign investors in the Polish banking sector on the liquidity position of commercial banks. The study hypothesizes that the ownership structure of the Polish banking sector, in which banks are predominantly controlled by foreign investors, does not pose a threat to the liquidity position of Polish banks. A supplementary research hypothesis is that the liquidity risk profile of foreign-owned banks differs from that of domestic banks. The sample consists of 14 foreign-owned banks and 5 domestic banks owned by local investors, which together constitute approximately 87% of the banking sector's assets. The data cover the period 2004–2014. The results of the regression models show no evidence of significant differences in the dynamics of changes in liquidity buffers between foreign-owned and domestic banks, although the signs of the coefficients might suggest that the foreign-owned banks were decreasing their holdings of liquid assets at a slower pace over the examined period than the domestic banks. However, these differences were not found to be statistically significant. The supplementary research hypothesis that the liquidity risk profile of foreign-controlled banks differs from that of domestic banks was therefore rejected.
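
A minimal sketch of the kind of regression comparison described above, assuming a simple pooled OLS with a foreign-ownership dummy interacted with a time trend; the panel layout and variable names are hypothetical, since the abstract does not state the model specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Hypothetical bank-year panel, 2004-2014: liquidity buffer ratio per bank.
rows = []
for bank in range(19):                      # 14 foreign-owned + 5 domestic banks
    foreign = 1 if bank < 14 else 0
    for year in range(2004, 2015):
        t = year - 2004
        buffer = 0.30 - (0.004 if foreign else 0.006) * t + rng.normal(0, 0.02)
        rows.append({"bank": bank, "foreign": foreign, "t": t, "buffer": buffer})
df = pd.DataFrame(rows)

# Does the trend in liquidity buffers differ between foreign-owned and domestic banks?
model = smf.ols("buffer ~ t + foreign + t:foreign", data=df).fit()
print(model.params)
print(model.pvalues[["t", "t:foreign"]])    # the interaction term tests the difference in slopes
```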

Keywords: Financial stability, foreign-owned banks, liquidity position, liquidity risk.

215 Water Resources Crisis in Saudi Arabia, Challenges and Possible Management Options: An Analytic Review

Authors: A. A. Ghanim

Abstract:

The Kingdom of Saudi Arabia (KSA) is heading towards a severe and rapidly expanding water crisis, which can have negative impacts on the country's environment and economy. The agricultural sector accounts for nearly 87% of total water use in KSA and, therefore, any attempt that overlooks this sector will not help in improving the sustainability of the country's water resources. KSA Vision 2030 gives priority of water use in the agricultural sector to the regions that have natural renewable water resources. This means that there is little concern for reusing municipal wastewater for irrigation purposes in any region in general, and in water-scarce regions in particular. The use of treated wastewater is very limited in Saudi Arabia, but it has considerable potential for future expansion due to its numerous beneficial uses. This study reviews the current situation of water resources in Saudi Arabia, with particular emphasis on agriculture and wastewater reuse. The review proposes some corrective measures for the development and better management of water resources in the Kingdom. Suggestions also include consideration of treated water as an alternative source for irrigation in some regions of the country. The study concludes that a sustainable solution to the water crisis in KSA requires the implementation of multiple measures in an integrated manner. The integrated solution plan should focus on two main directions: first, improving the current management practices of the existing water resources; second, developing new water supplies from both conventional and non-conventional sources.

Keywords: Saudi Arabia, water resources, water crisis, treated wastewater.

214 Collapse Load Analysis of Reinforced Concrete Pile Group in Liquefying Soils under Lateral Loading

Authors: Pavan K. Emani, Shashank Kothari, V. S. Phanikanth

Abstract:

The ultimate load analysis of RC pile groups has gained considerable significance under liquefying soil conditions, especially following post-earthquake studies of the 1964 Niigata, 1995 Kobe and 2001 Bhuj earthquakes. The present study reports the results of numerical simulations on pile groups subjected to monotonically increasing lateral loads under design levels of pile axial loading. Soil liquefaction is considered through the non-linear p-y relationship of the soil springs, which can vary along the depth/length of the pile. This variation in turn is related to the liquefaction potential of the site and the magnitude of the seismic shaking. As the piles in the group can reach their extreme deflections and rotations during increased lateral loading, the inelastic behavior of the pile cross-section is modeled precisely, considering the complete stress-strain behavior of concrete, with and without confinement, and of the reinforcing steel, including the strain-hardening portion. The possibility of inelastic buckling of the individual piles is considered in the overall collapse modes. The model is analysed using the Riks method in finite element software to check the post-buckling behavior and plastic collapse of the piles. The results confirm the kinds of failure modes predicted by centrifuge test results reported by researchers on pile groups, although the pile material used is significantly different from that of the simulation model. The extension of the present work promises an important contribution to design codes for pile groups in liquefying soils.

Keywords: Collapse load analysis, inelastic buckling, liquefaction, pile group.

213 Implementing an Intuitive Reasoner with a Large Weather Database

Authors: Yung-Chien Sun, O. Grant Clark

Abstract:

In this paper, the implementation of a rule-based intuitive reasoner is presented. The implementation included two parts: the rule induction module and the intuitive reasoner. A large weather database was acquired as the data source. Twelve weather variables from those data were chosen as the "target variables" whose values were predicted by the intuitive reasoner. A "complex" situation was simulated by making only subsets of the data available to the rule induction module. As a result, the rules induced were based on incomplete information with variable levels of certainty. The certainty level was modeled by a metric called "Strength of Belief", which was assigned to each rule or datum as ancillary information about the confidence in its accuracy. Two techniques were employed to induce rules from the data subsets: decision trees and multi-polynomial regression, for the discrete and the continuous target variables respectively. The intuitive reasoner was tested for its ability to use the induced rules to predict the classes of the discrete target variables and the values of the continuous target variables. The intuitive reasoner implemented two types of reasoning, fast and broad, where, by analogy to human thought, the former corresponds to fast decision making and the latter to deeper contemplation. For reference, a weather data analysis approach which had been applied to similar tasks was adopted to analyze the complete database and create predictive models for the same 12 target variables. The values predicted by the intuitive reasoner and the reference approach were compared with actual data. The intuitive reasoner reached near-100% accuracy for two continuous target variables. For the discrete target variables, the intuitive reasoner predicted at least 70% as accurately as the reference reasoner. Since the intuitive reasoner operated on rules derived from only about 10% of the total data, it demonstrates potential advantages in dealing with sparse data sets compared with conventional methods.
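
The two rule-induction techniques named above are standard; the sketch below shows them side by side on hypothetical weather-like data (a decision tree for a discrete target, a polynomial fit for a continuous one). It illustrates the techniques only, not the paper's intuitive reasoner or its Strength-of-Belief bookkeeping.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(2)

# --- Discrete target: "rain tomorrow" predicted from humidity and pressure ---
X = np.column_stack([rng.uniform(20, 100, 500),      # relative humidity (%)
                     rng.uniform(980, 1040, 500)])   # pressure (hPa)
y = ((X[:, 0] > 70) & (X[:, 1] < 1005)).astype(int)  # hypothetical ground-truth rule
tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
print("decision tree training accuracy:", tree.score(X, y))

# --- Continuous target: daily maximum temperature from day-of-year ---
day = rng.uniform(0, 365, 500)
tmax = 15 + 10 * np.sin(2 * np.pi * (day - 80) / 365) + rng.normal(0, 1.5, 500)
coeffs = np.polyfit(day, tmax, deg=4)                 # polynomial regression
print("polynomial coefficients:", np.round(coeffs, 4))
```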

Keywords: Artificial intelligence, intuition, knowledge acquisition, limited certainty.

212 Application of Single Tuned Passive Filters in Distribution Networks at the Point of Common Coupling

Authors: M. Almutairi, S. Hadjiloucas

Abstract:

The harmonic distortion of voltage is an important power quality concern due to the interaction between widely deployed non-linear and time-varying single-phase and three-phase loads and power supply systems. Harmonic distortion levels can be reduced by improving the design of polluting loads or by applying corrective arrangements and adding filters. The application of passive filters is an effective solution for harmonic mitigation, mainly because these filters offer high efficiency and simplicity and are economical. Additionally, different frequency response characteristics can be exploited to achieve specific harmonic filtering targets. With these ideas in mind, the objective of this paper is to determine the size of single-tuned passive filter that works best in distribution networks in order to economically limit violations at a given point of common coupling (PCC). This article suggests that a single-tuned passive filter can be employed in typical industrial power systems. Furthermore, constrained optimization can be used to find the optimal sizing of the passive filter in order to reduce both harmonic voltages and harmonic currents in the power system to an acceptable level and, thus, improve the load power factor. The optimization technique minimizes the voltage total harmonic distortion (VTHD) and current total harmonic distortion (ITHD) while maintaining a given power factor within a specified range. According to IEEE Standard 519, both indices are viewed as constraints for the optimal passive filter design problem. The performance of this technique is discussed using numerical examples taken from previous publications.
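
As a concrete illustration of the sizing involved (standard textbook relations, not the paper's optimization code), the sketch below computes the capacitance, inductance and damping resistance of a single-tuned filter from an assumed reactive-power requirement, tuned harmonic order and quality factor.

```python
import math

def single_tuned_filter(v_ll, qc_var, h, f=50.0, quality=40.0):
    """Component values for a single-tuned passive harmonic filter.

    v_ll     line-to-line voltage (V)
    qc_var   reactive power supplied by the filter for PF correction (var)
    h        harmonic order the filter is tuned to (e.g. 5)
    f        fundamental frequency (Hz)
    quality  quality factor of the reactor (typically 30-100)
    """
    xc = v_ll ** 2 / qc_var            # capacitive reactance at the fundamental
    xl = xc / h ** 2                   # tuning condition: h^2 * XL = XC
    c = 1.0 / (2 * math.pi * f * xc)
    l = xl / (2 * math.pi * f)
    xn = h * xl                        # characteristic reactance at the tuned frequency
    r = xn / quality                   # damping resistance
    return c, l, r

# Hypothetical example: 400 V network, 50 kvar of compensation, 5th-harmonic filter.
c, l, r = single_tuned_filter(v_ll=400.0, qc_var=50e3, h=5)
print(f"C = {c*1e6:.1f} uF, L = {l*1e3:.3f} mH, R = {r:.3f} ohm")
```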

Keywords: Harmonics, passive filter, power factor, power quality.

211 Efficient Real-time Remote Data Propagation Mechanism for a Component-Based Approach to Distributed Manufacturing

Authors: V. Barot, S. McLeod, R. Harrison, A. A. West

Abstract:

Manufacturing industries face a crucial change as products and processes are required to be easily and efficiently reconfigurable and reusable. In order to stay competitive and flexible, circumstances also demand the global distribution of enterprises, which requires the implementation of efficient communication strategies. A prototype system called the "Broadcaster" has been developed under the assumption that the control environment description has been engineered using the component-based system paradigm. This prototype distributes information to a number of globally distributed partners via a circular-buffer-based data processing mechanism. The work highlighted in this paper includes the implementation of this mechanism in the domain of the manufacturing industry. The proposed solution enables real-time remote propagation of machine information to a number of distributed supply chain client resources, such as an HMI, VRML-based 3D views and remote client instances, regardless of their geographical distribution and/or their mechanisms. This approach is presented together with a set of evaluation results. The authors' main concern is the reliability and the performance of the adopted approach. Performance evaluation is carried out in terms of the response times taken to process the data in this domain, compared with an alternative data processing implementation such as a linear queue mechanism. Based on the evaluation results obtained, the authors justify the benefits achieved by the proposed implementation and highlight further research work to be carried out.
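
A minimal sketch of the circular (ring) buffer idea behind the Broadcaster's data-processing mechanism; this is a generic fixed-capacity buffer, not the actual prototype, and it overwrites the oldest entry when full so that clients always read the most recent machine data.

```python
class RingBuffer:
    """Fixed-capacity circular buffer that overwrites the oldest item when full."""

    def __init__(self, capacity):
        self._buf = [None] * capacity
        self._capacity = capacity
        self._head = 0        # index of the next write
        self._size = 0

    def push(self, item):
        self._buf[self._head] = item
        self._head = (self._head + 1) % self._capacity
        self._size = min(self._size + 1, self._capacity)

    def latest(self, n=1):
        """Return the n most recent items, newest first."""
        n = min(n, self._size)
        return [self._buf[(self._head - 1 - i) % self._capacity] for i in range(n)]

# Usage: machine status updates are pushed as they arrive; clients read the newest ones.
rb = RingBuffer(capacity=4)
for update in ["state=IDLE", "state=RUN", "temp=41C", "temp=43C", "state=STOP"]:
    rb.push(update)
print(rb.latest(3))   # ['state=STOP', 'temp=43C', 'temp=41C']
```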

Keywords: Broadcaster, circular buffer, Component-based, distributed manufacturing, remote data propagation.

210 Methane versus Carbon Dioxide: Mitigation Prospects

Authors: Alexander J. Severinsky, Allen L. Sessoms

Abstract:

Atmospheric carbon dioxide (CO2) has dominated the discussion around the causes of climate change. This is a reflection of the 100-year time horizon for all greenhouse gases that has become the norm. The 100-year time horizon is much too long, and yet almost all mitigation efforts, including those set in the near-term frame of within 30 years, are still geared toward it. In this paper, we show that for a 30-year time horizon, methane (CH4) is the greenhouse gas whose radiative forcing exceeds that of CO2. In our analysis, we use the radiative forcing of greenhouse gases in the atmosphere, because it directly affects the rise in temperature on Earth. We found that in 2019, the radiative forcing (RF) of methane was ~2.5 W/m2 and that of carbon dioxide was ~2.1 W/m2. Under a business-as-usual (BAU) scenario until 2050, such forcing would be ~2.8 W/m2 and ~3.1 W/m2, respectively. There is a substantial spread in the data for anthropogenic and natural methane (CH4) emissions, along with leakages of natural gas (which is primarily CH4) from industrial production through to consumption. For this reason, we estimate the minimum and maximum effects of a reduction of these leakages, and assume an effective immediate reduction of 80%. Such action may reduce the annual radiative forcing of all CH4 emissions by ~15% to ~30%. This translates into a reduction of RF by 2050 from ~2.8 W/m2 to ~2.5 W/m2 in the case of the minimum expected effect, and to ~2.15 W/m2 in the case of the maximum effort to reduce methane leakages. Under BAU, we find that the RF of CO2 will increase from ~2.1 W/m2 now to ~3.1 W/m2 by 2050. We assume a linear reduction of 50% in anthropogenic emissions over the course of the next 30 years, which would reduce the radiative forcing of CO2 from ~3.1 W/m2 to ~2.9 W/m2. In a "net zero" scenario, the remaining 50% reduction of anthropogenic CO2 emissions would have to come either from emission sources or directly from the atmosphere. In this instance, the total reduction would be from ~3.1 W/m2 to ~2.7 W/m2, or ~0.4 W/m2. To achieve the same radiative forcing as in the scenario of maximum reduction of methane leakages, ~2.15 W/m2, an additional reduction of CO2 radiative forcing of approximately 2.7 − 2.15 = 0.55 W/m2 would be required. In total, ~660 Gt of CO2 would need to be removed from the atmosphere to match the maximum reduction of current methane leakages, plus ~270 Gt of CO2 from emitting sources to reach "negative emissions". This amounts to over 900 Gt of CO2.
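
The comparison above can be reproduced directly from the numbers quoted in the abstract; the short block below only restates that bookkeeping, using the abstract's own figures.

```python
# Radiative forcing (RF) figures quoted in the abstract, in W/m^2.
rf_co2_bau_2050      = 3.1   # CO2 under business-as-usual by 2050
rf_co2_net_zero_2050 = 2.7   # CO2 after the "net zero" reductions described
rf_ch4_max_cut_2050  = 2.15  # CH4 under the maximum leakage-reduction case

# Extra CO2 forcing reduction needed to match the maximum CH4 leakage cut.
extra_rf_needed = rf_co2_net_zero_2050 - rf_ch4_max_cut_2050
print(f"additional CO2 forcing reduction: {extra_rf_needed:.2f} W/m^2")  # 0.55

# CO2 removals quoted in the abstract to achieve that (Gt CO2).
from_atmosphere = 660
from_sources = 270
print(f"total CO2 to remove: {from_atmosphere + from_sources} Gt")       # 930, i.e. over 900 Gt
```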

Keywords: Methane Leakages, Methane Radiative Forcing, Methane Mitigation, Methane Net Zero.

209 Production and Application of Organic Waste Compost for Urban Agriculture in Emerging Cities

Authors: Alemayehu Agizew Woldeamanuel, Mekonnen Maschal Tarekegn, Raj Mohan Balakrishina

Abstract:

Composting is one of the conventional techniques adopted for organic waste management, but the practice is very limited in emerging cities even though most of the waste generated is organic. This paper examines the viability of composting for organic waste management in the emerging city of Addis Ababa, Ethiopia, by addressing the composting practice, the quality of compost, and the application of compost in urban agriculture. The study collects data through laboratory testing of compost and a survey of urban farm households, and uses descriptive analysis of the state of compost production and application, physicochemical analysis of the compost samples, and regression analysis of urban farmers' willingness to pay for compost. The findings indicate that there is small-scale composting practice, most producers use unsorted feedstock materials, aerobic composting is dominant, and the maturation period ranges from four to 10 weeks. The carbon content of the compost ranges from 30.8 to 277.1, depending on the type of feedstock applied, and this exceeds the ideal proportions for the C:N ratio. The total nitrogen, pH, organic matter and moisture content are relatively optimal. The levels of heavy metals measured for Mn, Cu, Pb, Cd and Cr6+ in the compost samples are also insignificant. In the urban agriculture sector, chemical fertilizer is the dominant soil input in crop production, but vegetable producers use a combination of fertilizer and other organic inputs, including compost. The willingness to pay for compost depends on income, household size, gender, type of soil inputs, monitoring of soil fertility, the main product of the farm, the farming method and farm ownership. Finally, this study recommends collaboration among stakeholders along the waste value chain, awareness creation on the benefits of composting, and addressing the challenges faced by both compost producers and users.
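
The willingness-to-pay analysis above is a regression on household characteristics; since the abstract does not state the model form, the sketch below assumes a binary logit on hypothetical survey fields named after the determinants it lists.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 300

# Hypothetical urban-farm household survey (variable names are stand-ins).
df = pd.DataFrame({
    "income":         rng.lognormal(8, 0.5, n),
    "household_size": rng.integers(1, 9, n),
    "female_head":    rng.integers(0, 2, n),
    "uses_organic":   rng.integers(0, 2, n),
})
logit_p = -4 + 0.001 * df.income + 0.15 * df.household_size + 0.5 * df.uses_organic
df["wtp"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

# Binary logit: does the household express willingness to pay for compost?
model = smf.logit("wtp ~ income + household_size + female_head + uses_organic",
                  data=df).fit(disp=0)
print(model.summary().tables[1])
```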

Keywords: Composting, emerging city, organic waste management, urban agriculture.

208 Software Vulnerability Markets: Discoverers and Buyers

Authors: Abdullah M. Algarni, Yashwant K. Malaiya

Abstract:

Some of the key aspects of vulnerability—discovery, dissemination, and disclosure—have received some attention recently. However, the role of interaction among the vulnerability discoverers and vulnerability acquirers has not yet been adequately addressed. Our study suggests that a major percentage of discoverers, a majority in some cases, are unaffiliated with the software developers and thus are free to disseminate the vulnerabilities they discover in any way they like. As a result, multiple vulnerability markets have emerged. In some of these markets, the exchange is regulated, but in others, there is little or no regulation. In recent vulnerability discovery literature, the vulnerability discoverers have remained anonymous individuals. Although there has been an attempt to model the level of their efforts, information regarding their identities, modes of operation, and what they are doing with the discovered vulnerabilities has not been explored.

Reports of the buying and selling of vulnerabilities are now appearing in the press; however, the existence of such markets requires validation, and the nature of these markets needs to be analyzed. To address this need, we have attempted to collect detailed information. We have identified the most prolific vulnerability discoverers throughout the past decade and examined their motivation and methods. A large percentage of these discoverers are located in Eastern and Western Europe and in the Far East. We have contacted several of them in order to collect firsthand information regarding their techniques, motivations, and involvement in the vulnerability markets. We examine why many of the discoverers appear to retire after a highly successful vulnerability-finding career. The paper identifies the actual vulnerability markets, rather than the hypothetical ideal markets that are often examined. The emergence of worldwide government agencies as vulnerability buyers has significant implications. We discuss potential factors that can impact the risk to society and the need for detailed exploration.

Keywords: Risk management, software security, vulnerability discoverers, vulnerability markets.

207 Assessing the Suitability of South African Waste Foundry Sand as an Additive in Clay Masonry Products

Authors: Nthabiseng Portia Mahumapelo, Andre van Niekerk, Ndabenhle Sosibo, Nirdesh Singh

Abstract:

The foundry industry generates large quantities of solid waste in the form of waste foundry sand. The ever-increasing quantities of this type of industrial waste put pressure on landfill space, and its proper management has become a global concern. The South African foundry industry is no different when it comes to this solid waste generation. Utilizing foundry waste sand in other applications has become an attractive avenue for dealing with this waste stream. In the present paper, the suitability of foundry waste sand as an additive in clay masonry products was evaluated. Purchased clay was added to the foundry waste sand sample in a 50/50 ratio, and the mixture was named the FC sample. The FC sample was mixed with water in a pan mixer until the mixture was consistent and suitable for extrusion, and it was then extruded and cut into briquettes. Water absorption, shrinkage and modulus of rupture tests were conducted on the resultant briquettes. The foundry waste sand and FC samples were characterized mineralogically using X-ray diffraction, and the major and trace elements were determined using inductively coupled plasma optical emission spectroscopy. Adding purchased clay to the foundry waste sand positively influenced the workability of the test sample. Another positive characteristic was the low linear shrinkage, which indicated that products manufactured from the FC sample would not be susceptible to cracking. The water absorption values were acceptable, as were the unfired and fired strength values of the briquette samples. In conclusion, the tests showed that foundry waste sand can be used as an additive in masonry clay bricks, provided it is blended with good-quality clay.
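
The modulus of rupture test mentioned above is a three-point bending test; a minimal sketch of the standard formula MOR = 3FL/(2bd^2) is given below with hypothetical briquette dimensions and breaking load, not the study's measurements.

```python
def modulus_of_rupture(load_n, span_mm, width_mm, depth_mm):
    """Three-point bending modulus of rupture, MOR = 3FL / (2 b d^2), in MPa."""
    return 3.0 * load_n * span_mm / (2.0 * width_mm * depth_mm ** 2)

# Hypothetical fired briquette: 100 mm span, 30 mm wide, 15 mm deep, breaking at 450 N.
print(f"MOR = {modulus_of_rupture(450, 100, 30, 15):.2f} MPa")
```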

Keywords: Foundry waste sand, masonry clay bricks, modulus of rupture, shrinkage.

206 Sustainability Assessment of a Deconstructed Residential House

Authors: Atiq U. Zaman, Juliet Arnott

Abstract:

This paper analyses the benefits and barriers of residential deconstruction in the context of environmental performance and the circular economy, based on a case study project in Christchurch, New Zealand. The case study project, "Whole House Deconstruction", aimed, firstly, to harvest materials from a residential house; secondly, to produce new products using the recovered materials; and thirdly, to organize an exhibition for the local public to promote awareness of resource conservation and sustainable deconstruction practices. Through a systematic deconstruction process, the project recovered around 12 tonnes of various construction materials, most of which would otherwise have been disposed of to landfill under the traditional demolition approach. It is estimated that the deconstruction of a similar residential house could potentially prevent around 27,029 kg of carbon emissions by recovering and reusing the building materials. In addition, the project involved local designers who produced 400 artefacts using the recovered materials and exhibited them to accelerate public awareness. The findings from this study suggest that the deconstruction project had significant environmental benefits, as well as social benefits from involving the local community and unemployed youth as part of their professional skills development opportunities. However, the project faced a number of economic and institutional challenges. The study concludes that, with proper economic models and appropriate institutional support, a significant amount of construction and demolition waste can be reduced through a systematic deconstruction process. Traditionally, the greatest benefits from such projects are often ignored and remain unreported to wider audiences, as most of the external and environmental costs are not considered in the traditional linear economy.

Keywords: Circular economy, construction and demolition waste, resource recovery, systematic deconstruction, sustainable waste management.

205 Rank-Based Chain-Mode Ensemble for Binary Classification

Authors: Chongya Song, Kang Yen, Alexander Pons, Jin Liu

Abstract:

In the field of machine learning, ensembles have been employed as a common methodology to improve performance over multiple base classifiers. However, true predictions are often canceled out by false ones during consensus due to a phenomenon called the "curse of correlation", which manifests as strong interference among the predictions produced by the base classifiers. In addition, existing practices are still not able to effectively mitigate the problem of imbalanced classification. Based on the analysis of our experimental results, we conclude that the two problems are caused by inherent deficiencies in the consensus approach. Therefore, we create an enhanced ensemble algorithm which adopts a designed rank-based chain-mode consensus to overcome the two problems. In order to evaluate the proposed ensemble algorithm, we employ the well-known benchmark data set NSL-KDD (the improved version of the KDDCup99 dataset produced by the University of New Brunswick) to make comparisons between the proposed algorithm and 8 common ensemble algorithms. In particular, each compared ensemble classifier uses the same 22 base classifiers, so that the differences in the improvements in accuracy and reliability over the base classifiers can be truly revealed. As a result, the proposed rank-based chain-mode consensus is shown to be a more effective ensemble solution than the traditional consensus approach, outperforming the 8 ensemble algorithms by 20% on almost all compared metrics, which include accuracy, precision, recall, F1-score and area under the receiver operating characteristic curve.
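
The comparison metrics listed above are standard; the sketch below shows how they are computed for a single classifier on hypothetical labels and scores. It is not the proposed rank-based chain-mode consensus itself.

```python
import numpy as np
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)

rng = np.random.default_rng(4)

# Hypothetical imbalanced binary problem (stand-in for NSL-KDD labels).
y_true = (rng.random(1000) < 0.2).astype(int)
scores = np.clip(0.6 * y_true + rng.normal(0.2, 0.25, 1000), 0, 1)  # classifier scores
y_pred = (scores >= 0.5).astype(int)

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall   :", recall_score(y_true, y_pred))
print("F1-score :", f1_score(y_true, y_pred))
print("ROC AUC  :", roc_auc_score(y_true, scores))
```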

Keywords: Consensus, curse of correlation, imbalanced classification, rank-based chain-mode ensemble.

204 Cross Signal Identification for PSG Applications

Authors: Carmen Grigoraş, Victor Grigoraş, Daniela Boişteanu

Abstract:

The standard investigational method for the diagnosis of obstructive sleep apnea syndrome (OSAS) is polysomnography (PSG), which consists of a simultaneous, usually overnight, recording of multiple electro-physiological signals related to sleep and wakefulness. This is an expensive, cumbersome and not readily repeated protocol, and therefore there is a need for simpler and more easily implemented screening and detection techniques. Identification of apnea/hypopnea events in the screening recordings is the key factor for the diagnosis of OSAS. The analysis of a single-lead electrocardiographic (ECG) signal for OSAS diagnosis, which may be done with portable devices at the patient's home, has been the challenge of recent years. A novel artificial neural network (ANN) based approach for feature extraction and automatic identification of respiratory events in ECG signals is presented in this paper. A nonlinear principal component analysis (NLPCA) method was considered for feature extraction and a support vector machine for classification/recognition. An alternative representation of the respiratory events by means of a Kohonen-type neural network is discussed. Our prospective study was based on OSAS patients of the Clinical Hospital of Pneumology in Iaşi, Romania, both males and females, as well as on non-OSAS human subjects. Our computational analysis includes a learning phase based on cross-signal PSG annotation.

Keywords: Artificial neural networks, feature extraction, obstructive sleep apnea syndrome, pattern recognition, signal processing.

203 Synthesis of Highly Sensitive Molecular Imprinted Sensor for Selective Determination of Doxycycline in Honey Samples

Authors: Nadia El Alami El Hassani, Soukaina Motia, Benachir Bouchikhi, Nezha El Bari

Abstract:

Doxycycline (DXy) is a cycline antibiotic, most frequently prescribed to treat bacterial infections in veterinary medicine. However, its broad antimicrobial activity and low cost lead to intensive use, which can seriously affect human health; therefore, its spread in food products has to be monitored. The scope of this work was to synthesize a sensitive and very selective molecularly imprinted polymer (MIP) for DXy detection in honey samples. Firstly, the synthesis of this biosensor was performed by casting a layer of carboxylated polyvinyl chloride (PVC-COOH) on the working surface of a gold screen-printed electrode (Au-SPE) in order to bind the analyte covalently under mild conditions. Secondly, DXy as a template molecule was bound to the activated carboxylic groups, and the MIP was formed from a biocompatible polymer by means of a polyacrylamide matrix. DXy was then detected by differential pulse voltammetry (DPV) measurements. A non-imprinted polymer (NIP), prepared in the same conditions but without the template molecule, was also produced. We observed that the elaborated biosensor exhibits high sensitivity and a linear relationship between the measured current and the logarithm of DXy concentration from 0.1 pg.mL−1 to 1000 pg.mL−1. This technique was successfully applied to determine DXy residues in honey samples with a limit of detection (LOD) of 0.1 pg.mL−1 and excellent selectivity when compared with the results for oxytetracycline (OXy) as an analogous interfering compound. The proposed method is cheap, sensitive, selective and simple, and it was applied successfully to detect DXy in honey with recoveries of 87% and 95%. Considering these advantages, this system provides a further perspective for food quality control in industrial fields.

Keywords: Electrochemical sensor, molecular imprinted polymer, doxycycline, food control.

202 ANN based Multi Classifier System for Prediction of High Energy Shower Primary Energy and Core Location

Authors: Gitanjali Devi, Kandarpa Kumar Sarma, Pranayee Datta, Anjana Kakoti Mahanta

Abstract:

Cosmic showers, during their transit through space, produce sub-products as a result of interactions with the intergalactic or interstellar medium which, after entering the Earth's atmosphere, generate secondary particles called Extensive Air Showers (EAS). Detection and analysis of high energy particle showers involve a plethora of theoretical and experimental work with a host of constraints resulting in inaccuracies in measurements. Therefore, there is a need to develop a readily available system based on soft-computational approaches which can be used for EAS analysis. This is due to the fact that soft-computational tools such as Artificial Neural Networks (ANNs) can be trained as classifiers to adapt to and learn the surrounding variations. However, single classifiers fail to reach optimal decision making in many situations, for which Multiple Classifier Systems (MCSs) are preferred to enhance the ability of the system to make decisions adjusting to finer variations. This work describes the formation of an MCS using a Multi Layer Perceptron (MLP), a Recurrent Neural Network (RNN) and a Probabilistic Neural Network (PNN), with data inputs from correlation-mapping Self Organizing Map (SOM) blocks and the output optimized by another SOM. The results show that the setup can be adopted for real-time practical applications for the prediction of the primary energy and location of EAS from density values captured using detectors in a circular grid.

Keywords: EAS, Shower, Core, ANN, Location.

201 Bacteriological Screening and Antibiotic – Heavy Metal Resistance Profile of the Bacteria Isolated from Some Amphibian and Reptile Species of the Biga Stream in Turkey

Authors: Nurcihan Hacioglu, Cigdem Gul, Murat Tosunoglu

Abstract:

In this article, the antibiogram and heavy metal resistance profiles of the bacteria isolated from a total of 34 studied animals (Pelophylax ridibundus = 12; Mauremys rivulata = 14; Natrix natrix = 8) captured around the Biga Stream are described. There was no database information on the antibiogram and heavy metal resistance profiles of bacteria from this area's amphibians and reptiles. A total of 200 bacteria were successfully isolated from cloacal and oral samples of the aquatic amphibians and reptiles, as well as from a water sample. According to Jaccard's similarity index, the degree of similarity in the bacterial flora was quite high among the amphibian and reptile species under examination, whereas it differed from the bacterial diversity in the water sample. The most frequent isolates were A. hydrophila (31.5%), B. pseudomallei (8.5%), and C. freundii (7%). The total numbers of bacteria obtained were as follows: 45 in P. ridibundus, 45 in N. natrix, 30 in M. rivulata, and 80 in the water sample. The results showed that cefmetazole was the most effective antibiotic for controlling the bacteria isolated in this study and that approximately 93.33% of the bacterial isolates were sensitive to this antibiotic. The multiple antibiotic resistance (MAR) index indicated that P. ridibundus (0.95) > N. natrix (0.89) > M. rivulata (0.39). Furthermore, all the tested heavy metals (Pb2+, Cu2+, Cr3+, and Mn2+) inhibited the growth of the bacterial isolates at different rates. Therefore, this indicates that the water source of the animals was contaminated with both antibiotic residues and heavy metals.
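
The multiple antibiotic resistance (MAR) index reported above is conventionally computed as a/(b·c), where a is the number of resistant results, b the number of antibiotics tested and c the number of isolates. The sketch below applies it with the isolate counts from the abstract; the antibiotic count and resistance tallies are hypothetical, chosen only to reproduce the quoted MAR values.

```python
def mar_index(resistant_results, antibiotics_tested, isolates):
    """Multiple antibiotic resistance index: a / (b * c) for a group of isolates."""
    return resistant_results / (antibiotics_tested * isolates)

# (total resistant results, antibiotics tested per isolate, number of isolates)
# Isolate counts follow the abstract; the other counts are illustrative.
groups = {
    "P. ridibundus": (342, 8, 45),
    "N. natrix":     (320, 8, 45),
    "M. rivulata":   (94,  8, 30),
}
for host, (a, b, c) in groups.items():
    print(f"{host}: MAR = {mar_index(a, b, c):.2f}")
```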

Keywords: Amphibian, Bacteriological Quality, Reptile, Antibiotic & Heavy Metal Resistance.

200 Reduction of False Positives in Head-Shoulder Detection Based on Multi-Part Color Segmentation

Authors: Lae-Jeong Park

Abstract:

The paper presents a method that utilizes figure-ground color segmentation to extract an effective global feature for false positive reduction in head-shoulder detection. Conventional detectors that rely on local features such as HOG, chosen for real-time operation, suffer from false positives. The color cue in an input image provides salient information on a global characteristic, which is necessary to alleviate the false positives of local-feature-based detectors. An effective approach that uses figure-ground color segmentation has previously been presented in an effort to reduce false positives in object detection. In this paper, an extended version of that approach is presented which adopts separate multipart foregrounds instead of a single prior foreground and performs figure-ground color segmentation with each of the foregrounds. The multipart foregrounds include the parts of the head-shoulder shape and additional auxiliary foregrounds optimized by a search algorithm. A classifier is constructed with a feature that consists of the set of the resulting segmentations. Experimental results show that the presented method can reject more false positives than the single prior shape-based classifier as well as detectors with local features. The improvement is possible because the presented approach can reduce the false positives that have the same colors in the head and shoulder foregrounds.

Keywords: Pedestrian detection, color segmentation, false positives, feature extraction.

199 A State Aggregation Approach to Singularly Perturbed Markov Reward Processes

Authors: Dali Zhang, Baoqun Yin, Hongsheng Xi

Abstract:

In this paper, we propose a single-sample-path-based algorithm with state aggregation to optimize the average rewards of singularly perturbed Markov reward processes (SPMRPs) with large-scale state spaces. It is assumed that such a reward process depends on a set of parameters. Differing from other kinds of Markov chains, SPMRPs have their own hierarchical structure. Based on this special structure, our algorithm can alleviate the computational load of the performance optimization. Moreover, our method can be applied on line because it evolves with the simulated sample path. Compared with the original algorithms applied to these problems for general MRPs, a new gradient formula for the average reward performance metric in SPMRPs is introduced, which is proved in the Appendix; based on these gradients, the schedule of the iterative algorithm, which relies on a single sample path, is then presented. Eventually, a special case in which the parameters only dominate the disturbance matrices is analyzed, and a precise comparison is made between our algorithm and earlier ones aimed at solving these problems in general Markov reward processes. When applied to SPMRPs, our method converges at a faster pace in these cases. Furthermore, to illustrate the practical value of SPMRPs, a simple example of multiprogramming in computer systems is described and simulated. Corresponding to this practical model, the physical meaning of SPMRPs in networks of queues is clarified.

Keywords: Singularly perturbed Markov processes, Gradient of average reward, Differential reward, State aggregation, Perturbed closed network.

198 Layer-by-Layer Deposition of Poly(Ethylene Imine) Nanolayers on Polypropylene Nonwoven Fabric: Electrostatic and Thermal Properties

Authors: Dawid Stawski, Silviya Halacheva, Dorota Zielińska

Abstract:

The surface properties of many materials can be readily and predictably modified by the controlled deposition of thin layers containing appropriate functional groups, and this research area is now a subject of widespread interest. The layer-by-layer (lbl) method involves depositing oppositely charged layers of polyelectrolytes onto the substrate material, which are stabilized by strong electrostatic forces between adjacent layers. This type of modification affords products that combine the properties of the original material with the superficial parameters of the new external layers. Through an appropriate selection of the deposited layers, the surface properties can be precisely controlled and readily adjusted in order to meet the requirements of the intended application. In the presented paper, a variety of anionic (poly(acrylic acid)) and cationic (linear poly(ethylene imine)) polymers were successfully deposited onto the polypropylene nonwoven using the lbl technique. The chemical structure of the surface before and after modification was confirmed by reflectance FTIR spectroscopy, volumetric analysis and selective dyeing tests. As a direct result of this work, new materials with greatly improved properties have been produced. For example, following the modification process, significant changes in the electrostatic activity of a range of novel nanocomposite materials were observed. The deposition of polyelectrolyte nanolayers was found to strongly accelerate the loss of electrostatically generated charges and to considerably increase the thermal resistance of the modified fabric (the difference in T50% is over 20 °C). From our results, a clear relationship was identified between the type of polyelectrolyte layer deposited onto the flat fabric surface and the properties of the modified fabric.

Keywords: Layer-by-layer technique, polypropylene nonwoven, surface modification, surface properties.

197 An Effect of Organic Supplements on Stimulating Growth of Dendrobium Protocorms and Seedlings

Authors: Sunthari Tharapan, Chockpisit Thepsithar, Kullanart Obsuwan

Abstract:

This study aimed to investigate the effect of various organic supplements on the growth and development of Dendrobium discolor protocorms and on the seedling growth of Dendrobium Judy Rutz. Protocorms of Dendrobium discolor of 2.0 cm in diameter and seedlings of Dendrobium Judy Rutz of the same size (0.5 cm in height) were sub-cultured on Hyponex medium supplemented with cow milk (CM), soy milk (SM), potato extract (PE) and peptone (P) for 2 months. The protocorms developed into seedlings in all treatments after 2 months of culture. However, the best results were found on the Hyponex medium supplemented with P, in which the maximum fresh and dry weight and the maximum shoot height were obtained, statistically different (p ≤ 0.05) from the other treatments. Moreover, the Hyponex medium supplemented with P also stimulated the maximum mean number of 5.7 shoots per explant, which was also statistically different (p ≤ 0.05) from the other treatments. The results for the growth of Dendrobium Judy Rutz seedlings indicated that the medium supplemented with 100 mL/L PE produced the maximum fresh and dry weight per explant, with the fresh weight significantly different (p ≤ 0.05) from that of the other treatments, including the control medium without any organic supplementation. However, the dry weight was not significantly different (p ≤ 0.05) from that on media supplemented with SM and P. There was multiple shoot induction in all media, with or without organic supplementation, ranging from 2.6 to 3 shoots per explant. The maximum shoot height was also obtained in the seedlings cultured on the medium supplemented with PE, while the longest root length was found on the medium supplemented with SM.
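
Comparisons at p ≤ 0.05 across supplement treatments are typically made with a one-way ANOVA; the sketch below runs such a test on hypothetical fresh-weight data and is not the study's dataset or necessarily its exact statistical procedure.

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(5)

# Hypothetical seedling fresh weights (g) under four organic supplements.
control    = rng.normal(0.80, 0.10, 12)
cow_milk   = rng.normal(0.95, 0.10, 12)
potato_ext = rng.normal(1.20, 0.12, 12)
peptone    = rng.normal(1.10, 0.12, 12)

f_stat, p_value = f_oneway(control, cow_milk, potato_ext, peptone)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")   # p <= 0.05 -> treatment means differ
```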

Keywords: Fresh weight, in vitro propagation, orchid, plant height.

196 Necessary Condition to Utilize Adaptive Control in Wind Turbine Systems to Improve Power System Stability

Authors: Javad Taherahmadi, Mohammad Jafarian, Mohammad Naser Asefi

Abstract:

The global capacity of wind power has dramatically increased in recent years. Therefore, improving wind turbine technology to take advantage of this enormous potential in the power grid is an interesting subject for scientists. The doubly-fed induction generator (DFIG) wind turbine is a popular system due to its many advantages, such as improved power quality, high energy efficiency and controllability. With the increase in wind power penetration in the network, and in view of the flexible control of wind turbines, the use of wind turbine systems to improve the dynamic stability of power systems has become of significant importance to researchers. Subsynchronous oscillations are one of the important issues in the stability of power systems. Damping subsynchronous oscillations by using wind turbines has been studied in various research efforts, mainly by adding an auxiliary control loop to the control structure of the wind turbine. In most of the studies, this control loop is composed of linear blocks. In this paper, simple adaptive control is used for this purpose. In order to use an adaptive controller, the convergence of the controller should be verified. Since the adaptive control parameters tend toward their optimum values in order to obtain optimum control performance, using this controller will help wind turbines make a positive contribution to damping the network's subsynchronous oscillations at different wind speeds and system operating points. In this paper, the application of simple adaptive control in DFIG wind turbine systems to improve the dynamic stability of power systems is studied and the essential condition for using this controller is considered. It is also shown that this controller has an insignificant effect on the dynamic stability of the wind turbine itself.
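
Simple adaptive control drives a plant output to follow a reference model using gains adapted from the tracking error. The toy first-order simulation below illustrates the basic gain-update law on an almost strictly positive real plant; it is a generic sketch, not the DFIG damping controller studied in the paper.

```python
import numpy as np

# Toy first-order, almost-strictly-positive-real plant:  y' = -a*y + b*u,  b > 0.
a, b = 1.0, 2.0
# Reference model (the behaviour we want the plant output to track): ym' = -am*ym + am*um.
am = 4.0

dt, T = 0.001, 10.0
steps = int(T / dt)
gamma = np.diag([50.0, 10.0, 10.0])      # adaptation gains

y, ym = 0.0, 0.0
K = np.zeros(3)                          # adaptive gains [Ke, Kx, Ku]
for k in range(steps):
    t = k * dt
    um = 1.0 if t < 5.0 else -1.0        # square-wave command
    ey = ym - y                          # tracking error
    r = np.array([ey, ym, um])
    u = K @ r                            # control law: u = Ke*ey + Kx*ym + Ku*um
    K += dt * ey * (gamma @ r)           # integral gain adaptation: K' = ey * r^T * Gamma
    y  += dt * (-a * y + b * u)          # Euler integration of the plant
    ym += dt * (-am * ym + am * um)      # Euler integration of the reference model

print("final gains:", np.round(K, 3), " final tracking error:", round(ym - y, 4))
```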

Keywords: Almost strictly positive real, doubly-fed induction generator, simple adaptive control, subsynchronous oscillations, wind turbine.

195 Investigation of the Properties of Epoxy Modified Binders Based on Epoxy Oligomer with Improved Deformation and Strength Properties

Authors: Hlaing Zaw Oo, N. Kostromina, V. Osipchik, T. Kravchenko, K. Yakovleva

Abstract:

The modification of ED-20 epoxy resin with vinyl-containing compounds is considered. It is shown that the introduction of vinyl-containing compounds into a composition based on ED-20 epoxy resin allows the technological and operational characteristics of the binder to be adjusted. To improve the properties of the epoxy resin, the following modifiers were selected: polyvinylformalethyl, polyvinyl butyral, and a composition of linear and aromatic amines (Aramine) as a hardener. A wide range of epoxy resin hardeners now exists, which allows the technological properties of compositions, as well as their thermophysical and strength characteristics, to be varied. The nature of the Aramine-type hardener has a significant impact on the spatial parameters of the network, the glass transition temperature, and the strength characteristics. Epoxy composite materials based on ED-20 modified with polyvinyl butyral were obtained and investigated. It is shown that compositions of resins based on polyvinyl butyral derivatives and ED-20 yield composite materials with a better complex of deformation-strength, adhesion and thermal properties, as well as better water resistance, frost resistance, chemical resistance, and impact strength. The magnitude of the effect depends on the chemical structure, temperature and curing time. In the range of concentrations where the effect of composite synergy appears, the values of strength and stiffness significantly exceed the corresponding parameters of the individual components of the mixture. Polymer-polymer compositions form their own class of materials with diverse specific properties that ensure their competitive application. Coatings with high performance under cyclic loading have been obtained based on epoxy oligomers modified with vinyl-containing compounds.

Keywords: Epoxy resins, modification, vinyl-containing compounds, deformation and strength properties.

194 A Large Ion Collider Experiment (ALICE) Diffractive Detector Control System for RUN-II at the Large Hadron Collider

Authors: J. C. Cabanillas-Noris, M. I. Martínez-Hernández, I. León-Monzón

Abstract:

The selection of diffractive events in the ALICE experiment during the first data-taking period (RUN-I) of the Large Hadron Collider (LHC) was limited by the range over which rapidity gaps occur. It would be possible to achieve better measurements by expanding the range in which the production of particles can be detected. For this purpose, the ALICE Diffractive (AD0) detector has been installed and commissioned for the second phase (RUN-II). Any new detector should be able to take data synchronously with all other detectors and be operated through the ALICE central systems. One of the key elements that must be developed for the AD0 detector is the Detector Control System (DCS). The DCS must be designed to operate this detector safely and correctly. Furthermore, the DCS must also provide optimum operating conditions for the acquisition and storage of physics data and ensure that these are of the highest quality. The operation of AD0 implies the configuration of about 200 parameters, from electronics settings and power supply levels to the archiving of operating-condition data and the generation of safety alerts. It also includes the automation of procedures to get the AD0 detector ready for taking data in the appropriate conditions for the different run types in ALICE. The performance of the AD0 detector depends on a number of parameters, such as the nominal voltages for each photomultiplier tube (PMT), their threshold levels for accepting or rejecting incoming pulses, the definition of triggers, etc. All these parameters define the efficiency of AD0 and they have to be monitored and controlled through the AD0 DCS. Finally, the AD0 DCS provides the operator with multiple interfaces to execute these tasks, realized as operating panels and scripts running in the background. These features are implemented on a SCADA software platform as a distributed control system which is integrated into the global control system of the ALICE experiment.

Keywords: AD0, ALICE, DCS, LHC.

193 To Cloudify or Not to Cloudify

Authors: Laila Yasir Al-Harthy, Ali H. Al-Badi

Abstract:

As an emerging business model, cloud computing has been introduced to satisfy the needs of organizations and to promote Information Technology as a utility. The shift to the cloud has changed the way Information Technology departments are traditionally managed and has raised many concerns for both the public and private sectors.

The purpose of this study is to investigate the possibility of cloud computing services replacing services provided traditionally by IT departments. Therefore, it aims to 1) explore whether organizations in Oman are ready to move to the cloud; 2) identify the deciding factors leading to the adoption or rejection of cloud computing services in Oman; and 3) provide two case studies, one for a successful Cloud provider and another for a successful adopter.

This paper is based on multiple research methods including conducting a set of interviews with cloud service providers and current cloud users in Oman; and collecting data using questionnaires from experts in the field and potential users of cloud services.

Despite the limitation of bandwidth capacity and Internet coverage offered in Oman that create a challenge in adopting the cloud, it was found that many information technology professionals are encouraged to move to the cloud while few are resistant to change.

The recent launch of a new Omani cloud service provider and the entrance of other international cloud service providers in the Omani market make this research extremely valuable as it aims to provide real-life experience as well as two case studies on the successful provision of cloud services and the successful adoption of these services.

Keywords: Cloud computing, cloud deployment models, cloud service models and deciding factors.

192 Current Status and Future Trends of Mechanized Fruit Thinning Devices and Sensor Technology

Authors: Marco Lopes, Pedro D. Gaspar, Maria P. Simões

Abstract:

This paper reviews the different concepts that have been investigated concerning the mechanization of fruit thinning, as well as the multiple working principles and solutions that have been developed for feature extraction of horticultural products, both in the field and in industrial environments. Research should be directed towards selective methods, which inevitably need to incorporate some kind of sensor technology. Computer vision often comes out as an obvious solution for unstructured detection problems, although leaves frequently occlude fruits regardless of the chosen point of view. Further research on non-traditional sensors that are capable of object differentiation is needed. Ultrasonic and Near Infrared (NIR) technologies have been investigated for applications related to horticultural produce and show potential to satisfy this need while simultaneously providing spatial information as time-of-flight sensors. Light Detection and Ranging (LIDAR) technology also shows huge potential, but it implies much greater costs and the related equipment is usually much larger, making it less suitable for portable devices, which may serve a purpose on smaller unstructured orchards. Concerning sensor methods for on-tree fruit detection, the major challenge is to overcome the occlusion of fruits by leaves and branches. Hence, non-traditional sensors capable of providing some type of differentiation should be investigated.

Keywords: Fruit thinning, horticultural field, portable devices, sensor technologies.

191 Relationship between Mental Health and Food Access among Healthcare College Students in a Snowy Area in Japan

Authors: Yuki Irie, Shota Ogawa, Hitomi Kosugi, Hiromitsu Shinozaki

Abstract:

Dropout rates in higher educational institutions pose significant challenges for both students and institutions, with poor mental health (MH) emerging as a key risk factor. Healthcare college students, including medical students, are particularly vulnerable to MH issues due to the demanding academic schedules they face. Inadequate food access (FA) has, in turn, been related to poor MH. Given that the targeted students may experience multiple risk factors for poor MH and vulnerable FA, this study aims to clarify the relationship between MH and FA in order to enhance student well-being. A cross-sectional design was used to explore the association between MH status and FA among 421 students (147 male, 274 female). Participants completed two questionnaires assessing MH and FA during winter 2022. The mean MH score was 6.7 ± 4.6, with higher scores indicating worse MH (maximum score 27). While year-round FA showed no significant association with MH, FA during winter was significantly associated with MH (p = 0.01). Although car ownership did not directly affect MH, it was significantly associated with FA (p < 0.01), thus indirectly influencing MH. Our findings underscore the importance of FA in promoting MH, particularly during winter. Adopting a lifestyle that facilitates easier FA may be beneficial for MH, given its indirect association with MH outcomes. These insights emphasize the significance of addressing FA-related challenges to enhance students' mental well-being.

Keywords: Mental health, food access, co-medical students, lifestyle.
