Search results for: patterns of traditional sustainability for residential buildings
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 11605

4225 The Vanishing Treasure: An Anthropological Study on Changing Social Relationships, Values, Belief System and Language Pattern of the Limbus in Kalimpong Sub-Division of the Darjeeling District in West Bengal, India

Authors: Biva Samadder, Samita Manna

Abstract:

India is a melting pot of races, tribes, castes, and communities. Its population can be roughly divided into the large majority of “civilized” Indians of the plains and the minority tribal population of the hills and forests, which constitutes almost 16 percent of the total population of India. The Kirat community is composed of four ethnic tribes: the Limbu, Lepcha, Dhimal, and Rai. The Kirat people are rich in indigenous knowledge, skills, and practices, especially in the use of medicinal plants and for livelihood purposes. The “Mundhum” is the oral scripture, or the “Bible of the Limbus”, which serves as the canon of the codes of Limbu socialization, their moral values, and the very orientation of their lifestyle. From birth till death, the life of the Limbus is disciplined by religious rituals, traditions, and culture governed by community norms, with a rich legacy of indigenous knowledge and traditional practices. The present study was conducted using both secondary and primary data, applying a social methodology consisting of a social survey, questionnaires, interviews, and observations in Kalimpong Block-I of the Darjeeling District of West Bengal, India, a zone that is heterogeneous in its ethnic composition and where the Limbus are predominantly concentrated. Owing to their close contact with other castes and communities, the Limbus have adjusted to the changing situation by borrowing some cultural traits from other communities, and the changes that have taken place in their cultural practices, religious beliefs, economic life, language, and social roles and relationships are transforming their material culture. The Limbu language belongs to the Tibeto-Burman language family.
Owing to the political and cultural domination of the educationally advanced and numerically dominant Bengali population, the different communities of this area were forced to come under the single umbrella of the Nepali or Gorkhali nation (nation-people). Their respective identities had to be submerged in order to constitute a force strong enough to resist domination and ensure their common survival. As Nepali is the lingua franca of the area, knowing and speaking the Nepali language helps them procure economic and occupational opportunities. Ironically, the present-day younger generation does not feel comfortable speaking its own Limbu tongue. Traditional knowledge about medicinal plants, healing, and health culture is found to be wearing away owing to the lack of interest of the younger generation. Poverty and policy-driven exclusion have pushed these traditions toward extinction, while the community's capabilities remain ignored, undocumented, and unpreserved, especially in the case of the Limbus, who possess a great cultural heritage rooted in oral tradition. Attempts have been made to discuss the persistence of, and changes in, the socioeconomic pattern of life in relation to social structure, material culture, cultural practices, social relationships, indigenous technology, ethos, and the community's values and belief system.

Keywords: changing social relationship, cultural transition, identity, indigenous knowledge, language

Procedia PDF Downloads 174
4224 Synthesis of Uio-66 Metal Organic Framework Impregnated Thin-Film Nanocomposite Membrane for the Desalination via Pressure Assisted Osmosis

Authors: Rajesha Kumar Alambi, Mansour Ahmed, Garudachari Bhadrachari, Safiyah Al-Muqahwi, Mansour Al-Rughaib, Jibu P. Thomas

Abstract:

Membrane-based pressure assisted osmosis (PAO) for seawater desalination has the potential to overcome the challenges of forward osmosis (FO) technology. PAO technology is gaining interest among the research community as a way to ensure the sustainability of freshwater with a significant reduction in energy. The requirements of a PAO membrane differ from those of an FO membrane: it needs slightly higher porosity along with sufficient mechanical strength to withstand the applied hydraulic pressure. A porous metal-organic framework (MOF) as a filler for membrane synthesis has demonstrated great potential to generate new channels for water transport, high selectivity, and reduced fouling propensity. Accordingly, this study aimed to fabricate UiO-66 MOF-based thin film nanocomposite membranes with specific characteristics for water desalination by PAO. A PAO test unit manufactured by Trevi System, USA, was used to determine the performance of the synthesized membranes. Further, the synthesized membranes were characterized in terms of morphological features, hydrophilicity, surface roughness, and mechanical properties. The membrane loaded with 0.05 UiO-66 produced the highest flux of 38 L/m²h, with a low reverse salt leakage of 2.1 g/m²h, for DI water as the feed solution and 2.0 M NaCl as the draw solution at an inlet feed pressure of 0.6 MPa. The new membranes showed good tolerance toward the applied hydraulic pressure, attributed to the fabric support used during membrane synthesis.

Keywords: metal organic framework, composite membrane, desalination, salt rejection, flux

Procedia PDF Downloads 140
4223 A System Dynamics Model for Analyzing Customer Satisfaction in Healthcare Systems

Authors: Mahdi Bastan, Ali Mohammad Ahmadvand, Fatemeh Soltani Khamsehpour

Abstract:

The sustainable development of health organizations has nowadays become highly affected by customer satisfaction, owing to significant changes in the business environment of the healthcare system and the emergence of the competitiveness paradigm. If we regard hospitals and other health organizations as service providers with profit concerns, the satisfaction of employees as internal customers and of patients as external customers is of significant importance to success in the health business. Furthermore, the satisfaction rate can be considered a perceived quality measure in the performance assessment of healthcare organizations. Several studies have been carried out to identify the factors affecting patient satisfaction in health organizations. From a systemic view, however, the complex causal relations among the many components of the healthcare system are an issue whose understanding requires a grasp of dynamic complexity, an appropriate cognition of the different components, and of the effective relationships among them, resulting ultimately in identifying the generative structure of patient satisfaction. Hence, the present paper applies system dynamics approaches coherently and methodologically to represent the systemic structure of customer satisfaction in a health system, involving the constituent components and the interactions among them. The results of different policies imposed on the system are then simulated by developing mathematical models, identifying leverage points, and using the scenario-making technique, and the best solutions for improving customer satisfaction with the services are presented. The presented approach supports the use of decision support systems. Additionally, relying on an understanding of the system's dynamic behavior, effective policies for improving the health system can be recognized.
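The feedback structure described above can be illustrated with a minimal one-stock system dynamics sketch; the stock, flows, and all parameter values below are hypothetical, chosen only to show how a policy change shifts the simulated satisfaction level, and are not taken from the authors' model.

```python
# Minimal system-dynamics sketch: a single "customer satisfaction" stock
# fed by service quality (inflow) and eroded by waiting time (outflow),
# integrated with explicit Euler steps. All parameters are illustrative.

def simulate_satisfaction(quality=0.8, waiting=0.3, s0=0.5, dt=0.25, steps=40):
    """Return the satisfaction trajectory (0..1) of a one-stock model."""
    gain, decay = 0.6, 0.4            # hypothetical policy parameters
    s = s0
    path = [s]
    for _ in range(steps):
        inflow = gain * quality * (1.0 - s)   # diminishing returns near 1.0
        outflow = decay * waiting * s         # dissatisfaction from delays
        s += dt * (inflow - outflow)
        path.append(s)
    return path

# Scenario comparison: shortening waits raises the equilibrium level.
base = simulate_satisfaction(waiting=0.5)
improved = simulate_satisfaction(waiting=0.1)
```

Running both scenarios shows the leverage-point idea in miniature: the trajectory with the shorter waiting time settles at a visibly higher equilibrium.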

Keywords: customer satisfaction, healthcare, scenario, simulation, system dynamics

Procedia PDF Downloads 417
4222 An Experience on Urban Regeneration: A Case Study of Isfahan, Iran

Authors: Sedigheh Kalantari, Yaping Huang

Abstract:

The historic areas of cities have experienced different phases of transformation. At the beginning of the twentieth century, modernism and modern development disrupted their integrated pattern, and historic urban quarters were regarded as subjects for comprehensive redevelopment. The historic areas of Iranian cities have not been safe from these changes and have been affected by widespread transformations; in particular, since the Islamic Revolution era (1978), cities have traveled through an evolution in conservation and development policies and practices. Moreover, a specific approach to, and specific attention paid to, the regeneration of historical urban centers in Iran has developed since the 1990s. This reveals the great importance attached to the historical centers of cities. This paper examines an experience of urban regeneration in Iran through a case study. The study relies on multiple sources of evidence, which can help substantially improve the validity and reliability of the research. The empirical core of this research, therefore, rests in the process of urban revitalization of the old square in Isfahan. Isfahan is one of the oldest cities of Persia, and its historic area encompasses a large number of valuable buildings and monuments. One of the cultural and historical sites of Isfahan is Atiq Square (Old Square). It has been the backbone node of the city, but in the course of time it was increasingly neglected and transformed negatively. The complex suffered from insufficiencies, especially with respect to social and spatial aspects. Therefore, the reorganization of this complex as the main and most important urban center of Isfahan became inevitable; so this paper, besides recalling the value of this historic-cultural heritage and reviewing its transformation, focuses on the experience of an urban revitalization project at this heritage site.
The outcome of this research shows that, situated in a particular socio-economic, political, and historical context and facing distinct urban regeneration issues, Iran has displayed significant differences in its way of pursuing urban regeneration.

Keywords: historic area, Iran, urban regeneration, revitalization

Procedia PDF Downloads 265
4221 The Effect of Tacit Knowledge for Intelligence Cycle

Authors: Bahadir Aydin

Abstract:

It is difficult to access accurate knowledge because of the mass of available data, which makes the environment more and more chaotic. Data are the main pillar of intelligence. The affiliation between intelligence and knowledge is quite significant for understanding underlying truths. The data gathered from different sources can be modified, interpreted, and classified through the intelligence cycle process. This process is applied in order to progress to wisdom as well as intelligence. Within this process, the effect of tacit knowledge is crucial. Knowledge, which is classified into explicit and tacit knowledge, is the key element for any purpose. Tacit knowledge can be seen as “the tip of the iceberg”, and it accounts for much more than we guess throughout the intelligence cycle. If the concept of the intelligence cycle is scrutinized, it can be seen that it contains risks and threats as well as success. The main purpose of all organizations is to be successful by eliminating risks and threats. Therefore, there is a need to connect or fuse existing information with the processes that can be used to develop it. Thanks to this process, decision-makers can be presented with a clear, holistic understanding as early as possible in the decision-making process. Shifting from the current traditional reactive approach to a proactive intelligence cycle approach would reduce extensive duplication of work in the organization. By applying a new result-oriented cycle and tacit knowledge, intelligence can be procured and utilized more effectively and in a more timely manner.

Keywords: information, intelligence cycle, knowledge, tacit knowledge

Procedia PDF Downloads 516
4220 Real-Time Optimisation and Minimal Energy Use for Water and Environment Efficient Irrigation

Authors: Kanya L. Khatri, Ashfaque A. Memon, Rod J. Smith, Shamas Bilal

Abstract:

The viability and sustainability of crop production are currently threatened by increasing water scarcity. Water scarcity problems can be addressed through improved water productivity, and the options usually presumed in this context are efficient water use and the conversion of surface irrigation to pressurized systems. By replacing furrow irrigation with drip or centre pivot systems, water efficiency can be improved by 30 to 45%. However, the installation and operation of the pumps and pipes, and the associated fuels needed for these alternatives, increase energy consumption and cause significant greenhouse gas emissions. Hence, a balance between the improvement in water use and the potential increase in energy consumption is required, keeping in view the adverse impact of increased carbon emissions on the environment. When surface water is used, pressurized systems increase energy consumption substantially, by between 65% and 75%, and produce greenhouse gas emissions around 1.75 times higher than those of gravity-based irrigation, whereas with gravity-based surface irrigation methods the energy consumption is assumed to be negligible. This study has shown that a novel real-time infiltration model, REIP, has enabled the implementation of real-time optimization and control of surface irrigation, and that surface irrigation with real-time optimization has the potential to bring significant improvements in irrigation performance along with substantial water savings of 2.92 ML/ha, almost equivalent to those given by pressurized systems. Thus, real-time optimization and control offers a modern, environment-friendly, and water-efficient system with close to zero increase in energy consumption and minimal greenhouse gas emissions.

Keywords: pressurised irrigation, carbon emissions, real-time, environmentally-friendly, REIP

Procedia PDF Downloads 505
4219 Automatic Detection of Traffic Stop Locations Using GPS Data

Authors: Areej Salaymeh, Loren Schwiebert, Stephen Remias, Jonathan Waddell

Abstract:

Extracting information from new data sources has emerged as a crucial task in many traffic planning processes, such as identifying traffic patterns, route planning, traffic forecasting, and locating infrastructure improvements. Given the advanced technologies used to collect Global Positioning System (GPS) data from dedicated GPS devices, GPS-equipped phones, and navigation tools, intelligent data analysis methodologies are necessary to mine this raw data. In this research, an automatic detection framework is proposed to help identify and classify the locations of stopped GPS waypoints into two main categories: signalized intersections or highway congestion. Delaunay triangulation is used to perform this assessment in the clustering phase. While most existing clustering algorithms require assumptions about the data distribution, the effectiveness of Delaunay triangulation lies in triangulating geographical data points without such assumptions. Our proposed method starts by cleaning noise from the data and normalizing it. Next, the framework identifies stoppage points by calculating the traveled distance. The last step is to use clustering to form groups of waypoints for signalized traffic and highway congestion. A binary classifier is then applied to distinguish highway congestion from signalized stop points, using the length of the cluster to identify congestion. The proposed framework identifies the stop positions and congestion points correctly in around 99.2% of trials, showing that it is possible, using limited GPS data, to distinguish the two with high accuracy.
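A minimal sketch of the stop-detection and cluster-length classification steps described above, in pure Python; the distance thresholds, the 100 m queue-length cutoff, and the synthetic track are illustrative assumptions, and the paper's Delaunay-based clustering is replaced here by simple consecutive grouping.

```python
import math

# Sketch of the pipeline: flag consecutive GPS fixes whose travelled
# distance falls below a threshold, group adjacent flagged fixes into
# stop clusters, then classify each cluster by its spatial extent
# (long clusters suggest congestion queues, short ones a signal).

def haversine_m(p, q):
    """Great-circle distance in metres between two (lat, lon) fixes."""
    r = 6371000.0
    la1, lo1, la2, lo2 = map(math.radians, (*p, *q))
    a = (math.sin((la2 - la1) / 2) ** 2
         + math.cos(la1) * math.cos(la2) * math.sin((lo2 - lo1) / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def stop_clusters(track, move_thresh_m=5.0):
    """Group consecutive near-stationary fixes into stop clusters."""
    clusters, current = [], []
    for p, q in zip(track, track[1:]):
        if haversine_m(p, q) < move_thresh_m:
            current.append(q)
        elif current:
            clusters.append(current)
            current = []
    if current:
        clusters.append(current)
    return clusters

def classify(cluster, queue_len_m=100.0):
    """Long stop clusters suggest congestion; short ones a signal."""
    extent = haversine_m(cluster[0], cluster[-1])
    return "congestion" if extent > queue_len_m else "signal"

# Synthetic track: a dead stop at one point, then a slow creeping queue.
stationary = [(42.0, -83.0)] * 5
creeping = [(42.01 + 0.00003 * i, -83.0) for i in range(41)]
clusters = stop_clusters(stationary + creeping)
```

On this toy track the stationary fixes classify as a signalized stop (zero extent) and the creeping run, stretching well past 100 m, classifies as congestion.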

Keywords: Delaunay triangulation, clustering, intelligent transportation systems, GPS data

Procedia PDF Downloads 279
4218 The Role of Privatization on the Formulation of Productive Supply Chain: The Case of Ethiopian Firms

Authors: Merhawit Fisseha Gebremariam, Yohannes Yebabe Tesfay

Abstract:

This study focuses on the formulation of a sustainable, effective, and efficient supply chain strategy framework for privatized Ethiopian firms. The study examined the role of privatization in productive sourcing, production, and delivery on the performance of Ethiopian firms. To analyze our hypotheses, the authors applied the concepts of Key Performance Indicators (KPIs), strategic outsourcing, purchasing portfolio analysis, and Porter's marketing analysis. The authors selected ten privatized companies and compared their financial, market expansion, and sustainability performances. The chi-square test showed that, at the 5% level of significance, privatization and outsourcing activities can assist the business performance of Ethiopian firms in terms of product promotion and new market expansion. At the 5% level of significance, the independent t-test showed that firms privatized by Ethiopian investors had stronger financial performance than those privatized by foreign investors. Furthermore, Ethiopian firms would do better to apply both cost leadership and differentiation strategies to thrive in their business areas. Ethiopian firms need to implement the supply chain operations reference (SCOR) model as an exclusive framework that supports communication, links the supply chain partners, and enhances productivity. The government of Ethiopia should be aware that the privatization of firms by Ethiopian investors will strengthen the economy; otherwise, the privatization process will be risky for the country, and the government should halt such activities.
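The two hypothesis tests named above can be sketched with SciPy on made-up numbers; the paper's raw data are not given, so the contingency table and the performance scores below are purely illustrative.

```python
from scipy import stats

# Chi-square test: is outsourcing activity associated with new-market
# expansion? Rows = outsourcing (yes/no), cols = expanded (yes/no).
# The counts are hypothetical, chosen only to demonstrate the test.
table = [[18, 4],
         [7, 15]]
chi2, p_chi, dof, _ = stats.chi2_contingency(table)

# Independent t-test: financial performance of firms privatized by
# domestic vs. foreign investors (illustrative return-on-assets scores).
domestic = [12.1, 10.8, 13.5, 11.9, 12.7]
foreign = [8.4, 9.1, 7.6, 8.9, 9.5]
t_stat, p_t = stats.ttest_ind(domestic, foreign)
```

With these invented numbers both p-values fall below 0.05, mirroring the kind of conclusion the abstract reports at the 5% significance level.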

Keywords: correlation analysis, market strategies, KPIs, privatization, risk, Ethiopia

Procedia PDF Downloads 75
4217 Study on Planning of Smart Grid Using Landscape Ecology

Authors: Sunglim Lee, Susumu Fujii, Koji Okamura

Abstract:

Smart grid is a new approach to the electric power grid that uses information and communications technology for its control. A smart grid provides real-time control of the electric power grid, governing the direction of power flow or the time of the flow. Control devices are installed on the power lines of the electric power grid to implement the smart grid. The number of control devices should be determined in relation to the area one control device covers and the cost associated with the devices. One approach to determining the number of control devices is to use data on the surplus power generated by home solar generators. In current implementations, the surplus power is sent all the way to the power plant, which may cause power loss. To reduce this loss, the surplus power may instead be sent to a control device and routed from there to where the power is needed. Under the assumption that the control devices are installed on a lattice of equal-size squares, our goal is to determine the optimal spacing between the control devices, where the power sharing area (the area covered by one control device) is kept small to avoid power loss, yet big enough that no surplus power is wasted. To achieve this goal, a simulation using the landscape ecology method is conducted on a sample area. First, an aerial photograph of the land of interest is turned into a mosaic map where each area is colored according to the ratio of the amount of power production to the amount of power consumption in that area. The amount of power consumption is estimated according to the characteristics of the buildings in the area. The power production is calculated from the total roof area shown in the aerial photograph, assuming that solar panels are installed on all the roofs. The mosaic map is colored in three colors, representing producer, consumer, and neither.
We started with a mosaic map with a 100 m grid size and grew the grid size until no deficit (red) grid cell remained. One control device is installed in each grid cell, so that the cell is the area the control device covers. As the result of this simulation, we obtained 350 m as the optimal spacing between the control devices that makes effective use of the surplus power for the sample area.
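The grid-growing step can be sketched as follows; the map, the deficit encoding, and the helper name are hypothetical assumptions, with each 100 m cell holding its net power balance (production minus consumption) so that a negative cell corresponds to a red grid cell.

```python
# Grow the block size until every square block of cells has a
# non-negative net power balance ("no red grid"), returning the
# resulting control-device spacing in metres. Illustrative sketch.

def min_block_size(balance, cell_m=100):
    """Smallest square block size (metres) with no deficit block."""
    n = len(balance)
    for k in range(1, n + 1):          # block edge length, in cells
        if n % k:
            continue                   # only block sizes that tile the map
        ok = True
        for bi in range(0, n, k):
            for bj in range(0, n, k):
                net = sum(balance[i][j]
                          for i in range(bi, bi + k)
                          for j in range(bj, bj + k))
                if net < 0:
                    ok = False
        if ok:
            return k * cell_m
    return n * cell_m

# Hypothetical 4x4 map: single cells run deficits, but every 2x2 block
# balances, so the spacing settles at 200 m for this toy map.
deficit_map = [[2, -1, 1, 0],
               [0, 1, -1, 1],
               [1, 0, 2, -2],
               [-1, 1, 0, 1]]
spacing = min_block_size(deficit_map)
```

The real study runs the same idea on the mosaic map derived from the aerial photograph, arriving at the 350 m spacing quoted above.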

Keywords: landscape ecology, IT, smart grid, aerial photograph, simulation

Procedia PDF Downloads 447
4216 Ceramic Membrane Filtration Technologies for Oilfield Produced Water Treatment

Authors: Mehrdad Ebrahimi, Oliver Schmitz, Axel Schmidt, Peter Czermak

Abstract:

“Produced water” (PW) is any fossil water that is brought to the surface along with crude oil or natural gas. By far, PW is the largest waste stream by volume associated with oil and gas production operations. Owing to the increasing volume of this waste all over the world in the current decade, the outcome and effect of discharging PW on the environment has lately become a significant environmental concern. Therefore, there is a need for new technologies for PW treatment, given the increased focus on water conservation and environmental regulation. The use of membrane processes for the treatment of PW has several advantages over many traditional separation techniques. In oilfield produced water treatment with ceramic membranes, process efficiency is characterized by the specific permeate flux and by the oil separation performance. Apart from the membrane properties, the permeate flux during filtration of oily wastewaters is known to be strongly dependent on the constituents of the feed solution, as well as on process conditions, e.g., trans-membrane pressure (TMP) and cross-flow velocity (CFV). The research project presented in this report describes the application of different ceramic membrane filtration technologies for the efficient treatment of oilfield produced water and different model oily solutions.
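The dependence of permeate flux on TMP mentioned above is commonly captured by the classical resistance-in-series model, J = TMP / (μ (R_m + R_f)); the sketch below uses this textbook relation with illustrative numbers, not measurements from this project.

```python
# Resistance-in-series model for membrane filtration: flux equals
# trans-membrane pressure divided by viscosity times the sum of the
# membrane and fouling resistances. All values below are illustrative.

def permeate_flux(tmp_pa, mu_pa_s, r_membrane, r_fouling):
    """Permeate flux (m^3 per m^2 per s) from resistance in series."""
    return tmp_pa / (mu_pa_s * (r_membrane + r_fouling))

# 1 bar TMP, water-like viscosity, clean membrane vs. an oily fouling
# layer adding extra resistance (hypothetical resistance values).
clean = permeate_flux(1e5, 1e-3, 5e11, 0.0)
fouled = permeate_flux(1e5, 1e-3, 5e11, 1.5e12)
```

The model makes the qualitative point of the abstract concrete: raising TMP raises flux proportionally, while feed constituents that build a fouling layer (larger R_f) depress it.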

Keywords: ceramic membrane, membrane fouling, oil rejection, produced water treatment

Procedia PDF Downloads 190
4215 Investigating Smoothness: An In-Depth Study of Extremely Degenerate Elliptic Equations

Authors: Zahid Ullah, Atlas Khan

Abstract:

The presented research is dedicated to an extensive examination of the regularity properties associated with a specific class of equations, namely extremely degenerate elliptic equations. This study holds significance in unraveling the complexities inherent in these equations and understanding the smoothness of their solutions. The focus is on analyzing the regularity of results, aiming to contribute to the broader field of mathematical theory. By delving into the intricacies of extremely degenerate elliptic equations, the research seeks to advance our understanding beyond conventional analyses, addressing challenges posed by degeneracy and pushing the boundaries of classical analytical methods. The motivation for this exploration lies in the practical applicability of mathematical models, particularly in real-world scenarios where physical phenomena exhibit characteristics that challenge traditional mathematical modeling. The research aspires to fill gaps in the current understanding of regularity properties within solutions to extremely degenerate elliptic equations, ultimately contributing to both theoretical foundations and practical applications in diverse scientific fields.

Keywords: investigating smoothness, extremely degenerate elliptic equations, regularity properties, mathematical analysis, complexity solutions

Procedia PDF Downloads 64
4214 Automated Facial Symmetry Assessment for Orthognathic Surgery: Utilizing 3D Contour Mapping and Hyperdimensional Computing-Based Machine Learning

Authors: Wen-Chung Chiang, Lun-Jou Lo, Hsiu-Hsia Lin

Abstract:

This study aimed to improve the evaluation of facial symmetry, which is crucial for planning and assessing outcomes in orthognathic surgery (OGS). Facial symmetry plays a key role in both aesthetic and functional aspects of OGS, making its accurate evaluation essential for optimal surgical results. To address the limitations of traditional methods, a different approach was developed, combining three-dimensional (3D) facial contour mapping with hyperdimensional (HD) computing to enhance precision and efficiency in symmetry assessments. The study was conducted at Chang Gung Memorial Hospital, where data were collected from 2018 to 2023 using 3D cone beam computed tomography (CBCT), a highly detailed imaging technique. A large and comprehensive dataset was compiled, consisting of 150 normal individuals and 2,800 patients, totaling 5,750 preoperative and postoperative facial images. These data were critical for training a machine learning model designed to analyze and quantify facial symmetry. The machine learning model was trained to process 3D contour data from the CBCT images, with HD computing employed to power the facial symmetry quantification system. This combination of technologies allowed for an objective and detailed analysis of facial features, surpassing the accuracy and reliability of traditional symmetry assessments, which often rely on subjective visual evaluations by clinicians. In addition to developing the system, the researchers conducted a retrospective review of 3D CBCT data from 300 patients who had undergone OGS. The patients’ facial images were analyzed both before and after surgery to assess the clinical utility of the proposed system. The results showed that the facial symmetry algorithm achieved an overall accuracy of 82.5%, indicating its robustness in real-world clinical applications. Postoperative analysis revealed a significant improvement in facial symmetry, with an average score increase of 51%. 
The mean symmetry score rose from 2.53 preoperatively to 3.89 postoperatively, demonstrating the system's effectiveness in quantifying improvements after OGS. These results underscore the system's potential for providing valuable feedback to surgeons and aiding in the refinement of surgical techniques. The study also led to the development of a web-based system that automates facial symmetry assessment. This system integrates HD computing and 3D contour mapping into a user-friendly platform that allows for rapid and accurate evaluations. Clinicians can easily access this system to perform detailed symmetry assessments, making it a practical tool for clinical settings. Additionally, the system facilitates better communication between clinicians and patients by providing objective, easy-to-understand symmetry scores, which can help patients visualize the expected outcomes of their surgery. In conclusion, this study introduced a valuable and highly effective approach to facial symmetry evaluation in OGS, combining 3D contour mapping, HD computing, and machine learning. The resulting system achieved high accuracy and offers a streamlined, automated solution for clinical use. The development of the web-based platform further enhances its practicality, making it a valuable tool for improving surgical outcomes and patient satisfaction in orthognathic surgery.
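The hyperdimensional encoding step can be illustrated with a toy sketch (not the authors' pipeline): contour features of each facial half are bundled into random bipolar hypervectors, and the symmetry score is taken as the cosine similarity of the two bundles, near 1 for mirrored feature sets and near 0 for unrelated ones.

```python
import numpy as np

# Toy hyperdimensional-computing sketch: each contour feature gets a
# random bipolar hypervector from a codebook; a side of the face is
# encoded by bundling (summing) its feature vectors; symmetry is the
# cosine similarity of the left and right bundles. Feature ids,
# dimensionality, and the codebook are hypothetical.

rng = np.random.default_rng(0)
DIM = 10_000
codebook = {f: rng.choice([-1, 1], DIM) for f in range(20)}

def encode(features):
    """Bundle (elementwise sum) the hypervectors of a feature set."""
    return np.sum([codebook[f] for f in features], axis=0)

def symmetry_score(left, right):
    a, b = encode(left), encode(right)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

symmetric = symmetry_score([0, 1, 2, 3], [0, 1, 2, 3])   # mirrored sides
asymmetric = symmetry_score([0, 1, 2, 3], [10, 11, 12, 13])
```

Because random high-dimensional hypervectors are nearly orthogonal, unrelated feature sets score close to zero while matching sides score close to one, which is the property the quantification system exploits.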

Keywords: facial symmetry, orthognathic surgery, facial contour mapping, hyperdimensional computing

Procedia PDF Downloads 34
4213 D6tions: A Serious Game to Learn Software Engineering Process and Design

Authors: Hector G. Perez-Gonzalez, Miriam Vazquez-Escalante, Sandra E. Nava-Muñoz, Francisco E. Martinez-Perez, Alberto S. Nunez-Varela

Abstract:

The software engineering teaching process has been the subject of many studies. To improve this process, researchers have proposed, on the one hand, merely illustrative techniques in the classroom, such as topic presentations and dynamics between students, and on the other, attempts to involve students in real projects with companies and institutions to confront them with real software development problems. Simulators and serious games have been used as auxiliary tools to introduce students to topics that are too abstract when presented in the traditional way. Most of these tools cover a limited area of the huge scope of software engineering. To address this problem, we have developed D6tions, an educational serious game that simulates the software engineering process and is designed to let students experience the different stages a software engineer (playing the role of project leader, developer, or designer) goes through while participating in a software project. We describe previous approaches to this problem, how D6tions was designed, its rules and directions, and the results we obtained from the use of this game with undergraduate students.

Keywords: serious games, software engineering, software engineering education, software engineering teaching process

Procedia PDF Downloads 496
4212 Drought Risk Analysis Using Neural Networks for Agri-Businesses and Projects in Lejweleputswa District Municipality, South Africa

Authors: Bernard Moeketsi Hlalele

Abstract:

Drought is a complicated natural phenomenon that creates significant economic, social, and environmental problems. An analysis of paleoclimatic data indicates that severe and extended droughts are an inevitable part of the natural climatic cycle. This study characterized drought in Lejweleputswa using the Standardised Precipitation Index (SPI) to quantify it and neural networks (NN) to predict it. Monthly precipitation time series data spanning 37 years were obtained from the online NASA database. Prior to the final analysis, this dataset was checked for outliers using SPSS; outliers were removed and replaced using the Expectation Maximization algorithm in SPSS. This was followed by both homogeneity and stationarity tests to ensure non-spurious results. A non-parametric Mann-Kendall test was used to detect monotonic trends present in the dataset. Two temporal scales, SPI-3 and SPI-12, corresponding to agricultural and hydrological drought events respectively, showed statistically significant decreasing trends with p-values of 0.0006 and 4.9 × 10⁻⁷. The study area has been plagued with severe drought events on SPI-3, while SPI-12 showed approximately a 20-year cycle. The analyses concluded with a seasonal analysis that showed no significant trend patterns, and as such NN was used to predict possible SPI-3 values for the last season of 2018/2019 and four seasons of 2020. The predicted drought intensities ranged from mild to extreme drought events to come. It is therefore recommended that farmers, agri-business owners, and other relevant stakeholders resort to drought-resistant crops as a means of adaptation.
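The Mann-Kendall trend detection used above can be sketched in pure Python with the no-ties variance formula; the synthetic drying series below is illustrative, not the study's precipitation data.

```python
import math

# Mann-Kendall monotonic-trend test (no-ties variance formula):
# S counts concordant minus discordant pairs; a Z statistic beyond
# +/-1.96 indicates a significant trend at the 5% level, with negative
# Z meaning a decreasing (drying) trend as reported in the study.

def mann_kendall_z(series):
    """Return the Mann-Kendall Z statistic for a time series."""
    n = len(series)
    s = sum((series[j] > series[i]) - (series[j] < series[i])
            for i in range(n) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        return (s - 1) / math.sqrt(var_s)
    if s < 0:
        return (s + 1) / math.sqrt(var_s)
    return 0.0

falling = [10 - 0.5 * t for t in range(30)]   # synthetic drying series
z = mann_kendall_z(falling)
```

For the strictly decreasing toy series, Z comes out strongly negative, the same signature the study reports for the SPI-3 and SPI-12 series.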

Keywords: drought, risk, neural networks, agri-businesses, project, Lejweleputswa

Procedia PDF Downloads 129
4211 Biodiesel Fuel Properties of Mixed Culture Microalgae under Different CO₂ Concentration from Coal Fired Flue Gas

Authors: Ambreen Aslam, Tahira Aziz Mughal, Skye R. Thomas-Hall, Peer M. Schenk

Abstract:

Biodiesel is an alternative to petroleum-derived fuel, mainly composed of fatty acid esters from oleaginous microalgae feedstock. Microalgae are a suitable source of fatty acid methyl esters (FAMEs), as they can store high levels of lipids without competing with food production. After lipid extraction and esterification, the fatty acid profile of the algae feedstock showed an abundance of fatty acids with carbon chain lengths of specifically C16 and C18. A qualitative analysis of FAME was done by cultivating mixed microalgae consortia under three different CO₂ concentrations (1%, 3%, and 5.5%) from coal-fired flue gas. FAME content (280.3 µg/mL) and productivity (18.69 µg/mL/day) were higher under 1% CO₂ (flue gas) compared with the other treatments, whereas Mixed C. (F) supplemented with 5.5% CO₂ (50% flue gas) had higher SFA (36.28%) and UFA (63.72%), which improves the oxidative stability of biodiesel. Subsequently, the low iodine value (136.3 gI₂/100g) and higher cetane number (52) of Mixed C.+P (F) were found to be in accordance with the European (EN 14214) standard under 5.5% CO₂ along with 50 mM phosphate buffer. Experimental results revealed that sufficient phosphate reduced FAME productivity but significantly enhanced biodiesel quality. This research aimed to develop an integrated approach of utilizing flue gas (as a CO₂ source) for significant improvement in biodiesel quality under surplus phosphorus. CO₂ sequestration from industrial flue gas not only reduces greenhouse gas (GHG) emissions but also ensures the sustainability and eco-friendliness of the biodiesel production process through microalgae.
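As a quick consistency check of the quoted figures (an illustration, not analysis code from the study): volumetric FAME productivity is content divided by cultivation time, so the reported 280.3 µg/mL content and 18.69 µg/mL/day productivity imply a roughly 15-day cultivation period; the period itself is an inference here, not a stated parameter of the abstract.

```python
# Consistency check of the quoted FAME figures; the 15-day cultivation
# period is inferred from the numbers, not stated in the abstract.

def fame_productivity(content_ug_ml, days):
    """Volumetric FAME productivity in ug/mL/day."""
    return content_ug_ml / days

implied_days = 280.3 / 18.69          # implied cultivation period
p = fame_productivity(280.3, 15)

# The reported saturated and unsaturated fractions should sum to 100%.
sfa, ufa = 36.28, 63.72
```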

Keywords: biodiesel analysis, carbon dioxide, coal fired flue gas, FAME productivity, fatty acid profile, fuel properties, lipid content, mixed culture microalgae

Procedia PDF Downloads 332
4210 High-Frequency Monitoring Results of a Piled Raft Foundation under Wind Loading

Authors: Laurent Pitteloud, Jörg Meier

Abstract:

Piled raft foundations represent an efficient and reliable technique for transferring high vertical and horizontal loads to the subsoil, and have been successfully implemented for several high-rise buildings worldwide over the last decades. For the structural design of this foundation type, the stiffnesses of both the piles and the raft have to be determined for the static load cases (e.g. dead load, live load) and the dynamic load cases (e.g. earthquake). In this context, the question often arises as to what proportion of the wind loads is to be considered as dynamic load. Usually a piled raft foundation has to be monitored in order to verify the design hypotheses. As an additional benefit, the analysis of this monitoring data may lead to a better understanding of the behaviour of this foundation type for future projects in similar subsoil conditions. If the measurement frequency is high enough, one may also draw conclusions on the effect of wind loading on the piled raft foundation. For a 41-storey office building in Basel, Switzerland, the preliminary design showed that a piled raft foundation was the best solution to satisfy both the design requirements and economic aspects. High-frequency monitoring of the foundation, including pile loads, vertical stresses under the raft and pore water pressures, was performed over 5 years. In windy situations, the analysis of the measurements shows that the pile load increment due to wind consists of a static and a cyclic load term. As piles and raft react with different stiffnesses under static and dynamic loading, these measurements are useful for the correct definition of the stiffnesses of future piled raft foundations. This paper outlines the design strategy and the numerical modelling of the aforementioned piled raft foundation. The measurement results are presented and analysed, and based on the findings, comments and conclusions on the definition of pile and raft stiffnesses for vertical and wind loading are proposed.
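The split of the wind-induced pile load into a static and a cyclic term can be illustrated with a simple centred moving-average decomposition (a sketch on synthetic data; the window length, load level and oscillation are assumptions, not the Basel measurements):

```python
import math

def split_static_cyclic(load, window):
    """Split a measured load series into a slowly varying (static) term,
    taken as a centred moving average, and a cyclic residual."""
    n = len(load)
    half = window // 2
    static = []
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        static.append(sum(load[lo:hi]) / (hi - lo))
    cyclic = [x - s for x, s in zip(load, static)]
    return static, cyclic

# Synthetic pile load: 1000 kN dead-load term plus a wind-induced oscillation.
t = [i * 0.1 for i in range(200)]
load = [1000.0 + 50.0 * math.sin(2 * math.pi * 0.5 * ti) for ti in t]
static, cyclic = split_static_cyclic(load, window=20)
```

With a window matching the oscillation period, the static term recovers the dead-load level while the residual carries the cyclic wind contribution.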

Keywords: design, dynamic, foundation, monitoring, pile, raft, wind load

Procedia PDF Downloads 200
4209 Cascaded Multi-Level Single-Phase Switched Boost Inverter

Authors: Van-Thuan Tran, Minh-Khai Nguyen, Geum-Bae Cho

Abstract:

Recently, multilevel inverters have attracted researchers due to the low total harmonic distortion (THD) of their output voltage and low electromagnetic interference (EMI). This paper proposes a single-phase cascaded H-bridge quasi switched boost inverter (CHB-qSBI) for renewable energy source applications. The proposed inverter has the advantage over the cascaded H-bridge quasi-Z-source inverter (CHB-qZSI) of requiring two fewer capacitors and two fewer inductors. As a result, cost, weight, and size are reduced. Furthermore, the dc-link voltage of each module is controlled by an individual shoot-through duty cycle so that all modules reach the same value. Therefore, the proposed inverter solves the dc-link voltage imbalance problem of the traditional CHB inverter. This paper presents the operating principles and analysis of the single-phase cascaded H-bridge quasi switched boost inverter, along with a control strategy for the proposed inverter. Simulation and experimental results verify the operating principle of the proposed inverter.
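The per-module dc-link balancing can be illustrated with the ideal boost relation commonly derived for quasi-Z-source and quasi-switched-boost topologies, B = 1/(1 - 2D), where D is the shoot-through duty cycle (a sketch under that assumption; the voltages below are illustrative):

```python
def dc_link_voltage(v_in, duty_shoot_through):
    """Ideal dc-link voltage of one module, using the boost factor
    B = 1/(1 - 2*D) commonly derived for quasi-Z-source/quasi-switched-
    boost topologies (valid for D < 0.5)."""
    d = duty_shoot_through
    assert 0 <= d < 0.5, "shoot-through duty cycle must stay below 0.5"
    return v_in / (1 - 2 * d)

def shoot_through_duty(v_in, v_dc_target):
    """Invert the relation: the per-module duty cycle that balances each
    dc-link to the same target voltage despite unequal input sources."""
    return 0.5 * (1 - v_in / v_dc_target)

# Two cascaded modules with unequal PV inputs balanced to a common 200 V link.
inputs = (120.0, 150.0)
duties = [shoot_through_duty(v, 200.0) for v in inputs]
links = [dc_link_voltage(v, d) for v, d in zip(inputs, duties)]
```

The weaker source simply receives a larger shoot-through duty cycle, which is the balancing idea the abstract describes.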

Keywords: renewable energy sources, cascaded h-bridge inverter, quasi switched boost inverter, quasi z-source inverter, multilevel inverter

Procedia PDF Downloads 335
4208 Rapid Monitoring of Earthquake Damages Using Optical and SAR Data

Authors: Saeid Gharechelou, Ryutaro Tateishi

Abstract:

Earthquakes are inevitable catastrophic natural disasters. Damage to buildings and man-made structures, where most human activities take place, is the major cause of earthquake casualties. A comparison of optical and SAR data is presented for the Kathmandu valley, which was severely shaken by the 2015 Nepal earthquake. Although many existing studies have produced optical-data-based estimates or suggested the combined use of optical and SAR data for improved accuracy, cloud-free optical images are not assured when they are urgently needed. This research therefore focuses on developing a SAR-based technique with the target of rapid and accurate geospatial reporting. Considering the limited time available in a post-disaster situation, the method offers quick computation based exclusively on two pairs of pre-seismic and co-seismic single look complex (SLC) images. InSAR coherence between the pre-seismic, co-seismic and post-seismic acquisitions was used to detect change in the damaged area. In addition, ground truth data from the field were applied to the optical data through random forest classification to detect the damaged area, and were also used to assess the accuracy of the supervised classification approach. A higher accuracy was obtained from the optical data than from the integrated optical-SAR data. Since the availability of cloud-free images immediately after an earthquake is not assured, further research on improving SAR-based damage detection is suggested. It is expected that quick reporting of the post-disaster damage situation, quantified by rapid earthquake assessment, will assist in channelling rescue and emergency operations and in informing the public about the scale of damage.
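The coherence-change step can be sketched as a simple per-pixel threshold on the drop between pre-seismic and co-seismic InSAR coherence (the threshold and the tiny 2x2 grids are illustrative, not Sentinel-1A values):

```python
def damage_mask(coh_pre, coh_co, drop_threshold=0.3):
    """Flag pixels where interferometric coherence loss exceeds a threshold.
    coh_pre / coh_co are equally sized 2-D lists of coherence in [0, 1]."""
    mask = []
    for row_pre, row_co in zip(coh_pre, coh_co):
        mask.append([(p - c) > drop_threshold
                     for p, c in zip(row_pre, row_co)])
    return mask

coh_pre = [[0.9, 0.8], [0.85, 0.4]]   # stable scatterers before the event
coh_co = [[0.4, 0.75], [0.3, 0.35]]   # coherence collapses over damaged blocks
mask = damage_mask(coh_pre, coh_co)
```

Pixels that were coherent before the event but decorrelate in the co-seismic pair are the candidate damage areas; already-low pre-seismic coherence (e.g. vegetation) is not flagged.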

Keywords: Sentinel-1A data, Landsat-8, earthquake damage, InSAR, rapid damage monitoring, 2015-Nepal earthquake

Procedia PDF Downloads 174
4207 Modeling Soil Erosion and Sediment Yield in Geba Catchment, Ethiopia

Authors: Gebremedhin Kiros, Amba Shetty, Lakshman Nandagiri

Abstract:

Soil erosion is a major threat to the sustainability of land and water resources in the catchment, and there is a need to identify critical areas of erosion so that suitable conservation measures may be adopted. The present study was taken up to understand the temporal and spatial distribution of soil erosion and daily sediment yield in the Geba catchment (5137 km²) located in the Northern Highlands of Ethiopia. The Soil and Water Assessment Tool (SWAT) was applied to the Geba catchment using data pertaining to rainfall, climate, soils, topography and land use/land cover (LU/LC) for the historical period 2000-2013. The LU/LC distribution in the catchment was characterized using LANDSAT satellite imagery and the GIS-based ArcSWAT version of the model. The model was calibrated and validated using sediment concentration measurements made at the catchment outlet. The catchment was divided into 13 sub-basins which, based on estimated soil erosion, were prioritized according to their susceptibility to erosion. Model results indicated that the estimated average sediment yield of the catchment was 12.23 tons/ha/yr. The generated soil loss map indicated that a large portion of the catchment has high erosion rates, resulting in a significantly large sediment yield at the outlet. Steep and unstable terrain, the occurrence of highly erodible soils and low vegetation cover appeared to favor high soil erosion. The results of this study prove useful in adopting targeted soil and water conservation measures and in promoting sustainable management of natural resources in the Geba and similar catchments in the region.
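SWAT estimates event sediment yield with MUSLE, as the keywords note. A sketch of that equation in the form used by SWAT follows (all parameter values are illustrative, not calibrated Geba inputs):

```python
def musle_sediment_yield(q_surf, q_peak, area_ha, k, c, p, ls, cfrg=1.0):
    """MUSLE in the form used by SWAT:
    sed = 11.8 * (Q_surf * q_peak * area)^0.56 * K * C * P * LS * CFRG
    q_surf: surface runoff (mm H2O), q_peak: peak runoff rate (m^3/s),
    area_ha: HRU area (ha); result in metric tons per event."""
    return 11.8 * (q_surf * q_peak * area_ha) ** 0.56 * k * c * p * ls * cfrg

# Illustrative hillslope: moderate runoff event on an erodible, sparsely
# vegetated HRU (hypothetical K, C, P, LS factors).
sed = musle_sediment_yield(q_surf=25.0, q_peak=0.8, area_ha=100.0,
                           k=0.3, c=0.2, p=1.0, ls=1.5)
```

Because runoff enters through the 0.56-power term, larger surface runoff events raise the yield, which is why steep terrain and low cover dominate the catchment's soil loss map.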

Keywords: Ethiopia, Geba catchment, MUSLE, sediment yield, SWAT Model

Procedia PDF Downloads 318
4206 Non-Communicable Diseases: Knowledge, Attitudes and Practices of Risk Factors among Secondary School Students in Sharjah, UAE

Authors: A. Al-Wandi, A. Al-Ali, R. Dali, Y. Al-Karaghouli

Abstract:

Background: Non-communicable diseases (NCDs) have become an alarming health problem across the globe. The risk of developing these diseases begins in childhood and develops gradually under the influence of risk factors including obesity, hypertension, dyslipidemia, cigarette smoking and decreased physical activity. Therefore, this study aims to determine the level of knowledge, attitudes, and practices regarding the risk factors of lifestyle-induced chronic diseases (non-communicable diseases) among secondary school students in Sharjah city. Methods: Five hundred and ninety-one school children, from grades 10 to 12, formed the study sample, selected using the multistage stratified cluster sampling method. Four governmental schools were chosen for each gender. Data were collected through a pretested, closed-ended questionnaire consisting of five sections: demographics, physical activity, diet, smoking and sleeping patterns. Frequencies and descriptive statistics were used to analyze the data in SPSS 23. Results: The data showed that 64.6% of students had low knowledge of the risk factors of non-communicable diseases. Concerning physical activity, 58.2% were physically inactive, with females being less active than males. 75.9% of students did not meet the recommended daily intake of fruits and vegetables. 8% reported being smokers, with cigarettes being the most commonly used tobacco product. Conclusion: Our study has demonstrated a low level of knowledge and practices yet positive attitudes towards risk factors of chronic diseases. We recommend the implementation of thorough awareness campaigns through public health education about the risk factors of non-communicable diseases.

Keywords: non-communicable diseases, physical activity, diet, knowledge, attitudes, practices, smoking

Procedia PDF Downloads 234
4205 Efficient Tuning Parameter Selection by Cross-Validated Score in High Dimensional Models

Authors: Yoonsuh Jung

Abstract:

As DNA microarray data contain a relatively small sample size compared to the number of genes, high dimensional models are often employed. In high dimensional models, the selection of the tuning parameter (or penalty parameter) is often one of the crucial parts of the modeling. Cross-validation is one of the most common methods for tuning parameter selection; it selects the parameter value with the smallest cross-validated score. However, selecting a single value as the "optimal" value for the parameter can be very unstable due to sampling variation, since the sample sizes of microarray data are often small. Our approach is to first choose multiple candidates for the tuning parameter, then average the candidates with different weights depending on their performance. The additional step of estimating the weights and averaging the candidates rarely increases the computational cost, while it can considerably improve on traditional cross-validation. We show on real and simulated data sets that the value selected by the suggested method often leads to stable parameter selection as well as improved detection of significant genetic variables compared to traditional cross-validation.
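The candidate-averaging idea can be sketched as follows. The softmax weighting on cross-validated error is one plausible choice of performance-dependent weights, not necessarily the paper's exact scheme, and the candidate values and errors are invented:

```python
import math

def averaged_tuning_parameter(candidates, cv_errors, temperature=1.0):
    """Instead of picking the single penalty value with the smallest CV
    error, average several candidates with weights that decrease in the
    cross-validated error (softmax on -error; an assumed scheme)."""
    weights = [math.exp(-e / temperature) for e in cv_errors]
    total = sum(weights)
    return sum(w * lam for w, lam in zip(weights, candidates)) / total

candidates = [0.01, 0.05, 0.1, 0.5]      # hypothetical penalty grid
cv_errors = [1.20, 0.90, 0.95, 2.10]     # hypothetical CV scores
lam = averaged_tuning_parameter(candidates, cv_errors)
```

The averaged value stays inside the candidate range and is far less sensitive to a small perturbation of the CV scores than the single arg-min, which is the stability argument made above.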

Keywords: cross validation, parameter averaging, parameter selection, regularization parameter search

Procedia PDF Downloads 417
4204 Planning of Construction Material Flow Using Hybrid Simulation Modeling

Authors: A. M. Naraghi, V. Gonzalez, M. O'Sullivan, C. G. Walker, M. Poshdar, F. Ying, M. Abdelmegid

Abstract:

Discrete Event Simulation (DES) and Agent-Based Simulation (ABS) are two simulation approaches that have been proposed to support decision-making in the construction industry. Despite the wide use of these simulation approaches in the construction field, their applications in production and material planning are still limited. This is largely due to the dynamic and complex nature of construction material supply chain systems. Moreover, managing the flow of construction materials is not well integrated with site logistics in traditional construction planning methods. This paper presents a hybrid of DES and ABS to simulate on-site and off-site material supply processes. DES is applied to determine the best production scenarios using information on on-site production systems, while ABS is used to optimize the supply chain network. A case study of a construction piling project in New Zealand is presented, illustrating the potential benefits of using the proposed hybrid simulation model in construction material flow planning. The hybrid model can be used to evaluate the impact of different decisions on construction supply chain management.
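The DES side can be reduced to a minimal event-list sketch, assuming a single crew consuming batch deliveries (heapq-based; all rates, times and quantities are illustrative, not the New Zealand case study's data):

```python
import heapq

def simulate_deliveries(events, crew_rate):
    """Minimal discrete-event sketch of on-site material flow: trucks
    deliver material batches at scheduled times; a single crew consumes
    each batch at crew_rate units per hour. Returns the finish time."""
    heap = list(events)               # (arrival_time_h, quantity)
    heapq.heapify(heap)
    clock = 0.0
    while heap:
        t, qty = heapq.heappop(heap)
        clock = max(clock, t)         # crew may sit idle until the truck lands
        clock += qty / crew_rate      # time spent installing the batch
    return clock

# Three deliveries; the crew installs 10 units per hour.
finish = simulate_deliveries([(0.0, 20.0), (1.0, 30.0), (5.0, 10.0)], 10.0)
```

A planner would sweep delivery schedules through such a loop to compare production scenarios, while an agent-based layer could decide the schedules themselves.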

Keywords: construction supply-chain management, simulation modeling, decision-support tools, hybrid simulation

Procedia PDF Downloads 210
4203 Simulation of Complex-Shaped Particle Breakage with a Bonded Particle Model Using the Discrete Element Method

Authors: Felix Platzer, Eric Fimbinger

Abstract:

In Discrete Element Method (DEM) simulations, the breakage behavior of particles can be simulated based on different principles. In the case of large, complex-shaped particles that show various breakage patterns depending on the scenario leading to the failure, and that often only break locally instead of fracturing completely, some of these principles do not lead to realistic results. The reason is that in such cases, methods like the Particle Replacement Method (PRM) or Voronoi Fracture replace the initial particle (the one intended to break) with several sub-particles when certain breakage criteria are reached, such as exceeding the fracture energy. These methods are therefore commonly used to simulate materials that fracture completely rather than breaking locally. When simulating local failure, it is instead advisable to pre-build the initial particle from sub-particles that are bonded together; the dimensions of these sub-particles consequently define the minimum size of the fracture fragments. This structure of bonded sub-particles enables the initial particle to break at the locations of the highest local loads, due to the failure of the bonds in those areas, with several sub-particle clusters resulting from the fracture, which can in turn also break locally. In this project, different methods for the generation and calibration of complex-shaped particle conglomerates using bonded particle modeling (BPM) were evaluated on the example of filter cake, with the aim of depicting more realistic fracture behavior. The method that proved suitable for this purpose, and which furthermore allows efficient and realistic simulation of the breakage behavior of complex-shaped particles in industrial-sized simulations, is presented in this paper.
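The local-failure mechanism can be reduced to a minimal sketch: each step, the bonds between sub-particles are checked against a strength limit and only the overloaded ones fail (the bond list, forces and strength are illustrative, not a calibrated filter-cake model):

```python
def update_bonds(bonds, forces, strength):
    """One BPM step: remove every bond whose transmitted force exceeds
    its strength, so the conglomerate can fail locally instead of
    fracturing completely. bonds: list of (i, j) sub-particle pairs;
    forces: parallel list of current bond forces."""
    survivors, broken = [], []
    for bond, f in zip(bonds, forces):
        (broken if f > strength else survivors).append(bond)
    return survivors, broken

bonds = [(0, 1), (1, 2), (2, 3), (3, 0)]
forces = [50.0, 120.0, 80.0, 130.0]     # local loading peaks on two bonds
survivors, broken = update_bonds(bonds, forces, strength=100.0)
```

The surviving bond graph then defines the sub-particle clusters left after the fracture, each of which can break again in later steps.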

Keywords: bonded particle model, DEM, filter cake, particle breakage

Procedia PDF Downloads 213
4202 Magnetorheological Elastomer Composites Obtained by Extrusion

Authors: M. Masłowski, M. Zaborski

Abstract:

Magnetorheological elastomer (MRE) composites based on micro- and nano-sized magnetite, gamma iron oxide and carbonyl iron powder in ethylene-octene rubber are reported and studied. The preparation method influenced the specific properties of the MREs (isotropy/anisotropy). The use of extrusion instead of traditional preparation processes (two-roll mill, mixer) is presented. Micro- and nano-sized magnetites, as well as gamma iron oxide and carbonyl iron powder, were found to be active fillers that improve the mechanical properties of the elastomers; they also changed the magnetic properties of the composites. The extrusion process likewise influenced the mechanical properties of the composites and the dispersion of the magnetic fillers. Dynamic mechanical analysis (DMA) indicates the presence of a strongly developed secondary structure in the vulcanizates. Scanning electron microscopy (SEM) images show that the improved dispersion had a significant effect on the composite properties. Vibrating sample magnetometer (VSM) measurements proved that all composites exhibit good magnetic properties.

Keywords: extrusion, magnetic fillers, magnetorheological elastomers, mechanical properties

Procedia PDF Downloads 322
4201 Self-Organizing Maps for Credit Card Fraud Detection

Authors: ChunYi Peng, Wei Hsuan Cheng, Shyh Kuang Ueng

Abstract:

This study focuses on the application of self-organizing map (SOM) technology to the analysis of credit card transaction data, aiming to enhance the accuracy and efficiency of fraud detection. SOM, as an artificial neural network, is particularly suited for pattern recognition and data classification, making it highly effective for the complex and variable nature of credit card transaction data. By analyzing transaction characteristics with SOM, the research identifies abnormal transaction patterns that could indicate potentially fraudulent activities. Moreover, this study has developed a specialized visualization tool to intuitively present the relationships between SOM analysis outcomes and transaction data, aiding financial institution personnel in quickly identifying and responding to potential fraud, thereby reducing financial losses. Additionally, the research explores the integration of SOM technology with composite intelligent system technologies (including finite state machines, fuzzy logic, and decision trees) to further improve fraud detection accuracy. This multimodal approach provides a comprehensive perspective for identifying and understanding various types of fraud within credit card transactions. In summary, by integrating SOM technology with visualization tools and composite intelligent system technologies, this research offers a more effective method of fraud detection for the financial industry, not only enhancing detection accuracy but also deepening the overall understanding of fraudulent activities.
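A minimal SOM, for illustration only (pure Python, rectangular grid, Gaussian neighbourhood, shrinking radius; the two-feature transaction data are invented and already scaled to [0, 1]):

```python
import math
import random

def train_som(data, grid=(4, 4), epochs=30, lr=0.5, seed=0):
    """Tiny SOM trainer: find the best matching unit for each sample and
    pull it (and its grid neighbours) towards the sample."""
    rng = random.Random(seed)
    rows, cols = grid
    dim = len(data[0])
    w = [[[rng.random() for _ in range(dim)] for _ in range(cols)]
         for _ in range(rows)]
    for epoch in range(epochs):
        sigma = max(0.5, rows / 2 * (1 - epoch / epochs))  # shrinking radius
        for x in data:
            # best matching unit (smallest squared distance)
            br, bc = min(((r, c) for r in range(rows) for c in range(cols)),
                         key=lambda rc: sum((a - b) ** 2 for a, b in
                                            zip(x, w[rc[0]][rc[1]])))
            for r in range(rows):
                for c in range(cols):
                    h = math.exp(-((r - br) ** 2 + (c - bc) ** 2)
                                 / (2 * sigma ** 2))
                    w[r][c] = [wi + lr * h * (xi - wi)
                               for wi, xi in zip(w[r][c], x)]
    return w

# Two clusters of transactions (e.g. normalized amount and hour-of-day);
# a transaction mapping far from all trained units would look anomalous.
data = [(0.1, 0.1), (0.15, 0.12), (0.9, 0.9), (0.88, 0.92)]
weights = train_som(data)
```

After training, fraud screening amounts to computing a new transaction's distance to its best matching unit: large quantization errors flag abnormal patterns.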

Keywords: self-organizing map technology, fraud detection, information visualization, data analysis, composite intelligent system technologies, decision support technologies

Procedia PDF Downloads 63
4200 A Deep Learning-Based Pedestrian Trajectory Prediction Algorithm

Authors: Haozhe Xiang

Abstract:

With the rise of the Internet of Things era, intelligent products are gradually integrating into people's lives. Pedestrian trajectory prediction has become a key issue, crucial for the motion path planning of intelligent agents such as autonomous vehicles, robots, and drones. In the current technological context, deep learning technology is becoming increasingly sophisticated and is gradually replacing traditional models. Pedestrian trajectory prediction algorithms combining neural networks and attention mechanisms have significantly improved prediction accuracy. Based on in-depth research on deep learning and pedestrian trajectory prediction algorithms, this article focuses on modeling the physical environment and learning the time dependence of historical trajectories, while also handling social interaction between pedestrians and scene interaction between pedestrians and the environment. An improved pedestrian trajectory prediction algorithm is proposed by analyzing existing model architectures. Experiments on public datasets demonstrate the algorithm's effectiveness, with the improvements yielding acceptable predicted trajectories.
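The social-interaction attention step can be sketched as plain scaled dot-product attention over neighbouring pedestrians' state embeddings (the 2-D embeddings below are illustrative, not the article's architecture):

```python
import math

def attention_weights(query, keys):
    """Scaled dot-product attention weights: score each neighbour's key
    against the ego pedestrian's query, then normalize with a softmax."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    m = max(scores)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# The ego pedestrian should attend most to the neighbour on a converging course.
query = [1.0, 0.0]
keys = [[1.0, 0.0],    # converging neighbour
        [-1.0, 0.0],   # neighbour moving away
        [0.0, 1.0]]    # neighbour crossing orthogonally
w = attention_weights(query, keys)
```

In a full predictor these weights would combine the neighbours' value vectors into a social context fed to the LSTM at each time step.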

Keywords: deep learning, graph convolutional network, attention mechanism, LSTM

Procedia PDF Downloads 76
4199 Driving Forces of Net Carbon Emissions in a Tropical Dry Forest, Oaxaca, México

Authors: Rogelio Omar Corona-Núñez, Alma Mendoza-Ponce

Abstract:

The tropical dry forest is not only one of the most important tropical ecosystems in terms of area, but also one of the most degraded. However, little is known about the impacts of degradation on carbon stocks, and therefore on carbon emissions. Different studies explain its deforestation dynamics, but there is still a lack of understanding of how they correlate with carbon losses. Recently, different authors have built current biomass maps for the tropics and for Mexico; however, it is not clear how well these maps predict at the local scale, or how they can be used to estimate carbon emissions. This study quantifies net forest carbon losses by comparing the potential carbon stocks with the different current biomass maps for the southern Pacific coast of Oaxaca, Mexico. The results show important differences among the current biomass estimates, with no clear agreement. By aggregating the information, however, it is possible to infer the general patterns of biomass distribution and to identify the driving forces of the carbon emissions. This study estimated that ~44% of the potential carbon stock estimated for the region is still present. A total of 6,764 GgC has been emitted due to deforestation and degradation of the forest, at an above-ground biomass loss of 66.4 Mg ha⁻¹; ~62% of the total carbon emissions can be attributed to forest degradation. Most of the carbon losses were identified in places suitable for agriculture, close to rural areas and roads, while the lowest losses occurred in places with high water stress and within the boundaries of the National Protected Area. Moreover, places not suitable for agriculture but close to the coast showed carbon losses as a result of urban settlements.
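The emissions bookkeeping behind such estimates can be sketched as follows; the 0.47 carbon fraction of dry biomass is the IPCC default (an assumption here, not necessarily the study's calibrated value), and the area and biomass numbers are illustrative:

```python
def carbon_emissions_gg(potential_agb, current_agb, area_ha,
                        carbon_fraction=0.47):
    """Committed carbon emissions (Gg C) from above-ground biomass loss.
    AGB in Mg ha^-1; carbon_fraction = 0.47 is the IPCC default carbon
    fraction of dry biomass (an assumed value)."""
    loss_mg = (potential_agb - current_agb) * area_ha   # Mg of biomass lost
    return loss_mg * carbon_fraction / 1000.0           # Mg C -> Gg C

# Illustrative numbers: 66.4 Mg ha^-1 of AGB lost over 100,000 ha.
emissions = carbon_emissions_gg(potential_agb=120.0, current_agb=53.6,
                                area_ha=100000.0)
```

Comparing the same bookkeeping under a deforestation-only biomass map versus a degradation-aware map is what lets the degradation share of emissions be separated out.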

Keywords: above ground biomass, deforestation, degradation, driving forces, tropical deciduous forest

Procedia PDF Downloads 186
4198 Retail Strategy to Reduce Waste Keeping High Profit Utilizing Taylor's Law in Point-of-Sales Data

Authors: Gen Sakoda, Hideki Takayasu, Misako Takayasu

Abstract:

Waste reduction is a fundamental problem for sustainability. Methods for waste reduction using point-of-sales (POS) data are proposed, utilizing the knowledge of a recent econophysics study on a statistical property of POS data. Concretely, a non-stationary time series analysis method based on the particle filter is developed, which accounts for the abnormal fluctuation scaling known as Taylor's law. The method is extended to handle sales data that are incomplete because of stock-outs, by introducing maximum likelihood estimation for censored data. A way of determining the optimal stock by pricing the cost of waste reduction is also proposed. This study focuses on the examination of the methods for large sales numbers, where Taylor's law is obvious. Numerical analysis using aggregated POS data shows the effectiveness of the methods in reducing food waste while maintaining a high profit for large sales numbers. Moreover, pricing the cost of waste reduction reveals that a small profit loss realizes substantial waste reduction, especially when the proportionality constant of Taylor's law is small. Specifically, around 1% profit loss realizes a halving of disposal at a proportionality constant of 0.12, which is the actual value for the processed food items used in this research. The methods provide practical and effective solutions for waste reduction while keeping a high profit, especially with large sales numbers.
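Taylor's law ties the variance of sales fluctuations to the mean. A hedged sketch of how an optimal stock could then be set: the variance form Var = a*mu + b*mu² (a Poisson term plus an abnormal-fluctuation term), the service factor z and all numbers are assumptions for illustration, with b = 0.12² echoing the proportionality constant quoted above:

```python
import math

def optimal_stock(mean_sales, a, b, z=1.64):
    """Stock level covering demand with roughly 95% service, assuming the
    sales fluctuation follows Taylor's law: variance = a*mu + b*mu**2."""
    sigma = math.sqrt(a * mean_sales + b * mean_sales ** 2)
    return mean_sales + z * sigma

# For large sales numbers the b*mu^2 term dominates, so the safety margin
# grows linearly with the mean instead of with its square root.
small = optimal_stock(10.0, a=1.0, b=0.0144)      # b = 0.12**2
large = optimal_stock(10000.0, a=1.0, b=0.0144)
```

The relative safety margin shrinks as sales grow, which is why the paper's results are strongest for large sales numbers, and a small b keeps the margin (and hence the waste needed for a given service level) low.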

Keywords: food waste reduction, particle filter, point-of-sales, sustainable development goals, Taylor's law, time series analysis

Procedia PDF Downloads 134
4197 Shariah Guideline on Value-Based Intermediation Implementation in the Light of Maqasid Shariah Analysis

Authors: Muhammad Izzam Bin Mohd Khazar, Ruqayyah Binti Mohamad Ali, Nurul Atiqah Binti Yusri

Abstract:

Value-based intermediation (VBI) has been introduced by Bank Negara Malaysia (BNM) as the next strategic direction and growth driver for Islamic banking institutions. The aim of VBI is to deliver the intended outcomes of Shariah through practices, conduct, and offerings that generate a positive and sustainable impact on the economy, community and environment, aligned with Maqasid Shariah in preserving the common interest of society by preventing harm and maximizing benefit. Upon its implementation, VBI will put the current Shariah compliance treatment to the test and drive new policies and systems that can meritoriously entrench and convey the objectives of Shariah. However, discussion of VBI in the light of Maqasid analysis is still scarce, so further research needs to be undertaken. The translation of the VBI vision into a quantifiable Maqasid Shariah measurement is yet to be explored due to the variable nature of Maqasid, and contemporary scholars also hold different views on the implementation of VBI. This paper aims to discuss the importance of Maqasid Shariah in current Islamic finance transactions by providing a Shariah index measurement for the application of VBI. This study also intends to explore basic Shariah guidelines and parameters based on the objectives of Shariah: preservation of the five pillars (religion, life, progeny, intellect and wealth), with further elaboration on the preservation of wealth under five headings: rawaj (circulation and marketability); wuduh (transparency); hifz (preservation); thabat (durability and tranquillity); and 'adl (equity and justice). In alignment with these headings, Islamic finance can be innovated for VBI implementation, particularly in Maybank Islamic, a significant leader in the IFI market.

Keywords: Islamic Financial Institutions, Maqasid Index, Maqasid Shariah, sustainability, value-based intermediation

Procedia PDF Downloads 170
4196 Impacts of Opium Addiction on Patterns of Angiographic Findings in Patients with Coronary Artery Syndrome

Authors: Alireza Abdiardekani, Maryam Salimi, Shirin Sarejloo, Mehdi Bazrafshan, Amir Askarinejad, Amirhossein Salimi, Hanieh Bazrafshan, Salar Javanshir, Armin Attar, Shokoufeh Khanzadeh, Mohsen Esmaeili, Hamed Bazrafshan Drissi

Abstract:

Background: Opium is, after tobacco, the most abused substance in the Middle East, and the effects of opium use on coronary artery disease remain unclear. This study aimed to assess the association between opium use and angiographic findings in patients diagnosed with acute coronary syndrome (ACS) at Al-Zahra Heart Hospital, Shiraz, Iran. Methods: In this case-control study, 170 patients admitted for coronary angiography were enrolled from 2019 to 2020. They were categorized into two groups based on their history: "non-opium" and "opium." SPSS (Version 26) was used to investigate the correlation between opium addiction and the severity of coronary artery disease. Results: The mean age of the participants was 61.63±9.07 years; 49 (28.82%) patients were female and 121 (71.17%) were male. Three-vessel disease was more frequent in both the non-opium (40; 47.05%) and opium (45; 52.94%) groups. There was a significant correlation between opium consumption and the severity of involvement of the second diagonal artery (D2) and the right coronary artery (RCA), and a strong positive correlation between opium consumption and the location of the vascular lesion in the left circumflex artery. Conclusion: Opium, as an independent risk factor for cardiovascular diseases, can have specific effects on angiographic findings in patients with coronary artery disease. Public health officials and policymakers should arrange programs to increase the general population's awareness of opium use and its consequences.
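Associations of this kind are typically screened with a Pearson chi-square test on a 2x2 case-control table. A minimal sketch with hypothetical counts (not the study's data) follows:

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 case-control table
    [[a, b], [c, d]] (e.g. lesion present/absent vs opium/non-opium),
    without continuity correction; compare against the 3.84 critical
    value (alpha = 0.05, 1 degree of freedom)."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical counts for 170 patients: rows = opium / non-opium,
# columns = lesion present / absent.
chi2 = chi_square_2x2(45, 40, 25, 60)
significant = chi2 > 3.84
```

A statistic above the critical value rejects independence between exposure and lesion location at the 5% level, the kind of result the abstract reports for the D2 and RCA involvement.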

Keywords: acute coronary syndrome, opium, coronary artery disease, angiography

Procedia PDF Downloads 136