Search results for: carbon efficiency
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9158

1268 Explanatory Variables for Crash Injury Risk Analysis

Authors: Guilhermina Torrao

Abstract:

An extensive number of studies have been conducted to determine the factors which influence crash injury risk (CIR); however, uncertainties inherent to selected variables have been neglected. A review of existing literature is required to not only obtain an overview of the variables and measures but also ascertain the implications when comparing studies without a systematic view of variable taxonomy. Therefore, the aim of this literature review is to examine and report on peer-reviewed studies in the field of crash analysis and to understand the implications of broad variations in variable selection in CIR analysis. The objective of this study is to demonstrate the variance in variable selection and classification when modeling injury risk involving occupants of light vehicles by presenting an analytical review of the literature. Based on data collected from 64 journal publications reported over the past 21 years, the analytical review discusses the variables selected by each study across an organized list of predictors for CIR analysis and provides a better understanding of the contribution of accident and vehicle factors to injuries acquired by occupants of light vehicles. A cross-comparison analysis demonstrates that almost half the studies (48%) did not consider vehicle design specifications (e.g., vehicle weight), whereas, for those that did, the vehicle age/model year was the most selected explanatory variable used by 41% of the literature studies. For those studies that included speed risk factor in their analyses, the majority (64%) used the legal speed limit data as a ‘proxy’ of vehicle speed at the moment of a crash, imposing limitations for CIR analysis and modeling. Despite the proven efficiency of airbags in minimizing injury impact following a crash, only 22% of studies included airbag deployment data. A major contribution of this study is to highlight the uncertainty linked to explanatory variable selection and identify opportunities for improvements when performing future studies in the field of road injuries.

Keywords: crash, explanatory, injury, risk, variables, vehicle

Procedia PDF Downloads 135
1267 Efficient Treatment of Azo Dye Wastewater with Simultaneous Energy Generation by Microbial Fuel Cell

Authors: Soumyadeep Bhaduri, Rahul Ghosh, Rahul Shukla, Manaswini Behera

Abstract:

The textile industry consumes a substantial amount of water throughout the processing and production of textile fabrics. This water eventually becomes wastewater, which is an immensely damaging nuisance because of its dye content. Wastewater streams contain between 2.0% and 50.0% of the total weight of dye used, depending on the dye class. The management of dye effluent in textile industries presents a formidable challenge to global sustainability. The current focus is on implementing wastewater treatment technologies that enable the recycling of wastewater, reduce energy usage and offset carbon emissions. A microbial fuel cell (MFC) is a device that utilizes microorganisms as a bio-catalyst to effectively treat wastewater while also producing electricity. The MFC harnesses the chemical energy present in wastewater by oxidizing organic compounds in the anodic chamber and reducing an electron acceptor in the cathodic chamber, thereby generating electricity. This research investigates the potential of MFCs to tackle the challenge of azo dye removal while simultaneously generating electricity. Although MFCs are well-established for wastewater treatment, their application in dye decolorization with concurrent electricity generation remains relatively unexplored. This study aims to address this gap by assessing the effectiveness of MFCs as a sustainable solution for treating wastewater containing azo dyes. By harnessing microorganisms as biocatalysts, MFCs offer a promising avenue for environmentally friendly dye effluent management. The performance of MFCs in treating azo dyes and generating electricity was evaluated by optimizing the Chemical Oxygen Demand (COD) and Hydraulic Retention Time (HRT) of the influent. COD and HRT values ranged from 1600 mg/L to 2400 mg/L and 5 to 9 days, respectively. Results showed that the maximum open circuit voltage (OCV) reached 648 mV at a COD of 2400 mg/L and HRT of 5 days. Additionally, maximum COD removal of 98% and maximum color removal of 98.91% were achieved at a COD of 1600 mg/L and HRT of 9 days. Furthermore, the study observed a maximum power density of 19.95 W/m3 at a COD of 2400 mg/L and HRT of 5 days. Electrochemical analyses, including linear sweep voltammetry (LSV), cyclic voltammetry (CV) and electrochemical impedance spectroscopy (EIS), were performed to determine the response current and internal resistance of the system. To optimize pH and dye concentration, pH values were varied from 4 to 10, and dye concentrations ranged from 25 mg/L to 175 mg/L. The highest voltage output of 704 mV was recorded at pH 7, while a dye concentration of 100 mg/L yielded the maximum output of 672 mV. This study demonstrates that MFCs offer an efficient and sustainable solution for treating azo dyes in textile industry wastewater while concurrently generating electricity. These findings suggest the potential of MFCs to contribute to environmental remediation and sustainable development efforts on a global scale.
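
The two headline performance figures quoted above, COD removal efficiency and volumetric power density, follow from simple definitions. The sketch below illustrates those definitions only; the external load resistance and anode working volume are hypothetical placeholders, not values reported in the study.

```python
# Minimal sketch of two MFC figures of merit. COD values come from the abstract;
# the load resistance and anode volume are assumed, illustrative numbers.

def cod_removal_efficiency(cod_in_mg_per_L, cod_out_mg_per_L):
    """Percent COD removed between influent and effluent."""
    return 100.0 * (cod_in_mg_per_L - cod_out_mg_per_L) / cod_in_mg_per_L

def volumetric_power_density(voltage_V, external_resistance_ohm, anode_volume_m3):
    """Power density (W/m^3) from the cell voltage across an external load."""
    power_W = voltage_V ** 2 / external_resistance_ohm
    return power_W / anode_volume_m3

if __name__ == "__main__":
    # 98% removal at an influent COD of 1600 mg/L implies an effluent of ~32 mg/L
    print(cod_removal_efficiency(1600.0, 32.0))            # -> 98.0
    # Hypothetical: 0.648 V across a 100-ohm load in a 2.1e-4 m^3 anode chamber
    print(volumetric_power_density(0.648, 100.0, 2.1e-4))  # ~20 W/m^3
```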

Keywords: textile wastewater treatment, microbial fuel cell, renewable energy, sustainable wastewater treatment

Procedia PDF Downloads 22
1266 Metal-Based Deep Eutectic Solvents for Extractive Desulfurization of Fuels: Analysis from Molecular Dynamics Simulations

Authors: Aibek Kukpayev, Dhawal Shah

Abstract:

Combustion of sour fuels containing a high amount of sulfur leads to the formation of sulfur oxides, which harm the environment and have a negative impact on human health. Considering this, several regulations have been imposed to bring the sulfur content in fuel down to less than 10 ppm. In recent years, novel deep eutectic solvents (DESs) have been developed to achieve deep desulfurization, particularly to extract thiophenic compounds from liquid fuels. These novel DESs, considered analogous to ionic liquids, are green, eco-friendly, inexpensive, and sustainable. Herein, using molecular dynamics simulations, we analyze the interactions of metal-based DESs with a model oil consisting of thiophenic compounds. The DESs used consist of polyethylene glycol (PEG-200) as a hydrogen bond donor, choline chloride (ChCl) or tetrabutyl ammonium chloride (TBAC) as a hydrogen bond acceptor, and cobalt chloride (CoCl₂) as the metal salt. In particular, the combination ChCl:PEG-200:CoCl₂ at a ratio of 1:2:1 and the combination TBAC:PEG-200:CoCl₂ at a ratio of 1:2:0.25 were simulated, separately, with a model oil consisting of octane and thiophenes at 25°C and 1 bar. The results of the molecular dynamics simulations were analyzed in terms of the interaction energies between the different components. The simulations revealed a stronger interaction between DESs and thiophenes compared with octane and thiophenes, suggestive of an efficient desulfurization process. In addition, our analysis suggests that the choice of hydrogen bond acceptor strongly influences the efficiency of the desulfurization process. Taken together, the results also show the importance of the metal ion in the process, although it is present in a small amount, and the role of the polymer in desulfurization of the model fuel.

Keywords: deep eutectic solvents, desulfurization, molecular dynamics simulations, thiophenes

Procedia PDF Downloads 146
1265 Influence of the Adsorption of Anionic–Nonionic Surfactants/Silica Nanoparticles Mixture on Clay Rock Minerals in Chemical Enhanced Oil Recovery

Authors: C. Mendoza Ramírez, M. Gambús Ordaz, R. Mercado Ojeda

Abstract:

Chemical flooding with surfactant solutions, based on their property of reducing the interfacial tension between crude oil and water, is a potential application of chemical enhanced oil recovery (CEOR). However, the high retention of surfactants associated with adsorption in the porous medium, together with the complexity of the mineralogical composition of the reservoir rock, limits the efficiency of crude oil displacement. This study evaluates the effect of the concentration of a mixture of anionic-nonionic surfactants with silica nanoparticles on a rock sample composed of 25.14% clay minerals of the kaolinite, chlorite, halloysite and montmorillonite type, according to the results of X-ray Diffraction and Scanning Electron Microscopy analyses (XRD and SEM, respectively). The amount of the surfactant mixture adsorbed on the clay rock minerals was quantified by constructing its calibration curve and applying the 4-Region Isotherm Model with UV-Visible spectroscopy. The adsorption rate of the surfactant on the clay rock averages 32% across all concentrations, influenced by the substrate's surface area of 1.6 m²/g and by the mineralogical composition of the clay, which increases the cation exchange capacity (CEC). In addition, in Regions I and II a final concentration measurement is not evident in the UV-Vis, due to the surfactant's ionic nature, its high affinity for the clay rock and its low concentration. Finally, for potential CEOR applications, the adsorption of these mixed surfactant systems is considered because of their industrial relevance, and it is concluded that it is possible to use concentrations in Regions III and IV; initially the adsorption increases and then levels off at equilibrium, where interfacial tension values on the order of 10⁻¹ mN/m are reached.
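
The adsorbed amount underlying the isotherm described above is conventionally obtained by solution depletion from a UV-Vis calibration curve. The sketch below shows that standard calculation; the calibration slope, intercept, concentrations, volume and rock mass are hypothetical examples, not the study's data.

```python
# Minimal sketch of the solution-depletion calculation used to quantify
# surfactant adsorption from a UV-Vis calibration curve. All numbers are
# assumed, illustrative values.

def concentration_from_absorbance(absorbance, slope, intercept):
    """Linear calibration A = slope * C + intercept, solved for C."""
    return (absorbance - intercept) / slope

def adsorbed_amount_mg_per_g(c0_mg_L, ce_mg_L, volume_L, rock_mass_g):
    """Adsorption density q = (C0 - Ce) * V / m."""
    return (c0_mg_L - ce_mg_L) * volume_L / rock_mass_g

def adsorption_percent(c0_mg_L, ce_mg_L):
    return 100.0 * (c0_mg_L - ce_mg_L) / c0_mg_L

if __name__ == "__main__":
    ce = concentration_from_absorbance(absorbance=0.42, slope=0.0021, intercept=0.01)
    q = adsorbed_amount_mg_per_g(c0_mg_L=300.0, ce_mg_L=ce, volume_L=0.05, rock_mass_g=5.0)
    print(ce, q, adsorption_percent(300.0, ce))
```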

Keywords: anionic–nonionic surfactants, clay rock, adsorption, 4-region isotherm model, cation exchange capacity, critical micelle concentration, enhanced oil recovery

Procedia PDF Downloads 69
1254 Combination Method of Cold Plasma and Liquid Threads

Authors: Nino Tsamalaidze

Abstract:

Cold plasma is an ionized neutral gas with a temperature of 30-40°C, but its impact involves not only the gas itself but also active molecules, charged particles, heat and low-power UV radiation. The main goal of the technology we describe is to trigger the natural function of skin regeneration and improve the internal metabolism, which leads to a pronounced rejuvenation effect. In particular: eliminate fine mimic wrinkles; get rid of wrinkles around the mouth (purse-string wrinkles); reduce the overhang of the upper eyelid; eliminate bags under the eyes; provide a lifting effect on the oval of the face; reduce stretch marks; shrink pores; even out the skin and reduce the appearance of acne and scars; and remove pigmentation. A clear indication of the major findings of the study is based on current patient practice. The method is to use a combination of cold plasma and liquid threads. The advantage of cold plasma is undoubtedly its efficiency; the result of its implementation can be compared with the result of a surgical facelift, despite the fact that the procedure is non-invasive and the risks are minimized. Another advantage is that the technique can be applied to the most sensitive skin of the face - the eyelids and the area around the eyes. Cold plasma is one of the few techniques that eliminates bags under the eyes and overhanging eyelids while not violating the integrity of the tissues. In addition to the rejuvenation and lifting effect, among the benefits of cold plasma is also the removal of scars, couperose, stretch marks and other skin defects; plasma also helps to get rid of acne, seborrhea and skin fungus, and even heals ulcers. The cold plasma method makes it possible to achieve a result similar to blepharoplasty. Carried out on the skin of the eyelids, the procedure allows non-surgical correction of the eyelid line in 3-4 sessions. One of the undoubted advantages of this method is a short rehabilitation and rapid healing of the skin.

Keywords: wrinkles, telangiectasia, pigmentation, pore closing

Procedia PDF Downloads 84
1263 Fabrication of Coatable Polarizer by Guest-Host System for Flexible Display Applications

Authors: Rui He, Seung-Eun Baik, Min-Jae Lee, Myong-Hoon Lee

Abstract:

The polarizer is one of the most essential optical elements in LCDs. Currently, the most widely used polarizers for LCDs are derivatives of the H-sheet polarizer. There is a need for coatable polarizers, which are much thinner and more stable than H-sheet polarizers. One possible approach to obtaining thin, stable, and coatable polarizers is based on the use of a highly ordered guest-host system. In our research, we aimed to fabricate a coatable polarizer based on a highly ordered liquid crystalline monomer and dichroic dye 'guest-host' system, in which anisotropic absorption of light can be achieved by aligning a dichroic dye (guest) in the cooperative motion of the ordered liquid crystal (host) molecules. Firstly, we designed and synthesized a new reactive liquid crystalline monomer containing polymerizable acrylate groups as the 'host' material. The structure was confirmed by ¹H-NMR and IR spectroscopy. The liquid crystalline behavior was studied by differential scanning calorimetry (DSC) and polarized optical microscopy (POM). It was confirmed that the monomers possess a highly ordered smectic phase at relatively low temperature. Then, the photocurable 'guest-host' system was prepared by mixing the liquid crystalline monomer, dichroic dye and photoinitiator. Coatable polarizers were fabricated by spin-coating the above mixture onto a substrate with an alignment layer. In-situ photopolymerization was carried out at room temperature by irradiating with UV light, resulting in the formation of a crosslinked structure that stabilized the aligned dichroic dye molecules. Finally, the dichroic ratio (DR), order parameter (S) and polarization efficiency (PE) were determined by polarized UV/Vis spectroscopy. We prepared coatable polarizers using different types of dichroic dyes to meet the requirements of display applications. The results reveal that the coatable polarizers at a thickness of 8 μm exhibited DR = 12-17 and relatively high PE (>96%), with the highest PE being 99.3%, demonstrating their potential for LCD or flexible display applications.
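
The three quantities quoted above (DR, S, PE) are routinely computed from polarized UV/Vis measurements. The sketch below uses the conventional definitions of DR and S; the expression for PE is one common convention and may not be the exact one used in the study, and all numerical inputs are hypothetical.

```python
import math

# Minimal sketch: dichroic ratio, order parameter and polarization efficiency
# from polarized UV-Vis data. A_par/A_perp are absorbances parallel/perpendicular
# to the dye alignment; T_par/T_perp are the corresponding transmittances.
# All numbers below are assumed, illustrative values.

def dichroic_ratio(A_par, A_perp):
    return A_par / A_perp

def order_parameter(dr):
    """S = (DR - 1) / (DR + 2)."""
    return (dr - 1.0) / (dr + 2.0)

def polarization_efficiency(T_par, T_perp):
    """One common convention: PE = sqrt((T_par - T_perp)/(T_par + T_perp)) * 100."""
    return 100.0 * math.sqrt((T_par - T_perp) / (T_par + T_perp))

if __name__ == "__main__":
    dr = dichroic_ratio(A_par=1.20, A_perp=0.08)   # DR = 15, within the 12-17 range reported
    print(dr, order_parameter(dr))                 # S ~ 0.82
    print(polarization_efficiency(T_par=0.42, T_perp=0.003))  # ~99%
```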

Keywords: coatable polarizer, display, guest-host, liquid crystal

Procedia PDF Downloads 251
1262 AI-Driven Solutions for Optimizing Master Data Management

Authors: Srinivas Vangari

Abstract:

In the era of big data, ensuring the accuracy, consistency, and reliability of critical data assets is crucial for data-driven enterprises. Master Data Management (MDM) plays a crucial role in this endeavor. This paper investigates the role of Artificial Intelligence (AI) in enhancing MDM, focusing on how AI-driven solutions can automate and optimize various stages of the master data lifecycle. By integrating AI (Quantitative and Qualitative Analysis) into processes such as data creation, maintenance, enrichment, and usage, organizations can achieve significant improvements in data quality and operational efficiency. Quantitative analysis is employed to measure the impact of AI on key metrics, including data accuracy, processing speed, and error reduction. For instance, our study demonstrates an 18% improvement in data accuracy and a 75% reduction in duplicate records across multiple systems post-AI implementation. Furthermore, AI’s predictive maintenance capabilities reduced data obsolescence by 22%, as indicated by statistical analyses of data usage patterns over a 12-month period. Complementing this, a qualitative analysis delves into the specific AI-driven strategies that enhance MDM practices, such as automating data entry and validation, which resulted in a 28% decrease in manual errors. Insights from case studies highlight how AI-driven data cleansing processes reduced inconsistencies by 25% and how AI-powered enrichment strategies improved data relevance by 24%, thus boosting decision-making accuracy. The findings demonstrate that AI significantly enhances data quality and integrity, leading to improved enterprise performance through cost reduction, increased compliance, and more accurate, real-time decision-making. These insights underscore the value of AI as a critical tool in modern data management strategies, offering a competitive edge to organizations that leverage its capabilities.

Keywords: artificial intelligence, master data management, data governance, data quality

Procedia PDF Downloads 18
1261 The Intensity of Root and Soil Respiration Is Significantly Determined by the Organic Matter and Moisture Content of the Soil

Authors: Zsolt Kotroczó, Katalin Juhos, Áron Béni, Gábor Várbíró, Tamás Kocsis, István Fekete

Abstract:

Soil organic matter plays an extremely important role in the functioning and regulation processes of ecosystems. It follows that the C content of soil organic matter is one of the most important indicators of soil fertility. Part of the carbon stored in soils is returned to the atmosphere during soil respiration. Climate change and inappropriate land use can accelerate these processes. Our work aimed to determine how soil CO2 emissions change over ten years as a result of organic matter manipulation treatments. This allowed us to examine not only the effects of different organic matter inputs but also the effects of the different microclimates that develop as a result of the treatments. We carried out our investigations in the area of the Síkfőkút DIRT (Detritus Input and Removal Treatment) Project. The research area is located in the southern, hilly landscape of the Bükk Mountains, northeast of Eger (Hungary). GPS coordinates of the project: 47°55′34′′ N and 20°26′29′′ E, altitude 320-340 m. The soil of the area is a Luvisol. The 27-hectare protected forest area is now under the supervision of the Bükk National Park. The experimental plots in Síkfőkút were established in 2000. We established six litter manipulation treatments, each with three 7×7 m replicate plots, under complete canopy cover. There were two detritus addition treatments (Double Wood, DW, and Double Litter, DL), three treatments in which detritus inputs were removed (No Litter, NL; No Roots, NR; No Inputs, NI), and the Controls (Co). After the establishment of the plots, during the drier periods, the NR and NI treatments showed the highest CO2 emissions. In the first few years, the effect of this process was evident because, due to the lack of living vegetation, the amount of evapotranspiration on the NR and NI plots was much lower, and transpiration practically ceased on these plots. In the wetter periods, the NL and NI treatments showed the lowest soil respiration values, which were significantly lower compared to the Co, DW, and DL treatments. Due to the lower organic matter content and the lack of surface litter cover, the water storage capacity of these soils was significantly limited; therefore, we measured the lowest average moisture content among the treatments after ten years. Soil respiration is significantly influenced by temperature values. Furthermore, the supply of nutrients to the soil microorganisms is also a determining factor, which in this case is influenced by the litter production dictated by the treatments. In dry soils with a moisture content of less than 20% in the initial period, soil respiration in the litter removal treatments showed a strong correlation with soil moisture (r=0.74). In very dry soils, a small increase in moisture does not cause a significant increase in soil respiration, while it does in a slightly higher moisture range. In wet soils, the temperature is the main regulating factor; above a certain moisture limit, water displaces soil air from the soil pores, which inhibits aerobic decomposition processes, and so heterotrophic soil respiration also declines.
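
The correlation reported above (r = 0.74 between soil moisture and respiration in dry soils) is a standard Pearson correlation restricted to a moisture threshold. The sketch below shows that calculation with made-up example arrays, not the Síkfőkút DIRT measurements.

```python
# Minimal sketch of the moisture-respiration correlation analysis described in
# the abstract: Pearson's r for observations below a 20% moisture threshold.
# The arrays are assumed, illustrative data.
import numpy as np
from scipy.stats import pearsonr

moisture_pct = np.array([12.0, 14.5, 15.2, 16.8, 17.4, 18.1, 19.0, 19.6])
respiration = np.array([0.8, 1.1, 1.0, 1.4, 1.5, 1.6, 1.9, 2.0])  # e.g. umol CO2 m-2 s-1

dry = moisture_pct < 20.0                       # restrict to dry-soil observations
r, p_value = pearsonr(moisture_pct[dry], respiration[dry])
print(f"Pearson r = {r:.2f}, p = {p_value:.3f}")
```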

Keywords: soil biology, organic matter, nutrition, DIRT, soil respiration

Procedia PDF Downloads 75
1260 Building and Development of the Stock Market Institutional Infrastructure in Russia

Authors: Irina Bondarenko, Olga Vandina

Abstract:

The theory of evolutionary economics is the basis for the preparation and application of methods that form the concept of stock market infrastructure development. The authors believe that the basis for the process of formation and development of the stock market infrastructure model in Russia is the theory of large systems. This theory considers the financial market infrastructure as a whole on the basis of a macroeconomic approach, with the further definition of its aims and objectives. Evaluation of the prospects for interaction among securities market institutions will enable the identification of the problems associated with the development of this system. The interaction of the elements of the stock market infrastructure makes it possible to reduce the costs and time of transactions, thereby freeing up the resources of market participants for more efficient operation. Thus, the methodology of transaction analysis allows the financial infrastructure to be defined as a set of specialized institutions that form a modern quasi-stable system. The financial infrastructure, based on international standards, should include trading systems, regulatory and supervisory bodies, rating agencies, and settlement, clearing and depository organizations. Distribution of financial assets, reduction of transaction costs, and increased transparency of the market are promising tasks for raising the level and quality of the services provided by the institutions of the securities market's financial infrastructure. In order to improve the efficiency of the regulatory system, it is necessary to provide "standards" for all market participants. The development of clear regulation of the barriers to stock market entry and exit, the provision of conditions for the development and implementation of new laws regulating the activities of participants in the securities market, as well as the formulation of proposals aimed at minimizing risks and costs, will enable the achievement of positive results. The latter will be manifested in an increased level of market participant security and, accordingly, in the attractiveness of this market for investors and issuers.

Keywords: institutional infrastructure, financial assets, regulatory system, stock market, transparency of the market

Procedia PDF Downloads 134
1259 Monitoring Future Climate Change Patterns over Major Cities in Ghana Using Coupled Model Intercomparison Project Phase 5, Support Vector Machine, and Random Forest Modeling

Authors: Stephen Dankwa, Zheng Wenfeng, Xiaolu Li

Abstract:

Climate change has recently been gaining the attention of many countries across the world. Climate change, also known as global warming and referring to the increase in average surface temperature, has been a concern to the Environmental Protection Agency of Ghana. Recently, Ghana has become vulnerable to the effects of climate change as a result of the dependence of the majority of the population on agriculture. The clearing of trees to grow crops and the burning of charcoal in the country have contributed to the present rise in temperature through the release of carbon dioxide and greenhouse gases into the air. Recently, petroleum stations across the cities have caught fire due to these climate changes, which have left Ghana unable to withstand such climate events. As a result, the significance of this research paper is to project the rise in the average surface temperature at the end of the mid-21st century if agriculture and deforestation are allowed to continue for some time in the country. This study uses the Coupled Model Intercomparison Project phase 5 (CMIP5) experiment RCP 8.5 model output data to monitor the future climate changes from 2041-2050, at the end of the mid-21st century, over the ten (10) major cities (Accra, Bolgatanga, Cape Coast, Koforidua, Kumasi, Sekondi-Takoradi, Sunyani, Ho, Tamale, Wa) in Ghana. In the Support Vector Machine and Random Forest models, in which the cities were modeled as a function of heat wave metrics (minimum temperature, maximum temperature, mean temperature, heat wave duration and number of heat waves), more than 50% accuracy was achieved in predicting and monitoring the pattern of the surface air temperature. The findings were that the near-surface air temperature will rise by 1°C-2°C (degrees Celsius) over the coastal cities (Accra, Cape Coast, Sekondi-Takoradi). The temperature over Kumasi, Ho and Sunyani by the end of 2050 will rise by 1°C. In Koforidua, it will rise by 1°C-2°C. The temperature will rise in Bolgatanga, Tamale and Wa by 0.5°C by 2050. This indicates how the coastal and southern parts of the country are warming faster compared with the north, even though the northern part remains the hottest. During heat waves from 2041-2050, Bolgatanga, Tamale, and Wa will experience the highest mean daily air temperature, between 34°C-36°C. Kumasi, Koforidua, and Sunyani will experience about 34°C. The coastal cities (Accra, Cape Coast, Sekondi-Takoradi) will experience below 32°C. Even though the coastal cities will experience the lowest mean temperature, they will have the highest number of heat waves, about 62. The majority of the heat waves will last between 2 and 10 days, with a maximum of 30 days. The surface temperature will continue to rise by the end of the mid-21st century (2041-2050) over the major cities in Ghana, and so this needs to be brought to the attention of the Environmental Protection Agency of Ghana in order to mitigate this problem.
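
The modelling step described above, fitting an SVM and a Random Forest on heat-wave metrics and scoring accuracy, can be sketched with scikit-learn as follows. The feature matrix and labels are random placeholders standing in for the CMIP5 RCP 8.5 derived metrics; they are not the study's data.

```python
# Minimal sketch of the SVM / Random Forest classification described in the
# abstract, using scikit-learn. X and y are assumed, randomly generated data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
# features: [min temp, max temp, mean temp, heat wave duration, number of heat waves]
X = rng.normal(size=(500, 5))
y = rng.integers(0, 10, size=500)            # labels for the 10 cities

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, model in [("SVM", SVC(kernel="rbf")),
                    ("Random Forest", RandomForestClassifier(n_estimators=200, random_state=0))]:
    model.fit(X_tr, y_tr)
    print(name, "accuracy:", accuracy_score(y_te, model.predict(X_te)))
```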

Keywords: climate changes, CMIP5, Ghana, heat waves, random forest, SVM

Procedia PDF Downloads 200
1258 PLGA Nanoparticles Entrapping Dual Anti-TB Drugs Amikacin and Moxifloxacin as a Potential Host-Directed Therapy for Multidrug-Resistant Tuberculosis

Authors: Sharif Abdelghany

Abstract:

Polymeric nanoparticles have been widely investigated as a controlled release drug delivery platform for the treatment of tuberculosis (TB). These nanoparticles are also readily internalised into macrophages, leading to high intracellular drug concentrations. In this study, two anti-TB drugs, amikacin and moxifloxacin, were encapsulated into PLGA nanoparticles. The novelty of this work lies in: (1) the efficient encapsulation of two hydrophilic second-line anti-TB drugs, and (2) intramacrophage delivery of this synergistic combination, potentially for rapid treatment of multidrug-resistant TB (MDR-TB). Two water-oil-water (w/o/w) emulsion strategies were employed in this study: (1) alginate coated PLGA nanoparticles, and (2) alginate entrapped PLGA nanoparticles. The average particle size and polydispersity index (PDI) of the alginate coated PLGA nanoparticles were found to be unfavourably high, with values of 640 ± 32 nm and 0.63 ± 0.09, respectively. In contrast, the alginate entrapped PLGA nanoparticles were within the desirable particle size range of 282-315 nm with a PDI of 0.08-0.16, and were therefore chosen for subsequent studies. Alginate entrapped PLGA nanoparticles yielded a drug loading of over 10 µg/mg of powder for amikacin and more than 5 µg/mg for moxifloxacin, with entrapment efficiencies of approximately 25-31% for moxifloxacin and 51-59% for amikacin. To study macrophage uptake efficiency, nanoparticles of the alginate entrapped formulation were loaded with acridine orange as a marker, seeded onto THP-1 derived macrophages and viewed under confocal microscopy. The particles were readily internalised into the macrophages and highly concentrated in the nuclear region. Furthermore, the anti-mycobacterial activity of the drug-loaded particles was evaluated using M. tuberculosis-infected macrophages, which revealed a significant reduction (4 log reduction) in viable bacterial count compared to the untreated group. In conclusion, the amikacin-moxifloxacin alginate entrapped PLGA nanoparticles are promising for further in vivo studies.
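
The two formulation metrics quoted above, drug loading (µg of drug per mg of powder) and entrapment efficiency, follow the conventional definitions sketched below. The masses used are hypothetical examples, not measurements from the study.

```python
# Minimal sketch of drug loading and entrapment efficiency calculations.
# Definitions are the conventional ones; the masses are assumed examples.

def drug_loading_ug_per_mg(drug_mass_ug, particle_mass_mg):
    """Drug loading = entrapped drug (ug) per mg of nanoparticle powder."""
    return drug_mass_ug / particle_mass_mg

def entrapment_efficiency_pct(entrapped_drug_ug, initial_drug_ug):
    """EE% = entrapped drug / drug initially added * 100."""
    return 100.0 * entrapped_drug_ug / initial_drug_ug

if __name__ == "__main__":
    # Hypothetical: 550 ug amikacin entrapped in 50 mg of powder, from 1000 ug added
    print(drug_loading_ug_per_mg(550.0, 50.0))       # 11 ug/mg
    print(entrapment_efficiency_pct(550.0, 1000.0))  # 55%
```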

Keywords: moxifloxacin and amikacin, nanoparticles, multidrug resistant TB, PLGA

Procedia PDF Downloads 366
1257 Revolutionizing Autonomous Trucking Logistics with Customer Relationship Management Cloud

Authors: Sharda Kumari, Saiman Shetty

Abstract:

Autonomous trucking is just one of the numerous significant shifts impacting fleet management services. The Society of Automotive Engineers (SAE) has defined six levels of vehicle automation that have been adopted internationally, including by the United States Department of Transportation. On public highways in the United States, organizations are testing driverless vehicles with at least Level 4 automation which indicates that a human is present in the vehicle and can disable automation, which is usually done while the trucks are not engaged in highway driving. However, completely driverless vehicles are presently being tested in the state of California. While autonomous trucking can increase safety, decrease trucking costs, provide solutions to trucker shortages, and improve efficiencies, logistics, too, requires advancements to keep up with trucking innovations. Given that artificial intelligence, machine learning, and automated procedures enable people to do their duties in other sectors with fewer resources, CRM (Customer Relationship Management) can be applied to the autonomous trucking business to provide the same level of efficiency. In a society witnessing significant digital disruptions, fleet management is likewise being transformed by technology. Utilizing strategic alliances to enhance core services is an effective technique for capitalizing on innovations and delivering enhanced services. Utilizing analytics on CRM systems improves cost control of fuel strategy, fleet maintenance, driver behavior, route planning, road safety compliance, and capacity utilization. Integration of autonomous trucks with automated fleet management, yard/terminal management, and customer service is possible, thus having significant power to redraw the lines between the public and private spheres in autonomous trucking logistics.

Keywords: autonomous vehicles, customer relationship management, customer experience, autonomous trucking, digital transformation

Procedia PDF Downloads 108
1256 The Rapid Industrialization Model

Authors: Fredrick Etyang

Abstract:

This paper presents a Rapid Industrialization Model (RIM) designed to support existing industrialization policies, strategies and industrial development plans at the national, regional and constituent levels in Africa. The model will reinforce efforts by state and non-state actors toward the attainment of inclusive and sustainable industrialization of Africa. The overall objective of this model is to serve as a framework for rapid industrialization in developing economies, and the specific objectives range from supporting rapid industrial development to promoting structural change in the economy, balanced regional industrial growth, and the achievement of local, regional and international competitiveness in areas of clear comparative advantage in industrial exports; ultimately, the RIM will serve as a step-by-step guideline for the industrialization of African economies. This model is the product of a scientific research process underpinned by desk research through the review of African countries' development plans, strategies, datasets and industrialization efforts, and consultation with key informants. The rigorous research process unearthed multi-directional and renewed efforts towards the industrialization of Africa, premised on the collective commitment of individual states, regional economic communities and the African Union Commission, among other strategic stakeholders. It was further established that the inputs into the industrialization of Africa outstrip the levels of industrial development on the continent. The RIM therefore serves as a step-by-step framework for African countries to follow in their industrial development efforts of transforming inputs into tangible outputs and outcomes in the short, intermediate and long run. The model postulates three stages of industrialization and three phases toward the rapid industrialization of African economies; it is simple to understand, easily implementable and contextualizable, with a high return on each unit invested in industrialization supported by the model. Therefore, effective implementation of the model will result in inclusive and sustainable rapid industrialization of Africa.

Keywords: economic development, industrialization, economic efficiency, exports and imports

Procedia PDF Downloads 84
1255 Guidelines for Enhancing the Learning Environment by the Integration of Design Flexibility and Immersive Technology: The Case of the British University in Egypt’s Classrooms

Authors: Eman Ayman, Gehan Nagy

Abstract:

The learning environment has four main parameters that affect its efficiency: pedagogy, user, technology, and space. According to Morrone, enhancing these parameters so that they are adaptable to future developments is essential. Educational organizations will need to develop their learning spaces. Design flexibility and immersive technology could be used as tools for this development. When flexible design concepts are used, learning spaces that can accommodate a variety of teaching and learning activities are created. To accommodate the various needs and interests of students, these learning spaces are easily reconfigurable and customizable. The immersive learning opportunities offered by technologies like virtual reality, augmented reality, and interactive displays, on the other hand, transcend the confines of the traditional classroom. These technological advancements could improve learning. This thesis highlights the problem of the lack of innovative, flexible learning spaces in educational institutions. It aims to develop guidelines for enhancing the learning environment through the integration of flexible design and immersive technology. This research uses a mixed method approach, both qualitative and quantitative: the qualitative section covers the literature review of theories and the analysis of case studies. The quantitative section, on the other hand, is based on the results of applied studies of the effectiveness of redesigning a learning space from its traditional current state into a flexible, technological, contemporary space that is adaptable to many changes and educational needs. The research findings establish the importance of flexibility in the interior design of learning spaces, as it enhances space optimization and the capability to accommodate change, and they record the significant contribution of immersive technology in assisting the design process. The work is summarized by the questionnaire results and a comparative analysis, which constitute the last step in finalizing the guidelines.

Keywords: flexibility, learning space, immersive technology, learning environment, interior design

Procedia PDF Downloads 93
1254 Applying Participatory Design for the Reuse of Deserted Community Spaces

Authors: Wei-Chieh Yeh, Yung-Tang Shen

Abstract:

The concept of community building started in 1994 in Taiwan. After years of development, it fostered the notion of active local resident participation in community issues as co-operators, instead of subordinates. Participatory design gives participants more control in the decision-making process, helps to reduce the friction caused by arguments and assists in bringing different parties to consensus. This results in an increase in the efficiency of projects run in the community. Therefore, the participation of local residents is key to the success of community building. This study applied participatory design to develop plans for the reuse of deserted spaces in the community, from the first stage of brainstorming for design ideas and making creative models to be employed later, through to the final stage of construction. After conducting a series of participatory design activities, it aimed to integrate the different opinions of residents, develop a sense of belonging and reach a consensus. Besides this, it also aimed at building the residents' awareness of their responsibilities for the environment and related issues of sustainable development. By reviewing relevant literature and understanding the history of related studies, the study formulated a theory. It took the "2012-2014 Changhua County Community Planner Counseling Program" as a case study to investigate the implementation process of participatory design. Research data were collected by document analysis, participant observation and in-depth interviews. After examining the three elements of "Design Participation", "Construction Participation", and "Follow-up Maintenance Participation" in the case, the study arrived at a promising conclusion: maintenance works were carried out better compared to common public works. Besides this, maintenance costs were lower. Moreover, the works that residents were involved in were more creative. Most importantly, the community characteristics could be easily recognized.

Keywords: participatory design, deserted space, community building, reuse

Procedia PDF Downloads 371
1253 Preliminary Study of Hand Gesture Classification in Upper-Limb Prosthetics Using Machine Learning with EMG Signals

Authors: Linghui Meng, James Atlas, Deborah Munro

Abstract:

There is an increasing demand for prosthetics capable of mimicking natural limb movements and hand gestures, but precise movement control of prosthetics using only electrode signals continues to be challenging. This study considers the implementation of machine learning as a means of improving accuracy and presents an initial investigation into hand gesture recognition using models based on electromyographic (EMG) signals. EMG signals, which capture muscle activity, are used as inputs to machine learning algorithms to improve prosthetic control accuracy, functionality and adaptivity. Using logistic regression, a machine learning classifier, this study evaluates the accuracy of classifying two hand gestures from the publicly available Ninapro dataset using two time-series feature extraction approaches: Time Series Feature Extraction (TSFE) and Convolutional Neural Networks (CNNs). Trials were conducted using varying numbers of EMG channels, from one to eight, to determine the impact of channel quantity on classification accuracy. The results suggest that although both algorithms can successfully distinguish between hand gesture EMG signals, CNNs outperform TSFE in extracting useful information, for both accuracy and computational efficiency. In addition, although more channels of EMG signals provide more useful information, they also require more complex and computationally intensive feature extractors and consequently do not perform as well as lower numbers of channels. The findings also underscore the potential of machine learning techniques in developing more effective and adaptive prosthetic control systems.
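
The classification step described above, logistic regression distinguishing two gestures from per-channel EMG features, can be sketched with scikit-learn as follows. The feature matrix is a random placeholder (one hypothetical RMS feature per channel per window), not the Ninapro data or the study's extracted features.

```python
# Minimal sketch of two-gesture classification with logistic regression.
# X and y are assumed, randomly generated stand-ins for extracted EMG features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
n_channels = 4                                # vary 1..8 to study channel count
n_windows = 400
X = rng.normal(size=(n_windows, n_channels))  # e.g. one RMS feature per channel per window
y = rng.integers(0, 2, size=n_windows)        # two gesture labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```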

Keywords: EMG, machine learning, prosthetic control, electromyographic prosthetics, hand gesture classification, CNN, convolutional neural networks, TSFE, time series feature extraction, channel count, logistic regression, Ninapro, classifiers

Procedia PDF Downloads 31
1252 Chemical Modifications of Three Underutilized Vegetable Fibres for Improved Composite Value Addition and Dye Absorption Performance

Authors: Abayomi O. Adetuyi, Jamiu M. Jabar, Samuel O. Afolabi

Abstract:

Vegetable fibres are low-density, biodegradable and non-abrasive fibres that are largely abundant fibre materials with specific properties, mostly obtained from plants on the earth's surface. They are classified into three categories, depending on the part of the plant from which they are obtained, namely fruit, bast and leaf fibre. Ever since the fourth/fifth millennium B.C., attention has been focused on the most common and highly utilized cotton fibre, obtained from the fruit of cotton plants (Gossypium spp.), for the production of the cotton fabric used in every home today. The present study, therefore, focused on the ability of three underutilized vegetable (fruit) fibres, namely coir fibre (Eleas coniferus), palm kernel fibre and empty fruit bunch fibre (Elaeis guineensis), modified chemically, to give better composite value addition performance in polyurethane foam and better dye adsorption. These fibres were sourced from their parent plants, identified and cleansed with 2% hot detergent solution 1:100, rinsed in distilled water and oven-dried to constant weight before being chemically modified through alkali bleaching, mercerization and acetylation. The alkali bleaching involved treating 0.5 g of each fibre material with 100 mL of 2% H2O2 in 25% NaOH solution under reflux for 2 h, while mercerization and acetylation involved the use of 5% sodium hydroxide (NaOH) solution for 2 h and of 10% acetic acid-acetic anhydride 1:1 (v/v) (CH3COOH/(CH3CO)2O) solution with conc. H2SO4 as catalyst for 1 h, respectively, on the fibres. All were subsequently washed thoroughly with distilled water and oven-dried at 105 °C for 1 h. These modified fibres were incorporated as composites into polyurethane foam and used in a dye adsorption study of indigo. The first two treatments led to fibre weight reduction, while the acidified acetic anhydride treatment gave the fibres a weight increase. All the treated fibres were found to be less hydrophilic and to have better mechanical properties, higher thermal stabilities and better adsorption surfaces/capacities than the untreated ones. These were confirmed by gravimetric analysis, an Instron Universal Testing Machine, a Thermogravimetric Analyser and the Scanning Electron Microscope (SEM), respectively. The fibre morphology of the modified fibres showed smoother surfaces than that of the unmodified fibres. The empty fruit bunch fibre and the coconut coir fibre are better than the palm kernel fibre as reinforcers for composites or as adsorbents for wastewater treatment. Acetylation and alkaline bleaching treatments improve the potential of the fibres more than the mercerization treatment. Conclusively, vegetable fibres, especially empty fruit bunch fibre and coconut coir fibre, which are cheap, abundant and underutilized, can replace the very costly powdered activated carbon in wastewater treatment and serve as reinforcers in foam.
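
The weight reduction or gain reported above is obtained gravimetrically as a simple percentage change; the sketch below shows that calculation with hypothetical masses, not the study's measurements.

```python
# Minimal sketch of the gravimetric weight-change calculation implied by the
# abstract (loss after alkali bleaching/mercerization, gain after acetylation).
# The masses are assumed examples.

def weight_change_pct(mass_before_g, mass_after_g):
    """Positive = weight gain, negative = weight loss."""
    return 100.0 * (mass_after_g - mass_before_g) / mass_before_g

if __name__ == "__main__":
    print(weight_change_pct(0.500, 0.463))   # e.g. -7.4% after mercerization
    print(weight_change_pct(0.500, 0.541))   # e.g. +8.2% after acetylation
```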

Keywords: chemical modification, industrial application, value addition, vegetable fibre

Procedia PDF Downloads 331
1251 Biostimulant and Abiotic Plant Stress Interactions in Malting Barley: A Glasshouse Study

Authors: Conor Blunt, Mariluz del Pino-de Elias, Grace Cott, Saoirse Tracy, Rainer Melzer

Abstract:

The European Green Deal, announced in 2021, details that agricultural chemical pesticide use and synthetic fertilizer application are to be reduced by 50% and 20%, respectively, by 2030. Increasing and maintaining expected yields under these ambitious goals has strained the agricultural sector. This intergovernmental plan has identified plant biostimulants as one potential input to facilitate this new phase of sustainable agriculture; these products are defined as microorganisms or substances that can stimulate soil and plant functioning to enhance crop nutrient use efficiency, quality and tolerance to abiotic stresses. Spring barley is Ireland's most widely sown tillage crop, and grain destined for malting commands the highest market price. Heavy, erratic rainfall is forecast for Ireland's future climate, and barley is particularly susceptible to waterlogging. Recent findings suggest that plant receptivity to biostimulants may depend on the level of stress inflicted on crops to elicit an assisted plant response. In this study, three biostimulants of different genesis (seaweed, protein hydrolysate and bacteria) are applied to 'RGT Planet' malting barley fertilized at three different rates (0 kg/ha, 40 kg/ha, 75 kg/ha) of calcium ammonium nitrate (27% N) under non-stressed and waterlogged conditions. This 4x3x2 factorial trial was planted in a randomized complete block design with one plant per experimental unit. Leaf gas exchange data and key agronomic and grain quality parameters were analyzed via ANOVA. No penalty on productivity was evident in plants receiving 40 kg/ha of N and a biostimulant compared to the 75 kg/ha N treatments. The main effects of nitrogen application and waterlogging accounted for the most significant variation in the dataset.
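
The factorial ANOVA described above can be sketched with statsmodels as follows. The biostimulant factor is assumed here to include an untreated control alongside the three products (to match the 4x3x2 design), and the data frame is randomly generated placeholder data, not the trial measurements.

```python
# Minimal sketch of a 4x3x2 factorial ANOVA (biostimulant x N rate x water regime)
# using statsmodels. All data and factor labels are assumed, illustrative values.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(42)
levels = [(b, n, w) for b in ["control", "seaweed", "hydrolysate", "bacterial"]
                    for n in [0, 40, 75]
                    for w in ["drained", "waterlogged"]]
rows = [(b, n, w, rng.normal(8, 1)) for (b, n, w) in levels for _ in range(4)]  # 4 reps per cell
df = pd.DataFrame(rows, columns=["biostimulant", "n_rate", "water", "yield_g"])

model = smf.ols("yield_g ~ C(biostimulant) * C(n_rate) * C(water)", data=df).fit()
print(anova_lm(model, typ=2))   # main effects and interactions
```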

Keywords: biostimulant, barley, malting, NUE, waterlogging

Procedia PDF Downloads 76
1250 Chikungunya Virus Detection Utilizing an Origami Based Electrochemical Paper Analytical Device

Authors: Pradakshina Sharma, Jagriti Narang

Abstract:

Due to their critical significance in the early identification of infectious diseases, electrochemical sensors have garnered considerable interest. Here, we develop a detection platform for the chikungunya virus (CHIKV) by rationally exploiting the extremely high charge-transfer efficiency of a ternary nanocomposite of graphene oxide, silver, and gold (G/Ag/Au). Because paper is an inexpensive substrate and can be produced in large quantities, the use of electrochemical paper analytical device (ePAD) origami further enhances the sensor's appealing qualities. Paper-based testing provides a cost-effective platform for point-of-care diagnostics. These types of sensors are referred to as eco-designed analytical tools due to their efficient production, use of an eco-friendly substrate, and potential to simplify waste management after measurement by incinerating the sensor. In this research, the foldability of paper has been used to develop and create 3D multifaceted biosensors that can specifically detect CHIKV. X-ray diffraction, scanning electron microscopy, UV-Vis spectroscopy, and transmission electron microscopy (TEM) were used to characterize the produced nanoparticles. In this work, aptamers are used since they are considered a unique and sensitive tool for use in rapid diagnostic methods. Cyclic voltammetry (CV) and linear sweep voltammetry (LSV), both performed with a potentiostat, were used to measure the analytical response of the biosensor. The target CHIKV antigen was hybridized using the aptamer-modified electrode as a signal modulation platform, and its presence was determined by a decline in the current produced by its interaction with the mediator methylene blue (MB). Additionally, a detection limit of 1 ng/mL and a broad linear range of 1 ng/mL-10 µg/mL for the CHIKV antigen were reported.
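
A detection limit such as the 1 ng/mL value quoted above is commonly estimated from the slope of a voltammetric calibration curve as LOD = 3.3·(SD of the blank response)/slope. The sketch below shows that generic convention, which may not be the exact procedure used in the study; all numbers are hypothetical.

```python
# Minimal sketch of a detection-limit estimate from a linear voltammetric
# calibration. The calibration points and blank standard deviation are assumed,
# illustrative values.
import numpy as np

conc = np.array([1.0, 5.0, 10.0, 25.0, 50.0])     # antigen concentration, ng/mL
signal = np.array([0.8, 3.9, 7.6, 19.2, 38.5])    # e.g. peak-current change, uA

slope, intercept = np.polyfit(conc, signal, 1)    # linear fit: signal = slope*conc + intercept
sd_blank = 0.2                                    # hypothetical SD of the blank response, uA
lod = 3.3 * sd_blank / slope
print(f"slope = {slope:.2f} uA per ng/mL, LOD ~ {lod:.2f} ng/mL")
```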

Keywords: biosensors, ePAD, arboviral infections, point of care

Procedia PDF Downloads 97
1249 Exploring Management of the Fuzzy Front End of Innovation in a Product Driven Startup Company

Authors: Dmitry K. Shaytan, Georgy D. Laptev

Abstract:

In our research, we aimed to test a managerial approach for the fuzzy front end (FFE) of innovation by creating a controlled experiment/business case in breakthrough innovation development. The experiment was in the sport industry and covered all aspects of the customer discovery stage, from ideation to prototyping, followed by a patent application. In the paper, we describe and analyze the milestones, tasks, management challenges and decisions made to create the breakthrough innovation, and we evaluate the overall managerial efficiency at the considered FFE stage. We set the managerial outcome of the FFE stage as a valid product concept in hand. In our paper, we introduce the hypothetical construct 'Q-factor', which helps us in the experiment to distinguish the quality of FFE outcomes. The experiment simulated the FFE of innovation for the entrepreneur and placed on his shoulders the responsibility for the outcome of a valid product concept. While developing the managerial approach to reach this outcome, a decision was made to look at the product concept from the cognitive psychology and cognitive science point of view. This view helped us to develop the profile of a person whose projection (mental representation) of a new product could optimize FFE activities for a manager or entrepreneur. In the experiment, this profile was tested to develop a breakthrough innovation for swimmers. Following the managerial approach, the product concept was created to help swimmers feel/sense the water. A working prototype was developed to estimate the validity of the product concept and its value-added effect for customers. Based on feedback from coaches and swimmers, there was a strong positive effect that gave high value to customers and, for the experiment, a valid product concept developed using the proposed managerial approach for the FFE. In conclusion, a managerial approach derived from the experiment is suggested.

Keywords: concept development, concept testing, customer discovery, entrepreneurship, entrepreneurial management, idea generation, idea screening, startup management

Procedia PDF Downloads 445
1248 Design and Optimization of Spoke Rotor Type Brushless Direct Current Motor for Electric Vehicles Using Different Flux Barriers

Authors: Ismail Kurt, Necibe Fusun Oyman Serteller

Abstract:

Today, with the reduction in semiconductor system costs, Brushless Direct Current (BLDC) motors have become widely preferred. Based on rotor architecture, BLDC structures are divided into interior permanent magnet (IPM) and surface permanent magnet (SPM) types. However, permanent magnet (PM) motors in electric vehicles (EVs) are still predominantly based on interior permanent magnet (IPM) motors, as the rotors do not require sleeves, the PMs are better protected by the rotor cores, and the air-gap lengths can be much smaller. This study discusses the IPM rotor structure in detail, highlighting its higher torque levels, reluctance torque, wide speed range operation, and production advantages. IPM rotor structures are particularly preferred in EVs due to their high-speed capabilities, torque density and field weakening (FW) features. In FW applications, the motor becomes more suitable for operation at torques lower than the rated torque but at speeds above the rated speed. Although V-type and triangular IPM rotor structures are generally preferred in EV applications, the spoke-type rotor structure offers distinct advantages, making it a competitive option for these systems. The flux barriers in the rotor significantly affect motor performance, providing notable benefits in both motor efficiency and cost. This study utilizes ANSYS/Maxwell simulation software to analyze the spoke-type IPM motor and examine its key design parameters. Through analytical and 2D analyses, a preliminary motor design and parameter optimization have been carried out. During the parameter optimization phase, torque ripple, a common issue especially for IPM motors, has been investigated, along with the associated changes in motor parameters.
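
The torque-ripple metric examined during the optimization phase is typically extracted from a transient torque waveform (for example, one exported from Maxwell 2D) as the peak-to-peak variation relative to the mean. The sketch below uses a synthetic waveform; the 50 Nm mean and ripple amplitude are assumed values, not results from the study.

```python
# Minimal sketch of a torque-ripple calculation from a torque-vs-time waveform:
# ripple % = (T_max - T_min) / T_mean * 100. The waveform is synthetic.
import numpy as np

t = np.linspace(0.0, 1.0, 1000)                      # one electrical period (normalized)
torque = 50.0 + 3.0 * np.sin(2 * np.pi * 6 * t)      # hypothetical 50 Nm mean with 6th-order ripple

ripple_pct = 100.0 * (torque.max() - torque.min()) / torque.mean()
print(f"mean torque = {torque.mean():.1f} Nm, torque ripple = {ripple_pct:.1f} %")
```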

Keywords: electric vehicle, field weakening, flux barrier, spoke rotor

Procedia PDF Downloads 8
1247 An Overview of Domain Models of Urban Quantitative Analysis

Authors: Mohan Li

Abstract:

Nowadays, intelligent research technology is becoming more important than traditional research methods in urban research work, and this proportion will greatly increase in the next few decades. Frequently, such analytical work cannot be carried out without some software engineering knowledge, and here domain models of urban research become necessary when applying software engineering knowledge to urban work. In many urban planning practice projects, building rational models, feeding in reliable data, and providing enough computation all give indispensable assistance in producing good urban planning. During the whole work process, domain models can optimize workflow design. At present, human beings have entered the era of big data. The amount of digital data generated by cities every day will increase at an exponential rate, and new data forms are constantly emerging. How to select a suitable data set from the massive amount of data, and how to manage and process it, has become an ability that more and more planners and urban researchers need to possess. This paper summarizes and makes predictions about the emergence of technologies and technological iterations that may affect urban research in the future, help discover urban problems, and support the implementation of targeted sustainable urban strategies. These are summarized into seven major domain models: the urban and rural regional domain model, the urban ecological domain model, the urban industry domain model, the development dynamics domain model, the urban social and cultural domain model, the urban traffic domain model, and the urban space domain model. These seven domain models can be used to guide the construction of systematic urban research topics and help researchers organize a series of intelligent analytical tools, such as Python, R, GIS, etc. These seven models make full use of quantitative spatial analysis, machine learning, and other technologies to achieve higher efficiency and accuracy in urban research, assisting people in making reasonable decisions.

Keywords: big data, domain model, urban planning, urban quantitative analysis, machine learning, workflow design

Procedia PDF Downloads 177
1246 Two-Dimensional Van-Der Waals Heterostructure for Highly Energy-Efficient Field-Free Deterministic Spin-Orbit Torque Switching at Room Temperature

Authors: Pradeep Raj Sharma, Bogeun Jang, Jongill Hong

Abstract:

Spin-orbit torque (SOT) is an efficient approach for manipulating the magnetization of ferromagnetic materials (FMs), providing improved device performance, better compatibility, and ultra-fast switching with lower power consumption compared to spin-transfer torque (STT). Among the various materials and structural designs, two-dimensional (2D) van der Waals (vdW) layered materials and their heterostructures have been demonstrated as a highly scalable and promising device architecture for SOT. In particular, a bilayer heterostructure consisting of a fully 2D-vdW ferromagnet and a non-magnetic material (NM) offers a potential platform for controlling the magnetization using SOT because of the advantages of being easy to scale and requiring less energy to switch. Here, we report field-free deterministic switching driven by SOT at room temperature, integrating the perpendicularly magnetized 2D-vdW material Fe₃GaTe₂ (FGaT) and the NM WTe₂. Pulse current-induced magnetization switching with an ultra-low current density of about 6.5×10⁵ A/cm², yielding a SOT efficiency close to double digits at 300 K, is reported. These values are two orders of magnitude higher than those observed in conventional heavy metal (HM) based SOT and exceed those reported with 2D-vdW layered materials. WTe₂, a topological semimetal possessing strong spin-orbit coupling (SOC) and a high spin Hall angle, can induce significant spin accumulation with negligible spin loss across the transparent 2D bilayer heterointerface. This promising device architecture enables highly compatible, energy-efficient, non-volatile memory and lays the foundation for designing efficient, flexible, and miniaturized spintronic devices.

Keywords: spintronics, spin-orbit torque, spin Hall effect, spin Hall angle, topological semimetal, perpendicular magnetic anisotropy

Procedia PDF Downloads 5
1245 Bias Minimization in Construction Project Dispute Resolution

Authors: Keyao Li, Sai On Cheung

Abstract:

Incorporation of alternative dispute resolution (ADR) mechanisms has been the main feature of the current trend in construction project dispute resolution (CPDR). ADR approaches have been identified as efficient mechanisms and are suitable alternatives to litigation and arbitration. Moreover, the use of ADR in this multi-tiered dispute resolution process often leads to repeated evaluations of the same dispute. Multi-tiered CPDR may become a breeding ground for cognitive biases. When complete knowledge is not available at an early tier of construction dispute resolution, disputing parties may form a preconception of the dispute matter or the counterpart. This preconception would influence their information processing in the subsequent tier. Disputing parties tend to search for and interpret further information in a self-defensive way to confirm their early positions. Their imbalanced information collection would boost their confidence in the assessments they hold. Their attitudes would harden and become difficult to compromise. The occurrence of cognitive bias, therefore, impedes efficient dispute settlement. This study aims to explore ways to minimize bias in CPDR. Based on a comprehensive literature review, three types of bias-minimizing approaches were collected: strategy-based, attitude-based and process-based. These approaches were further operationalized into bias-minimizing measures. To verify the usefulness and practicability of these bias-minimizing measures, semi-structured interviews were conducted with ten CPDR third-party neutral professionals. All of the interviewees have at least twenty years of experience in facilitating the settlement of construction disputes. The usefulness, as well as the implications, of the bias-minimizing measures were validated and suggested by these experts. There are few studies on cognitive bias in construction management in general and in CPDR in particular. This study would be the first of its type to enhance the efficiency of construction dispute resolution by highlighting strategies to minimize the biases therein.

Keywords: bias, construction project dispute resolution, minimization, multi-tiered, semi-structured interview

Procedia PDF Downloads 186
1244 Micropropagation and in vitro Conservation via Slow Growth Techniques of Prunus webbii (Spach) Vierh: An Endangered Plant Species in Albania

Authors: Valbona Sota, Efigjeni Kongjika

Abstract:

Wild almond is a woody species that is difficult to propagate either generatively by seed or by vegetative methods (grafting or cuttings), and it is also considered Endangered (EN) in Albania based on IUCN criteria. As a wild relative of cultivated fruit trees, this species represents a source of genetic variability and can be very important in breeding programs and cultivation. For this reason, it would be of interest to use an effective method of in vitro mid-term conservation, which involves strategies to slow plant growth through physicochemical alterations of the in vitro growth conditions. Multiplication of wild almond was carried out using zygotic embryos as primary explants, with the purpose of developing a successful propagation protocol. Results showed that zygotic embryos can proliferate through direct or indirect organogenesis. During the subculture stage, a great number of new plantlets identical to the mother plants were obtained from the zygotic embryos. All in vitro plantlets obtained from the subcultures underwent in vitro conservation by minimal growth at low temperature (4°C) in darkness. The efficiency of this technique was evaluated for conservation periods of 3, 6, and 10 months. Maintenance under these conditions reduced microcutting growth. Survival and regeneration rates were evaluated for each period; the maximum conservation time without subculture at 4°C was 10 months, but survival and regeneration rates were then significantly reduced, to 15.6% and 7.6%, respectively. A storage period of 5-6 months can be considered optimal under these conditions, giving survival and regeneration rates of about 60% and 50%, respectively. This protocol may be beneficial for mass propagation, mid-term conservation, and genetic manipulation of wild almond.

Keywords: micropropagation, minimal growth, storage, wild almond

Procedia PDF Downloads 128
1243 Design of a Solar Charge Controller and Power Converter with Multisim

Authors: Sohal Latif

Abstract:

Solar photovoltaic (PV) power is a form of renewable energy that uses solar panels to produce electricity from sunlight. It plays a vital role in meeting the present need for clean, renewable energy and in moving away from conventional, non-renewable sources that emit high levels of greenhouse gases. Solar energy is embraced because of its availability, easy accessibility, and effectiveness in providing power, chiefly in rural areas. In a solar charging system, light is converted into electricity by PV panels, which supply direct current (DC) power. The solar charge controller plays a crucial role here, regulating the voltage and current coming from the solar panels to meet the changing needs of a battery without overcharging it. Devices such as inverters are required to transform the DC power produced by the solar panels into AC power for ordinary electrical appliances and the existing power network. This project concerns the design of a solar charge controller and power converter in MULTISIM. The work begins with a literature survey to obtain basic knowledge about power converters, charge controllers, and photovoltaic systems. The fundamentals of solar panel operation are covered, including the process by which light is converted into electricity and a comparison of PWM and MPPT charge controllers. Knowledge of rectifiers is built up to support AC-to-DC and DC-to-AC conversion. Resistors, capacitors, MOSFETs, and op-amps are selected according to the needs of the system. The circuit diagrams of the converters and charge controllers are designed using the Multisim program, and pulse-width modulation, Bubba oscillator, and inverter circuits are modeled and simulated. In the subsequent steps, analysis of the simulation outcomes indicates the efficiency of the intended converter systems. The outputs of the different configurations, with and without the transformer, are then monitored for effective power conversion and regulation.
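
As a rough illustration of the control logic a PWM charge controller implements (this is a minimal sketch, not the authors' Multisim circuit; the voltage setpoints and gain are illustrative assumptions for a 12 V lead-acid battery):

# Minimal sketch (illustrative assumptions, not the authors' Multisim design):
# a PWM charge controller throttles the switching duty cycle as the battery
# approaches its target voltage, preventing overcharge.

ABSORPTION_V = 14.4   # assumed absorption setpoint for a 12 V lead-acid battery
FLOAT_V = 13.6        # assumed float setpoint once the battery is full
KP = 2.0              # assumed proportional gain from voltage error to duty cycle

def pwm_duty(battery_v, in_float_stage=False):
    """Return the MOSFET duty cycle (0.0-1.0) for the measured battery voltage."""
    target = FLOAT_V if in_float_stage else ABSORPTION_V
    error = target - battery_v
    # Full charging current far below the setpoint, tapering near it, zero above it.
    return max(0.0, min(1.0, KP * error))

if __name__ == "__main__":
    for v in (12.1, 13.9, 14.3, 14.6):
        print("battery = %.1f V -> duty = %.2f" % (v, pwm_duty(v)))

An MPPT controller differs in that it additionally adjusts the panel operating point to track maximum power, rather than only limiting the battery charging current.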

Keywords: solar charge controller, MULTISIM, converter, inverter

Procedia PDF Downloads 22
1242 Conduction Transfer Functions for the Calculation of Heat Demands in Heavyweight Facade Systems

Authors: Mergim Gasia, Bojan Milovanovica, Sanjin Gumbarevic

Abstract:

Better energy performance of the building envelope is one of the most important aspects of energy savings if the goals set by the European Union are to be achieved in the future. Dynamic heat transfer simulations are used for the calculation of building energy consumption because they give more realistic energy demands than stationary calculations, which do not take the building's thermal mass into account. Software used for these dynamic simulations relies on methods based on analytical models, since purely numerical models become impractical for longer simulation periods. The analytical models used in this research fall into the category of conduction transfer functions (CTFs). Two methods for calculating the CTFs covered by this research are the Laplace method and the State-Space method. The literature review showed that the main disadvantage of these methods is that they are inadequate for heavyweight façade elements and for shorter calculation time steps. The algorithms for both the Laplace and State-Space methods are implemented in Mathematica, and the results are compared with results from EnergyPlus and TRNSYS, since these software packages use similar algorithms for calculating a building's energy demand. This research aims to check the efficiency of the Laplace and State-Space methods for calculating the energy demand of heavyweight building elements at shorter sampling times, and it also provides means for improving the algorithms used by these methods. The finite difference method (FDM) is used as the reference for the boundary heat flux density. Even though dynamic heat transfer simulations are superior to calculations based on stationary boundary conditions, they have their limitations and will give unsatisfactory results if not properly used.
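
To make the reference calculation concrete, the sketch below shows an explicit one-dimensional finite-difference scheme for transient conduction through a heavyweight wall, of the kind that can serve as a benchmark for the boundary heat flux density. The wall thickness, material properties, time step, and boundary temperatures are illustrative assumptions, not the values used in the study:

# Minimal sketch, under assumed material properties: explicit 1D finite-difference
# reference for transient conduction through a heavyweight (concrete-like) wall.
import numpy as np

L = 0.30                  # wall thickness [m] (assumed)
k = 1.6                   # thermal conductivity [W/(m K)] (assumed)
rho_c = 2000.0 * 880.0    # density * specific heat [J/(m^3 K)] (assumed)
alpha = k / rho_c         # thermal diffusivity [m^2/s]

N = 31
dx = L / (N - 1)
dt = 0.4 * dx**2 / alpha  # time step satisfying the explicit stability limit (<= 0.5)
T = np.full(N, 20.0)      # initial temperature field [degC]

def step(T, T_out, T_in):
    """Advance one explicit time step with prescribed surface temperatures."""
    Tn = T.copy()
    Tn[0], Tn[-1] = T_out, T_in
    Tn[1:-1] = T[1:-1] + alpha * dt / dx**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])
    return Tn

# Simulate one day after an outdoor temperature step from 20 degC to 0 degC.
for _ in range(int(3600 * 24 / dt)):
    T = step(T, T_out=0.0, T_in=20.0)

# Boundary heat flux density at the indoor surface; a negative value means
# heat is lost from the room through the wall.
q_in = -k * (T[-1] - T[-2]) / dx
print("indoor-surface heat flux after 24 h: %.1f W/m^2" % q_in)

The Laplace and State-Space CTF methods aim to reproduce this boundary heat flux history without time-stepping through the wall interior at every simulation step.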

Keywords: Laplace method, state-space method, conduction transfer functions, finite difference method

Procedia PDF Downloads 133
1241 Hepatoprotective Assessment of L-Ascorbate 1-(2-Hydroxyethyl)-4,6-Dimethyl-1,2-Dihydropyrimidine-2-One in a Toxic Liver Damage Test

Authors: Vladimir Zobov, Nail Nazarov, Alexandra Vyshtakalyuk, Vyacheslav Semenov, Irina Galyametdinova, Vladimir Reznik

Abstract:

The aim of this study was to investigate the hepatoprotective properties of the Xymedon derivative L-ascorbate 1-(2-hydroxyethyl)-4,6-dimethyl-1,2-dihydropyrimidine-2-one (XD), which exhibits high efficiency as an actoprotector. The study was carried out on 68 male albino rats weighing 250-400 g using preventive exposure to the test preparation. The effectiveness of XD was compared with that of Xymedon (the original substance) after administration of the compounds in identical doses; the maximum dose was 20 mg/kg. The animals orally received Xymedon or its derivative in doses of 10 and 20 mg/kg over 4 days. At 1-1.5 h after drug administration, CCl4 in vegetable oil (1:1) was given at a dose of 2 ml/kg. Controls received CCl4 without hepatoprotectors, while the intact control group consisted of rats receiving neither CCl4 nor the test compounds. On the day after the last administration of CCl4 and the compounds under study, the animals were exsanguinated under ether anesthesia, and blood and liver samples were taken for biochemical and histological analysis. Xymedon and XD, administered according to the preventive scheme, exerted hepatoprotective effects: Xymedon at a dose of 20 mg/kg and XD at doses of 10 and 20 mg/kg. The drugs under study had different effects on the condition of the liver affected by CCl4 induction. Xymedon had a more pronounced effect both on the ALT level, which can be elevated not only due to destructive changes in hepatocytes but also as a manifestation of cholestasis, and on the serum total protein level, which reflects protein synthesis in the liver. XD had a more pronounced effect on the AST level, one of the markers of hepatocyte damage. The lower effective dose of XD (10 mg/kg, compared to 20 mg/kg for Xymedon) and its pronounced effect on AST, the hepatocyte cytolysis marker, indicate a higher preventive effectiveness than Xymedon. This work was performed with the financial support of the Russian Science Foundation (grant No. 14-50-00014).

Keywords: hepatoprotectors, pyrimidine derivatives, toxic liver damage, xymedon

Procedia PDF Downloads 302
1240 Cost Overrun in Construction Projects

Authors: Hailu Kebede Bekele

Abstract:

Construction delays occur when project events do not take place at the expected time due to causes related to the client, consultant, and contractor. Delay is a major cause of cost overrun and leads to poor project efficiency. The difference between the cost at completion and the originally estimated cost is known as cost overrun. Cost overruns are not simple issues that can be neglected; more attention must be paid to preventing organizations from failing and financial expenses from escalating. Reasons raised in different studies show that the problem may arise in construction projects due to errors in budgeting, unfavorable weather conditions, inefficient machinery, and extravagance. The study focuses on mega projects, whose pace can significantly change the cost overrun calculation. Fifteen mega projects were identified to study the cost overrun problem on site. The contractor, consultant, and client are the principal stakeholders in these mega projects, and 20 people from each sector were selected to participate in the investigation of the current mega construction projects. The main objective of the study is to prioritize the major causes of the cost overrun problem. The methodology employed is qualitative, mainly rating the causes of construction project cost overrun; interviews, open-ended and closed-ended questions, group discussions, and qualitative rating methods are well suited to studying construction project overruns. The results show that design mistakes, labor shortages, payment delays, old equipment, scheduling, weather conditions, lack of skilled labor, transportation, inflation, order variations, market price fluctuations, and people's attitudes and philosophies are the principal causes of cost overrun that undermine project performance. The institution should follow the scheduled activities to keep the project moving forward positively.
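
For reference, cost overrun is commonly expressed as the percentage by which the cost at completion exceeds the original estimate, consistent with the definition above:

\text{Cost overrun (\%)} = \frac{C_{\text{actual}} - C_{\text{estimated}}}{C_{\text{estimated}}} \times 100

For example, a project estimated at 100 million that completes at 125 million shows a 25% cost overrun.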

Keywords: cost overrun, delay, mega projects, design

Procedia PDF Downloads 62
1239 Numerical Study of a Ventilation Principle Based on Flow Pulsations

Authors: Amir Sattari, Mac Panah, Naeim Rashidfarokhi

Abstract:

To enhance the mixing of fluid in a rectangular enclosure with a circular inlet and outlet, an energy-efficient approach is further investigated through computational fluid dynamics (CFD). Particle image velocimetry (PIV) measurements confirm that pulsation of the inflow velocity improves the mixing performance inside the enclosure considerably without increasing energy consumption. In this study, multiple CFD simulations with different turbulence models were performed, and the results were compared with the experimental PIV results. The study investigates small-scale representations of flow patterns in a ventilated rectangular room. The objective is to validate the concept of an energy-efficient ventilation strategy with improved thermal comfort and a reduction of stagnant air inside the room. Experimental and simulated results confirm that, through pulsation of the inflow velocity, strong secondary vortices are generated downstream of the entrance wall-jet. The pulsatile inflow profile promotes periodic generation of vortices with stronger eddies despite a relatively low inlet velocity, which leads to a larger boundary layer with increased kinetic energy in the occupied zone. A real-scale study was not conducted; however, it can be concluded that a constant-velocity inflow profile can be replaced with a pulsed profile at a lower flow rate while preserving the mixing efficiency. Among the turbulence models examined in this study, SST k-ω is the most advantageous, exhibiting a global airflow pattern similar to that observed in the experiments. The detailed near-wall velocity profiles are used to identify the wall-jet instabilities in its mixing and boundary layers. The SAS method was later applied to predict the turbulent parameters in the center of the domain. In both cases, the predictions are in good agreement with the measured results.
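
The abstract does not specify the pulsation waveform; purely for illustration, a sinusoidal modulation of the mean inlet velocity U₀ with relative amplitude A and pulsation frequency f would take the form

u_{\text{in}}(t) = U_0\left[1 + A\sin(2\pi f t)\right], \qquad 0 \le A \le 1,

so that the time-averaged inlet velocity remains U₀ while the instantaneous velocity periodically exceeds it, which is what drives the stronger secondary vortices described above.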

Keywords: CFD, PIV, pulsatile inflow, ventilation, wall-jet

Procedia PDF Downloads 174