Search results for: nitrogen removal efficiency
1228 Radioactivity Assessment of Sediments in Negombo Lagoon, Sri Lanka
Authors: H. M. N. L. Handagiripathira
Abstract:
The distributions of naturally occurring and anthropogenic radioactive materials were determined in surface sediments taken at 27 different locations along the bank of Negombo Lagoon in Sri Lanka. Hydrographic parameters of the lagoon water and grain size analyses of the sediment samples were also carried out for this study. The conductivity of the adjacent water varied from 13.6 mS/cm near the southern end of the lagoon to 55.4 mS/cm near the northern end, and salinity varied correspondingly from 7.2 psu to 32.1 psu. The average pH of the water was 7.6 and the average water temperature was 28.7 °C. The grain size analysis gave mass fractions of sand (60.9%), fine sand (30.6%) and fine silt+clay (1.3%) at the sampling locations. The surface sediment samples, 1 kg wet weight each from the upper 5-10 cm layer, were oven dried at 105 °C for 24 hours to constant weight, homogenized and sieved through a 2 mm sieve (IAEA Technical Reports Series No. 295). The radioactivity concentrations were determined by gamma spectrometry using an Ultra Low Background Broad Energy High Purity Ge detector, BEGe (Model BE5030, Canberra), with Canberra Industries' Laboratory Source-less Calibration Software (LabSOCS) mathematical efficiency calibration approach and Geometry Composer software. The mean activity concentrations were found to be 24 ± 4, 67 ± 9, 181 ± 10, 59 ± 8, 3.5 ± 0.4 and 0.47 ± 0.08 Bq/kg for 238U, 232Th, 40K, 210Pb, 235U and 137Cs, respectively. The mean absorbed dose rate in air, radium equivalent activity, external hazard index, annual gonadal dose equivalent and annual effective dose equivalent were 60.8 nGy/h, 137.3 Bq/kg, 0.4, 425.3 µSv/year and 74.6 µSv/year, respectively.
The results of this study provide baseline information on natural and artificial radioactive isotopes in the lagoon and on the associated radiological risk and environmental pollution.
Keywords: gamma spectrometry, lagoon, radioactivity, sediments
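The hazard indices quoted above follow widely used UNSCEAR-style formulas. A minimal sketch (an illustration, not the study's own calculation) reproduces them approximately from the reported mean activities, with the assumption that the 238U activity stands in for 226Ra:

```python
# Illustrative radiological hazard indices from the reported mean activity
# concentrations. 238U is taken as a proxy for 226Ra (an assumption), so
# small differences from the paper's values (137.3 Bq/kg, 60.8 nGy/h) are
# expected.

C_Ra, C_Th, C_K = 24.0, 67.0, 181.0  # Bq/kg for 226Ra (via 238U), 232Th, 40K

# Radium equivalent activity (Bq/kg)
ra_eq = C_Ra + 1.43 * C_Th + 0.077 * C_K

# Absorbed gamma dose rate in air (nGy/h)
dose_rate = 0.462 * C_Ra + 0.604 * C_Th + 0.0417 * C_K

# External hazard index (dimensionless, should stay below 1)
h_ex = C_Ra / 370 + C_Th / 259 + C_K / 4810

print(round(ra_eq, 1), round(dose_rate, 1), round(h_ex, 2))
```

The computed values (about 134 Bq/kg and 59 nGy/h) land close to the reported 137.3 Bq/kg and 60.8 nGy/h, consistent with the 226Ra-proxy assumption.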
Procedia PDF Downloads 139
1227 The Renewal of Chinese Urban Village on Cultural Ecology: Hubei Village as an Example
Authors: Shaojun Zheng, Lei Xu, Yunzi Wang
Abstract:
The main purpose of this research is to use cultural ecology to analyze the renewal of Shenzhen's urban villages in the process of China's urbanization, and to evaluate and guide that renewal so as to combine social value with economic efficiency and activate the urban villages. The urban village has a long history. It contains many old buildings and a diverse population, and it has strong connections with its surrounding environment. Cultural ecology, which applies the knowledge of ecology to the study of culture, provides a cultural perspective on renewal. We take Hubei village in Shenzhen as our example. Using cultural ecology, we find a new way of dealing with the relationship between culture and other factors. It helps us give buildings and spaces cultural meaning at different scales, and it enables us to identify a development pattern unique to the urban village. After analyzing several well-known cultural block cases, we find it is possible to connect the unique culture of the urban village with the renovation of its buildings, community, and commerce. We propose the following strategies, each with a specific target: 1. Building renovation: we repair and rebuild the original buildings as little as possible and retain the original urban spatial tissue as much as possible, to keep the original sense of place and cultural atmosphere. 2. Community upgrade: we reshape the village stream, restore original functions, and add events that draw people in to complete the existing cultural circle. 3. District commerce: we introduce food and drink venues, boutique commerce, and creative industries, making full use of the historical atmosphere of the site to enhance its cultural appeal. For the renewal of a seemingly chaotic, mixed urban village, it is important to break away from the conventional practice of building shopping malls or residential towers.
Rather than creating such landmark buildings, cultural ecology activates the urban village by exploiting its unique culture, combining the old and the new into a new stream of energy and forming a new cultural, commercial and stylish landmark for the city.
Keywords: cultural ecology, urban village, renewal, combination
Procedia PDF Downloads 394
1226 Effects of Neem (Azadirachta indica A. Juss) Kernel Inclusion in Broiler Diet on Growth Performance, Organ Weight and Gut Morphometry
Authors: Olatundun Bukola Ezekiel, Adejumo Olusoji
Abstract:
A feeding trial was conducted with 100 two-week-old broiler chickens to evaluate the influence of including neem kernel in broiler diets at 0, 2.5, 5, 7.5 and 10% (replacing an equal quantity of maize) on their performance, organ weights and gut morphometry. The birds were randomly allotted to five dietary treatments, each with four replicates of five broilers, in a completely randomized design. The diets were formulated to be iso-nitrogenous (23% CP). Weekly feed intake and changes in body weight were recorded and feed efficiency was determined. At the end of the 28-day feeding trial, four broilers per treatment were selected and sacrificed for carcass evaluation. Results were subjected to analysis of variance using Statistical Analysis Software; treatment means were presented with group standard errors of means and, where significant, compared using the Duncan multiple range test of the same software. The results showed that broilers fed the 2.5% neem kernel diet had growth performance statistically comparable to those fed the control diet. Birds on the 5, 7.5 and 10% neem kernel diets showed a significant (P<0.05) increase in the relative weight of the liver. The absolute weight of the spleen also increased significantly (P<0.05) in birds on the 10% neem kernel diet. Diets with more than 5% neem kernel gave a significant (P<0.05) increase in the relative weight of the kidney. The length of the small intestine increased significantly in birds fed the 7.5 and 10% neem kernel diets. No significant (P<0.05) differences occurred in the lengths of the large intestine or the right and left caeca. It is recommended that neem kernel can be included at up to 2.5% in broiler chicken diets without any deleterious effects on the performance and physiological status of the birds.
Keywords: broiler chicken, growth performance, gut morphometry, neem kernel, organ weight
Procedia PDF Downloads 764
1225 Synthesis, Characterization, Optical and Photophysical Properties of Pyrene-Labeled Ruthenium(II) Trisbipyridine Complex Cored Dendrimers
Authors: Mireille Vonlanthen, Pasquale Porcu, Ernesto Rivera
Abstract:
Dendritic macromolecules present unique physical and chemical properties. One of these is the ability to transfer energy from donor moieties at the periphery to an acceptor moiety at the core, mimicking the antenna effect of photosynthesis. The mechanism is Förster resonance energy transfer (FRET) and requires some overlap between the emission spectrum of the donor and the absorption spectrum of the acceptor. Since it requires coupling of transition dipoles but no overlap of the physical wavefunctions, energy transfer by the Förster mechanism can occur over quite long distances, from 1 nm up to about 10 nm; however, the efficiency of the transfer depends strongly on distance. The Förster radius is the distance at which 50% of the donor's emission is deactivated by FRET. In this work, we synthesized and characterized a novel series of dendrimers bearing pyrene moieties at the periphery and a Ru(II) complex at the core. The optical and photophysical properties of these compounds were studied by absorption and fluorescence spectroscopy. Pyrene is a well-studied chromophore with the particularity of showing both monomer and excimer fluorescence emission. Coordination compounds of Ru(II) are red emitters with low quantum yields and long excited-state lifetimes. We observed efficient singlet-to-singlet energy transfer in such constructs. Moreover, it is known that the energy of the emitting MLCT state of Ru(II) can be tuned to become almost isoenergetic with the triplet state of pyrene, leading to an extended phosphorescence lifetime. Using dendrimers bearing pyrene moieties as ligands for Ru(II), we could combine the antenna effect of the dendrimers, and their shielding against quenching by dioxygen, with the lifetime increase due to triplet-triplet equilibrium.
Keywords: dendritic molecules, energy transfer, pyrene, Ru-trisbipyridine complex
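The strong distance dependence of the Förster mechanism mentioned in this abstract follows the standard efficiency relation E = 1/(1 + (r/R0)^6). A minimal sketch (the Förster radius used below is illustrative, not a value from this study):

```python
# Förster (FRET) transfer efficiency as a function of donor-acceptor
# distance r and Förster radius R0. At r = R0 exactly half of the donor's
# emission is deactivated by transfer, and efficiency falls off with the
# sixth power of distance. R0 below is a hypothetical value.

def fret_efficiency(r_nm: float, r0_nm: float) -> float:
    return 1.0 / (1.0 + (r_nm / r0_nm) ** 6)

R0 = 3.0  # hypothetical Förster radius in nm
for r in (1.5, 3.0, 6.0):
    print(f"r = {r} nm -> E = {fret_efficiency(r, R0):.3f}")
```

Halving the distance relative to R0 pushes the efficiency above 98%, while doubling it drops the efficiency below 2%, which is why FRET pairs act as sensitive molecular rulers in the 1-10 nm range.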
Procedia PDF Downloads 277
1224 Explanatory Variables for Crash Injury Risk Analysis
Authors: Guilhermina Torrao
Abstract:
An extensive number of studies have been conducted to determine the factors that influence crash injury risk (CIR); however, uncertainties inherent in the selected variables have been neglected. A review of the existing literature is required not only to obtain an overview of the variables and measures but also to ascertain the implications of comparing studies without a systematic view of variable taxonomy. The aim of this literature review is therefore to examine and report on peer-reviewed studies in the field of crash analysis and to understand the implications of the broad variation in variable selection in CIR analysis. The objective of this study is to demonstrate the variance in variable selection and classification when modeling injury risk for occupants of light vehicles, by presenting an analytical review of the literature. Based on data collected from 64 journal publications over the past 21 years, the review discusses the variables selected by each study across an organized list of predictors for CIR analysis and provides a better understanding of the contribution of accident and vehicle factors to injuries sustained by occupants of light vehicles. A cross-comparison demonstrates that almost half the studies (48%) did not consider vehicle design specifications (e.g., vehicle weight), whereas, among those that did, vehicle age/model year was the most frequently selected explanatory variable, used by 41% of the studies. Of the studies that included a speed risk factor in their analyses, the majority (64%) used the legal speed limit as a 'proxy' for vehicle speed at the moment of the crash, imposing limitations on CIR analysis and modeling. Despite the proven efficiency of airbags in minimizing injury impact following a crash, only 22% of studies included airbag deployment data.
A major contribution of this study is to highlight the uncertainty linked to explanatory variable selection and to identify opportunities for improvement in future studies in the field of road injuries.
Keywords: crash, exploratory, injury, risk, variables, vehicle
Procedia PDF Downloads 135
1223 Metal-Based Deep Eutectic Solvents for Extractive Desulfurization of Fuels: Analysis from Molecular Dynamics Simulations
Authors: Aibek Kukpayev, Dhawal Shah
Abstract:
Combustion of sour fuels containing high amounts of sulfur leads to the formation of sulfur oxides, which harm the environment and have a negative impact on human health. Considering this, several regulations have been imposed to bring the sulfur content in fuel down to less than 10 ppm. In recent years, novel deep eutectic solvents (DESs) have been developed to achieve deep desulfurization, particularly to extract thiophenic compounds from liquid fuels. These novel DESs, considered analogous to ionic liquids, are green, eco-friendly, inexpensive, and sustainable. Herein, using molecular dynamics simulations, we analyze the interactions of metal-based DESs with a model oil consisting of thiophenic compounds. The DESs used consist of polyethylene glycol (PEG-200) as a hydrogen bond donor, choline chloride (ChCl) or tetrabutylammonium chloride (TBAC) as a hydrogen bond acceptor, and cobalt chloride (CoCl₂) as the metal salt. In particular, the combination ChCl:PEG-200:CoCl₂ at a ratio of 1:2:1 and the combination TBAC:PEG-200:CoCl₂ at a ratio of 1:2:0.25 were simulated, separately, with a model oil consisting of octane and thiophenes at 25 °C and 1 bar. The results of the molecular dynamics simulations were analyzed in terms of interaction energies between the different components. The simulations revealed a stronger DES/thiophene interaction than octane/thiophene interaction, suggestive of an efficient desulfurization process. In addition, our analysis suggests that the choice of hydrogen bond acceptor strongly influences the efficiency of the desulfurization process. Taken together, the results also show the importance of the metal ion in the process, although it is present in small amounts, and the role of the polymer in the desulfurization of the model fuel.
Keywords: deep eutectic solvents, desulfurization, molecular dynamics simulations, thiophenes
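As a toy illustration of the pairwise interaction-energy bookkeeping such MD analyses perform, the sketch below sums Lennard-Jones energies between two groups of sites (the functional form is the standard LJ potential; the parameters and coordinates are invented, not the force-field values used in the study):

```python
import math

# Sum of pairwise Lennard-Jones energies between two groups of sites, the
# same kind of bookkeeping an MD analysis uses to compare DES/thiophene
# versus octane/thiophene interaction energies. Parameters are illustrative.

def lj(r, epsilon, sigma):
    """Lennard-Jones potential: zero at r = sigma, minimum -epsilon."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 ** 2 - sr6)

def group_energy(coords_a, coords_b, epsilon=0.5, sigma=0.35):
    """Total pairwise LJ energy between site groups A and B (nm, kJ/mol)."""
    total = 0.0
    for a in coords_a:
        for b in coords_b:
            total += lj(math.dist(a, b), epsilon, sigma)
    return total

# The potential minimum sits at r = 2**(1/6) * sigma with depth -epsilon.
r_min = 2 ** (1 / 6) * 0.35
print(round(lj(r_min, 0.5, 0.35), 6))
```

A more negative group energy for the DES/thiophene pairing than for octane/thiophene would indicate the preferential association the abstract reports.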
Procedia PDF Downloads 146
1222 Influence of the Adsorption of Anionic–Nonionic Surfactants/Silica Nanoparticles Mixture on Clay Rock Minerals in Chemical Enhanced Oil Recovery
Authors: C. Mendoza Ramírez, M. Gambús Ordaz, R. Mercado Ojeda.
Abstract:
Flooding with chemical solutions of surfactants, which reduce the interfacial tension between crude oil and water, is a potential application of chemical enhanced oil recovery (CEOR). However, the high rate of surfactant retention associated with adsorption in the porous medium, together with the complex mineralogical composition of the reservoir rock, limits the efficiency of crude oil displacement. This study evaluates the effect of the concentration of a mixture of anionic and non-ionic surfactants with silica nanoparticles on a rock sample composed of 25.14% clay minerals of the kaolinite, chlorite, halloysite and montmorillonite types, according to X-Ray Diffraction and Scanning Electron Microscopy analyses (XRD and SEM, respectively). The amount of the surfactant mixture adsorbed on the clay rock minerals was analyzed by UV-Visible spectroscopy from its calibration curve and the 4-Region Isotherm Model. The adsorption rate of the surfactant on the clay rock averages 32% across all concentrations, influenced by the substrate surface area of 1.6 m²/g and by the mineralogical composition of the clay, which increases the cation exchange capacity (CEC). In addition, in Regions I and II no final concentration could be measured by UV-Vis, owing to the surfactant's ionic nature, its high affinity for the clay rock and its low concentration.
Finally, for potential CEOR applications, the adsorption of these mixed surfactant systems is of industrial relevance, and it is concluded that concentrations in Regions III and IV can be used: adsorption initially increases and then levels off at equilibrium, where interfacial tension values on the order of 10⁻¹ mN/m are reached.
Keywords: anionic–nonionic surfactants, clay rock, adsorption, 4-region isotherm model, cation exchange capacity, critical micelle concentration, enhanced oil recovery
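The adsorbed amounts behind such isotherms are typically obtained by the depletion method from the UV-Vis calibration curve. A minimal sketch with made-up numbers (not the study's data) shows the arithmetic:

```python
# Depletion-method estimate of surfactant adsorption on clay:
#   q_e = (C0 - Ce) * V / m
# where C0 and Ce are the initial and equilibrium concentrations (mg/L),
# V the solution volume (L) and m the clay mass (g). All numbers below
# are illustrative, not the study's measurements.

def adsorbed_amount(c0_mg_L, ce_mg_L, volume_L, mass_g):
    return (c0_mg_L - ce_mg_L) * volume_L / mass_g  # mg adsorbed per g clay

c0, ce = 500.0, 340.0           # hypothetical concentrations, mg/L
q = adsorbed_amount(c0, ce, 0.05, 1.0)
retention = (c0 - ce) / c0      # fraction retained, cf. the ~32% reported
print(q, round(retention, 2))
```

Repeating the measurement across a range of C0 values and plotting q against Ce is what produces the four-region isotherm shape discussed above.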
Procedia PDF Downloads 70
1221 Combination Method Cold Plasma and Liquid Threads
Authors: Nino Tsamalaidze
Abstract:
Cold plasma is an ionized neutral gas with a temperature of 30-40 °C, but its impact includes not only the gas itself, but also active molecules, charged particles, heat and low-power UV radiation. The main goal of the technology we describe is to trigger the skin's natural regeneration function and improve its internal metabolism, which produces a pronounced rejuvenation effect. In particular: eliminate fine mimic wrinkles; get rid of wrinkles around the mouth (purse-string wrinkles); reduce the overhang of the upper eyelid; eliminate bags under the eyes; provide a lifting effect on the oval of the face; reduce stretch marks; shrink pores; even out the skin, reducing the appearance of acne and scars; and remove pigmentation. A clear indication of the major findings of the study is based on current patient practice. The method combines cold plasma and liquid threads. The advantage of cold plasma is undoubtedly its efficiency: the result of its application can be compared with that of a surgical facelift, even though the procedure is non-invasive and the risks are minimized. Another advantage is that the technique can be applied to the most sensitive skin of the face, the eyelids and the area around the eyes. Cold plasma is one of the few techniques that eliminates bags under the eyes and overhanging eyelids without violating the integrity of the tissues. In addition to rejuvenation and a lifting effect, the benefits of cold plasma include getting rid of scars, couperose, stretch marks and other skin defects; plasma also relieves acne, seborrhea and skin fungus, and even heals ulcers. The cold plasma method makes it possible to achieve a result similar to blepharoplasty: carried out on the skin of the eyelids, the procedure allows non-surgical correction of the eyelid line in 3-4 sessions.
One of the undoubted advantages of this method is the short rehabilitation period and rapid healing of the skin.
Keywords: wrinkles, telangiectasia, pigmentation, pore closing
Procedia PDF Downloads 84
1220 Fabrication of Coatable Polarizer by Guest-Host System for Flexible Display Applications
Authors: Rui He, Seung-Eun Baik, Min-Jae Lee, Myong-Hoon Lee
Abstract:
The polarizer is one of the most essential optical elements in LCDs. Currently, the most widely used polarizers for LCDs are derivatives of the H-sheet polarizer. There is a need for coatable polarizers that are much thinner and more stable than H-sheet polarizers. One possible approach to thin, stable, coatable polarizers is based on a highly ordered guest-host system. In our research, we aimed to fabricate a coatable polarizer based on a highly ordered liquid crystalline monomer and dichroic dye 'guest-host' system, in which anisotropic absorption of light is achieved by aligning a dichroic dye (guest) through the cooperative motion of the ordered liquid crystal (host) molecules. First, we designed and synthesized a new reactive liquid crystalline monomer containing polymerizable acrylate groups as the 'host' material. The structure was confirmed by 1H-NMR and IR spectroscopy. The liquid crystalline behavior was studied by differential scanning calorimetry (DSC) and polarized optical microscopy (POM), which confirmed that the monomers possess a highly ordered smectic phase at relatively low temperature. The photocurable 'guest-host' system was then prepared by mixing the liquid crystalline monomer, a dichroic dye and a photoinitiator. Coatable polarizers were fabricated by spin-coating the above mixture onto a substrate with an alignment layer. In-situ photopolymerization was carried out at room temperature under UV irradiation, forming a crosslinked structure that stabilized the aligned dichroic dye molecules. Finally, the dichroic ratio (DR), order parameter (S) and polarization efficiency (PE) were determined by polarized UV/Vis spectroscopy. We prepared coatable polarizers using different types of dichroic dyes to meet the requirements of display applications.
The results reveal that coatable polarizers at a thickness of 8 μm exhibited DR = 12-17 and relatively high PE (>96%), with the highest PE = 99.3%, showing their potential for LCD and flexible display applications.
Keywords: coatable polarizer, display, guest-host, liquid crystal
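One common way to relate the measured dichroic ratio to the order parameter of a uniaxially aligned guest-host film is S = (DR - 1)/(DR + 2); the abstract does not state which definition was used, so this is a sketch under that standard assumption:

```python
# Order parameter S from the dichroic ratio DR for a uniaxially aligned
# guest-host film, using the common relation S = (DR - 1) / (DR + 2).
# For the DR = 12-17 range reported, S falls between roughly 0.79 and 0.84.

def order_parameter(dr: float) -> float:
    return (dr - 1.0) / (dr + 2.0)

for dr in (12, 17):
    print(dr, round(order_parameter(dr), 3))
```

S approaches 1 for perfect alignment of the dye transition dipoles along the director, so values above about 0.8 indicate the highly ordered smectic host has aligned the guest dye well.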
Procedia PDF Downloads 251
1219 AI-Driven Solutions for Optimizing Master Data Management
Authors: Srinivas Vangari
Abstract:
In the era of big data, ensuring the accuracy, consistency, and reliability of critical data assets is crucial for data-driven enterprises, and Master Data Management (MDM) plays a central role in this endeavor. This paper investigates the role of Artificial Intelligence (AI) in enhancing MDM, focusing on how AI-driven solutions can automate and optimize various stages of the master data lifecycle. By integrating AI into processes such as data creation, maintenance, enrichment, and usage, and examining its impact through quantitative and qualitative analyses, organizations can achieve significant improvements in data quality and operational efficiency. Quantitative analysis is employed to measure the impact of AI on key metrics, including data accuracy, processing speed, and error reduction. For instance, our study demonstrates an 18% improvement in data accuracy and a 75% reduction in duplicate records across multiple systems post-AI implementation. Furthermore, AI's predictive maintenance capabilities reduced data obsolescence by 22%, as indicated by statistical analyses of data usage patterns over a 12-month period. Complementing this, a qualitative analysis delves into the specific AI-driven strategies that enhance MDM practices, such as automating data entry and validation, which resulted in a 28% decrease in manual errors. Insights from case studies highlight how AI-driven data cleansing processes reduced inconsistencies by 25% and how AI-powered enrichment strategies improved data relevance by 24%, thus boosting decision-making accuracy. The findings demonstrate that AI significantly enhances data quality and integrity, leading to improved enterprise performance through cost reduction, increased compliance, and more accurate, real-time decision-making.
These insights underscore the value of AI as a critical tool in modern data management strategies, offering a competitive edge to organizations that leverage its capabilities.
Keywords: artificial intelligence, master data management, data governance, data quality
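As a toy illustration of the duplicate-record detection step in the master data lifecycle (a production deployment would use ML-based entity resolution; this sketch uses only stdlib fuzzy matching, and the records, fields and threshold are invented):

```python
from difflib import SequenceMatcher

# Flag likely duplicate master-data records by fuzzy-matching a normalized
# name + city key. Threshold and sample records are illustrative only.

def normalize(record):
    return " ".join(record[field].lower().strip() for field in ("name", "city"))

def find_duplicates(records, threshold=0.75):
    dupes = []
    for i in range(len(records)):
        for j in range(i + 1, len(records)):
            score = SequenceMatcher(
                None, normalize(records[i]), normalize(records[j])
            ).ratio()
            if score >= threshold:
                dupes.append((i, j, round(score, 2)))
    return dupes

customers = [
    {"name": "Acme Corporation", "city": "Berlin"},
    {"name": "ACME Corp.", "city": "Berlin"},
    {"name": "Globex GmbH", "city": "Munich"},
]
print(find_duplicates(customers))
```

Pairwise comparison is quadratic in the record count; real MDM systems first block records on a cheap key (e.g., city or postcode) so that only candidate pairs are scored.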
Procedia PDF Downloads 19
1218 Building and Development of the Stock Market Institutional Infrastructure in Russia
Authors: Irina Bondarenko, Olga Vandina
Abstract:
The theory of evolutionary economics underpins the preparation and application of methods for forming the stock market infrastructure development concept. The authors believe that the process of forming and developing the stock market infrastructure model in Russia rests on the theory of large systems, which considers the financial market infrastructure as a whole from a macroeconomic perspective, with its aims and objectives defined accordingly. Evaluating the prospects for interaction among securities market institutions makes it possible to identify the problems associated with the development of this system. Interaction among the elements of the stock market infrastructure reduces the cost and time of transactions, thereby freeing up market participants' resources for more efficient operation. The methodology of transaction analysis thus allows the financial infrastructure to be defined as a set of specialized institutions forming a modern quasi-stable system. A financial infrastructure based on international standards should include trading systems, regulatory and supervisory bodies, rating agencies, and settlement, clearing and depository organizations. Distributing financial assets, reducing transaction costs, and increasing market transparency are key tasks in improving the level and quality of services provided by the institutions of the securities market's financial infrastructure. In order to improve the efficiency of the regulatory system, it is necessary to provide "standards" for all market participants. Clear regulation of barriers to stock market entry and exit, conditions for the development and implementation of new laws governing the activities of securities market participants, and proposals aimed at minimizing risks and costs will together enable positive results.
The latter will be manifested in an increased level of market participant security and, accordingly, in the attractiveness of this market for investors and issuers.
Keywords: institutional infrastructure, financial assets, regulatory system, stock market, transparency of the market
Procedia PDF Downloads 134
1217 PLGA Nanoparticles Entrapping Dual Anti-TB Drugs Amikacin and Moxifloxacin as a Potential Host-Directed Therapy for Multidrug Resistant Tuberculosis
Authors: Sharif Abdelghany
Abstract:
Polymeric nanoparticles have been widely investigated as a controlled-release drug delivery platform for the treatment of tuberculosis (TB). Such nanoparticles are also readily internalised into macrophages, leading to high intracellular drug concentrations. In this study, two anti-TB drugs, amikacin and moxifloxacin, were encapsulated into PLGA nanoparticles. The novelty of this work lies in: (1) the efficient encapsulation of two hydrophilic second-line anti-TB drugs, and (2) the intramacrophage delivery of this synergistic combination, potentially for rapid treatment of multidrug-resistant TB (MDR-TB). Two water-oil-water (w/o/w) emulsion strategies were employed: (1) alginate-coated PLGA nanoparticles, and (2) alginate-entrapped PLGA nanoparticles. The average particle size and polydispersity index (PDI) of the alginate-coated PLGA nanoparticles were found to be unfavourably high, at 640 ± 32 nm and 0.63 ± 0.09, respectively. In contrast, the alginate-entrapped PLGA nanoparticles were within the desirable particle size range of 282-315 nm with a PDI of 0.08-0.16, and were therefore chosen for subsequent studies. The alginate-entrapped PLGA nanoparticles yielded a drug loading of over 10 µg/mg powder for amikacin and more than 5 µg/mg for moxifloxacin, with entrapment efficiencies of approximately 25-31% for moxifloxacin and 51-59% for amikacin. To study macrophage uptake efficiency, nanoparticles of the alginate-entrapped formulation were loaded with acridine orange as a marker, seeded onto THP-1-derived macrophages and viewed under confocal microscopy. The particles were readily internalised into the macrophages and highly concentrated in the nuclear region. Furthermore, the anti-mycobacterial activity of the drug-loaded particles was evaluated using M. tuberculosis-infected macrophages, which revealed a significant reduction (4-log) in viable bacterial count compared to the untreated group.
In conclusion, the amikacin-moxifloxacin alginate-entrapped PLGA nanoparticles are promising candidates for further in vivo studies.
Keywords: moxifloxacin and amikacin, nanoparticles, multidrug resistant TB, PLGA
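The loading and entrapment figures quoted above follow from two standard definitions; a minimal sketch with a hypothetical batch (the masses below are invented, not the study's raw data):

```python
# Drug loading (DL) and entrapment efficiency (EE) for drug-in-nanoparticle
# formulations. All input masses are illustrative only.

def drug_loading(drug_entrapped_ug, particle_mass_mg):
    return drug_entrapped_ug / particle_mass_mg       # µg drug per mg powder

def entrapment_efficiency(drug_entrapped_ug, drug_added_ug):
    return 100.0 * drug_entrapped_ug / drug_added_ug  # percent

# Hypothetical batch: 50 mg of nanoparticles entrap 550 µg of the 1000 µg
# of amikacin added, giving DL = 11 µg/mg and EE = 55%, inside the ranges
# the abstract reports (>10 µg/mg, 51-59%).
print(drug_loading(550, 50), entrapment_efficiency(550, 1000))
```

Note the two quantities answer different questions: DL measures payload per unit of powder dosed, while EE measures how much of the drug added during formulation was actually captured.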
Procedia PDF Downloads 366
1216 Revolutionizing Autonomous Trucking Logistics with Customer Relationship Management Cloud
Authors: Sharda Kumari, Saiman Shetty
Abstract:
Autonomous trucking is just one of the numerous significant shifts impacting fleet management services. The Society of Automotive Engineers (SAE) has defined six levels of vehicle automation that have been adopted internationally, including by the United States Department of Transportation. On public highways in the United States, organizations are testing driverless vehicles with at least Level 4 automation, in which a human is present in the vehicle and can disable automation; this is usually done while the trucks are not engaged in highway driving. Completely driverless vehicles, however, are presently being tested in the state of California. While autonomous trucking can increase safety, decrease trucking costs, provide solutions to trucker shortages, and improve efficiency, logistics, too, requires advancements to keep up with trucking innovations. Given that artificial intelligence, machine learning, and automated procedures enable people in other sectors to do their duties with fewer resources, CRM (Customer Relationship Management) can be applied to the autonomous trucking business to provide the same level of efficiency. In a society witnessing significant digital disruption, fleet management is likewise being transformed by technology. Utilizing strategic alliances to enhance core services is an effective technique for capitalizing on innovations and delivering enhanced services. Applying analytics to CRM systems improves cost control of fuel strategy, fleet maintenance, driver behavior, route planning, road safety compliance, and capacity utilization. Integration of autonomous trucks with automated fleet management, yard/terminal management, and customer service is possible, and thus has significant power to redraw the lines between the public and private spheres in autonomous trucking logistics.
Keywords: autonomous vehicles, customer relationship management, customer experience, autonomous trucking, digital transformation
Procedia PDF Downloads 108
1215 The Rapid Industrialization Model
Authors: Fredrick Etyang
Abstract:
This paper presents a Rapid Industrialization Model (RIM) designed to support existing industrialization policies, strategies and industrial development plans at the national, regional and constituent levels in Africa. The model will reinforce efforts by state and non-state actors toward the inclusive and sustainable industrialization of Africa. The overall objective of this model is to serve as a framework for rapid industrialization in developing economies; the specific objectives range from supporting rapid industrial development to promoting structural change in the economy, balanced regional industrial growth, and the achievement of local, regional and international competitiveness in areas of clear comparative advantage in industrial exports. Ultimately, the RIM serves as a step-by-step guideline for the industrialization of African economies. The model is the product of a scientific research process underpinned by desk research reviewing African countries' development plans, strategies, datasets and industrialization efforts, together with consultation with key informants. This research process revealed multi-directional and renewed efforts toward the industrialization of Africa, premised on the collective commitment of individual states, regional economic communities and the African Union Commission, among other strategic stakeholders. It was further established that the inputs into Africa's industrialization outstrip the levels of industrial development on the continent. The RIM therefore serves as a step-by-step framework for African countries to follow in transforming inputs into tangible outputs and outcomes in the short, intermediate and long run.
This model postulates three stages of industrialization and three phases toward the rapid industrialization of African economies. The model is simple to understand, easily implementable and contextualizable, with a high return on each unit invested in industrialization under the model. Effective implementation of the model will therefore result in the inclusive, sustainable and rapid industrialization of Africa.
Keywords: economic development, industrialization, economic efficiency, exports and imports
Procedia PDF Downloads 84
1214 Guidelines for Enhancing the Learning Environment by the Integration of Design Flexibility and Immersive Technology: The Case of the British University in Egypt's Classrooms
Authors: Eman Ayman, Gehan Nagy
Abstract:
The learning environment has four main parameters that affect its efficiency: pedagogy, user, technology, and space. According to Morrone, making these parameters adaptable to future developments is essential, and educational organizations will need to develop their learning spaces accordingly. Design flexibility and immersive technology can serve as tools for this development. When flexible design concepts are used, learning spaces are created that can accommodate a variety of teaching and learning activities; to meet the various needs and interests of students, these spaces are easily reconfigurable and customizable. The immersive learning opportunities offered by technologies such as virtual reality, augmented reality, and interactive displays, on the other hand, extend beyond the confines of the traditional classroom, and these technological advancements can improve learning. This thesis highlights the lack of innovative, flexible learning spaces in educational institutions and aims to develop guidelines for enhancing the learning environment through the integration of flexible design and immersive technology. The research uses a mixed-method approach, both qualitative and quantitative: the qualitative section draws on literature review theories and case study analysis, while the quantitative section reports the results of applied studies on the effectiveness of redesigning a learning space from its traditional current state into a flexible, technological, contemporary space adaptable to many changes and educational needs. The research findings establish the importance of flexibility in the interior design of learning spaces, as it enhances space optimization and the capacity to accommodate change, and they record the significant contribution of immersive technology in assisting the design process.
The guidelines are finalized in a last step that summarizes the questionnaire results and comparative analysis.
Keywords: flexibility, learning space, immersive technology, learning environment, interior design
Procedia PDF Downloads 951213 Flexible Feedstock Concept in Gasification Process for Carbon-Negative Energy Technology: A Case Study in Malaysia
Authors: Zahrul Faizi M. S., Ali A., Norhuda A. M.
Abstract:
Emissions of greenhouse gases (GHG) from solid waste treatment and the dependency on fossil fuels for electricity production are major concerns in Malaysia and globally. Innovation in downdraft gasification with combined heat and power (CHP) systems has the potential to minimize solid waste and reduce anthropogenic GHG emissions from conventional fossil fuel power plants. However, the efficiency and capability of downdraft gasification to generate electricity from various alternative fuels, for instance, agricultural residues (i.e., woodchip, coconut shell) and municipal solid waste (MSW), are still debated, as is the toxicity level of the resulting bottom ash. This study therefore evaluates the adaptability and reliability of a 20 kW downdraft gasification system to generate electricity (while considering the environmental sustainability of the bottom ash) using flexible local feedstock at 20, 40, and 60% mixed ratios of MSW to agricultural residues. Feedstock properties such as particle size, moisture, and ash content are also analyzed to identify the optimal characteristics of the feedstock combination (feedstock flexibility) for maximum energy generation. Results show that the gasification system can flexibly accommodate different feedstock compositions provided the particle size is below 2 inches and the moisture content is between 15 and 20%. These values enhance gasifier performance and have a significant effect on the composition of the syngas utilized by the internal combustion engine, which in turn determines energy production. The results obtained in this study provide a new perspective on the transition from conventional gasification systems to a future reliable carbon-negative energy technology, thereby promoting commercial scale-up of the downdraft gasification system.
Keywords: carbon-negative energy, feedstock flexibility, gasification, renewable energy
Procedia PDF Downloads 1351212 Applying Participatory Design for the Reuse of Deserted Community Spaces
Authors: Wei-Chieh Yeh, Yung-Tang Shen
Abstract:
The concept of community building started in 1994 in Taiwan. After years of development, it fostered the notion that local residents should actively participate in community issues as co-operators rather than subordinates. Participatory design gives participants more control in the decision-making process, helps to reduce the friction caused by arguments, and assists in bringing different parties to consensus. This increases the efficiency of projects run in the community, so the participation of local residents is key to the success of community building. This study applied participatory design to develop plans for the reuse of deserted spaces in the community, from the first stage of brainstorming for design ideas and making creative models to be employed later, through to the final stage of construction. After conducting a series of participatory design activities, it aimed to integrate the different opinions of residents, develop a sense of belonging, and reach a consensus. It also aimed at building the residents' awareness of their responsibility for the environment and related issues of sustainable development. By reviewing relevant literature and the history of related studies, the study formulated a theory. It took the "2012-2014 Changhua County Community Planner Counseling Program" as a case study to investigate the implementation process of participatory design. Research data were collected through document analysis, participant observation, and in-depth interviews. After examining the three elements of "Design Participation", "Construction Participation", and "Follow-up Maintenance Participation" in the case, the study reached a promising conclusion: maintenance works were carried out better than in common public works, maintenance costs were lower, and the works that residents were involved in were more creative.
Most importantly, the community characteristics could be easily recognized.
Keywords: participatory design, deserted space, community building, reuse
Procedia PDF Downloads 3721211 Preliminary Study of Hand Gesture Classification in Upper-Limb Prosthetics Using Machine Learning with EMG Signals
Authors: Linghui Meng, James Atlas, Deborah Munro
Abstract:
There is an increasing demand for prosthetics capable of mimicking natural limb movements and hand gestures, but precise movement control of prosthetics using only electrode signals remains challenging. This study considers machine learning as a means of improving accuracy and presents an initial investigation into hand gesture recognition using models based on electromyographic (EMG) signals. EMG signals, which capture muscle activity, are used as inputs to machine learning algorithms to improve prosthetic control accuracy, functionality, and adaptivity. Using logistic regression as the classifier, this study evaluates the accuracy of classifying two hand gestures from the publicly available Ninapro dataset using two time-series feature extraction approaches: Time Series Feature Extraction (TSFE) and Convolutional Neural Networks (CNNs). Trials were conducted with the number of EMG channels varied from one to eight to determine the impact of channel quantity on classification accuracy. The results suggest that although both approaches can successfully distinguish between hand gesture EMG signals, CNNs outperform TSFE in extracting useful information, in terms of both accuracy and computational efficiency. In addition, although more EMG channels provide more useful information, they also require more complex and computationally intensive feature extractors and consequently do not perform as well as lower channel counts. The findings underscore the potential of machine learning techniques in developing more effective and adaptive prosthetic control systems.
Keywords: EMG, machine learning, prosthetic control, electromyographic prosthetics, hand gesture classification, CNN, convolutional neural networks, TSFE, time series feature extraction, channel count, logistic regression, Ninapro, classifiers
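As a rough illustration of the classification step described above, the sketch below trains a logistic regression classifier on simple time-domain features (mean absolute value, RMS, zero crossings) extracted from synthetic single-channel EMG-like windows. The data, features, and hyperparameters are illustrative assumptions, not the Ninapro setup or the TSFE/CNN pipelines used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def emg_features(window):
    """Simple time-domain features from one EMG channel window:
    mean absolute value, root-mean-square, zero-crossing count."""
    mav = np.mean(np.abs(window))
    rms = np.sqrt(np.mean(window ** 2))
    zc = np.sum(np.diff(np.sign(window)) != 0)
    return np.array([mav, rms, zc])

def make_window(label):
    """Synthetic stand-in for two gestures: class 1 has stronger activation."""
    amp = 0.5 if label == 0 else 1.5
    return amp * rng.standard_normal(200)

labels = [0, 1] * 100
X = np.array([emg_features(make_window(l)) for l in labels])
y = np.array(labels)

# Standardize features, then fit logistic regression by gradient descent.
X = (X - X.mean(axis=0)) / X.std(axis=0)
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(2000):
    z = np.clip(X @ w + b, -30, 30)          # clip to avoid exp overflow
    p = 1.0 / (1.0 + np.exp(-z))             # sigmoid
    w -= 0.5 * (X.T @ (p - y)) / len(y)      # gradient step on weights
    b -= 0.5 * np.mean(p - y)                # gradient step on bias

pred = (1.0 / (1.0 + np.exp(-np.clip(X @ w + b, -30, 30))) > 0.5).astype(int)
accuracy = np.mean(pred == y)
print(f"training accuracy: {accuracy:.2f}")
```

Because the two synthetic classes differ strongly in amplitude, the amplitude-driven features (MAV, RMS) separate them almost perfectly, which mirrors why even a linear classifier can distinguish well-chosen EMG features.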
Procedia PDF Downloads 341210 Chikungunya Virus Detection Utilizing an Origami Based Electrochemical Paper Analytical Device
Authors: Pradakshina Sharma, Jagriti Narang
Abstract:
Due to their critical significance in the early identification of infectious diseases, electrochemical sensors have garnered considerable interest. Here, we develop a detection platform for the chikungunya virus (CHIKV) by rationally exploiting the extremely high charge-transfer efficiency of a ternary nanocomposite of graphene oxide, silver, and gold (G/Ag/Au). Because paper is an inexpensive substrate and can be produced in large quantities, the use of an origami electrochemical paper analytical device (EPAD) further enhances the sensor's appealing qualities. Paper-based testing provides a cost-effective platform for point-of-care diagnostics. Such sensors are referred to as eco-designed analytical tools due to their efficient production, use of an eco-friendly substrate, and the potential to simplify waste management by incinerating the sensor after measurement. In this research, the paper's foldability has been used to develop and create 3D multifaceted biosensors that can specifically detect CHIKV. X-ray diffraction, scanning electron microscopy, UV-vis spectroscopy, and transmission electron microscopy (TEM) were used to characterize the produced nanoparticles. Aptamers are used in this work since they are considered a unique and sensitive tool for rapid diagnostic methods. Cyclic voltammetry (CV) and linear sweep voltammetry (LSV), both performed with a potentiostat, were used to measure the analytical response of the biosensor. The aptamer-modified electrode served as a signal modulation platform hybridizing the target CHIKV antigen, whose presence was determined by a decline in the current produced by its interaction with an anionic mediator, methylene blue (MB). A detection limit of 1 ng/ml and a broad linear range of 1 ng/ml-10 µg/ml for the CHIKV antigen were reported.
Keywords: biosensors, ePAD, arboviral infections, point of care
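To show how a linear range spanning four decades (1 ng/ml to 10 µg/ml) is typically used for quantification, the sketch below fits a hypothetical calibration line of peak current against the logarithm of concentration and inverts it to estimate an unknown. The current values are invented for illustration only; they are not the paper's measured data.

```python
import numpy as np

# Hypothetical calibration points across the reported linear range
# (1 ng/ml to 10 ug/ml). The MB peak current declines with antigen
# concentration, as described in the abstract.
conc_ng_ml = np.array([1, 10, 100, 1_000, 10_000], dtype=float)
peak_current_uA = np.array([9.1, 7.8, 6.2, 4.9, 3.5])  # illustrative

# Over four decades, the response is fitted against log10(concentration).
slope, intercept = np.polyfit(np.log10(conc_ng_ml), peak_current_uA, 1)

def estimate_conc(current_uA):
    """Invert the calibration line to estimate an unknown concentration."""
    return 10 ** ((current_uA - intercept) / slope)

print(f"slope: {slope:.2f} uA per decade")
print(f"estimated conc at 5.5 uA: {estimate_conc(5.5):.0f} ng/ml")
```

The negative slope encodes the signal-off behavior: more bound antigen means less MB current, so an unknown sample's current maps back to a concentration on the log axis.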
Procedia PDF Downloads 981209 Exploring Management of the Fuzzy Front End of Innovation in a Product Driven Startup Company
Authors: Dmitry K. Shaytan, Georgy D. Laptev
Abstract:
In our research, we aimed to test a managerial approach for the fuzzy front end (FFE) of innovation by creating a controlled experiment/business case in breakthrough innovation development. The experiment was in the sport industry and covered all aspects of the customer discovery stage, from ideation to prototyping, followed by a patent application. In the paper we describe and analyze the milestones, tasks, management challenges, and decisions made to create the breakthrough innovation, and we evaluate the overall managerial efficiency at the FFE stage considered. We set the managerial outcome of the FFE stage as a valid product concept in hand. We introduce the hypothetical construct "Q-factor", which helps us in the experiment to distinguish the quality of FFE outcomes. The experiment simulated the FFE of innovation for an entrepreneur and placed on his shoulders the responsibility for the outcome of a valid product concept. While developing a managerial approach to reach this outcome, a decision was made to look at the product concept from the point of view of cognitive psychology and cognitive science. This view helped us to develop the profile of a person whose projection (mental representation) of a new product could optimize FFE activities for a manager or entrepreneur. In the experiment, this profile was tested to develop a breakthrough innovation for swimmers. Following the managerial approach, the product concept was created to help swimmers feel/sense the water. A working prototype was developed to estimate the product concept's validity and value-added effect for customers. Based on feedback from coaches and swimmers, there was a strong positive effect delivering high value for customers and, for the experiment, a valid product concept developed by the proposed managerial approach for the FFE.
In conclusion, a managerial approach derived from the experiment is suggested.
Keywords: concept development, concept testing, customer discovery, entrepreneurship, entrepreneurial management, idea generation, idea screening, startup management
Procedia PDF Downloads 4451208 Design and Optimization of Spoke Rotor Type Brushless Direct Current Motor for Electric Vehicles Using Different Flux Barriers
Authors: Ismail Kurt, Necibe Fusun Oyman Serteller
Abstract:
Today, with the reduction in semiconductor system costs, Brushless Direct Current (BLDC) motors have become widely preferred. Based on rotor architecture, BLDC structures are divided into interior permanent magnet (IPM) and surface permanent magnet (SPM) types. Permanent magnet (PM) motors in electric vehicles (EVs) are still predominantly based on IPM motors, as the rotors do not require sleeves, the PMs are better protected by the rotor cores, and the air-gap lengths can be much smaller. This study discusses the IPM rotor structure in detail, highlighting its higher torque levels, reluctance torque, wide speed range, and production advantages. IPM rotor structures are particularly preferred in EVs due to their high-speed capability, torque density, and field weakening (FW) features. In FW operation, the motor becomes suitable for running at torques lower than the rated torque but at speeds above the rated speed. Although V-type and triangular IPM rotor structures are generally preferred in EV applications, the spoke-type rotor structure offers distinct advantages, making it a competitive option for these systems. The flux barriers in the rotor significantly affect motor performance, providing notable benefits in both motor efficiency and cost. This study utilizes ANSYS/Maxwell simulation software to analyze the spoke-type IPM motor and examine its key design parameters. Through analytical and 2D analysis, a preliminary motor design and parameter optimization have been carried out. During the parameter optimization phase, torque ripple, a common issue especially for IPM motors, has been investigated, along with the associated changes in motor parameters.
Keywords: electric vehicle, field weakening, flux barrier, spoke rotor
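The field-weakening trade-off described above can be sketched with the idealized torque-speed envelope: below base speed the motor delivers rated torque, and above it the voltage limit forces flux reduction, so torque falls roughly as 1/speed while power stays constant. The numbers below are assumed example values, not the paper's design figures.

```python
import math

rated_torque = 120.0   # Nm, assumed example value
base_speed = 4000.0    # rpm, where the inverter voltage limit is reached

def available_torque(speed_rpm):
    """Ideal constant-torque / constant-power (field-weakening) envelope."""
    if speed_rpm <= base_speed:
        return rated_torque                          # constant-torque region
    return rated_torque * base_speed / speed_rpm     # constant-power region

for n in (2000, 4000, 8000):
    t = available_torque(n)
    p_kw = t * n * 2 * math.pi / 60 / 1000           # P = T * omega
    print(f"{n:5.0f} rpm: torque {t:6.1f} Nm, power {p_kw:5.1f} kW")
```

At twice the base speed the available torque halves while output power is unchanged, which is exactly the "lower torque, higher speed" regime the abstract attributes to FW operation.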
Procedia PDF Downloads 81207 An Overview of Domain Models of Urban Quantitative Analysis
Authors: Mohan Li
Abstract:
Nowadays, intelligent research technology is becoming more important than traditional research methods in urban research, and this trend will intensify greatly in the next few decades. Frequently, such analytical work cannot be carried out without some software engineering knowledge, and domain models of urban research become necessary when applying software engineering knowledge to urban work. In many urban planning practice projects, building rational models, feeding them reliable data, and providing sufficient computation all offer indispensable assistance in producing good urban planning, and throughout the work process, domain models can optimize workflow design. Human beings have now entered the era of big data. The amount of digital data generated by cities every day increases at an exponential rate, and new data forms constantly emerge. How to select a suitable data set from this massive amount of data, and how to manage and process it, has become an ability that more and more planners and urban researchers need to possess. This paper summarizes and predicts the emergence of technologies and technological iterations that may affect urban research in the future, help discover urban problems, and support targeted sustainable urban strategies. These are summarized into seven major domain models: the urban and rural regional domain model, urban ecological domain model, urban industry domain model, development dynamic domain model, urban social and cultural domain model, urban traffic domain model, and urban space domain model. These seven domain models can be used to guide the construction of systematic urban research topics and help researchers organize a series of intelligent analytical tools, such as Python, R, GIS, etc.
These seven models make full use of quantitative spatial analysis, machine learning, and other technologies to achieve higher efficiency and accuracy in urban research, assisting people in making reasonable decisions.
Keywords: big data, domain model, urban planning, urban quantitative analysis, machine learning, workflow design
Procedia PDF Downloads 1771206 Monitoring and Management of Aquatic Macroinvertebrates for Determining the Level of Water Pollution Catchment Basin of Debed River, Armenia
Authors: Inga Badasyan
Abstract:
Every year we monitor water pollution in the catchment basin of the Debed River; the Ministry of Nature Protection then runs a modeling programme, and finally we manage the impact of water pollution in the Debed River. Ecosystem technology efficiency was estimated based on physical, chemical, and macrobiological analyses of water carried out on a regular basis between 2012 and 2015. Algae community composition was determined to assess the ecological status of the Debed River, while vegetation was determined to assess biodiversity. Recently, experts have been discussing global warming, which has a negative impact on surface water, freshwater, etc., as global warming is driven by the current high levels of carbon dioxide in the water. Geochemical modelling plays an increasingly important role in various areas of the hydro sciences and earth sciences. Geochemical modelling of highly concentrated aqueous solutions is an important topic in the study of many environments, such as evaporation ponds, groundwater and soils in arid and semi-arid zones, coastal aquifers, etc. Sampling time is important for benthic macroinvertebrates; for that reason, we chose spring (abundant river flow, the beginning of the vegetation season) and autumn (when the river flow is scarce). Macroinvertebrates are good indicators of chronic pollution in aquatic ecosystems. Results of our earlier investigations in the Debed River reservoirs clearly show that the management of these reservoir ecosystems is a topical problem. The research results can be applied to water quality monitoring in rivers, allowing rates of change to be assessed and possible future changes in the nature of the lake to be predicted.
Keywords: ecohydrological monitoring, flood risk management, global warming, aquatic macroinvertebrates
Procedia PDF Downloads 2881205 Growth Performance Of fresh Water Microalgae Chlorella sp. Exposed to Carbon Dioxide
Authors: Titin Handayani, Adi Mulyanto, Fajar Eko Priyanto
Abstract:
It is generally recognized that algae could be an interesting option for reducing CO₂ emissions. Using light and CO₂, algae can produce various economically interesting products. Current algae cultivation techniques, however, still present a number of limitations; efficient feeding of CO₂, especially on a large scale, is one of them. Current methods for feeding CO₂ to algae cultures rely on sparging pure CO₂ or flue gas directly. The limiting factor in such systems is the solubility of CO₂ in water, which demands a considerable amount of energy for effective gas-to-liquid transfer and leads to losses to the atmosphere. Because current methods of introducing CO₂ into algae ponds are so inefficient, very large pond surface areas would be required to capture a considerable amount of CO₂. The purpose of this study is to assess technology for capturing carbon dioxide (CO₂) emissions generated by industry by utilizing the microalga Chlorella sp. The microalgae were cultivated in a raceway-type bioreactor culture pond. The result is expected to be useful in mitigating the effects of greenhouse gases by reducing CO₂ emissions. The research activities include: (1) characterization of boiler flue gas, (2) operation of the culture pond, and (3) sampling and sample analysis. The results of this study showed that the initial assessment of flue gas absorption by microalgae using a 1000 L raceway pond equipped with a heat exchanger was quite promising. The transfer of CO₂ into the pond culture system ran well, as identified from the successful cooling of the boiler flue gas from about 200 °C to below ambient temperature. Apart from the temperature, the gas bubbles in the culture medium were quite fine; therefore, good contact between the gas and the medium was achieved.
The efficiency of CO₂ absorption by Chlorella sp. reached 6.68% with an average CO₂ loading of 0.29 g/L/day.
Keywords: Chlorella sp., CO2 emission, heat exchange, microalgae, milk industry, raceway pond
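The relation between the two reported figures can be checked with simple arithmetic, assuming the stated 1000 L raceway volume. The daily CO₂ supply rate below is an assumption chosen so that the computed efficiency reproduces the reported 6.68%; it is not a value given in the abstract.

```python
# Back-of-the-envelope check of the reported figures.
culture_volume_L = 1000.0            # reported raceway pond volume
co2_loading_g_per_L_day = 0.29       # reported average CO2 fixed per litre

# Assumed CO2 supplied via flue gas per day (not stated in the abstract);
# picked so that fixed/fed matches the reported 6.68% efficiency.
co2_fed_g_per_day = 4340.0

co2_fixed_g_per_day = co2_loading_g_per_L_day * culture_volume_L
efficiency_pct = 100.0 * co2_fixed_g_per_day / co2_fed_g_per_day

print(f"CO2 fixed:  {co2_fixed_g_per_day:.0f} g/day")
print(f"efficiency: {efficiency_pct:.2f} %")
```

So a 0.29 g/L/day loading in a 1000 L pond corresponds to roughly 290 g of CO₂ fixed per day, and the 6.68% efficiency implies a much larger amount of CO₂ fed than absorbed, consistent with the solubility limitation discussed in the abstract.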
Procedia PDF Downloads 2171204 Two-Dimensional Van-Der Waals Heterostructure for Highly Energy-Efficient Field-Free Deterministic Spin-Orbit Torque Switching at Room Temperature
Authors: Pradeep Raj Sharma, Bogeun Jang, Jongill Hong
Abstract:
Spin-orbit torque (SOT) is an efficient approach for manipulating the magnetization of ferromagnetic materials (FMs), providing improved device performance, better compatibility, and ultra-fast switching with lower power consumption than spin-transfer torque (STT). Among the various materials and structural designs, two-dimensional (2D) van der Waals (vdW) layered materials and their heterostructures have been demonstrated as a highly scalable and promising device architecture for SOT. In particular, a bilayer heterostructure consisting of a fully 2D vdW ferromagnet and a non-magnetic material (NM) offers a potential platform for controlling magnetization using SOT because it is easy to scale and requires less energy to switch. Here, we report field-free deterministic switching driven by SOT at room temperature, integrating the perpendicularly magnetized 2D vdW material Fe₃GaTe₂ (FGaT) with the NM WTe₂. Pulse-current-induced magnetization switching with an ultra-low current density of about 6.5×10⁵ A/cm², yielding a SOT efficiency close to double digits at 300 K, is reported. These values are two orders of magnitude better than those observed in conventional heavy metal (HM) based SOT devices and exceed those reported with other 2D vdW layered materials. WTe₂, a topological semimetal possessing strong spin-orbit coupling and a high spin Hall angle, can induce significant spin accumulation with negligible spin loss across the transparent 2D bilayer heterointerface. This promising device architecture enables highly compatible, energy-efficient, non-volatile memory and lays the foundation for designing efficient, flexible, and miniaturized spintronic devices.
Keywords: spintronics, spin-orbit torque, spin Hall effect, spin Hall angle, topological semimetal, perpendicular magnetic anisotropy
Procedia PDF Downloads 71203 Bias Minimization in Construction Project Dispute Resolution
Authors: Keyao Li, Sai On Cheung
Abstract:
The incorporation of alternative dispute resolution (ADR) mechanisms has been the main feature of the current trend in construction project dispute resolution (CPDR). ADR approaches have been identified as efficient mechanisms and suitable alternatives to litigation and arbitration. However, the use of ADR in this multi-tiered dispute resolution process often leads to repeated evaluations of the same dispute, and multi-tiered CPDR may thereby become a breeding ground for cognitive biases. When complete knowledge is not available at an early tier of construction dispute resolution, disputing parties may form preconceptions about the dispute matter or the counterpart. These preconceptions then influence their information processing in subsequent tiers: disputing parties tend to search for and interpret further information in a self-defensive way to confirm their early positions, their imbalanced information collection boosts their confidence in the assessments they hold, and their attitudes harden and become difficult to compromise. The occurrence of cognitive bias therefore impedes efficient dispute settlement. This study aims to explore ways to minimize bias in CPDR. Based on a comprehensive literature review, three types of bias-minimizing approaches were collected: strategy-based, attitude-based, and process-based. These approaches were further operationalized into bias-minimizing measures. To verify the usefulness and practicability of these measures, semi-structured interviews were conducted with ten CPDR third-party neutral professionals, all of whom have at least twenty years of experience in facilitating the settlement of construction disputes. The usefulness, as well as the implications, of the bias-minimizing measures were validated and refined by these experts. There are few studies on cognitive bias in construction management in general and in CPDR in particular.
This study would be among the first of its type to enhance the efficiency of construction dispute resolution by highlighting strategies to minimize the biases therein.
Keywords: bias, construction project dispute resolution, minimization, multi-tiered, semi-structured interview
Procedia PDF Downloads 1861202 Micropropagation and in vitro Conservation via Slow Growth Techniques of Prunus webbii (Spach) Vierh: An Endangered Plant Species in Albania
Authors: Valbona Sota, Efigjeni Kongjika
Abstract:
Wild almond is a woody species that is difficult to propagate either generatively by seed or by vegetative methods (grafting or cuttings), and it is considered Endangered (EN) in Albania based on IUCN criteria. As a wild relative of cultivated fruit trees, this species represents a source of genetic variability and can be very important in breeding programs and cultivation. For this reason, it is of interest to use an effective method of in vitro mid-term conservation, which involves strategies to slow plant growth through physicochemical alteration of the in vitro growth conditions. Multiplication of wild almond was carried out using zygotic embryos as primary explants, with the purpose of developing a successful propagation protocol. Results showed that zygotic embryos can proliferate through direct or indirect organogenesis. During the subculture stage, a great number of new plantlets identical to the mother plants derived from the zygotic embryos were obtained. All in vitro plantlets obtained from subcultures underwent in vitro conservation by minimal growth at low temperature (4 °C) in darkness. The efficiency of this technique was evaluated for conservation periods of 3, 6, and 10 months. Maintenance under these conditions reduced microcutting growth. Survival and regeneration rates for each period were evaluated; the maximal time of conservation without subculture at 4 °C was 10 months, but survival and regeneration rates were significantly reduced, to 15.6% and 7.6%, respectively. An optimal period of conservation under these conditions is 5-6 months of storage, which leads to survival and regeneration rates of 60-50%. This protocol may be beneficial for mass propagation, mid-term conservation, and genetic manipulation of wild almond.
Keywords: micropropagation, minimal growth, storage, wild almond
Procedia PDF Downloads 1281201 Design of Solar Charge Controller and Power Converter with the Multisim
Authors: Sohal Latif
Abstract:
Solar power in the form of photovoltaics (PV) is a form of renewable energy that applies solar panels to produce electricity from the sun. It has a vital role in fulfilling the present need for clean and renewable energy and in moving away from conventional, non-renewable energy sources that emit high levels of greenhouse gases. Solar energy is embraced because of its availability, easy accessibility, and effectiveness in the provision of power, chiefly in rural areas. Solar charging entails the conversion of light into electricity by photovoltaic (PV) panels, which supply direct current (DC) electric power. Here, the solar charge controller plays a crucial role in regulating the voltages and currents coming from the solar panels to meet the changing needs of a battery without overcharging it. Devices such as inverters are required to transform the DC power produced by the solar panels into AC to serve normal electrical appliances and the power network. This project designs a solar charge controller and power converter in MULTISIM. The work begins with a literature survey to obtain basic knowledge about power converters, charge controllers, and photovoltaic systems. The fundamentals of solar panel operation, including the process by which light is converted into electricity, are covered, along with a comparison of PWM and MPPT charge controllers. Knowledge of rectifiers is built on to achieve AC-to-DC and DC-to-AC conversion. Resistors, capacitors, MOSFETs, and op-amps are chosen according to the needs of the system. The circuit diagrams of the converters and charge controller are designed in Multisim, and pulse width modulation, Bubba oscillator, and inverter circuits are modeled and simulated. In the subsequent steps, analysis of the simulation outcomes indicates the efficiency of the intended converter systems.
The outputs of the different configurations, with and without the transformer, are then monitored for effective power conversion and regulation.
Keywords: solar charge controller, MULTISIM, converter, inverter
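A PWM charge controller of the kind compared above is commonly built around a buck (step-down) converter, and its first-order design starts from two ideal relations: the duty cycle D = Vout/Vin and the inductor current ripple. The sketch below evaluates these for assumed component values; the numbers are illustrative and not taken from the Multisim design in the paper.

```python
# Ideal relations a PWM solar charge controller design starts from.
# All component values below are illustrative assumptions.
v_panel = 18.0        # V, typical operating voltage of a "12 V" panel
v_battery = 14.4      # V, absorption-stage charging setpoint

# Ideal (lossless) buck converter: Vout = D * Vin
duty_cycle = v_battery / v_panel

# Peak-to-peak inductor current ripple for an assumed inductance and
# switching frequency: delta_i = (Vin - Vout) * D / (L * f_sw)
f_sw = 50e3           # Hz, assumed switching frequency
L = 100e-6            # H, assumed inductance
delta_i = (v_panel - v_battery) * duty_cycle / (L * f_sw)

print(f"duty cycle:     {duty_cycle:.2f}")
print(f"current ripple: {delta_i:.3f} A peak-to-peak")
```

The same two relations drive component selection: a higher switching frequency or larger inductor shrinks the ripple, which is one reason MOSFET and passive-component choices depend on the system requirements, as the abstract notes.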
Procedia PDF Downloads 221200 Conduction Transfer Functions for the Calculation of Heat Demands in Heavyweight Facade Systems
Authors: Mergim Gasia, Bojan Milovanovica, Sanjin Gumbarevic
Abstract:
Better energy performance of the building envelope is one of the most important aspects of energy savings if the goals set by the European Union are to be achieved. Dynamic heat transfer simulations are used to calculate building energy consumption because they give more realistic energy demands than stationary calculations, which do not take the building's thermal mass into account. Software used for these dynamic simulations relies on analytical models, since numerical models are impractical for longer periods. The analytical models used in this research fall into the category of conduction transfer functions (CTFs). Two methods for calculating CTFs are covered by this research: the Laplace method and the State-Space method. The literature review showed that the main disadvantage of these methods is that they are inadequate for heavyweight façade elements and for the shorter time steps used in the calculation. The algorithms for both the Laplace and State-Space methods are implemented in Mathematica, and the results are compared to those from EnergyPlus and TRNSYS, since these programs use similar algorithms to calculate a building's energy demand. This research aims to check the efficiency of the Laplace and State-Space methods in calculating the energy demand of heavyweight building elements at shorter sampling times, and it also provides means for improving the algorithms used by these methods. The finite difference method (FDM) is used as the reference for the boundary heat flux density. Even though dynamic heat transfer simulations are superior to calculations based on stationary boundary conditions, they have their limitations and will give unsatisfactory results if not properly used.
Keywords: Laplace method, state-space method, conduction transfer functions, finite difference method
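Since the abstract names the finite difference method as the reference for the boundary heat flux density, a minimal explicit FDM sketch may clarify what that reference computes: transient 1D conduction through a heavyweight wall subjected to a step change on one surface. This is a Python illustration under assumed concrete-like properties, not the Mathematica implementation used in the paper.

```python
import numpy as np

# Assumed heavyweight (dense concrete-like) wall properties.
k, rho, cp = 1.8, 2400.0, 880.0        # W/mK, kg/m3, J/kgK
alpha = k / (rho * cp)                 # thermal diffusivity, m2/s
thickness, n = 0.3, 31                 # wall thickness (m), grid nodes
dx = thickness / (n - 1)
dt = 0.4 * dx**2 / alpha               # explicit stability: Fo <= 0.5
fo = alpha * dt / dx**2                # grid Fourier number

T = np.full(n, 20.0)                   # initial uniform temperature, deg C
T_out, T_in = 0.0, 20.0                # step change applied outside

steps = 20000
for _ in range(steps):                 # march the explicit scheme in time
    T[0], T[-1] = T_out, T_in          # Dirichlet boundary conditions
    T[1:-1] = T[1:-1] + fo * (T[2:] - 2 * T[1:-1] + T[:-2])

# Boundary heat flux density at the inner surface (heat lost into the wall).
q_loss = k * (T[-1] - T[-2]) / dx
print(f"inner-surface flux after {steps * dt / 3600:.0f} h: {q_loss:.1f} W/m2")
```

After many thermal time constants the flux settles to the stationary value k * dT / L (here 1.8 * 20 / 0.3 = 120 W/m²); the transient path toward that value is exactly what a CTF series must reproduce, and where thermal mass makes the dynamic result differ from the stationary one.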
Procedia PDF Downloads 1331199 Hepatoprotective Assessment of L-Ascorbate 1-(2-Hydroxyethyl)-4,6-Dimethyl-1, 2-Dihydropyrimidine-2-on in Toxic Liver Damage Test
Authors: Vladimir Zobov, Nail Nazarov, Alexandra Vyshtakalyuk, Vyacheslav Semenov, Irina Galyametdinova, Vladimir Reznik
Abstract:
The aim of this study was to investigate the hepatoprotective properties of the Xymedon derivative L-ascorbate 1-(2-hydroxyethyl)-4,6-dimethyl-1,2-dihydropyrimidine-2-one (XD), which exhibits high efficiency as an actoprotector. The study was carried out on 68 male albino rats weighing 250-400 g using preventive exposure to the test preparation. The effectiveness of XD was compared with that of Xymedon (the original substance) after administration of the compounds in identical doses, with a maximum dose of 20 mg/kg. The animals orally received Xymedon or its derivative in doses of 10 and 20 mg/kg over 4 days. At 1-1.5 h after drug administration, CCl4 in vegetable oil (1:1) was given in a dose of 2 ml/kg. Controls received CCl4 but no hepatoprotectors; an intact control group consisted of rats receiving neither CCl4 nor other compounds. The day after the last administration of CCl4 and the compounds under study, the animals were exsanguinated under ether anesthesia, and blood and liver samples were taken for biochemical and histological analysis. Xymedon and XD administered according to the preventive scheme exerted hepatoprotective effects: Xymedon in the dose of 20 mg/kg, XD in doses of 10 and 20 mg/kg. The drugs under study had different effects on the condition of the liver affected by CCl4 induction. Xymedon had a more pronounced effect both on the ALT level, which can be elevated not only due to destructive changes in hepatocytes but also as a manifestation of cholestasis, and on the serum total protein level, which reflects protein synthesis in the liver. XD had a more pronounced effect on the AST level, one of the markers of hepatocyte damage. The lower effective dose of XD (10 mg/kg, compared to 20 mg/kg for Xymedon) and its pronounced effect on AST, the hepatocyte cytolysis marker, are indicative of its higher preventive effectiveness compared to Xymedon.
This work was performed with the financial support of the Russian Science Foundation (grant No. 14-50-00014).
Keywords: hepatoprotectors, pyrimidine derivatives, toxic liver damage, xymedon
Procedia PDF Downloads 302