Search results for: personnel organizational performance
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 14260


1960 Theoretical Comparisons and Empirical Illustration of Malmquist, Hicks–Moorsteen, and Luenberger Productivity Indices

Authors: Fatemeh Abbasi, Sahand Daneshvar

Abstract:

Productivity is one of the essential goals of companies seeking to improve performance and, as a strategy-oriented measure, it forms the basis of a company's economic growth. The history of productivity goes back centuries, but it was only in the early twentieth century that most researchers defined productivity as the relationship between a product and the factors used in its production. Productivity as the optimal use of available resources ("more output using less input") can increase companies' capacity for economic growth and prosperity. A good quality of life based on economic progress also depends on productivity growth in that society. Therefore, productivity is a national priority for any developed country. There are several methods for measuring productivity growth, and they can be divided into parametric and non-parametric methods. Parametric methods rely on the existence of a functional form in their hypotheses, while non-parametric methods do not require a function based on empirical evidence. One of the most popular non-parametric methods is Data Envelopment Analysis (DEA), which measures changes in productivity over time. DEA evaluates the productivity of decision-making units (DMUs) based on mathematical models. This method uses multiple inputs and outputs to compare the productivity of similar DMUs such as banks, government agencies, companies, airports, etc. Non-parametric methods are themselves divided into frontier and non-frontier approaches. The Malmquist productivity index (MPI) proposed by Caves, Christensen, and Diewert (1982), the Hicks–Moorsteen productivity index (HMPI) proposed by Bjurek (1996), and the Luenberger productivity indicator (LPI) proposed by Chambers (2002) are powerful tools for measuring productivity changes over time. This study compares the Malmquist, Hicks–Moorsteen, and Luenberger indices theoretically and empirically based on DEA models and reviews their strengths and weaknesses.
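
For readers unfamiliar with how DEA-based productivity indices are computed, the following is a minimal numerical sketch of the output-oriented Malmquist index in Python. The DMU data, the use of scipy's linear-programming routine, and the CCR formulation are illustrative assumptions rather than the authors' models; the Hicks–Moorsteen and Luenberger variants replace the distance-function ratios below with their own constructions.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_output_efficiency(x_eval, y_eval, X_ref, Y_ref):
    """Output-oriented CCR efficiency of one DMU (x_eval, y_eval) against a
    reference technology built from X_ref (n_dmus x n_inputs) and
    Y_ref (n_dmus x n_outputs). Returns phi; the output distance function is 1/phi."""
    n, m = X_ref.shape
    s = Y_ref.shape[1]
    # Decision variables: [phi, lambda_1, ..., lambda_n]; maximize phi.
    c = np.zeros(1 + n)
    c[0] = -1.0
    # Input constraints: sum_j lambda_j * x_ij <= x_eval_i
    A_in = np.hstack([np.zeros((m, 1)), X_ref.T])
    # Output constraints: phi * y_r - sum_j lambda_j * y_rj <= 0
    A_out = np.hstack([y_eval.reshape(-1, 1), -Y_ref.T])
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([x_eval, np.zeros(s)]),
                  bounds=[(0, None)] * (1 + n), method="highs")
    return res.x[0]

# Hypothetical inputs (capital, labour) and one output for 4 DMUs in two periods.
X_t  = np.array([[2., 3.], [4., 2.], [3., 5.], [5., 4.]])
Y_t  = np.array([[1.], [2.], [2.], [3.]])
X_t1 = np.array([[2., 3.], [4., 2.], [3., 4.], [5., 4.]])
Y_t1 = np.array([[1.2], [2.3], [2.4], [3.4]])

k = 1  # evaluate DMU with index 1
# D^a(x^b, y^b) = output distance function of period-b data against the period-a frontier.
D_t_t   = 1.0 / ccr_output_efficiency(X_t[k],  Y_t[k],  X_t,  Y_t)
D_t_t1  = 1.0 / ccr_output_efficiency(X_t1[k], Y_t1[k], X_t,  Y_t)
D_t1_t  = 1.0 / ccr_output_efficiency(X_t[k],  Y_t[k],  X_t1, Y_t1)
D_t1_t1 = 1.0 / ccr_output_efficiency(X_t1[k], Y_t1[k], X_t1, Y_t1)

# Geometric-mean Malmquist index; a value above 1 indicates productivity growth.
mpi = np.sqrt((D_t_t1 / D_t_t) * (D_t1_t1 / D_t1_t))
print(f"Malmquist productivity index for DMU {k}: {mpi:.3f}")
```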

Keywords: data envelopment analysis, Hicks–Moorsteen productivity index, Luenberger productivity indicator, Malmquist productivity index

Procedia PDF Downloads 194
1959 Uncanny Orania: White Complicity as the Abject of the Discursive Construction of Racism

Authors: Daphne Fietz

Abstract:

This paper builds on a reflection on an autobiographical experience of uncanniness during fieldwork in the white Afrikaner settlement Orania in South Africa. Drawing on Kristeva's theory of abjection to establish a theory of Whiteness based on boundary threats, it is argued that the uncanny experience, as the emergence of the abject, points to a moment of crisis of the author's Whiteness. The emanating abject directs the author to her closeness or convergence with Orania's inhabitants, that is, a reciprocity based on mutual Whiteness. The experienced confluence appeals to the author's White complicity in racism. With recourse to Butler's theory of subjectivation, the abject, White complicity, inhabits both the outside of a discourse on racism and of the 'self', as 'I' establish myself in relation to discourse. In this view, the qualities of the experienced abject are linked to the abject of discourse on racism, or, in other words, its frames of intelligibility. It then becomes clear that discourse on (overt) racism functions as a necessary counter-image through which White morality is established instead of questioned, because here, by White reasoning, the abject of complicity in racism is successfully repressed, curbed, as completely impossible in the binary construction. Hence, such discourse risks preserving racism in its pre-discursive and structural forms as long as its critique does not encompass its own location and performance in discourse. Discourse on overt racism is indispensable to White ignorance, as it covers underlying racism and pre-empts further critique. This understanding directs us towards a form of critique that necessitates self-reflection, uncertainty, and vigilance, which will be referred to as a discourse of relationality. Such a discourse diverges from the presumption of a detached author as a point of reference, instead departs from attachment, dependence, and mutuality, and embraces the visceral as a resource of knowledge of relationality. A discourse of relationality points to another possibility of White engagement with Whiteness and racism and further promotes a conception of responsibility which allows for and highlights dispossession and relationality in contrast to single agency and guilt.

Keywords: abjection, discourse, relationality, the visceral, whiteness

Procedia PDF Downloads 158
1958 A Comparative Study of Black Carbon Emission Characteristics from Marine Diesel Engines Using Light Absorption Method

Authors: Dongguk Im, Gunfeel Moon, Younwoo Nam, Kangwoo Chun

Abstract:

The need to protect the environment is now widely recognized worldwide. In the shipping industry, the International Maritime Organization (IMO) regulates pollutants emitted from ships through MARPOL 73/78. Recently, the Marine Environment Protection Committee (MEPC) of IMO, at its 68th session, approved a definition of Black Carbon (BC) specified by the following physical properties: light absorption, refractoriness, insolubility, and morphology. The committee also agreed on the need for a protocol for any voluntary measurement studies to identify the most appropriate measurement methods. The Filter Smoke Number (FSN), based on light absorption, is categorized as one of the IMO-relevant BC measurement methods. EUROMOT provided FSN measurement data (measured by smoke meter) for 31 different engines (low, medium, and high speed marine engines) of member companies at the 3rd International Council on Clean Transportation (ICCT) workshop on marine BC. The comparison of FSN values indicated that BC emission from low speed marine diesel engines ranged from 0.009 to 0.179 FSN, while that from medium and high speed marine diesel engines ranged from 0.012 to 3.2 FSN. In view of the low FSN measured from low speed engines, an experimental study was conducted using both a low speed marine diesel engine (2-stroke, power of 7,400 kW at 129 rpm) and a high speed marine diesel engine (4-stroke, power of 403 kW at 1,800 rpm) under the E3 test cycle. The results revealed that FSN ranged from 0.01 to 0.16 and from 1.09 to 1.35 for the low and high speed engines, respectively. The measurement equipment (smoke meter) covers a range from 0 to 10 FSN. Considering this measurement range, FSN values from low speed engines are near the detection limit (0.002 FSN or ~0.02 mg/m3). From these results, it appears that the measurement range of the smoke meter should be adapted in order to enhance the measurement accuracy of marine BC and the evaluation of the performance of BC abatement technologies.

Keywords: black carbon, filter smoke number, international maritime organization, marine diesel engine (two and four stroke), particulate matter

Procedia PDF Downloads 276
1957 Solution of Reduced Mass in Solar Glider with Electric Engine

Authors: Piotr Żabicki, Paweł Skutta

Abstract:

The project of a glider with an electric motor charged by solar power is a step toward the future of Polish gliding. Due to the popularity of the SZD-50-3 glider and its type of usage, the project was developed based on this model. By placing an auxiliary engine in the glider, the pilot is guaranteed a safe return to the airport. Since it is a training glider, and routes are mainly flown by student pilots and instructors, the guarantee of returning to the airport allows flights in more challenging thermal conditions, which contributes to better pilot training. In case of worsening weather, the pilot has a reliable return option, which prevents time loss due to field landings and saves money by avoiding delays in training. The glider uses the NOVA 15 LW engine, a solar installation, and technical modifications to reduce the glider's weight. This includes the Misztal spar solution, previously used in the PZL 19 aircraft. Additionally, the use of lighter coverings and materials that handle tensile, straining, and shearing loads improves the aerodynamic performance of the glider, enhancing its overall efficiency. Every component added to the glider's construction (battery, engine, etc.) has been placed so as to avoid shifting loads along the axis, thus preventing unintended spins and flat spins. Safety concerns were also addressed. In the event of a battery or engine fire, the pilot's cabin is designed as a detachable part of the structure and is made of composites covered with non-flammable resin. The batteries are also enclosed in separate boxes located in the former "luggage" compartment. Access to the installation connecting the engine, panel, and battery is convenient because the cabin is detachable from the structure and the entire installation runs under the structure. The batteries are also easily accessible through the closed hatch, which additionally provides cooling for the battery.

Keywords: engineering, girder, glider, solar, spar

Procedia PDF Downloads 7
1956 Investigation of the Material Behaviour of Polymeric Interlayers in Broken Laminated Glass

Authors: Martin Botz, Michael Kraus, Geralt Siebert

Abstract:

The use of laminated glass is gaining increasing importance in structural engineering. For safety reasons, at least two glass panes are laminated together with a polymeric interlayer. In case of breakage of one or all of the glass panes, the glass fragments remain connected to the interlayer due to adhesion forces, and a certain residual load-bearing capacity is left in the system. Polymer interlayers used in laminated glass show viscoelastic material behavior, i.e., stresses and strains in the interlayer depend on load duration and temperature. In the intact stage, only small strains appear in the interlayer, so the material can be described in a linear way. In the broken stage, large strains can appear, and a non-linear viscoelastic material theory is necessary. Relaxation tests on two different types of polymeric interlayers are performed at different temperatures and strain amplitudes to determine the border to the non-linear material regime. Based on the small-scale specimen results, further tests on broken laminated glass panes are conducted. So-called 'through-crack-bending' (TCB) tests are performed, in which the laminated glass has a defined crack pattern. The test set-up is realized in such a way that one glass layer is still able to transfer compressive stresses, but tensile stresses have to be transferred by the interlayer alone. The TCB tests are also conducted at different temperatures but under constant force (creep test). The aim of these experiments is to establish whether the results of small-scale tests on the interlayer are transferable to a laminated glass system in the broken stage. In this study, limits of the applicability of linear viscoelasticity are established for two commercially available polymer interlayers. Furthermore, it is shown that the results of the small-scale tests agree to a certain degree with the results of the TCB large-scale experiments. In a future step, the results can be used to develop material models for the post-breakage performance of laminated glass.
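
One common way to turn small-strain relaxation-test data into a usable material model is to fit a Prony series to the measured relaxation modulus. The sketch below uses synthetic data and invented parameter values, not the authors' measurements, and is only meant to illustrate the fitting step.

```python
import numpy as np
from scipy.optimize import curve_fit

def prony_modulus(t, e_inf, e1, tau1, e2, tau2):
    """Two-term Prony series for the relaxation modulus E(t)."""
    return e_inf + e1 * np.exp(-t / tau1) + e2 * np.exp(-t / tau2)

# Synthetic relaxation data (seconds, MPa) standing in for a small-scale test.
t = np.logspace(-1, 4, 60)
rng = np.random.default_rng(0)
e_meas = prony_modulus(t, 0.5, 3.0, 10.0, 1.5, 800.0) * (1 + 0.02 * rng.standard_normal(t.size))

# Fit the Prony parameters; p0 is a rough initial guess, all parameters kept positive.
popt, _ = curve_fit(prony_modulus, t, e_meas,
                    p0=[1.0, 2.0, 5.0, 1.0, 500.0], bounds=(0, np.inf))
print("E_inf, E1, tau1, E2, tau2 =", np.round(popt, 3))
```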

Keywords: glass breakage, laminated glass, relaxation test, viscoelasticity

Procedia PDF Downloads 122
1955 Improving Fingerprinting-Based Localization System Using Generative AI

Authors: Getaneh Berie Tarekegn

Abstract:

A precise localization system is crucial for many artificial intelligence Internet of Things (AI-IoT) applications in the era of smart cities. These applications include traffic monitoring, emergency alarming, environmental monitoring, location-based advertising, intelligent transportation, and smart health care. The most common method for providing continuous positioning services in outdoor environments is a global navigation satellite system (GNSS). Due to non-line-of-sight conditions, multipath, and weather conditions, GNSS systems do not perform well in dense urban, urban, and suburban areas. This paper proposes a generative AI-based positioning scheme for large-scale wireless settings using fingerprinting techniques. We present a semi-supervised deep convolutional generative adversarial network (S-DCGAN)-based radio map construction method for real-time device localization. It also employs a reliable signal fingerprint feature extraction method with t-distributed stochastic neighbor embedding (t-SNE), which extracts dominant features while eliminating noise from hybrid WLAN and long-term evolution (LTE) fingerprints. The proposed scheme reduced the workload of site surveying required to build the fingerprint database by up to 78.5% and significantly improved positioning accuracy. The results show that the average positioning error of GAILoc is less than 0.39 m, and more than 90% of the errors are less than 0.82 m. According to the numerical results, SRCLoc improves positioning performance and reduces radio map construction costs significantly compared to traditional methods.
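
As an illustration of the fingerprint feature-extraction step, the short sketch below projects a stand-in fingerprint matrix with t-SNE using the scikit-learn API. The data dimensions and parameter values are assumptions chosen for demonstration, not the authors' pipeline.

```python
import numpy as np
from sklearn.manifold import TSNE

# Stand-in fingerprint database: 500 reference points, each with 40
# received-signal-strength values from hybrid WLAN/LTE transmitters.
rng = np.random.default_rng(42)
fingerprints = rng.normal(loc=-70.0, scale=8.0, size=(500, 40))

# Project the high-dimensional fingerprints to a low-dimensional embedding,
# keeping the dominant structure while suppressing noisy dimensions.
embedding = TSNE(n_components=2, perplexity=30, init="pca",
                 random_state=0).fit_transform(fingerprints)
print(embedding.shape)  # (500, 2) features usable by a downstream localizer
```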

Keywords: location-aware services, feature extraction technique, generative adversarial network, long short-term memory, support vector machine

Procedia PDF Downloads 60
1954 Single Tuned Shunt Passive Filter Based Current Harmonic Elimination of Three Phase AC-DC Converters

Authors: Mansoor Soomro

Abstract:

The evolution of power electronic equipment has been pivotal in making industrial processes productive, efficient, and safe. Despite its attractive features, such equipment constitutes nonlinear loads, which make the power system vulnerable to power quality problems. Harmonics are a power quality problem in which the harmonic frequency is an integral multiple of the supply frequency. As a result, the supply voltage and current do not remain within their tolerable limits, and distorted current and voltage waveforms may appear. Low power quality means that an electrical device or piece of equipment is likely to malfunction, fail prematurely, or be unable to operate under all applied conditions. The electrical power system is designed to deliver power reliably, namely maximizing power availability to customers. However, power quality events are largely untracked and, as a result, can take out a process as many as 20 to 30 times a year, costing utilities, customers, and suppliers of load equipment millions of dollars. The ill effects of current harmonics reduce system efficiency, cause overheating of connected equipment, and increase electrical power and air-conditioning costs. The rapid growth of power electronic converters has highlighted the damage caused by current harmonics in the electrical power system. Therefore, it has become essential to address the adverse influence of current harmonics when planning any changes in electrical installations. In this paper, an effort has been made to mitigate the effects of the dominant 3rd order current harmonics. A passive filtering technique with a six-pulse multiplication converter has been employed to mitigate them. Power quality standards require the supply voltage and supply current to be maintained within certain prescribed limits; for this purpose, the obtained results are validated against the IEEE 519-1992 and IEEE 519-2014 performance standards.
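
A common textbook procedure for sizing a single-tuned shunt filter branch is sketched below. The bus voltage, reactive-power rating, and quality factor are hypothetical values chosen for illustration; only the tuning to the 3rd harmonic follows the paper. The reactor and capacitor are chosen so the branch resonates at the tuned harmonic while still supplying the required reactive power at the fundamental.

```python
import math

# Hypothetical design data for a single-tuned shunt passive filter bank.
V_ll = 400.0   # line-to-line bus voltage, V
Q_c = 50e3     # reactive power the bank supplies at the fundamental, var
f1 = 50.0      # fundamental frequency, Hz
h = 3          # harmonic order the filter is tuned to (3rd, per the paper)
Qf = 40.0      # quality factor of the reactor

w1 = 2 * math.pi * f1
# Capacitive reactance chosen so the branch delivers Q_c at the fundamental,
# accounting for the series reactor: Xc - Xl = V^2 / Q_c with Xl = Xc / h^2.
Xc = (V_ll ** 2 / Q_c) * h ** 2 / (h ** 2 - 1)
C = 1.0 / (w1 * Xc)
# Reactor chosen so the branch resonates at the h-th harmonic.
L = 1.0 / ((h * w1) ** 2 * C)
Xn = math.sqrt(L / C)   # characteristic reactance at the tuned frequency
R = Xn / Qf             # series resistance implied by the quality factor

print(f"C = {C*1e6:.1f} uF, L = {L*1e3:.2f} mH, R = {R:.3f} ohm")
```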

Keywords: current harmonics, power quality, passive filters, power electronic converters

Procedia PDF Downloads 301
1953 The Impact of Model Specification Decisions on the Teacher Value-Added Effectiveness: Choosing the Correct Predictors

Authors: Ismail Aslantas

Abstract:

Value-Added Models (VAMs), statistical methods for evaluating the effectiveness of teachers and schools based on student achievement growth, have attracted decision-makers' and researchers' attention over the last decades. As a result of this attention, many studies have been conducted in recent years to discuss these statistical models from different aspects. This research focused on the importance of contextual variables in VAM estimations; therefore, it was undertaken to examine the extent to which value-added effectiveness estimates for teachers can be affected by using context predictors. Using longitudinal data over three years from an international school context, value-added teacher effectiveness was estimated by ordinary least-squares value-added models, and the effectiveness of the teachers was examined. The longitudinal dataset in this study consisted of three major sources: students' attainment scores over up to three years and their characteristics, teacher background information, and school characteristics. A total of 1,027 teachers and their 35,355 eighth-grade students were examined to understand the impact of model specifications on the value-added teacher effectiveness evaluation. Models were created using a stepwise selection method: a predictor was added at each step, then removed and replaced by another on a subsequent step, and changes in model fit were checked by reviewing changes in R² values. Cohen's effect size statistics were also employed in order to determine the degree of the relationship between teacher characteristics and their effectiveness. Overall, the results indicated that the prior attainment score is the most powerful predictor of the current attainment score: 47.1 percent of the variation in the grade 8 math score can be explained by the prior attainment score in grade 7. The research findings raise issues to be considered in VAM implementations for teacher evaluations and provide suggestions for researchers and practitioners.
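
The stepwise specification check described above can be illustrated with a small ordinary least-squares sketch on synthetic data; the variable names, sample size, and coefficients below are invented and serve only to show how the change in R² is reviewed when a context predictor is added.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic stand-in data: grade 7 (prior) and grade 8 (current) maths scores
# plus one contextual predictor, for 1,000 hypothetical pupils.
rng = np.random.default_rng(1)
prior = rng.normal(500, 80, 1000)
context = rng.normal(0, 1, 1000)                       # e.g. a socio-economic index
current = 100 + 0.7 * prior + 8 * context + rng.normal(0, 40, 1000)

# Step 1: prior attainment only.
m1 = LinearRegression().fit(prior.reshape(-1, 1), current)
r2_step1 = m1.score(prior.reshape(-1, 1), current)

# Step 2: add the contextual predictor and review the change in R^2.
X2 = np.column_stack([prior, context])
m2 = LinearRegression().fit(X2, current)
r2_step2 = m2.score(X2, current)

print(f"R2 with prior attainment only: {r2_step1:.3f}")
print(f"R2 after adding a context predictor: {r2_step2:.3f} "
      f"(change {r2_step2 - r2_step1:+.3f})")
```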

Keywords: model specification, teacher effectiveness, teacher performance evaluation, value-added model

Procedia PDF Downloads 135
1952 Exploring the Link between Intangible Capital and Urban Economic Development: The Case of Three UK Core Cities

Authors: Melissa Dickinson

Abstract:

In the context of intense global competitiveness and urban transformations, today's cities are faced with enormous challenges. There is increasing pressure on cities and regions to respond promptly and efficiently to fierce market progressions, to offer a competitive advantage and higher flexibility, and to be pro-active in creating future markets. Consequently, competition among cities and regions within the dynamics of a worldwide spatial economic system is growing fiercer, amplifying the importance of intangible capital in shaping the competitive and dynamic economic performance of organisations and firms. Accordingly, this study addresses how intangible capital influences urban economic development within an urban environment. Despite substantial research on the economic and strategic determinants of urban economic development, this multidimensional phenomenon remains one of the greatest challenges for economic geographers. The research provides a unique contribution by exploring intangible capital through the lenses of entrepreneurial capital and social-network capital, drawing on business surveys and in-depth interviews with key stakeholders in the three UK Core Cities of Birmingham, Bristol and Cardiff. This paper critically considers how entrepreneurial capital and social-network capital are crucial sources of competitiveness and urban economic development. The paper deals with questions concerning the complexity of operationalizing 'network capital' in different urban settings and the challenges that reside in characterising its effects. It highlights the role of institutions in facilitating urban economic development. Particular emphasis is placed on exploring the roles formal and informal institutions have in delivering, supporting and nurturing entrepreneurial capital and social-network capital to facilitate urban economic development. Discussions then consider how institutions moderate and contribute to the economic development of urban areas, to provide implications for future policy formulation in the context of large and medium sized cities.

Keywords: urban economic development, network capital, entrepreneurialism, institutions

Procedia PDF Downloads 276
1951 Effects of Pre-Storage Invigoration Treatments on Ageing Dendrocalamus hamiltonii Seeds

Authors: Geetika Richa, M. L. Sharma

Abstract:

Bamboo has been used as an ancient herbal medicine for thousands of years in Asia and goes by many names such as tabashir, banslochan, etc. It is often used for its tonic and astringent properties. Modern analysis of bamboos shows a high amount of vitamins and minerals, which makes them valuable as a curative. Bamboo leaf decoction and young shoots are known as remedies for intestinal worms, healing of ulcers, and stomach disorders. Bamboos are usually propagated through large-scale plantations, but propagation through seeds is very limited, as the seeds have a very short viability of a few months. Seeds lose viability over time even under controlled conditions; important factors that affect seed viability are the decline in reserve food material, the decrease in membrane integrity, and the fall in endogenous levels of growth hormones. Invigoration treatments, which include hydration, dehydration, and the incorporation of bioactive chemicals such as growth regulators, nutrients, and antioxidants, improve seed performance. Our studies aimed to determine the most effective invigoration treatments to enhance the vigour and viability of seeds, using hardening as the invigoration treatment. Treated seeds were stored at controlled temperature and humidity (in desiccators at 4°C). In hardening, chemicals were applied in three different concentrations to three replicates of 10 seeds. Hardening was done with GA3 and IAA (each at concentrations of 10 ppm, 20 ppm and 50 ppm) and with calcium oxychloride, neem leaf powder and clay (each at concentrations of 2%, 5% and 10%). Statistically, all the hardening materials were effective, but GA3 at 50 ppm was the most effective in maintaining germination percentage and vigour index. Hardening treatments increased the germination percentage of seeds to 86.2%, compared with 80.2% for the control. It was concluded that, in order to maintain seed viability during longer storage periods, invigoration treatments are very effective.

Keywords: invigoration, seed quality, viability, hardening, membrane integrity, decoction

Procedia PDF Downloads 321
1950 Ni Mixed Oxides Type-Spinel for Energy: Application in Dry Reforming of Methane for Syngas (H2 and CO) Production

Authors: Bedarnia Ishak

Abstract:

In recent years, the dry reforming of methane has received considerable attention from an environmental viewpoint because it consumes and eliminates two gases (CH4 and CO2) responsible for global warming through the greenhouse effect. Many catalysts containing noble metals (Rh, Ru, Pd, Pt and Ir) or transition metals (Ni, Co and Fe) have been reported to be active in this reaction. Compared to noble metals, Ni-based materials are cheap but very easily deactivated by coking. Structurally well-defined Ni-based mixed oxides such as perovskites and spinels are being studied because they can form solid solutions and allow the composition, and thus the performance properties, to be varied. In this work, nano-sized nickel ferrite oxides were synthesized using three different methods, co-precipitation (CP), hydrothermal (HT) and sol-gel (SG), and characterized by XRD, Raman, XPS, BET, TPR, SEM-EDX and TEM-EDX. XRD patterns of all synthesized oxides showed the presence of the NiFe2O4 spinel, confirmed by Raman spectroscopy. Hematite was present only in the CP sample. Depending on the synthesis method, the surface area, particle size, surface Ni/Fe atomic ratio (XPS) and behavior upon reduction varied. The materials were tested in methane dry reforming with CO2 at 1 atm and 650-800 °C. The catalytic activity of the spinel samples was not very high (XCH4 = 5-20 mol% and XCO2 = 25-40 mol%) when no pre-reduction step was carried out. A significant contribution of the reverse water-gas shift (RWGS) reaction explained the low values of the H2/CO ratio obtained. The reoxidation step of the catalyst carried out after the reaction showed small amounts of coke deposition. The reducing pretreatment was particularly efficient in the case of SG (XCH4 = 80 mol% and XCO2 = 92 mol% at 800 °C), with H2/CO > 1. In conclusion, the influence of the preparation method was strong for most samples, and the catalytic behavior could be interpreted by considering that the distribution of cations among octahedral (Oh) and tetrahedral (Td) sites, as in (Ni2+1-xFe3+x)Td(Ni2+xFe3+2-x)OhO2-4, influenced the reducibility of the materials and thus their catalytic performance.

Keywords: NiFe2O4, dry reforming of methane, spinel oxide, oxide zenc

Procedia PDF Downloads 282
1949 Bulk Modification of Poly(Dimethylsiloxane) for Biomedical Applications

Authors: A. Aslihan Gokaltun, Martin L. Yarmush, Ayse Asatekin, O. Berk Usta

Abstract:

In the last decade, microfabrication processes, including rapid prototyping techniques, have advanced rapidly and reached a fairly mature stage. These advances have encouraged and enabled the use of microfluidic devices by a wider range of users, with applications in biological separations and in cell and organoid cultures. Accordingly, a significant current challenge in the field is controlling biomolecular interactions at interfaces and developing novel biomaterials to satisfy the unique needs of biomedical applications. Poly(dimethylsiloxane) (PDMS) is by far the most preferred material for the fabrication of microfluidic devices. This can be attributed to its favorable properties, including: (1) simple fabrication by replica molding, (2) good mechanical properties, (3) excellent optical transparency from 240 to 1100 nm, (4) biocompatibility and non-toxicity, and (5) high gas permeability. However, the high hydrophobicity (water contact angle ~108°±7°) of PDMS often limits its applications where solutions containing biological samples are concerned. In our study, we created a simple, easy method for modifying the surface chemistry of PDMS microfluidic devices through the addition of surface-segregating additives during manufacture. In this method, a surface-segregating copolymer is added to the precursors for the silicone, and the desired device is manufactured following the usual methods. When the device surface is in contact with an aqueous solution, the copolymer self-organizes to expose its hydrophilic segments at the surface, making the surface of the silicone device more hydrophilic. This can lead to several improved performance criteria, including lower fouling, lower non-specific adsorption, and better wettability. Specifically, this approach is expected to be useful for the manufacture of microfluidic devices. It is also likely to be useful for manufacturing silicone tubing and other materials, biomaterial applications, and surface coatings.

Keywords: microfluidics, non-specific protein adsorption, PDMS, PEG, copolymer

Procedia PDF Downloads 267
1948 The Legal Effects of Coronavirus (COVID-19) on the Implementation of Administrative Contracts in Saudi Arabia: Application of Emergency Circumstances Theory

Authors: Ali Obaid Alyami

Abstract:

In Saudi Arabia, the Coronavirus (COVID-19) pandemic has been affecting administrative contracts in many different ways. Many planned projects were stopped temporarily or implemented only partially, and many contractors have suffered financial difficulties and a shortage of manpower. These administrative contracts are governed by the Government Tenders and Procurement Law (GTPL), which was issued by a royal decree in 2019. This law addresses some challenges that can become stumbling blocks in the implementation of a contract. One significant challenge is the emergency circumstances that occur during the implementation of an administrative contract. The law provides some solutions for this disruption, but these solutions may not compensate for all the damages that contractors suffer. This study uses the doctrinal methodology to analyze the rules of law and their application to the research problem. Most importantly, the issue that arises in this research is whether governmental entities may, in administrative contracts, treat the Coronavirus (COVID-19) pandemic as an emergency circumstance. The study sets out the conditions for applying the theory of emergency circumstances to administrative contracts, in addition to defining the theory and analyzing its elements. The other significant question concerns the limits on governmental entities when making a change in an administrative contract to achieve contractual rebalancing. GTPL and its implementing regulation set the conditions and limits of contractual rebalancing. However, this study finds that although GTPL provides rules for contractual rebalancing, there are some other mechanisms that contractors may pursue to fully compensate for the damages. For instance, when the loss cannot be minimized under GTPL, contractors might file lawsuits before the administrative judiciary. The study concludes that GTPL is a very comprehensive law system that stipulates specific rules for contractual rebalancing and addresses the emergency circumstances that obstruct the performance of administrative contracts.

Keywords: administrative contracts, emergency circumstances, balance of contract, administrative judiciary, government tenders, procurement law

Procedia PDF Downloads 76
1947 Vitamin Content of Swordfish (Xiphias gladius) Affected by Salting and Frying

Authors: L. Piñeiro, N. Cobas, L. Gómez-Limia, S. Martínez, I. Franco

Abstract:

The swordfish (Xiphias gladius) is a large oceanic fish of high commercial value, which is widely distributed in the waters of the world's oceans. Swordfish are considered an important source of high quality proteins, vitamins and essential fatty acids, although only half of the population follows the recommendation of nutritionists to consume fish at least twice a week. Swordfish is consumed worldwide because of its low fat content and high protein content. It is generally sold fresh or frozen, as pieces or slices. The aim of this study was to evaluate the effect of salting and frying on the content of the water-soluble vitamins (B2, B3, B9 and B12) and fat-soluble vitamins (A, D, and E) of swordfish. Three loins of swordfish from the Pacific Ocean were analyzed. All the fish weighed between 50 and 70 kg and were transported to the laboratory frozen (-18 ºC). Before processing, they were defrosted at 4 ºC. Each loin was sliced and salted in brine. After cleaning, the slices were divided into portions (10×2 cm) and fried in olive oil. The identification and quantification of vitamins were carried out by high-performance liquid chromatography (HPLC), using methanol and 0.010% trifluoroacetic acid as mobile phases at a flow rate of 0.7 mL min-1. A UV-Vis detector was used for the detection of the water-soluble vitamins and the fat-soluble vitamins A and D, and a fluorescence detector for the detection of vitamin E. During salting, the water- and fat-soluble vitamin contents remained largely constant, although an evident decrease in vitamin B2 was observed. The diffusion of salt into the interior of the pieces and the loss of constitution water that occur during this stage would be related to this significant decrease. In general, after frying, the water-soluble and fat-soluble vitamins showed varying degrees of thermolability, with retention percentages between 50 and 100%. Vitamin B3 exhibited the highest retention, with values close to 100%, whereas vitamin B9 presented the greatest losses, with a retention of less than 20%.

Keywords: frying, HPLC, salting, swordfish, vitamins

Procedia PDF Downloads 126
1946 Optimization of Platinum Utilization by Using Stochastic Modeling of Carbon-Supported Platinum Catalyst Layer of Proton Exchange Membrane Fuel Cells

Authors: Ali Akbar, Seungho Shin, Sukkee Um

Abstract:

The composition of catalyst layers (CLs) plays an important role in the overall performance and cost of proton exchange membrane fuel cells (PEMFCs). Low platinum loading, high utilization, and more durable catalysts still remain critical challenges for PEMFCs. In this study, a three-dimensional material network model is developed to visualize the nanostructure of carbon-supported platinum Pt/C and Pt/VACNT catalysts with the aim of maximizing catalyst utilization. The quadruple-phase, randomly generated CL domain is formulated using a quasi-random, stochastic Monte Carlo-based method. This four-phase (i.e., pore, ionomer, carbon, and platinum) statistical approach closely mimics the manufacturing process of CLs. Various CL compositions are simulated to elucidate the effect of electron, ion, and mass transport paths on the catalyst utilization factor. Based on the simulation results, the effect of key factors such as porosity, ionomer content, and Pt weight percentage in the Pt/C catalyst has been investigated at the representative elementary volume (REV) scale. The results show that the relationship between ionomer content and Pt utilization is in good agreement with existing experimental calculations. Furthermore, the model is applied to state-of-the-art Pt/VACNT CLs. The simulation results on Pt/VACNT-based CLs show exceptionally high catalyst utilization compared to Pt/C with different composition ratios. More importantly, this study reveals that the maximum catalyst utilization depends on the spacing between the carbon nanotubes for Pt/VACNT. The current simulation results are expected to be utilized in the optimization of the nano-structural construction and composition of Pt/C and Pt/VACNT CLs.
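
A minimal sketch of how a quadruple-phase voxel domain can be generated from target volume fractions is shown below; the fractions, domain size, and random-sampling scheme are illustrative assumptions rather than the authors' reconstruction algorithm. A full utilization calculation would additionally check, for each platinum voxel, whether it is simultaneously connected to percolating carbon (electron), ionomer (proton), and pore (gas) pathways.

```python
import numpy as np

# Target volume fractions for the four phases of the catalyst layer
# (pore, ionomer, carbon, platinum); values here are illustrative only.
phases = {"pore": 0.45, "ionomer": 0.25, "carbon": 0.27, "platinum": 0.03}
assert abs(sum(phases.values()) - 1.0) < 1e-9

n = 64  # voxels per edge of the representative elementary volume
rng = np.random.default_rng(7)

# Assign each voxel a phase label by sampling the target composition.
labels = rng.choice(len(phases), size=(n, n, n), p=list(phases.values()))

# Check the realized composition against the target.
for i, (name, target) in enumerate(phases.items()):
    realized = np.mean(labels == i)
    print(f"{name:9s} target {target:.3f}  realized {realized:.3f}")
```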

Keywords: catalyst layer, platinum utilization, proton exchange membrane fuel cell, stochastic modeling

Procedia PDF Downloads 121
1945 An Evaluation on the Effectiveness of a 3D Printed Composite Compression Mold

Authors: Peng Hao Wang, Garam Kim, Ronald Sterkenburg

Abstract:

The applications of composite materials within the aviation industry have been increasing at a rapid pace. However, the growing applications of composite materials have also led to growing demand for more tooling to support their manufacturing processes. Tooling and tooling maintenance represent a large portion of the composite manufacturing process and cost. Therefore, the industry's adaptability to new techniques for fabricating high quality tools quickly and inexpensively will play a crucial role in composite materials' growing popularity in the aviation industry. One popular tool fabrication technique currently being developed involves additive manufacturing such as 3D printing. Although additive manufacturing and 3D printing are not entirely new concepts, the technique has been gaining popularity due to its ability to fabricate components quickly, with low material waste and at low cost. In this study, a team of Purdue University School of Aviation and Transportation Technology (SATT) faculty and students investigated the effectiveness of a 3D printed composite compression mold. A 3D printed composite compression mold was fabricated by 3D scanning a steel valve cover of an aircraft reciprocating engine. The mold was then used to fabricate carbon fiber versions of the aircraft reciprocating engine valve cover. The 3D printed composite compression mold was evaluated for its performance, durability, and dimensional stability, while the fabricated carbon fiber valve covers were evaluated for their accuracy and quality. The results and data gathered from this study will determine the effectiveness of the 3D printed composite compression mold in a mass production environment and provide valuable information for future understanding, improvements, and design considerations of 3D printed composite molds.

Keywords: additive manufacturing, carbon fiber, composite tooling, molds

Procedia PDF Downloads 199
1944 Interactive IoT-Blockchain System for Big Data Processing

Authors: Abdallah Al-ZoubI, Mamoun Dmour

Abstract:

The spectrum of IoT devices is becoming widely diversified, entering almost all possible fields and finding applications in industry, health, finance, logistics, and education, to name a few. Active IoT endpoint sensors and devices exceeded the 12 billion mark in 2021 and are expected to reach 27 billion in 2025, with over $34 billion in total market value. This sheer rise in the number and use of IoT devices brings with it considerable concerns regarding data storage, analysis, manipulation, and protection. IoT blockchain-based systems have recently been proposed as a decentralized solution for large-scale data storage and protection. COVID-19 has accelerated the desire to utilize IoT devices, as it impacted both demand and supply and significantly affected several regions due to logistic reasons such as supply chain interruptions, shortages of shipping containers, and port congestion. An IoT-blockchain system is proposed to handle big data generated by a distributed network of sensors and controllers in an interactive manner. The system is designed using the Ethereum platform, which utilizes smart contracts, programmed in Solidity, to execute and manage data generated by IoT sensors and devices such as the Raspberry Pi 4 running Raspbian with add-on hardware security modules. The proposed system runs a number of applications hosted by a local machine used to validate transactions. It then sends data to the rest of the network through the InterPlanetary File System (IPFS) and Ethereum Swarm, forming a closed IoT ecosystem run by blockchain, where a number of distributed IoT devices can communicate and interact, thus forming a closed, controlled environment. A prototype has been deployed with three IoT handling units distributed over a wide geographical space in order to examine its feasibility, performance, and costs. Initial results indicated that big IoT data retrieval and storage is feasible and interactivity is possible, provided that certain conditions of cost, speed, and throughput are met.

Keywords: IoT devices, blockchain, Ethereum, big data

Procedia PDF Downloads 150
1943 Soil Bioremediation Monitoring Systems Powered by Microbial Fuel Cells

Authors: András Fülöp, Lejla Heilmann, Zsolt Szabó, Ákos Koós

Abstract:

Microbial fuel cells (MFCs) present a sustainable biotechnological solution to future energy demands. The aim of this study was to construct soil-based, single-cell, membrane-less MFC systems, operated without treatment, to continuously power on-site monitoring and control systems during soil bioremediation processes. Our Pseudomonas aeruginosa 541 isolate is an ideal choice for MFCs because it is able to produce pyocyanin, which behaves as an electron-shuttle molecule and also has a significant antimicrobial effect. We tested several materials and structural configurations to obtain a high power output over the long term. Comparing different configurations, proton-exchange-membrane-less MFC tubes, 0.6 m long and 0.05 m in diameter, offered the best long-term performance. Long-term electricity production was tested with starch, yeast extract (YE), and carboxymethyl cellulose (CMC), with humic acid (HA) as a mediator. In all cases, a 3 kΩ external load was used. The two best-operating systems were the Pseudomonas aeruginosa 541-containing MFCs with 1% carboxymethyl cellulose and the MFCs with 1% yeast extract in the anode area and 35% hydrogel in the cathode chamber. The first had a power density of 3.3 ± 0.033 mW/m2 and the second 4.1 ± 0.065 mW/m2. These systems have operated for 230 days without any treatment. The addition of 0.2% HA and 1% YE, relative to the volume of the anode area, resulted in power densities of 1.4 ± 0.035 mW/m2. The mixture of 1% starch with 0.2% HA gave 1.82 ± 0.031 mW/m2. Using CMC as a slowly degradable carbon source supports long-term bacterial survival and thus enables a sustained power output. The application of hydrogels in the cathode chamber significantly increased the performance of the MFC units due to their good water retention capacity.
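
For reference, areal power densities of this kind follow from the cell voltage measured across the external load; the sketch below shows the arithmetic with illustrative numbers, not the study's raw measurements.

```python
# Areal power density of an MFC from the voltage measured across a known
# external load; the voltage and anode area below are illustrative only.
def power_density_mw_per_m2(cell_voltage_v, load_ohm, anode_area_m2):
    power_w = cell_voltage_v ** 2 / load_ohm   # P = V^2 / R
    return power_w / anode_area_m2 * 1e3       # W -> mW, per square metre of anode

# Example: 0.35 V measured across a 3 kOhm load, 0.01 m^2 projected anode area.
print(f"{power_density_mw_per_m2(0.35, 3000.0, 0.01):.2f} mW/m2")
```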

Keywords: microbial fuel cell, bioremediation, Pseudomonas aeruginosa, biotechnological solution

Procedia PDF Downloads 291
1942 Plant Water Relations and Forage Quality in Leucaena leucocephala (Lam.) de Wit and Acacia saligna (Labill.) as Affected by Salinity Stress

Authors: Maher J. Tadros

Abstract:

This research was conducted to study the effect of different salinity concentrations on plant water relations and forage quality in seedlings of two multipurpose forest tree species, Leucaena leucocephala (Lam.) de Wit and Acacia saligna (Labill.). Five salinity concentrations of a mixture of sodium chloride and calcium chloride (1:1, v/v) were applied: the control (distilled water), 2000, 4000, 6000, and 8000 ppm, which were used to water the seedlings for three months. The results showed a marked variation between the two species in response to salinity. Leucaena was able to withstand the highest level of salinity better than Acacia across all the studied parameters except relative water content. All the morphological characteristics studied for the two species showed a marked decrease under the different salinity concentrations, except the shoot/root ratio, which showed a trend of increase. As a measure of water stress, the leaf water potential became more negative while the relative water content increased under saline conditions compared to the control. Forage quality, represented by crude protein and nitrogen content, was lower at 6000 ppm than at 8000 ppm in L. leucocephala, which was higher than the corresponding level in A. saligna. The results also showed that both Leucaena and Acacia provide a good source of forage when grown under saline conditions, which will be of great benefit to the agricultural sector, especially in arid and semiarid areas, where these species can provide high quality forage all year round when grown under irrigation with saline water. This research therefore recommends that these species be utilized and grown for forage under saline conditions.

Keywords: plant water relations, growth performance, salinity stress, protein content, forage quality, multipurpose trees

Procedia PDF Downloads 393
1941 Reliability Assessment and Failure Detection in a Complex Human-Machine System Using Agent-Based and Human Decision-Making Modeling

Authors: Sanjal Gavande, Thomas Mazzuchi, Shahram Sarkani

Abstract:

In a complex aerospace operational environment, identifying failures in a procedure involving multiple human-machine interactions is difficult. These failures could lead to accidents causing loss of hardware or human life. The likelihood of failure further increases if operational procedures are tested for a novel system with multiple human-machine interfaces and with no prior performance data. The existing approach in the literature of reviewing complex operational tasks in flowchart or tabular form does not provide any insight into potential system failures due to human decision-making ability. To address these challenges, this research explores an agent-based simulation approach for reliability assessment and fault detection in complex human-machine systems while utilizing a human decision-making model. The simulation predicts the emergent behavior of the system arising from the interaction between humans, with their decision-making capability, and the varying states of the machine, and vice versa. Overall system reliability is evaluated based on a defined set of success-criteria conditions and the number of recorded failures over an assigned limit of Monte Carlo runs. The study also aims at identifying high-likelihood failure locations for the system. The research concludes that system reliability and failures can be effectively calculated when individual human and machine agent states are clearly defined. This research is limited to the operations phase of a system lifecycle process in an aerospace environment only. Further exploration of the proposed agent-based and human decision-making model will be required to allow for a greater understanding of this topic for application outside of the operations domain.
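
A highly simplified sketch of the Monte Carlo reliability estimate described above is given below; the step count, failure probabilities, and success criterion are invented placeholders rather than the authors' agent definitions.

```python
import random

def run_procedure(p_human_correct=0.98, p_machine_ok=0.995, steps=20):
    """One Monte Carlo run of a simplified human-machine procedure.
    The run fails if, at any step, the machine faults and the human
    operator's decision does not recover it."""
    for step in range(steps):
        machine_ok = random.random() < p_machine_ok
        human_correct = random.random() < p_human_correct
        if not machine_ok and not human_correct:
            return False, step          # failure and its location
    return True, None

def estimate_reliability(n_runs=100_000):
    failures_by_step = {}
    successes = 0
    for _ in range(n_runs):
        ok, step = run_procedure()
        if ok:
            successes += 1
        else:
            failures_by_step[step] = failures_by_step.get(step, 0) + 1
    return successes / n_runs, failures_by_step

reliability, fail_locations = estimate_reliability()
print(f"Estimated system reliability: {reliability:.4f}")
print("Most frequent failure steps:",
      sorted(fail_locations, key=fail_locations.get, reverse=True)[:3])
```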

Keywords: agent-based model, complex human-machine system, human decision-making model, system reliability assessment

Procedia PDF Downloads 169
1940 Active Control Effects on Dynamic Response of Elevated Water Storage Tanks

Authors: Ali Etemadi, Claudia Fernanda Yasar

Abstract:

Elevated water storage tank structures (EWSTs) are tall, ponderous structural systems and are very vulnerable to seismic vibrations. In past earthquake events, many of these structures exhibited poor performance and experienced severe damage. The dynamic analysis of EWSTs under earthquake loads is, therefore, of significant importance for the design of the structure and a key issue for the development of modern methods, such as active control design. In this study, a reduced model of the EWSTs is presented, which is based on a tuned mass damper (TMD) model. Vibration analysis of a structure under seismic excitation is presented and then used to propose an active vibration controller. MATLAB/Simulink is employed for dynamic analysis of the system and control of the seismic response. Single degree of freedom (SDOF) and two degree of freedom (2DOF) models of EWSTs are used to study the concept of active vibration control. Lab-scale experimental models similar to a pendulum are applied to suppress vibrations in EWSTs under seismic excitation. One of the most important phenomena in liquid storage tanks is the oscillation of the fluid due to the movement of the tank body caused by its base motion during an earthquake. Simulation results illustrate that EWST vibration can be reduced by means of an input shaping technique that takes into account the dominant mode shape of the structure. Simulations with which to guide many of our designs are presented in detail. A simple and effective real-time control for seismic vibration damping can therefore be designed and built in practice.
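
As a sketch of the input shaping idea mentioned above, the snippet below designs a standard two-impulse zero-vibration (ZV) shaper from an assumed dominant mode and convolves it with a command signal; the natural frequency, damping ratio, and sample time are illustrative values, not those of the studied tank.

```python
import numpy as np

# Assumed dominant mode of the elevated tank (values are illustrative only).
f_n = 1.2          # natural frequency, Hz
zeta = 0.03        # damping ratio
dt = 0.001         # controller sample time, s

w_d = 2 * np.pi * f_n * np.sqrt(1 - zeta ** 2)        # damped frequency
K = np.exp(-zeta * np.pi / np.sqrt(1 - zeta ** 2))
# Standard two-impulse zero-vibration (ZV) shaper.
amps = np.array([1.0, K]) / (1.0 + K)
times = np.array([0.0, np.pi / w_d])

# Build the shaper as a discrete impulse sequence and convolve it with a
# step command; the shaped command excites far less residual vibration.
shaper = np.zeros(int(times[-1] / dt) + 1)
for a, ti in zip(amps, times):
    shaper[int(round(ti / dt))] += a

command = np.ones(int(5.0 / dt))            # unshaped unit step, 5 s long
shaped_command = np.convolve(command, shaper)[: command.size]
print("Impulse amplitudes:", np.round(amps, 3), "at times", np.round(times, 3), "s")
```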

Keywords: elevated water storage tank, tuned mass damper model, real time control, shaping control, seismic vibration control, Laplace transform

Procedia PDF Downloads 152
1939 Depth Camera Aided Dead-Reckoning Localization of Autonomous Mobile Robots in Unstructured GNSS-Denied Environments

Authors: David L. Olson, Stephen B. H. Bruder, Adam S. Watkins, Cleon E. Davis

Abstract:

In global navigation satellite system (GNSS)-denied settings, such as indoor environments, autonomous mobile robots are often limited to dead-reckoning navigation techniques to determine their position, velocity, and attitude (PVA). Localization is typically accomplished by employing an inertial measurement unit (IMU), which, while precise in nature, accumulates errors rapidly and severely degrades the localization solution. Standard sensor fusion methods, such as Kalman filtering, aim to fuse precise IMU measurements with accurate aiding sensors to establish a precise and accurate solution. In indoor environments, where GNSS is unavailable and no other a priori information about the environment is known, effective sensor fusion is difficult to achieve, as accurate aiding sensor choices are sparse. However, an opportunity arises from employing a depth camera in the indoor environment. A depth camera can capture point clouds of the surrounding floors and walls. Extracting attitude from these surfaces can serve as an accurate aiding source, which directly combats the errors that arise due to gyroscope imperfections. This configuration for sensor fusion leads to a dramatic reduction of PVA error compared to traditional aiding sensor configurations. This paper provides the theoretical basis for the depth camera aiding sensor method, initial expectations of the performance benefit via simulation, and a hardware implementation, thus verifying its validity. The hardware implementation is performed on the Quanser Qbot 2™ mobile robot, with a VectorNav VN-200™ IMU and a Kinect™ camera from Microsoft.
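
The drift-correction idea can be illustrated with a one-axis toy filter: a gyro is integrated at high rate and periodically corrected by an attitude fix of the kind a depth camera could supply from planar surfaces. All noise levels, rates, and the scalar Kalman form below are assumptions for illustration, not the authors' full PVA filter.

```python
import numpy as np

dt = 0.01                          # gyro sample period, s
q_gyro = (0.02) ** 2 * dt          # process noise added per step (illustrative)
r_cam = (0.5 * np.pi / 180) ** 2   # depth-camera attitude noise variance, rad^2

rng = np.random.default_rng(3)
true_rate = 0.1                    # rad/s, constant turn for the simulation
gyro_bias = 0.01                   # rad/s, uncompensated bias causing drift

theta_est, P = 0.0, 1e-4           # state estimate and its variance
theta_true = 0.0
for k in range(2000):              # 20 s of motion
    theta_true += true_rate * dt
    # Prediction: propagate with the (biased, noisy) gyro measurement.
    gyro_meas = true_rate + gyro_bias + rng.normal(0, 0.02)
    theta_est += gyro_meas * dt
    P += q_gyro
    # Correction every 0.5 s using an attitude fix from the depth camera.
    if k % 50 == 0:
        z = theta_true + rng.normal(0, np.sqrt(r_cam))
        K = P / (P + r_cam)        # scalar Kalman gain
        theta_est += K * (z - theta_est)
        P *= (1 - K)

print(f"final attitude error with aiding: {abs(theta_est - theta_true):.4f} rad")
```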

Keywords: autonomous mobile robotics, dead reckoning, depth camera, inertial navigation, Kalman filtering, localization, sensor fusion

Procedia PDF Downloads 207
1938 AI-Driven Solutions for Optimizing Master Data Management

Authors: Srinivas Vangari

Abstract:

In the era of big data, ensuring the accuracy, consistency, and reliability of critical data assets is crucial for data-driven enterprises. Master Data Management (MDM) plays a crucial role in this endeavor. This paper investigates the role of Artificial Intelligence (AI) in enhancing MDM, focusing on how AI-driven solutions can automate and optimize various stages of the master data lifecycle. By integrating AI (Quantitative and Qualitative Analysis) into processes such as data creation, maintenance, enrichment, and usage, organizations can achieve significant improvements in data quality and operational efficiency. Quantitative analysis is employed to measure the impact of AI on key metrics, including data accuracy, processing speed, and error reduction. For instance, our study demonstrates an 18% improvement in data accuracy and a 75% reduction in duplicate records across multiple systems post-AI implementation. Furthermore, AI’s predictive maintenance capabilities reduced data obsolescence by 22%, as indicated by statistical analyses of data usage patterns over a 12-month period. Complementing this, a qualitative analysis delves into the specific AI-driven strategies that enhance MDM practices, such as automating data entry and validation, which resulted in a 28% decrease in manual errors. Insights from case studies highlight how AI-driven data cleansing processes reduced inconsistencies by 25% and how AI-powered enrichment strategies improved data relevance by 24%, thus boosting decision-making accuracy. The findings demonstrate that AI significantly enhances data quality and integrity, leading to improved enterprise performance through cost reduction, increased compliance, and more accurate, real-time decision-making. These insights underscore the value of AI as a critical tool in modern data management strategies, offering a competitive edge to organizations that leverage its capabilities.

Keywords: artificial intelligence, master data management, data governance, data quality

Procedia PDF Downloads 19
1937 Validity of a Timing System in the Alpine Ski Field: A Magnet-Based Timing System Using the Magnetometer Built into an Inertial Measurement Unit

Authors: Carla Pérez-Chirinos Buxadé, Bruno Fernández-Valdés, Mónica Morral-Yepes, Sílvia Tuyà Viñas, Josep Maria Padullés Riu, Gerard Moras Feliu

Abstract:

There is still a long way to go in exploring all the possible applications that inertial measurement units (IMUs) have in the sports field. The aim of this study was to evaluate the validity of a new application of these wearable sensors; specifically, to evaluate a magnet-based timing system (M-BTS) for timing gate-to-gate in an alpine ski slalom using the magnetometer embedded in an IMU. This was a validation study. The criterion validity of the time measured by the M-BTS was assessed using the 95% error range against the actual time obtained from photocells. The experiment was carried out with first- and second-year junior skiers performing a ski slalom on a ski training slope. Eight alpine skiers (17.4 ± 0.8 years, 176.4 ± 4.9 cm, 67.7 ± 2.0 kg, 128.8 ± 26.6 slalom FIS points) participated in the study. An IMU device was attached to the skier's lower back. Skiers performed a 40-gate slalom, from which four gates were assessed. The M-BTS consisted of four bar magnets buried in the snow surface on the inner side of each gate's turning pole; the magnetometer built into the IMU detected the peak-shaped magnetic field when passing near the magnets at a certain speed. Four magnetic peaks were detected, and the time elapsed between peaks was calculated. Three inter-gate times were obtained for each system, photocells and M-BTS, and the total time was defined as the sum of the inter-gate times. The 95% error interval for the total time was 0.050 s for the ski slalom. The M-BTS is therefore valid for timing gate-to-gate in an alpine ski slalom. Inter-gate times can provide additional data for analyzing a skier's performance, such as asymmetries between the left and right foot.
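
The gate-detection step can be sketched as peak detection on the magnetometer magnitude followed by differencing the peak time stamps; the synthetic trace, sample rate, and thresholds below are invented for illustration and do not reproduce the study's processing.

```python
import numpy as np
from scipy.signal import find_peaks

# Synthetic magnetometer magnitude trace standing in for a skier passing
# four buried gate magnets; sample rate and peak shape are illustrative.
fs = 200.0                              # samples per second
t = np.arange(0, 10, 1 / fs)
signal = 45.0 + np.random.default_rng(5).normal(0, 0.5, t.size)  # ~Earth field, uT
gate_times = [2.1, 3.4, 5.0, 6.8]       # true gate passages, s
for g in gate_times:
    signal += 30.0 * np.exp(-((t - g) ** 2) / (2 * 0.03 ** 2))   # peak near a magnet

# Detect the four peak-shaped disturbances and time-stamp them.
peaks, _ = find_peaks(signal, height=60.0, distance=int(0.5 * fs))
detected_times = t[peaks]
inter_gate = np.diff(detected_times)    # three gate-to-gate times
print("inter-gate times [s]:", np.round(inter_gate, 3),
      "total:", round(inter_gate.sum(), 3), "s")
```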

Keywords: gate crossing time, inertial measurement unit, timing system, wearable sensor

Procedia PDF Downloads 184
1936 Factors that Contribute to the Improvement of the Sense of Self-Efficacy of Special Educators in Inclusive Settings in Greece

Authors: Sotiria Tzivinikou, Dimitra Kagkara

Abstract:

A teacher's sense of self-efficacy can significantly affect both teacher and student performance. More specifically, self-efficacy is associated with learning outcomes as well as with students' motivation and self-efficacy. For example, teachers with a high sense of self-efficacy are more open to innovations and invest more effort in teaching. In addition, effective inclusive education is associated with higher levels of teacher self-efficacy. Pre-service teachers with high levels of self-efficacy handle students' behavior better and more effectively assist students with special educational needs. Teacher preparation programs are also important, because teachers' efficacy beliefs are shaped early in learning; as a result, the quality of teacher education programs can affect the sense of self-efficacy of pre-service teachers. Usually, a number of pre-service teachers do not consider themselves well prepared to work with students with special educational needs and do not have the appropriate sense of self-efficacy. This study aims to investigate the factors that contribute to the improvement of the sense of self-efficacy of pre-service special educators through an academic practicum training program. The sample of this study is 159 pre-service special educators, who also participated in the academic practicum training program. For the purposes of this study, quantitative methods were used for data collection and analysis. Teachers' self-efficacy was assessed by the teachers themselves through the completion of a questionnaire based on the Teachers' Sense of Efficacy Scale, and pre- and post-measurements of teacher self-efficacy were taken. The results of the survey are consistent with those of the international literature. They indicate that a significant number of pre-service special educators do not hold the appropriate sense of self-efficacy regarding teaching students with special educational needs. Moreover, a quality academic training program constitutes a crucial factor for the improvement of the sense of self-efficacy of pre-service special educators, as well as for the provision of high quality inclusive education.
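
The abstract does not name the statistical test used for the pre/post comparison; a paired-samples analysis is one standard option and is sketched below on synthetic questionnaire scores, purely for illustration.

```python
import numpy as np
from scipy import stats

# Synthetic pre/post self-efficacy scores (1-9 scale) for 159 pre-service
# special educators; values are invented for illustration only.
rng = np.random.default_rng(11)
pre = rng.normal(5.8, 1.0, 159)
post = pre + rng.normal(0.6, 0.8, 159)         # assumed improvement after training

t_stat, p_value = stats.ttest_rel(post, pre)   # paired-samples t-test
cohens_d = (post - pre).mean() / (post - pre).std(ddof=1)
print(f"t = {t_stat:.2f}, p = {p_value:.3g}, d = {cohens_d:.2f}")
```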

Keywords: inclusive education, pre-service, self-efficacy, training program

Procedia PDF Downloads 252
1935 Development of a Predictive Model to Prevent Financial Crisis

Authors: Tengqin Han

Abstract:

Delinquency has been a crucial factor in economics throughout the years. Commonly seen in credit cards and mortgages, it played one of the crucial roles in causing the most recent financial crisis in 2008. In each case, a delinquency is a sign that the borrower is unable to pay off the debt, and it may thus lead to a loss of property in the end. Individually, one case of delinquency seems unimportant compared to the entire credit system. In China, an emerging economic entity, national and economic strength have grown rapidly, and the gross domestic product (GDP) growth rate has remained as high as 8% over the past decades. However, potential risks exist behind the appearance of prosperity, and among these risks the credit system is the most significant one. Due to the long term and large balance of a mortgage, it is critical to monitor the risk during the performance period. In this project, about 300,000 mortgage account records are analyzed in order to develop a predictive model for the probability of delinquency. Through univariate analysis the data is cleaned up, and through bivariate analysis the variables with strong predictive power are detected. The project is divided into two parts. In the first part, the 2005 analysis data are split into two parts: 60% for model development and 40% for in-time model validation. The KS of the development sample is 31, and the KS for in-time validation is 31, indicating that the model is stable. In addition, the model is further validated by out-of-time validation, which uses 40% of the 2006 data and gives a KS of 33. This indicates that the model is still stable and robust. In the second part, the model is improved by the addition of macroeconomic indexes, including GDP, the consumer price index, the unemployment rate, the inflation rate, etc. The data from 2005 to 2010 are used for model development and validation. Compared with the base model (without macroeconomic variables), the KS is increased from 41 to 44, indicating that the macroeconomic variables can be used to improve the separation power of the model and make the prediction more accurate.
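
The KS statistic quoted above measures the maximum separation between the cumulative score distributions of delinquent and non-delinquent accounts; a small sketch of this computation on synthetic scores is given below (the data and score scale are invented).

```python
import numpy as np

def ks_statistic(scores, is_delinquent):
    """Kolmogorov-Smirnov separation between the score distributions of
    delinquent and non-delinquent accounts, reported on a 0-100 scale."""
    order = np.argsort(scores)
    bad = np.asarray(is_delinquent)[order]
    cum_bad = np.cumsum(bad) / bad.sum()
    cum_good = np.cumsum(1 - bad) / (1 - bad).sum()
    return 100 * np.max(np.abs(cum_bad - cum_good))

# Synthetic model scores for 10,000 hypothetical mortgage accounts.
rng = np.random.default_rng(8)
y = rng.binomial(1, 0.05, 10_000)                      # 5% delinquency rate
scores = rng.normal(600, 50, 10_000) - 40 * y          # delinquents score lower
print(f"KS = {ks_statistic(scores, y):.1f}")
```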

Keywords: delinquency, mortgage, model development, model validation

Procedia PDF Downloads 228
1934 Influence of Carbon Addition on the Activity of Silica Supported Copper and Cobalt Catalysts in NO Reduction with CO

Authors: N. Stoeva, I. Spassova, R. Nickolov, M. Khristova

Abstract:

Exhaust gases from stationary and mobile combustion sources contain nitrogen oxides that cause a variety of environmentally harmful effects. The most common approach to their elimination is a catalytic reaction in the exhaust using reduction agents such as NH3, CO, and hydrocarbons. Transition metals (Co, Ni, Cu, etc.) are the most widely used active components for deposition on various supports. However, while the interaction between different catalyst components has been extensively studied in different types of reaction systems, the possible cooperation between the active components and the support material, and the underlying mechanisms, have not been thoroughly investigated. The support structure may affect how these materials maintain an active phase. The objective is to investigate how the addition of carbonaceous materials of different nature and texture characteristics affects the properties of the resulting silica-carbon support and, in turn, the catalytic properties of the supported copper and cobalt catalysts for the reduction of NO with CO. The diverse physico-chemical properties of the composites and of the supported copper and cobalt catalysts are discussed with an emphasis on their relationship with the catalytic performance. The catalysts were prepared by a sol-gel process and characterized by XRD, XPS, AAS, and BET analysis. The catalytic experiments were carried out in a catalytic flow apparatus with an isothermal flow reactor in the temperature range 20–300 °C. After the catalytic test, temperature-programmed desorption (TPD) was carried out. The transient response method was used to study the interaction of the gas phase with the catalyst surface. The role of the interaction between the support and the active phase in the catalyst's activity in the studied reaction is discussed. We suppose that the small carbon particles participate in the formation of the active sites for the reduction of NO with CO, in addition to affecting the kind of deposited metal oxide phase. The microporous texture of some of the composites also exerts an influence through mass-transfer limitations.
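For reference, the overall stoichiometry of the NO reduction with CO studied here is the well-known reaction (standard stoichiometry, not stated explicitly in the abstract):

    2\,\mathrm{NO} + 2\,\mathrm{CO} \longrightarrow \mathrm{N_2} + 2\,\mathrm{CO_2}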

Keywords: catalysts, NO reduction, composites, BET analysis

Procedia PDF Downloads 424
1933 Combined Treatment of Aged Rats with Donepezil and the Ginkgo Extract EGb 761® Enhances Learning and Memory Superiorly to Monotherapy

Authors: Linda Blümel, Bettina Bert, Jan Brosda, Heidrun Fink, Melanie Hamann

Abstract:

Age-related cognitive decline can eventually lead to dementia, the most common mental illness in elderly people and an immense challenge for patients, their families, and caregivers. Cholinesterase inhibitors constitute the most commonly used antidementia prescription medication. The standardized Ginkgo biloba leaf extract EGb 761® is approved for treating age-associated cognitive impairment and has been shown to improve the quality of life in patients suffering from mild dementia. A clinical trial with 96 Alzheimer's disease patients indicated that combined treatment with donepezil and EGb 761® had fewer side effects than donepezil alone. In an animal model of cognitive aging, we compared chronic treatment (15 days of pretreatment) with donepezil (1.5 mg/kg p. o.), EGb 761® (100 mg/kg p. o.), their combination, or vehicle in 18–20-month-old male OFA rats. Learning and memory performance was assessed by Morris water maze testing, and motor behavior in an open field paradigm. In addition to the chronic treatment, the substances were administered orally 30 minutes before testing. Compared to the first day and to the control group, only the combination group showed a significant reduction in latency to reach the hidden platform on the second day of testing. Moreover, from the second day of testing onwards, the donepezil, EGb 761®, and combination groups required less time to reach the hidden platform than on the first day; the control group did not reach the same latency reduction until day three. There were no effects on motor behavior. These results suggest a superiority of combined treatment with donepezil and EGb 761® over monotherapy.

Keywords: age-related cognitive decline, dementia, ginkgo biloba leaf extract EGb 761®, learning and memory, old rats

Procedia PDF Downloads 368
1932 Theoretical Prediction on the Lifetime of Sessile Evaporating Droplet in Blade Cooling

Authors: Yang Shen, Yongpan Cheng, Jinliang Xu

Abstract:

Effective blade cooling is of great significance for improving turbine performance. Mist cooling emerges as a promising approach compared with traditional single-phase cooling. In mist cooling, the injected droplets evaporate rapidly and cool the blade surface through the absorbed latent heat; hence the lifetime of the evaporating droplet becomes critical for the design of the blade cooling passages. So far there have been extensive studies on droplet evaporation, but most of them apply an isothermal model. In reality, the surface cooling effect can strongly affect droplet evaporation and prolong the evaporation lifetime significantly. In our study, a new theoretical model for sessile droplet evaporation with the surface cooling effect is built in toroidal coordinates. Three evaporation modes are analyzed over the evaporation lifetime: the constant contact radius (CCR) mode, the constant contact angle (CCA) mode, and the stick-slip (SS) mode. The dimensionless number E0, defined from the thermal properties of the liquid and the atmosphere, is introduced to indicate the strength of the evaporative cooling. The model accurately predicts the evaporation lifetime, as validated against available experimental data. The temporal variations of droplet volume, contact angle, and contact radius are then presented under the CCR, CCA, and SS modes, and the following conclusions are obtained: 1) the larger the dimensionless number E0, the longer the lifetime in all three evaporation cases; 2) in the CCA mode, the droplet volume over time still follows the "2/3 power law", as in the isothermal model without the cooling effect; 3) in the SS mode, a large transition contact angle reduces the evaporation time in the CCR stage and increases it in the CCA stage, so the overall lifetime is increased; 4) a correction factor for the instantaneous droplet volume is derived to predict the droplet lifetime accurately. These findings may be of great significance for exploring the dynamics and heat transfer of sessile droplet evaporation.
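For reference, the "2/3 power law" mentioned for the CCA mode is the standard diffusion-limited result, under which V^{2/3} decays linearly in time (a textbook relation, not a formula given in the abstract; here V_0 is the initial droplet volume and t_f the total evaporation time):

    V(t)^{2/3} = V_0^{2/3}\,\bigl(1 - t/t_f\bigr), \qquad \text{equivalently} \qquad V(t) = V_0\,\bigl(1 - t/t_f\bigr)^{3/2}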

Keywords: blade cooling, droplet evaporation, lifetime, theoretical analysis

Procedia PDF Downloads 142
1931 Ni Mixed Oxides Type-Spinel for Energy: Application in Dry Reforming of Methane for Syngas (H2 & CO) Production

Authors: Bouhenni Mohamed Saif El Islam

Abstract:

In recent years, the dry reforming of methane has received considerable attention from an environmental viewpoint because it consumes and eliminates two gases (CH4 and CO2) responsible for global warming through the greenhouse effect. Many catalysts containing noble metals (Rh, Ru, Pd, Pt, and Ir) or transition metals (Ni, Co, and Fe) have been reported to be active in this reaction. Compared to noble metals, Ni-based materials are cheap but very easily deactivated by coking. Structurally well-defined Ni-based mixed oxides such as perovskites and spinels are being studied because they can form solid solutions, which allows the composition, and thus the performance properties, to be varied. In this work, nano-sized nickel ferrite oxides were synthesized by three different methods, co-precipitation (CP), hydrothermal (HT), and sol-gel (SG), and characterized by XRD, Raman, XPS, BET, TPR, SEM-EDX, and TEM-EDX. The XRD patterns of all synthesized oxides showed the presence of the NiFe2O4 spinel, confirmed by Raman spectroscopy; hematite was present only in the CP sample. Depending on the synthesis method, the surface area, particle size, surface Ni/Fe atomic ratio (XPS), and behavior upon reduction varied. The materials were tested in methane dry reforming with CO2 at 1 atm and 650-800 °C. The catalytic activity of the spinel samples was not very high (XCH4 = 5-20 mol% and XCO2 = 25-40 mol%) when no pre-reduction step was carried out, and a significant contribution of the reverse water-gas shift (RWGS) reaction explained the low H2/CO ratios obtained. The reoxidation step carried out after the reaction showed small amounts of coke deposition. The reducing pretreatment was particularly efficient in the case of SG (XCH4 = 80 mol% and XCO2 = 92 mol% at 800 °C), with H2/CO > 1. In conclusion, the influence of the preparation method was strong for most samples, and the catalytic behavior could be interpreted by considering how the distribution of cations among tetrahedral (Td) and octahedral (Oh) sites, as in (Ni2+1-xFe3+x)Td(Ni2+xFe3+2-x)OhO2-4, influenced the reducibility of the materials and thus their catalytic performance.
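For context, the two well-established reactions underlying the discussion above are the dry reforming of methane (DRM) and the reverse water-gas shift (RWGS); the latter consumes H2 and therefore pushes the H2/CO ratio below the ideal value of 1 (standard stoichiometry, not given explicitly in the abstract):

    \mathrm{CH_4 + CO_2 \longrightarrow 2\,CO + 2\,H_2} \qquad (\text{DRM, ideal } \mathrm{H_2/CO} = 1)
    \mathrm{CO_2 + H_2 \longrightarrow CO + H_2O} \qquad (\text{RWGS, lowers } \mathrm{H_2/CO})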

Keywords: NiFe2O4, dry reforming of methane, spinel oxide, XCO2

Procedia PDF Downloads 383