Search results for: 3d finite element model
7988 Petrogenetic Model of Formation of Orthoclase Gabbro of the Dzirula Crystalline Massif, the Caucasus
Authors: David Shengelia, Tamara Tsutsunava, Manana Togonidze, Giorgi Chichinadze, Giorgi Beridze
Abstract:
The orthoclase gabbro intrusive is exposed in the eastern part of the Dzirula crystalline massif of the Central Transcaucasian microcontinent. It is intruded into the Baikal quartz-diorite gneisses as a stock-like body. The intrusive is characterized by heterogeneity of rock composition: variability of mineral content and irregular distribution of rock-forming minerals. The rocks are represented by pyroxenites, gabbro-pyroxenites and gabbros of different composition – K-feldspar, pyroxene-hornblende and biotite-bearing varieties. Scientific views on the genesis and age of the orthoclase gabbro intrusive differ considerably. Based on long-term petrogeochemical and geochronological investigations of this intrusive of such extraordinary composition, the authors came to the following conclusions. According to geological and geophysical data, horizontal tectonic layering of the Earth's crust of the Central Transcaucasian microcontinent took place during the Saurian orogeny. It is precisely this fact that explains the formation of the orthoclase gabbro intrusive. During the tectonic doubling of the Earth's crust of the mentioned microcontinent, thick tectonic nappes of mafic and sialic layers overlapped the sialic basement ('inversion' layer). The initial magma of the intrusive was of high-temperature basite-ultrabasite composition, the crystallization products of which are pyroxenites and gabbro-pyroxenites. Petrochemical data of the magma attest to its formation in the upper mantle and partially in the 'crustal asthenolayer'. Then, a newly formed overheated dry magma with phenocrysts of clinopyroxene and basic plagioclase intruded into the 'inversion' layer. From the new medium it was enriched in volatile components, causing selective melting and, as a result, the formation of leucocratic quartz-feldspar material. At the same time, intensive transformation of pyroxene to hornblende was going on in the basic magma. The basic magma partially mixed with the newly formed acid magma. These different magmas intruded first into the allochthonous basite layer without its significant transformation and then into the upper sialic layer, and crystallized there at a depth of 7-10 km. According to petrochemical data, the newly formed leucocratic granite magma belongs to the S-type granites, while the above-mentioned mixed magma belongs to the H (hybrid) type. During the final stage of magmatic processes, the gabbroic rocks were impregnated with high-temperature feldspar-bearing material, forming anorthoclase or orthoclase. Thus, the so-called 'orthoclase gabbro' includes rocks of various genetic groups: 1. the protolith of the gabbroic intrusive; 2. a hybrid rock – K-feldspar gabbro; and 3. a leucocratic quartz-feldspar-bearing rock. Petrochemical and geochemical data obtained from the hybrid gabbro and from the intrusive protolith differ from each other. For the identification of the petrogenetic model of the orthoclase gabbro intrusive formation, LA-ICP-MS U-Pb zircon dating was conducted on all three genetic types of gabbro. The zircon ages of the protolith (mean 221.4±1.9 Ma) and of the hybrid K-feldspar gabbro (mean 221.9±2.2 Ma) record the crystallization time of the intrusive, while the zircon age of the quartz-feldspar-bearing rocks (mean 323±2.9 Ma), as well as the inherited ages (323±9, 329±8.3, 332±10 and 335±11 Ma) of the hybrid K-feldspar gabbro, corresponds to the formation age of the Late Variscan granitoids widespread in the Dzirula crystalline massif.
Keywords: The Caucasus, isotope dating, orthoclase-bearing gabbro, petrogenetic model
Procedia PDF Downloads 346
7987 Leveraging xAPI in a Corporate e-Learning Environment to Facilitate the Tracking, Modelling, and Predictive Analysis of Learner Behaviour
Authors: Libor Zachoval, Daire O Broin, Oisin Cawley
Abstract:
E-learning platforms, such as Blackboard, have two major shortcomings: limited data capture as a result of the limitations of SCORM (Shareable Content Object Reference Model), and lack of incorporation of Artificial Intelligence (AI) and machine learning algorithms which could lead to better course adaptations. With the recent development of the Experience Application Programming Interface (xAPI), many additional types of data can be captured, and that opens a window of possibilities from which online education can benefit. In a corporate setting, where companies invest billions in the learning and development of their employees, some learner behaviours can be troublesome because they can hinder a learner's knowledge development. Behaviours that hinder knowledge development also raise ambiguity about a learner's knowledge mastery, specifically those related to gaming the system. Furthermore, a company receives little benefit from its investment if employees are passing courses without possessing the required knowledge, and potential compliance risks may arise. Using xAPI and rules derived from a state-of-the-art review, we identified three learner behaviours, primarily related to guessing, in a corporate compliance course. The identified behaviours are: trying each option for a question, specifically for multiple-choice questions; selecting a single option for all the questions on the test; and continuously repeating tests upon failing as opposed to going over the learning material. These behaviours were detected in learners who repeated the test at least 4 times before passing the course. These findings suggest that gauging the mastery of a learner from multiple-choice test scores alone is a naive approach. Thus, next steps will consider the incorporation of additional data points, knowledge estimation models to model the knowledge mastery of a learner more accurately, and analysis of the data for correlations between knowledge development and the identified learner behaviours. Additional work could explore how learner behaviours could be utilised to make changes to a course, for example, to course content (certain sections of learning material may be shown not to help many learners master the targeted learning outcomes) or to course design (such as the type and duration of feedback).
Keywords: artificial intelligence, corporate e-learning environment, knowledge maintenance, xAPI
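As an illustration of how such rules might operate, the following is a minimal sketch that flags the three behaviours from hypothetical xAPI-derived attempt records; the record fields and the assumption of 4-option multiple-choice questions are illustrative, while the 4-repeat threshold follows the abstract.

```python
# Hypothetical sketch: flagging guessing-related behaviours from xAPI-style
# attempt records. The 4-attempt threshold follows the abstract; the record
# format and 4-option MCQ assumption are illustrative.

def flag_learner_behaviours(attempts):
    """attempts: list of dicts, one per test attempt, each with
    'answers': {question_id: selected_option} and 'passed': bool."""
    flags = set()

    # Behaviour 3: repeating the test at least 4 times before passing.
    fails_before_pass = 0
    for attempt in attempts:
        if attempt["passed"]:
            break
        fails_before_pass += 1
    if fails_before_pass >= 4:
        flags.add("repeats_test_instead_of_reviewing")

    # Behaviour 1: across attempts, trying each option for the same question.
    options_tried = {}
    for attempt in attempts:
        for qid, option in attempt["answers"].items():
            options_tried.setdefault(qid, set()).add(option)
    if any(len(tried) >= 4 for tried in options_tried.values()):
        flags.add("tries_each_option")

    # Behaviour 2: selecting a single option for all questions in one attempt.
    for attempt in attempts:
        if len(attempt["answers"]) > 1 and len(set(attempt["answers"].values())) == 1:
            flags.add("same_option_for_all_questions")

    return flags

# Example: three failed attempts cycling through options, then a pass.
history = [
    {"answers": {"q1": "A", "q2": "A", "q3": "A"}, "passed": False},
    {"answers": {"q1": "B", "q2": "C", "q3": "D"}, "passed": False},
    {"answers": {"q1": "C", "q2": "B", "q3": "B"}, "passed": False},
    {"answers": {"q1": "D", "q2": "D", "q3": "C"}, "passed": True},
]
print(flag_learner_behaviours(history))
```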
Procedia PDF Downloads 126
7986 Revolutionizing Financial Forecasts: Enhancing Predictions with Graph Convolutional Networks (GCN) - Long Short-Term Memory (LSTM) Fusion
Authors: Ali Kazemi
Abstract:
In volatile and interconnected international financial markets, accurately predicting market trends holds substantial value for traders and financial institutions. Traditional machine learning strategies have made considerable strides in forecasting market movements; however, the complex and networked nature of financial data calls for more sophisticated approaches. This study offers a method for financial market prediction that leverages the synergistic potential of Graph Convolutional Networks (GCNs) and Long Short-Term Memory (LSTM) networks. Our proposed algorithm is designed to forecast the trends of stock market indices and cryptocurrency prices, utilizing a comprehensive dataset spanning from January 1, 2015, to December 31, 2023. This era, marked by sizable volatility and transformation in financial markets, affords a solid basis for training and testing our predictive model. Our algorithm integrates diverse data to construct a dynamic financial graph that correctly reflects market intricacies. We collect daily opening, closing, high and low prices for key stock market indices (e.g., S&P 500, NASDAQ) and major cryptocurrencies (e.g., Bitcoin, Ethereum), ensuring a holistic view of market trends. Daily trading volumes are also incorporated to capture market activity and liquidity, providing critical insights into the market's buying and selling dynamics. Furthermore, recognizing the profound influence of the macroeconomic environment on financial markets, we integrate critical macroeconomic indicators, including interest rates, inflation rates, GDP growth, and unemployment rates, into our model. Our GCN algorithm is adept at learning the relational patterns among financial instruments represented as nodes in a comprehensive market graph. Edges in this graph encapsulate relationships based on co-movement patterns and sentiment correlations, enabling our model to grasp the complicated network of influences governing market movements. Complementing this, our LSTM algorithm is trained on sequences of the spatial-temporal representation discovered by the GCN, enriched with historical price and volume data. This lets the LSTM capture and predict temporal market developments accurately. In the comprehensive assessment of our GCN-LSTM algorithm across the stock market and cryptocurrency datasets, the model demonstrated superior predictive accuracy and profitability compared to conventional and alternative machine learning benchmarks. Specifically, the model achieved a Mean Absolute Error (MAE) of 0.85%, indicating high precision in predicting day-by-day price movements. The RMSE was recorded at 1.2%, underscoring the model's effectiveness in minimizing large prediction errors, which is vital in volatile markets. Furthermore, when assessing the model's predictive performance on directional market movements, it achieved an accuracy rate of 78%, significantly outperforming the benchmark models, which averaged an accuracy of 65%. This high degree of accuracy is instrumental for strategies that predict the direction of price moves. This study showcases the efficacy of combining graph-based and sequential deep learning in financial market prediction and highlights the value of a comprehensive, data-driven evaluation framework.
Our findings promise to revolutionize investment techniques and risk management practices, offering investors and financial analysts a powerful tool to navigate the complexities of modern financial markets.
Keywords: financial market prediction, graph convolutional networks (GCNs), long short-term memory (LSTM), cryptocurrency forecasting
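To make the pipeline concrete, the following is a minimal PyTorch sketch of the GCN-to-LSTM flow described above: one graph convolution produces node embeddings per day, and an LSTM consumes each asset's embedding sequence to predict the next-day value. The layer sizes, the uniform adjacency, and the single GCN layer are illustrative assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """One graph convolution: H' = ReLU(A_hat @ H @ W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, a_hat):
        # x: (nodes, features); a_hat: normalized adjacency (nodes, nodes)
        return torch.relu(self.linear(a_hat @ x))

class GCNLSTM(nn.Module):
    def __init__(self, n_features, gcn_dim=32, lstm_dim=64):
        super().__init__()
        self.gcn = GCNLayer(n_features, gcn_dim)
        self.lstm = nn.LSTM(gcn_dim, lstm_dim, batch_first=True)
        self.head = nn.Linear(lstm_dim, 1)  # next-day value per asset

    def forward(self, x_seq, a_hat):
        # x_seq: (time, nodes, features) - daily price/volume/macro features
        spatial = torch.stack([self.gcn(x, a_hat) for x in x_seq])  # (T, N, D)
        out, _ = self.lstm(spatial.permute(1, 0, 2))                # (N, T, D)
        return self.head(out[:, -1, :])                             # (N, 1)

# Toy usage: 30 days, 5 assets, 6 features, uniform adjacency (assumed).
n_days, n_assets, n_feat = 30, 5, 6
a_hat = torch.full((n_assets, n_assets), 1.0 / n_assets)
model = GCNLSTM(n_feat)
pred = model(torch.randn(n_days, n_assets, n_feat), a_hat)
print(pred.shape)  # torch.Size([5, 1])
```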
Procedia PDF Downloads 71
7985 Fusion of MOLA-based DEMs and HiRISE Images for Large-Scale Mars Mapping
Authors: Ahmed F. Elaksher, Islam Omar
Abstract:
In this project, we used MOLA-based DEMs to orthorectify HiRISE optical images. The MOLA data was interpolated using the kriging interpolation technique. Corresponding tie points were then digitized from both datasets. These points were employed in co-registering both datasets using GIS analysis tools. Different transformation models, including the affine and projective transformation models, were used with different sets and distributions of tie points. Additionally, we evaluated the use of the MOLA elevations in co-registering the MOLA and HiRISE datasets. The planimetric RMSEs achieved for each model are reported. Results suggested the use of 3D-2D transformation models.
Keywords: photogrammetry, Mars, MOLA, HiRISE
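For reference, the affine case of such tie-point co-registration reduces to a linear least-squares fit; the sketch below (with synthetic tie points, not the study's data) shows the estimation and the planimetric RMSE computation.

```python
import numpy as np

# Hypothetical sketch: estimating a 2D affine transform between two image
# coordinate systems from digitized tie points via least squares. The tie
# points are synthetic placeholders.

def fit_affine(src, dst):
    """src, dst: (n, 2) arrays of tie points. Solves dst ~ [x, y, 1] @ A."""
    n = src.shape[0]
    design = np.hstack([src, np.ones((n, 1))])            # (n, 3)
    coeffs, residuals, *_ = np.linalg.lstsq(design, dst, rcond=None)
    rmse = np.sqrt(np.mean((design @ coeffs - dst) ** 2))  # planimetric RMSE
    return coeffs, rmse

src = np.array([[10.0, 12.0], [250.0, 30.0], [40.0, 300.0], [260.0, 280.0]])
dst = src * 1.02 + np.array([5.0, -3.0])   # synthetic scale + shift
coeffs, rmse = fit_affine(src, dst)
print("affine coefficients:\n", coeffs.round(4))
print("planimetric RMSE:", rmse)
```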
Procedia PDF Downloads 81
7984 Response of a Bridge Crane during an Earthquake
Authors: F. Fekak, A. Gravouil, M. Brun, B. Depale
Abstract:
During an earthquake, a bridge crane may be subjected to multiple impacts between crane wheels and rail. In order to model such phenomena, a time-history dynamic analysis with a multi-scale approach is performed. The high frequency aspect of the impacts between wheels and rails is taken into account by a Lagrange explicit event-capturing algorithm based on a velocity-impulse formulation to resolve contacts and impacts. An implicit temporal scheme is used for the rest of the structure. The numerical coupling between the implicit and the explicit schemes is achieved with a heterogeneous asynchronous time-integrator.
Keywords: bridge crane, earthquake, dynamic analysis, explicit, implicit, impact
Procedia PDF Downloads 308
7983 Blood Glucose Measurement and Analysis: Methodology
Authors: I. M. Abd Rahim, H. Abdul Rahim, R. Ghazali
Abstract:
Numerous non-invasive blood glucose measurement techniques have been developed by researchers, and near-infrared (NIR) is currently the most promising. However, there is some disagreement on the optimal wavelength range suitable to be used as the reference for the glucose substance in the blood. This paper focuses on the experimental data collection technique and the analysis method used to analyze the data gained from the experiment. The selection of a suitable linear or non-linear model structure is essential in a prediction system, as the system developed needs to be conceivably accurate.
Keywords: linear, near-infrared (NIR), non-invasive, non-linear, prediction system
Procedia PDF Downloads 463
7982 Including All Citizens Pathway (IACP): Transforming Post-Secondary Education Using Inclusion and Accessibility as Foundation
Authors: Fiona Whittington-Walsh
Abstract:
Including All Citizens Pathway (IACP) addresses the system-wide discrimination that students with disabilities experience throughout the education system. IACP offers a wide, institutional support structure so that all students, including students with intellectual/developmental disabilities, are included and can succeed. The entire process, from admissions and course selection to course instruction and graduation, is designed to address systemic discrimination while supporting learners and faculty. The inclusive and accessible pedagogical model that is the foundation of IACP opens the doors of post-secondary education by making existing academic courses environments where all students can participate and succeed. IACP is about transforming teaching, not modifying or adapting the curriculum or the essential knowledge and skill sets that are required learning outcomes. Universal Design for Learning (UDL) principles are applied to instructional teaching strategies such as lectures, presentations, and assessment tools. Created in 2016 as a research pilot, IACP is one of the first fully inclusive for-credit post-secondary options available. The pilot received numerous external and internal grants to support its initiative to investigate and assess the teaching strategies and techniques that support student learning of essential knowledge and skill sets. IACP pilot goals included: (1) providing a successful pilot as a model of inclusive and accessible pedagogy; (2) creating a teacher's guide to assist other instructors in transforming their teaching to reach a wide range of learners; (3) identifying policy barriers located within the educational system; and (4) providing leadership and encouraging innovative and inclusive pedagogical practices. The pilot was a success, and in 2020 the first cohort of students graduated with an exit credential that pre-exists IACP and consists of ten academic courses. The university has committed to continuing IACP and has developed a sustainable model. Each new academic year, a new cohort of IACP students starts their post-secondary educational journey, while two additional instructors are mentored in the pedagogy. The pedagogical foundation of IACP has far-reaching potential, including, but not limited to, programs that offer services for international students whose first language is not English, as well as influencing pedagogical reform in secondary and post-secondary education. IACP also supports universities in satisfying educational standards that are or will be included in accessibility/disability legislation. This session will present information about IACP, share examples of systems transformation, hear from students and instructors, and provide participatory experiential activities that demonstrate the transformative techniques. We will be drawing from the experiences of a recent course that explored research documenting the lived experiences of students with disabilities in post-secondary institutes in B.C. (Whittington-Walsh). Students created theatrical scenes out of the data and presented them using the Forum Theatre method. Forum Theatre was used to create conversations, challenge stereotypes, and build connections between ableism, disability justice, Indigeneity, and social policy.
Keywords: disability justice, inclusive education, pedagogical transformation, systems transformation
Procedia PDF Downloads 16
7981 An Extended Domain-Specific Modeling Language for Marine Observatory Relying on Enterprise Architecture
Authors: Charbel Aoun, Loic Lagadec
Abstract:
A Sensor Network (SN) can be considered as an operation of two phases: (1) observation/measuring, meaning the accumulation of the gathered data at each sensor node; and (2) transferring the collected data to some processing center (e.g., fusion servers) within the SN. Therefore, an underwater sensor network can be defined as a sensor network deployed underwater that monitors underwater activity. The deployed sensors, such as hydrophones, are responsible for registering underwater activity and transferring it to more advanced components. The process of data exchange between the aforementioned components perfectly defines the Marine Observatory (MO) concept, which provides information on ocean state, phenomena and processes. The first step towards the implementation of this concept is defining the environmental constraints and the required tools and components (marine cables, smart sensors, data fusion servers, etc.). The logical and physical components used in these observatories perform critical functions such as the localization of underwater moving objects. These functions can be orchestrated with other services (e.g., military or civilian reaction). In this paper, we present an extension to our MO meta-model that is used to generate a design tool (ArchiMO). We propose new constraints to be taken into consideration at design time. We illustrate our proposal with an example from the MO domain. Additionally, we generate the corresponding simulation code using our self-developed domain-specific model compiler. On the one hand, this illustrates our approach of relying on an Enterprise Architecture (EA) framework that respects multiple views, perspectives of stakeholders, and domain specificity. On the other hand, it helps reduce both the complexity and the time spent in the design activity, while preventing design modeling errors when porting this activity to the MO domain. In conclusion, this work aims to demonstrate that we can improve the design activity of complex systems based on the use of MDE technologies and a domain-specific modeling language with the associated tooling. The major improvement is to provide an early validation step via a models-and-simulation approach to consolidate the system design.
Keywords: smart sensors, data fusion, distributed fusion architecture, sensor networks, domain specific modeling language, enterprise architecture, underwater moving object, localization, marine observatory, NS-3, IMS
Procedia PDF Downloads 182
7980 Jet Impingement Heat Transfer on a Rib-Roughened Flat Plate
Authors: A. H. Alenezi
Abstract:
Cooling by impingement jet is known to have significantly high local and average heat transfer coefficients, which makes it widely used in industrial cooling systems. The heat transfer characteristics of an impinging jet on a rib-roughened flat plate have been investigated numerically. This paper set out to investigate the effect of rib height on the heat transfer rate. Since the flow needs enough spacing after passing the rib to allow reattachment, especially for high Reynolds numbers, this study focuses on finding the optimum rib height that maximizes the heat transfer rate downstream of the plate. The investigation employs a round nozzle with a hydraulic diameter (Dh) of 13.5 mm, a jet-to-target distance (H/D) of 4, a rib location of 1.5D, and jet angles of 45˚ and 90˚ at Re = 10,000.
Keywords: jet impingement, CFD, turbulence model, heat transfer
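As background on how such results are typically post-processed, the local heat transfer coefficient and Nusselt number follow from the wall heat flux; a small sketch with invented sample values (only Dh = 13.5 mm comes from the study) is given below.

```python
# Hypothetical post-processing sketch: local heat transfer coefficient and
# Nusselt number along the plate from CFD wall data. The sample flux and
# temperature values are invented; Dh = 13.5 mm follows the abstract.

AIR_K = 0.026   # thermal conductivity of air, W/(m K) (approx., near ambient)
D_H = 0.0135    # nozzle hydraulic diameter, m
T_JET = 300.0   # jet temperature, K (assumed)

def local_nusselt(q_wall, t_wall):
    """q_wall: wall heat flux (W/m^2); t_wall: local wall temperature (K)."""
    h = q_wall / (t_wall - T_JET)   # convective coefficient, W/(m^2 K)
    return h * D_H / AIR_K

# Sample points moving downstream past the rib (invented values).
for x_over_d, q, tw in [(0.5, 9000.0, 315.0), (1.5, 7500.0, 318.0), (3.0, 5200.0, 320.0)]:
    print(f"x/D = {x_over_d}: Nu = {local_nusselt(q, tw):.1f}")
```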
Procedia PDF Downloads 354
7979 A CM-Based Model for 802.11 Networks Security Policies Enforcement
Authors: Karl Mabiala Dondia, Jing Ma
Abstract:
In recent years, networks based on the 802.11 standards have seen prolific deployment. This massive acceptance of the technology by both home users and corporations is assuredly due to its "plug-and-play" nature and its mobility. The lack of physical containment, due to the inherent nature of the wireless medium, makes maintenance very challenging from a security standpoint. This study examines, via continuous monitoring, various predictable threats that 802.11 networks can face, how they are executed, where each attack may be executed, and how to effectively defend against them. The key goal is to identify the key components of an effective wireless security policy.
Keywords: wireless LAN, IEEE 802.11 standards, continuous monitoring, security policy
Procedia PDF Downloads 384
7978 A Method for Calculating Dew Point Temperature in the Humidity Test
Authors: Wu Sa, Zhang Qian, Li Qi, Wang Ye
Abstract:
Currently, humidity tests do not use the dew point temperature as a control parameter. This paper selects a wet and dry bulb thermometer to measure the vapor pressure and introduces several saturation vapor pressure formulas that are easily calculated on the controller. A dew point temperature calculation model is then established to obtain the relationship between the dew point temperature and the vapor pressure. Finally, a check against 100 sample groups in the range of 0-100 °C from the "Psychrometric Handbook" shows that the average error is small. This formula can be applied to calculate the dew point temperature in the humidity test.
Keywords: dew point temperature, psychrometric handbook, saturation vapor pressure, wet and dry bulb thermometer
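As an illustration of the kind of relationship the paper establishes, a Magnus-type parameterization allows the dew point to be recovered from the vapor pressure; the constants below are one common choice, not necessarily the formula selected by the authors for their controller.

```python
import math

# Illustrative sketch using a Magnus-type parameterization. The constants
# (6.112 hPa, 17.62, 243.12 degC) are one common choice; the paper evaluates
# several such saturation formulas for on-controller use.

def saturation_vapor_pressure(t_c):
    """Saturation vapor pressure (hPa) over water at temperature t_c (degC)."""
    return 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))

def dew_point(vapor_pressure_hpa):
    """Invert the Magnus formula to get the dew point (degC) from vapor pressure."""
    s = math.log(vapor_pressure_hpa / 6.112)
    return 243.12 * s / (17.62 - s)

# Round trip: air at 30 degC and 60% relative humidity.
e = 0.60 * saturation_vapor_pressure(30.0)
print(f"vapor pressure = {e:.2f} hPa, dew point = {dew_point(e):.2f} degC")
# -> dew point around 21.4 degC
```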
Procedia PDF Downloads 492
7977 Transportation Mode Choice Analysis for Accessibility of the Mehrabad International Airport by Statistical Models
Authors: Navid Mirzaei Varzeghani, Mahmoud Saffarzadeh, Ali Naderan, Amirhossein Taheri
Abstract:
Countries are progressing, and the world's busiest airports see year-on-year increases in travel demand. Passenger acceptance of an airport depends on the airport's appeal, which includes the routes between the city and the airport as well as the facilities for reaching them. One of the critical roles of transportation planners is to predict future transportation demand so that an integrated, multi-purpose system can be provided and diverse modes of transportation (rail, air, and land) can be delivered to a destination like an airport. In this study, 356 questionnaires were filled out in person over six days. First, the attraction of business and non-business trips was studied using the data and a linear regression model. Lower travel costs, an age above 55, and other factors are significant for business trips. Non-business travelers, on the other hand, prioritized using personal vehicles to get to the airport and ensuring convenient access to it. Business travelers are also less price-sensitive than non-business travelers regarding airport travel. Furthermore, carrying additional luggage (for example, more than one suitcase per person) undoubtedly decreases the attractiveness of public transit. Afterward, based on the mode and purpose of the trip, the locations with the highest trip generation to the airport were identified. The most frequent origin in Tehran was District 2, with 23 trips, while the most popular mode of transportation was an online taxi, with 12 trips from that location. Then, significant variables in the separation and behavior of travel modes for accessing the airport were investigated for all systems. In this scenario, the most crucial factor is the time it takes to get to the airport, followed by the mode's user-friendliness as a component of passenger preference. It has also been demonstrated that improving public transportation trip times reduces private transportation's market share, including taxicabs. Based on the responses of personal and semi-public vehicle users, the willingness of passengers to approach the airport via public transportation systems was explored in order to enhance present techniques and develop new strategies for providing the most efficient modes of transportation. Using the binary model, it was clear that business travelers and people who had already driven to the airport were the least likely to change.
Keywords: multimodal transportation, demand modeling, travel behavior, statistical models
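A binary mode-choice model of the kind mentioned at the end can be sketched as a logistic regression; the features and data below are synthetic stand-ins for the survey's 356 responses, and the variable set is an illustrative assumption.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical sketch of a binary mode-choice model: predicting whether a
# traveler would switch to public transport (1) or keep a private car (0).
# All features and the generating rule are synthetic, not the survey data.

rng = np.random.default_rng(0)
n = 356
X = np.column_stack([
    rng.normal(45, 15, n),     # access time to the airport, minutes
    rng.integers(0, 2, n),     # business trip? (1 = yes)
    rng.integers(1, 4, n),     # pieces of luggage
])
# Synthetic rule: long access time pushes toward switching; luggage and
# business purpose push against it.
y = ((X[:, 0] * 0.04 - X[:, 2] * 0.8 - X[:, 1] * 0.5
      + rng.normal(0, 0.5, n)) > 0).astype(int)

model = LogisticRegression().fit(X, y)
print("coefficients (time, business, luggage):", model.coef_.round(3))
print("P(switch) for 60 min, non-business, 2 bags:",
      model.predict_proba([[60, 0, 2]])[0, 1].round(3))
```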
Procedia PDF Downloads 179
7976 Nondestructive Monitoring of Atomic Reactions to Detect Precursors of Structural Failure
Authors: Volodymyr Rombakh
Abstract:
This article was written to substantiate the possibility of detecting the precursors of catastrophic destruction of a structure or device and stopping operation before it occurs. Damage to solids results from breaking the bonds between atoms, which requires energy. Modern theories of strength and fracture assume that such energy is due to stress. However, in a letter to W. Thomson (Lord Kelvin) dated December 18, 1856, J.C. Maxwell provided evidence that elastic energy cannot destroy solids. He proposed an equation for estimating a deformable body's energy, equal to the sum of two energies. The first term does not change due to symmetrical compression, while the second term is distortion without compression. Both types of energy are represented in the equation as quadratic functions of strain, but Maxwell repeatedly wrote that it is not stress but strain. Furthermore, he notes that the nature of the energy causing the distortion is unknown to him. An article devoted to theories of elasticity was published in 1850. Maxwell tried to express mechanical properties with the help of optics, which became possible only after the creation of quantum mechanics. However, Maxwell's work on elasticity is not cited in the theories of strength and fracture. The authors of these theories and their associates are still trying to describe the phenomena they observe on the basis of classical mechanics. The study of Faraday's experiments and of Maxwell's and Rutherford's ideas made it possible to discover a previously unknown area of electromagnetic radiation. The properties of photons emitted in this reaction are fundamentally different from those of photons emitted in nuclear reactions and are caused by the transition of electrons in an atom. Such photons are released during all processes in the universe, including from plants and organs under natural conditions; their penetrating power in metal is millions of times greater than that of gamma rays, and yet they are non-invasive. This apparent contradiction arises because the chaotic motion of protons is accompanied by chaotic radiation of photons in time and space. Such photons are not coherent. The energy of a solitary photon is insufficient to break the bond between atoms, one of the stages of which is ionization. Photographs registered the deformation of a rail by 113 cars, while a Geiger counter did not. The author's studies show that the cause of damage to a solid is the breakage of bonds between a finite number of atoms due to the stimulated emission of metastable atoms. The guarantee of the reliability of a structure is the ratio of the energy dissipation rate to the energy accumulation rate, but not the strength, which is not a physical parameter since it cannot be measured or calculated. The possibility of continuous control of this ratio is due to the spontaneous emission of photons by metastable atoms. The article presents calculation examples of destruction energy and photographs attributed to the action of photons emitted during the atomic-proton reaction.
Keywords: atomic-proton reaction, precursors of man-made disasters, strain, stress
Procedia PDF Downloads 95
7975 A Case Study of Typhoon Tracks: Insights from the Interaction between Typhoon Hinnamnor and Ocean Currents in 2022
Authors: Wei-Kuo Soong
Abstract:
The forecasting of typhoon tracks remains a formidable challenge, primarily attributable to the paucity of observational data over the open sea and the intricate influence of weather systems at varying scales. This study investigates the case of Typhoon Hinnamnor in 2022, examining its trajectory and intensity fluctuations in relation to its interaction with a concurrent tropical cyclone and sea surface temperatures (SST). The Weather Research and Forecasting (WRF) model is utilized to simulate and analyze the interaction between Typhoon Hinnamnor and its environmental factors, shedding light on the mechanisms driving typhoon development and enhancing forecasting capabilities.
Keywords: typhoon, sea surface temperature, forecasting, WRF
Procedia PDF Downloads 57
7974 Experimental Quantification of the Intra-Tow Resin Storage Evolution during RTM Injection
Authors: Mathieu Imbert, Sebastien Comas-Cardona, Emmanuelle Abisset-Chavanne, David Prono
Abstract:
Short-cycle-time Resin Transfer Molding (RTM) applications appear to be of great interest for the mass production of automotive or aeronautical lightweight structural parts. During the RTM process, the two components of a resin are mixed on-line and injected into the cavity of a mold where a fibrous preform has been placed. Injection and polymerization occur simultaneously in the preform, inducing evolutions of temperature, degree of cure and viscosity that furthermore affect flow and curing. In order to adjust the processing conditions to reduce the cycle time, it is, therefore, essential to understand and quantify the physical mechanisms occurring in the part during injection. In a previous study, a dual-scale simulation tool was developed to help determine the optimum injection parameters. This tool allows finely tracking the repartition of the resin and the evolution of its properties during reactive injections with on-line mixing. Tows and channels of the fibrous material are considered separately to deal with the consequences of the dual-scale morphology of continuous fiber textiles. The simulation tool reproduces the unsaturated area at the flow front, generated by the tow/channel difference in permeability. Resin "storage" in the tows after saturation is also taken into account, as it may significantly affect the repartition and evolution of the temperature, degree of cure and viscosity in the part during reactive injections. The aim of the current study is, through experiments, to understand and quantify the "storage" evolution in the tows in order to adjust and validate the numerical tool. The presented study is based on four experimental repeats conducted on three different types of textiles: a unidirectional Non-Crimp Fabric (NCF), a triaxial NCF and a satin weave. Model fluids, dyes and image analysis are used to study quantitatively the resin flow in the saturated area of the samples. Also, textile characteristics affecting the resin "storage" evolution in the tows are analyzed. Finally, fully coupled on-line mixing reactive injections are conducted to validate the numerical model.
Keywords: experimental, on-line mixing, high-speed RTM process, dual-scale flow
Procedia PDF Downloads 169
7973 A Study of Different Retail Models That Penetrate South African Townships
Authors: Beaula M. Kruger, Silindisipho T. Belot
Abstract:
Small informal retailers are considered one of the most important features of developing countries around the world. These small informal retailers form part of local communities in South African townships and are estimated to number more than 100,000 across the country. The township economic landscape has changed over time in South Africa. Traditional small informal retailers in South African townships have faced the growing challenge of increasing competition: an increase in the number of local retail shops and foreign-owned shops. There is evidence that South African personal and disposable income has increased amongst black African consumers. Historically, people residing in townships were restricted to informal retail shops; however, this has changed due to the growing number of formal large retail chains entering the township market. The larger retail chains are aware of the improved income levels of middle-income township residents, and as a result, larger retailers have followed certain strategies such as: (1) retail format development; (2) a diversification growth strategy; (3) a market penetration growth strategy; and (4) market expansion. This research conducted a comparative analysis of the different retail models developed by Pick n Pay, Spar and Shoprite. The research methodology employed for this study was of a qualitative nature and made use of a case study to conduct a comparative analysis between larger retailers. A questionnaire was also designed to obtain data from existing smaller retailers. The study found that larger retailers have developed smaller retail formats to compete with the traditional smaller retailers operating in South African townships. Only one of the large retailers offers entrepreneurs a franchise model. One of the big retailers offers the opportunity to employ between 15 and 20 employees, while the others are subject to the outcome of a feasibility study. The response obtained from the entrepreneurs in the townships was mixed: while some found the larger retailers' presence to have a "negative impact" that has increased competition, others saw them as a means to obtain a variety of products. This research found that the retail model most beneficial to both the bigger retailer and existing and new entrepreneurs is that of Pick n Pay. The other retail format models are more beneficial to the bigger retailers and not to new and existing entrepreneurs.
Keywords: Pick n Pay, retailers, shoprite, spar, townships
Procedia PDF Downloads 199
7972 Music as Source Domain: A Cross-Linguistic Exploration of Conceptual Metaphors
Authors: Eleanor Sweeney, Chunyuan Di
Abstract:
The metaphors people use in everyday discourse do not arise randomly; rather, they develop from our physical experiences in our social and cultural environments. Conceptual Metaphor Theory (CMT) explains that through metaphor, we apply our embodied understanding of the physical world to non-material concepts to understand and express abstract concepts. Our most productive source domains derive from our embodied understanding and allow us to develop primary metaphors, and from primary metaphors, an elaborate, creative world of culturally constructed complex metaphors. Cognitive Linguistics researchers draw upon individual embodied experience for primary metaphors. Socioculturally embodied experience through music has long furnished linguistic expressions in diverse languages, as conceptual metaphors or everyday expressions. Can a socially embodied experience function in the same way as an individually embodied experience in the creation of conceptual metaphors? The authors argue that since music is inherently social and embodied, musical experiences function as a richly motivated source domain. The focus of this study is socially embodied musical experience, which is then reflected and expressed through metaphors. This cross-linguistic study explores music as a source domain for metaphors of social alignment in English, French, and Chinese. The authors explored two public discourse sites, Facebook and Linguée, in order to collect linguistic metaphors from three different languages. By conducting this cross-linguistic study, cross-cultural similarities and differences in metaphors for which music is the source domain can be examined. Different musical elements, such as melody, speed, rhythm and harmony, are analyzed for their possible metaphoric meanings of social alignment. Our findings suggest that the general metaphor cooperation is music is a productive metaphor with some subcases, and that correlated social behaviors can be metaphorically expressed with certain elements in music. For example, since performance is a subset of the category behavior, there is a natural mapping from performance in music to behavior in social settings: social alignment is musical performance. Musical performance entails a collective social expectation that exerts control over individual behavior. When individual behavior does not align with the collective social expectation, music-related expressions are often used to express how the individual is violating social norms. Moreover, when individuals do align their behavior with social norms, similar musical expressions are used. Cooperation is a crucial social value in all cultures, indeed it is a key element of survival, and music provides a coherent, consistent, and rich source domain—one based upon a universal and definitive cultural practice.
Keywords: Chinese, Conceptual Metaphor Theory, cross-linguistic, culturally embodied experience, English, French, metaphor, music
Procedia PDF Downloads 175
7971 Thermal Method Production of the Hydroxyapatite from Bone By-Products from Meat Industry
Authors: Agnieszka Sobczak-Kupiec, Dagmara Malina, Klaudia Pluta, Wioletta Florkiewicz, Bozena Tyliszczak
Abstract:
Introduction: Demand for phosphorus compounds grows continuously; thus, alternative sources of this element are being sought. One of these sources could be by-products from the meat industry, which contain a prominent quantity of phosphorus compounds. Hydroxyapatite, a natural component of animal and human bones, is a leading material applied in bone surgery and also in stomatology. It is biocompatible, bioactive and osteoinductive. Methodology: Hydroxyapatite preparation: The raw material was deproteinized and defatted bone pulp, called bone sludge, which was formed as waste in the deproteinization process of bones, in which a protein hydrolysate was the main product. Hydroxyapatite was obtained by calcining in a chamber kiln with electric heating in an air atmosphere in two stages. In the first stage, the material was calcined at 600°C for 3 hours. In the next stage, the unified material was calcined at three different temperatures (750°C, 850°C and 950°C), keeping the material at the maximum temperature for 3.0 hours. Bone sludge: Bone sludge was formed as waste in the deproteinization process of bones, in which a protein hydrolysate was the main product. Pork bones coming from the partition of meat were used as the raw material for the production of the protein hydrolysate. After disintegration, a mixture of bone pulp and water with a small amount of lactic acid was boiled at a temperature of 130-135°C and under a pressure of 4 bar. After 3-3.5 hours, the boiled-out bones were separated on a sieve, and the solution of protein-fat hydrolysate passed into a decanter, where bone sludge was separated from it. Results of the study: The phase composition was analyzed by the roentgenographic method. Hydroxyapatite was the only crystalline phase observed in all the calcining products. XRD investigation showed that the crystallization degree of hydroxyapatite increased with calcining temperature. Conclusion: The research showed that the phosphorus content is around 12%, whereas the calcium content amounts to 28% on average. The conducted research on bone-waste calcining at temperatures of 750-950°C confirmed that thermal utilization of deproteinized bone waste is possible. X-ray investigations confirmed that hydroxyapatite is the main component of the calcining products and that its crystallization degree increased with calcining temperature. The contents of calcium and phosphorus distinctly increased with calcining temperature, whereas the content of phosphorus soluble in acids decreased. This could be connected with the higher crystallization degree of the material obtained at higher temperatures and its stable structure. Acknowledgements: The authors would like to thank The National Centre for Research and Development (Grant no: LIDER//037/481/L-5/13/NCBR/2014) for providing financial support to this project.
Keywords: bone by-products, bone sludge, calcination, hydroxyapatite
Procedia PDF Downloads 290
7970 Anti-Obesity Effects of Pteryxin in Peucedanum japonicum Thunb Leaves through Different Pathways of Adipogenesis In-Vitro
Authors: Ruwani N. Nugara, Masashi Inafuku, Kensaku Takara, Hironori Iwasaki, Hirosuke Oku
Abstract:
Pteryxin from the partially purified hexane phase (HP) of Peucedanum japonicum Thunb (PJT) was identified as the active compound related to anti-obesity. Thus, in this study, we investigated the mechanisms related to its anti-obesity activity in vitro. The HP was fractionated, and the effect on the triglyceride (TG) content was evaluated in 3T3-L1 and HepG2 cells. Comprehensive spectroscopic analyses were used to identify the structure of the active compound. The dose-dependent effect of the active constituent on the TG content, and the gene expressions related to adipogenesis, fatty acid catabolism, energy expenditure, lipolysis and lipogenesis (20 μg/mL), were examined in vitro. Furthermore, a higher dosage of pteryxin (50 μg/mL) was tested against 20 μg/mL in 3T3-L1 adipocytes. The mRNA was subjected to a SOLiD next-generation sequencer, and the obtained data were analyzed by Ingenuity Pathway Analysis (IPA). The active constituent was identified as pteryxin, a known compound in PJT. However, its biological activities against obesity have not been reported previously. Pteryxin dose-dependently suppressed the TG content in both 3T3-L1 adipocytes and HepG2 hepatocytes (P < 0.05). Sterol regulatory element-binding protein-1 (SREBP1c), fatty acid synthase (FASN), and acetyl-CoA carboxylase-1 (ACC1) were downregulated in pteryxin-treated adipocytes (by 18.0, 36.1 and 38.2%, respectively; P < 0.05) and hepatocytes (by 72.3, 62.9 and 38.8%, respectively; P < 0.05), indicating its suppressive effects on fatty acid synthesis. Hormone-sensitive lipase (HSL), a lipid-catabolizing gene, was upregulated (by 15.1%; P < 0.05) in pteryxin-treated adipocytes, suggesting improved lipolysis. Concordantly, the adipocyte size marker gene paternally expressed gene 1/mesoderm-specific transcript (MEST) was downregulated (by 42.8%; P < 0.05), further accelerating the lipolytic activity. The upregulated trend of uncoupling protein 2 (UCP2; by 77.5%; P < 0.05) reflected the improved energy expenditure due to pteryxin. The 50 μg/mL dosage of pteryxin completely suppressed PPARγ, MEST, SREBP1c, HSL, adiponectin, fatty acid binding protein (FABP) 4, and UCPs in 3T3-L1 adipocytes. The IPA suggested that pteryxin at 20 μg/mL and 50 μg/mL suppresses obesity via two different pathways, with the WNT signaling pathway playing a key role at the higher dose in the preadipocyte stage. Pteryxin in PJT plays a key role in regulating the lipid metabolism-related gene network and improving energy production in vitro. Thus, the results suggest pteryxin as a new natural compound to be used as an anti-obesity drug in the pharmaceutical industry.
Keywords: obesity, peucedanum japonicum thunb, pteryxin, food science
Procedia PDF Downloads 456
7969 Matlab/Simulink Simulation of Solar Energy Storage System
Authors: Mustafa A. Al-Refai
Abstract:
This paper investigates the energy storage technologies that can potentially enhance the use of solar energy. Water electrolysis systems are seen as the principal means of producing a large amount of hydrogen in the future. Starting from the analysis of the models of the system components, a complete simulation model was realized in the Matlab-Simulink environment. Results of the numerical simulations are provided. The operation of the electrolyzer and photovoltaic array combination is verified at various insolation levels. It is pointed out that the solar cell arrays and electrolyzers produce the expected results with continuously varying solar energy inputs.
Keywords: electrolyzer, simulink, solar energy, storage system
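A rough stand-alone companion to such a Simulink model is Faraday's law linking electrolyzer current to hydrogen production; in the sketch below, all ratings and efficiencies are assumed values, not parameters from the paper.

```python
# Illustrative back-of-the-envelope sketch: hydrogen production from a PV
# array driving an electrolyzer at varying insolation. All ratings and
# efficiencies are assumed values.

F = 96485.0            # Faraday constant, C/mol
PV_AREA = 20.0         # PV array area, m^2 (assumed)
PV_EFF = 0.17          # PV conversion efficiency (assumed)
CELL_VOLTAGE = 1.8     # electrolyzer cell voltage, V (assumed)
FARADAY_EFF = 0.95     # current efficiency (assumed)

def hydrogen_rate_mol_per_s(irradiance_w_m2):
    """Faraday's law: one mol of H2 per 2 mol of electrons (2F coulombs)."""
    electric_power = irradiance_w_m2 * PV_AREA * PV_EFF      # W
    current = electric_power / CELL_VOLTAGE                  # A (single cell)
    return FARADAY_EFF * current / (2.0 * F)                 # mol H2 / s

for g in (200.0, 600.0, 1000.0):  # varying insolation levels, W/m^2
    print(f"G = {g:4.0f} W/m^2 -> {hydrogen_rate_mol_per_s(g) * 3600:.1f} mol H2/h")
```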
Procedia PDF Downloads 439
7968 Digital Literacy Transformation and Implications in Institutions of Higher Learning in Kenya
Authors: Emily Cherono Sawe, Elisha Ondieki Makori
Abstract:
Knowledge and digital economies have brought challenges and potential opportunities for universities to innovate and improve the quality of learning. Disruptive technologies and information dynamics continue to transform and change the landscape of teaching, scholarship, and research activities across universities. Digital literacy is a fundamental and imperative element in higher education and training, as witnessed during the new normal. COVID-19 caused unprecedented disruption in universities, where teaching and learning depended on digital innovations and applications. Academic services and activities were provided online, including library information services. Information professionals were forced to adopt various digital platforms in order to provide information services to patrons. University libraries' roles in fulfilling educational responsibilities continue to evolve in response to changes in pedagogy, technology, economy, society, policies, and strategies of parent institutions. Libraries are currently undergoing considerable transformational change as a result of the inclusion of the digital environment. Academic libraries have been at the forefront of providing online learning resources and online information services, as well as supporting students and staff in developing digital literacy skills via online courses, tutorials, and workshops. Digital literacy transformation and information staff are crucial elements reminiscent of the prioritization of skills and knowledge for lifelong learning. The purpose of this baseline research is to assess the implications of digital literacy transformation in institutions of higher learning in Kenya and to share appropriate strategies to leverage and sustain teaching and research. Objectives include examining the leverage and preparedness of the digital literacy environment in streamlining learning in the universities, exploring and benchmarking imperative digital competences for information professionals, establishing the perception of information professionals towards digital literacy skills, and determining lessons, best practices, and strategies to accelerate digital literacy transformation for effective research and learning in the universities. The study will adopt a descriptive research design using questionnaires and document analysis as the instruments for data collection. The targeted population is librarians and information professionals, as well as academics in public and private universities teaching information literacy programmes. Data and information are to be collected through an online structured questionnaire and digital face-to-face interviews. Findings and results will provide promising lessons together with best practices and strategies to transform and change digital literacies in university libraries in Kenya.
Keywords: digital literacy, digital innovations, information professionals, librarians, higher education, university libraries, digital information literacy
Procedia PDF Downloads 101
7967 STML: Service Type-Checking Markup Language for Services of Web Components
Authors: Saqib Rasool, Adnan N. Mian
Abstract:
Web components are introduced as the latest HTML5 standard for writing modular web interfaces, ensuring maintainability through the isolated scope of web components. Reusability can also be achieved by sharing plug-and-play web components that can be used as off-the-shelf components by other developers. A web component encapsulates all the required HTML, CSS and JavaScript code as a standalone package, which must be imported to integrate the web component within an existing web interface. This is then followed by the integration of the web component with web services for dynamically populating its content. Since web components are reusable as off-the-shelf components, they must be equipped with some mechanism for ensuring their proper integration with web services. The consistency of a service's behavior can be verified through type checking, which is one of the popular solutions for improving code quality in many programming languages. However, HTML does not provide type checking, as it is a markup language and not a programming language. The contribution of this work is to introduce a new extension of HTML called Service Type-checking Markup Language (STML), adding support for type checking in HTML for JSON-based REST services. STML can be used for defining the expected data types of responses from JSON-based REST services, which will be used for populating the content within the HTML elements of a web component. Although JSON includes the string, number, boolean, object and array data types, STML is designed to support only string, number and boolean. This is because both object and array are treated as strings when populated in HTML elements. In order to define the data type of any HTML element, the developer just needs to add the custom STML attribute st-string, st-number or st-boolean for string, number or boolean, respectively. All these STML annotations are added by the developer who is writing a web component, and they enable other developers to use automated type checking for ensuring the proper integration of their REST services with the same web component. Two utilities have been written for developers who are using STML-based web components. One of these utilities is used for automated type checking during the development phase. It uses the browser console to show an error description if an integrated web service does not return a response with the expected data type. The other utility is a Gulp-based command line utility for removing the STML attributes before going into production, which ensures the delivery of STML-free web pages in the production environment. Both of these utilities have been tested for performing type checking of REST services through STML-based web components, and the results have confirmed the feasibility of evaluating service behavior through HTML alone. Currently, STML is designed for automated type checking of integrated REST services, but it could be extended into a complete service testing suite based on HTML only, transforming STML from a Service Type-checking Markup Language into a Service Testing Markup Language.
Keywords: REST, STML, type checking, web component
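The type check STML implies can be sketched as follows: each element declares an expected type via the st-string, st-number or st-boolean attribute, and the JSON response is validated against those declarations before binding. The declaration mapping and field names below are invented for illustration; only the three st-* attribute names come from the abstract.

```python
# Hypothetical sketch of STML-style type checking: element ids mapped to the
# declared st-* type, validated against a JSON response. The mapping format
# and field names are illustrative, not the paper's implementation.

EXPECTED = {          # element id -> declared STML type (assumed mapping)
    "user-name": "st-string",
    "user-age": "st-number",
    "is-active": "st-boolean",
}

PYTHON_TYPES = {"st-string": str, "st-number": (int, float), "st-boolean": bool}

def type_check(response: dict) -> list:
    """Return console-style error descriptions for mismatched fields."""
    errors = []
    for element_id, declared in EXPECTED.items():
        value = response.get(element_id)
        # bool is a subclass of int in Python, so exclude it from st-number.
        if declared == "st-number" and isinstance(value, bool):
            ok = False
        else:
            ok = isinstance(value, PYTHON_TYPES[declared])
        if not ok:
            errors.append(f"{element_id}: expected {declared}, got {type(value).__name__}")
    return errors

print(type_check({"user-name": "Ada", "user-age": "37", "is-active": True}))
# -> ['user-age: expected st-number, got str']
```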
Procedia PDF Downloads 258
7966 Simulation of the Evacuation of Ships Carrying Dangerous Goods from Tsunami
Authors: Yoshinori Matsuura, Saori Iwanaga
Abstract:
The Great East Japan Earthquake occurred at 14:46 on Friday, March 11, 2011. It was the most powerful earthquake known to have hit Japan. The earthquake triggered extremely destructive tsunami waves of up to 40.5 meters in height. We focus on ship evacuation from a tsunami and analyze it using multi-agent simulation, in order to prepare for a coming earthquake. We developed a simulation model of ships that set sail from the port in order to evacuate from the tsunami, considering ships carrying dangerous goods.
Keywords: ship evacuation, multi-agent simulation, tsunami
Procedia PDF Downloads 460
7965 Process Modeling in an Aeronautics Context
Authors: Sophie Lemoussu, Jean-Charles Chaudemar, Robertus A. Vingerhoeds
Abstract:
Many innovative projects exist in the field of aeronautics, each addressing specific areas so as to reduce weight, increase autonomy, reduce CO2 emissions, etc. In many cases, such innovative developments are carried out by very small enterprises (VSEs) or small and medium-sized enterprises (SMEs). A good example concerns airships, which are being studied as a real alternative for passenger and cargo transportation. Today, no international regulations propose a precise and sufficiently detailed framework for the development and certification of airships. The absence of such a regulatory framework requires very close contact with regulatory instances. However, VSEs/SMEs do not always have sufficient resources and internal knowledge to handle this complexity and to discuss these issues. This poses an additional challenge for those VSEs/SMEs, in particular those that have system integration responsibilities and that must provide all the necessary evidence to demonstrate their ability to design, produce, and operate airships with the expected level of safety and reliability. The main objective of this research is to provide a methodological framework enabling VSEs/SMEs with limited resources to organize the development of airships while taking into account the constraints of safety, cost, time and performance. This paper contributes to this problem by proposing a Model-Based Systems Engineering approach. Through a comprehensive process modeling approach applied to the development processes, the regulatory constraints, existing best practices, etc., a good image can be obtained of the process landscape that may influence the development of airships. To this effect, not only is the necessary regulatory information taken on board, but other international standards and norms on systems engineering and project management are also modeled and taken into account. In a next step, the model can be used to analyze the specific situation for given developments, derive critical paths for the development, identify eventual conflicts between the norms, standards, and regulatory expectations, or identify those areas where not enough information is available. Once critical paths are known, optimization approaches can be used and decision support techniques can be applied so as to better support VSEs/SMEs in their innovative developments. This paper reports on the adopted modeling approach, the retained modeling languages, and how they all fit together.
Keywords: aeronautics, certification, process modeling, project management, regulation, SME, systems engineering, VSE
Procedia PDF Downloads 165
7964 Hierarchy and Weight of Influence Factors on Labor Productivity in the Construction Industry of Nepal
Authors: Shraddha Palikhe, Sunkuk Kim
Abstract:
The construction industry is the most labor-intensive in Nepal. It is obvious that construction is a major sector, and any productivity enhancement activity in this sector will have a positive impact on the overall improvement of the national economy. Previous studies have stated that Nepal has poor labor productivity compared with other South Asian countries. Though considerable research has been done on productivity factors in other countries, no study has addressed labor productivity issues in Nepal. Therefore, the main objective of this study is to identify and rank the influence factors for poor labor productivity. In this study, a questionnaire approach was chosen as the survey method, covering thirty experts involved in the construction industry, such as architects, civil engineers, project engineers and site engineers. A survey was conducted in Nepal to identify the major factors impacting construction labor productivity. The Analytic Hierarchy Process (AHP) analysis method was used to understand the underlying relationships among the factors, categorized into five groups, namely (1) the labor-management group; (2) the material management group; (3) the human labor group; (4) the technological group; and (5) the external group, divided into 33 subfactors. AHP was used to establish the relative importance of the criteria. The AHP makes pairwise comparisons of relative importance between hierarchy elements grouped by labor productivity decision criteria. Respondents were asked to answer based on their experience of construction works. On the basis of the respondents' responses, the weights of all the factors were calculated and ranked. The AHP results were tabulated based on the weight and ranking of the influence factors. The AHP model consists of five main criteria and 33 sub-criteria. Among the five main criteria, the analysis assigns the highest weight, 26.15%, to the human labor group, followed by 23.01% for the technological group, 22.97% for the labor-management group, 17.61% for the material management group and 10.25% for the external group. Among the 33 sub-criteria, the most influential factors for poor productivity in Nepal are lack of monetary incentive (20.53%) in the human labor group, unsafe working conditions (17.55%) in the technological group, lack of leadership (18.43%) in the labor-management group, unavailability of tools at site (25.03%) in the material management group and strikes (35.01%) in the external group. The results show that the AHP model and associated criteria are helpful for assessing the current situation of labor productivity. It is essential to consider these influence factors to improve labor productivity in the construction industry of Nepal.
Keywords: construction, hierarchical analysis, influence factors, labor productivity
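For readers unfamiliar with AHP, the weights reported above come from pairwise comparison matrices; a minimal sketch of the eigenvector method with a consistency check follows, using a made-up 3x3 matrix rather than the study's 5-criteria and 33-sub-criteria judgments.

```python
import numpy as np

# Minimal AHP sketch: derive criterion weights from a pairwise comparison
# matrix via the principal eigenvector and check consistency. The 3x3 matrix
# below is a made-up example.

def ahp_weights(pairwise):
    vals, vecs = np.linalg.eig(pairwise)
    k = np.argmax(vals.real)                   # principal eigenvalue
    w = np.abs(vecs[:, k].real)
    w /= w.sum()                               # normalized weights
    n = pairwise.shape[0]
    ci = (vals[k].real - n) / (n - 1)          # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]        # Saaty's random index
    return w, ci / ri                          # weights, consistency ratio

pairwise = np.array([[1.0,   3.0, 5.0],
                     [1/3.0, 1.0, 2.0],
                     [1/5.0, 1/2.0, 1.0]])
w, cr = ahp_weights(pairwise)
print("weights:", w.round(3), "CR:", round(cr, 3))  # CR < 0.10 => acceptable
```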
Procedia PDF Downloads 406
7963 The Evaluation of the Performance of Different Filtering Approaches in the Tracking Problem and the Effect of Noise Variance
Authors: Mohammad Javad Mollakazemi, Farhad Asadi, Aref Ghafouri
Abstract:
The performance of different filtering approaches depends on the modeling of the dynamical system and on the algorithm structure. For modeling and smoothing the data, the evaluation of the posterior distribution in each filtering approach should be chosen carefully. In this paper, different filtering approaches, namely the Kalman filter, the extended Kalman filter (EKF), the unscented Kalman filter (UKF), the extended Kalman smoother (EKS), and the Rauch-Tung-Striebel (RTS) smoother, are simulated on trajectory-tracking problems, and the accuracy and limitations of these approaches are explained. The model probabilities under the different filters are then compared, and finally the effect of the noise variance on estimation is described with simulation results.
Keywords: Gaussian approximation, Kalman smoother, parameter estimation, noise variance
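As a minimal sketch of the kind of trajectory-tracking simulation compared in the paper, the snippet below runs a linear Kalman filter on a 1-D constant-velocity model; all numeric values (noise covariances, horizon) are illustrative assumptions.

```python
# Minimal linear Kalman filter for 1-D constant-velocity tracking.
import numpy as np

dt = 1.0
F = np.array([[1, dt], [0, 1]])   # state transition (position, velocity)
H = np.array([[1, 0]])            # we observe position only
Q = 0.01 * np.eye(2)              # process-noise covariance
R = np.array([[1.0]])             # measurement-noise covariance (variance)

rng = np.random.default_rng(0)
x_true = np.array([0.0, 1.0])
x, P = np.zeros(2), np.eye(2)     # filter state and covariance

for _ in range(50):
    # Simulate the true system and a noisy position measurement.
    x_true = F @ x_true
    z = H @ x_true + rng.normal(0, np.sqrt(R[0, 0]))

    # Predict step.
    x = F @ x
    P = F @ P @ F.T + Q

    # Update step: the Kalman gain weighs prediction against measurement;
    # a larger R (noise variance) shifts trust toward the prediction.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P

print("estimated state:", x, "| true state:", x_true)
```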
Procedia PDF Downloads 445
7962 Optimization of the Aerodynamic Performances of an Unmanned Aerial Vehicle
Authors: Fares Senouci, Bachir Imine
Abstract:
This paper presents a numerical and experimental optimization of the aerodynamic performance of a drone equipped with three types of horizontal stabilizer. To build this optimal configuration, an experimental and numerical study was conducted on three parameters: the geometry of the stabilizer (horizontal form or inverted-V form), the position of the horizontal stabilizer (up or down), and the landing-gear position (closed or open). The results show that placing the stabilizer above the horizontal plane of the fuselage provides better aerodynamic performance, and that the landing gear increases lift in the stability zone, that is, where the flow is not separated.
Keywords: aerodynamics, drag, lift, turbulence model, wind tunnel
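For context on how such wind-tunnel results are compared across configurations, the sketch below reduces raw force measurements to the standard dimensionless coefficients, C_L = L / (q S) and C_D = D / (q S) with dynamic pressure q = 0.5 rho V^2. The forces, velocity, and reference area are invented values, not data from the study.

```python
# Reducing wind-tunnel force measurements to lift/drag coefficients.
RHO = 1.225          # air density at sea level, kg/m^3
V = 30.0             # free-stream velocity, m/s (hypothetical test point)
S = 0.35             # wing reference area, m^2 (hypothetical drone)

def aero_coefficients(lift_n: float, drag_n: float) -> tuple[float, float]:
    """Convert measured lift/drag forces (N) to dimensionless coefficients."""
    q = 0.5 * RHO * V ** 2       # dynamic pressure, Pa
    return lift_n / (q * S), drag_n / (q * S)

# Example: compare two stabilizer configurations at the same test point.
for config, (lift, drag) in {"up-stabilizer": (95.0, 9.5),
                             "down-stabilizer": (90.0, 10.2)}.items():
    cl, cd = aero_coefficients(lift, drag)
    print(f"{config}: C_L={cl:.3f}  C_D={cd:.3f}  L/D={cl/cd:.1f}")
```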
Procedia PDF Downloads 256
7961 Building a Scalable Telemetry Based Multiclass Predictive Maintenance Model in R
Authors: Jaya Mathew
Abstract:
Many organizations are faced with the challenge of how to analyze and build machine learning models using their sensitive telemetry data. In this paper, we discuss how users can leverage the power of R without having to move their big data around, as well as a cloud-based solution for organizations willing to host their data in the cloud. By using ScaleR technology for parallelization and remote computing, or R Services on premises or in the cloud, users can run R at scale while keeping their data in place.
Keywords: predictive maintenance, machine learning, big data, cloud-based, on-premises solution, R
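The paper's pipeline is built on R and ScaleR; purely as a language-neutral illustration of the multiclass predictive-maintenance modeling step, the sketch below trains a classifier on synthetic telemetry in Python with scikit-learn. The feature names, labels, and data-generating rules are all invented for the sketch and do not reflect the paper's dataset or ScaleR's API.

```python
# Illustrative multiclass predictive-maintenance classifier on synthetic
# telemetry (0 = healthy, 1 = bearing failure, 2 = seal failure).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(42)
n = 2000
X = np.column_stack([
    rng.normal(70, 10, n),    # temperature
    rng.normal(1.0, 0.3, n),  # vibration
    rng.normal(40, 5, n),     # pressure
    rng.integers(0, 365, n),  # days since last maintenance
])
# Synthetic label rule, purely for demonstration.
y = np.where(X[:, 1] > 1.4, 1, np.where(X[:, 3] > 300, 2, 0))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, model.predict(X_te)))
```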
Procedia PDF Downloads 380
7960 Detection and Classification of Strabismus Using a Convolutional Neural Network and Spatial Image Processing
Authors: Anoop T. R., Otman Basir, Robert F. Hess, Eileen E. Birch, Brooke A. Koritala, Reed M. Jost, Becky Luu, David Stager, Ben Thompson
Abstract:
Strabismus refers to a misalignment of the eyes. Early detection and treatment of strabismus in childhood can prevent the development of permanent vision loss due to abnormal development of visual brain areas. We developed a two-stage method for strabismus detection and classification based on photographs of the face. The first stage detects the presence or absence of strabismus, and the second stage classifies the type of strabismus. The first stage comprises face detection using a Haar cascade, facial landmark estimation, face alignment, aligned-face landmark detection, segmentation of the eye region, and detection of strabismus using a VGG16 convolutional neural network. Face alignment transforms the face to a canonical pose to ensure consistency in subsequent analysis. Using the facial landmarks, the eye region is segmented from the aligned face and fed into a VGG16 CNN model that has been trained to classify strabismus. The CNN determines whether strabismus is present and classifies its type (exotropia, esotropia, or vertical deviation). If stage 1 detects strabismus, the eye-region image is fed into stage 2, which starts with the estimation of the pupil center coordinates using a Mask R-CNN deep neural network. Then, the distance between the pupil coordinates and the eye landmarks is calculated, along with the angle that the pupil coordinates make with the horizontal and vertical axes. The distance and angle information is used to characterize the degree and direction of the strabismic eye misalignment. The model was tested on 100 clinically labeled images of children with (n = 50) and without (n = 50) strabismus. The true positive rate (TPR) and false positive rate (FPR) of the first stage were 94% and 6%, respectively. The classification stage produced TPRs of 94.73%, 94.44%, and 100% for esotropia, exotropia, and vertical deviation, respectively, with FPRs of 5.26%, 5.55%, and 0%. Adding one more feature related to the location of corneal light reflections may reduce the FPR, which was primarily due to children with pseudo-strabismus (the appearance of strabismus caused by a wide nasal bridge or skin folds on the nasal side of the eyes).
Keywords: strabismus, deep neural networks, face detection, facial landmarks, face alignment, segmentation, VGG 16, mask R-CNN, pupil coordinates, angle deviation, horizontal and vertical deviation
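The stage-2 geometry can be sketched in a few lines: given an estimated pupil center and the eye-corner landmarks (as produced by the Mask R-CNN and landmark detector described above), measure how far and in which direction the pupil deviates from the eye center. The coordinates below are invented pixel values, and the helper function is hypothetical, not the paper's implementation.

```python
# Pupil-deviation distance and angle from eye-corner landmarks (illustrative).
import math

def deviation(pupil, inner_corner, outer_corner):
    """Return (distance_px, angle_deg) of the pupil relative to the midpoint
    of the eye corners; the angle is measured from the horizontal axis, so
    angles near +/-90 degrees suggest vertical deviation."""
    eye_cx = (inner_corner[0] + outer_corner[0]) / 2
    eye_cy = (inner_corner[1] + outer_corner[1]) / 2
    dx, dy = pupil[0] - eye_cx, pupil[1] - eye_cy
    return math.hypot(dx, dy), math.degrees(math.atan2(dy, dx))

# Hypothetical landmarks for the left and right eyes (pixel coordinates).
left = deviation(pupil=(112, 84), inner_corner=(96, 85), outer_corner=(130, 83))
right = deviation(pupil=(196, 90), inner_corner=(180, 85), outer_corner=(214, 83))
print(f"left eye:  offset={left[0]:.1f}px at {left[1]:.1f} deg")
print(f"right eye: offset={right[0]:.1f}px at {right[1]:.1f} deg")
```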
Procedia PDF Downloads 100
7959 Nano-Sized Iron Oxides/ZnMe Layered Double Hydroxides as Highly Efficient Fenton-Like Catalysts for Degrading Specific Pharmaceutical Agents
Authors: Marius Sebastian Secula, Mihaela Darie, Gabriela Carja
Abstract:
Persistent organic pollutants discharged by various industries or urban regions into aquatic ecosystems represent a serious threat to fauna and human health. Endocrine-disrupting compounds are known to have toxic effects even at very low concentrations. The anti-inflammatory agent ibuprofen is an endocrine-disrupting compound and is taken as the model pollutant in the present study. Using light energy to meet the latest requirements concerning wastewater discharge demands high-performance, robust photocatalysts. Much effort has been devoted to obtaining efficient photo-responsive materials. Among the promising photocatalysts, layered double hydroxides (LDHs) have attracted significant attention, especially due to their compositional flexibility, high surface area, and tailored redox features. This work presents Fe(II) self-supported on ZnMeLDHs (Me = Al3+, Fe3+) as novel efficient photocatalysts for Fenton-like catalysis. The co-precipitation method was used to prepare ZnAlLDH, ZnFeAlLDH, and ZnCrLDH (Zn2+/Me3+ = 2 molar ratio). Fe(II) was self-supported on the LDH matrices by the reconstruction method, at two different weight concentrations. X-ray diffraction (XRD), thermogravimetric analysis (TG/DTG), Fourier-transform infrared spectroscopy (FTIR), and transmission electron microscopy (TEM) were used to investigate the structure, texture, and micromorphology of the catalysts. The Fe(II)/ZnMeLDH nano-hybrids were tested for the degradation of a model pharmaceutical agent, the anti-inflammatory drug ibuprofen, by photocatalysis and by photo-Fenton catalysis. The results indicate that embedding Fe(II) into ZnFeAlLDH and ZnCrLDH leads to a slight enhancement of ibuprofen degradation under light irradiation, whereas in the case of ZnAlLDH the degradation is relatively low. A remarkable enhancement of ibuprofen degradation was found for Fe(II)/ZnMeLDHs in the photo-Fenton process. Acknowledgements: This work was supported by a grant of the Romanian National Authority for Scientific Research and Innovation, CNCS - UEFISCDI, project number PN-II-RU-TE-2014-4-0405.
Keywords: layered double hydroxide, heterogeneous Fenton, micropollutant, photocatalysis
Procedia PDF Downloads 299