Search results for: complex event processing
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9634

574 Effects of Radiation on Mixed Convection in Power Law Fluids along Vertical Wedge Embedded in a Saturated Porous Medium under Prescribed Surface Heat Flux Condition

Authors: Qaisar Ali, Waqar A. Khan, Shafiq R. Qureshi

Abstract:

Heat transfer in power law fluids across cylindrical surfaces has numerous engineering applications. These applications span areas such as underwater pollution, biomedical engineering, filtration systems, chemical, petroleum, polymer and food processing, recovery of geothermal energy, crude oil extraction, pharmaceuticals, and thermal energy storage. The volume of research on combined heat transfer and fluid flow across porous media under diverse conditions has increased considerably over the last few decades. Most non-Newtonian fluids of practical interest are highly viscous and are therefore often processed in the laminar flow regime. Several studies have investigated the effects of free and mixed convection in Newtonian fluids along vertical and horizontal cylinders embedded in a saturated porous medium, whereas very few analyses have been performed on power law fluids along a wedge. In this study, the boundary layer under radiation-mixed convection in power law fluids along a vertical wedge in a porous medium has been investigated using an implicit finite difference method (the Keller box method). Steady, 2-D laminar flow has been considered under a prescribed surface heat flux condition. The Darcy, Boussinesq, and Rosseland approximations are assumed to be valid. Neglecting viscous dissipation effects and the radiative heat flux in the flow direction, the boundary layer equations governing mixed convection flow over a vertical wedge are transformed into dimensionless form. A single mathematical model represents the cases of a vertical wedge, cone, and plate through a geometry parameter. Both similar and non-similar solutions have been obtained, and results for the non-similar case have been presented and plotted. The effects of the radiation parameter, variable heat flux parameter, wedge angle parameter 'm', and mixed convection parameter have been studied for both Newtonian and non-Newtonian fluids. The results were also compared with the available data for heat transfer over the prescribed range of parameters and found to be in good agreement. Results for the dimensionless local Nusselt number and the temperature and velocity fields have also been presented for both Newtonian and non-Newtonian fluids. Analysis of the data revealed that as the radiation parameter or wedge angle increases, the Nusselt number decreases, whereas it increases with the heat flux parameter at a given value of the mixed convection parameter. It is also observed that as viscosity increases, the skin friction coefficient increases, which tends to reduce the velocity. Moreover, pseudoplastic fluids are more heat conductive than Newtonian fluids, which are in turn more conductive than dilatant fluids. All fluids behave identically in the pure forced convection domain.
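As a lightweight illustration of how boundary-layer similarity equations of this kind are solved numerically, the sketch below integrates the classic Blasius equation f''' + 0.5·f·f'' = 0 with SciPy's collocation BVP solver. This is a stand-in for the authors' Keller box implementation, not a reproduction of their wedge model, and the domain truncation at η = 10 is an assumption.

```python
# Minimal sketch: a boundary-layer similarity ODE solved as a two-point BVP.
# The Blasius problem stands in for the wedge equations treated in the paper.
import numpy as np
from scipy.integrate import solve_bvp

def rhs(eta, y):
    # y[0] = f, y[1] = f', y[2] = f''
    return np.vstack([y[1], y[2], -0.5 * y[0] * y[2]])

def bc(ya, yb):
    # f(0) = 0, f'(0) = 0, f'(inf) = 1
    return np.array([ya[0], ya[1], yb[1] - 1.0])

eta = np.linspace(0.0, 10.0, 100)   # "infinity" truncated at eta = 10
y_guess = np.zeros((3, eta.size))
y_guess[1] = eta / eta[-1]          # crude initial guess for f'
sol = solve_bvp(rhs, bc, eta, y_guess)
print(f"f''(0) = {sol.y[2, 0]:.5f}")  # classical value is about 0.33206
```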

Keywords: porous medium, power law fluids, surface heat flux, vertical wedge

Procedia PDF Downloads 311
573 Comparison of Sediment Rating Curve and Artificial Neural Network in Simulation of Suspended Sediment Load

Authors: Ahmad Saadiq, Neeraj Sahu

Abstract:

Sediment, which comprises solid particles of mineral and organic material, is transported by water. In river systems, the amount of sediment transported is controlled by both the transport capacity of the flow and the supply of sediment. The transport of sediment in rivers is important with respect to pollution, channel navigability, reservoir ageing, hydroelectric equipment longevity, fish habitat, river aesthetics, and scientific interest. The sediment load transported in a river is a very complex hydrological phenomenon. Hence, sediment transport has attracted the attention of engineers from various perspectives, and different methods have been used for its estimation, with several empirical equations proposed by experts. Although the results of these methods differ considerably from each other and from experimental observations, these equations can still be used to estimate sediment load, since direct sediment measurement has its own limitations. In the present study, two black box models, an SRC (sediment rating curve) and an ANN (artificial neural network), are used in the simulation of the suspended sediment load. The study is carried out for the Seonath sub-basin. Seonath is the biggest tributary of the Mahanadi river, and it carries a vast amount of sediment. The data are collected for the Jondhra hydrological observation station from India-WRIS (Water Resources Information System) and IMD (Indian Meteorological Department). These data include discharge, sediment concentration, and rainfall for 10 years. In this study, sediment load is estimated from the input parameters (discharge, rainfall, and past sediment) in various combinations of simulations. The sediment rating curve uses water discharge to estimate sediment concentration, which is then converted to sediment load. Likewise, for the application of these data in the ANN, they are normalised first and then fed in various combinations to yield the sediment load. RMSE (root mean square error) and R² (coefficient of determination) between the observed load and the estimated load are used as evaluation criteria. For an ideal model, RMSE is zero and R² is 1. However, as the models used in this study are black box models, they do not carry an exact representation of the factors which cause sedimentation. Hence, the model which gives the lowest RMSE and highest R² is the best model in this study. The lowest values of RMSE (based on normalised data) for the sediment rating curve, feed forward back propagation, cascade forward back propagation, and neural network fitting are 0.043425, 0.00679781, 0.0050089, and 0.0043727, respectively. The corresponding values of R² are 0.8258, 0.9941, 0.9968, and 0.9976. This implies that the neural network fitting model is superior to the other models used in this study. However, a drawback of neural network fitting is that it produces a few negative estimates, which is not at all tolerable in the estimation of sediment load, and hence this model cannot be crowned the best model in this study. Cascade forward back propagation produces results very close to those of the neural network fitting model, and hence this model is the best model based on the present study.
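A minimal sketch of the two evaluation ideas used above: fitting a power-law sediment rating curve Qs = a·Q^b by least squares in log space, then scoring estimates with RMSE and R². The data below are synthetic placeholders, not the Jondhra station records.

```python
# Minimal sketch: sediment rating curve fit plus RMSE/R^2 scoring.
import numpy as np

def fit_rating_curve(Q, Qs):
    # least squares in log space: log(Qs) = log(a) + b * log(Q)
    b, log_a = np.polyfit(np.log(Q), np.log(Qs), 1)
    return np.exp(log_a), b

def rmse(obs, est):
    return np.sqrt(np.mean((obs - est) ** 2))

def r2(obs, est):
    ss_res = np.sum((obs - est) ** 2)
    ss_tot = np.sum((obs - obs.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

rng = np.random.default_rng(0)
Q = rng.uniform(50, 500, 200)                    # discharge (synthetic)
Qs = 0.02 * Q**1.8 * rng.lognormal(0, 0.2, 200)  # sediment load (synthetic)
a, b = fit_rating_curve(Q, Qs)
est = a * Q**b
print(f"a={a:.4f}  b={b:.2f}  RMSE={rmse(Qs, est):.1f}  R2={r2(Qs, est):.3f}")
```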

Keywords: artificial neural network, root mean square error, sediment, sediment rating curve

Procedia PDF Downloads 323
572 Methodology for Risk Assessment of Nitrosamine Drug Substance Related Impurities in Glipizide Antidiabetic Formulations

Authors: Ravisinh Solanki, Ravi Patel, Chhaganbhai Patel

Abstract:

Purpose: The purpose of this study is to develop a methodology for the risk assessment and evaluation of nitrosamine impurities in Glipizide antidiabetic formulations. Nitroso compounds, including nitrosamines, have emerged as significant concerns in drug products, as highlighted by the ICH M7 guidelines. This study aims to identify known and potential sources of nitrosamine impurities that may contaminate Glipizide formulations and assess their presence. By determining observed or predicted levels of these impurities and comparing them with regulatory guidance, this research will contribute to ensuring the safety and quality of combination antidiabetic drug products on the market. Factors contributing to the presence of genotoxic nitrosamine contaminants in glipizide medications, such as secondary and tertiary amines and nitroso group-complex forming molecules, will be investigated. Additionally, conditions necessary for nitrosamine formation, including the presence of nitrosating agents and acidic environments, will be examined to improve understanding and mitigation strategies. Method: The methodology involves the N-Nitroso Acid Precursor (NAP) test, as recommended by the WHO in 1978 and detailed in the 1980 International Agency for Research on Cancer monograph. Individual glass vials containing quantities equivalent to 10 mM Glipizide are prepared. These compounds are dissolved in an acidic environment and supplemented with 40 mM NaNO2. The resulting solutions are maintained at 37°C for 4 hours. For the analysis of the samples, an HPLC method is employed for fit-for-purpose separation. LC resolution is achieved using a step gradient on an Agilent Eclipse Plus C18 column (4.6 × 100 mm, 3.5 µm). Mobile phases A and B consist of 0.1% v/v formic acid in water and acetonitrile, respectively, following a gradient program. The flow rate is set at 0.6 mL/min, and the column compartment temperature is maintained at 35°C. Detection is performed using a PDA detector within the wavelength range of 190-400 nm. To determine the exact mass of the formed nitrosamine drug substance related impurities (NDSRIs), the HPLC method is transferred to LC-TQ-MS/MS with the same mobile phase composition and gradient program. The injection volume is set at 5 µL, and MS analysis is conducted in electrospray ionization (ESI) mode within the mass range of 100-1000 Da. Results: The NAP test samples were prepared according to the protocol and analyzed using HPLC and LC-TQ-MS/MS to identify possible NDSRIs generated in different formulations of glipizide. The NAP test was found to generate various NDSRIs. This finding, not previously reported, revealed contamination of Glipizide. These NDSRIs are categorized based on predicted carcinogenic potency, and acceptable intake limits in medicines are recommended. The analytical method was found to be specific and reproducible.

Keywords: NDSRI, nitrosamine impurities, antidiabetic, glipizide, LC-MS/MS

Procedia PDF Downloads 27
571 Ecological Relationships Between Material, Colonizing Organisms, and Resulting Performances

Authors: Chris Thurlbourne

Abstract:

Due to the continual demand for material to build with, and the limited environmental credentials of 'normal' building materials, there is a need to look at new and reconditioned material types, both biogenic and non-biogenic, and a field of research that accompanies this. This research focuses on biogenic and non-biogenic material engineering and the impact of our environment on new and reconditioned material types. In the building industry and all the industries involved in constructing our built environment, building materials can be broadly categorized into two types, biogenic and non-biogenic, and both play significant roles in shaping our built environment. Regardless of their properties, all material types originate from our earth, though many are modified through processing to provide resistance to 'forces of nature', be it rain, wind, sun, gravity, or whatever the local environmental conditions throw at us. Modifications are made to offer benefits in endurance, resistance, malleability in handling (building with), and ergonomic value, in all types of building material. We assume control of all building materials through rigorous quality control specifications and regulations to ensure materials perform under specific constraints. Yet materials confront an external environment that is not controlled, with live forces undetermined, to which materials naturally act and react through weathering, patination, and discoloring, promoting natural chemical reactions such as rusting. The purpose of the paper is to present recent research that explores the after-life of specific new and reconditioned biogenic and non-biogenic material types, and how understanding materials' natural processes of transformation when exposed to the external climate can inform initial design decisions. With qualities received in a transient and contingent manner, ecological relationships between material, colonizing organisms, and resulting performances invite opportunities for new design explorations for the benefit of both the needs of human society and the needs of our natural environment. The research designs for the benefit of both, engaging in both biogenic and non-biogenic material engineering whilst embracing the continual demand for colonization, human and environmental, and the aptitude of a material to be colonized by one or several groups of living organisms without necessarily undergoing any severe deterioration, but embracing weathering, patination, and discoloring, and at the same time establishing new habitat. The research follows iterative prototyping processes in which knowledge has been accumulated via explorations of specific material performances, from laboratory tests to construction mock-ups, focusing on the architectural qualities embedded in the control of production techniques and facilitating longer-term patinas of material surfaces to extend the aesthetic beyond common judgments. Experiments are therefore focused on how inherent material qualities drive a design brief toward specific investigations, exploring aesthetics induced through production, patinas, and colonization obtained over time while exposed to and interacting with external climate conditions.

Keywords: biogenic and non-biogenic, natural processes of transformation, colonization, patina

Procedia PDF Downloads 86
570 Japanese and European Legal Frameworks on Data Protection and Cybersecurity: Asymmetries from a Comparative Perspective

Authors: S. Fantin

Abstract:

This study is the result of legal research on cybersecurity and data protection within the EUNITY (Cybersecurity and Privacy Dialogue between Europe and Japan) project, aimed at fostering the dialogue between the European Union and Japan. Based on the research undertaken therein, the author offers an outline of the main asymmetries in the laws governing these fields in the two regions. The research is a comparative analysis of the two legal frameworks, taking into account specific provisions, ratio legis, and policy initiatives. Recent doctrine was taken into account, too, as well as empirical interviews with EU and Japanese stakeholders and project partners. With respect to the protection of personal data, the European Union has recently reformed its legal framework with a package which includes a regulation (the General Data Protection Regulation) and a directive (Directive (EU) 2016/680 on personal data processing in the law enforcement domain). In turn, the Japanese law under scrutiny for this study has been the Act on the Protection of Personal Information. Based on a comparative analysis, some asymmetries arise. The main ones refer to the definition of personal information and the scope of the two frameworks. Furthermore, the rights of data subjects are differently articulated in the two regions, while the nature of sanctions follows two opposite approaches. Regarding the cybersecurity framework, the situation looks similarly misaligned. Japan's main text of reference is the Basic Act on Cybersecurity, while the European Union has a more fragmented legal structure (to name a few instruments, the Network and Information Security Directive, the Critical Infrastructure Directive, and the Directive on Attacks against Information Systems). On a relevant note, unlike the more industry-oriented European approach, the concept of cyber hygiene is neatly embedded in the Japanese legal framework, with a number of provisions that alleviate operators' liability by turning such a burden into a set of recommendations to be primarily observed by citizens. The reasons to fill such normative gaps are mostly grounded on three bases. Firstly, the cross-border nature of cybercrime requires considering both the magnitude of the issue and its regulatory stance globally. Secondly, empirical findings from the EUNITY project showed how recent data breaches and cyber-attacks had shared implications for Europe and Japan. Thirdly, the geopolitical context is currently moving in the direction of bringing the two regions to significant agreements from a trade standpoint, but also from a data protection perspective (with the imminent signature by both parties of a so-called 'Adequacy Decision'). The research conducted in this study reveals two asymmetric legal frameworks on cybersecurity and data protection. In view of the future challenges presented by the strengthening of the collaboration between the two regions and the transnational fashion of cybercrime, it is urged that solutions be found to fill in such gaps, in order to allow the European Union and Japan to wisely strengthen their partnership.

Keywords: cybersecurity, data protection, European Union, Japan

Procedia PDF Downloads 121
569 Generic Early Warning Signals for Program Student Withdrawals: A Complexity Perspective Based on Critical Transitions and Fractals

Authors: Sami Houry

Abstract:

Complex systems exhibit universal characteristics as they near a tipping point. Among them are common generic early warning signals which precede critical transitions. These signals include: critical slowing down, in which the rate of recovery from perturbations decreases over time; an increase in the variance of the state variable; an increase in the skewness of the state variable; an increase in the autocorrelations of the state variable; flickering between different states; and an increase in spatial correlations over time. The presence of the signals has management implications, as the identification of the signals near the tipping point could allow management to identify intervention points. Despite the applications of generic early warning signals in various scientific fields, such as fisheries, ecology, and finance, a review of the literature did not identify any applications that address the program student withdrawal problem at undergraduate distance universities. This area could benefit from the application of generic early warning signals, as the program withdrawal rate among distance students is higher than the program withdrawal rate at face-to-face conventional universities. This research assessed the generic early warning signals through an intensive case study of undergraduate program student withdrawal at a Canadian distance university. The university is non-cohort based due to its system of continuous course enrollment, where students can enroll in a course at the beginning of every month. The assessment of the signals was achieved through the comparison of the incidence of generic early warning signals among students who withdrew or simply became inactive in their undergraduate program of study (the true positives) to the incidence of the signals among graduates (the false positives). This was achieved through significance testing. Research findings showed support for the signal pertaining to the rise in flickering, represented by an increase in the student's non-pass rates prior to withdrawing from a program; moderate support for the signal of critical slowing down, reflected in the increase in the time a student spends in a course; and moderate support for the signals of increased autocorrelation and increased variance in the grade variable. The findings did not support the signal of increased skewness of the grade variable. The research also proposes a new signal based on the fractal-like characteristic of student behavior. The research further sought to extend knowledge by investigating whether the emergence of a program withdrawal status is self-similar or fractal-like at multiple levels of observation, specifically the program level and the course level; in other words, whether the act of withdrawal at the program level is also present at the course level. The findings moderately supported self-similarity as a potential signal. Overall, the assessment suggests that the signals, with the exception of the increase in skewness, could be utilized as a predictive management tool, with the fractal-like characteristic of withdrawal potentially added as one more signal in addressing the student program withdrawal problem.
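The statistical indicators named above are straightforward to compute on a rolling window. The sketch below is a hedged illustration rather than the study's actual procedure: it computes variance, skewness, and lag-1 autocorrelation over a sliding window of a synthetic grade series, and the window length of 20 is an assumption.

```python
# Minimal sketch: rolling-window early warning indicators for a state variable.
import numpy as np
from scipy.stats import skew

def ews_indicators(x, window=20):
    var, sk, ac1 = [], [], []
    for i in range(len(x) - window + 1):
        w = x[i:i + window]
        var.append(np.var(w))
        sk.append(skew(w))
        ac1.append(np.corrcoef(w[:-1], w[1:])[0, 1])  # lag-1 autocorrelation
    return np.array(var), np.array(sk), np.array(ac1)

# synthetic grade series whose variance rises toward the end,
# the kind of pattern read as an early warning of a transition
rng = np.random.default_rng(42)
x = np.concatenate([rng.normal(75, 5, 60), rng.normal(70, 12, 40)])
v, s, a = ews_indicators(x)
print(f"variance, start vs end of series: {v[0]:.1f} vs {v[-1]:.1f}")
```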

Keywords: critical transitions, fractals, generic early warning signals, program student withdrawal

Procedia PDF Downloads 184
568 Artificial Neural Network and Satellite Derived Chlorophyll Indices for Estimation of Wheat Chlorophyll Content under Rainfed Condition

Authors: Muhammad Naveed Tahir, Wang Yingkuan, Huang Wenjiang, Raheel Osman

Abstract:

Numerous models are used in prediction and decision-making processes, but most of them are linear, and linear models reach their limitations with the non-linearity of data in the natural environment; accurate estimation is therefore difficult. Artificial Neural Networks (ANNs) have found extensive acceptance in modeling the complex real world in non-linear environments, as ANNs have more general and flexible functional forms than traditional statistical methods. The link between information technology and agriculture will become firmer in the near future. Monitoring crop biophysical properties non-destructively can provide a rapid and accurate understanding of crop response to various environmental influences. Crop chlorophyll content is an important indicator of crop health and therefore of crop yield. In recent years, remote sensing has been accepted as a robust tool for site-specific management, detecting crop parameters at both local and large scales. The present research combined an ANN model with satellite-derived chlorophyll indices from LANDSAT 8 imagery for real-time wheat chlorophyll estimation. Cloud-free LANDSAT 8 scenes were acquired (Feb-March 2016-17) at the same time a ground-truthing campaign was performed for chlorophyll estimation using a SPAD-502. Different vegetation indices were derived from the LANDSAT 8 imagery using ERDAS Imagine (v.2014) software for chlorophyll determination. The vegetation indices included the Normalized Difference Vegetation Index (NDVI), Green Normalized Difference Vegetation Index (GNDVI), Chlorophyll Absorption Ratio Index (CARI), Modified Chlorophyll Absorption Ratio Index (MCARI), and Transformed Chlorophyll Absorption Ratio Index (TCARI). For ANN modeling, MATLAB and SPSS (ANN) tools were used. The Multilayer Perceptron (MLP) in MATLAB provided very satisfactory results. For the MLP, 61.7% of the data were used for training, 28.3% for validation, and the remaining 10% to evaluate and validate the ANN model results. For error evaluation, the sum of squares error and relative error were used. The ANN model summary showed a sum of squares error of 10.786 and an average overall relative error of 0.099. MCARI and NDVI were revealed to be the more sensitive indices for assessing wheat chlorophyll content, with the highest coefficients of determination, R² = 0.93 and 0.90, respectively. The results suggest that the use of high spatial resolution satellite imagery for the retrieval of crop chlorophyll content via an ANN model provides an accurate, reliable assessment of crop health status at a larger scale, which can help in managing crop nutrition requirements in real time.
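As a minimal, hedged illustration of two of the indices named above, the sketch below computes NDVI and GNDVI from LANDSAT 8 surface-reflectance bands (B5 = NIR, B4 = red, B3 = green). The reflectance values are made-up placeholders; index values like these would form the input features of the MLP.

```python
# Minimal sketch: NDVI and GNDVI from LANDSAT 8 band reflectances.
import numpy as np

def ndvi(nir, red):
    return (nir - red) / (nir + red)

def gndvi(nir, green):
    return (nir - green) / (nir + green)

# placeholder per-pixel reflectances (B5, B4, B3)
nir   = np.array([0.42, 0.55, 0.61])
red   = np.array([0.08, 0.06, 0.05])
green = np.array([0.10, 0.09, 0.08])
print(ndvi(nir, red))     # denser canopy -> values closer to 1
print(gndvi(nir, green))
```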

Keywords: ANN, chlorophyll content, chlorophyll indices, satellite images, wheat

Procedia PDF Downloads 146
567 Leadership Education for Law Enforcement Mid-Level Managers: The Mediating Role of Effectiveness of Training on Transformational and Authentic Leadership Traits

Authors: Kevin Baxter, Ron Grove, James Pitney, John Harrison, Ozlem Gumus

Abstract:

The purpose of this research is to determine the mediating effect of the effectiveness of the training provided by Northwestern University's School of Police Staff and Command (SPSC) on the ability of law enforcement mid-level managers to learn transformational and authentic leadership traits. This study will also evaluate the leadership styles of course graduates compared to non-attendees using a static group comparison design. The Louisiana State Police pay approximately $40,000 in salary, tuition, housing, and meals for each state police lieutenant attending the 10-week SPSC program. The school lists the development of transformational leaders as an element of increasing emphasis. Additionally, the SPSC curriculum addresses all four components of authentic leadership: self-awareness, transparency, ethical/moral perspective, and balanced processing. Upon returning to law enforcement in mid-level management roles, there are questions as to whether students revert to an 'autocratic' leadership style. Insufficient evidence exists to support claims for the effectiveness of management training or leadership development. Though it is widely recognized that transformational styles are beneficial to law enforcement, there is little evidence that police leadership styles are changing; police organizations continue to hold to a more transactional style (i.e., most senior police leaders remain autocrats). Additionally, research on the application of transformational, transactional, and laissez-faire leadership in police organizations is minimal. The population of the study is law enforcement mid-level managers from various states within the United States who completed leadership training presented by the SPSC. The sample will be composed of 66 active law enforcement mid-level managers (lieutenants and captains) who have graduated from the SPSC and 65 active law enforcement mid-level managers (lieutenants and captains) who have not attended. Participants will answer demographic questions, the Multifactor Leadership Questionnaire, the Authentic Leadership Questionnaire, and the Kirkpatrick Hybrid Evaluation Survey. Descriptive statistics, group comparison, one-way MANCOVA, and the Kirkpatrick Evaluation Model survey will be used to determine training effectiveness at the four levels of reaction, learning, behavior, and results. The independent variables are SPSC graduates (two groups: upper and lower) and non-SPSC attendees, and the dependent variables are transformational and authentic leadership scores. SPSC graduates are expected to have higher MLQ scores for transformational leadership traits and higher ALQ scores for authentic leadership traits than SPSC non-attendees. We also expect the graduates to rate the efficacy of SPSC leadership training as high. This study will validate (or invalidate) the benefits, costs, and resources required for leadership development by a nationally recognized police leadership program, and it will also help fill the gap in the literature between law enforcement professional development and transformational and authentic leadership styles.

Keywords: training effectiveness, transformational leadership, authentic leadership, law enforcement mid-level manager

Procedia PDF Downloads 104
566 Literacy Practices in Immigrant Detention Centers: A Conceptual Exploration of Access, Resistance, and Connection

Authors: Mikel W. Cole, Stephanie M. Madison, Adam Henze

Abstract:

Since 2004, the U.S. immigrant detention system has imprisoned more than five million people. President John F. Kennedy famously dubbed this country a 'Nation of Immigrants.' As with many of the nation's imagined ideals, the historical record shows its practices have never lived up to the tenets championed as defining qualities. The United Nations High Commission on Refugees argues that the educational needs of people in carceral spaces, especially those in immigrant detention centers, are urgent and supported by human rights guarantees. However, there is a genuine dearth of literacy research in immigrant detention centers, compounded by a general lack of access to these spaces. Denying access to literacy education in detention centers is one way the history of xenophobic immigration policy persists. In this conceptual exploration, first-hand accounts from detained individuals, their families, and the organizations that work with them have been shared with the authors. In this paper, the authors draw on experiences, reflections, and observations from serving as volunteers to develop a conceptual framework for the ways in which literacy practices are enacted in detention centers. Literacy is an essential tool for reaching those detained in immigrant detention centers and a critical tool for those being detained to access legal and other services. One of the most striking things about the detention center is learning how to behave there; gaining access for a visit is neither intuitive nor straightforward. The men experiencing detention are also at a disadvantage: the lack of access to their own documents is a profound barrier to navigating the complex immigration process. Literacy is much more than a skill for gathering knowledge or accessing carceral spaces; it is fundamentally a source of personal empowerment. Frequently, men find a way to reclaim their sense of dignity through work on their own terms by exchanging their literacy services for products or credits at the commissary. They write cards and letters for fellow detainees, read mail, and manage the exchange of information between the men and their families. In return, the men who have jobs trade items from the commissary or transfer money to the accounts of the men doing the reading, writing, and drawing. Literacy serves as a form of resistance by providing an outlet for productive work. At its core, literacy is the exchange of ideas between an author and a reader, and it is a primary source of human connection for individuals in carceral spaces. Father's Day and Christmas are particularly difficult at detention centers. Men weep when speaking about their children and the overwhelming hopelessness they feel at being separated from them. Yet card-writing campaigns have provided these men with words of encouragement, as thousands of hand-written cards make their way to the detention center. There are undoubtedly more literacies being practiced in the immigrant detention center where we work and at other detention centers across the country, and these categories are early conceptions with which we are still wrestling.

Keywords: detention centers, education, immigration, literacy

Procedia PDF Downloads 126
565 Various Shaped ZnO and ZnO/Graphene Oxide Nanocomposites and Their Use in Water Splitting Reaction

Authors: Sundaram Chandrasekaran, Seung Hyun Hur

Abstract:

Exploring strategies for oxygen vacancy engineering under mild conditions and understanding the relationship between dislocations and photoelectrochemical (PEC) cell performance are challenging issues for designing high-performance PEC devices. It is therefore very important to understand how oxygen vacancies (VO) or other defect states affect the performance of the photocatalyst in photoelectric transfer. So far, it has been found that defects in nano- or micro-crystals can have two possible effects on PEC performance. Firstly, an electron-hole pair produced at the interface of the photoelectrode and electrolyte can recombine at defect centers under illumination, thereby reducing PEC performance. On the other hand, the defects could lead to higher light absorption in the longer wavelength region and may act as energy centers for the water splitting reaction, which can improve PEC performance. Although the dislocation-driven growth of ZnO has been verified by full density functional theory (DFT) calculations and local density approximation (LDA) calculations, further studies are required to correlate ZnO structures with PEC performance. Exploring hybrid structures composed of graphene oxide (GO) and ZnO nanostructures offers not only a vision of how complex structures form from simple starting materials but also tools to improve PEC performance by understanding the underlying mechanisms of their mutual interactions. As there are few studies of ZnO growth with other materials, and the growth mechanism in those cases has not been clearly explored yet, it is very important to understand the fundamental growth process of nanomaterials with the specific materials, so that rational and controllable syntheses of efficient ZnO-based hybrid materials can be designed to prepare nanostructures exhibiting significant PEC performance. Herein, we fabricated various ZnO nanostructures such as hollow spheres, bucky bowls, nanorods, and triangles, investigated their pH-dependent growth mechanism, and correlated the PEC performance with them. In particular, the origin of well-controlled dislocation-driven growth and the transformation mechanism of ZnO nanorods to triangles on the GO surface are discussed in detail. Surprisingly, the addition of GO during the synthesis process not only tunes the morphology of the ZnO nanocrystals but also creates more oxygen vacancies (oxygen defects) in the ZnO lattice, which suggests that the oxygen vacancies are created by a redox reaction between GO and ZnO, in which surface oxygen is extracted from the ZnO surface by the functional groups of GO. On the basis of our experimental and theoretical analysis, the detailed mechanism for the formation of specific structural shapes and oxygen vacancies via dislocations, and its impact on PEC performance, is explored. In water splitting performance, the maximum photocurrent density of the GO-ZnO triangles was 1.517 mA/cm² (under UV light, ~360 nm) vs. RHE, with a high incident photon-to-current conversion efficiency (IPCE) of 10.41%, which is the highest among all samples fabricated in this study and also one of the highest IPCE values reported so far for a GO-ZnO triangular-shaped photocatalyst.
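For reference, IPCE figures like the 10.41% quoted above are conventionally computed from the photocurrent density, wavelength, and incident light power. The sketch below encodes that standard relation; the power density passed in the example call is a hypothetical placeholder, not a value reported by the authors.

```python
# Minimal sketch: the standard IPCE relation,
#   IPCE(%) = 1240 * J / (lambda * P) * 100
# with J in mA/cm^2, lambda in nm, and P in mW/cm^2.
def ipce_percent(j_ma_cm2: float, wavelength_nm: float, p_mw_cm2: float) -> float:
    return 1240.0 * j_ma_cm2 / (wavelength_nm * p_mw_cm2) * 100.0

# hypothetical illumination power density of 50 mW/cm^2
print(f"{ipce_percent(1.517, 360.0, 50.0):.2f} %")
```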

Keywords: dislocation driven growth, zinc oxide, graphene oxide, water splitting

Procedia PDF Downloads 294
564 Influence of Mandrel’s Surface on the Properties of Joints Produced by Magnetic Pulse Welding

Authors: Ines Oliveira, Ana Reis

Abstract:

Magnetic Pulse Welding (MPW) is a cold solid-state welding process, accomplished by the electromagnetically driven, high-speed, low-angle impact between two metallic surfaces. It has the same working principle as Explosive Welding (EXW), i.e., it is based on the collision of two parts at high impact speed, in this case propelled by electromagnetic force. Under proper conditions, i.e., flyer velocity and collision point angle, a permanent metallurgical bond can be achieved between widely dissimilar metals. MPW has been considered a promising alternative to conventional welding processes and advantageous compared to other impact processes. Nevertheless, current MPW applications are mostly academic. Despite the existing knowledge, the lack of consensus regarding several aspects of the process calls for further investigation. Accordingly, the mechanical resistance, morphology, and structure of the weld interface in MPW of the Al/Cu dissimilar pair were investigated. The effects of process parameters, namely gap, standoff distance, and energy, were studied. It was shown that welding only takes place if the process parameters are within an optimal range. Additionally, the formation of intermetallic phases cannot be completely avoided in welds of the Al/Cu dissimilar pair by MPW. Depending on the process parameters, the intermetallic compounds can appear as a continuous layer or as small pockets. The thickness and composition of the intermetallic layer depend on the processing parameters. Different intermetallic phases can be identified, meaning that different temperature-time regimes occur during the process. It was also found that lower pulse energies are preferred. The relationship between energy increase and melting is possibly related to multiple sources of heating. Higher pulse energies are associated with higher induced currents in the part, meaning that more Joule heating is generated. In addition, more energy means higher flyer velocity; the air in the gap between the parts to be welded is expelled, and the aerodynamic drag (fluid friction), which is proportional to the square of the velocity, further contributes to the generation of heat. As the kinetic energy also increases with the square of the velocity, the dissipation of this energy through plastic work and jet generation also contributes to an increase in temperature. To reduce intermetallic phases, porosity, and melt pockets, pulse energy should be minimized. The bond formation is affected not only by the gap, standoff distance, and energy but also by the mandrel's surface conditions. No clear correlation was identified between surface roughness/scratch orientation and joint strength. Nevertheless, the appearance of the interface (thickness of the intermetallic layer, porosity, presence of macro/microcracks) is clearly affected by the surface topology. Welding was not established on oil-contaminated surfaces, meaning that the jet action is not enough to completely clean the surface.

Keywords: bonding mechanisms, impact welding, intermetallic compounds, magnetic pulse welding, wave formation

Procedia PDF Downloads 209
563 3D CFD Model of Hydrodynamics in Lowland Dam Reservoir in Poland

Authors: Aleksandra Zieminska-Stolarska, Ireneusz Zbicinski

Abstract:

Introduction: The objective of the present work was to develop and validate a 3D CFD numerical model for simulating flow through a 17-kilometer-long dam reservoir of complex bathymetry. In contrast to flowing waters, dam reservoirs were not emphasized in the early years of water quality modeling, as this issue was never the major focus of urban development. Starting in the 1970s, however, it was recognized that natural and man-made lakes are equally, if not more, important than estuaries and rivers from a recreational standpoint. The Sulejow Reservoir (Central Poland) was selected as the study area as representative of many lowland dam reservoirs and due to the availability of a large database of ecological, hydrological, and morphological parameters of the lake. Method: 3D 2-phase and 1-phase CFD models were analysed to determine the hydrodynamics of the Sulejow Reservoir. Development of a 3D, 2-phase CFD model of the flow requires the construction of a mesh with millions of elements and overcoming serious convergence problems. Relative to the 2-phase model, a 1-phase CFD model excludes from the simulations only the dynamics of waves, which should not significantly change the water flow pattern in the case of lowland dam reservoirs. In the 1-phase CFD model, the phases (water-air) are separated by a plate, which allows calculation of one phase (water) flow only. As the wind affects the flow velocity, to take the effect of the wind on the hydrodynamics into account in the 1-phase CFD model, the plate must move with speed and direction equal to the speed and direction of the upper water layer. To determine the velocity at which the plate moves on the water surface and interacts with the underlying layers of water, and to apply this value in the 1-phase CFD model, a 2D, 2-phase model was elaborated. Result: The model was verified on the basis of extensive flow measurements (StreamPro ADCP, USA). Excellent agreement (an average error of less than 10%) between computed and measured velocity profiles was found. As a result of this work, the following main conclusions can be presented: • The results indicate that the flow field in the Sulejow Reservoir is transient in nature, with swirl flows in the lower part of the lake. Recirculating zones, with sizes of up to half a kilometer, may increase water retention time in this region. • The results of the simulations confirm the pronounced effect of the wind on the development of the water circulation zones in the reservoir, which might affect the accumulation of nutrients in the epilimnion layer and result, e.g., in algal blooms. Conclusion: The resulting model is accurate, and the methodology developed in the frame of this work can be applied to all types of storage reservoir configurations, characteristics, and hydrodynamic conditions. Large recirculating zones in the lake, which increase water retention time and might affect the accumulation of nutrients, were detected. An accurate CFD model of hydrodynamics in a large water body could help in the development of water quality forecasts, especially in terms of eutrophication and water management of big water bodies.

Keywords: CFD, mathematical modelling, dam reservoirs, hydrodynamics

Procedia PDF Downloads 400
562 A Nonlinear Feature Selection Method for Hyperspectral Image Classification

Authors: Pei-Jyun Hsieh, Cheng-Hsuan Li, Bor-Chen Kuo

Abstract:

For hyperspectral image classification, feature reduction is an important pre-processing step for avoiding the Hughes phenomenon, given the difficulty of collecting training samples. Hence, many studies have developed feature selection methods, such as the F-score and HSIC (Hilbert-Schmidt Independence Criterion), to improve hyperspectral image classification. However, most of them only consider class separability in the original space, i.e., linear class separability. In this study, we propose a nonlinear class separability measure based on the kernel trick for selecting an appropriate feature subset. The proposed nonlinear class separability is formed by a generalized RBF kernel with a different bandwidth for each feature. Moreover, it considers both the within-class separability and the between-class separability. A genetic algorithm is applied to tune these bandwidths such that the within-class separability is smallest and the between-class separability is largest simultaneously. This indicates that the corresponding feature space is more suitable for classification and that the corresponding nonlinear classification boundary can separate classes very well. These optimal bandwidths also show the importance of the bands for hyperspectral image classification: the reciprocals of the bandwidths can be viewed as band weights. The smaller the bandwidth, the larger the weight of the band, and the more important it is for classification. Hence, the descending order of the reciprocals of the bandwidths gives an order for selecting appropriate feature subsets. In the experiments, three hyperspectral image data sets, the Indian Pine Site data set, the PAVIA data set, and the Salinas A data set, were used to demonstrate that the feature subsets selected by the proposed nonlinear feature selection method are more appropriate for hyperspectral image classification. Only ten percent of the samples were randomly selected to form the training data set; all non-background samples were used to form the testing data set. A support vector machine was applied to classify these testing samples based on the selected feature subsets. In the experiments on the Indian Pine Site data set, with 220 bands, the highest accuracies obtained by applying the proposed method, the F-score, and HSIC are 0.8795, 0.8795, and 0.87404, respectively. However, the proposed method selects 158 features, whereas the F-score and HSIC select 168 and 217 features, respectively. Moreover, the classification accuracy increases dramatically using only the first few features: the classification accuracies with respect to feature subsets of 10, 20, 50, and 110 features are 0.69587, 0.7348, 0.79217, and 0.84164, respectively. Furthermore, using only half of the features selected by the proposed method (110 features), the corresponding classification accuracy (0.84168) approximates the highest classification accuracy, 0.8795. For the other two hyperspectral image data sets, the PAVIA data set and the Salinas A data set, similar results were obtained. These results illustrate that our proposed method can efficiently find feature subsets that improve hyperspectral image classification. One can first apply the proposed method to determine a suitable feature subset for a specific purpose; researchers can then use only the corresponding sensors to obtain the hyperspectral image and classify the samples. This can not only improve the classification performance but also reduce the cost of obtaining hyperspectral images.
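A hedged sketch of the kernel at the heart of this measure: a generalized RBF kernel with one bandwidth per feature (often called an ARD kernel), plus the band-ranking rule by reciprocal bandwidths described above. The bandwidth values here are illustrative stand-ins for the GA-tuned ones.

```python
# Minimal sketch: per-feature-bandwidth (generalized/ARD) RBF kernel
# and band ranking by reciprocal bandwidths.
import numpy as np

def ard_rbf_kernel(X, Y, sigmas):
    # k(x, y) = exp(-sum_d (x_d - y_d)^2 / (2 * sigma_d^2))
    diff = X[:, None, :] - Y[None, :, :]
    return np.exp(-0.5 * np.sum((diff / sigmas) ** 2, axis=2))

sigmas = np.array([0.5, 3.0, 1.2, 10.0])  # illustrative tuned bandwidths
X = np.random.default_rng(1).normal(size=(5, 4))
K = ard_rbf_kernel(X, X, sigmas)          # 5x5 Gram matrix

weights = 1.0 / sigmas                    # smaller bandwidth -> larger weight
ranking = np.argsort(weights)[::-1]       # bands in descending importance
print(ranking)                            # -> [0 2 1 3]
```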

Keywords: hyperspectral image classification, nonlinear feature selection, kernel trick, support vector machine

Procedia PDF Downloads 262
561 Optimal Control of Generators and Series Compensators within Multi-Space-Time Frame

Authors: Qian Chen, Lin Xu, Ping Ju, Zhuoran Li, Yiping Yu, Yuqing Jin

Abstract:

The operation of the power grid is becoming more complex and difficult due to its rapid development towards high voltage, long distances, and large capacity. For instance, many large-scale wind farms have been connected to the power grid, where their fluctuation and randomness are very likely to affect the stability and safety of the grid. Fortunately, much new equipment based on power electronics has been applied to the power grid, such as the UPFC (Unified Power Flow Controller), TCSC (Thyristor Controlled Series Compensation), and STATCOM (Static Synchronous Compensator), which can help to deal with the problem above. Compared with traditional equipment such as generators, new controllable devices, represented by FACTS (Flexible AC Transmission System) devices, have more accurate control ability and respond faster, but they are too expensive for wide use. Therefore, on the basis of a comparison and analysis of the control characteristics of traditional control equipment and new controllable equipment on both time and space scales, a coordinated optimizing control method within a multi-space-time frame is proposed in this paper to bring both kinds of advantages into play, improving both control ability and economic efficiency. Firstly, the coordination of different spatial scales of the grid is studied, focusing on the fluctuation caused by large-scale wind farms connected to the power grid. With generators, FSC (Fixed Series Compensation), and TCSC, the coordination method for a two-layer regional power grid versus its subgrid is studied in detail. The coordination control model is built, the corresponding scheme is proposed, and the conclusion is verified by simulation. The analysis shows that the interface power flow can be controlled by generators, and the specific line power flow between the two-layer regions can be adjusted by FSC and TCSC. The smaller the interface power flow adjusted by generators, the bigger the control margin of the TCSC; conversely, the total consumption of the generators is much higher. Secondly, the coordination of different time scales is studied to balance the total consumption of the generators against the control margin of the TCSC, so that the minimum control cost can be obtained. The coordination method for two-layer ultra-short-term correction versus AGC (Automatic Generation Control) is studied with generators, FSC, and TCSC. The optimal control model is formulated, a genetic algorithm is selected to solve the problem, and the conclusion is verified by simulation. Finally, the aforementioned method within the multi-time-space scale is analyzed with practical cases and simulated on the PSASP (Power System Analysis Software Package) platform. The correctness and effectiveness are verified by the simulation results. Moreover, this coordinated optimizing control method can contribute to a decrease in control cost and will provide a reference for subsequent studies in this field.

Keywords: FACTS, multi-space-time frame, optimal control, TCSC

Procedia PDF Downloads 265
560 Sustainable Living Where the Immaterial Matters

Authors: Maria Hadjisoteriou, Yiorgos Hadjichristou

Abstract:

This paper aims to explore and provoke a debate, through the work of the design studio 'living where the immaterial matters' of the architecture department of the University of Nicosia, on the role that 'immaterial matter' can play in enhancing innovative sustainable architecture, and to view cities as sustainable organisms that continually grow and alter. The blurring, juxtaposing binary of immaterial and matter, as the theoretical backbone of the Unit, is counterbalanced by the practicalities of the contested sites of the last divided capital, Nicosia, with its ambiguous green line, and the ghost city of Famagusta on the island of Cyprus. Jonathan Hill argues that the immaterial is as important to architecture as the material, concluding that 'Immaterial-Material' weaves the two together, so that they are in conjunction, not opposition. This understanding of the relationship of immaterial versus material sets the premises and the departure point of our argument, and speaks of new recipes for creating hybrid public space that can lead to the unpredictability of a complex, interactive, sustainable city. We placed human experience at the top of our hierarchy. We distinguish the notions of space and place, referring to Heidegger's 'Building Dwelling Thinking': a distinction between space and place, where spaces gain authority not from 'space' appreciated mathematically but from 'place' appreciated through human experience. Following the above, architecture and the city are seen as one organism. The notions of boundaries, porous borders, fluidity, mobility, and spaces of flows are the lenses of the investigation in the unit's methodology, leading to the notion of a new hybrid urban environment whose main constituent elements are in a relationship of flux. The material and immaterial flows of the town are seen as interrelated and interwoven with the material buildings and their immaterial contents, yielding new sustainable human-built environments. These premises consequently led to choices of controversial sites. An indisputably provoking site was the ghost town of Famagusta, where time froze back in 1974. Inspired by the fact that nature took over a literally dormant, decaying city, a sustainable rebirth was seen as an opportunity in which both nature and the built environment, material and immaterial, are interwoven in a new emergent urban environment. Similarly, we saw the dividing 'green line' of Nicosia completely failing to prevent the trespassing of images, sounds and whispers, smells and symbols that define the two prevailing cultures, and becoming instead a porous creative entity which tends to start reuniting instead of separating, generating sustainable cultures and built environments. The authors would like to contribute to the debate by introducing a question about a new recipe for cooking the built environment. Can we talk about a new 'urban recipe', 'cooking architecture and city', to deliver an ever-changing urban sustainable organism whose identity will mainly depend on the interrelationship of its immaterial and material constituents?

Keywords: blurring zones, porous borders, spaces of flow, urban recipe

Procedia PDF Downloads 420
559 A Column Generation Based Algorithm for Airline Cabin Crew Rostering Problem

Authors: Nan Xu

Abstract:

In airlines, the crew scheduling problem is usually decomposed into two stages: crew pairing and crew rostering. In the crew pairing stage, pairings are generated such that each flight is covered by exactly one pairing and the overall cost is minimized. In the crew rostering stage, the pairings generated in the crew pairing stage are combined with days off, training, and other breaks to create individual work schedules. This paper focuses on the cabin crew rostering problem, which is challenging due to its extremely large size and the complex working rules involved. In our approach, the rostering objective consists of two major components. The first is to minimize the number of unassigned pairings, and the second is to ensure fairness to crew members. There are two measures of fairness to crew members: the number of overnight duties and the total fly-hours over a given period. Pairings should be assigned to each crew member so that their actual overnight duties and fly-hours are as close to the expected average as possible. Deviations from the expected average are penalized in the objective function. Since several small deviations are preferred over one large deviation, the penalization is quadratic. Our model of the airline crew rostering problem is based on column generation, decomposing the problem into a master problem and subproblems. The master problem is modeled as a set partitioning problem in which exactly one roster is picked for each crew member such that the pairings are covered; the restricted linear master problem (RLMP) is considered. The subproblem tries to find columns with negative reduced costs and add them to the RLMP for the next iteration. When no column with negative reduced cost can be found, or a stopping criterion is met, the procedure ends. The subproblem generates feasible rosters for each crew member: a separate acyclic weighted graph is constructed for each crew member, and the subproblem is modeled as a resource-constrained shortest path problem in that graph, solved with a labeling algorithm. Since the penalization is quadratic, a method for handling the non-additive shortest path problem with a labeling algorithm is proposed, and the corresponding dominance condition is defined. The major contributions of our model are: 1) we propose a method to deal with the non-additive shortest path problem; 2) our algorithm allows some soft rules to be relaxed, which can improve the coverage rate; 3) multi-threading is used to improve the efficiency of the algorithm when generating Lines-of-Work for crew members. In summary, a column generation based algorithm for the airline cabin crew rostering problem is proposed whose objective is to assign a personalized roster to each crew member, minimizing the number of unassigned pairings while ensuring fairness to crew members. The algorithm proposed in this paper has been put into production at a major airline in China, and numerical experiments show that it performs well.
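The sketch below illustrates the core of the subproblem described above: label-setting on a small acyclic graph with one resource (fly-hours) and a cost/resource dominance rule. It is a simplified, additive version; the paper's non-additive dominance condition for the quadratic penalty is not reproduced here, and the graph, costs, and resource cap are made-up examples.

```python
# Minimal sketch: label-setting for a resource-constrained shortest path
# on an acyclic graph, with a simple (cost, resource) dominance rule.
from collections import defaultdict

edges = {                       # node -> [(successor, cost, fly_hours)]
    's': [('a', 2.0, 5), ('b', 1.0, 8)],
    'a': [('t', 1.5, 4)],
    'b': [('t', 0.5, 3)],
    't': [],
}
CAP = 10                        # maximum fly-hours along a path

def dominates(l1, l2):
    # l1 dominates l2 if it is no worse in both cost and resource use
    return l1[0] <= l2[0] and l1[1] <= l2[1]

labels = defaultdict(list)
labels['s'] = [(0.0, 0, ['s'])]
for node in ['s', 'a', 'b']:    # topological order of the DAG
    for cost, res, path in labels[node]:
        for nxt, c, r in edges[node]:
            if res + r > CAP:   # resource-infeasible extension
                continue
            new = (cost + c, res + r, path + [nxt])
            if any(dominates(old[:2], new[:2]) for old in labels[nxt]):
                continue        # an existing label is at least as good
            labels[nxt] = [old for old in labels[nxt]
                           if not dominates(new[:2], old[:2])]
            labels[nxt].append(new)

print(min(labels['t']))         # cheapest feasible path to the sink
```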

Keywords: aircrew rostering, aircrew scheduling, column generation, SPPRC

Procedia PDF Downloads 145
558 An in silico Approach for Exploring the Intercellular Communication in Cancer Cells

Authors: M. Cardenas-Garcia, P. P. Gonzalez-Perez

Abstract:

Intercellular communication is a necessary condition for cellular functions, and it allows a group of cells to survive as a population. Throughout this interaction, the cells work in a coordinated and collaborative way, which facilitates their survival. Cancerous cells take advantage of intercellular communication to preserve their malignancy, since through these physical unions they can send signals of malignancy. The Wnt/β-catenin signaling pathway plays an important role in the formation of intercellular communications and is also involved in a large number of cellular processes such as proliferation, differentiation, adhesion, cell survival, and cell death. The modeling and simulation of cellular signaling systems have found valuable support in a wide range of approaches, covering a spectrum that ranges from mathematical models (e.g., ordinary differential equations, statistical methods, and numerical methods) to computational models (e.g., process algebras for modeling behavior and variation in molecular systems). Based on these models, different simulation tools have been developed, from mathematical to computational ones. Regarding cellular and molecular processes in cancer, their study has also found valuable support in different simulation tools that, covering a spectrum as mentioned above, have allowed in silico experimentation on this phenomenon at the cellular and molecular level. In this work, we simulate and explore the complex interaction patterns of intercellular communication in cancer cells using Cellulat, a computational simulation tool developed by us and motivated by two key elements: 1) a biochemically inspired model of self-organizing coordination in tuple spaces, and 2) Gillespie's algorithm, a stochastic simulation algorithm typically used to mimic systems of chemical/biochemical reactions in an efficient and accurate way. The main idea behind the Cellulat simulation tool is to provide an in silico experimentation environment that complements and guides in vitro experimentation on intra- and intercellular signaling networks. Unlike most cell signaling simulation tools, such as E-Cell, BetaWB, and Cell Illustrator, which provide abstractions to model only intracellular behavior, Cellulat is appropriate for modeling both intracellular signaling and intercellular communication, providing the abstractions required to model, and as a result simulate, the interaction mechanisms that involve two or more cells, which is essential in the scenario discussed in this work. In this work we demonstrated the application of our computational simulation tool (Cellulat) to the modeling and simulation of intercellular communication between normal and cancerous cells and, in this way, proposed key molecules that may prevent malignant signals from reaching the cells that surround the tumor cells. In this manner, we could identify the significant role that the Wnt/β-catenin signaling pathway plays in cellular communication and, therefore, in the dissemination of cancer cells. We verified, using in silico experiments, how the inhibition of this signaling pathway prevents the cells surrounding a cancerous cell from being transformed.
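Gillespie's algorithm, the stochastic engine mentioned above, is compact enough to sketch in full. The example below simulates a toy birth-death process (X → X+1 at rate k1, X → X−1 at rate k2·X); the reaction network and rate constants are illustrative assumptions, not Cellulat's actual model.

```python
# Minimal sketch: Gillespie's stochastic simulation algorithm (SSA)
# for a birth-death process.
import numpy as np

def gillespie(x0, k1, k2, t_end, seed=0):
    rng = np.random.default_rng(seed)
    t, x = 0.0, x0
    times, states = [t], [x]
    while t < t_end:
        a1, a2 = k1, k2 * x                # reaction propensities
        a0 = a1 + a2
        if a0 == 0.0:
            break
        t += rng.exponential(1.0 / a0)     # exponential waiting time
        x += 1 if rng.uniform() < a1 / a0 else -1   # pick a reaction
        times.append(t)
        states.append(x)
    return np.array(times), np.array(states)

times, states = gillespie(x0=10, k1=5.0, k2=0.5, t_end=20.0)
print(states[-1])   # fluctuates around the stationary mean k1/k2 = 10
```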

Keywords: cancer cells, in silico approach, intercellular communication, key molecules, modeling and simulation

Procedia PDF Downloads 249
557 The Inverse Problem in the Process of Heat and Moisture Transfer in Multilayer Walling

Authors: Bolatbek Rysbaiuly, Nazerke Rysbayeva, Aigerim Rysbayeva

Abstract:

Relevance: Energy saving has been elevated to public policy in almost all developed countries, and one of the avenues for energy efficiency is improving and tightening design standards. In line with state standards, high demands are placed on the thermal protection of buildings. The constructive arrangement of wall layers should ensure normal operation, in which the moisture content of the construction materials does not exceed a certain level. Elevated moisture levels in walls can be regarded as a defective condition, as moisture significantly reduces the physical, mechanical and thermal properties of materials. The absence, at the design stage, of modeling of the processes occurring in the construction and of prediction of the behavior of structures during their service life leads to increased heat loss and premature aging of structures. Method: To solve this problem, mathematical modeling of heat and mass transfer in materials is widely used. The mathematical model takes into account the coupled heat and moisture transfer equations of the interconnected layers [1]. In winter, the thermal and hydraulic conductivity characteristics of the materials are nonlinear and depend on the temperature and moisture in the material. In this case, the experimental determination of the corresponding coefficients of a freezing or thawing material becomes much more difficult. Therefore, in this paper we propose an approximate method for calculating the thermal conductivity and moisture permeability characteristics of freezing or thawing material. Questions: The development of methods for solving the inverse problem of mathematical modeling allows us to answer questions closely related to the rational design of enclosures: where the condensation zone lies in the body of the multilayer enclosure; how and where to apply insulation rationally; which constructive measures are necessary to provide for the removal of moisture from the structure; what temperature and humidity conditions are required for the normal operation of the premises' enclosing structure; and what the longevity of the structure is in terms of the frost resistance of its component materials. Tasks: The proposed mathematical model solves the following problems: assessing the thermophysical condition of designed structures under different operating conditions and selecting appropriate material layers; calculating the temperature field in structurally complex multilayer structures; determining, from temperature and moisture measured at characteristic points, the thermal characteristics of the materials constituting the surveyed construction; significantly reducing laboratory testing time, eliminating the need for a climatic chamber and expensive instrumented experiments; and simulating real-life situations that arise in multilayer enclosing structures in connection with the freezing, thawing, drying and cooling of any layer of the building material.
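
To make the inverse-problem idea concrete, here is a minimal sketch that recovers a single thermal diffusivity by matching a 1-D explicit finite-difference forward model to a temperature measured at a characteristic point; the grid, the sensor position and the "true" coefficient are synthetic assumptions, not the paper's model, which additionally couples moisture transfer.

```python
import numpy as np

L, N, dt, steps = 0.2, 21, 1.0, 600      # wall thickness [m], nodes, dt [s], steps
x = np.linspace(0.0, L, N)
dx = x[1] - x[0]
SENSOR = 2                               # "characteristic point" near the cold face

def forward(alpha, T_in=20.0, T_out=-10.0):
    # Explicit 1-D finite-difference model of heat conduction through the wall.
    T = np.full(N, T_in)
    T[0] = T_out                         # outdoor face held below freezing
    lam = alpha * dt / dx**2             # scheme is stable for lam <= 0.5
    for _ in range(steps):
        T[1:-1] = T[1:-1] + lam * (T[2:] - 2.0 * T[1:-1] + T[:-2])
    return T

alpha_true = 4e-7                        # the "unknown" diffusivity [m^2/s]
measured = forward(alpha_true)[SENSOR]   # synthetic sensor reading

# One-parameter inverse problem: pick the diffusivity whose simulated
# sensor temperature best matches the measurement.
candidates = np.linspace(1e-7, 1e-6, 200)
alpha_hat = min(candidates, key=lambda a: (forward(a)[SENSOR] - measured) ** 2)
print(f"recovered diffusivity: {alpha_hat:.2e} m^2/s")
```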

Keywords: energy saving, inverse problem, heat transfer, multilayer walling

Procedia PDF Downloads 397
556 Detection, Analysis and Determination of the Origin of Copy Number Variants (CNVs) in Intellectual Disability/Developmental Delay (ID/DD) Patients and Autistic Spectrum Disorders (ASD) Patients by Molecular and Cytogenetic Methods

Authors: Pavlina Capkova, Josef Srovnal, Vera Becvarova, Marie Trkova, Zuzana Capkova, Andrea Stefekova, Vaclava Curtisova, Alena Santava, Sarka Vejvalkova, Katerina Adamova, Radek Vodicka

Abstract:

ASDs are heterogeneous and complex developmental diseases with a significant genetic background. Recurrent CNVs are known to be a frequent cause of ASD. However, these CNVs can have variable expressivity, which results in a spectrum of phenotypes from asymptomatic to ID/DD/ASD. ASD is associated with ID in ~75% of individuals. Various platforms are used to detect pathogenic mutations in the genomes of these patients. This study focuses on determining the frequency of pathogenic mutations in a group of ASD patients and a group of ID/DD patients using various strategies, along with a comparison of their detection rates. The possible role of the origin of these mutations in the aetiology of ASD was assessed. The study included 35 individuals with ASD and 68 individuals with ID/DD (64 males and 39 females in total), who underwent rigorous genetic, neurological and psychological examinations. Screening for pathogenic mutations involved karyotyping, screening for FMR1 mutations and for metabolic disorders, a targeted MLPA test with the probe mixes Telomeres 3 and 5, Microdeletion 1 and 2, Autism 1 and MRX, and a chromosomal microarray analysis (CMA) (Illumina or Affymetrix). Chromosomal aberrations were revealed in 7 individuals (1 in the ASD group) by karyotyping. FMR1 mutations were discovered in 3 individuals (1 in the ASD group). The detection rate of pathogenic mutations in ASD patients with a normal karyotype was 15.15% by MLPA and CMA. The frequencies of pathogenic mutations in ID/DD patients with a normal karyotype were 25.0% by MLPA and 35.0% by CMA. CNVs inherited from asymptomatic parents were more abundant than de novo changes in ASD patients (11.43% vs. 5.71%), in contrast to the ID/DD group, where de novo mutations prevailed over inherited ones (26.47% vs. 16.18%). ASD patients shared their mutations with their fathers more frequently than patients from the ID/DD group did (8.57% vs. 1.47%). Maternally inherited mutations predominated in the ID/DD group in comparison with the ASD group (14.7% vs. 2.86%). CNVs of unknown significance were found in 10 patients by CMA and in 3 patients by MLPA. Although the detection rate is highest when using CMA, recurrent CNVs can be easily detected by MLPA. CMA proved more efficient in the ID/DD group, where a larger spectrum of rare pathogenic CNVs was revealed. This study determined that maternally inherited highly penetrant mutations and de novo mutations more often resulted in ID/DD without ASD. The paternally inherited mutations could, however, be a source of the greater variability in the genome of ASD patients and contribute to the polygenic character of the inheritance of ASD. As the number of subjects in the group is limited, a larger cohort is needed to confirm this conclusion. Inherited CNVs have a role in the aetiology of ASD, possibly in combination with additional genetic factors, i.e., mutations elsewhere in the genome. The identification of these interactions constitutes a challenge for the future. Supported by MH CZ – DRO (FNOl, 00098892), IGA UP LF_2016_010, TACR TE02000058 and NPU LO1304.

Keywords: autistic spectrum disorders, copy number variant, chromosomal microarray, intellectual disability, karyotyping, MLPA, multiplex ligation-dependent probe amplification

Procedia PDF Downloads 348
555 A 4-Month Low-carb Nutrition Intervention Study Aimed to Demonstrate the Significance of Addressing Insulin Resistance in 2 Subjects with Type-2 Diabetes for Better Management

Authors: Shashikant Iyengar, Jasmeet Kaur, Anup Singh, Arun Kumar, Ira Sahay

Abstract:

Insulin resistance (IR) is a condition that occurs when cells in the body become less responsive to insulin, leading to higher levels of both insulin and glucose in the blood. This condition is linked to metabolic syndromes, including diabetes. It is crucial to address IR promptly after diagnosis to prevent the long-term complications associated with high insulin and high blood glucose. This four-month case study highlights the importance of treating the underlying condition to manage diabetes effectively. Insulin is essential for regulating blood sugar levels by facilitating the uptake of glucose into cells for energy or storage. In IR individuals, cells are less efficient at taking up glucose from the blood, resulting in elevated blood glucose levels. As a result of IR, beta cells produce more insulin to make up for the body's inability to use insulin effectively. This leads to high insulin levels, a condition known as hyperinsulinemia, which further impairs glucose metabolism and can contribute to various chronic diseases. In addition to regulating blood glucose, insulin has anti-catabolic effects, preventing the breakdown of molecules in the body: it inhibits glycogen breakdown in the liver, inhibits gluconeogenesis, and inhibits lipolysis. If a person is insulin-sensitive or metabolically healthy, an optimal level of insulin prevents fat cells from releasing fat and promotes the storage of glucose and fat in the body. Thus, optimal insulin levels are crucial for maintaining energy balance and play a key role in metabolic processes. During the four-month study, the researchers looked at the impact of a low-carb dietary (LCD) intervention on two male individuals (A and B) who had type-2 diabetes. Although neither of these individuals was obese, they were both slightly overweight and had abdominal fat deposits. Before the trial began, important markers such as fasting blood glucose (FBG), triglycerides (TG), high-density lipoprotein (HDL) cholesterol, and HbA1c were measured. These markers are essential in defining metabolic health; their individual values and variability are integral to deciphering it. The TG-to-HDL ratio is used as a surrogate marker for IR. This ratio correlates highly with the prevalence of metabolic syndrome and with IR itself, and it is a convenient measure because it can be calculated from a standard lipid profile and does not require more complex tests. In this four-month trial, an improvement in insulin sensitivity was observed through the TG/HDL ratio, which in turn improved fasting blood glucose levels and HbA1c. For subject A, HbA1c dropped from 13 to 6.28, and for subject B, it dropped from 9.4 to 5.7. During the trial, neither of the subjects was taking any diabetic medications. The significant improvements in their health markers, such as better glucose control, along with an increase in energy levels, demonstrate that incorporating LCD interventions can effectively manage diabetes.
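
As a concrete illustration of the surrogate marker mentioned above, the snippet below computes the TG/HDL ratio from a standard lipid panel; the values and the cut-off of 3.0 are a commonly cited rule of thumb used here for illustration, not data or thresholds from this study.

```python
# TG/HDL as a quick surrogate for insulin resistance, computed straight
# from a standard lipid panel (both inputs in mg/dL).
def tg_hdl_ratio(tg_mg_dl: float, hdl_mg_dl: float) -> float:
    return tg_mg_dl / hdl_mg_dl

# Illustrative before/after values, not the study's measurements.
for label, tg, hdl in [("baseline", 180, 38), ("after 4 months", 90, 48)]:
    r = tg_hdl_ratio(tg, hdl)
    flag = "(suggests IR)" if r > 3.0 else "(within common rule-of-thumb range)"
    print(f"{label}: TG/HDL = {r:.2f} {flag}")
```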

Keywords: metabolic disorder, insulin resistance, type-2 diabetes, low-carb nutrition

Procedia PDF Downloads 39
554 Removal of Heavy Metals by Ultrafiltration Assisted with Chitosan or Carboxy-Methyl Cellulose

Authors: Boukary Lam, Sebastien Deon, Patrick Fievet, Nadia Crini, Gregorio Crini

Abstract:

Treatment of heavy metal-contaminated industrial wastewater has become a major challenge over the last decades. Conventional processes for the treatment of metal-containing effluents do not always simultaneously satisfy both legislative and economic criteria. In this context, the coupling of processes can be a promising alternative to the conventional approaches used by industry. The polymer-assisted ultrafiltration (PAUF) process is one such coupling process. Its principle is based on a reaction step (e.g., complexation) between metal ions and a polymer, followed by a step in which the formed species are rejected by a UF membrane. Unlike free ions, which can cross the UF membrane due to their small size, the polymer/ion species, whose size is larger than the pore size, are rejected. The PAUF process is investigated in depth herein for the removal of nickel ions by adding chitosan or carboxymethyl cellulose (CMC). Experiments were conducted with synthetic solutions containing 1 to 100 ppm of nickel ions, with or without the presence of NaCl (0.05 to 0.2 M), and with an industrial discharge water (containing several metal ions) with and without polymer. Chitosan with a molecular weight of 1.8×10⁵ g mol⁻¹ and a degree of acetylation close to 15% was used. CMC with a degree of substitution of 0.7 and a molecular weight of 9×10⁵ g mol⁻¹ was employed. Filtration experiments were performed under cross-flow conditions with a filtration cell equipped with a polyamide thin-film composite flat-sheet membrane (3.5 kDa). Without the polymer addition step, it was found that nickel rejection decreases from 80 to 0% with increasing metal ion concentration and salt concentration. This behavior agrees qualitatively with the Donnan exclusion principle: the increase in electrolyte concentration screens the electrostatic interaction between the ions and the membrane fixed charge, which decreases their rejection. It was shown that the addition of a sufficient amount of polymer (greater than 10⁻² M of monomer units) can offset this decrease and allow good metal removal. However, the permeation flux was found to be somewhat reduced due to the increase in osmotic pressure and viscosity. It was also highlighted that an increase in pH (from 3 to 9) has a strong influence on removal performance: the higher the pH value, the better the removal performance. The two polymers have shown similar performance enhancement at natural pH. However, chitosan has proved more efficient under slightly basic conditions (above its pKa), whereas CMC has demonstrated very weak rejection performance when the pH is below its pKa. In terms of metal rejection, chitosan is thus probably the better option for basic or strongly acidic (pH < 4) conditions. Nevertheless, CMC should probably be preferred to chitosan under natural conditions (5 < pH < 8), since its impact on the permeation flux is less significant. Finally, ultrafiltration of an industrial discharge water has shown that the increase in metal ion rejection induced by the polymer addition is very low, due to competition between the various ions present in the complex mixture.
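
For reference, the observed rejection behind figures such as the 80 to 0% range above is computed from feed and permeate concentrations, as in the sketch below; the concentrations are illustrative, not measurements from the study.

```python
# Observed rejection from feed and permeate concentrations, the usual
# figure of merit when reporting metal removal in (PA)UF experiments.
def rejection(c_feed_ppm: float, c_permeate_ppm: float) -> float:
    return 1.0 - c_permeate_ppm / c_feed_ppm

# Illustrative numbers only: free Ni ions largely cross the membrane,
# while polymer/ion complexes are mostly retained.
print(f"without polymer: {rejection(10.0, 7.5):.0%}")
print(f"with chitosan:   {rejection(10.0, 0.8):.0%}")
```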

Keywords: carboxymethyl cellulose, chitosan, heavy metals, nickel ion, polymer-assisted ultrafiltration

Procedia PDF Downloads 161
553 Profiling of Bacterial Communities Present in Feces, Milk, and Blood of Lactating Cows Using 16S rRNA Metagenomic Sequencing

Authors: Khethiwe Mtshali, Zamantungwa T. H. Khumalo, Stanford Kwenda, Ismail Arshad, Oriel M. M. Thekisoe

Abstract:

Ecologically, the gut, mammary glands and bloodstream consist of distinct microbial communities of commensals, mutualists and pathogens, forming a complex ecosystem of niches. The by-products derived from these body sites, i.e. faeces, milk and blood, respectively, have many uses in rural communities, where they aid in the facilitation of day-to-day household activities and occasional rituals. Thus, although livestock rearing plays a vital role in sustaining the livelihoods of rural communities, it may serve as a potent reservoir of different pathogenic organisms that could have devastating health and economic implications. This study aimed to simultaneously explore the microbial profiles of corresponding faecal, milk and blood samples from lactating cows using 16S rRNA metagenomic sequencing. Bacterial communities were inferred through the Divisive Amplicon Denoising Algorithm 2 (DADA2) pipeline coupled with the SILVA database v138. All downstream analyses were performed in R v3.6.1. Alpha-diversity metrics showed significant differences between faeces and blood and between faeces and milk, but did not vary significantly between blood and milk (Kruskal-Wallis, P < 0.05). Beta-diversity metrics on Principal Coordinate Analysis (PCoA) and Non-Metric Dimensional Scaling (NMDS) clustered samples by type, suggesting that the microbial communities of the studied niches are significantly different (PERMANOVA, P < 0.05). A number of taxa were significantly differentially abundant (DA) between groups based on the Wald test implemented in the DESeq2 package (Padj < 0.01). The majority of the DA taxa were significantly enriched in faeces rather than in milk and blood, except for the genus Anaplasma, which was significantly enriched in blood and was, in turn, the most abundant taxon overall. A total of 30 phyla, 74 classes, 156 orders, 243 families and 408 genera were obtained from the overall analysis. The most abundant phyla across the three body sites were Firmicutes, Bacteroidota and Proteobacteria. A total of 58 genus-level taxa were detected simultaneously in all sample groups, while bacterial signatures of at least 8 of these occurred concurrently in corresponding faeces, milk and blood samples from the same group of animals constituting a pool. The important taxa identified in this study could be categorized into four potentially pathogenic clusters: i) arthropod-borne; ii) food-borne and zoonotic; iii) mastitogenic; and iv) metritic and abortigenic. This study provides insight into the microbial composition of bovine faeces, milk and blood and the extent of their overlap. It further highlights the potential risk of disease occurrence and transmission between the animals and the inhabitants of the sampled rural community, pertaining to their unsanitary practices associated with the use of cattle by-products.
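
As a small illustration of the alpha-diversity comparison described above, the sketch below computes the Shannon index on a toy taxon count table; the counts are invented for illustration, and the actual study used the DADA2/R workflow rather than this Python fragment.

```python
import math

# Toy taxon count table per body site; counts are illustrative only.
samples = {
    "faeces": {"Firmicutes": 620, "Bacteroidota": 310, "Proteobacteria": 70},
    "milk":   {"Firmicutes": 150, "Bacteroidota": 40,  "Proteobacteria": 810},
    "blood":  {"Anaplasma": 900, "Proteobacteria": 80, "Firmicutes": 20},
}

def shannon(counts):
    # Shannon index H' = -sum(p_i * ln p_i) over observed taxa.
    total = sum(counts.values())
    return -sum((c / total) * math.log(c / total)
                for c in counts.values() if c > 0)

for name, counts in samples.items():
    print(f"{name}: H' = {shannon(counts):.3f}")
```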

Keywords: microbial profiling, 16S rRNA, NGS, feces, milk, blood, lactating cows, small-scale farmers

Procedia PDF Downloads 109
552 Nuancing the Indentured Migration in Amitav Ghosh's Sea of Poppies

Authors: Murari Prasad

Abstract:

This paper is motivated by the implications of indentured migration depicted in Amitav Ghosh's critically acclaimed novel, Sea of Poppies (2008). Ghosh's perspective on the experiences of North Indian indentured labourers moving from their homeland to a distant and unknown location across the seas suggests a radical attitudinal change among the migrants on board the Ibis, a schooner chartered to carry the recruits from Calcutta to Mauritius in the late 1830s. The novel unfolds the life-altering trauma of the bonded servants, including their efforts to maintain a sense of self while negotiating significant social and cultural transformations during the voyage, which leads to the breakdown of familiar life-worlds. Equally, the migrants are introduced to an alternative network of relationships to ensure their survival away from land. They relinquish their entrenched beliefs and prejudices and commit themselves to a new brotherhood formed by 'ship siblings.' With the official abolition of direct slavery in 1833, the supply of cheap labour to sugar plantations in British colonies as far-flung as Mauritius, Fiji, East Africa and the Caribbean sharply declined. Around the same time, China's attempt to prohibit the illegal importation of opium from British India into China threatened the lucrative opium trade. To run the ever-profitable plantation colonies with cheap labour, Indian peasants, wrenched from their village economies, were indentured to plantations as girmitiyas (vernacularized from 'agreement') by the colonial government using the ploy of an optional form of recruitment. After the British conquest of the Isle of France in 1810, Mauritius became Britain's premier sugar colony, bringing waves of Indian immigrants to the island. In the articulations of their subjectivities, one notices how the recruits cope with the alienating drudgery of indenture, mitigate the hardships of the voyage and forge new ties through pragmatic acts of cultural syncretism in a forward-looking autonomous community of 'ship siblings' following the fracture of traditional identities. This paper tests the hypothesis that Ghosh envisions a kind of futuristic/utopian political collectivity in a hierarchically rigid, racially segregated and identity-obsessed world. In order to ground this claim and frame the complex representations of alliance and love across the boundaries of caste, religion, gender and nation, the essential methodology here is a close textual analysis of the novel. This methodology is geared to explicate the utopian futurity that the novel gestures towards by underlining the new regulations of life during the voyage and the dissolution of multiple differences among the indentured migrants on board the Ibis.

Keywords: indenture, colonial, opium, sugar plantation

Procedia PDF Downloads 396
551 Highly Selective Phosgene Free Synthesis of Methylphenylcarbamate from Aniline and Dimethyl Carbonate over Heterogeneous Catalyst

Authors: Nayana T. Nivangune, Vivek V. Ranade, Ashutosh A. Kelkar

Abstract:

Organic carbamates are versatile compounds widely employed as pesticides, fungicides, herbicides, dyes, pharmaceuticals, cosmetics and in the synthesis of polyurethanes. Carbamates can be easily transformed into isocyanates by thermal cracking. Isocyanates are used as precursors for manufacturing agrochemicals, adhesives and polyurethane elastomers. The manufacture of polyurethane foams is a major application of aromatic isocyanates; in 2007 the global consumption of polyurethane was about 12 million metric tons/year, and the average annual growth rate was about 5%. Presently, isocyanates/carbamates are manufactured by a phosgene-based process. However, because of the high toxicity of phosgene and the formation of large quantities of waste products, there is a need to develop an alternative and safer process for the synthesis of isocyanates/carbamates. Recently, many alternative processes have been investigated, and carbamate synthesis by methoxycarbonylation of aromatic amines using dimethyl carbonate (DMC) as a green reagent has emerged as a promising alternative route. In this reaction, methanol is formed as a by-product, which can be converted to DMC either by oxidative carbonylation of methanol or by reaction with urea. Thus, the route based on DMC has the potential to provide an atom-efficient and safer route for the synthesis of carbamates from DMC and amines. A great deal of work is being carried out on the development of catalysts for this reaction, and homogeneous zinc salts were found to be good catalysts; however, catalyst/product separation is challenging with these catalysts. There are a few reports on the use of supported Zn catalysts, but catalyst deactivation is their major problem. We wish to report here the methoxycarbonylation of aniline to methylphenylcarbamate (MPC) using amino acid complexes of Zn as highly active and selective catalysts. The catalysts were characterized by XRD, IR, solid-state NMR and XPS analysis. Methoxycarbonylation of aniline was carried out at 170 °C using 2.5 wt% of the catalyst to achieve >98% conversion of aniline with 97-99% selectivity to MPC as the product. Formation of N-methylated products in small quantities (1-2%) was also observed. Optimization of the reaction conditions was carried out using the zinc-proline complex as the catalyst. Selectivity was strongly dependent on the temperature and the aniline:DMC ratio used. At a lower aniline:DMC ratio and at a higher temperature, selectivity to MPC decreased (to 85-89%, respectively), with the formation of N-methylaniline (NMA), N-methyl methylphenylcarbamate (MMPC) and N,N-dimethylaniline (NNDMA) as by-products. The best results (98% aniline conversion with 99% selectivity to MPC in 4 h) were observed at 170 °C and an aniline:DMC ratio of 1:20. Catalyst stability was verified by carrying out a recycle experiment. Methoxycarbonylation proceeded smoothly with various amine derivatives, indicating the versatility of the catalyst. The catalyst is inexpensive and can be easily prepared from a zinc salt and naturally occurring amino acids. The results are important and provide an environmentally benign route for MPC synthesis with high activity and selectivity.
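
For clarity, conversion and selectivity as reported above follow the usual definitions; the sketch below illustrates them with mole numbers invented to echo the reported ~98% conversion and 97-99% MPC selectivity.

```python
# Conversion and selectivity as commonly defined for this reaction.
# All mole numbers below are illustrative, not the study's raw data.
aniline_in, aniline_out = 100.0, 2.0          # mmol of aniline fed / remaining
mpc, nma, mmpc, nndma = 96.5, 0.8, 0.5, 0.2   # mmol of each product formed

conversion = (aniline_in - aniline_out) / aniline_in
selectivity = mpc / (mpc + nma + mmpc + nndma)  # MPC among all products
print(f"conversion = {conversion:.1%}, MPC selectivity = {selectivity:.1%}")
```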

Keywords: aniline, heterogeneous catalyst, methoxycarbonylation, methylphenyl carbamate

Procedia PDF Downloads 273
550 Making Sense of C. G. Jung’s Red Book and Black Books: Masonic Rites and Trauma

Authors: Lynn Brunet

Abstract:

In 2019 the author published a book-length study examining Jung's Red Book. This study consisted of a close reading of each of the chapters in Liber Novus, focussing on the fantasies themselves and Jung's accompanying paintings. It found that the plots, settings, characters and symbolism in each of these fantasies are not entirely original but remarkably similar to those found in some of the higher degrees of Continental Freemasonry. Jung was the grandson of his namesake, C. G. Jung (1794–1864), who was a Freemason and one-time Grand Master of the Swiss Masonic Lodge. The study found that the majority of Jung's fantasies are very similar to those of the Ancient and Accepted Scottish Rite, practiced in Switzerland during the time of Jung's childhood. It argues that the fantasies appear to be memories of a series of terrifying initiatory ordeals conducted using spurious versions of the Masonic rites. Spurious Freemasonry is the term Masons use for the 'irregular' or illegitimate use of rituals that are not sanctioned by the Order. Since the 1980s there have been multiple reports of ritual trauma in a wide variety of organizations, cults and religious groups, reports that psychologists, counsellors, social workers and forensic scientists have confirmed. The abusive use of Masonic rites features frequently in these reports. This initial study allows a reading of The Red Book that makes sense of the obscure references, bizarre scenarios and intense emotional trauma described by Jung throughout Liber Novus. It suggests that Jung appears to have undergone a cruel initiatory process as a child. The author is currently examining the extra material found in Jung's Black Books, and the results are confirming the original discoveries while demonstrating a number of aspects not covered in the first publication. These include the complex layering of ancient gods and belief systems in answer to Jung's question, 'In which underworld am I?' The new material demonstrates that the majority of these ancient systems and their gods are discussed in a handbook for the Scottish Rite, Morals and Dogma by Albert Pike, but that the way they are presented by Philemon and his soul is intended to confuse Jung rather than clarify their purpose. This new study also examines Jung's soul's question, 'I am not a human being. What am I then?' Further themes that emerge from the Black Books include his struggle with vanity and whether he should continue creating his 'holy book', and a comparison between Jung's 'mystery plays' and examples from the Theatre of the Absurd. Overall, the study demonstrates that Jung's experience, while inexplicable in his own time, is now recognisable as the secret and abusive practice of initiation of the young found in a range of cults and religious groups in many first-world countries. This paper will present a brief outline of the original study and then examine the themes that have emerged from the extra material found in the Black Books.

Keywords: C. G. Jung, the red book, the black books, masonic themes, trauma and dissociation, initiation rites, secret societies

Procedia PDF Downloads 132
549 Preliminary Study of Water-Oil Separation Process in Three-Phase Separators Using Factorial Experimental Designs and Simulation

Authors: Caroline M. B. De Araujo, Helenise A. Do Nascimento, Claudia J. Da S. Cavalcanti, Mauricio A. Da Motta Sobrinho, Maria F. Pimentel

Abstract:

Oil production is often accompanied by the joint production of water and gas. During the journey up to the surface, due to severe conditions of temperature and pressure, these three components normally mix. Thus, three-phase separation must be one of the first steps performed after crude oil extraction, and water-oil separation is its most complex and important part, since the presence of water in the process line can increase corrosion and hydrate formation. A wide range of methods can be applied to oil-water separation, the most commonly used being flotation, hydrocyclones, and three-phase separator vessels. In this context, the aim of this paper is to study a system consisting of a three-phase separator, evaluating the influence of three variables (temperature, working pressure and separator type) for two types of oil (light and heavy) by performing two 2³ factorial designs, in order to find the best operating condition. In this case, the purpose is to obtain the greatest oil flow rate in the product stream (m³/h) as well as the lowest percentage of water in the oil stream. The simulation of the three-phase separator was performed using the Aspen Hysys® 2006 simulation software in stationary mode, and the evaluation of the factorial experimental designs was performed using the Statistica® software. From the general analysis of the four normal probability plots of effects obtained, it was observed that two- and three-factor interaction effects did not show statistical significance at 95% confidence, since all values were very close to zero. Similarly, the main effect 'separator type' did not show a statistically significant influence in any situation. As the volumetric flows of water, oil and gas were assumed equal in the inlet stream, the separator type effect may in fact not be significant for the proposed system. Nevertheless, the main effect 'temperature' was significant for both responses (oil flow rate and mass fraction of water in the oil stream), for both light and heavy oil, so that the best operating condition occurs with the temperature at its lowest level (30 °C): the higher the temperature, the more the lighter oil components pass into the vapor phase and leave in the gas stream. Furthermore, the higher the temperature, the greater the formation of water vapor, which ends up in the lighter stream (the oil stream), making the separation process more difficult. Regarding the working pressure, this effect proved significant only for the oil flow rate, so that the best operating condition occurs with the pressure at its highest level (9 bar), since a higher operating pressure indicated, in this case, a lower pressure drop inside the vessel, generating a lower level of turbulence inside the separator. In conclusion, the best operating condition obtained for the proposed system, within the studied range, occurs with the temperature at its lowest level and the working pressure at its highest level.
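
As a compact illustration of the 2³ factorial analysis, the sketch below lays out the eight coded runs and computes the main effects by the standard contrast average; the response values are made-up placeholders standing in for the simulated oil flow rates, chosen so that the pattern echoes the findings (negligible separator-type effect, negative temperature effect, positive pressure effect).

```python
from itertools import product

# A 2^3 full factorial in coded units (-1 = low level, +1 = high level).
factors = ["temperature", "pressure", "separator_type"]
design = list(product([-1, 1], repeat=3))     # 8 runs

# Placeholder oil flow rates [m3/h], invented for illustration.
response = [41.2, 40.8, 44.9, 45.3, 40.1, 39.7, 44.5, 44.8]

for j, name in enumerate(factors):
    # Main effect = mean response at +1 minus mean response at -1.
    effect = sum(r * run[j] for run, r in zip(design, response)) / (len(design) / 2)
    print(f"main effect of {name}: {effect:+.2f}")
```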

Keywords: factorial experimental design, oil production, simulation, three-phase separator

Procedia PDF Downloads 286
548 Intermodal Strategies for Redistribution of Agrifood Products in the EU: The Case of Vegetable Supply Chain from Southeast of Spain

Authors: Juan C. Pérez-Mesa, Emilio Galdeano-Gómez, Jerónimo De Burgos-Jiménez, José F. Bienvenido-Bárcena, José F. Jiménez-Guerrero

Abstract:

Environmental cost and transport congestion on roads resulting from product distribution in Europe have led to the creation of various programs and studies seeking to reduce these negative impacts. In this regard, apart from other institutions, the European Commission (EC) has designed plans in recent years promoting a more sustainable transportation model, in an attempt to ultimately shift traffic from road to sea by using intermodality to achieve a rebalanced model. This issue proves especially relevant in supply chains from peripheral areas of the continent, where the supply of certain agrifood products is high. In such cases, the most difficult challenge is managing perishable goods. This study focuses on new approaches that strengthen the modal shift, as well as the reduction of externalities. The problem is analyzed by attempting to promote an intermodal system (truck and short sea shipping) for transport, taking as the point of reference highly perishable products (vegetables) exported from southeast Spain, which is the leading supplier to Europe. Methodologically, this paper seeks to contribute to the literature by proposing a different and complementary approach to comparing intermodal transport with the road-only alternative. For this purpose, multicriteria decision-making is incorporated into a p-median model (P-M) adapted to the transport of perishables and to the mode-selection problem, which must consider different variables: transit cost (including externalities), time, and frequency (including agile response time). This scheme avoids bias in decision-making processes. By observing the results, it can be seen that the influence of externalities as drivers of the modal shift is reduced when transit time is introduced as a decision variable. These findings confirm that general strategies, such as those of the EC, based on environmental benefits lose their capacity for implementation when applied to complex circumstances. In general, the different estimations reveal that, in the case of perishables, intermodality would be a secondary and viable option only for very specific destinations (for example, Hamburg and nearby locations, the area of influence of London, Paris, and the Netherlands). Based on this framework, the general outlook on this subject should be modified. Perhaps governments should promote specific business strategies based on new trends in the supply chain, not only on the reduction of externalities, and find new approaches that strengthen the modal shift. One possible option is to redefine ports, conceptualizing them as digitalized redistribution and coordination centers rather than merely areas of cargo exchange.
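
In the spirit of the multicriteria comparison described above (though not the paper's actual p-median formulation), the toy below scores road-only versus intermodal transport per destination on normalized cost including externalities, transit time and frequency; all figures and weights are illustrative assumptions.

```python
# Per destination: (cost EUR incl. externalities, transit time h, weekly freq).
# All numbers and destination names are illustrative assumptions.
options = {
    "Hamburg": {"road": (2800, 38, 7), "intermodal": (1700, 52, 5)},
    "Paris":   {"road": (1500, 24, 7), "intermodal": (1450, 40, 4)},
    "Warsaw":  {"road": (2900, 46, 7), "intermodal": (3000, 80, 2)},
}
W_COST, W_TIME, W_FREQ = 0.4, 0.4, 0.2   # assumed criterion weights

def score(cost, time, freq):
    # Lower is better; frequency enters inverted so more weekly
    # departures (agile response) reduce the score.
    return W_COST * cost / 3000 + W_TIME * time / 80 + W_FREQ * (1 - freq / 7)

for dest, modes in options.items():
    best = min(modes, key=lambda m: score(*modes[m]))
    print(f"{dest}: preferred mode = {best}")
```

With these assumed figures, only the Hamburg-like destination flips to intermodal once transit time and frequency enter the score, echoing the finding that time as a decision variable weakens the externality-driven case for the modal shift.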

Keywords: environmental externalities, intermodal transport, perishable food, transit time

Procedia PDF Downloads 96
547 Adopting Data Science and Citizen Science to Explore the Development of African Indigenous Agricultural Knowledge Platform

Authors: Steven Sam, Ximena Schmidt, Hugh Dickinson, Jens Jensen

Abstract:

The goal of this study is to explore the potential of data science and citizen science approaches to develop an interactive, digital, open infrastructure that pulls together African indigenous agriculture and food systems data from multiple sources, making it accessible and reusable for policy, research and practice in modern food production efforts. The World Bank has recognised that African Indigenous Knowledge (AIK) is innovative and unique among local and subsistence smallholder farmers, and that it is central to sustainable food production and to enhancing biodiversity and natural resources in many poor, rural societies. AIK refers to tacit knowledge held in different languages, cultures and skills, passed down from generation to generation by word of mouth. AIK is a key driver of food production, preservation and consumption for more than 80% of citizens in Africa, and it can therefore assist modern efforts to reduce food insecurity and hunger. However, the documentation and dissemination of AIK remain a big challenge confronting librarians and other information professionals in Africa, and there is a risk of losing AIK owing to urban migration, modernisation, land grabbing, and the emergence of relatively small-scale commercial farming businesses. There is also a clear disconnect between AIK and scientific knowledge and modern efforts for sustainable food production. The study combines data science and citizen science approaches through active community participation to generate and share AIK, facilitating learning and promoting knowledge that is relevant for policy intervention and sustainable food production through a curated digital platform based on FAIR principles. The study adopts key informant interviews along with a participatory photo and video elicitation approach, where farmers are given digital devices (mobile phones) to record and document their every practice involving agriculture, food production, processing and consumption by traditional means. Data collected are analysed using the UK Science and Technology Facilities Council's proven methodology of citizen science (Zooniverse) and data science. Outcomes are presented in participatory stakeholder workshops, where the researchers outline plans for creating the platform and developing the knowledge-sharing standard framework and copyright agreement. Overall, the study shows that learning from AIK, by investigating what local communities know and have, can improve understanding of food production and consumption, in particular in times of stress or shocks affecting food systems and communities. Thus, the platform can be useful for local populations, research and policy-makers, and it could lead to transformative innovation in the food system, creating a fundamental shift in the way the North supports sustainable, modern food production efforts in Africa.

Keywords: Africa indigenous agriculture knowledge, citizen science, data science, sustainable food production, traditional food system

Procedia PDF Downloads 82
546 Organic Light Emitting Devices Based on Low Symmetry Coordination Structured Lanthanide Complexes

Authors: Zubair Ahmed, Andrea Barbieri

Abstract:

The need to reduce energy consumption has prompted a considerable research effort to develop alternative energy-efficient lighting systems to replace conventional light sources (i.e., incandescent and fluorescent lamps). Organic light emitting device (OLED) technology offers the distinctive possibility of fabricating large-area flat devices by vacuum or solution processing. Lanthanide β-diketonate complexes, owing to the unique photophysical properties of Ln(III) ions, have been explored as emitting layers in OLED displays and in solid-state lighting (SSL) in order to achieve high efficiency and color purity. For such applications, an excellent photoluminescence quantum yield (PLQY) and stability are the two key points, and they can be achieved simply by selecting the proper organic ligands around the Ln ion in the coordination sphere. Regarding strategies to enhance the PLQY, the most common is the suppression of radiationless deactivation pathways due to the presence of high-frequency oscillators (e.g., OH, –CH groups) around the Ln centre. Recently, a different approach to maximizing the PLQY of Ln(β-DKs) has been proposed (named 'Escalate Coordination Anisotropy', ECA). It is based on the assumption that coordinating the Ln ion with different ligands will break the centrosymmetry of the molecule, leading to less forbidden transitions (loosening the constraints of the Laporte rule). OLEDs based on such complexes are available, but with low efficiency and stability. In order to obtain efficient devices, there is a need to develop new Ln complexes with enhanced PLQYs and stabilities. For this purpose, Ln complexes, both visible- and NIR-emitting, with varied coordination structures based on various fluorinated/non-fluorinated β-diketones and O/N-donor neutral ligands were synthesized using a one-step in situ method. In this method, the β-diketones, base, LnCl₃·nH₂O and neutral ligands were mixed in a 3:3:1:1 molar ratio in ethanol, which gave air- and moisture-stable complexes. Further, they were characterized by means of elemental analysis, NMR spectroscopy and single-crystal X-ray diffraction. Thereafter, their photophysical properties were studied to select the best complexes for the fabrication of stable and efficient OLEDs. Finally, the OLEDs were fabricated and investigated using these complexes as emitting layers along with other organic layers such as NPB, N,N′-di(1-naphthyl)-N,N′-diphenyl-(1,1′-biphenyl)-4,4′-diamine (hole-transporting layer), BCP, 2,9-dimethyl-4,7-diphenyl-1,10-phenanthroline (hole-blocker), and Alq₃ (electron-transporting layer). The layers were sequentially deposited under a high-vacuum environment by thermal evaporation onto ITO glass substrates. Moreover, co-deposition techniques were used to improve charge transport in the devices and to avoid quenching phenomena. The devices show strong electroluminescence at 612, 998, 1064 and 1534 nm, corresponding to ⁵D₀ → ⁷F₂ (Eu), ²F₅/₂ → ²F₇/₂ (Yb), ⁴F₃/₂ → ⁴I₉/₂ (Nd) and ⁴I₁₃/₂ → ⁴I₁₅/₂ (Er). All the fabricated devices show good efficiency as well as stability.

Keywords: electroluminescence, lanthanides, paramagnetic NMR, photoluminescence

Procedia PDF Downloads 119
545 Shock-Induced Densification in Glass Materials: A Non-Equilibrium Molecular Dynamics Study

Authors: Richard Renou, Laurent Soulard

Abstract:

Lasers are widely used in glass material processing, from waveguide fabrication to channel drilling. The gradual damage of glass optics under UV lasers is also an important issue to be addressed. Glass materials (including metallic glasses) can undergo a permanent densification under laser-induced shock loading. Despite increased interest in interactions between lasers and glass materials, little is known about the structural mechanisms involved under shock loading. For example, the densification process in silica glasses occurs between 8 GPa and 30 GPa; above 30 GPa, the glass material returns to its original density after relaxation. Investigating these unusual mechanisms in silica glass will provide an overall better understanding of glass behaviour. Non-equilibrium molecular dynamics (NEMD) simulations were carried out in order to gain insight into the microscopic structure of silica glass under shock loading. The shock was generated by a piston impacting the glass material at high velocity (from 100 m/s up to 2 km/s). Periodic boundary conditions were used in the directions perpendicular to the shock propagation to model an infinite system; one-dimensional shock propagations were therefore studied. Simulations were performed with the STAMP code developed by the CEA. A very specific structure is observed in a silica glass: oxygen atoms around silicon atoms are organized in tetrahedrons, and those tetrahedrons are linked and tend to form rings inside the structure. A significant amount of empty cavities is also observed in glass materials. In order to understand how shock loading impacts the overall structure, the tetrahedrons, the rings and the cavities were thoroughly analysed. An elastic behaviour was observed when the shock pressure is below 8 GPa, consistent with the Hugoniot Elastic Limit (HEL) of 8.8 GPa estimated experimentally for silica glasses. Behind the shock front, the ring structure and the cavity distribution are impacted: the ring volume is smaller, and most cavities disappear with increasing shock pressure. However, the tetrahedral structure is not affected. The elasticity of the glass structure is therefore related to ring shrinking and cavity closing. Above the HEL, the shock pressure is high enough to impact the tetrahedral structure: an increasing number of hexahedrons and octahedrons are formed with increasing pressure, and the large rings break to form smaller ones. The cavities, however, are not further impacted, as most of them are already closed under an elastic shock. After the material relaxation, a significant amount of hexahedrons and octahedrons is still observed, and most of the cavities remain closed. The overall ring distribution after relaxation is similar to the equilibrium distribution. The densification process is therefore related to two structural mechanisms: a change in the coordination of silicon atoms and cavity closing. To sum up, non-equilibrium molecular dynamics simulations were carried out to investigate silica behaviour under shock loading. Analysing the structure led to interesting conclusions on the elastic and densification mechanisms in glass materials. This work will be completed with a detailed study of the mechanisms occurring above 30 GPa, where no sign of densification is observed after the material relaxation.
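
As an example of the kind of post-processing such a study relies on, the sketch below locates the shock front from a density profile along the propagation direction; the atom positions are synthetic stand-ins for an MD snapshot, and the analysis is generic NEMD practice rather than the specific ring/cavity analysis of this work.

```python
import numpy as np

# Synthetic stand-in for an MD snapshot: uniform atoms plus an extra
# population near the piston face, mimicking the compressed region.
rng = np.random.default_rng(0)
n_atoms, box_z = 20_000, 200.0                 # z extent in angstroms
z = rng.uniform(0.0, box_z, n_atoms)
z = np.concatenate([z, rng.uniform(0.0, 80.0, 8_000)])

# Bin positions into slabs of fixed cross-section to get a density profile.
bins = np.linspace(0.0, box_z, 101)
counts, _ = np.histogram(z, bins)
slab_volume = (bins[1] - bins[0]) * 50.0 * 50.0  # assumed 50 x 50 A cross-section
density = counts / slab_volume                   # atoms per cubic angstrom

# The shock front sits where the profile changes most sharply.
front = bins[np.argmax(np.abs(np.diff(density)))]
print(f"estimated shock front near z = {front:.1f} A")
```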

Keywords: densification, molecular dynamics simulations, shock loading, silica glass

Procedia PDF Downloads 221