Search results for: Computational Fluid Dynamics
497 Structural Invertibility and Optimal Sensor Node Placement for Error and Input Reconstruction in Dynamic Systems
Authors: Maik Kschischo, Dominik Kahl, Philipp Wendland, Andreas Weber
Abstract:
Understanding and modelling of real-world complex dynamic systems in biology, engineering and other fields is often made difficult by incomplete knowledge about the interactions between systems states and by unknown disturbances to the system. In fact, most real-world dynamic networks are open systems receiving unknown inputs from their environment. To understand a system and to estimate the state dynamics, these inputs need to be reconstructed from output measurements. Reconstructing the input of a dynamic system from its measured outputs is an ill-posed problem if only a limited number of states is directly measurable. A first requirement for solving this problem is the invertibility of the input-output map. In our work, we exploit the fact that invertibility of a dynamic system is a structural property, which depends only on the network topology. Therefore, it is possible to check for invertibility using a structural invertibility algorithm which counts the number of node disjoint paths linking inputs and outputs. The algorithm is efficient enough, even for large networks up to a million nodes. To understand structural features influencing the invertibility of a complex dynamic network, we analyze synthetic and real networks using the structural invertibility algorithm. We find that invertibility largely depends on the degree distribution and that dense random networks are easier to invert than sparse inhomogeneous networks. We show that real networks are often very difficult to invert unless the sensor nodes are carefully chosen. To overcome this problem, we present a sensor node placement algorithm to achieve invertibility with a minimum set of measured states. This greedy algorithm is very fast and also guaranteed to find an optimal sensor node-set if it exists. Our results provide a practical approach to experimental design for open, dynamic systems. Since invertibility is a necessary condition for unknown input observers and data assimilation filters to work, it can be used as a preprocessing step to check, whether these input reconstruction algorithms can be successful. If not, we can suggest additional measurements providing sufficient information for input reconstruction. Invertibility is also important for systems design and model building. Dynamic models are always incomplete, and synthetic systems act in an environment, where they receive inputs or even attack signals from their exterior. Being able to monitor these inputs is an important design requirement, which can be achieved by our algorithms for invertibility analysis and sensor node placement.Keywords: data-driven dynamic systems, inversion of dynamic systems, observability, experimental design, sensor node placement
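To make the path-counting idea above concrete, here is a minimal sketch (not the authors' algorithm, which is tailored to networks with up to a million nodes): structural invertibility is checked by counting vertex-disjoint input-output paths via a unit-capacity maximum flow on a node-split graph. The example graph, the input set and the sensor set are invented for illustration.

```python
# Minimal sketch: counting vertex-disjoint paths from input nodes to sensor
# (output) nodes with a unit-capacity max-flow on a node-split graph.
import networkx as nx

def num_vertex_disjoint_paths(G, inputs, outputs):
    """Count vertex-disjoint directed paths linking the input set to the output set."""
    H = nx.DiGraph()
    for v in G.nodes:
        # Split each node v into v_in -> v_out with capacity 1 so that
        # every node can be used by at most one path.
        H.add_edge((v, "in"), (v, "out"), capacity=1)
    for u, v in G.edges:
        H.add_edge((u, "out"), (v, "in"), capacity=1)
    H.add_node("S"); H.add_node("T")
    for u in inputs:
        H.add_edge("S", (u, "in"), capacity=1)
    for y in outputs:
        H.add_edge((y, "out"), "T", capacity=1)
    flow_value, _ = nx.maximum_flow(H, "S", "T")
    return flow_value

def is_structurally_invertible(G, inputs, outputs):
    # Invertibility of the input-output map requires as many node-disjoint
    # input-output paths as there are unknown inputs.
    return num_vertex_disjoint_paths(G, inputs, outputs) == len(inputs)

# Invented example: a small network with two unknown inputs and two sensors.
G = nx.DiGraph([(1, 3), (2, 3), (3, 4), (2, 5), (4, 6)])
print(is_structurally_invertible(G, inputs=[1, 2], outputs=[6, 5]))
```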
Procedia PDF Downloads 152
496 Life Cycle Assessment Applied to Supermarket Refrigeration System: Effects of Location and Choice of Architecture
Authors: Yasmine Salehy, Yann Leroy, Francois Cluzel, Hong-Minh Hoang, Laurence Fournaison, Anthony Delahaye, Bernard Yannou
Abstract:
Taking the whole life cycle of a product into consideration is now an important step in the eco-design of a product or a technology. Life cycle assessment (LCA) is a standard tool to evaluate the environmental impacts of a system or a process. Despite the improvement in refrigerant regulation through protocols, the environmental damage of refrigeration systems remains important and needs to be reduced. In this paper, the environmental impacts of refrigeration systems in a typical supermarket are compared using the LCA methodology under different conditions. The system is used to provide cold at two temperature levels, medium and low, over a life period of 15 years. The most commonly used architectures of supermarket cold production systems are investigated: centralized direct expansion systems and indirect systems using a secondary loop to transport the cold. The variation of the power needed during seasonal changes and during the daily opening/closing periods of the supermarket is considered. R134a is used as the primary refrigerant fluid, and two types of secondary fluids are considered. The composition of each system and the leakage rate of the refrigerant over its life cycle are taken from the literature and industrial data. Twelve scenarios are examined, based on the variation of three parameters: 1. location: France (Paris), Spain (Toledo) and Sweden (Stockholm); 2. source of electric consumption: photovoltaic panels or the low-voltage electric network; and 3. architecture: direct or indirect refrigeration systems. The OpenLCA and SimaPro software packages and different impact assessment methods were compared; the CML method is used to evaluate the midpoint environmental indicators. This study highlights the significant contribution of electric consumption to environmental damage compared to the impacts of refrigerant leakage. The secondary loop allows the refrigerant amount in the primary loop to be lowered, which results in a decrease in the climate change indicators compared to the centralized direct systems. However, an exhaustive cost evaluation (CAPEX and OPEX) of both systems shows higher costs for the indirect systems. A significant difference between the countries has been noticed, mostly due to differences in electricity production. In Spain, using photovoltaic panels helps reduce the environmental impacts and the related costs efficiently; this scenario is the best alternative compared to the other scenarios. Sweden is the country with the lowest environmental impacts. For both France and Sweden, the use of photovoltaic panels does not bring a significant difference, due to lower sunlight exposure than in Spain. Alternative solutions exist to reduce the impact of refrigeration systems, and a brief introduction to them is presented.
Keywords: eco-design, industrial engineering, LCA, refrigeration system
Procedia PDF Downloads 191
495 Risk and Reliability Based Probabilistic Structural Analysis of Railroad Subgrade Using Finite Element Analysis
Authors: Asif Arshid, Ying Huang, Denver Tolliver
Abstract:
The Finite Element (FE) method, coupled with ever-increasing computational power, has substantially advanced the reliability of deterministic three-dimensional structural analyses of structures with uniform material properties. However, a railway trackbed is made up of a diverse group of materials, including steel, wood, rock and soil, each with its own varying levels of heterogeneity and imperfection. The application of probabilistic methods to trackbed structural analysis, incorporating material and geometric variability, remains largely unexplored. The authors developed and validated a three-dimensional FE-based numerical trackbed model, and in this study they investigated the influence of variability in the Young's modulus and thicknesses of the granular layers (ballast and subgrade) on the reliability index (β-index) of the subgrade layer. The influence of these factors is accounted for by changing their coefficients of variation (COV) while keeping their means constant. These variations are modelled using a Gaussian (normal) distribution. Two failure mechanisms in the subgrade, namely progressive shear failure and excessive plastic deformation, are examined. Preliminary results of the risk-based probabilistic analysis for progressive shear failure revealed that the variation in ballast depth is the most influential factor for the vertical stress at the top of the subgrade surface. In the case of excessive plastic deformation in the subgrade layer, by contrast, the variations in its own depth and Young's modulus proved to be most important, while the ballast properties remained almost irrelevant. For both failure modes, it is also observed that the reliability index for subgrade failure increases with the increase in the COV of the ballast depth and the subgrade Young's modulus. The findings of this work are of particular significance in studying the combined effect of construction imperfections and variations in ground conditions on the structural performance of a railroad trackbed and in evaluating the associated risk. In addition, the approach provides an additional tool to supplement deterministic analysis procedures and decision-making for railroad maintenance.
Keywords: finite element analysis, numerical modeling, probabilistic methods, risk and reliability analysis, subgrade
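As a rough, self-contained illustration of how a reliability index can be estimated once a response model and Gaussian input variability (fixed means, chosen COVs) are available, the sketch below uses a placeholder stress function in place of the validated 3D FE trackbed model; all parameter values are invented.

```python
# Illustrative sketch only: reliability index from Monte Carlo samples of a
# limit-state margin. The FE trackbed model is replaced by a placeholder.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Random inputs: Gaussian with fixed means and chosen coefficients of variation (COV).
E_subgrade = rng.normal(60e6, 0.15 * 60e6, n)   # Young's modulus [Pa], COV = 0.15
h_ballast  = rng.normal(0.35, 0.10 * 0.35, n)   # ballast depth [m], COV = 0.10

def vertical_stress(E, h, wheel_load=100e3):
    # Placeholder for the FE response: stress decreases with ballast depth
    # and stiffness (purely illustrative, not a validated trackbed model).
    return wheel_load / (np.pi * (0.15 + h) ** 2) * (80e6 / (E + 20e6))

capacity = 180e3                                         # allowable stress [Pa], invented
g = capacity - vertical_stress(E_subgrade, h_ballast)    # limit-state margin

beta = g.mean() / g.std()     # first-order reliability index from samples
p_f = (g < 0).mean()          # Monte Carlo probability of failure
print(f"reliability index beta = {beta:.2f}, P_failure = {p_f:.4f}")
```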
Procedia PDF Downloads 140
494 Geospatial Analysis of Spatio-Temporal Dynamic and Environmental Impact of Informal Settlement: A Case of Adama City, Ethiopia
Authors: Zenebu Adere Tola
Abstract:
Informal settlements behave dynamically over space and time, and the number of people living in such housing areas is growing worldwide. In the cities of developing countries, especially in sub-Saharan Africa, poverty, unemployment, poor living conditions, lack of transparency and accountability, and lack of good governance are the major factors that drive people to hold land informally and build houses for residential or other purposes. In most Ethiopian cities, informal settlement is concentrated in peripheral areas, because people can easily obtain land for housing from local farmers, brokers and speculators without permission from the concerned authorities. In Adama, informal settlement has created risky living conditions and led to environmental problems in natural areas; the main reason for this has been the lack of sufficient knowledge about informal settlement development. At the same time, there is a strong need to transform informal into formal settlements and to gain more control over the actual spatial development of informal settlements. To tackle the issue, it is first necessary to understand the scale of the problem, and this requires up-to-date technology; high-resolution imagery is well suited to detecting informal settlement in Adama city. The main objective of this study is to assess the spatio-temporal dynamics and environmental impacts of informal settlement using object-based image analysis (OBIA). Specifically, the objectives are to identify informal settlement in the study area, to determine the change in the extent and pattern of informal settlement, and to assess the environmental and social impacts of informal settlement in the study area. The method used to detect informal settlement is object-oriented image analysis. Reliable procedures for detecting the spatial behaviour of informal settlements are required in order to react at an early stage to changing housing situations. The study uses aerial photography to trace the growth and change of informal settlements in Adama city, and eCognition software to classify built-up and non-built-up areas. Thus, obtaining up-to-date spatial information about informal settlement areas is vital for any enhancement actions in urban or regional planning.
Keywords: informal settlement, change detection, environmental impact, object based analysis
Procedia PDF Downloads 84
493 A Survey Study Exploring Principal Leadership and Teachers’ Expectations in the Social Working Life of Two Swedish Schools
Authors: Anette Forssten Seiser, Ulf Blossing, Mats Ekholm
Abstract:
The expectation on principals to manage, lead and develop their schools and teachers are high. However, principals are not left alone without guidelines. Policy texts, curricula and syllabuses guide the orientation of their leadership. Moreover, principals’ traits and experience as well as professional norms, are decisive. However, in this study we argue for the importance to deepen the knowledge of how the practice of leadership is shaped in the daily social working life with the teachers at the school. Teachers’ experiences and expectations of leadership influence the principal’s actions, sometimes perhaps contrary to what is emphasized in official texts like the central guidelines. The expectations of teachers make up the norms of the school and thus constitute the local school culture. The aim of this study is to deepen the knowledge of teachers’ expectations on their principals to manage, lead and develop their schools. Two questions are used to guide the study: 1) How do teachers’ and principals’ expectations differ in realistic situations? 2) How do teachers’ experience-based expectations differ from more ideal expectations? To investigate teachers’ expectations of their principals, we use a social psychological perspective framed within an organisational development perspective. A social role is defined by the fact that, within the framework of the role, different people who fulfil the same role exhibit greater similarities than differences in their actions. The way a social role is exercised depends on the expectations placed on the role’s position but also on the expectations of the function of the role. The way in which the social role is embodied in practice also depends on how the person fulfilling the role perceives and understands those expectations. Based on interviews with school principals a questionnaire was constructed. Nine possible real-life and critical incidents were described that are important when it comes to role shaping in the dynamics between teachers and principals. Teachers were asked to make a choice between three, four, or five possible and realistic courses of action for the principal. The teachers were also asked to make two choices between these different options in real-life situations, one ideal as if they were working as a principal themselves, and one experience based – how they estimated that their own principal would act in such a situation. The sample consist of two elementary schools in Sweden. School A consists of two principals and 38 teachers and school B of two principals and 22 teachers. The response rate among the teachers is 95 percent in school A and 86 percent in school B. All four principals answered our questions. The results show that the expectations of teachers and principals can be understood as variations of being harmonic or disharmonic. The harmonic expectations can be interpreted to lead to an attuned leadership, while the disharmonic expectations lead to a more tensed leadership. Harmonious expectations and an attuned leadership are prominent. The results are compared to earlier research on leadership. Attuned and more tensed leadership are discussed in relation to school development and future research.Keywords: critical incidents, principal leadership, school culture, school development, teachers' expectations
Procedia PDF Downloads 96
492 Hydroxyapatite Nanorods as Novel Fillers for Improving the Properties of PBSu
Authors: M. Nerantzaki, I. Koliakou, D. Bikiaris
Abstract:
This study evaluates the hypothesis that the incorporation of fibrous hydroxyapatite nanoparticles (nHA) with high crystallinity and high aspect ratio, synthesized by hydrothermal method, into Poly(butylene succinate) (PBSu), improves the bioactivity of the aliphatic polyester and affects new bone growth inhibiting resorption and enhancing bone formation. Hydroxyapatite nanorods were synthesized using a simple hydrothermal procedure. First, the HPO42- -containing solution was added drop-wise into the Ca2+-containing solution, while the molar ratio of Ca/P was adjusted at 1.67. The HA precursor was then treated hydrothermally at 200°C for 72 h. The resulting powder was characterized using XRD, FT-IR, TEM, and EDXA. Afterwards, PBSu nanocomposites containing 2.5wt% (nHA) were prepared by in situ polymerization technique for the first time and were examined as potential scaffolds for bone engineering applications. For comparison purposes composites containing either 2.5wt% micro-Bioglass (mBG) or 2.5wt% mBG-nHA were prepared and studied, too. The composite scaffolds were characterized using SEM, FTIR, and XRD. Mechanical testing (Instron 3344) and Contact Angle measurements were also carried out. Enzymatic degradation was studied in an aqueous solution containing a mixture of R. Oryzae and P. Cepacia lipases at 37°C and pH=7.2. In vitro biomineralization test was performed by immersing all samples in simulated body fluid (SBF) for 21 days. Biocompatibility was assessed using rat Adipose Stem Cells (rASCs), genetically modified by nucleofection with DNA encoding SB100x transposase and pT2-Venus-neo transposon expression plasmids in order to attain fluorescence images. Cell proliferation and viability of cells on the scaffolds were evaluated using fluoresce microscopy and MTT (3-(4,5-dimethylthiazol-2-yl)-2,5 diphenyltetrazolium bromide) assay. Finally, osteogenic differentiation was assessed by staining rASCs with alizarine red using cetylpyridinium chloride (CPC) method. TEM image of the fibrous HAp nanoparticles, synthesized in the present study clearly showed the fibrous morphology of the synthesized powder. The addition of nHA decreased significantly the contact angle of the samples, indicating that the materials become more hydrophilic and hence they absorb more water and subsequently degrade more rapidly. In vitro biomineralization test confirmed that all samples were bioactive as mineral deposits were detected by X-ray diffractometry after incubation in SBF. Metabolic activity of rASCs on all PBSu composites was high and increased from day 1 of culture to day 14. On day 28 metabolic activity of rASCs cultured on samples enriched with bioceramics was significantly decreased due to possible differentiation of rASCs to osteoblasts. Staining rASCs with alizarin red after 28 days in culture confirmed our initial hypothesis as the presence of calcium was detected, suggesting osteogenic differentiation of rACS on PBSu/nHAp/mBG 2.5% and PBSu/mBG 2.5% composite scaffolds.Keywords: biomaterials, hydroxyapatite nanorods, poly(butylene succinate), scaffolds
Procedia PDF Downloads 310
491 Beyond Geometry: The Importance of Surface Properties in Space Syntax Research
Authors: Christoph Opperer
Abstract:
Space syntax is a theory and method for analyzing the spatial layout of buildings and urban environments to understand how they can influence patterns of human movement, social interaction, and behavior. While direct visibility is a key factor in space syntax research, important visual information such as light, color, texture, etc., are typically not considered, even though psychological studies have shown a strong correlation to the human perceptual experience within physical space – with light and color, for example, playing a crucial role in shaping the perception of spaciousness. Furthermore, these surface properties are often the visual features that are most salient and responsible for drawing attention to certain elements within the environment. This paper explores the potential of integrating these factors into general space syntax methods and visibility-based analysis of space, particularly for architectural spatial layouts. To this end, we use a combination of geometric (isovist) and topological (visibility graph) approaches together with image-based methods, allowing a comprehensive exploration of the relationship between spatial geometry, visual aesthetics, and human experience. Custom-coded ray-tracing techniques are employed to generate spherical panorama images, encoding three-dimensional spatial data in the form of two-dimensional images. These images are then processed through computer vision algorithms to generate saliency-maps, which serve as a visual representation of areas most likely to attract human attention based on their visual properties. The maps are subsequently used to weight the vertices of isovists and the visibility graph, placing greater emphasis on areas with high saliency. Compared to traditional methods, our weighted visibility analysis introduces an additional layer of information density by assigning different weights or importance levels to various aspects within the field of view. This extends general space syntax measures to provide a more nuanced understanding of visibility patterns that better reflect the dynamics of human attention and perception. Furthermore, by drawing parallels to traditional isovist and VGA analysis, our weighted approach emphasizes a crucial distinction, which has been pointed out by Ervin and Steinitz: the difference between what is possible to see and what is likely to be seen. Therefore, this paper emphasizes the importance of including surface properties in visibility-based analysis to gain deeper insights into how people interact with their surroundings and to establish a stronger connection with human attention and perception.Keywords: space syntax, visibility analysis, isovist, visibility graph, visual features, human perception, saliency detection, raytracing, spherical images
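A minimal sketch of the weighting step described above, assuming a visibility graph and a per-vertex saliency score (e.g. averaged from the saliency maps) are already computed; the toy graph and saliency values are invented, and the sketch is not the authors' pipeline.

```python
# Minimal sketch: replacing plain visibility counts with saliency-weighted sums.
import networkx as nx

def weighted_visibility_measures(vg: nx.Graph, saliency: dict):
    """For each vertex, compute the classic connectivity and a
    saliency-weighted variant over the vertices it can see."""
    measures = {}
    for v in vg.nodes:
        visible = list(vg.neighbors(v))
        connectivity = len(visible)                       # classic VGA measure
        weighted = sum(saliency[u] for u in visible)      # attention-weighted variant
        measures[v] = {"connectivity": connectivity,
                       "weighted_connectivity": weighted}
    return measures

# Toy example: four mutually visible cells, one of them highly salient.
vg = nx.complete_graph(4)
saliency = {0: 0.1, 1: 0.9, 2: 0.2, 3: 0.3}
print(weighted_visibility_measures(vg, saliency))
```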
Procedia PDF Downloads 77
490 New Teaching Tools for a Modern Representation of Chemical Bond in the Course of Food Science
Authors: Nicola G. G. Cecca
Abstract:
In Italian IPSSEOAs, high schools that give vocational education to students who will work in the field of enogastronomy and hotel management, the course of Food Science allows students to start to see food as a mixture of substances that they will transform during their profession. These substances are characterized not only by a chemical composition but also by a molecular structure that makes them nutritionally active. However, the increasing number of new products proposed by the food industry, the modern techniques of production and transformation, and the innovative preparations required by customers have made much of the information reported in the most widespread Food Science textbooks outdated or too poor for people who will work in the catering sector. Authors often offer information dating back to Bohr's atomic model and to the 'octet rule' proposed by G. N. Lewis to describe the chemical bond, without any reference to newer models such as the atomic orbital model and molecular orbital theory, which have in the meantime started to become dated themselves. Furthermore, this antiquated information precludes an easy understanding of a wide range of properties of nutritive substances and of many reactions in which the food constituents are involved. In this paper, our attention is focused on using GEOMAG™ to represent the dynamics with which the chemical bond is formed during the synthesis of molecules. GEOMAG™ is a toy produced by the Swiss company Geomagword S.A., intended to stimulate the imagination and manual skills of children aged 6-10 years, and consists of metallic spheres and magnetic metallic bars coated with coloured plastic materials. The simulation carried out with GEOMAG™ is based on the similarity between the Coulomb force and the magnetic attraction force, and in particular between the formulae with which they are calculated. The electrostatic force (Fc, in newtons) that allows the formation of the chemical bond can be calculated by means of Fc = kc·q1·q2/d², where q1 and q2 are the charges of the particles [C], d is the distance between the particles [m] and kc is Coulomb's constant. It is surprising to observe that the attraction force (Fm) acting between the magnetic extremities of GEOMAG™ used to simulate the chemical bond can be calculated in the same way using the formula Fm = km·m1·m2/d², where m1 and m2 represent the strengths of the poles [A·m], d is the distance between the poles [m] and km = μ/4π, in which μ is the magnetic permeability of the medium [N·A⁻²]. The magnetic attraction can be tested by students by trying to pull the magnetic elements of GEOMAG™ apart by hand or by measuring the force with an appropriate dynamometric system. Furthermore, by using a dynamometric system to measure the magnetic attraction between the GEOMAG™ elements, it is possible to draw a graph F = f(d) and verify that the curve obtained during the simulation is very similar to the one hypothesized around the 1920s by Linus Pauling to describe the formation of H2+ in accordance with molecular orbital theory.
Keywords: chemical bond, molecular orbital theory, magnetic attraction force, GEOMAG™
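For readability, the two formulae quoted in the abstract can be typeset as follows (this is only a restatement of what the text already gives):

```latex
F_c = k_c \, \frac{q_1 q_2}{d^2}, \qquad
F_m = k_m \, \frac{m_1 m_2}{d^2}, \qquad
k_m = \frac{\mu}{4\pi},
```

where q1 and q2 are the charges [C], m1 and m2 the pole strengths [A·m], d the distance [m] and μ the magnetic permeability of the medium [N·A⁻²].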
Procedia PDF Downloads 271
489 Quantifying Multivariate Spatiotemporal Dynamics of Malaria Risk Using Graph-Based Optimization in Southern Ethiopia
Authors: Yonas Shuke Kitawa
Abstract:
Background: Although malaria incidence has fallen sharply over the past few years, the rate of decline varies by district, time, and malaria type. Despite this downturn, malaria remains a major public health threat in various districts of Ethiopia. Consequently, the present study is aimed at developing a predictive model that helps to identify the spatio-temporal variation in malaria risk from multiple Plasmodium species. Methods: We propose a multivariate spatio-temporal Bayesian model to obtain a more coherent picture of the temporally varying spatial variation in disease risk. The spatial autocorrelation in such a data set is typically modeled by a set of random effects that are assigned a conditional autoregressive prior distribution. However, the autocorrelation considered in such cases depends on a binary neighborhood matrix specified through the border-sharing rule. Here, we propose a graph-based optimization algorithm for estimating the neighborhood matrix that better represents the spatial correlation, by treating the areal units as the vertices of a graph and the neighbor relations as its edges. Furthermore, we used aggregated malaria counts from southern Ethiopia from August 2013 to May 2019. Results: We found that precipitation, temperature, and humidity are positively associated with the malaria threat in the area. On the other hand, the enhanced vegetation index, nighttime light (NTL), and distance from coastal areas are negatively associated. Moreover, nonlinear relationships were observed between malaria incidence and precipitation, temperature, and NTL. Additionally, lagged effects of temperature and humidity have a significant effect on the malaria risk from either species. A more elevated risk of P. falciparum was observed following the rainy season, and unstable transmission of P. vivax was observed in the area. Finally, P. vivax risks are less sensitive to environmental factors than those of P. falciparum. Conclusion: Improved inference was gained by employing the proposed approach in comparison to the commonly used border-sharing rule. Additionally, different covariates were identified, including delayed effects, and elevated risks of either species were observed in districts found in the central and western regions. As malaria transmission operates in a spatially continuous manner, a spatially continuous model should be employed when it is computationally feasible.
Keywords: disease mapping, MSTCAR, graph-based optimization algorithm, P. falciparum, P. vivax, waiting matrix
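As a small illustration of the objects involved (not the proposed optimization algorithm itself), the sketch below builds a binary neighborhood matrix W from a district adjacency graph and assembles a precision matrix of the common proper-CAR form Q = τ(D − ρW); the districts, edges and parameter values are invented.

```python
# Illustrative sketch: neighborhood matrix from a graph and a proper-CAR precision.
import numpy as np
import networkx as nx

districts = ["A", "B", "C", "D"]
edges = [("A", "B"), ("B", "C"), ("C", "D"), ("B", "D")]  # neighbor relations as graph edges

G = nx.Graph()
G.add_nodes_from(districts)
G.add_edges_from(edges)

W = nx.to_numpy_array(G, nodelist=districts)   # binary neighborhood matrix
D = np.diag(W.sum(axis=1))                     # number of neighbors per district

tau, rho = 1.0, 0.9                            # precision and spatial-dependence parameters
Q = tau * (D - rho * W)                        # CAR precision of the spatial random effects
print(Q)
```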
Procedia PDF Downloads 82
488 Artificial Neural Network Based Parameter Prediction of Miniaturized Solid Rocket Motor
Authors: Hao Yan, Xiaobing Zhang
Abstract:
The working mechanism of miniaturized solid rocket motors (SRMs) is not yet fully understood. It is imperative to explore its unique features. However, there are many disadvantages to using common multi-objective evolutionary algorithms (MOEAs) in predicting the parameters of the miniaturized SRM during its conceptual design phase. Initially, the design variables and objectives are constrained in a lumped parameter model (LPM) of this SRM, which leads to local optima in MOEAs. In addition, MOEAs require a large number of calculations due to their population strategy. Although the calculation time for simulating an LPM just once is usually less than that of a CFD simulation, the number of function evaluations (NFEs) is usually large in MOEAs, which makes the total time cost unacceptably long. Moreover, the accuracy of the LPM is relatively low compared to that of a CFD model due to its assumptions. CFD simulations or experiments are required for comparison and verification of the optimal results obtained by MOEAs with an LPM. The conceptual design phase based on MOEAs is a lengthy process, and its results are not precise enough due to the above shortcomings. An artificial neural network (ANN) based parameter prediction is proposed as a way to reduce time costs and improve prediction accuracy. In this method, an ANN is used to build a surrogate model that is trained with a 3D numerical simulation. In design, the original LPM is replaced by a surrogate model. Each case uses the same MOEAs, in which the calculation time of the two models is compared, and their optimization results are compared with 3D simulation results. Using the surrogate model for the parameter prediction process of the miniaturized SRMs results in a significant increase in computational efficiency and an improvement in prediction accuracy. Thus, the ANN-based surrogate model does provide faster and more accurate parameter prediction for an initial design scheme. Moreover, even when the MOEAs converge to local optima, the time cost of the ANN-based surrogate model is much lower than that of the simplified physical model LPM. This means that designers can save a lot of time during code debugging and parameter tuning in a complex design process. Designers can reduce repeated calculation costs and obtain accurate optimal solutions by combining an ANN-based surrogate model with MOEAs.Keywords: artificial neural network, solid rocket motor, multi-objective evolutionary algorithm, surrogate model
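A minimal sketch of the surrogate idea, assuming a set of samples from the expensive simulation is available: an MLP regressor stands in for the ANN, and a simple analytic function stands in for the 3D SRM simulation, so everything below is illustrative rather than the paper's model.

```python
# Sketch: fit an ANN surrogate to simulation samples, then query it cheaply
# inside an optimization loop.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

def expensive_model(x):
    # Placeholder for the 3D numerical simulation (seconds to hours per call).
    return np.sin(3 * x[:, 0]) + x[:, 1] ** 2

# 1. Build a training set from a limited number of simulation runs.
X_train = rng.uniform(-1, 1, size=(200, 2))
y_train = expensive_model(X_train)

# 2. Train the ANN surrogate.
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000, random_state=0)
surrogate.fit(X_train, y_train)

# 3. Use the cheap surrogate for the many evaluations an optimizer needs.
candidates = rng.uniform(-1, 1, size=(100_000, 2))
best = candidates[np.argmin(surrogate.predict(candidates))]
print("surrogate optimum near:", best, "true value:", expensive_model(best[None, :]))
```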
Procedia PDF Downloads 91
487 The Audiovisual Media as a Metacritical Ludicity Gesture in the Musical-Performatic and Scenic Works of Caetano Veloso and David Bowie
Authors: Paulo Da Silva Quadros
Abstract:
This work aims to point out comparative parameters between the artistic production of two exponents of the contemporary popular culture scene: Caetano Veloso (Brazil) and David Bowie (England). Both Caetano Veloso and David Bowie were pioneers in establishing an aesthetic game between various artistic expressions at the service of the music-visual scene, that is, the conceptual interconnections between several forms of aesthetic processes, such as fine arts, theatre, cinema, poetry, and literature. There are also correlations in their expressive attitudes of art, especially regarding the dialogue between the fields of art and politics (concern with respect to human rights, human dignity, racial issues, tolerance, gender issues, and sexuality, among others); the constant tension and cunning game between market, free expression and critical sense; the sophisticated, playful mechanisms of metalanguage and aesthetic metacritique. Fact is that both of them almost came to cooperate with each other in the 1970s when Caetano was in exile in England, and when both had at the same time the same music producer, who tried to bring them closer, noticing similar aesthetic qualities in both artistic works, which was later glimpsed by some music critics. Among many of the most influential issues in Caetano's and Bowie's game of artistic-aesthetic expression are, for example, the ideas advocated by the sensation of strangeness (Albert Camus), art as transcendence (Friedrich Nietzsche), the deconstruction and reconstruction of auratic reconfiguration of artistic signs (Walter Benjamin and Andy Warhol). For deepen more theoretical issues, the following authors will be used as supportive interpretative references: Hans-Georg Gadamer, Immanuel Kant, Friedrich Schiller, Johan Huizinga. In addition to the aesthetic meanings of Ars Ludens characteristics of the two artists, the following supporting references will be also added: the question of technique (Martin Heidegger), the logic of sense (Gilles Deleuze), art as an event and the sense of the gesture of art ( Maria Teresa Cruz), the society of spectacle (Guy Debord), Verarbeitung and Durcharbeitung (Sigmund Freud), the poetics of interpretation and the sign of relation (Cremilda Medina). The purpose of such interpretative references is to seek to understand, from a cultural reading perspective (cultural semiology), some significant elements in the dynamics of aesthetic and media interconnections of both artists, which made them as some of the most influential interlocutors in contemporary music aesthetic thought, as a playful vivid experience of life and art.Keywords: Caetano Veloso, David Bowie, music aesthetics, symbolic playfulness, cultural reading
Procedia PDF Downloads 169
486 Fly ash Contamination in Groundwater and its Implications on Local Climate Change
Authors: Rajkumar Ghosh
Abstract:
Fly ash, a byproduct of coal combustion, has become a prevalent environmental concern due to its potential impact on both groundwater quality and local climate change. This study aims to provide an in-depth analysis of the various mechanisms through which fly ash contaminates groundwater, as well as the possible consequences of this contamination for local climate change. The presence of fly ash in groundwater not only poses a risk to human health but also has the potential to influence local climate change through complex interactions. Although fly ash has various applications in construction and other industries, improper disposal and a lack of containment measures have led to its infiltration into groundwater systems. Through a comprehensive review of existing literature and case studies, the study examines the interactions between fly ash and groundwater systems, assesses the effects on hydrology, and discusses the implications for the broader climate. It first reviews the pathways through which fly ash enters groundwater, including leaching from disposal sites, infiltration through soil, and migration from surface water bodies, together with the physical and chemical characteristics of fly ash that contribute to its mobility and persistence in groundwater. The introduction of fly ash into groundwater can alter its chemical composition, leading to an increase in the concentration of heavy metals, metalloids, and other potentially toxic elements; the mechanisms of contaminant transport are discussed, and the potential risks to human health and ecosystems are highlighted. Fly ash contamination in groundwater may also influence the hydrological cycle through changes in groundwater recharge, discharge, and flow dynamics, and the implications of altered hydrology for local water availability, aquatic habitats, and overall ecosystem health are examined. The presence of fly ash in groundwater may further have direct and indirect effects on local climate change: its role as a potent greenhouse gas absorber and its contribution to radiative forcing are considered, along with possible feedback mechanisms between groundwater contamination and climate change, such as altered vegetation patterns and changes in local temperature and precipitation patterns. Potential mitigation and remediation techniques to minimize fly ash contamination in groundwater are then analyzed; these may include improved waste management practices, engineered barriers, groundwater remediation technologies, and sustainable fly ash utilization. This paper highlights the critical link between fly ash contamination in groundwater and its potential contribution to local climate change. It emphasizes the importance of addressing this issue promptly through a combination of preventive measures, effective management strategies, and continuous monitoring. By understanding the interconnections between fly ash contamination, groundwater quality, and the local climate, we can move towards creating a more resilient and sustainable environment for future generations. The findings of this research can assist policymakers and environmental managers in formulating sustainable strategies to mitigate fly ash contamination and minimize its contribution to climate change.
Keywords: groundwater, climate, sustainable environment, fly ash contamination
Procedia PDF Downloads 90
485 An International Curriculum Development for Languages and Technology
Authors: Miguel Nino
Abstract:
When considering the challenges of a changing and demanding globalizing world, it is important to reflect on how university students will be prepared for the realities of internationalization, marketization and intercultural conversation. The present study is an interdisciplinary program designed to respond to the needs of the global community. The proposal bridges the humanities and science through three different fields: Languages, graphic design and computer science, specifically, fundamentals of programming such as python, java script and software animation. Therefore, the goal of the four year program is twofold: First, enable students for intercultural communication between English and other languages such as Spanish, Mandarin, French or German. Second, students will acquire knowledge in practical software and relevant employable skills to collaborate in assisted computer projects that most probable will require essential programing background in interpreted or compiled languages. In order to become inclusive and constructivist, the cognitive linguistics approach is suggested for the three different fields, particularly for languages that rely on the traditional method of repetition. This methodology will help students develop their creativity and encourage them to become independent problem solving individuals, as languages enhance their common ground of interaction for culture and technology. Participants in this course of study will be evaluated in their second language acquisition at the Intermediate-High level. For graphic design and computer science students will apply their creative digital skills, as well as their critical thinking skills learned from the cognitive linguistics approach, to collaborate on a group project design to find solutions for media web design problems or marketing experimentation for a company or the community. It is understood that it will be necessary to apply programming knowledge and skills to deliver the final product. In conclusion, the program equips students with linguistics knowledge and skills to be competent in intercultural communication, where English, the lingua franca, remains the medium for marketing and product delivery. In addition to their employability, students can expand their knowledge and skills in digital humanities, computational linguistics, or increase their portfolio in advertising and marketing. These students will be the global human capital for the competitive globalizing community.Keywords: curriculum, international, languages, technology
Procedia PDF Downloads 443
484 Reinforcement of Calcium Phosphate Cement with E-Glass Fibre
Authors: Kanchan Maji, Debasmita Pani, Sudip Dasgupta
Abstract:
Calcium phosphate cement (CPC) due to its high bioactivity and optimum bioresorbability shows excellent bone regeneration capability. Despite it has limited applications as bone implant due to its macro-porous microstructure causing its poor mechanical strength. The reinforcement of apatitic CPCs with biocompatible fibre glass phase is an attractive area of research to improve its mechanical strength. Here we study the setting behaviour of Si-doped and un-doped alpha tri-calcium phosphate (α-TCP) based CPC and its reinforcement with the addition of E-glass fibre. Alpha tri-calcium phosphate powders were prepared by solid state sintering of CaCO3, CaHPO4 and tetra ethyl ortho silicate (TEOS) was used as silicon source to synthesise Si doped α-TCP powders. Alpha tri-calcium phosphate based CPC hydrolyzes to form hydroxyapatite (HA) crystals having excellent osteoconductivity and bone-replacement capability thus self-hardens through the entanglement of HA crystals. Setting time, phase composition, hydrolysis conversion rate, microstructure, and diametral tensile strength (DTS) of un-doped CPC and Si-doped CPC were studied and compared. Both initial and final setting time of the developed cement was delayed because of Si addition. Crystalline phases of HA (JCPDS 9-432), α-TCP (JCPDS 29-359) and β-TCP (JCPDS 9-169) were detected in the X-ray diffraction (XRD) pattern after immersion of CPC in simulated body fluid (SBF) for 0 hours to 10 days. The intensities of the α-TCP peaks of (201) and (161) at 2θ of 22.2°and 24.1° decreased when the time of immersion of CPC in SBF increased from 0 hours to 10 days, due to its transformation into HA. As Si incorporation in the crystal lattice stabilised the TCP phase, Si doped CPC showed a little slower rate of conversion into HA phase as compared to un-doped CPC. The SEM image of the microstructure of hardened CPC showed lower grain size of HA in un-doped CPC because of premature setting and faster hydrolysis of un-doped CPC in SBF as compared that in Si-doped CPC. Premature setting caused generation of micro and macro porosity in un-doped CPC structure which resulted in its lower mechanical strength as compared to that in Si-doped CPC. This lower porosity and greater compactness in the microstructure attributes to greater DTS values observed in Si-doped CPC. E-glass fibres of the average diameter of 12 μm were cut into approximately 1 mm in length and immersed in SBF to deposit carbonated apatite on its surface. This was performed to promote HA crystal growth and entanglement along the fibre surface to promote stronger interface between dispersed E-glass fibre and CPC matrix. It was found that addition of 10 wt% of E-glass fibre into Si-doped α-TCP increased the average DTS of CPC from 8 MPa to 15 MPa as the fibres could resist the propagation of crack by deflecting the crack tip. Our study shows that biocompatible E-glass fibre in optimum proportion in CPC matrix can enhance the mechanical strength of CPC without affecting its bioactivity.Keywords: Calcium phosphate cement, biocompatibility, e-glass fibre, diametral tensile strength
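For reference, the diametral tensile strength reported above is conventionally obtained from the diametral compression (Brazilian) test as

```latex
\mathrm{DTS} = \frac{2P}{\pi \, d \, t},
```

where P is the fracture load, d the specimen diameter and t its thickness; the abstract does not state the exact specimen geometry used, so this is quoted only as the usual convention.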
Procedia PDF Downloads 346
483 Expert Supporting System for Diagnosing Lymphoid Neoplasms Using Probabilistic Decision Tree Algorithm and Immunohistochemistry Profile Database
Authors: Yosep Chong, Yejin Kim, Jingyun Choi, Hwanjo Yu, Eun Jung Lee, Chang Suk Kang
Abstract:
For the past decades, immunohistochemistry (IHC) has played an important role in the diagnosis of human neoplasms by helping pathologists to make clearer decisions on differential diagnosis, subtyping, personalized treatment planning, and finally prognosis prediction. However, the IHC performed on various tumors in daily practice often yields conflicting results that are very challenging to interpret. Even a comprehensive diagnosis synthesizing clinical, histologic and immunohistochemical findings can be of little help in some difficult cases. Another important issue is that IHC data are increasing exponentially, and more and more information has to be taken into account. For this reason, we set out to develop an expert supporting system to help pathologists make better decisions when diagnosing human neoplasms with IHC results. We developed a probabilistic decision tree algorithm and tested it with real case data of lymphoid neoplasms, in which the IHC profile is more important for making a proper diagnosis than in other human neoplasms. We designed the probabilistic decision tree based on Bayes' theorem, programmed the computational process using MATLAB (The MathWorks, Inc., USA), and prepared an IHC profile database (about 104 disease categories and 88 IHC antibodies) based on the WHO classification by reviewing the literature. The initial probability of each neoplasm was set using epidemiologic data on lymphoid neoplasms in Korea. With the IHC results of 131 sequentially selected patients, the top three presumptive diagnoses for each case were made and compared with the original diagnoses. After review of the data, 124 out of 131 cases were used for the final analysis. As a result, the presumptive diagnoses were concordant with the original diagnoses in 118 cases (93.7%). The major reason for the discordant cases was the similarity of the IHC profile between two or three different neoplasms. The expert supporting system algorithm presented in this study is at an elementary stage and needs further optimization using more advanced technology such as deep learning with real case data, especially for differentiating T-cell lymphomas. Although it needs more refinement, it may be used to aid pathological decision-making in the future. A further application to determine IHC antibodies for a certain subset of differential diagnoses might also be possible in the near future.
Keywords: database, expert supporting system, immunohistochemistry, probabilistic decision tree
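A toy sketch of a single Bayesian step of such a decision tree: priors over disease categories are multiplied by the probability of each observed IHC result and renormalized. The diseases, markers and probabilities below are invented stand-ins, not the study's 104-category, 88-antibody database or the Korean epidemiologic priors.

```python
# Toy sketch: sequential Bayesian updating of disease probabilities from IHC results.
priors = {"DLBCL": 0.30, "Follicular lymphoma": 0.20, "Mantle cell lymphoma": 0.10,
          "Peripheral T-cell lymphoma": 0.40}

# P(marker positive | disease): a tiny invented stand-in for the IHC profile database.
likelihood = {
    "CD20":     {"DLBCL": 0.95, "Follicular lymphoma": 0.95, "Mantle cell lymphoma": 0.95,
                 "Peripheral T-cell lymphoma": 0.05},
    "CyclinD1": {"DLBCL": 0.05, "Follicular lymphoma": 0.05, "Mantle cell lymphoma": 0.90,
                 "Peripheral T-cell lymphoma": 0.05},
}

def update(priors, marker, positive=True):
    post = {}
    for disease, p in priors.items():
        l = likelihood[marker][disease]
        post[disease] = p * (l if positive else 1.0 - l)
    z = sum(post.values())
    return {d: v / z for d, v in post.items()}

posterior = update(update(priors, "CD20", True), "CyclinD1", True)
top3 = sorted(posterior.items(), key=lambda kv: kv[1], reverse=True)[:3]
print(top3)   # top three presumptive diagnoses with their posterior probabilities
```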
Procedia PDF Downloads 225
482 Wheeled Robot Stable Braking Process under Asymmetric Traction Coefficients
Authors: Boguslaw Schreyer
Abstract:
During the wheeled robot’s braking process, the extra dynamic vertical forces act on all wheels: left, right, front or rear. Those forces are directed downward on the front wheels while directed upward on the rear wheels. In order to maximize the deceleration, therefore, minimize the braking time and braking distance, we need to calculate a correct torque distribution: the front braking torque should be increased, and rear torque should be decreased. At the same time, we need to provide better transversal stability. In a simple case of all adhesion coefficients being the same under all wheels, the torque distribution may secure the optimal (maximal) control of the robot braking process, securing the minimum braking distance and a minimum braking time. At the same time, the transversal stability is relatively good. At any time, we control the transversal acceleration. In the case of the transversal movement, we stop the braking process and re-apply braking torque after a defined period of time. If we correctly calculate the value of the torques, we may secure the traction coefficient under the front and rear wheels close to its maximum. Also, in order to provide an optimum braking control, we need to calculate the timing of the braking torque application and the timing of its release. The braking torques should be released shortly after the wheels passed a maximum traction coefficient (while a wheels’ slip increases) and applied again after the wheels pass a maximum of traction coefficient (while the slip decreases). The correct braking torque distribution secures the front and rear wheels, passing this maximum at the same time. It guarantees an optimum deceleration control, therefore, minimum braking time. In order to calculate a correct torque distribution, a control unit should receive the input signals of a rear torque value (which changes independently), the robot’s deceleration, and values of the vertical front and rear forces. In order to calculate the timing of torque application and torque release, more signals are needed: speed of the robot: angular speed, and angular deceleration of the wheels. In case of different adhesion coefficients under the left and right wheels, but the same under each pair of wheels- the same under right wheels and the same under left wheels, the Select-Low (SL) and select high (SH) methods are applied. The SL method is suggested if transversal stability is more important than braking efficiency. Often in the case of the robot, more important is braking efficiency; therefore, the SH method is applied with some control of the transversal stability. In the case that all adhesion coefficients are different under all wheels, the front-rear torque distribution is maintained as in all previous cases. However, the timing of the braking torque application and release is controlled by the rear wheels’ lowest adhesion coefficient. The Lagrange equations have been used to describe robot dynamics. Matlab has been used in order to simulate the process of wheeled robot braking, and in conclusion, the braking methods have been selected.Keywords: wheeled robots, braking, traction coefficient, asymmetric
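As a rough illustration of the load-transfer reasoning above (not the paper's control law), the sketch below computes dynamic front/rear wheel loads during deceleration and splits the braking torque in proportion to them; all vehicle parameters are invented.

```python
# Illustrative sketch: braking torque split proportional to dynamic vertical loads.
def axle_loads(m, L, b, h, decel, g=9.81):
    """Dynamic front/rear axle loads of a decelerating vehicle.
    b: distance of the centre of mass from the rear axle, h: its height, L: wheelbase."""
    F_front = m * (g * b + decel * h) / L          # load shifts onto the front wheels
    F_rear  = m * (g * (L - b) - decel * h) / L    # and away from the rear wheels
    return F_front, F_rear

def torque_split(total_torque, F_front, F_rear):
    # Front/rear braking torques proportional to the instantaneous vertical loads.
    T_front = total_torque * F_front / (F_front + F_rear)
    return T_front, total_torque - T_front

F_f, F_r = axle_loads(m=80.0, L=0.9, b=0.45, h=0.3, decel=3.0)
print(torque_split(total_torque=40.0, F_front=F_f, F_rear=F_r))
```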
Procedia PDF Downloads 165
481 Transmedia and Platformized Political Discourse in a Growing Democracy: A Study of Nigeria’s 2023 General Elections
Authors: Tunde Ope-Davies
Abstract:
Transmediality and platformization as online content-sharing protocols have continued to accentuate the growing impact of the unprecedented digital revolution across the world. The rapid transformation across all sectors as a result of this revolution has continued to spotlight the increasing importance of new media technologies in redefining and reshaping the rhythm and dynamics of our private and public discursive practices. Equally, social and political activities are being impacted daily through the creation and transmission of political discourse content through multi-channel platforms such as mobile telephone communication, social media networks and the internet. It has been observed that digital platforms have become central to the production, processing, and distribution of multimodal social data and cultural content. The platformization paradigm thus underpins our understanding of how digital platforms enhance the production and heterogenous distribution of media and cultural content through these platforms and how this process facilitates socioeconomic and political activities. The use of multiple digital platforms to share and transmit political discourse material synchronously and asynchronously has gained some exciting momentum in the last few years. Nigeria’s 2023 general elections amplified the usage of social media and other online platforms as tools for electioneering campaigns, socio-political mobilizations and civic engagement. The study, therefore, focuses on transmedia and platformed political discourse as a new strategy to promote political candidates and their manifesto in order to mobilize support and woo voters. This innovative transmedia digital discourse model involves a constellation of online texts and images transmitted through different online platforms almost simultaneously. The data for the study was extracted from the 2023 general elections campaigns in Nigeria between January- March 2023 through media monitoring, manual download and the use of software to harvest the online electioneering campaign material. I adopted a discursive-analytic qualitative technique with toolkits drawn from a computer-mediated multimodal discourse paradigm. The study maps the progressive development of digital political discourse in this young democracy. The findings also demonstrate the inevitable transformation of modern democratic practice through platform-dependent and transmedia political discourse. Political actors and media practitioners now deploy layers of social media network platforms to convey messages and mobilize supporters in order to aggregate and maximize the impact of their media campaign projects and audience reach.Keywords: social media, digital humanities, political discourse, platformized discourse, multimodal discourse
Procedia PDF Downloads 88
480 Portable and Parallel Accelerated Development Method for Field-Programmable Gate Array (FPGA)-Central Processing Unit (CPU)- Graphics Processing Unit (GPU) Heterogeneous Computing
Authors: Nan Hu, Chao Wang, Xi Li, Xuehai Zhou
Abstract:
The field-programmable gate array (FPGA) has been widely adopted in the high-performance computing domain. In recent years, the embedded system-on-a-chip (SoC) contains coarse granularity multi-core CPU (central processing unit) and mobile GPU (graphics processing unit) that can be used as general-purpose accelerators. The motivation is that algorithms of various parallel characteristics can be efficiently mapped to the heterogeneous architecture coupled with these three processors. The CPU and GPU offload partial computationally intensive tasks from the FPGA to reduce the resource consumption and lower the overall cost of the system. However, in present common scenarios, the applications always utilize only one type of accelerator because the development approach supporting the collaboration of the heterogeneous processors faces challenges. Therefore, a systematic approach takes advantage of write-once-run-anywhere portability, high execution performance of the modules mapped to various architectures and facilitates the exploration of design space. In this paper, A servant-execution-flow model is proposed for the abstraction of the cooperation of the heterogeneous processors, which supports task partition, communication and synchronization. At its first run, the intermediate language represented by the data flow diagram can generate the executable code of the target processor or can be converted into high-level programming languages. The instantiation parameters efficiently control the relationship between the modules and computational units, including two hierarchical processing units mapping and adjustment of data-level parallelism. An embedded system of a three-dimensional waveform oscilloscope is selected as a case study. The performance of algorithms such as contrast stretching, etc., are analyzed with implementations on various combinations of these processors. The experimental results show that the heterogeneous computing system with less than 35% resources achieves similar performance to the pure FPGA and approximate energy efficiency.Keywords: FPGA-CPU-GPU collaboration, design space exploration, heterogeneous computing, intermediate language, parameterized instantiation
Procedia PDF Downloads 118
479 A Fourier Method for Risk Quantification and Allocation of Credit Portfolios
Authors: Xiaoyu Shen, Fang Fang, Chujun Qiu
Abstract:
Herewith we present a Fourier method for credit risk quantification and allocation in the factor-copula model framework. The key insight is that, compared to directly computing the cumulative distribution function of the portfolio loss via Monte Carlo simulation, it is, in fact, more efficient to calculate the transformation of the distribution function in the Fourier domain instead and inverting back to the real domain can be done in just one step and semi-analytically, thanks to the popular COS method (with some adjustments). We also show that the Euler risk allocation problem can be solved in the same way since it can be transformed into the problem of evaluating a conditional cumulative distribution function. Once the conditional or unconditional cumulative distribution function is known, one can easily calculate various risk metrics. The proposed method not only fills the niche in literature, to the best of our knowledge, of accurate numerical methods for risk allocation but may also serve as a much faster alternative to the Monte Carlo simulation method for risk quantification in general. It can cope with various factor-copula model choices, which we demonstrate via examples of a two-factor Gaussian copula and a two-factor Gaussian-t hybrid copula. The fast error convergence is proved mathematically and then verified by numerical experiments, in which Value-at-Risk, Expected Shortfall, and conditional Expected Shortfall are taken as examples of commonly used risk metrics. The calculation speed and accuracy are tested to be significantly superior to the MC simulation for real-sized portfolios. The computational complexity is, by design, primarily driven by the number of factors instead of the number of obligors, as in the case of Monte Carlo simulation. The limitation of this method lies in the "curse of dimension" that is intrinsic to multi-dimensional numerical integration, which, however, can be relaxed with the help of dimension reduction techniques and/or parallel computing, as we will demonstrate in a separate paper. The potential application of this method has a wide range: from credit derivatives pricing to economic capital calculation of the banking book, default risk charge and incremental risk charge computation of the trading book, and even to other risk types than credit risk.Keywords: credit portfolio, risk allocation, factor copula model, the COS method, Fourier method
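A minimal sketch of the COS-method step described above: the cumulative distribution function is recovered from a characteristic function by a Fourier-cosine expansion whose cosine terms are integrated analytically. A standard normal characteristic function is used here as a stand-in for the (conditional) portfolio-loss distribution, and the truncation range and number of terms are illustrative choices, not the paper's settings.

```python
# Sketch: recovering a CDF from a characteristic function with the COS expansion.
import numpy as np

def cos_cdf(cf, x, a=-10.0, b=10.0, N=256):
    k = np.arange(N)
    u = k * np.pi / (b - a)
    A = np.real(cf(u) * np.exp(-1j * u * a))   # Fourier-cosine coefficients of the density
    A[0] *= 0.5                                # first term of the series is halved
    # Analytic integral of cos(k*pi*(y-a)/(b-a)) from a to x.
    psi = np.empty(N)
    psi[0] = x - a
    psi[1:] = (b - a) / (k[1:] * np.pi) * np.sin(u[1:] * (x - a))
    return 2.0 / (b - a) * np.dot(A, psi)

std_normal_cf = lambda u: np.exp(-0.5 * u ** 2)   # stand-in characteristic function
print(cos_cdf(std_normal_cf, 0.0))     # ~0.5
print(cos_cdf(std_normal_cf, 1.645))   # ~0.95, i.e. a 95% quantile check
```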
Procedia PDF Downloads 168
478 Criticism and Theorizing of Architecture and Urbanism in the Creativity Cinematographic Film
Authors: Wafeek Mohamed Ibrahim Mohamed
Abstract:
In the era of globalization, the camera of the cinematographic film plays a very important role in terms of monitoring and documenting what it was and distinguished the built environment of architectural and Urbanism. Moving the audience to the out-going backward through the cinematographic film and its stereophonic screen by which the picture appears at its best and its coexistence reached now its third dimension. The camera has indicated to the city shape with its paths, (alley) lanes, buildings and its architectural style. We have seen the architectural styles in its cinematic scenes which remained a remembrance in its history, in spite of the fact that some of which has been disappearing as what happened to ‘Boulak Bridge’ in Cairo built by ‘Eiffel’ and it has been demolished, but it remains a remembrance we can see it in the films of ’Usta Hassan’and A Crime in the Quiet Neighborhood. The purpose of the fundamental research is an attempt to reach a critical view of the idea of criticism and theorizing for Architecture and Urbanism in the cinematographic film and their relationship and reflection on the ‘audience’ understanding of the public opinion related to our built environment of Architectural and Urbanism with its problems and hardness. It is like as a trial to study the Architecture and Urbanism of the built environment in the cinematographic film and hooking up (linking) a realistic view of the governing conceptual significance thereof. The aesthetic thought of our traditional environment, in a psychological and anthropological framework, derives from the cinematic concept of the Architecture and Urbanism of the place and the dynamics of the space. The architectural space considers the foundation stone of the cinematic story and the main background of the events therein, which integrate the audience into a romantic trip to the city through its symbolized image of the spaces, lanes [alley], etc. This will be done through two main branches: firstly, Reviewing during time pursuit of the Architecture and Urbanism in the cinematographic films the thirties ago in the Egyptian cinema [onset from the film ‘Bab El Hadid’ to the American University at a film of ‘Saidi at the American University’]. The research concludes the importance of the need to study the cinematic films which deal with our societies, their architectural and Urbanism concerns whether the traditional ones or the contemporary and their crisis (such as the housing crisis in the film of ‘Krakoun in the street’, etc) to study the built environment with its architectural dynamic spaces through a modernist view. In addition, using the cinema as an important Media for spreading the ideas, documenting and monitoring the current changes in the built environment through its various dramas and comedies, etc. The cinema is considered as a mirror of the society and its built environment over the epochs. It assured the unique case constituted by cinema with the audience (public opinion) through a sense of emptiness and forming the mental image related to the city and the built environment.Keywords: architectural and urbanism, cinematographic architectural, film, space in the film, media
Procedia PDF Downloads 238
477 New Hardy Type Inequalities of Two-Dimensional on Time Scales via Steklov Operator
Authors: Wedad Albalawi
Abstract:
Mathematical inequalities have been at the core of mathematical study and are used in almost all branches of mathematics as well as in various areas of science and engineering. The inequalities of Hardy, Littlewood, and Polya formed the first significant systematic treatment of the subject; that work presented fundamental ideas, results, and techniques, and it has had much influence on research in various branches of analysis. Since 1934, numerous inequalities have been produced and studied in the literature. Furthermore, some inequalities have been formulated in terms of operators: in 1989, weighted Hardy inequalities were obtained for integration operators, and weighted estimates were then obtained for Steklov operators and used in the solution of the Cauchy problem for the wave equation. These results were improved upon in 2011 to include the boundedness of integral operators from the weighted Sobolev space to the weighted Lebesgue space. Some inequalities have been demonstrated and improved using the Hardy–Steklov operator. More recently, many integral inequalities have been improved by means of differential operators, and the Hardy inequality has been one of the tools used in the study of solutions of differential equations. Dynamic inequalities of Hardy and Copson type have been extended and improved by various integral operators. These inequalities are interesting to apply in different fields of mathematics (function spaces, partial differential equations, mathematical modeling). Some inequalities involving the Copson and Hardy inequalities have appeared on time scales, yielding new special versions of them. A time scale is an arbitrary nonempty closed subset of the real numbers. Dynamic inequalities on time scales have received a lot of attention in the literature and have become a major field in pure and applied mathematics, with many applications of dynamic equations on time scales to quantum mechanics, electrical engineering, neural networks, heat transfer, combinatorics, and population dynamics. This study focuses on Hardy and Copson inequalities, using the Steklov operator on time scales in double integrals to obtain special cases of time-scale inequalities of Hardy and Copson in higher dimensions. The advantage of this study is that it uses the one-dimensional classical Hardy inequality to obtain higher-dimensional time-scale versions that can be applied in the solution of the Cauchy problem for the wave equation. In addition, the obtained inequalities have various applications involving discontinuous domains, such as bug populations, phytoremediation of metals, wound healing, and maximization problems. The proofs are carried out by introducing restrictions on the operator in several cases, and they rely on concepts from time-scale calculus, which allows many problems from the theories of differential and difference equations to be unified and extended, together with the chain rule, some properties of multiple integrals on time scales, theorems of Fubini type, and Hölder's inequality.
Keywords: time scales, inequality of Hardy, inequality of Copson, Steklov operator
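For reference, the classical one-dimensional Hardy integral inequality invoked above can be stated as follows; this is the standard textbook formulation, not a statement quoted from the paper:

```latex
% Classical Hardy integral inequality: for p > 1 and a non-negative
% measurable function f on (0, \infty),
\[
\int_0^{\infty} \left( \frac{1}{x}\int_0^{x} f(t)\,dt \right)^{p} dx
\;\le\; \left( \frac{p}{p-1} \right)^{p} \int_0^{\infty} f(x)^{p}\,dx ,
\]
% where the constant (p/(p-1))^p is the best possible.
```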
Procedia PDF Downloads 97
476 Selenuranes as Cysteine Protease Inhibitors: Theoretical Investigation on Model Systems
Authors: Gabriela D. Silva, Rodrigo L. O. R. Cunha, Mauricio D. Coutinho-Neto
Abstract:
In the last four decades, the biological activities of selenium compounds have received great attention, particularly for hypervalent derivatives of selenium(IV) used as enzyme inhibitors. The unregulated activity of cysteine proteases is related to the development of several pathologies, such as neurological disorders, cardiovascular diseases, obesity, rheumatoid arthritis, cancer, and parasitic infections. These enzymes are therefore a valuable target for designing new small-molecule inhibitors such as selenuranes. Even though there have been advances in the synthesis and design of new selenurane-based inhibitors, little is known about their mechanism of action. It is accepted that inhibition occurs through the reaction between a thiol group of the enzyme and the chalcogen atom. However, several open questions remain about the nature of the mechanism (associative vs. dissociative) and about the nature of the reactive species in solution under physiological conditions. In this work, we performed a theoretical investigation on model systems to study the possible routes of the substitution reactions. Among the nucleophiles present in biological systems, our interest is centered on the thiol groups of the cysteine proteases and the hydroxyl groups of the aqueous environment. We therefore expect this study to clarify the possibility of a reaction route in two stages, the first consisting of the substitution of the chlorine atoms by hydroxyl groups, followed by the replacement of these hydroxyl groups by thiol groups in the selenuranes. The structures of the selenuranes and nucleophiles were optimized using density functional theory with the B3LYP functional and a 6-311+G(d) basis set. Solvent was treated using the IEFPCM method as implemented in the Gaussian 09 code. Our results indicate that the hydroxyl groups from water react preferentially with the selenuranes and are then replaced by the thiol groups, with energy values of -106.0730423 kcal/mol for the double substitution by hydroxyl groups and 96.63078511 kcal/mol for the thiol groups. Solvation and pH reduction promote this route, increasing the energy value for the reaction with hydroxyl groups to -50.75637672 kcal/mol and decreasing the energy value for the thiol groups to 7.917767189 kcal/mol. Alternative pathways were analyzed for monosubstitution (considering the competition between Cl, OH, and SH groups), and they suggest the same route. Similar results were obtained for the aliphatic and aromatic selenuranes studied.
Keywords: chalcogens, computational study, cysteine proteases, enzyme inhibitors
Procedia PDF Downloads 305
475 Considering International/Local Peacebuilding Partnerships: The Stoplights Analysis System
Authors: Charles Davidson
Abstract:
This paper presents the Stoplight Analysis System of Partnering Organizations Readiness, offering a structured framework for evaluating the feasibility of conflict resolution collaborations, which is especially crucial in conflict areas; it employs a colour-coded approach with specific assessment points, with implications for more informed decision-making and improved outcomes in peacebuilding initiatives. Derived from a total of 40 years of practical peacebuilding experience of the project's two researchers, as well as from interviews with various other peacebuilding actors, the Stoplight Analysis System is a comprehensive framework designed to facilitate effective collaboration in international/local peacebuilding partnerships by evaluating the readiness both of potential partner organisations and of the location of the proposed project. The system employs a colour-coded approach, categorising potential partnerships into three distinct indicators: Red (no-go), Yellow (requires further research), and Green (promising, go ahead). Within each category, specific points are identified for assessment, guiding decision-makers in evaluating the feasibility and potential success of collaboration. The Red category signals significant barriers, prompting an immediate stop to the consideration of partnership. The Yellow category encourages deeper investigation to determine whether potential issues can be mitigated, while the Green category signifies organisations deemed ready for collaboration. This systematic and structured approach empowers decision-makers to make informed choices, enhancing the likelihood of successful and mutually beneficial partnerships. Methodologically, this paper draws on interviews with peacebuilders from around the globe, scholarly research on extant strategies, and a collaborative review of programming by the project's two authors from their own time in the field. The formalised model has been employed for the past two years across a wide range of partnership considerations and has been adjusted according to its field experimentation. This research holds significant importance in the field of conflict resolution, as it provides a systematic and structured approach to evaluating peacebuilding partnerships. In conflict-affected regions, where the dynamics are complex and challenging, the Stoplight Analysis System offers decision-makers a practical tool to assess the readiness of partnering organisations. This approach can enhance the efficiency of conflict resolution efforts by ensuring that resources are directed towards partnerships with a higher likelihood of success, ultimately contributing to more effective and sustainable peacebuilding outcomes.
Keywords: collaboration, conflict resolution, partnerships, peacebuilding
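As a purely illustrative aid, the sketch below encodes the three-colour screening logic described above. The particular inputs (counts of significant barriers and open questions) and the thresholds are hypothetical assumptions made here for demonstration; they are not the authors' assessment points.

```python
from enum import Enum

class Light(Enum):
    RED = "no-go"
    YELLOW = "requires further research"
    GREEN = "promising, go ahead"

def assess_partnership(significant_barriers: int, open_questions: int) -> Light:
    """Map assessment findings for a potential partner/location to a stoplight."""
    if significant_barriers > 0:   # any significant barrier stops consideration
        return Light.RED
    if open_questions > 0:         # unresolved issues call for deeper investigation
        return Light.YELLOW
    return Light.GREEN

print(assess_partnership(significant_barriers=0, open_questions=2).value)
# requires further research
```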
Procedia PDF Downloads 64
474 Digital Literacy Transformation and Implications in Institutions of Higher Learning in Kenya
Authors: Emily Cherono Sawe, Elisha Ondieki Makori
Abstract:
Knowledge and digital economies have brought challenges and potential opportunities for universities to innovate and improve the quality of learning. Disruptive technologies and information dynamics continue to transform and change the landscape of teaching, scholarship, and research activities across universities. Digital literacy is a fundamental and imperative element in higher education and training, as witnessed during the COVID-19 pandemic, which caused unprecedented disruption in universities, where teaching and learning came to depend on digital innovations and applications. Academic services and activities were provided online, including library and information services, and information professionals were forced to adopt various digital platforms in order to provide information services to patrons. University libraries' roles in fulfilling educational responsibilities continue to evolve in response to changes in pedagogy, technology, economy, society, and the policies and strategies of parent institutions. Libraries are currently undergoing considerable transformational change as a result of the shift to a digital environment. Academic libraries have been at the forefront of providing online learning resources and online information services, as well as supporting students and staff in developing digital literacy skills via online courses, tutorials, and workshops. Digital literacy transformation and information staff are crucial elements in the prioritization of skills and knowledge for lifelong learning. The purpose of this baseline research is to assess the implications of digital literacy transformation in institutions of higher learning in Kenya and to share appropriate strategies to leverage and sustain teaching and research. The objectives include examining the leverage and preparedness of the digital literacy environment in streamlining learning in the universities, exploring and benchmarking imperative digital competences for information professionals, establishing the perception of information professionals towards digital literacy skills, and determining lessons, best practices, and strategies to accelerate digital literacy transformation for effective research and learning in the universities. The study will adopt a descriptive research design using questionnaires and document analysis as the instruments for data collection. The target population is librarians and information professionals, as well as academics in public and private universities teaching information literacy programmes. Data and information are to be collected through an online structured questionnaire and digital face-to-face interviews. The findings and results will provide promising lessons together with best practices and strategies to transform and change digital literacies in university libraries in Kenya.
Keywords: digital literacy, digital innovations, information professionals, librarians, higher education, university libraries, digital information literacy
Procedia PDF Downloads 98
473 Single-parent Families and the Criminal Ramifications on Children in the United Kingdom: A Systematic Review
Authors: Naveed Ali
Abstract:
Under the construct of the 'traditional family' set-up (a male and a female parent) in the United Kingdom, the absence of a male parental figure remains a critical factor associated with an elevated risk of criminal behaviour among youths. Empirical evidence suggests that father absence significantly correlates with increased rates of juvenile delinquency and criminality. For instance, data reveal that approximately 63% of young offenders in the United Kingdom originate from single-parent households, predominantly those without a father. Moreover, research shows that boys from father-absent homes are three times more likely to exhibit antisocial behaviour than their peers from two-parent families. This absence can negatively affect educational attainment, with children from fatherless homes being twice as likely to leave school prematurely, thereby increasing their vulnerability to peer influence and gang affiliation, which are key pathways into criminal activity. Both legal frameworks and social policies in the United Kingdom acknowledge the pivotal role of family stability in crime prevention. Initiatives including parenting support programmes, community-based interventions, and targeted youth services seek to address the challenges faced by single-parent families and to mitigate the criminogenic effects of father absence. Despite these efforts, persistent challenges remain, including the need to address the broader socioeconomic determinants of family instability and to refine legal strategies that effectively address the root causes of youth offending linked to the absence of a male parental figure. A nuanced understanding of these dynamics is essential for developing more effective legal and social interventions aimed at reducing juvenile delinquency and supporting at-risk populations within the United Kingdom. This paper will highlight the significant impact of the absence of a male parental figure on youth crime rates in the United Kingdom and the need for enhanced legal and social responses. By examining the interplay between family structure and juvenile offending, the paper will underline the importance of developing more comprehensive interventions that address both familial factors and the wider socioeconomic context. The findings aim to guide policymakers and practitioners in creating more effective strategies to reduce youth crime, ultimately strengthening support systems for vulnerable families and mitigating the adverse effects of father absence on young individuals.
Keywords: criminality, family law, legal framework, the United Kingdom perspective
Procedia PDF Downloads 31
472 The Gypsy Community Facing the Sexual Orientation: An Empirical Approach to the Attitudes of the Gypsy Population of Granada Towards Homosexual Sex-Affective Relationships
Authors: Elena Arquer Cuenca
Abstract:
The Gypsy (Roma) community has been a mistreated and rejected group since its arrival in the Iberian Peninsula in the 15th century. At present, despite being the largest ethnic minority both in Spain and in Europe, and despite the various legal and social initiatives in favour of equality, it continues to suffer discrimination from general society. This has fostered a strengthening of the in-group, accompanied by cultural conservatism as a form of self-protection. Despite the current trend towards the normalization of sexual diversity in modern societies, LGB people continue to suffer discrimination, especially in more traditional environments or communities. Rejection on the grounds of sexual orientation within the family or community can hinder the free development of the person and compromise peaceful coexistence. The present work is intended as an approach to the attitudes of the Gypsy population towards non-heterosexual sexual orientation. The general objective is to understand how the Gypsy population views homosexual sex-affective relationships, in order to assess whether this has any impact on family and community coexistence. The following specific objectives derive from it: to find out whether there is a relationship between the dichotomous Roma gender system and the acceptance or rejection of homosexuality; to analyse whether sexual orientation has an impact on the coexistence of the Roma family and community; to analyse whether the historical discrimination suffered by the Roma population favours the maintenance of the patriarchal heterosexual reproductive family; and, lastly, to explore whether ICTs have promoted the process of normalisation and/or acceptance of homosexuality within the Roma community. To achieve these objectives, a bibliographical and documentary review was carried out, together with semi-structured interviews in which four Gypsy people participated (two women and two men of different ages). One of the main findings was the inappropriateness of using the homogenising category 'Gypsy People' at present, given the great diversity among Roma communities. Moreover, the difficulty in accepting homosexuality seems to be related to the fact that the heterosexual reproductive family has been the main survival mechanism of Roma communities over centuries. It is concluded, however, that attitudes towards homosexuality vary depending on the socio-economic and cultural context and on factors such as age or professed religion. Three main contributions of this research are: firstly, the inclusion of sexual orientation as a variable to be considered when analysing peaceful coexistence; secondly, the consideration of socio-historical dynamics and structures of inequality when analysing Roma attitudes towards homosexuality; and finally, the recognition of the processual nature of socio-cultural changes.
Keywords: gender, homosexuality, ICTs, peaceful coexistence, Roma community, sexual orientation
Procedia PDF Downloads 87
471 The Effect of Acute Muscular Exercise and Training Status on Haematological Indices in Adult Males
Authors: Ibrahim Musa, Mohammed Abdul-Aziz Mabrouk, Yusuf Tanko
Abstract:
Introduction: Long-term physical training affects the performance of athletes, especially females. Soccer, a team sport played on an outdoor field, requires an adequate oxygen transport system to sustain maximal aerobic power during exercise and complete 90 minutes of competitive play. Suboptimal haematological status has often been recorded in athletes engaged in intensive physical activity. It may be due to iron depletion caused by hemolysis or to haemodilution resulting from plasma volume expansion. There is a lack of data regarding the dynamics of red blood cell variables in male football players. We hypothesized that a long competitive season involving frequent matches and intense training could influence red blood cell variables, as a consequence of repeated physical loads, when compared with a sedentary lifestyle. Methods: This cross-sectional study was carried out on 40 adult males (20 athletes and 20 non-athletes) between 18 and 25 years of age. The 20 apparently healthy male non-athletes were taken as the sedentary group, and the 20 male footballers comprised the study group. The university institutional review board (ABUTH/HREC/TRG/36) gave approval for all procedures in accordance with the Declaration of Helsinki. Red blood cell (RBC) concentration, packed cell volume (PCV), and plasma volume were measured in the fasting state and immediately after exercise. Statistical analysis was performed with SPSS/Win 20.0, using Student's paired and unpaired t-tests for comparisons within and between the groups, respectively. Results: Immediately after the termination of exercise, the mean RBC count and PCV decreased significantly (p<0.005), with a significant increase (p<0.005) in plasma volume, compared with pre-exercise values in both groups. In addition, the post-exercise RBC count was significantly higher in the untrained group (261.10±8.5) than in the trained group (255.20±4.5). However, there were no significant differences in the post-exercise hematocrit and plasma volume between the sedentary group and the footballers. Moreover, besides the changes in pre-exercise values between the sedentary men and the football players, the resting red blood cell count and plasma volume (PV %) were significantly (p < 0.05) higher in the sedentary group (306.30±10.05 × 10⁴/mm³; 58.40±0.54%) than in the football players (293.70±4.65 × 10⁴/mm³; 55.60±1.18%). On the other hand, the sedentary group exhibited a significantly (p < 0.05) lower PCV (41.60±0.54%) than the football players (44.40±1.18%). Conclusions: It is therefore proposed that the acute football-exercise-induced reduction in RBC and PCV is entirely due to plasma volume expansion and not to red blood cell hemolysis. In addition, training status influenced the haematological indices of male football players differently from those of the sedentary group at rest, due to an adaptive response. This finding is novel.
Keywords: haematological indices, performance status, sedentary, male football players
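As a minimal sketch of the statistical comparisons described above, the snippet below runs a paired t-test (pre vs. post exercise within a group) and an unpaired t-test (athletes vs. sedentary at rest) using scipy rather than SPSS; the PCV arrays are made-up illustrative values, not the study's data.

```python
import numpy as np
from scipy import stats

# Hypothetical PCV values (%) for illustration only
pre_pcv_athletes  = np.array([44.1, 45.0, 43.8, 44.9, 44.2])
post_pcv_athletes = np.array([42.0, 43.1, 41.9, 42.8, 42.5])
pre_pcv_sedentary = np.array([41.4, 41.9, 41.2, 41.8, 41.7])

# Within-group comparison (pre vs. post exercise): paired t-test
t_within, p_within = stats.ttest_rel(pre_pcv_athletes, post_pcv_athletes)

# Between-group comparison (athletes vs. sedentary at rest): unpaired t-test
t_between, p_between = stats.ttest_ind(pre_pcv_athletes, pre_pcv_sedentary)

print(f"paired:   t = {t_within:.2f}, p = {p_within:.4f}")
print(f"unpaired: t = {t_between:.2f}, p = {p_between:.4f}")
```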
Procedia PDF Downloads 258
470 A Local Tensor Clustering Algorithm to Annotate Uncharacterized Genes with Many Biological Networks
Authors: Paul Shize Li, Frank Alber
Abstract:
A fundamental task of clinical genomics is to unravel the functions of genes and their associations with disorders. Although experimental biology has made efforts to discover and elucidate the molecular mechanisms of individual genes over the past decades, about 40% of human genes still have unknown functions, not to mention the diseases they may be related to. For biologists interested in a particular gene with unknown functions, a powerful computational method tailored to inferring the functions and disease relevance of uncharacterized genes is strongly needed. Studies have shown that genes strongly linked to each other in multiple biological networks are more likely to have similar functions. This indicates that the densely connected subgraphs in multiple biological networks are useful in the functional and phenotypic annotation of uncharacterized genes. Therefore, in this work, we have developed an integrative network approach to identify frequent local clusters, defined as those densely connected subgraphs that frequently occur in multiple biological networks and contain the query gene that has few or no disease or function annotations. This local clustering algorithm models multiple biological networks sharing the same gene set as a three-dimensional matrix, the so-called tensor, and employs a tensor-based optimization method to efficiently find the frequent local clusters. Specifically, massive public gene expression data sets that comprehensively cover dynamic, physiological, and environmental conditions are used to generate hundreds of gene co-expression networks. By integrating these gene co-expression networks, for a given uncharacterized gene of interest to a biologist, the proposed method can be applied to identify the frequent local clusters that contain this uncharacterized gene. Finally, those frequent local clusters are used for the function and disease annotation of the uncharacterized gene. This local tensor clustering algorithm outperformed a competing tensor-based algorithm in both module discovery and running time. We also demonstrated the use of the proposed method on real data comprising hundreds of gene co-expression networks and showed that it can comprehensively characterize the query gene. This study therefore provides a new tool for annotating uncharacterized genes and has great potential to assist clinical genomic diagnostics.
Keywords: local tensor clustering, query gene, gene co-expression network, gene annotation
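The sketch below illustrates the tensor representation described above: several co-expression networks over the same gene set are stacked into a genes-by-genes-by-networks array, and a candidate cluster around a query gene is scored by how densely and how frequently it is connected across the network "slices". The scoring rule is a simplified illustration and not the authors' optimization method; the toy networks and threshold are assumptions made here.

```python
import numpy as np

def build_tensor(adjacency_matrices):
    """Stack N genes-by-genes adjacency matrices into a 3D tensor (genes x genes x networks)."""
    return np.stack(adjacency_matrices, axis=2)

def cluster_frequency(tensor, member_idx, density_threshold=0.5):
    """Fraction of networks in which the candidate cluster is densely connected."""
    sub = tensor[np.ix_(member_idx, member_idx)]            # cluster x cluster x networks
    n = len(member_idx)
    possible_edges = n * (n - 1) / 2
    # per-network edge density of the candidate subgraph (upper triangle only)
    densities = np.triu(sub.transpose(2, 0, 1), k=1).sum(axis=(1, 2)) / possible_edges
    return np.mean(densities >= density_threshold)

# Toy example: 3 symmetric binary networks over 5 genes; the query gene has index 0
rng = np.random.default_rng(0)
nets = [(rng.random((5, 5)) > 0.5).astype(float) for _ in range(3)]
nets = [np.triu(a, 1) + np.triu(a, 1).T for a in nets]      # symmetrize, drop self-loops
T = build_tensor(nets)
print(cluster_frequency(T, member_idx=[0, 1, 2]))
```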
Procedia PDF Downloads 169
469 Forging A Distinct Understanding of Implicit Bias
Authors: Benjamin D Reese Jr
Abstract:
Implicit bias is understood as unconscious attitudes, stereotypes, or associations that can influence the cognitions, actions, decisions, and interactions of an individual without intentional control. These unconscious attitudes or stereotypes are often directed toward specific groups of people based on their gender, race, age, perceived sexual orientation, or other social categories. Since the late 1980s, there has been a proliferation of research hypothesizing that implicit bias arises because the brain must process millions of bits of information every second; hence, one's prior learning history provides 'shortcuts'. As soon as one sees someone of a certain race, one has immediate associations based on past learning and might make assumptions about that person's competence, skill, or danger. These assumptions lie outside conscious awareness. In recent years, an alternative conceptualization has been proposed. The 'bias of crowds' theory hypothesizes that a given context or situation influences the degree of accessibility of particular biases. For example, in certain geographic communities in the United States, there is a long-standing and deeply ingrained history of structures, policies, and practices that contribute to racial inequities and bias toward African Americans. Hence, negative biases toward African Americans are more accessible in such contexts or communities. This theory does not focus on individual brain functioning or cognitive 'shortcuts'. Therefore, attempts to modify individual perceptions or learning might have a negligible impact on the embedded environmental systems or policies within certain contexts or communities. From the 'bias of crowds' perspective, high levels of racial bias in a community can be reduced by making fundamental changes in structures, policies, and practices to create a more equitable context or community, rather than by focusing on training or education aimed at reducing an individual's biases. The current paper acknowledges and supports the foundational role of long-standing structures, policies, and practices that maintain racial inequities, as well as inequities related to other social categories, and highlights the critical need to continue organizational, community, and national efforts to eliminate those inequities. It also makes a case for providing individual leaders with a deep understanding of how implicit biases affect cognitions, actions, decisions, and interactions, so that those leaders might more effectively develop structural changes in the processes and systems under their purview. This approach incorporates both the importance of an individual's learning history and the key variables of the 'bias of crowds' theory. The paper also offers a model for leadership education, as well as examples of structural changes leaders might consider.
Keywords: implicit bias, unconscious bias, bias, inequities
Procedia PDF Downloads 13
468 Lead Removal From Ex-Mining Pond Water by Electrocoagulation: Kinetics, Isotherm, and Dynamic Studies
Authors: Kalu Uka Orji, Nasiman Sapari, Khamaruzaman W. Yusof
Abstract:
Exposure of galena (PbS), teallite (PbSnS2), and other associated minerals during mining activities releases lead (Pb) and other heavy metals into the mining water through oxidation and dissolution. Heavy metal pollution has become an environmental challenge. Lead, for instance, can cause toxic effects on human health, including brain damage. Ex-mining pond water has been reported to contain lead concentrations as high as 69.46 mg/L, and lead is not easily removed from water by conventional treatment. A promising and emerging treatment technology for lead removal is the electrocoagulation (EC) process. However, some of the problems associated with EC are systematic reactor design, the selection of optimum operating parameters, and scale-up, among others. This study investigated an EC process for the removal of lead from synthetic ex-mining pond water using a batch reactor and Fe electrodes. The effects of various operating parameters on lead removal efficiency were examined. The results indicated that a maximum removal efficiency of 98.6% was achieved at an initial pH of 9, a current density of 15 mA/cm2, an electrode spacing of 0.3 cm, a treatment time of 60 minutes, liquid motion by magnetic stirring (LM-MS), and an electrode arrangement of BP-S. The experimental data were further modeled and optimized using a 2-level, 4-factor full factorial design within a response surface methodology (RSM). The four factors optimized were current density, electrode spacing, electrode arrangement, and liquid motion driving mode (LM). Based on the regression model and the analysis of variance (ANOVA) at 0.01%, the results showed that increasing the current density and using LM-MS increased the removal efficiency, while the reverse was the case for electrode spacing. The model predicted an optimal lead removal efficiency of 99.962% at an electrode spacing of 0.38 cm, alongside the other optimized parameters. Applying the predicted parameters, a lead removal efficiency of 100% was achieved. The electrode and energy consumptions were 0.192 kg/m3 and 2.56 kWh/m3, respectively. Meanwhile, the adsorption kinetic studies indicated that the overall lead adsorption system follows the pseudo-second-order kinetic model. The adsorption dynamics were also random, spontaneous, and endothermic, and higher process temperatures enhanced the adsorption capacity. Furthermore, the adsorption isotherm fitted the Freundlich model better than the Langmuir model, describing adsorption on a heterogeneous surface and showing good adsorption efficiency of the Fe electrodes. Adsorption of Pb2+ onto the Fe electrodes was a complex reaction involving more than one mechanism. The overall results proved that EC is an efficient technique for lead removal from synthetic mining pond water. The findings of this study would have application in the scale-up of EC reactors and in the design of electrocoagulation-based water treatment plants for feed-water sources that contain lead.
Keywords: ex-mining water, electrocoagulation, lead, adsorption kinetics
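As a minimal sketch of the model fitting mentioned above, the snippet below fits a pseudo-second-order kinetic model and a Freundlich isotherm with scipy; the uptake and equilibrium data arrays are hypothetical placeholders, not the measurements from this study.

```python
import numpy as np
from scipy.optimize import curve_fit

def pseudo_second_order(t, qe, k2):
    """Adsorbed amount q(t) for the pseudo-second-order kinetic model."""
    return (k2 * qe**2 * t) / (1.0 + k2 * qe * t)

def freundlich(ce, kf, n):
    """Freundlich isotherm: qe = Kf * Ce**(1/n)."""
    return kf * ce**(1.0 / n)

# Hypothetical data: uptake vs. time (min, mg/g) and equilibrium points (mg/L, mg/g)
t      = np.array([5, 10, 20, 30, 45, 60], dtype=float)
qt     = np.array([3.1, 4.8, 6.2, 6.9, 7.3, 7.5])
ce     = np.array([0.5, 1.2, 2.5, 5.0, 10.0])
qe_obs = np.array([2.0, 3.1, 4.4, 6.0, 8.3])

(qe_fit, k2_fit), _ = curve_fit(pseudo_second_order, t, qt, p0=[8.0, 0.01])
(kf_fit, n_fit), _  = curve_fit(freundlich, ce, qe_obs, p0=[2.0, 2.0])

print(f"pseudo-second-order: qe = {qe_fit:.2f} mg/g, k2 = {k2_fit:.4f} g/(mg*min)")
print(f"Freundlich:          Kf = {kf_fit:.2f}, 1/n = {1.0 / n_fit:.2f}")
```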
Procedia PDF Downloads 149