Search results for: predicting models
1495 An Integrated Label Propagation Network for Structural Condition Assessment
Authors: Qingsong Xiong, Cheng Yuan, Qingzhao Kong, Haibei Xiong
Abstract:
Deep-learning approaches based on vibration responses have attracted increasing attention in rapid structural condition assessment, while obtaining sufficient measured training data with corresponding labels is relatively costly and even inaccessible in practical engineering. This study proposes an integrated label propagation network for structural condition assessment, which is able to diffuse labels from continuously generated measurements of the intact structure to unlabeled measurements of damage scenarios. The integrated network embeds damage-sensitive feature extraction by a deep autoencoder and pseudo-label propagation by optimized fuzzy clustering, whose architecture and mechanism are elaborated. With a sophisticated network design and specific strategies for improving performance, the present network extends the strengths of self-supervised representation learning, unsupervised fuzzy clustering, and supervised classification algorithms into an integrated framework for assessing damage conditions. Both numerical simulations and full-scale laboratory shaking table tests of a two-story building structure were conducted to validate its capability of detecting post-earthquake damage. The identification accuracy of the present network was 0.95 in numerical validations and 0.86 on average in laboratory case studies. It should be noted that the whole training procedure of all models involved in the network strictly does not rely on any labeled data of damage scenarios but only on several samples of the intact structure, which indicates a significant advantage in model adaptability and feasible applicability in practice.
Keywords: autoencoder, condition assessment, fuzzy clustering, label propagation
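As a rough illustration of the pseudo-label propagation step the abstract describes, the sketch below runs a minimal fuzzy c-means (implemented from scratch in NumPy) on synthetic two-dimensional "damage-sensitive features" and propagates the only known label (intact) to the unlabeled cluster. The data, cluster count, and feature dimensionality are invented for the example and do not reproduce the paper's autoencoder pipeline.

```python
import numpy as np

def fuzzy_cmeans(X, n_clusters, m=2.0, n_iter=100, seed=0):
    """Minimal fuzzy c-means: returns the membership matrix U and the centers."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], n_clusters))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # distances of every sample to every center (small epsilon avoids /0)
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)
    return U, centers

# Synthetic "damage-sensitive features": two separated blobs standing in for
# intact vs. damaged responses (invented data, not the paper's autoencoder output).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, size=(50, 2)),   # labeled intact samples
               rng.normal(3.0, 0.3, size=(50, 2))])  # unlabeled damage samples

U, centers = fuzzy_cmeans(X, n_clusters=2)
pseudo_labels = U.argmax(axis=1)
# Propagate the only known label: the cluster holding most of the labeled
# intact samples is tagged "intact"; the other cluster inherits "damaged".
intact_cluster = int(np.bincount(pseudo_labels[:50]).argmax())
```

The argmax over fuzzy memberships plays the role of the hard pseudo-labels that the downstream supervised classifier would train on.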
Procedia PDF Downloads 97
1494 The Ideology of the Jordanian Media Women’s Discourse: Lana Mamkgh as an Example
Authors: Amani Hassan Abu Atieh
Abstract:
This study aims to examine the patterns of ideology reflected in the written discourse of women writers in the Jordanian media, taking Lana Mamkgh as an example. It critically analyzes the discursive, linguistic, and cognitive representations that she employs as an agent in the institutionalized discourse of the media. Grounded in van Dijk’s critical discourse analysis approach to sociocognitive discourse studies, the present study builds a multilayer framework that encompasses van Dijk’s triangle: discourse, society, and cognition. Specifically, the study analyzes, at both micro and macro levels, the underlying cognitive processes and structures, mainly ideology and discursive strategies, that are functional in the production of women’s discourse in terms of meaning, form, and function. The cognitive processes that social actors adopt are underpinned by experience/context and semantic mental models on the one hand and social cognition on the other. The study is based on qualitative research and adopts purposive sampling, taking as its sample an opinion article written by Lana Mamkgh in the Arabic Jordanian daily Al Rai. In her role as an agent in the public sphere, she stresses national and feminist ideologies, using assertive, evaluative, and expressive linguistic and rhetorical devices that appeal to the logic, ethics, and emotions of the addressee. Highlighting the agency of Jordanian women writers in the media, the study seeks to show how they pursue the macro goal of dispensing political and social justice to the underprivileged, and to demonstrate that the voice of Jordanian women, often viewed as underrepresented and invisible in the public arena, comes through clearly.
Keywords: critical discourse analysis, sociocognitive theory, ideology, women’s discourse, media
Procedia PDF Downloads 108
1493 Relevance of Reliability Approaches to Predict Mould Growth in Biobased Building Materials
Authors: Lucile Soudani, Hervé Illy, Rémi Bouchié
Abstract:
Mould growth in living environments has been widely reported for decades throughout the world. A higher level of moisture in housing can lead to building degradation and chemical emissions from construction materials, as well as enhancing mould growth within the envelope elements or on internal surfaces. Moreover, a significant number of studies have highlighted the link between mould presence and the prevalence of respiratory diseases. In recent years, the proportion of bio-based materials used in construction has been increasing, seen as an effective lever to reduce the environmental impact of the building sector. Bio-based materials are, however, also hygroscopic: when in contact with the humid air of the surrounding environment, their porous structures readily capture water molecules, thus providing a more suitable substrate for mould growth. Many studies have been conducted to develop reliable models to predict mould appearance, growth, and decay over many building materials and external exposures. Some of them require information about temperature and/or relative humidity, exposure times, material sensitivities, etc. Nevertheless, several studies have highlighted a large disparity between predictions and actual mould growth in experimental settings as well as in occupied buildings. The difficulty of accounting for the influence of all parameters appears to be the most challenging issue. As many complex phenomena take place simultaneously, a preliminary study has been carried out to evaluate the feasibility of adopting a reliability approach rather than a deterministic approach. Both epistemic and random uncertainties were identified specifically for the prediction of mould appearance and growth.
Several studies published in the literature were selected and analysed, from the agri-food or automotive sectors, as the deployed methodology appeared promising.
Keywords: bio-based materials, mould growth, numerical prediction, reliability approach
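To make the contrast between deterministic and reliability approaches concrete, here is a hypothetical Monte Carlo sketch: both the indoor relative humidity and the critical humidity for mould germination are treated as uncertain, so the risk becomes an exceedance probability rather than a yes/no verdict. The normal distributions and the 80% threshold are assumptions chosen for illustration, not values from any validated mould-growth model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Uncertain quantities (assumed distributions, for illustration only)
rh = rng.normal(78.0, 4.0, n)        # indoor relative humidity [%]
rh_crit = rng.normal(80.0, 2.0, n)   # critical RH for mould germination [%]

# Deterministic verdict: compare the mean values only -> "no mould risk"
deterministic_risk = 78.0 > 80.0

# Reliability verdict: probability that humidity exceeds the critical level
p_exceed = float(np.mean(rh > rh_crit))
```

Even though the deterministic check reports no risk, the probabilistic view shows a substantial chance of exceeding the germination threshold, which is exactly the kind of epistemic/random uncertainty the abstract argues should be captured.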
Procedia PDF Downloads 46
1492 Metrology in Egyptian Architecture, Interrelation with Archaeology
Authors: Monica M. Marcos
Abstract:
Within the framework of archaeological research, heritage conservation, and restoration, the object of study is the metrology applied in the composition of religious architecture in ancient Egypt and its usefulness in archaeology. The objective is the determination of the geometric and metrological relations in architectural models and of the module used in the initial project of the buildings. The study and data collection of religious buildings, tombs, and temples of ancient Egypt is completed with plans. Systematizing the measurements and modulating the buildings makes it possible to establish common compositional parameters, with a module determined by the measurement unit used. The measurement system corresponding to the main period of Egyptian history was the Egyptian royal cubit. The analysis of the units of measurement used in architectural design provides exact figures for the dimensions of buildable spaces. It allows proportional relationships to be established between them and a geometric composition module to be found, on which the original project was based. This responds to a philosophical and functional concept of the projected spaces. In the field of heritage rehabilitation and restoration, knowledge of metrology helps in the excavation, reconstruction, and restoration of construction elements. The correct use of metrology contributes to the identification of possible work areas, helping to locate damaged or missing areas. In restoration projects, metrology is also useful for reordering and locating decontextualized parts of buildings. Converting measurements taken in the current International System to ancient Egyptian measurements makes it possible to understand their conceptual purpose and functionality, which facilitates archaeological intervention. In work carried out in archaeological excavations, metrology is an essential tool for locating sites and establishing work zones.
Keywords: Egyptology, metrology, archaeology, measurements, Egyptian cubit
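A minimal sketch of the unit conversion the abstract describes: modern survey measurements are expressed in royal cubits to look for a whole-number design module. The cubit length used (~0.5236 m, subdivided into 7 palms of 4 digits) is a commonly cited value; surviving cubit rods vary slightly, and the chamber dimensions below are invented.

```python
# Commonly cited value for the Egyptian royal cubit; surviving rods vary slightly.
ROYAL_CUBIT_M = 0.5236
PALMS_PER_CUBIT = 7      # one royal cubit = 7 palms
DIGITS_PER_PALM = 4      # one palm = 4 digits -> 28 digits per cubit

def to_cubits(metres):
    """Convert an SI measurement to royal cubits."""
    return metres / ROYAL_CUBIT_M

# Hypothetical chamber measured at 5.236 m x 10.472 m -> a clean 10 x 20 cubit module
width_c = to_cubits(5.236)
length_c = to_cubits(10.472)
```

When the converted dimensions land on whole or simple fractional cubit counts, that is the kind of compositional module the study looks for.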
Procedia PDF Downloads 25
1491 Bayesian System and Copula for Event Detection and Summarization of Soccer Videos
Authors: Dhanuja S. Patil, Sanjay B. Waykar
Abstract:
Event detection is one of the most important components of many application areas of video data systems, and it has recently attracted extensive interest from practitioners and academics in various fields. While video event detection has been the subject of broad research efforts, considerably less existing work has considered multi-modal data and efficiency-related issues. During soccer matches, various doubtful situations arise that cannot easily be judged by the referee committee. A framework that objectively checks image sequences would prevent incorrect interpretations caused by errors or by the high speed of events. Bayesian networks provide a structure for dealing with this uncertainty using a simple graphical structure together with probability calculus. We propose an efficient framework for the analysis and summarization of soccer videos using object-based features. The proposed work utilizes the t-cherry junction tree, a very recent advancement in probabilistic graphical models, to create a compact representation and a good approximation of an otherwise intractable model of the relationships among entities. There are several advantages to this approach: firstly, the t-cherry tree gives the best approximation within the class of junction trees; secondly, the construction of a t-cherry junction tree can be largely parallelized; and lastly, inference can be performed using distributed computation. Experimental results demonstrate the effectiveness, adequacy, and robustness of the proposed work over a comprehensive data set comprising extensive soccer footage captured at different venues.
Keywords: summarization, detection, Bayesian network, t-cherry tree
Procedia PDF Downloads 325
1490 Design of Hybrid Auxetic Metamaterials for Enhanced Energy Absorption under Compression
Authors: Ercan Karadogan, Fatih Usta
Abstract:
Auxetic materials have a negative Poisson’s ratio (NPR), which is not often found in nature. They are metamaterials that have potential applications in many engineering fields. Mechanical metamaterials are synthetically designed structures with unusual mechanical properties that depend on the properties of the matrix structure. They have special characteristics such as improved shear modulus, increased energy absorption, and enhanced fracture toughness. Non-auxetic materials compress transversely when they are stretched: the system naturally tends to keep its density constant, and the transverse compression increases the density to balance the loss in the longitudinal direction. This study proposes to improve the crushing performance of hybrid auxetic materials. The re-entrant honeycomb structure has been combined with a star honeycomb, an S-shaped unit cell, a double arrowhead, and a structurally hexagonal re-entrant honeycomb in 9 × 9 cell arrangements, i.e., 9 cells in the lateral direction and 9 in the vertical direction. Finite Element (FE) and experimental methods have been used to determine the compression behavior of the developed hybrid auxetic structures. The FE models have been developed using Abaqus software. Specimens made of polymer plastic materials have been 3D printed and subjected to compression loading. The results are compared in terms of specific energy absorption and strength. This paper describes the quasi-static crushing behavior of two types of hybrid lattice structures (auxetic + auxetic and auxetic + non-auxetic). The results show that the developed hybrid structures can be useful for controlling collapse mechanisms and exhibit larger energy absorption compared to conventional re-entrant auxetic structures.
Keywords: auxetic materials, compressive behavior, metamaterials, negative Poisson’s ratio
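Specific energy absorption (SEA), the comparison metric mentioned in the abstract, is the absorbed energy (area under the force-displacement curve) divided by specimen mass. A short sketch with synthetic, idealized plateau crushing data (the 500 N force and 50 g mass are invented example values):

```python
import numpy as np

def specific_energy_absorption(force_N, disp_m, mass_kg):
    # Area under the force-displacement curve (trapezoidal rule) = absorbed energy
    energy_J = np.sum((force_N[1:] + force_N[:-1]) / 2.0 * np.diff(disp_m))
    return energy_J / mass_kg

disp = np.linspace(0.0, 0.02, 101)        # 20 mm of crushing
force = np.full_like(disp, 500.0)         # idealized constant 500 N plateau
sea = specific_energy_absorption(force, disp, mass_kg=0.05)
# 500 N over 0.02 m = 10 J, divided by 0.05 kg -> 200 J/kg
```

In a real test the force trace is far from constant, and the same integral over the measured curve is what the experimental and FE results would be compared on.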
Procedia PDF Downloads 97
1489 Application of Rapidly Exploring Random Tree Star-Smart and G2 Quintic Pythagorean Hodograph Curves to the UAV Path Planning Problem
Authors: Luiz G. Véras, Felipe L. Medeiros, Lamartine F. Guimarães
Abstract:
This work approaches the automatic planning of paths for Unmanned Aerial Vehicles (UAVs) through the application of the Rapidly Exploring Random Tree Star-Smart (RRT*-Smart) algorithm. RRT*-Smart samples positions of a navigation environment through a tree-type graph. The algorithm consists of randomly expanding a tree from an initial position (root node) until one of its branches reaches the final position of the path to be planned. The algorithm ensures the planning of the shortest path as the number of iterations tends to infinity. When a new node is inserted into the tree, each neighbor of the new node is connected to it if and only if the length of the path between the root node and that neighbor, with this new connection, is less than the current length of the path between those two nodes. RRT*-Smart uses an intelligent sampling strategy to plan less extensive routes while spending a smaller number of iterations. This strategy is based on the creation of samples/nodes near the convex vertices of the obstacles in the navigation environment. The planned paths are smoothed through the application of quintic Pythagorean hodograph curves. The smoothing process converts a route into a dynamically viable one based on the kinematic constraints of the vehicle. This smoothing method models the hodograph components of a curve with polynomials that obey the Pythagorean Theorem. Its advantage is that the obtained structure allows computation of the curve length in an exact way, without the need for quadrature techniques for the resolution of integrals.
Keywords: path planning, path smoothing, Pythagorean hodograph curve, RRT*-Smart
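For orientation, the sketch below implements only the basic tree-growing step shared by the whole RRT family, in a 2D obstacle-free square: sample a random point, extend the nearest tree node toward it by a fixed step, and stop when a node lands near the goal. The rewiring of RRT* and the intelligent sampling near obstacle vertices that define RRT*-Smart are omitted, and all numerical settings are arbitrary.

```python
import numpy as np

def rrt(start, goal, step=0.5, goal_tol=0.5, max_iter=5000, seed=0):
    """Bare-bones RRT in the obstacle-free square [0, 10]^2."""
    rng = np.random.default_rng(seed)
    nodes = [np.asarray(start, dtype=float)]
    parent = [-1]                      # parent index of each node (-1 = root)
    for _ in range(max_iter):
        sample = rng.uniform(0.0, 10.0, size=2)
        # nearest existing node to the random sample
        i = int(np.argmin([np.linalg.norm(n - sample) for n in nodes]))
        direction = sample - nodes[i]
        dist = np.linalg.norm(direction)
        if dist == 0.0:
            continue
        new = nodes[i] + direction / dist * min(step, dist)
        nodes.append(new)
        parent.append(i)
        if np.linalg.norm(new - goal) < goal_tol:
            # walk back up the tree to recover the path root -> goal
            path, j = [], len(nodes) - 1
            while j != -1:
                path.append(nodes[j])
                j = parent[j]
            return path[::-1]
    return None

path = rrt(start=(0.0, 0.0), goal=(9.0, 9.0))
```

RRT*-Smart then improves on this by rewiring neighbors whenever a cheaper root-to-node path appears and by biasing samples toward obstacle vertices, which is what yields the asymptotic shortest-path guarantee mentioned above.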
Procedia PDF Downloads 167
1488 Fundamental Natural Frequency of Chromite Composite Floor System
Authors: Farhad Abbas Gandomkar, Mona Danesh
Abstract:
This paper aims to determine the Fundamental Natural Frequency (FNF) of a structural composite floor system known as Chromite. To achieve this purpose, FNFs of the studied panels are determined by developing Finite Element Models (FEMs) in the ABAQUS program. The American Institute of Steel Construction (AISC), in Steel Design Guide Series 11, presents a fundamental formula to calculate the FNF of a steel-framed floor system; this formula has been used to verify the results of the FEMs. The variability of the FNF of the studied system is determined under various parameters such as floor dimensions, boundary conditions, rigidity of the main and secondary beams around the floor, thickness of the concrete slab, height of the composite joists, distance between composite joists, thickness of the top and bottom flanges of the open-web steel joists, and the addition of a tie beam perpendicular to the composite joists. The results show that changes in the dimensions of the system, its boundary conditions, the rigidity of the main beam, and the addition of a tie beam significantly change the FNF of the system, by up to 452.9%, 50.8%, -52.2%, and 52.6%, respectively. In addition, increasing the thickness of the concrete slab increases the FNF of the system by up to 10.8%. Furthermore, the results demonstrate that variations in the rigidity of the secondary beam, the height of the composite joist, the distance between composite joists, and the thickness of the top and bottom flanges of the open-web steel joists change the FNF of the studied system only insignificantly, by up to -0.02%, -3%, -6.1%, and 0.96%, respectively. Finally, the results of this study help designers predict the occurrence of resonance, assess comfortableness, and establish design criteria for the studied system.
Keywords: fundamental natural frequency, Chromite composite floor system, finite element method, low- and high-frequency floors, comfortableness, resonance
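The AISC Design Guide 11 relation referred to above estimates a floor's fundamental natural frequency from its midspan deflection under the supported weight, f_n = 0.18·sqrt(g/Δ). A one-function sketch (the 5 mm deflection below is an arbitrary example value, not from the paper):

```python
import math

def fundamental_frequency_hz(delta_m, g=9.81):
    """AISC Design Guide 11 estimate: f_n = 0.18 * sqrt(g / delta).

    delta_m: midspan deflection under the supported weight [m]
    g: gravitational acceleration [m/s^2] (units must be consistent)
    """
    return 0.18 * math.sqrt(g / delta_m)

f_n = fundamental_frequency_hz(delta_m=0.005)  # 5 mm midspan deflection
```

Because the FNF scales with 1/sqrt(Δ), any parameter that stiffens the floor (smaller deflection) raises the frequency, which is the mechanism behind the parametric trends reported in the abstract.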
Procedia PDF Downloads 457
1487 Digital System Design for Strategic Improvement Planning in Education: A Socio-Technical and Iterative Design Approach
Authors: Neeley Current, Fatih Demir, Kenneth Haggerty, Blake Naughton, Isa Jahnke
Abstract:
Educational systems seek reform using data-intensive continuous improvement processes known as strategic improvement plans (SIPs). Schools turn to digital systems to monitor, analyze, and report SIPs. One technical challenge for these digital systems is integrating a highly diverse set of data sources. Another is creating a learnable sociotechnical system that helps administrators, principals, and teachers add, manipulate, and interpret data. This study explores to what extent one particular system is usable and useful for strategic planning activities and whether intended users see the system as achieving its goal of improving workflow related to strategic planning in schools. In a three-phase study, researchers used sociotechnical design methods to understand the current workflow, technology use, and processes of teachers and principals surrounding their strategic improvement planning. Additionally, design review and task analysis usability methods were used to evaluate task completion, usability, and user satisfaction with the system. The resulting sociotechnical models illustrate the existing work processes and indicate how, and at which places in the workflow, the newly developed system could have an impact. The results point to the potential of the system but also indicate that it was initially too complicated to use. However, the diverse users see its potential benefits, especially for overcoming the diverse set of data sources, and believe the system could fill a gap for schools in planning and conducting strategic improvement plans.
Keywords: continuous improvement process, education reform, strategic improvement planning, sociotechnical design, software development, usability
Procedia PDF Downloads 297
1486 Disadvantaged Adolescents and Educational Delay in South Africa: Impacts of Personal, Family, and School Characteristics
Authors: Rocio Herrero Romero, Lucie Cluver, James Hall, Janina Steinert
Abstract:
Educational delay and non-completion are major policy concerns in South Africa. However, little research has focused on predictors for educational delay amongst adolescents in disadvantaged areas. This study has two aims: first, to use data integration approaches to compare the educational delay of 599 adolescents aged 16 to 18 from disadvantaged communities to national and provincial representative estimates in South Africa. Second, the paper also explores predictors for educational delay by comparing adolescents out of school (n=64) and at least one year behind (n=380), with adolescents in the age-appropriate grade or higher (n=155). Multinomial logistic regression models using self-report and administrative data were applied to look for significant associations of risk and protective factors. Significant risk factors for being behind (rather than in age-appropriate grade) were: male gender, past grade repetition, rural location and larger school size. Risk factors for being out of school (rather than in the age-appropriate grade) were: past grade repetition, having experienced problems concentrating at school, household poverty, and food insecurity. Significant protective factors for being in the age-appropriate grade (rather than out of school) were: living with biological parents or grandparents and access to school counselling. Attending school in wealthier communities was a significant protective factor for being in the age-appropriate grade (rather than behind). Our results suggest that both personal and contextual factors –family and school- predicted educational delay. This study provides new evidence to the significant effects of personal, family, and school characteristics on the educational outcomes of adolescents from disadvantaged communities in South Africa. 
This is the first longitudinal and quantitative study to systematically investigate risk and protective factors for post-compulsory educational outcomes amongst South African adolescents living in disadvantaged communities.
Keywords: disadvantaged communities, quantitative analysis, school delay, South Africa
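A minimal sketch of the model family used in the study: multinomial (softmax) logistic regression fitted by gradient descent on synthetic data, with three outcome classes standing in for "age-appropriate grade", "behind", and "out of school". The two features and all data are invented; the study's actual predictors are those listed in the abstract.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)     # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def fit_multinomial(X, y, n_classes, lr=0.5, n_iter=1000):
    """Gradient descent on the multinomial cross-entropy loss."""
    n, d = X.shape
    W = np.zeros((d, n_classes))
    Y = np.eye(n_classes)[y]                 # one-hot targets
    for _ in range(n_iter):
        P = softmax(X @ W)
        W -= lr * X.T @ (P - Y) / n          # gradient step
    return W

# Three synthetic classes along a line, 40 samples each
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(c, 0.5, size=(40, 2)) for c in (-2.0, 0.0, 2.0)])
Xb = np.hstack([X, np.ones((X.shape[0], 1))])   # intercept column
y = np.repeat([0, 1, 2], 40)

W = fit_multinomial(Xb, y, n_classes=3)
pred = softmax(Xb @ W).argmax(axis=1)
accuracy = float((pred == y).mean())
```

The study reports results as odds of one outcome relative to a reference category ("in age-appropriate grade"), which corresponds to contrasts between the fitted per-class weight columns.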
Procedia PDF Downloads 348
1485 An In-Depth Experimental Study of Wax Deposition in Pipelines
Authors: Arias M. L., D’Adamo J., Novosad M. N., Raffo P. A., Burbridge H. P., Artana G.
Abstract:
Shale oils are highly paraffinic and, consequently, can create wax deposits that foul pipelines during transportation. Several factors must be considered when designing pipelines or treatment programs that prevent wax deposition, including the chemical species in the crude oils, flow rates, pipe diameters, and temperature. This paper describes the wax deposition study carried out within the framework of Y-TEC's flow assurance projects, as part of the process to achieve a better understanding of wax deposition issues. Laboratory experiments were performed on a medium-sized wax deposition loop, 1 inch in diameter and 15 m long, equipped with a solid detector system, an online microscope to visualize crystals, and temperature and pressure sensors along the loop pipe. A baseline test was performed with diesel with no paraffin or additive content. Tests were undertaken at different temperatures of the circulating and cooling fluids and at different flow conditions. Then, a solution formed by adding paraffin to the diesel was considered, and tests varying the flow rate and cooling rate were run again. Viscosity, density, WAT (Wax Appearance Temperature) with DSC (Differential Scanning Calorimetry), pour point, and cold finger measurements were carried out to determine the physical properties of the working fluids. The results obtained in the loop were analyzed through momentum balance and heat transfer models. To determine possible paraffin deposition scenarios, the temperature and pressure output signals of the loop were studied and compared with static laboratory WAT methods. Finally, we scrutinized the effect of adding a chemical inhibitor to the working fluid on the dynamics of the wax deposition process in the loop.
Keywords: paraffin deposition, flow assurance, chemical inhibitors, flow loop
Procedia PDF Downloads 105
1484 Analyzing the Influence of Hydrometeorological Extremes, Geological Setting, and Social Demographics on Public Health
Authors: Irfan Ahmad Afip
Abstract:
The main objective of this research is to accurately identify the possible severity of a leptospirosis outbreak in a given area from input features fed into a multivariate regression model. The research question is whether the possibility of an outbreak in a specific area is influenced by features such as social demographics and hydrometeorological extremes. If the occurrence of an outbreak is governed by these features, then the epidemic severity for an area will differ depending on its environmental setting, because the features will influence both the possibility and the severity of an outbreak. Specifically, the research objective was three-fold, namely: (a) to identify the relevant multivariate features and visualize the patterns in the data, (b) to develop a multivariate regression model based on the selected features and determine the possibility of a leptospirosis outbreak in an area, and (c) to compare the predictive ability of the multivariate regression model with that of machine learning algorithms. Several secondary data features were collected from locations in the state of Negeri Sembilan, Malaysia, selected for their likely relevance to determining outbreak severity in the area. The relevant features then serve as inputs to a multivariate regression model; a linear regression model is a simple and quick solution for creating prognostic capabilities, and multivariate regression models have proven more precise prognostic capabilities than univariate models. The expected outcome of this research is to establish a correlation between the social demographic and hydrometeorological features and the Leptospira bacteria; it will also contribute to understanding the underlying relationship between the pathogen and the ecosystem.
The relationship established can be beneficial for the health department or urban planners to inspect and prepare for future outcomes in event detection and system health monitoring.
Keywords: geographical information system, hydrometeorological, leptospirosis, multivariate regression
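As a sketch of the modelling approach described above, the example below fits a multivariate linear regression by ordinary least squares to synthetic data. The feature names (rainfall, population density) are invented placeholders for the hydrometeorological and social-demographic features; no real outbreak data are used.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
rainfall = rng.uniform(0.0, 300.0, n)       # hydrometeorological feature (invented)
pop_density = rng.uniform(10.0, 1000.0, n)  # social-demographic feature (invented)
# Synthetic "severity" target with known coefficients plus noise
cases = 0.02 * rainfall + 0.005 * pop_density + 3.0 + rng.normal(0.0, 1.0, n)

# Design matrix with an intercept column; ordinary least squares fit
X = np.column_stack([rainfall, pop_density, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, cases, rcond=None)
predicted = X @ coef
r2 = 1.0 - np.sum((cases - predicted) ** 2) / np.sum((cases - cases.mean()) ** 2)
```

With more than one predictor the fitted coefficients quantify each feature's contribution while holding the others fixed, which is the multivariate advantage over univariate models that the abstract points to.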
Procedia PDF Downloads 115
1483 Artificial Neural Network-Based Prediction of Effluent Quality of Wastewater Treatment Plant Employing Data Preprocessing Approaches
Authors: Vahid Nourani, Atefeh Ashrafi
Abstract:
Prediction of treated wastewater quality is a matter of growing importance in the water treatment procedure. In this regard, the artificial neural network (ANN), as a robust data-driven approach, has been widely used for forecasting the effluent quality of wastewater treatment. However, developing an ANN model based on appropriate input variables is a major concern due to the numerous parameters collected from the treatment process, whose number is increasing in light of the development of electronic sensors. Various studies have been conducted, using different clustering methods, to classify the most related and effective input variables. Yet the selection of dominant input variables among wastewater treatment parameters has often been overlooked, even though it could effectively lead to more accurate prediction of water quality. In the presented study, two ANN models were developed with the aim of forecasting the effluent quality of the Tabriz city wastewater treatment plant. Biochemical oxygen demand (BOD) was used as the target parameter indicating water quality. Model A used Principal Component Analysis (PCA), a linear variance-based clustering method, for input selection. Model B used the variables identified by the mutual information (MI) measure. When the results of model B were compared with those of model A, the optimal ANN structure showed up to a 15% increment in the Determination Coefficient (DC). Thus, this study highlights the advantage of the PCA method in selecting dominant input variables for ANN modeling of wastewater plant performance.
Keywords: artificial neural networks, biochemical oxygen demand, principal component analysis, mutual information, Tabriz wastewater treatment plant, wastewater treatment plant
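A small sketch of the PCA-style input screening used for model A: correlated candidate inputs are projected onto principal components, and the explained-variance ratio shows how few components summarize them. The synthetic "sensor" columns below are invented stand-ins for the measured treatment-process parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
base = rng.normal(size=n)                       # shared underlying signal
inputs = np.column_stack([
    base + rng.normal(scale=0.1, size=n),       # three near-duplicate "sensors"
    2.0 * base + rng.normal(scale=0.1, size=n),
    -base + rng.normal(scale=0.1, size=n),
    rng.normal(size=n),                         # one independent input
])

# PCA via the eigendecomposition of the covariance matrix
Xc = inputs - inputs.mean(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
explained = np.sort(eigvals)[::-1] / eigvals.sum()
top2 = float(explained[:2].sum())   # variance captured by two components
```

Here two components capture nearly all the variance of the four inputs, illustrating how PCA collapses redundant sensor readings before they are fed to the ANN; mutual information (model B) instead scores each raw input directly against the BOD target.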
Procedia PDF Downloads 128
1482 Bionaut™: A Microrobotic Drug-Device Platform for the Local Treatment of Brainstem Gliomas
Authors: Alex Kiselyov, Suehyun Cho, Darrell Harrington, Florent Cros, Olin Palmer, John Caputo, Michael Kardosh, Eran Oren, William Loudon, Michael Shpigelmacher
Abstract:
Despite the most aggressive surgical and adjuvant therapeutic strategies, treatment of both pediatric and adult brainstem tumors remains problematic. Novel strategies, including targeted biologics, immunotherapy, and specialized delivery systems such as convection-enhanced delivery (CED), have been proposed. While some of these novel treatments are entering phase I trials, the field is still in need of treatment(s) that exhibits dramatically enhanced potency with optimal therapeutic ratio. Bionaut Labs has developed a modular microrobotic platform for performing localized delivery of diverse therapeutics in vivo. Our biocompatible particles (Bionauts™) are externally propelled and visualized in real-time. Bionauts™ are specifically designed to enhance the effect of radiation therapy via anatomically precise delivery of a radiosensitizing agent, as exemplified by temozolomide (TMZ) and Avastin™ to the brainstem gliomas of diverse origin. The treatment protocol is designed to furnish a better therapeutic outcome due to the localized (vs systemic) delivery of the drug to the neoplastic lesion(s) for use as a synergistic combination of radiation and radiosensitizing agent. In addition, the procedure is minimally invasive and is expected to be appropriate for both adult and pediatric patients. Current progress, including platform optimization, selection of the lead radiosensitizer as well as in vivo safety studies of the Bionauts™ in large animals, specifically the spine and the brain of porcine and ovine models, will be discussed.
Keywords: Bionaut, brainstem, glioma, local delivery, micro-robot, radiosensitizer
Procedia PDF Downloads 195
1481 Cement Bond Characteristics of Artificially Fabricated Sandstones
Authors: Ashirgul Kozhagulova, Ainash Shabdirova, Galym Tokazhanov, Minh Nguyen
Abstract:
Synthetic rocks have advantages over natural rocks in terms of availability and consistency when studying the impact of a particular parameter. Artificial rocks can be fabricated using a variety of techniques, such as mixing sand with Portland cement or gypsum, firing a mixture of sand and fine powder of borosilicate glass, or in-situ precipitation of a calcite solution. In this study, sodium silicate solution has been used as the cementing agent for quartz sand. The molded soft cylindrical sandstone samples are placed in a gas-tight pressure vessel, where hardening of the material takes place as the chemical reaction between carbon dioxide and the silicate solution progresses. The vessel allows uniform dispersion of carbon dioxide and control over the ambient gas pressure. The current paper shows how the bonding material is initially distributed in the intergranular space and on the surface of the sand particles, using electron microscopy and energy-dispersive spectroscopy. During the study, the strength of the cement bond as a function of temperature is observed. The impact of the cementing agent dosage on the micro- and macro-characteristics of the sandstone is investigated. Analysis of the cement bond at the micro level helps to trace changes in particle bonding and damage after potential yielding. Shearing behavior and compressional response have been examined, resulting in estimates of the shearing resistance and cohesion force of the sandstone. These are considered to be the main input values for mathematical models predicting sand production from weak clastic oil reservoir formations.
Keywords: artificial sandstone, cement bond, microstructure, SEM, triaxial shearing
Procedia PDF Downloads 167
1480 Re-Imagining Physical Education Teacher Education in a South African Higher Education Institution
Authors: C. F. Jones Couto, L. C. Motlhaolwa, K. Williams
Abstract:
This article explores the re-imagining of physical education teacher education in South African higher education. Utilising student reflections from a physical education practical module, valuable insights were obtained into students' experiences of current physical education pedagogical approaches and potential areas for improvement. The traditional teaching model of physical education is based on the idea of teaching students a variety of sports and physical activities. However, this model has been shown to be ineffective in promoting lifelong physical activity, and the modern world demands a more holistic approach to health and wellness. Data was collected using the arts-based collage method in combination with written group reflections from 139 second-year undergraduate physical education students. This study employed thematic analysis methods to gain a comprehensive understanding of the data and extract a broader perspective on the students' experiences. The study aimed to empower student teachers to learn, think, and act creatively within the many educational models that shape their experience, contributing to the ongoing efforts to re-imagine physical education teacher education in South African higher education. This research is significant, as the students' insights reflected that they can think and work across disciplines. Sustainable development goals and graduate attributes are important concepts that can contribute to student preparation. Using a multi-model educational approach based on cultural-historical theory, higher education institutions can help develop graduate attributes that will prepare students for success in the workplace and in life.
Keywords: holistic education, graduate attributes, physical education, teacher education, student experiences, sustainable development goals
Procedia PDF Downloads 74
1479 A Comparative Analysis of (De)legitimation Strategies in Selected African Inaugural Speeches
Authors: Lily Chimuanya, Ehioghae Esther
Abstract:
Language, a versatile and sophisticated tool, is fundamental to human affairs, especially within the realm of politics. In this dynamic arena, political leaders adroitly use language in a strategic performance aimed at shaping the opinions of a discerning public. This nuanced interplay is marked by different rhetorical strategies, meticulously aligned with cultural, ideological, and political contextual factors to achieve multifaceted persuasive objectives. This study investigates the (de)legitimation strategies inherent in African presidential inaugural speeches: through these speeches, African leaders not only state their policy agenda but also subtly engage in a dance of legitimation and delegitimation, pursuing the twofold objective of strengthening the credibility of their administration and, at times, undermining the performance of the past administration. Drawing insights from two different legitimation models and a dataset of four African presidential inaugural speeches obtained from authentic websites, the study describes the roles of authorisation, rationalisation, moral evaluation, altruism, and mythopoesis in unmasking the structure of political discourse. The analysis takes a mixed-method approach to unpack the (de)legitimation strategies embedded in the chosen speeches. The focus extends beyond a superficial exploration and delves into the linguistic elements that form the basis of presidential discourse. In conclusion, the examination maps the nuanced landscape of language as a potent tool in politics, with each strategy contributing to the overall rhetorical impact and shaping the narrative. From this perspective, the study argues that presidential inaugural speeches are not only linguistic exercises but also viable weapons that influence perceptions and legitimise authority.
Keywords: CDA, legitimation, inaugural speeches, delegitimation
Procedia PDF Downloads 69
1478 Redefining “Minor”: An Empirical Research on Two Biennials in Contemporary China
Authors: Mengwei Li
Abstract:
Since the 1990s, biennials and other large-scale transnational art exhibitions have proliferated exponentially across the globe, particularly in Asia, Africa, and Latin America. This has spurred debates regarding the inclusion of "new art cultures" and the deconstruction of the mechanism of exclusion embedded in the Western monopoly on art. Hans Belting introduced the concept of "global art" in 2013 to denounce the West's privileged canons in art by emphasising the inclusion of art practices from allegedly non-Western regions. Arguably, the rise of new biennial networks developed by these locations has contributed to the asserted "inclusion of new art worlds." However, phrases such as "non-Western" and "beyond Euro-American" attached to these discussions raise the question of non- or beyond- in relation to whom. In this narrative, to become "integrated" and "equal" implies entry into the "core," a universal system in which preexisting authoritative voices define "newcomers" by what they are not. Possibly, if there is a global biennial system that symbolises a "universal language" of the contemporary art world, it is centered on the inherently dynamic yet asymmetrical interaction and negotiation between the "core" and the rest of the world's "periphery." Engaging with the theory of "minor literature" developed by Deleuze and Guattari, this research proposes an epistemological framework for comprehending the global biennial discourse since the 1990s. Using this framework, the research looks at two biennial models in China: the 13th Shanghai Biennale, organised in the country's metropolitan art centre, and the 2nd Yinchuan Biennale, inaugurated in a city that is geographically and economically marginalised compared to domestic centres.
By analysing how these two biennials from different locations in China positioned themselves and conveyed their local profiles through the universal language of the biennial, this research identifies a potential "minor" positionality within the global biennial discourse from China's perspective.
Keywords: biennials, China, contemporary, global art, minor literature
Procedia PDF Downloads 87
1477 Off-Shore Wind Turbines: The Issue of Soil Plugging during Pile Installation
Authors: Mauro Iannazzone, Carmine D'Agostino
Abstract:
Off-shore wind turbines are currently considered a reliable source of renewable energy worldwide, and especially in the UK. Most of the operational off-shore wind turbines located in shallow waters (i.e. < 30 m) are supported on monopiles. Monopiles are open-ended steel tubes with diameters ranging from 4 to 6 m. It is expected that future off-shore wind farms will be located in water depths as great as 70 m; therefore, alternative foundation arrangements are needed. Foundations for off-shore structures normally consist of open-ended piles driven into the soil by means of impact hammers. During pile installation, the soil column inside the pile may mobilise enough shear resistance to prevent more soil from entering the pile. This phenomenon is known as soil plugging, and it represents an important issue because it may significantly change the driving resistance of open-ended piles. In fact, if plug formation is unexpected, the installation may require more powerful and more expensive hammers. Engineers need to estimate whether the driven pile will be installed in a plugged or unplugged mode; consequently, a prediction of the degree of soil plugging is required to correctly predict the drivability of the pile. This work presents a brief review of the state of the art of pile driving and of the approaches used to predict the formation of soil plugs. In addition, a novel analytical approach is proposed, based on the vertical equilibrium of a plugged pile. Differently from previous studies, this research takes into account the enhancement of the stress within the soil plug. Finally, the work presents and discusses a series of experimental tests carried out on small-scale model piles to validate the analytical solution.
Keywords: off-shore wind turbines, pile installation, soil plugging, wind energy
Procedia PDF Downloads 312
1476 Modeling and Characterization of Organic LED
Authors: Bouanati Sidi Mohammed, N. E. Chabane Sari, Mostefa Kara Selma
Abstract:
It is well known that organic light emitting diodes (OLEDs) are attracting great interest in the display technology industry due to their many advantages, such as low manufacturing cost, large-area electroluminescent displays, and various emission colours, including white light. Recently, there has been much progress in understanding the device physics of OLEDs and their basic operating principles. In OLEDs, light emission results from the recombination of electrons and holes in the emitting layer, injected from the cathode and anode respectively. To improve luminescence efficiency, holes and electrons must be supplied abundantly and in balance, and must recombine swiftly in the emitting layer. The aim of this paper is to model a polymer LED and an OLED made with small molecules in order to study their electrical and optical characteristics. The first simulated structure is a monolayer device, typically consisting of the poly(2-methoxy-5-(2’-ethyl)hexoxy-phenylenevinylene) (MEH-PPV) polymer sandwiched between an anode, usually an indium tin oxide (ITO) substrate, and a cathode such as Al. In the second structure, MEH-PPV is replaced by tris(8-hydroxyquinolinato) aluminium (Alq3). We chose MEH-PPV because of its solubility in common organic solvents, its low operating voltage for light emission, and its relatively high conversion efficiency, and Alq3 because it is one of the most important host materials used in OLEDs. In this simulation, the Poole-Frenkel-like mobility model and the Langevin bimolecular recombination model were used as the transport and recombination mechanisms. These models are available in the ATLAS-SILVACO software. The influence of doping and thickness on the I(V) characteristics and luminescence is reported.
Keywords: organic light emitting diode, polymer light emitting diode, organic materials, hexoxy-phenylenevinylene
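The field-dependent mobility this abstract relies on can be sketched in a few lines. The sketch below uses the standard Poole-Frenkel form μ(E) = μ₀·exp(β·√E); the function name and the parameter values are invented for illustration and are not the exact model settings used in ATLAS-SILVACO.

```python
import math

def poole_frenkel_mobility(mu0, e_field, beta):
    """Field-dependent mobility, mu(E) = mu0 * exp(beta * sqrt(E)).

    mu0     -- zero-field mobility, cm^2/(V.s)
    e_field -- electric field magnitude, V/cm
    beta    -- Poole-Frenkel factor, (cm/V)^0.5, material-dependent
    """
    return mu0 * math.exp(beta * math.sqrt(e_field))

# Invented, order-of-magnitude parameters for an organic layer
mu0, beta = 1e-6, 5e-3
fields = [1e4, 1e5, 1e6]  # V/cm
mobilities = [poole_frenkel_mobility(mu0, f, beta) for f in fields]
```

Under this form, mobility grows rapidly with field strength, which is why layer thickness and doping (which set the internal field) shift the simulated I(V) and luminescence curves.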
Procedia PDF Downloads 554
1475 Identification of Risks Associated with Process Automation Systems
Authors: J. K. Visser, H. T. Malan
Abstract:
A need exists to identify the sources of risks associated with the process automation systems within petrochemical companies or similar energy-related industries. These companies use many different process automation technologies in their value chains. A crucial part of the process automation system is the information technology component featuring in the supervisory control layer. The ever-changing technology within the process automation layers and the rate at which it advances pose a risk to safe and predictable automation system performance. The age of the automation equipment also poses challenges to the operations and maintenance managers of the plant due to obsolescence and unavailability of spare parts. The main objective of this research was to determine the risk sources associated with the equipment that is part of the process automation systems. A secondary objective was to establish whether technology managers and technicians were aware of the risks and shared the same viewpoint on the importance of the risks associated with automation systems. A conceptual model for the risk sources of automation systems was formulated from models and frameworks in the literature. This model comprised six categories of risk, which form the basis for identifying specific risks. The model was used to develop a questionnaire that was sent to 172 instrument technicians and technology managers in the company to obtain primary data. Seventy-five completed and useful responses were received. These responses were analyzed statistically to determine the highest risk sources and to determine whether there was a difference in opinion between technology managers and technicians.
The most important risks that were revealed in this study are: 1) the lack of skilled technicians, 2) integration capability of third-party system software, 3) reliability of the process automation hardware, 4) excessive costs pertaining to performing maintenance and migrations on process automation systems, and 5) requirements of having third-party communication interfacing compatibility as well as real-time communication networks.
Keywords: distributed control system, identification of risks, information technology, process automation system
Procedia PDF Downloads 139
1474 Multivariate Data Analysis for Automatic Atrial Fibrillation Detection
Authors: Zouhair Haddi, Stephane Delliaux, Jean-Francois Pons, Ismail Kechaf, Jean-Claude De Haro, Mustapha Ouladsine
Abstract:
Atrial fibrillation (AF) has been considered the most common cardiac arrhythmia and a major public health burden associated with significant morbidity and mortality. Nowadays, telemedical approaches targeting cardiac outpatients situate AF among the most challenging medical issues. The automatic, early, and fast detection of AF is still a major concern for healthcare professionals. Several algorithms based on univariate analysis have been developed to detect atrial fibrillation; however, the published results do not show satisfactory classification accuracy. This work aimed at resolving this shortcoming by proposing multivariate data analysis methods for automatic AF detection. Four publicly accessible sets of clinical data (AF Termination Challenge Database, MIT-BIH AF, Normal Sinus Rhythm RR Interval Database, and MIT-BIH Normal Sinus Rhythm Databases) were used for assessment. All time series were segmented into 1-min RR-interval windows, and four specific features were then calculated. Two pattern recognition methods, i.e., Principal Component Analysis (PCA) and a Learning Vector Quantization (LVQ) neural network, were used to develop classification models. PCA, as a feature reduction method, was employed to find the important features for discriminating between AF and Normal Sinus Rhythm. Despite its very simple structure, the results show that the LVQ model performs better on the analyzed databases than existing algorithms do, with high sensitivity and specificity (99.19% and 99.39%, respectively). The proposed AF detection holds several interesting properties and can be implemented with just a few arithmetical operations, which makes it a suitable choice for telecare applications.
Keywords: atrial fibrillation, multivariate data analysis, automatic detection, telemedicine
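The PCA feature-reduction step described above can be sketched as follows. The data are random stand-ins for the paper's windowed RR-interval features; the feature names in the comment are assumptions, not taken from the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)
# Random stand-in: rows are 1-min RR-interval windows, columns are four
# features (e.g. mean RR, SDNN, RMSSD, pNN50 -- an assumed feature set).
X = rng.normal(size=(200, 4))

# PCA via eigendecomposition of the covariance matrix
Xc = X - X.mean(axis=0)                     # centre each feature
cov = np.cov(Xc, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)      # eigenvalues in ascending order
order = np.argsort(eigvals)[::-1]           # sort by descending variance
components = eigvecs[:, order[:2]]          # keep the two leading components
scores = Xc @ components                    # reduced feature representation
```

The `scores` matrix is what a downstream classifier such as LVQ would be trained on; keeping only the leading components discards directions of low variance in the feature space.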
Procedia PDF Downloads 267
1473 Comparison of E-learning and Face-to-Face Learning Models Through the Early Design Stage in Architectural Design Education
Authors: Gülay Dalgıç, Gildis Tachir
Abstract:
Architectural design studios are the settings in which architectural design is realized as a palpable product in architectural education. In the design studio, the information the architect candidate will use in the design process, the methods of approaching the design problem, the solution proposals, etc., are set up together with the studio coordinators. The architectural design process, on the other hand, is complex and uncertain. Candidate architects work in a process that starts with abstract and ill-defined problems. This process starts with the generation of alternative solutions with the help of representation tools, continues with the selection of the appropriate/satisfactory solution from these alternatives, and then ends with the creation of an acceptable design/result product. In the studio setting, where many design and thought relationships are evaluated, the most important step is the early design phase. In the early design phase, the first steps of converting information are taken, and the converted information is used in forming the first design decisions. This phase, which positively affects the progress of the design process and the constitution of the final product, is more complex and fuzzy than the other phases of the design process. In this context, the aim of the study is to investigate the effects of the face-to-face learning model and the e-learning model on the early design phase. In the study, the early design phase was defined through literature research. Data on the defined early design phase criteria were obtained with feedback graphics created for architect candidates who studied by e-learning in the first year of architectural education and continued their education with the face-to-face learning model. The findings were analyzed with a common graphics program.
It is thought that this research will contribute to the establishment of a contemporary architectural design education model by reflecting the evaluation of the data and results on architectural education.
Keywords: education modeling, architecture education, design education, design process
Procedia PDF Downloads 137
1472 Bridge Members Segmentation Algorithm of Terrestrial Laser Scanner Point Clouds Using Fuzzy Clustering Method
Authors: Donghwan Lee, Gichun Cha, Jooyoung Park, Junkyeong Kim, Seunghee Park
Abstract:
3D shape models of existing structures are required for many purposes, such as safety and operation management. Traditional 3D modeling methods are based on manual or semi-automatic reconstruction from close-range images, which is costly and time-consuming. The Terrestrial Laser Scanner (TLS) is a common survey technique for measuring a 3D shape model quickly and accurately, and it is used at construction sites and in cultural heritage management. However, there are many limits to processing a TLS point cloud, because the raw point cloud is a massive volume of data, and the capability of carrying out useful analyses on unstructured 3D points is also limited. Thus, segmentation becomes an essential step whenever grouping of points with common attributes is required. In this paper, a member segmentation algorithm is presented to separate a raw point cloud that includes only 3D coordinates. The paper presents a clustering approach based on a fuzzy method for this objective. Fuzzy C-Means (FCM) is reviewed and used in combination with a similarity-driven cluster merging method. It is applied to a point cloud acquired with a Leica Scan Station C10/C5 at the test bed. The test bed was a pedestrian bridge connecting the 1st and 2nd engineering buildings at Sungkyunkwan University in Korea; it is about 32 m long and 2 m wide. The 3D point cloud of the test bed was constructed from a TLS measurement and divided by the segmentation algorithm into members. Experimental analyses of the results from the proposed unsupervised segmentation process are promising, and the segmented point cloud can be used to manage the configuration of each member.
Keywords: fuzzy c-means (FCM), point cloud, segmentation, terrestrial laser scanner (TLS)
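The alternating centre/membership updates at the heart of Fuzzy C-Means can be sketched as below. This is a minimal, generic FCM (Bezdek's update rules) run on synthetic 3D points standing in for two bridge members; it omits the similarity-driven cluster merging step the paper adds, and all names and data are illustrative.

```python
import numpy as np

def fcm(points, n_clusters=2, m=2.0, n_iter=50, seed=0):
    """Minimal Fuzzy C-Means: alternate centre and membership updates."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(points), n_clusters))
    u /= u.sum(axis=1, keepdims=True)              # memberships sum to 1
    for _ in range(n_iter):
        w = u ** m                                 # fuzzified memberships
        centers = (w.T @ points) / w.sum(axis=0)[:, None]
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        d = np.maximum(d, 1e-12)                   # avoid division by zero
        p = 2.0 / (m - 1.0)
        u = (d ** -p) / (d ** -p).sum(axis=1, keepdims=True)
    return u, centers

# Two well-separated synthetic blobs stand in for points on two members
rng = np.random.default_rng(1)
pts = np.vstack([rng.normal(0.0, 0.1, (50, 3)),
                 rng.normal(5.0, 0.1, (50, 3))])
u, centers = fcm(pts)
```

Each point receives a membership in every cluster rather than a hard label, which is what makes a subsequent similarity-driven merge of over-segmented clusters natural.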
Procedia PDF Downloads 234
1471 Similar Script Character Recognition on Kannada and Telugu
Authors: Gurukiran Veerapur, Nytik Birudavolu, Seetharam U. N., Chandravva Hebbi, R. Praneeth Reddy
Abstract:
This work presents a robust approach for the recognition of characters in Telugu and Kannada, two South Indian scripts with structural similarities between their characters. To recognize the characters, exhaustive datasets are required, but only a few are publicly available. As a result, we decided to create a dataset for one language (the source language), train the model with it, and then test it with the target language. Telugu is the target language in this work, whereas Kannada is the source language. The suggested method makes use of Canny edge features to increase character identification accuracy on images with noise and varied lighting. A dataset of 45,150 images containing printed Kannada characters was created. The Nudi software was used to automatically generate printed Kannada characters with different writing styles and variations, and manual labelling was employed to ensure the accuracy of the character labels. Deep learning models, namely a Convolutional Neural Network (CNN) and a Visual Attention neural network (VAN), were used to experiment with the dataset. A VAN architecture incorporating additional channels for Canny edge features was adopted, as the results obtained with this approach were good. The model's accuracy on the combined Telugu and Kannada test dataset was an outstanding 97.3%. Performance was better with Canny edge features applied than with a model that used only the original grayscale images. When tested per language, the model's accuracy was found to be 80.11% for Telugu characters and 98.01% for Kannada words. This model, which makes use of cutting-edge machine learning techniques, shows excellent accuracy in identifying and categorizing characters from these scripts.
Keywords: base characters, modifiers, guninthalu, aksharas, vattakshara, VAN
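The idea of feeding edge features to the network as an extra input channel can be sketched as follows. For self-containment this uses a Sobel gradient magnitude rather than a full Canny detector (no smoothing, non-maximum suppression, or hysteresis), so it is a simplified stand-in for the feature channel the abstract describes; the image and all names are illustrative.

```python
import numpy as np

def sobel_edge_channel(gray):
    """Gradient-magnitude map via Sobel filters, normalised to [0, 1]."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    pad = np.pad(gray.astype(float), 1, mode="edge")
    h, w = gray.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            win = pad[i:i + 3, j:j + 3]
            gx[i, j] = (win * kx).sum()
            gy[i, j] = (win * ky).sum()
    mag = np.hypot(gx, gy)
    return mag / (mag.max() + 1e-12)

# Toy grayscale "glyph": a vertical step edge, stacked with its edge map
img = np.zeros((8, 8))
img[:, 4:] = 1.0
two_channel = np.stack([img, sobel_edge_channel(img)], axis=-1)
```

The resulting `two_channel` array (grayscale plus edge magnitude) is the kind of multi-channel input a CNN or VAN would consume in place of a single grayscale plane.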
Procedia PDF Downloads 53
1470 Rapid and Easy Fabrication of Collagen-Based Biocomposite Scaffolds for 3D Cell Culture
Authors: Esra Turker, Umit Hakan Yildiz, Ahu Arslan Yildiz
Abstract:
The key to regenerative medicine is mimicking the natural three-dimensional (3D) microenvironment of tissues by utilizing appropriate biomaterials. In this study, a synthetic biodegradable polymer, poly(L-lactide-co-ε-caprolactone) (PLLCL), and a natural polymer, collagen, were used to mimic the biochemical structure of the natural extracellular matrix (ECM), and the physical structure of the ECM was mimicked by means of the electrospinning technique. PLLCL/collagen biocomposite scaffolds enable cell attachment, proliferation, and nutrient transport through the fabrication of micro- to nanometre-scale fibres. Biocomposite materials are commonly preferred because of the limited physical and biocompatibility properties of purely natural or purely synthetic materials; combining both improves the strength, degradation behaviour, and biocompatibility of the scaffold. Literature studies have shown that collagen is mostly dissolved with harsh chemicals, which is not suitable for cell culturing. To overcome this problem, a new approach was developed in this study in which polyvinylpyrrolidone (PVP) is used as a co-electrospinning agent. PVP is preferred for its water solubility, so the PLLCL/collagen biocomposite scaffold can be produced easily and rapidly. Hydrolytic and enzymatic biodegradation as well as the mechanical strength of the scaffolds were examined in vitro. Cell adhesion, proliferation, and cell morphology characterization studies were performed as well. Further, on-chip drug screening analysis was performed on 3D tumor models. Overall, the developed biocomposite scaffold was used for 3D tumor model formation, and the results confirmed that the developed model could be used in drug screening studies to predict the clinical efficacy of a drug.
Keywords: biomaterials, 3D cell culture, drug screening, electrospinning, lab-on-a-chip, tissue engineering
Procedia PDF Downloads 312
1469 Numerical Evaluation of Deep Ground Settlement Induced by Groundwater Changes During Pumping and Recovery Test in Shanghai
Authors: Shuo Wang
Abstract:
The hydrogeological parameters of an engineering site and the hydraulic connection between aquifers can be obtained by a pumping test. Through the recovery test, the characteristics of water level recovery and the law of surface subsidence recovery can be understood. These two tests can provide the basis for subsequent engineering design. At present, the deformation of deep soil caused by pumping tests is often neglected. However, some studies have shown that the maximum settlement due to groundwater drawdown is not necessarily at the surface but in the deep soil. In addition, the law of settlement recovery of each soil layer subject to water level recovery is not clear. If a deformation-sensitive structure lies deep in the test site, safety accidents may occur. In this study, the pumping test and recovery test of a confined aquifer in Shanghai are introduced, and the laws of the measured groundwater changes and surface subsidence are analyzed. In addition, a fluid-solid coupling model was established in ABAQUS based on Biot consolidation theory. The models are verified by comparing the computed and measured results. Further, the variation law of the water level and the deformation law of deep soil during pumping and recovery tests under different site conditions and at different times and locations are discussed using the above model. It is found that the maximum soil settlement caused by pumping in a confined aquifer is related to the permeability of the overlying aquitard and the pumping time. There is a lag between soil deformation and groundwater changes, and the recovery rate of the settlement deformation of each soil layer caused by the rise of the water level differs from layer to layer. Finally, some possible research directions are proposed to provide new ideas for academic research in this field.
Keywords: coupled hydro-mechanical analysis, deep ground settlement, numerical simulation, pumping test, recovery test
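The coupling between pore pressure dissipation and settlement can be illustrated with the simplest reduced case of Biot's theory, one-dimensional Terzaghi consolidation. This is a didactic sketch with invented parameters, not the paper's ABAQUS model: drawdown is represented as an initial excess pore pressure that drains through one boundary, and the degree of consolidation tracks the settlement that has occurred so far.

```python
import numpy as np

# 1D Terzaghi consolidation, du/dt = cv * d2u/dz2: excess pore pressure u
# dissipates after drawdown, and settlement tracks the dissipated fraction.
cv = 1e-7                      # consolidation coefficient, m^2/s (invented)
H, nz = 10.0, 51               # layer thickness (m) and grid points
dz = H / (nz - 1)
dt = 0.4 * dz ** 2 / cv        # explicit scheme, stable for cv*dt/dz^2 <= 0.5
u = np.full(nz, 50.0)          # initial excess pore pressure, kPa (invented)
u[0] = 0.0                     # drained boundary toward the pumped aquifer
for _ in range(2000):
    u[1:-1] += cv * dt / dz ** 2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    u[-1] = u[-2]              # impervious base: no-flow boundary
# Average degree of consolidation ~ proportion of settlement completed
degree = 1.0 - (u[:-1] + u[1:]).sum() * dz / 2.0 / (50.0 * H)
```

Even this reduced model reproduces the lag the abstract reports: pressure near the drained boundary dissipates quickly while deeper soil responds later, so deep-layer deformation trails the water level change.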
Procedia PDF Downloads 44
1468 Sustainability Assessment of a Deconstructed Residential House
Authors: Atiq U. Zaman, Juliet Arnott
Abstract:
This paper analyses the various benefits of and barriers to residential deconstruction in the context of environmental performance and the circular economy, based on a case study project in Christchurch, New Zealand. The case study project, "Whole House Deconstruction", aimed, firstly, to harvest materials from a residential house; secondly, to produce new products using the recovered materials; and thirdly, to organize an exhibition for the local public to promote awareness of resource conservation and sustainable deconstruction practices. Through a systematic deconstruction process, the project recovered around 12 tonnes of various construction materials, most of which would otherwise have been disposed of to landfill under the traditional demolition approach. It is estimated that the deconstruction of a similar residential house could potentially prevent around 27,029 kg of carbon emissions by recovering and reusing the building materials. In addition, the project involved local designers, who produced 400 artefacts using the recovered materials and exhibited them to accelerate public awareness. The findings from this study suggest that the deconstruction project has significant environmental benefits, as well as social benefits through involving the local community and unemployed youth as part of their professional skills development opportunities. However, the project faced a number of economic and institutional challenges. The study concludes that, with proper economic models and appropriate institutional support, a significant amount of construction and demolition waste can be reduced through a systematic deconstruction process.
Traditionally, the greatest benefits from such projects are often ignored and remain unreported to wider audiences, as most of the external and environmental costs have not been considered in the traditional linear economy.
Keywords: circular economy, construction and demolition waste, resource recovery, systematic deconstruction, sustainable waste management
Procedia PDF Downloads 182
1467 Transition From Economic Growth-Energy Use to Green Growth-Green Energy Towards Environmental Quality: Evidence from Africa Using Econometric Approaches
Authors: Jackson Niyongabo
Abstract:
This study addresses a notable gap in the existing literature on the relationship between energy consumption, economic growth, and CO₂ emissions, particularly within the African context. While numerous studies have explored these dynamics globally and regionally across various development levels, few have delved into the nuances of the regions and income levels specific to African countries. Furthermore, the interplay between green growth policies and green energy technologies, and their impact on environmental quality, has been underexplored. This research aims to fill these gaps by conducting a comprehensive analysis of the transition from conventional economic growth and energy consumption to a paradigm of green growth coupled with green energy utilization across the African continent from 1980 to 2018. The study is structured into three main parts: an empirical examination of the long-term effects of energy intensity, renewable energy consumption, and economic growth on CO₂ emissions across diverse African regions and income levels; an estimation of the long-term impact of green growth and green energy use on CO₂ emissions for countries implementing green policies within Africa, as well as at the regional and global levels; and a comparative analysis of the impact of green growth policies on environmental degradation before and after implementation. Employing advanced econometric methods and panel estimators, the study utilizes a testing framework, panel unit root tests, and various estimators to derive meaningful insights. The anticipated results and conclusions will be elucidated through causality tests, impulse response, and variance decomposition analyses, contributing valuable knowledge to the discourse on sustainable development in the African context.
Keywords: economic growth, green growth, energy consumption, CO₂ emissions, econometric models, green energy
Procedia PDF Downloads 58
1466 Quoting Jobshops Due Dates Subject to Exogenous Factors in Developing Nations
Authors: Idris M. Olatunde, Kareem B.
Abstract:
In manufacturing systems, especially job shops, service performance is a key factor that determines customer satisfaction. Service performance depends not only on the quality of the output but on the delivery lead times as well. Besides product quality enhancement, delivery lead time must be minimized for optimal patronage. Quoting accurate due dates is a sine qua non for job shop operational survival in a globally competitive environment. Quoting accurate due dates in job shops has been a herculean task that has nearly defied solution by the many methods employed, owing to the complex job-routing nature of the system. This class of NP-hard problems possesses no rigid algorithm that can give an optimal solution. The job shop operational problem is more complex in developing nations due to some peculiar factors. Operational complexity in job shops emanates from political instability, a poor economy, limited technological know-how, and an unpromising socio-political environment. These exogenous factors were hardly considered in previous studies on scheduling problems related to due date determination in job shops. This study fills the gap left by past studies by developing a dynamic model that incorporates the exogenous factors for the accurate determination of due dates for jobs of varying complexity. Real data from six job shops selected from different parts of Nigeria were used to test the efficacy of the model, and the outcomes were analyzed statistically. The results of the analyses showed that the model is more promising in determining accurate due dates than the traditional models deployed by many job shops, in terms of patronage and lead time minimization.
Keywords: due dates prediction, improved performance, customer satisfaction, dynamic model, exogenous factors, job shops
Procedia PDF Downloads 412