Search results for: pseudo-operational matrix of integration
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4823

263 Research on the Effect of Coal Ash Slag Structure Evolution on Its Flow Behavior During Co-gasification of Coal and Indirect Coal Liquefaction Residue

Authors: Linmin Zhang

Abstract:

Entrained-flow gasification is considered the most promising gasification technology because of its clean and efficient utilization of coal. Stable slag fluidity at high temperatures is the key to long-period operation of the gasifier. The diversity and differences of coal ash-slag systems make it difficult to meet the requirements for stable slagging in entrained-flow gasifiers. Therefore, coal blending or adding fluxes has long been used in industry to improve the flow behavior of coal ash. As a by-product of the indirect coal liquefaction process, indirect coal liquefaction residue (ICLR) is a kind of industrial solid waste that is usually disposed of by stacking or landfilling. However, this disposal method not only occupies land resources but also causes serious pollution of soil and water bodies through leachate containing toxic and harmful metals. As a carbon-containing matrix, ICLR is not only a waste but also an energy resource. Co-gasifying ICLR in existing industrial gasifiers can not only turn industrial solid waste into fuel but also save coal resources. Moreover, ICLR usually has an ash chemical composition distinct from that of coal, which affects the slagging performance of the gasifier. Therefore, exploring the effect of ICLR ash addition on coal ash flow behavior can not only improve the slagging performance and gasification efficiency of entrained-flow gasifiers by exploiting the unique ash chemistry of ICLR but also provide theoretical support for the large-scale consumption of industrial solid waste. Combining molecular dynamics simulations with Raman spectroscopy experiments, the effect of ICLR addition on slag structure and fluidity was explained, and the relationship between the evolution of the slag's short/medium-range microstructure and its macroscopic flow behavior was discussed. The research found that the high silicon and aluminum content in coal ash led to the formation of complex [SiO₄]⁴⁻ and [AlO₄]⁵⁻ tetrahedral structures at high temperature, and these tetrahedra were connected by oxygen atoms to form multi-membered ring structures with a high degree of polymerization. Due to the action of the multi-membered ring structure, internal friction in the slag increased, and the viscosity was higher at the macro level. As a network-modifying ion, Fe²⁺ can replace Si⁴⁺ and Al³⁺ in the multi-membered ring structure and combine with O²⁻, which destroys the bridging oxygen (BO) structure and transforms the more complex tricluster oxygen (TO) and bridging oxygen (BO) into simple non-bridging oxygen (NBO) structures. As a result, a large number of multi-membered rings with high polymerization degrees were depolymerized into low-membered rings with low polymerization degrees. The evolution of oxygen types and ring structures reduced the structural complexity and polymerization degree of the coal ash slag, resulting in a decrease in its viscosity.
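
As a hedged illustration of the structural analysis described above, the sketch below classifies oxygen atoms in a tiny, invented molecular dynamics snapshot as non-bridging (NBO), bridging (BO) or tricluster (TO) oxygen by counting nearby network formers (Si/Al); the coordinates and the 2 Å cutoff are placeholders, not values from the study.

```python
# Sketch: count NBO/BO/TO oxygen species in an MD snapshot of silicate slag.
# An oxygen bonded to one network former (Si/Al) is non-bridging (NBO),
# to two is bridging (BO), and to three is a tricluster (TO).
import numpy as np

def classify_oxygens(o_xyz, former_xyz, cutoff=2.0):
    """Count oxygen species from O and network-former coordinates (angstrom)."""
    counts = {"NBO": 0, "BO": 0, "TO": 0}
    for o in o_xyz:
        n = int(np.sum(np.linalg.norm(former_xyz - o, axis=1) < cutoff))
        if n == 1:
            counts["NBO"] += 1
        elif n == 2:
            counts["BO"] += 1
        elif n >= 3:
            counts["TO"] += 1
    return counts

# Invented toy coordinates: two oxygens, three Si/Al network formers
o_xyz = np.array([[0.0, 0.0, 0.0], [6.3, 0.0, 0.0]])
former_xyz = np.array([[1.6, 0.0, 0.0], [-1.6, 0.0, 0.0], [4.7, 0.0, 0.0]])
print(classify_oxygens(o_xyz, former_xyz))  # {'NBO': 1, 'BO': 1, 'TO': 0}
```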

Keywords: ash slag, coal gasification, fluidity, industrial solid waste, slag structure

Procedia PDF Downloads 31
262 The Four Pillars of Islamic Design: A Methodology for an Objective Approach to the Design and Appraisal of Islamic Urban Planning and Architecture Based on Traditional Islamic Religious Knowledge

Authors: Azzah Aldeghather, Sara Alkhodair

Abstract:

In the modern urban planning and architecture landscape, with Western ideologies and styles becoming the mainstay of experience and definitions globally, the Islamic world requires a methodology that defines its expression and transcends cultural, societal, and national styles. This paper proposes a methodology as an objective system to define, evaluate and apply traditional Islamic knowledge to Islamic urban planning and architecture, providing the Islamic world with a system to manifest its approach to design. The methodology is expressed as Four Pillars based on the traditional meanings of Arabic words, roughly translated as Pillar One: The Principles (Al Mabade'), Pillar Two: The Foundations (Al Asas), Pillar Three: The Purpose (Al Ghaya), and Pillar Four: Presence (Al Hadara). Pillar One (The Principles) expresses the unification (Tawheed) pillar of Islam, "There is no God but God", and comprises seven principles: 1. Human values (Qiyam Al Insan); 2. Universal language as sacred geometry; 3. Fortitude© and Benefitability©; 4. Balance and integration: conjoining the opposites; 5. Man, time, and place; 6. Body, mind, spirit, and essence; 7. Unity of design expression to achieve unity, harmony, and security in design. Pillar Two (The Foundations) is based on two foundations: "Muhammad is the Prophet of God" and his relationship to the renaming of Medina City as a prototypical city or place, which defines a center space for collection, conjoined by an analysis of the Medina Charter as a base for humanistic design. Pillar Three (The Purpose, Al Ghaya) comprises four criteria: the naming of the design as a title, the intention of the design as an end goal, the reasoning behind the design, and the priorities of expression. Pillar Four (Presence, Al Hadara) is usually translated as "civilization"; in Arabic, the root of Hadara is "to be present". It has five primary definitions utilized to express the act of design: Wisdom (Hikma) as a philosophical concept; Identity (Hawiya) of the form; Dialogue (Hiwar), the requirements of the project vis-a-vis what the designer wishes to convey; Expression (Al Ta'abeer) the designer wishes to apply; and Resources (Mawarid) available. The proposal provides examples, where applicable, of past and present designs that exemplify the manifestation of the Pillars. The proposed methodology endeavors to return Islamic urban planning and architecture to its a priori position as a leading design expression adaptable to any place, time, and cultural expression, while providing a base for analysis that transcends the concept of style and external form as a definition and expresses the singularity of the esoteric "spiritual" aspects in a rational, principled, and logical manner clearly addressed in Islam's essence.

Keywords: Islamic architecture, Islamic design, Islamic urban planning, principles of Islamic design

Procedia PDF Downloads 107
261 Physico-Mechanical Behavior of Indian Oil Shales

Authors: K. S. Rao, Ankesh Kumar

Abstract:

The search for alternative energy sources to petroleum has intensified because of increasing need and the depletion of petroleum reserves. The importance of oil shales as an economically viable substitute has therefore increased many fold over the last 20 years. Technologies like hydro-fracturing have opened the field of oil extraction from these unconventional rocks. Oil shale is a compact laminated rock of sedimentary origin containing organic matter known as kerogen, which yields oil when distilled. Oil shales are formed from the contemporaneous deposition of fine-grained mineral debris and organic degradation products derived from the breakdown of biota. Conditions required for the formation of oil shales include abundant organic productivity, early development of anaerobic conditions, and a lack of destructive organisms. These rocks have not gone through high-temperature and high-pressure conditions in nature. The most common approach for oil extraction is drastically breaking the bonds of the organics, which involves a retorting process. The two approaches to retorting are surface retorting and in-situ processing. The most environmentally friendly approach for extraction is in-situ processing. The three steps involved in this process are fracturing, injection to achieve communication, and fluid migration at the underground location. Upon heating (retorting) oil shale at temperatures in the range of 300 to 400°C, the kerogen decomposes into oil, gas and residual carbon in a process referred to as pyrolysis. It is therefore very important to understand the physico-mechanical behavior of such rocks in order to improve the technology for in-situ extraction. It is clear from past research and physical observations that these rocks behave anisotropically, so it is very important to understand their mechanical behavior under high pressure at different orientation angles for the economical use of these resources. Knowing the engineering behavior under the above conditions will allow us to simulate the deep-ground retorting conditions numerically and experimentally. Many researchers have investigated the effect of organic content on the engineering behavior of oil shale, but the coupled effect of the organic and inorganic matrix is yet to be analyzed. The favourable characteristics of Assam coal for conversion to liquid fuels have been known for a long time. Studies have indicated that these coals and carbonaceous shales constitute the principal source rocks that have generated the hydrocarbons produced from the region. Rock cores of representative samples were collected by on-site drilling, as coring in the laboratory is very difficult due to the rock's highly anisotropic nature. Different tests are performed to understand the petrology of these samples, and chemical analyses are also done to quantify exactly the organic content in these rocks. The mechanical properties of these rocks are investigated at different anisotropy angles. The results obtained from petrology and chemical analysis are then correlated with the mechanical properties. These properties and correlations will further help in increasing the producibility of these rocks. It is well established that the organic content is negatively correlated with tensile strength, compressive strength and modulus of elasticity.

Keywords: oil shale, producibility, hydro-fracturing, kerogen, petrology, mechanical behavior

Procedia PDF Downloads 347
260 Characteristics-Based Lq-Control of Cracking Reactor by Integral Reinforcement

Authors: Jana Abu Ahmada, Zaineb Mohamed, Ilyasse Aksikas

Abstract:

The linear quadratic control of hyperbolic first-order partial differential equations (PDEs) is presented. The aim of this research is to control chemical reactions. This is achieved by converting the PDE system to ordinary differential equations (ODEs) using the method of characteristics, and then controlling the reduced system using integral reinforcement learning. The designed controller is applied to a catalytic cracking reactor. Background: transport-reaction systems cover a large range of chemical and biochemical processes. They are best described by nonlinear PDEs derived from mass and energy balances. The main application considered in this work is the catalytic cracking reactor. Indeed, the cracking reactor is widely used to convert high-boiling, high-molecular-weight hydrocarbon fractions of petroleum crude oils into more valuable gasoline, olefinic gases, and other products. On the other hand, control of PDE systems is an important and rich area of research. One of the main control techniques is feedback control. This type of control utilizes information coming from the system to correct its trajectories and drive it to a desired state. Moreover, feedback control rejects disturbances and reduces the effects of variations in the plant parameters. Linear-quadratic control is a feedback control since the developed optimal input is expressed as feedback on the system state to exponentially stabilize and drive a linear plant to the steady state while minimizing a cost criterion. The integral reinforcement learning policy iteration technique is a strong method that solves the linear quadratic regulator problem for continuous-time systems online in real time, using only partial information about the system dynamics (i.e., the drift dynamics A of the system need not be known) and without requiring measurements of the state derivative. This is, in effect, a direct (i.e., no system identification procedure is employed) adaptive control scheme for partially unknown linear systems that converges to the optimal control solution. Contribution: the goal of this research is to develop a characteristics-based optimal controller for a class of hyperbolic PDEs and apply the developed controller to a catalytic cracking reactor model. In the first part, an algorithm to control a class of hyperbolic PDE systems is developed. The method of characteristics is employed to convert the PDE system into a system of ODEs. Then, the control problem is solved along the characteristic curves. The reinforcement technique is implemented to find the state-feedback matrix. In the second part, the developed algorithm is applied to the important application of a catalytic cracking reactor. The main objective is to use the inlet fraction of gas oil as a manipulated variable to drive the process state towards desired trajectories. The outcome of this challenging research could provide a significant technological innovation for the gas industries, since the catalytic cracking reactor is one of the most important conversion processes in petroleum refineries.
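
As a concrete illustration of the policy-iteration idea behind integral reinforcement learning, the sketch below runs its model-based analogue (Kleinman's algorithm) on a toy two-state linear system. The matrices are invented placeholders, not the cracking reactor model; in the data-driven integral reinforcement learning version, the Lyapunov solve would be replaced by a least-squares fit to measured state trajectories, so the drift matrix A need not be known.

```python
# Policy iteration for the continuous-time LQR problem (Kleinman's algorithm),
# the model-based analogue of the integral reinforcement learning scheme above.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def lqr_policy_iteration(A, B, Q, R, K0, iters=30):
    """Iterate on the feedback gain K until it converges to the optimal LQR gain."""
    K = K0  # must be stabilizing
    for _ in range(iters):
        Ak = A - B @ K
        # Policy evaluation: solve Ak' P + P Ak = -(Q + K' R K) for the cost matrix P
        P = solve_continuous_lyapunov(Ak.T, -(Q + K.T @ R @ K))
        # Policy improvement: K <- R^{-1} B' P
        K = np.linalg.solve(R, B.T @ P)
    return K, P

# Toy 2-state system (hypothetical, not the cracking reactor model)
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.array([[1.0]])
K, P = lqr_policy_iteration(A, B, Q, R, K0=np.zeros((1, 2)))
print(K)  # converges to the gain given by the algebraic Riccati equation
```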

Keywords: PDEs, reinforcement iteration, method of characteristics, Riccati equation, cracking reactor

Procedia PDF Downloads 91
259 A Lexicographic Approach to Obstacles Identified in the Ontological Representation of the Tree of Life

Authors: Sandra Young

Abstract:

The biodiversity literature is vast and heterogeneous. In today's data age, a number of data integration and standardisation initiatives aim to facilitate simultaneous access to all the literature across biodiversity domains for research and forecasting purposes. Ontologies are being used increasingly to organise this information, but the rationalisation intrinsic to ontologies can hit obstacles when faced with the fluidity and inconsistency found in the domains comprising biodiversity. Essentially the problem is a conceptual one: biological taxonomies are formed on the basis of specific, physical specimens, yet nomenclatural rules are used to provide labels to describe these physical objects. These labels are ambiguous representations of the physical specimen. An example of this is the genus name Melpomene, which serves as the scientific nomenclatural representation of a genus of ferns but also of a genus of spiders. The physical specimens for each of these are vastly different, but they have been assigned the same nomenclatural reference. While there is much research into the conceptual stability of the taxonomic concept versus the nomenclature used, to the best of our knowledge no research has yet looked empirically at the literature to see the conceptual plurality or singularity of the use of these species' names, the linguistic representation of a physical entity. Language itself uses words as symbols to represent real-world concepts, whether physical entities or otherwise, and as such lexicography has a well-founded history in the conceptual mapping of words in context for dictionary making. This makes it an ideal candidate to explore this problem. The lexicographic approach uses corpus-based analysis to look at word use in context, with a specific focus on collocated word frequencies (the frequencies of words used in specific grammatical and collocational contexts). It allows for inconsistencies and contradictions in the source data and in fact includes these in the word characterisation so that 100% of the available evidence is counted. Corpus analysis is indeed suggested as one of the ways to identify concepts for ontology building, because of its ability to look empirically at data and show patterns in language usage, which can indicate conceptual ideas that go beyond words themselves. In this sense it could potentially be used to identify whether the hierarchical structures present within the empirical body of literature match those that have been identified in ontologies created to represent them. The first stages of this research have revealed a hierarchical structure that becomes apparent in the biodiversity literature when annotating scientific species' names, common names and more general names as classes, which will be the focus of this paper. The next step in the research focuses on a larger corpus in which specific words can be analysed and then compared with existing ontological structures looking at the same material, to evaluate the methods by means of an alternative perspective. This research aims to provide evidence as to the validity of the current methods in knowledge representation for biological entities, and also to shed light on the way that scientific nomenclature is used within the literature.
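
As a minimal illustration of the collocation analysis described above, the sketch below counts the words that co-occur with a target species name within a fixed window; the two-sentence corpus and the window size are invented for demonstration.

```python
# Sketch: collocation-frequency count of the kind used in corpus-based
# lexicography - how often each word appears within a fixed window around
# a target species name.
from collections import Counter

def collocates(tokens, target, window=3):
    counts = Counter()
    for i, tok in enumerate(tokens):
        if tok.lower() == target:
            lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
            counts.update(t.lower() for j, t in enumerate(tokens[lo:hi], lo) if j != i)
    return counts

# Invented toy corpus showing the fern/spider ambiguity of "Melpomene"
corpus = ("the fern genus Melpomene grows in cloud forests ; "
          "the spider genus Melpomene was described later").split()
print(collocates(corpus, "melpomene").most_common(5))
```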

Keywords: ontology, biodiversity, lexicography, knowledge representation, corpus linguistics

Procedia PDF Downloads 138
258 Development and Validation of a Quantitative Measure of Engagement in the Analysing Aspect of Dialogical Inquiry

Authors: Marcus Goh Tian Xi, Alicia Chua Si Wen, Eunice Gan Ghee Wu, Helen Bound, Lee Liang Ying, Albert Lee

Abstract:

The Map of Dialogical Inquiry provides a conceptual look at the underlying nature of future-oriented skills. According to the Map, learning is learner-oriented, with conversational time shifted from teachers to learners, who play a strong role in deciding what and how they learn. For example, in courses operating on the principles of Dialogical Inquiry, learners were able to leave the classroom with a deeper understanding of the topic, broader exposure to differing perspectives, and stronger critical thinking capabilities, compared to traditional approaches to teaching. Despite its contributions to learning, the Map is grounded in a qualitative approach, both in its development and in its application for providing feedback to learners and educators. Studies hinge on open-ended responses by Map users, which can be time-consuming and resource-intensive. The present research is motivated by this gap in practicality and aims to develop and validate a quantitative measure of the Map. In addition, a quantifiable measure may also strengthen applicability by making learning experiences trackable and comparable. The Map outlines eight learning aspects that learners should holistically engage. This research focuses on the Analysing aspect of learning. According to the Map, Analysing has four key components: liking or engaging in logic, using interpretative lenses, seeking patterns, and critiquing and deconstructing. Existing scales of constructs related to these components (e.g., critical thinking, rationality) were identified so that items for the current scale could be adapted from them. Specifically, items were phrased beginning with an "I", followed by an action phrase, to assess learners' engagement with Analysing either in general or in classroom contexts. Paralleling standard scale development procedure, the 26-item Analysing scale was administered to 330 participants alongside existing scales with varying levels of association to Analysing, to establish construct validity. Subsequently, the scale was refined, and its dimensionality, reliability, and validity were determined. Confirmatory factor analysis (CFA) revealed whether scale items loaded onto the four factors corresponding to the components of Analysing. To refine the scale, items were systematically removed via an iterative procedure, according to their factor loadings and the results of likelihood ratio tests at each step. Eight items were removed this way. The Analysing scale is better conceptualised as unidimensional, rather than comprising the four components identified by the Map, for three reasons: 1) the covariance matrix of the model specified for the CFA was not positive definite, 2) correlations among the four factors were high, and 3) exploratory factor analyses did not yield an easily interpretable factor structure of Analysing. Regarding validity, since the Analysing scale had higher correlations with conceptually similar scales than with conceptually distinct scales, with minor exceptions, construct validity was largely established. Overall, the satisfactory reliability and validity of the scale suggest that the current procedure can result in a valid and easy-to-use measure for each aspect of the Map.
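
As a hedged illustration of the reliability step mentioned above, the sketch below computes Cronbach's alpha for a respondents-by-items response matrix; the random data and the 18-item count (the 26 items minus the 8 removed) stand in for the actual field data.

```python
# Sketch: internal-consistency (Cronbach's alpha) check of the kind used
# alongside CFA when refining a scale.
import numpy as np

def cronbach_alpha(X):
    """X: respondents x items matrix of scale responses."""
    X = np.asarray(X, dtype=float)
    k = X.shape[1]
    item_vars = X.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = X.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(330, 18))  # placeholder 5-point responses
print(round(cronbach_alpha(responses), 3))
```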

Keywords: analytical thinking, dialogical inquiry, education, lifelong learning, pedagogy, scale development

Procedia PDF Downloads 91
257 Antimicrobial Nanocompositions Made of Amino Acid Based Biodegradable Polymers

Authors: Nino Kupatadze, Mzevinar Bedinashvili, Tamar Memanishvili, Manana Gurielidze, David Tugushi, Ramaz Katsarava

Abstract:

Bacteria easily colonize the surfaces of tissues, surgical devices (implants, orthopedics, catheters, etc.), and instruments, causing surgical device related infections. Therefore, the battle against bacteria and the prevention of biofilm formation on surgical devices is one of the main challenges of biomedicine today. Our strategy for solving this problem consists in using antimicrobial polymeric coatings as effective "shields" to protect surfaces from bacterial colonization and biofilm formation. One of the most promising approaches looks to be the use of antimicrobial bioerodible polymeric nanocomposites containing silver nanoparticles (AgNPs). We assume that the combination of an erodible polymer with a strong bactericide should hinder bacteria from occupying the surface and forming a biofilm. It has to be noted that this kind of nanocomposite is also promising as a wound dressing material to treat infected superficial wounds. Various synthetic and natural polymers have been used for creating biocomposites containing AgNPs, serving both as particle stabilizers and as matrices forming elastic films at surfaces. One of the most effective systems to fabricate AgNPs is an ethanol solution of polyvinylpyrrolidone (PVP) with dissolved AgNO₃, where ethanol serves as the AgNO₃ reductant and PVP as the AgNPs stabilizer (through the interaction of nanoparticles with the nitrogen atom of the amide group). Though PVP is a biocompatible and film-forming polymer, it is not a good candidate for designing either a "biofilm shield" or a wound dressing material because of its high solubility in water: though the solubility of PVP provides the desirable release of AgNPs from the matrix, the coating is easily washed away from the surfaces. More promising as matrices are water-insoluble but bioerodible polymers that can provide the release of AgNPs and form long-lasting coatings at the surfaces. For creating bioerodible, water-insoluble antimicrobial coatings containing AgNPs, we selected amino acid based biodegradable polymers (AABBPs): poly(ester amide)s, poly(ester urea)s, and their copolymers containing amide and related groups capable of stabilizing AgNPs. Among the huge variety of AABBPs reported, we selected the polymers soluble in ethanol. For preparing AgNPs-containing nanocompositions, AABBPs and AgNO₃ were dissolved in ethanol and subjected to photochemical reduction using daylight irradiation. The formation of AgNPs was observed visually by the coloring of the solutions in brownish-red. The obtained AgNPs were characterized by UV spectroscopy, transmission electron microscopy (TEM), and dynamic light scattering (DLS). According to the UV and TEM data, the photochemical reduction resulted presumably in spherical AgNPs with a rather high contribution of particles below 10 nm, which are known to be responsible for the antimicrobial activity. The DLS study showed that the average size of the nanoparticles formed after photo-reduction in ethanol solution was within 50 nm. The in vitro antimicrobial activity study of the new nanocomposite material is in progress now.

Keywords: nanocomposites, silver nanoparticles, polymer, biodegradable

Procedia PDF Downloads 396
256 Monitoring the Production of Large Composite Structures Using Dielectric Tool Embedded Capacitors

Authors: Galatee Levadoux, Trevor Benson, Chris Worrall

Abstract:

With the rise of public awareness of climate change comes an increasing demand for renewable sources of energy. As a result, the wind power sector is striving to manufacture longer, more efficient and reliable wind turbine blades. Currently, one of the leading causes of blade failure in service is improper cure of the resin during manufacture. The infusion process creating the main part of the composite blade structure remains a critical step that is yet to be monitored in real time. This stage consists of a viscous resin being drawn into a mould under vacuum, then undergoing a curing reaction until solidification. Successful infusion assumes the resin fills all the voids and cures completely. Given that the electrical properties of the resin change significantly during its solidification, both the filling of the mould and the curing reaction can be followed using dielectrometry. However, industrially available dielectric sensors are currently too small to monitor the entire surface of a wind turbine blade. The aim of the present research project is to scale up the dielectric sensor technology and develop a device able to monitor the manufacturing process of large composite structures, assessing the conformity of the blade before it even comes out of the mould. An array of flat copper wires acting as electrodes is embedded in a polymer matrix fixed in an infusion mould. A multi-frequency analysis from 1 Hz to 10 kHz is performed during the filling of the mould with an epoxy resin and the hardening of the said resin. By following the variations of the complex admittance Y*, the filling of the mould and the curing process are monitored. Results are compared to numerical simulations of the sensor in order to validate a virtual cure-monitoring system. The results obtained by drawing glycerol on top of the copper sensor displayed a linear relation between the wetted length of the sensor and the measured complex admittance. Drawing epoxy resin on top of the sensor and letting it cure at room temperature for 24 hours provided curves characteristic of those obtained when conventional interdigitated sensors are used to follow the same reaction. The response from the developed sensor has shown the different stages of the polymerization of the resin, validating the geometry of the prototype. The model created and analysed using COMSOL has shown that the dielectric cure process can be simulated, so long as sufficiently accurate time- and temperature-dependent material properties can be determined. The model can be used to help design larger sensors suitable for use with full-sized blades. The preliminary results obtained with the sensor prototype indicate that the infusion and curing process of an epoxy resin can be followed with the chosen configuration on a scale of several decimetres. Further work is to be devoted to studying the influence of the sensor geometry and the infusion parameters on the results obtained. Ultimately, the aim is to develop a larger-scale sensor able to monitor the flow and cure of large composite panels industrially.
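
As a simple illustration of the monitored quantity, the sketch below evaluates the complex admittance over the 1 Hz to 10 kHz sweep quoted above, assuming, as a simplification, that the resin between the electrodes behaves as a parallel conductance G and capacitance C; the G and C values are illustrative, not measured.

```python
# Sketch: complex admittance Y* = G + j*omega*C of a resin modelled as a
# parallel conductance G and capacitance C between the sensor electrodes.
import numpy as np

def complex_admittance(freq_hz, G, C):
    omega = 2 * np.pi * freq_hz
    return G + 1j * omega * C

freqs = np.logspace(0, 4, 5)        # 1 Hz .. 10 kHz, as in the sweep above
G_wet, C_wet = 1e-6, 50e-12         # liquid resin: higher ionic conductance
G_cured, C_cured = 1e-9, 30e-12     # cured resin: conductance collapses
for f in freqs:
    Yw = complex_admittance(f, G_wet, C_wet)
    Yc = complex_admittance(f, G_cured, C_cured)
    print(f"{f:8.0f} Hz  |Y*| wet {abs(Yw):.3e} S   cured {abs(Yc):.3e} S")
```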

Keywords: composite manufacture, dielectrometry, epoxy, resin infusion, wind turbine blades

Procedia PDF Downloads 168
255 Direct Integration of 3D Ultrasound Scans with Patient Educational Mobile Application

Authors: Zafar Iqbal, Eugene Chan, Fareed Ahmed, Mohamed Jama, Avez Rizvi

Abstract:

Advancements in ultrasound technology have enabled machines to capture 3D and 4D images with intricate features of the growing fetus. Sonographers can now capture clear 3D images and 4D videos of the fetus, especially of the face. Fetal faces are often seen on the third-trimester ultrasound scan, where anatomical features become more defined. Parents often want 3D/4D images and videos of their ultrasounds, particularly images that capture the child's face. Sidra Medicine developed a patient education mobile app called 10 Moons to improve care and provide useful information throughout the pregnancy. In addition to general information, we built the ability to send ultrasound images directly from the modality to the mobile application, allowing expectant mothers to easily store and share images of their baby. 10 Moons represents the length of the pregnancy on a lunar calendar, which has both cultural and religious significance in the Middle East. During the third-trimester scan, sonographers can capture 3D pictures of the fetus. The ultrasound machines are connected to a local 10 Moons server running a Digital Imaging and Communications in Medicine (DICOM) application. Sonographers are able to send images directly to the DICOM server via a preprogrammed button on the ultrasound modality. Mothers can also request which pictures they would like to have available on the app. An internally built DICOM application receives the image and saves the patient information from the DICOM header (for verification purposes). The application also anonymizes the image by removing all the DICOM header information and subsequently converts it into a lossless JPEG. Finally, the application passes the image to the mobile application server. On the 10 Moons mobile app, patients enter their Medical Record Number (MRN) and Date of Birth (DOB) to receive a One Time Password (OTP), for security reasons, to view the images. Patients can also share the anonymized images with friends and family. Furthermore, patients can request 3D printed mementos of their child through 10 Moons. 10 Moons is a unique patient education and information application where expectant mothers can also see 3D ultrasound images of their children. Sidra Medicine staff has the added benefit of a full content management administrative backend where updates to content can be made. The app is hosted on secure infrastructure with both local and public interfaces. The application is also available in both English and Arabic to serve most of the patients in the region. Innovation is at the heart of modern healthcare management. With innovation being one of Sidra Medicine's core values, our 10 Moons application provides expectant mothers with unique educational content as well as the ability to store and share images of their child and purchase 3D printed mementos.
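
A hedged sketch of the anonymization step described above is given below, using the pydicom and Pillow libraries: identifiers are captured from the header for verification and then stripped, and the pixel data is exported losslessly. PNG is used here for simplicity as the lossless format, and the tag list and file names are illustrative; this is not Sidra Medicine's actual implementation.

```python
# Sketch: read a DICOM ultrasound image, capture identifiers for verification,
# strip the header, and export only the image content losslessly.
import pydicom
from PIL import Image

def anonymize_and_export(dicom_path, out_path):
    ds = pydicom.dcmread(dicom_path)
    # Capture the identifier for server-side verification before removal
    mrn = ds.get("PatientID", "")
    # Remove direct identifiers (illustrative subset) and private vendor tags
    for tag in ("PatientName", "PatientID", "PatientBirthDate",
                "ReferringPhysicianName", "InstitutionName"):
        if tag in ds:
            delattr(ds, tag)
    ds.remove_private_tags()
    # Export the pixel data only, leaving the DICOM header behind
    Image.fromarray(ds.pixel_array).save(out_path)
    return mrn

mrn = anonymize_and_export("fetus_3d.dcm", "fetus_3d.png")  # hypothetical paths
```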

Keywords: patient educational mobile application, ultrasound images, digital imaging and communications in medicine (DICOM), imaging informatics

Procedia PDF Downloads 142
254 An Approach to Addressing Homelessness in Hong Kong: Life Story Approach

Authors: Tak Mau Simon Chan, Ying Chuen Lance Chan

Abstract:

Homelessness has been a popular and controversial topic of debate in Hong Kong, a city which is densely populated and well known for very expensive housing. The constitution of the homeless as threats to the community and environmental hygiene is ambiguous and debatable in the Hong Kong context. The lack of an intervention model is the critical research gap thus far, aside from the tangible services delivered. The life story approach (LSA), with its unique humanistic orientation, has been well applied in recent decades to depict the needs of various target groups, but not the homeless. It is argued that the LSA, which has been employed by health professionals in the landscape of dementia and health and social care settings, can be used as a reference in the local Chinese context through indigenization. This study, therefore, captures the viewpoints of service providers and users by constructing an indigenous intervention model that refers to the LSA in serving the chronically homeless. Drawing on 8 focus groups involving 13 social workers and 27 homeless individuals, together with individual in-depth interviews with 12 homeless individuals, a framework of the LSA for homeless people is proposed. Through thematic analysis, three main themes of their life stories were generated, namely, the family, negative experiences and identity transformation. The three domains form a solid framework that can be applied not only to the homeless but also to other disadvantaged groups in the Chinese context. Based on the three domains of family, negative experiences and identity transformation, the model is applied in the daily practices of social workers who help the homeless. The domain of family encompasses familial relationships from the past to the present to the speculated future, with ten sub-themes. The domain of negative experiences includes seven sub-themes, with reference to deviant behavior committed. The last domain, identity transformation, incorporates the awareness and redefining of one's identity, with a total of seven sub-themes. The first two domains are important components of personal histories, while the third is more of an unknown, exploratory and yet-to-be-redefined territory with a more positive and constructive orientation towards developing one's identity and life meaning. The longitudinal temporal dimension of moving from past to present to future enriches the meaning-making process, facilitates the integration of life experiences and maintains a more hopeful dialogue. The model is tested and its effectiveness measured using qualitative and quantitative methods to affirm the extent to which it is relevant to the local context. First, it contributes a clear guideline for social workers, who can use the approach as a reference source. Secondly, the framework acts as a new intervention means to address problem-saturated stories and the intangible needs of the homeless. Thirdly, the model extends the application beyond health-related issues. Last but not least, the model is highly relevant to the local indigenous context.

Keywords: homeless, indigenous intervention, life story approach, social work practice

Procedia PDF Downloads 296
253 Scenario-Based Scales and Situational Judgment Tasks to Measure the Social and Emotional Skills

Authors: Alena Kulikova, Leonid Parmaksiz, Ekaterina Orel

Abstract:

Social and emotional skills are considered by modern researchers as predictors of a person's success both in specific areas of activity and in life as a whole. The popularity of this scientific direction has ensured the emergence of a large number of practices aimed at developing and evaluating socio-emotional skills. Assessment of social and emotional development is carried out at the national level, as well as at the level of individual regions and institutions. Although many of the existing social and emotional skills assessment tools are quite convenient and reliable, new technologies and task formats keep emerging that improve the basic characteristics of such tools. Thus, the goal of the current study is to develop a tool for assessing social and emotional skills such as emotion recognition, emotion regulation, empathy and a culture of self-care. To develop this tool, a Rasch-Gutman scenario-based approach was used. This approach has shown its reliability and merit for measuring various complex constructs: parental involvement; teacher practices that support cultural diversity and equity; willingness to participate in the life of the community after psychiatric rehabilitation; educational motivation; and others. To assess emotion recognition, we used a situational judgment task based on the OCC (Ortony, Clore, and Collins) theory of emotions. The main advantage of these two approaches compared to classical Likert scales is that they reduce social desirability in answers. A field test to check the psychometric properties of the developed instrument was conducted. The instrument was developed for the presidential autonomous non-profit organization "Russia - Land of Opportunity" for nationwide soft skills assessment among higher education students. The sample for the field test consisted of 500 students aged 18 to 25 (mean = 20; standard deviation = 1.8), 71% female, of whom 67% are only studying and not currently working, and 500 employed adults aged 26 to 65 (mean = 42.5; SD = 9), 57% female. Analysis of the psychometric characteristics of the scales was carried out using the methods of Item Response Theory (IRT). A one-parameter Rating Scale Model (RSM) and a Graded Response Model (GRM) of modern test theory were applied. The GRM is a polytomous extension of the dichotomous two-parameter model (2PL) based on the cumulative logit function for modeling the probability of a correct answer. The validity of the developed scales was assessed using correlation analysis and the multitrait-multimethod matrix (MTMM). The developed instrument showed good psychometric quality and can be used by HR specialists or educational management. The detailed results of the psychometric study of the quality of the instrument, including the functioning of the tasks of each scale, will be presented, and the results of the validity study by MTMM analysis will be discussed.
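
As a minimal illustration of the GRM referenced above, the sketch below computes the category response probabilities as differences of cumulative logits; the discrimination and threshold values are illustrative, not estimates from the field test.

```python
# Sketch: graded response model (GRM) category probabilities, obtained as
# differences of cumulative logit curves P(X >= k).
import numpy as np

def grm_probs(theta, a, b):
    """P(response = k | theta) for ordered categories 0..len(b).

    a: item discrimination; b: increasing thresholds b_1 < ... < b_K.
    """
    b = np.asarray(b, dtype=float)
    cum = 1.0 / (1.0 + np.exp(-a * (theta - b)))   # P(X >= k), k = 1..K
    cum = np.concatenate(([1.0], cum, [0.0]))      # pad with P(X >= 0) = 1
    return cum[:-1] - cum[1:]                      # category probabilities

# Illustrative item with 4 ordered categories; probabilities sum to 1
print(grm_probs(theta=0.5, a=1.7, b=[-1.0, 0.0, 1.2]).round(3))
```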

Keywords: social and emotional skills, psychometrics, MTMM, IRT

Procedia PDF Downloads 76
252 The Development of Assessment Criteria Framework for Sustainable Healthcare Buildings in China

Authors: Chenyao Shen, Jie Shen

Abstract:

A rating system provides an effective framework for assessing building environmental performance and integrating sustainable development into building and construction processes, as it can be used as a design tool by developing appropriate sustainable design strategies and determining performance measures to guide the sustainable design and decision-making processes. Healthcare buildings are resource-intensive (water, energy, etc.). To maintain high-cost operations and complex medical facilities, they require a great deal of hazardous and non-hazardous materials and stringent control of environmental parameters, and they are responsible for producing polluting emissions. Compared with other types of buildings, the impact of healthcare buildings on the environment over their full life cycle is particularly large. With broad recognition among designers and operators that energy use can be reduced substantially, many countries have set up their own green rating systems for healthcare buildings. There are four main green healthcare building evaluation systems widely acknowledged in the world: the Green Guide for Health Care (GGHC), jointly organized by the United States HCWH and CMPBS in 2003; BREEAM Healthcare, issued by the British Building Research Establishment (BRE) in 2008; the Green Star-Healthcare v1 tool, released by the Green Building Council of Australia (GBCA) in 2009; and LEED Healthcare 2009, released by the United States Green Building Council (USGBC) in 2011. In addition, the German Sustainable Building Council (DGNB) has also been developing the German Sustainable Building Evaluation Criteria (DGNB HC). In China, more and more scholars and policy makers have recognized the importance of sustainability assessment and have adapted some of these tools and frameworks. China's first comprehensive assessment standard for green building (the GBTs) was issued in 2006 (last updated in 2014), promoting sustainability in the built environment and raising awareness of environmental issues among architects, engineers, contractors and the public. However, healthcare buildings were not included in the GBTs evaluation system because of their complex medical procedures, strict requirements for the indoor/outdoor environment, and the energy consumption of various functional rooms. Learning from the advanced experience of GGHC, BREEAM, and LEED HC above, China's first assessment criteria for green hospital/healthcare buildings were finally released in December 2015. Combining both quantitative and qualitative assessment criteria, the standard highlights the differences between healthcare and other public buildings in meeting the functional needs of medical facilities and special groups. This paper focuses on the assessment criteria framework for sustainable healthcare buildings, for which the comparison of different rating systems is essential. Descriptive analysis is conducted together with cross-matrix analysis to reveal rich information on green assessment criteria in a coherent manner. The research intends to determine whether the green elements for healthcare buildings in China differ from those used in other countries, and how the assessment criteria framework can be improved.

Keywords: assessment criteria framework, green building design, healthcare building, building performance rating tool

Procedia PDF Downloads 147
251 Deep Learning Based Text to Image Synthesis for Accurate Facial Composites in Criminal Investigations

Authors: Zhao Gao, Eran Edirisinghe

Abstract:

The production of an accurate sketch of a suspect based on a verbal description obtained from a witness is an essential task for most criminal investigations. The criminal investigation system employs specially trained professional artists to manually draw a facial image of the suspect according to the descriptions of an eyewitness for subsequent identification. With the advancement of deep learning, Recurrent Neural Networks (RNN) have shown great promise in Natural Language Processing (NLP) tasks. Additionally, Generative Adversarial Networks (GAN) have proven to be very effective in image generation. In this study, a trained GAN conditioned on textual features, such as keywords automatically encoded from a verbal description of a human face using an RNN, is used to generate photo-realistic facial images for criminal investigations. The intention of the proposed system is to map corresponding features into text generated from verbal descriptions. With this, it becomes possible to generate many reasonably accurate alternatives which the witness can use to identify a suspect. This reduces subjectivity in decision making both by the eyewitness and the artist, while giving the witness an opportunity to evaluate and reconsider decisions. Furthermore, the proposed approach benefits law enforcement agencies by reducing the time taken to physically draw each potential sketch, thus improving response times and mitigating potentially malicious human intervention. With the publicly available 'CelebFaces Attributes Dataset' (CelebA), supplemented by verbal descriptions as training data, the proposed architecture is able to effectively produce facial structures from given text. Word embeddings are learnt by applying the RNN architecture in order to perform semantic parsing, the output of which is fed into the GAN for synthesizing photo-realistic images. Rather than the grid search method, a metaheuristic search based on genetic algorithms is applied to evolve the network, with the intent of achieving optimal hyperparameters in a fraction of the time of a typical brute-force approach. Beyond the 'CelebA' training database, further novel test cases are supplied to the network for evaluation: witness reports detailing criminals from Interpol or other law enforcement agencies are sampled on the network. Using the descriptions provided, samples are generated and compared with the ground-truth images of a criminal in order to calculate the similarities. Two factors are used for performance evaluation: the Structural Similarity Index (SSIM) and the Peak Signal-to-Noise Ratio (PSNR). High scores on these performance metrics should demonstrate the accuracy of the approach, in the hope of proving that it can be an effective tool for law enforcement agencies. The proposed approach to criminal facial image generation has the potential to increase the ratio of criminal cases that can ultimately be resolved using eyewitness information gathering.
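
As a minimal illustration of the two evaluation measures, the sketch below computes SSIM and PSNR with scikit-image on placeholder grayscale images standing in for a generated sample and the ground-truth photograph.

```python
# Sketch: SSIM and PSNR between a "generated" image and its ground truth.
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

rng = np.random.default_rng(0)
ground_truth = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # placeholder
generated = np.clip(ground_truth + rng.normal(0, 10, (64, 64)),
                    0, 255).astype(np.uint8)                        # noisy copy

ssim = structural_similarity(ground_truth, generated, data_range=255)
psnr = peak_signal_noise_ratio(ground_truth, generated, data_range=255)
print(f"SSIM {ssim:.3f}, PSNR {psnr:.1f} dB")  # higher = closer to ground truth
```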

Keywords: RNN, GAN, NLP, facial composition, criminal investigation

Procedia PDF Downloads 164
250 Nigerian Football System: Examining Meso-Level Practices against a Global Model for Integrated Development of Mass and Elite Sport

Authors: I. Derek Kaka’an, P. Smolianov, D. Koh Choon Lian, S. Dion, C. Schoen, J. Norberg

Abstract:

This study was designed to examine mass participation and elite football performance in Nigeria with reference to advanced international football management practices. Over 200 sources of literature on sport delivery systems were analyzed to construct a globally applicable model of elite football integrated with mass participation, comprising the following three levels: macro (socio-economic, cultural, legislative, and organizational), meso (infrastructures, personnel, and services enabling sport programs) and micro (operations, processes, and methodologies for the development of individual athletes). The model has received scholarly validation and has been shown to be a framework for program analysis that is not culturally bound. The Smolianov and Zakus model has been employed for further understanding of sport systems such as US soccer, US rugby, swimming, tennis, and volleyball, as well as Russian and Dutch swimming. A questionnaire was developed using the above-mentioned model. Survey questions were validated by 12 experts, including academicians, executives from sport governing bodies, football coaches, and administrators. To identify best practices and determine areas for improvement of football in Nigeria, 120 coaches completed the questionnaire. Useful exemplars and possible improvements were further identified through semi-structured discussions with 10 Nigerian football administrators and experts. Finally, content analysis of the Nigeria Football Federation's website and organizational documentation was conducted. This paper focuses on the meso level of Nigerian football delivery, particularly the infrastructures, personnel, and services enabling sport programs. This includes training centers, competition systems, and intellectual services. Results identified remarkable achievements coupled with great potential to further develop football in different types of public and private organizations in Nigeria. These include: assimilating football competitions with other cultural and educational activities, providing favorable conditions for employees of all possible organizations to partake in and help manage football programs and events, providing football coaching integrated with counseling for the prevention of antisocial conduct, and improving cooperation between football programs and organizations for peace-making and the advancement of international relations, tourism, and socio-economic development. Accurate reporting of sports programs by the media should be encouraged through staff training for better awareness of various events. The systematic integration of these meso-level practices into the balanced development of mass and high-performance football will contribute to international sporting success as well as national health, education, and social harmony.

Keywords: football, high performance, mass participation, Nigeria, sport development

Procedia PDF Downloads 254
249 Developing Computational Thinking in Early Childhood Education

Authors: Kalliopi Kanaki, Michael Kalogiannakis

Abstract:

Nowadays, in the digital era, the early acquisition of basic programming skills and knowledge is encouraged, as it facilitates students' exposure to computational thinking and empowers their creativity, problem-solving skills, and cognitive development. More and more researchers and educators are investigating the introduction of computational thinking in K-12, since it is expected to be a fundamental skill for everyone by the middle of the 21st century, just as reading, writing and arithmetic are at the moment. In this paper, doctoral research in progress is presented, which investigates the infusion of computational thinking into the science curriculum in early childhood education. The whole attempt aims to develop young children's computational thinking by introducing them to the fundamental concepts of object-oriented programming in an enjoyable, yet educational, framework. The backbone of the research is the digital environment PhysGramming (an abbreviation of Physical Science Programming), which provides children with the opportunity to create their own digital games, turning them from passive consumers into active creators of technology. PhysGramming deploys an innovative hybrid schema of visual and text-based programming techniques, with emphasis on object orientation. Through PhysGramming, young students are familiarized with basic object-oriented programming concepts, such as classes, objects, and attributes, while at the same time getting a view of object-oriented programming syntax. Nevertheless, the most noteworthy feature of PhysGramming is that children create their own digital games within the context of physical science courses, in a way that provides familiarization with the basic principles of object-oriented programming and computational thinking, even though no specific reference is made to these principles. Attuned to the ethical guidelines of educational research, interventions were conducted in two second-grade classes. The interventions were designed with respect to the thematic units of the physical science curriculum, as a part of the learning activities of the class. PhysGramming was integrated into the classroom after short introductory sessions. During the interventions, 6-7 year old children worked in pairs on computers and created their own digital games (group games, matching games, and puzzles). The authors participated in these interventions as observers in order to achieve a realistic evaluation of the proposed educational framework concerning its applicability in the classroom and its educational and pedagogical perspectives. To better examine whether the objectives of the research are met, the investigation focused on six criteria: the educational value of PhysGramming, its engaging and enjoyable characteristics, its child-friendliness, its appropriateness for the purpose proposed, its ability to monitor the user's progress, and its individualizing features. In this paper, the functionality of PhysGramming and the philosophy of its integration in the classroom are both described in detail. Information about the implemented interventions and the results obtained is also provided. Finally, several limitations of the research that deserve attention are noted.
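
PhysGramming's own hybrid visual/text syntax is not reproduced here, but the plain-Python sketch below illustrates the concepts the children encounter, a class, its attributes, and objects instantiated from it, in the spirit of a simple matching game.

```python
# Sketch: a class with attributes, and objects instantiated from it,
# illustrating the object-oriented concepts named above.
class Animal:
    def __init__(self, name, sound):
        self.name = name      # attribute
        self.sound = sound    # attribute

    def matches(self, heard):
        # A matching-game check: does the heard sound belong to this animal?
        return self.sound == heard

# Two objects of the same class, each with its own attribute values
cat = Animal("cat", "meow")
dog = Animal("dog", "woof")
print(cat.matches("meow"), dog.matches("meow"))  # True False
```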

Keywords: computational thinking, early childhood education, object-oriented programming, physical science courses

Procedia PDF Downloads 120
248 Organic Rankine Cycles (ORC) for Mobile Applications: Economic Feasibility in Different Transportation Sectors

Authors: Roberto Pili, Alessandro Romagnoli, Hartmut Spliethoff, Christoph Wieland

Abstract:

Internal combustion engines (ICE) are today the most common energy system for driving vehicles and transportation systems. Numerous studies state that 50-60% of the fuel energy content is lost to the ambient surroundings as sensible heat. The ORC offers a valuable alternative for recovering such waste heat from the ICE, leading to fuel energy savings and reduced emissions. In contrast, the additional weight of the ORC affects the net energy balance of the overall system, and the ORC occupies additional volume that competes with vehicle transportation capacity. Consequently, a lower income from delivered freight or passenger tickets can be achieved. The economic feasibility of integrating an ORC into an ICE and the resulting economic impact of weight and volume have not yet been analyzed in the open literature. This work intends to define such a benchmark for ORC applications in the transportation sector and investigates the current situation on the market. The applied methodology refers to the freight market, but it can be extended to passenger transportation as well. The economic parameter X is defined as the ratio between the variation of the freight revenues and the variation of fuel costs when an ORC is installed as a bottoming cycle for an ICE, with respect to a reference case without ORC. A good economic situation is obtained when the reduction in fuel costs is higher than the reduction of revenues for the delivered freight, i.e. X<1. Through this constraint, a maximum allowable change of transport capacity for a given relative reduction in fuel consumption is determined. The specific fuel consumption is influenced by the ORC in two ways: firstly, because the transportable freight is reduced, and secondly, because the total weight of the vehicle is increased. Note that the electricity generated by the ORC influences the size of the ICE and the fuel consumption as well. Taking the above dependencies into account, the limiting condition X = 1 results in a second-order equation for the relative change in transported cargo. The described procedure is carried out for a typical city bus, a truck of 24-40 t payload capacity, a middle-size freight train (1000 t), an inland water vessel (Va RoRo, 2500 t) and a handysize-like vessel (25000 t). The maximum allowable mass and volume of the ORC are calculated as functions of its efficiency in order to satisfy X < 1. Subsequently, these values are compared with the weight and volume of commercial ORC products. For ships of any size, the situation already appears highly favorable. A different result is obtained for road and rail vehicles. For trains, the mass and volume of common ORC products would have to be reduced by at least 50%. For trucks and buses, the situation looks even worse. The findings of the present study show a theoretical and practical approach for the economic application of ORC in the transportation sector. In future work, the potential for volume and mass reduction of the ORC will be addressed, together with the integration of an economic assessment for the ORC.
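
As a hedged numeric illustration of the benchmark, the sketch below evaluates X as the ratio of lost freight revenue to saved fuel cost and checks the X < 1 feasibility condition; all input figures are invented placeholders, not values from the study.

```python
# Sketch: the economic parameter X defined above,
# X = (reduction in freight revenue) / (reduction in fuel cost),
# with X < 1 the condition for economic feasibility of the ORC retrofit.
def orc_feasibility(revenue_per_t, cargo_lost_t, fuel_cost, rel_fuel_saving):
    d_revenue = revenue_per_t * cargo_lost_t   # revenue lost to reduced cargo
    d_fuel = fuel_cost * rel_fuel_saving       # fuel cost saved by the ORC
    return d_revenue / d_fuel                  # the parameter X

# Placeholder figures for a single trip of a freight vehicle
X = orc_feasibility(revenue_per_t=40.0, cargo_lost_t=5.0,
                    fuel_cost=30000.0, rel_fuel_saving=0.10)
print(f"X = {X:.2f} ->", "feasible" if X < 1 else "not feasible")
```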

Keywords: ORC, transportation, volume, weight

Procedia PDF Downloads 229
247 The Language of Landscape Architecture

Authors: Hosna Pourhashemi

Abstract:

Chahar Bagh, the symbol of the world, displayed around the pool of life in the centre, attempts to emulate Eden. It represents a duality concept based on the division of the universe into two perceptional insights, a celestial and an earthly one. The Chahar Bagh garden pattern refers to the Garden of Eden, which was watered and framed by four main rivers. This microcosm is combined with a mystical love of flowers, sweet-scented trees, a variety of colors, and the sense of eternal life. This symbol of the integration of the spontaneous expressivity of the natural elements and the reasoning awareness of man strives for the ideal of divine perfection. Through collecting and analyzing the data, the prevalence and continuous influence of the Chahar Bagh concept on selected historical gardens was elaborated and evaluated. After the conquest of Persia by the Arabs in the 7th century, Chahar Bagh was adopted and spread throughout the Islamic expansion, from the Middle East westward across northern Africa to Morocco and the Iberian Peninsula, and eastward through Iran to Central Asia and India. Furthermore, its continuity up to the Renaissance period of the mid-16th century is shown. By adapting the semiotic theory of Peirce and Saussure to the Persian garden, Chahar Bagh was defined as the basic pattern language for the garden culture. The hypothesis of the continuous influence of the Chahar Bagh pattern language on today's landscape architecture was examined on selected works of Dieter Kienast, as an important and relevant protagonist of his time (the end of the twentieth century) and up to our time. The Chahar Bagh pattern language offers collective, culturally sensitive healing wisdom transmitted down through the millennia. Through my reflections on Dieter Kienast's works, I transformed my personal experience into a transpersonal understanding grounded in Sufi philosophy and Jungian psychology, which brings me to define new design theories and methods to form a spiritual model, which I call the "Quaternary Perception Model", for landscape architecture. Based on a cognition process of self-journeying in this holistic model, human consciousness can be developed to access "higher" levels of being and embrace unity. The self-purification and mindfulness achieved through transpersonal confrontation in the "Quaternary Perception Model" generate a form of heart-based treatment. I adapted the seven spiritual levels of Sufi self-development to the perception of landscape, representing the stages of the self, ranging from the absolutely self-centered to pure spiritual humanity. I redefine and reread the elements and features of the Chahar Bagh pattern language for today's landscape architecture. The "lost paradise" lies in our heart and can be perceived by all humans in landscapes and cities designed in the spirit of the "Quaternary Model".

Keywords: Persian garden, pattern language of Chahar Bagh, holistic perception, Dieter Kienast, quaternary model

Procedia PDF Downloads 84
246 Kansei Engineering Applied to the Design of Rural Primary Education Classrooms: Design-Based Learning Case

Authors: Jimena Alarcon, Andrea Llorens, Gabriel Hernandez, Maritza Palma, Lucia Navarrete

Abstract:

The research is funded by the Government of Chile and focuses on defining the design of rural primary classrooms that stimulate creativity. The relevance of the study lies in its capacity to define adequate educational spaces for the implementation of the design-based learning (DBL) methodology. This methodology promotes creativity and teamwork, generating a meaningful learning experience for students based on the appreciation of their environment and the generation of projects that contribute positively to their communities; it is also an inquiry-based form of learning, built on the integration of design thinking and the design process into the classroom. The main goal of the study is to define the design characteristics of rural primary school classrooms associated with the implementation of the DBL methodology. Along with the change in learning strategies, it is necessary to change the educational spaces in which they develop. The hypothesis is that a change in the space and equipment of the classrooms, based on the emotions of the students, will motivate better learning results under the new methodology. In this case, the pedagogical dynamics require substantial interaction between the participants, as well as an environment favorable to creativity. Methodologies from Kansei engineering are used to identify the emotional variables associated with their definition. The study was conducted with 50 students between 6 and 10 years old (average age of seven years), 48% male and 52% female. Virtual three-dimensional scale models and semantic differential tables were used. To define the semantic differential, self-applied surveys were carried out. Each survey consists of eight separate questions in two groups: group A to identify desirable emotions and group B related to emotions experienced. Both question groups have a maximum of three alternative answers. Data were tabulated with IBM SPSS Statistics version 19. Terms referring to emotions were grouped into the twenty concepts with the highest presence in the surveys. To select among the values obtained from the semantic differential, the expected frequency N from a chi-square (χ²) test calculated for the classroom space was taken as the lower limit; all terms above this expected N (cut-off point) were included in the tables used to relate emotion and space. The statistical contrast (chi-square) was significant, indicating that the observed frequencies did not appear at random. The most representative terms depend on the variable under study: a) the definition of textures and color of vertical surfaces is associated with emotions such as tranquility, attention, concentration and creativity; and b) the distribution of the equipment in the rooms is associated with happiness, distraction, creativity and freedom. The main findings are linked to the generation of classrooms according to diverse DBL team dynamics. Kansei engineering proved to be an appropriate methodology for identifying the emotions that students want to feel in the classroom space.
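
A minimal sketch of the chi-square screening step described above is given below; the emotion terms and their counts are hypothetical, not the study's survey data.

```python
import numpy as np
from scipy.stats import chisquare

# Hypothetical emotion-term counts from a semantic-differential survey.
terms = ["tranquility", "attention", "concentration", "creativity", "freedom"]
observed = np.array([18, 14, 12, 22, 9])   # mentions per term (assumed)
expected = np.full_like(observed, observed.sum() / len(observed), dtype=float)

# Goodness-of-fit test: a small p-value indicates the counts are not random.
stat, p = chisquare(observed, expected)
print(f"chi2 = {stat:.2f}, p = {p:.4f}")

# Keep only terms whose observed frequency exceeds the expected cut-off N.
selected = [t for t, o in zip(terms, observed) if o > expected[0]]
print("terms retained for the emotion-space tables:", selected)
```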

Keywords: creativity, design-based learning, education spaces, emotions

Procedia PDF Downloads 142
245 The Monitor for Neutron Dose in Hadrontherapy Project: Secondary Neutron Measurement in Particle Therapy

Authors: V. Giacometti, R. Mirabelli, V. Patera, D. Pinci, A. Sarti, A. Sciubba, G. Traini, M. Marafini

Abstract:

Particle therapy (PT) is a very modern technique of non-invasive radiotherapy, mainly devoted to the treatment of tumours untreatable with surgery or conventional radiotherapy because they are located close to organs at risk (OaR). Nowadays, PT is available in about 55 centres in the world, and only about 20% of them are able to treat with carbon ion beams. However, the efficiency of ion-beam treatments is so impressive that many new centres are under construction. The interest in this powerful technology lies in the main characteristic of PT: the high irradiation precision and the conformity of the dose released to the tumour, with simultaneous preservation of the adjacent healthy tissue. However, the beam interactions with the patient produce a large component of secondary particles whose additional dose has to be taken into account during the definition of the treatment planning. Although the largest fraction of the dose is released in the tumour volume, a non-negligible amount is deposited in other body regions, mainly due to the scattering and nuclear interactions of neutrons within the patient's body. One of the main concerns in PT treatments is the possible occurrence of secondary malignant neoplasms (SMN). While SMNs may develop up to decades after the treatment, their incidence directly impacts the quality of life of cancer survivors, particularly pediatric patients. Dedicated Treatment Planning Systems (TPS) are used to predict normal tissue toxicity, including the risk of late complications induced by the additional dose released by secondary neutrons. However, no precise measurement of the secondary neutron flux is available, nor of its energy and angular distributions: an accurate characterization is needed in order to improve TPS and reduce safety margins. The MONDO project (MOnitor for Neutron Dose in hadrOntherapy) is devoted to the construction of a secondary neutron tracker tailored to the characterization of this secondary neutron component. The detector, based on the tracking of the recoil protons produced in double elastic scattering interactions, is a matrix of thin scintillating fibres arranged in alternating x-y oriented layers. The final size of the device is 10 x 10 x 20 cm³ (square 250 µm scintillating fibres, double cladding). The readout of the fibres is carried out with a dedicated SPAD Array Sensor (SBAM) realised in CMOS technology by FBK (Fondazione Bruno Kessler). The detector is under development, as is the SBAM sensor, and it is expected to be fully constructed by the end of the year. MONDO will carry out data taking campaigns at the TIFPA Proton Therapy Center in Trento, at CNAO (Pavia) and at HIT (Heidelberg) with carbon ions, in order to characterize the neutron component, predict the additional dose delivered to patients with much greater precision and drastically reduce the current safety margins. Preliminary measurements with charged particle beams and Monte Carlo FLUKA simulations will be presented.
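
As background to the recoil-proton tracking, the sketch below evaluates the elementary n-p elastic scattering relation (E_p = E_n cos²θ, non-relativistic) from which a measured recoil energy and angle give the incoming neutron energy; the numbers are illustrative and are not MONDO specifications.

```python
import numpy as np

# Single n-p elastic-scattering kinematics underlying track-based neutron
# reconstruction (non-relativistic approximation; illustrative only).

def neutron_energy_from_recoil(E_proton_MeV, theta_p_rad):
    """For elastic n-p scattering, E_p = E_n * cos^2(theta_p), so the
    incoming neutron kinetic energy follows from one measured recoil."""
    return E_proton_MeV / np.cos(theta_p_rad) ** 2

# Example: a 40 MeV recoil proton emitted 30 degrees off the neutron direction
print(f"E_n ~ {neutron_energy_from_recoil(40.0, np.radians(30)):.1f} MeV")
```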

Keywords: secondary neutrons, particle therapy, tracking detector, elastic scattering

Procedia PDF Downloads 224
244 Technology and the Need for Integration in Public Education

Authors: Eric Morettin

Abstract:

Cybersecurity and digital literacy are pressing issues among Canadian citizens, yet formal education does not provide today's students with the knowledge and skills needed to adapt to these challenging issues within the physical and digital labor market. Canada's current education systems do not highlight the importance of these respective fields, aside from using technology for learning management systems and alternative methods of assignment completion. Educators are not properly trained to integrate technology into the compulsory courses within public education so as to better prepare their learners for these topics and Canada's digital economy. ICTC addresses these gaps in education and training through cross-Canadian educational programming in digital literacy and competency, cybersecurity and coding, which is bridged with Canada's provincially regulated K-12 curriculum guidelines. After analyzing Canada's provincial education, it is apparent that there are gaps in learning related to technology, as well as inconsistent educational outcomes that do not adequately represent the current Canadian and global economies. Presently, only New Brunswick, Nova Scotia, Ontario, and British Columbia offer curriculum guidelines for cybersecurity, computer programming, and digital literacy. The remaining provinces do not address these skills in their curriculum guidelines; moreover, certain courses in some provinces have not been updated since the 1990s. The three territories take curriculum strands from other provinces and use them as their foundation in education: Yukon uses the British Columbia curriculum in its entirety, while the Northwest Territories and Nunavut each use a hybrid of the Alberta and Saskatchewan curricula as their foundation of learning. Provincially regulated education does not allow for consistency across the country's educational outcomes and what Canada's students will achieve, especially when curriculum outcomes have not been updated to reflect present-day society. Through this work, ICTC has aligned Canada's provincially regulated curricula and created opportunities for focused education in the realm of technology to better serve Canada's present learners and teachers, while addressing inequalities and applicability within curriculum strands and outcomes across the country. As a result, lessons, units, and formal assessment strategies have been created to benefit students and teachers in this interdisciplinary, cross-curricular practice, as well as to meet their compulsory education requirements and develop skills and literacy in cyber education. Teachers can access these lessons and units through ICTC's website, as well as receive professional development regarding the assessment and implementation of these offerings from ICTC's education coordinators, whose combined experience exceeds 50 years of teaching in public, private, international, and Indigenous schools. We encourage you to take this opportunity that will benefit students and educators, and will bridge the learning and curriculum gaps in Canadian education to better reflect the ever-changing public, social, and career landscape that all citizens are a part of. Students are the future, and we at ICTC strive to ensure their futures are bright and prosperous.

Keywords: cybersecurity, education, curriculum, teachers

Procedia PDF Downloads 83
243 Regularized Euler Equations for Incompressible Two-Phase Flow Simulations

Authors: Teng Li, Kamran Mohseni

Abstract:

This paper presents an inviscid regularization technique for incompressible two-phase flow simulations. The technique is known as the observable method, after the notion of observability: any feature smaller than the actual resolution (physical or numerical), e.g., the size of the wire in hotwire anemometry or the grid size in numerical simulations, cannot be captured or observed. Unlike most regularization techniques, which are applied at the level of the numerical discretization, the observable method is employed at the PDE level, during the derivation of the equations. Difficulties in the simulation and analysis of realistic fluid flow often result from discontinuities (or near-discontinuities) in the calculated fluid properties or state. Accurately capturing these discontinuities is especially crucial when simulating flows involving shocks, turbulence or sharp interfaces. Over the past several years, the properties of this new regularization technique have been investigated, showing its capability to simultaneously regularize shocks and turbulence. The observable method has been applied in direct numerical simulations of shocks and turbulence, where the discontinuities are successfully regularized and the flow features are well captured. In the current paper, the observable method is extended to two-phase interfacial flows. Multiphase flows share a similar feature with shocks and turbulence, namely the nonlinear irregularity caused by the nonlinear terms in the governing equations, i.e., the Euler equations. In direct numerical simulations of two-phase flows, the interfaces are usually treated as a smooth transition of the properties from one fluid phase to the other. However, in high Reynolds number or low viscosity flows, the nonlinear terms generate smaller scales which sharpen the interface, causing discontinuities. Many numerical methods for two-phase flows fail in the high Reynolds number regime, while some others depend on the numerical diffusion introduced by the spatial discretization. The observable method regularizes this nonlinear mechanism by filtering the convective terms, and this process is inviscid. The filtering effect is controlled by an observable scale, which is usually of the order of a grid length. A single rising bubble and the Rayleigh-Taylor instability are studied in particular to examine the performance of the observable method. A pseudo-spectral method, which introduces no numerical diffusion, is used for the spatial discretization, and a Total Variation Diminishing (TVD) Runge-Kutta method is applied for the time integration. The observable incompressible Euler equations are solved for these two problems. In the rising bubble problem, the terminal velocity and shape of the bubble are examined and compared with experiments and other numerical results. In the Rayleigh-Taylor instability, the shape of the interface is studied for different observable scales, and the spike and bubble velocities, as well as their positions (under a proper observable scale), are compared with other simulation results. The results indicate that this regularization technique can potentially regularize the sharp interface in two-phase flow simulations.
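
To make the filtering idea concrete, the sketch below applies a Helmholtz low-pass filter at an observable scale to the convective velocity of a 1D inviscid Burgers equation, advanced with a pseudo-spectral discretization and third-order TVD Runge-Kutta time stepping. The 1D model equation, the filter choice and all parameters are illustrative assumptions; the paper itself treats the incompressible two-phase Euler equations.

```python
import numpy as np

# 1D stand-in for the observable method: filter the convective velocity at an
# observable scale alpha before forming the nonlinear term.
N, L, dt = 256, 2 * np.pi, 1e-3
alpha = 2 * L / N                              # observable scale ~ 2 grid lengths
x = np.linspace(0, L, N, endpoint=False)
k = np.fft.fftfreq(N, d=L / N) * 2 * np.pi     # wavenumbers
u = np.sin(x)                                  # initial condition

def helmholtz_filter(f):
    """u_bar = (1 - alpha^2 d^2/dx^2)^{-1} u, applied in Fourier space."""
    return np.real(np.fft.ifft(np.fft.fft(f) / (1 + (alpha * k) ** 2)))

def rhs(u):
    u_bar = helmholtz_filter(u)                # observable (filtered) velocity
    ux = np.real(np.fft.ifft(1j * k * np.fft.fft(u)))
    return -u_bar * ux                         # filtered convective term

for _ in range(2000):                          # 3rd-order TVD Runge-Kutta (Shu-Osher)
    u1 = u + dt * rhs(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * rhs(u1))
    u = u / 3 + 2 / 3 * (u2 + dt * rhs(u2))
```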

Keywords: Euler equations, incompressible flow simulation, inviscid regularization technique, two-phase flow

Procedia PDF Downloads 502
242 Statistical Analysis to Compare between Smart City and Traditional Housing

Authors: Taha Anjamrooz, Sareh Rajabi, Ayman Alzaatreh

Abstract:

Smart cities play an increasingly important role in real life. Integration and automation between the different features of modern cities and information technologies improve smart city efficiency, energy management, human and equipment resource management, quality of life and utilization of resources for the customers. One of the difficulties on this path is the use of, interfacing with and linking between software, hardware and other IT technologies to develop and optimize processes in various business fields such as construction, supply chain management and transportation, in parallel with cost-effectiveness and resource-reduction impacts. Smart cities are also intended to play a vital role in offering a sustainable and efficient model for smart houses while mitigating environmental and ecological matters. Energy management is one of the most important matters for smart houses within smart cities and communities, because of the sensitivity of energy systems and the need to reduce energy wastage and maximize the utilization of the required energy. In particular, the energy consumption of smart houses is a considerable factor in the economic balance and energy management of a smart city, as it can yield a significant increase in energy saving and a reduction in energy wastage. This research paper develops the features and concept of the smart city in terms of overall efficiency through various effective variables. The selected variables and observations are analyzed through data analysis processes to demonstrate the efficiency of the smart city and compare the effectiveness of each variable. Ten variables were chosen in this study to improve the overall efficiency of the smart city: firstly, by increasing the effectiveness of smart houses using an automated solar photovoltaic system, an RFID system, smart meters and other major elements, interfacing between software and hardware devices as well as IT technologies; and secondly, by enhancing energy management through energy saving within the smart house via efficient variables. The main objective of the smart city and smart houses is to reduce energy consumption and increase efficiency through the selected variables, with a comfortable and harmless atmosphere for the customers, in combination with control over the energy consumption of the smart house using developed IT technologies. Initially, a comparison between traditional housing and smart city samples is conducted to indicate the more efficient system. Moreover, the main variables involved in measuring the overall efficiency of the system are analyzed through various processes to identify and prioritize the variables according to their influence on the model. The result analysis of this model can be used for comparison and benchmarking with the traditional lifestyle to demonstrate the privileges of smart cities. Furthermore, given the expected shortage of expensive natural resources in the near future, the insufficiency of developed research studies in the region, and the available potential due to the climate and governmental vision, the results and analysis of this study can be used as a key indicator to select the most effective variables or devices during the construction and design phases.
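
A minimal sketch of the smart-versus-traditional comparison is given below, using a Welch t-test on synthetic monthly energy consumption samples; all values are illustrative assumptions, not the study's data.

```python
import numpy as np
from scipy import stats

# Synthetic monthly energy use for smart-house and traditional-housing samples.
rng = np.random.default_rng(0)
smart = rng.normal(loc=310, scale=40, size=30)        # kWh/month (assumed)
traditional = rng.normal(loc=380, scale=55, size=30)  # kWh/month (assumed)

# Welch's t-test: does the smart-house sample use significantly less energy?
t, p = stats.ttest_ind(smart, traditional, equal_var=False)
print(f"mean saving: {traditional.mean() - smart.mean():.1f} kWh, p = {p:.3g}")
```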

Keywords: smart city, traditional housing, RFID, photovoltaic system, energy efficiency, energy saving

Procedia PDF Downloads 113
241 A Smart Sensor Network Approach Using Affordable River Water Level Sensors

Authors: Dian Zhang, Brendan Heery, Maria O’Neill, Ciprian Briciu-Burghina, Noel E. O’Connor, Fiona Regan

Abstract:

Recent developments in sensors, wireless data communication and cloud computing have brought the sensor web to a whole new generation. The introduction of the concept of the 'Internet of Things (IoT)' has taken sensor research to a new level, which involves developing long-lasting, low-cost, environmentally friendly and smart sensors; new wireless data communication technologies; and big data analytics algorithms and cloud-based solutions tailored to large-scale smart sensor networks. The next generation of smart sensor networks consists of several layers: the physical layer, where all the smart sensors reside and data pre-processing occurs, either on the sensor itself or on a field gateway; the data transmission layer, where data and instruction exchanges happen; and the data processing layer, where meaningful information is extracted and organized from the pre-processed data stream. There are many definitions of a smart sensor; however, to summarize them all, a smart sensor must be intelligent and adaptable. In future large-scale sensor networks, the collected data will be far too large for traditional applications to send, store or process, so the sensor unit must be intelligent enough to pre-process the collected data locally on board (this process may occur on the field gateway, depending on the sensor network structure). In this case study, three smart sensing methods, corresponding to simple thresholding, a statistical model and the machine-learning-based MoPBAS method, are introduced, and their strengths and weaknesses are discussed as an introduction to the smart sensing concept. Data fusion, the integration of data and knowledge from multiple sources, is another key component of the next generation smart sensor network. For example, in a water level monitoring system, weather forecasts can be retrieved from external sources, and if heavy rainfall is expected, the server can send instructions to the sensor nodes to, for instance, increase the sampling rate, or conversely switch on sleeping mode. In this paper, we describe the deployment of 11 affordable water level sensors in the Dublin catchment. The objective of this paper is to use the deployed river level sensor network in the Dodder catchment in Dublin, Ireland as a case study to give a vision of the next generation of smart sensor networks for flood monitoring, to assist agencies in making decisions about deploying resources in the case of a severe flood event. Some of the deployed sensors are located alongside traditional water level sensors for validation purposes. Using the 11 deployed river level sensors in a network as a case study, a vision of the next generation of smart sensor networks is proposed. Each key component of the smart sensor network is discussed, which will hopefully inspire researchers working in the sensor research domain.
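
As an illustration of the simple-thresholding mode, the sketch below flags a water level reading that deviates strongly from the recent on-board window and raises the node's sampling rate; the readings, window size and sampling intervals are illustrative assumptions, not the deployed system's configuration.

```python
import statistics

# Minimal on-board decision logic for a smart water level sensor node.
NORMAL_PERIOD_S, ALERT_PERIOD_S = 900, 60     # sampling intervals (assumed)

def on_board_decision(levels_cm, window=12, k=3.0):
    """Flag the latest reading if it deviates k sigmas from the recent
    window, and ask the node to raise its sampling rate if so."""
    recent = levels_cm[-window:]
    mu, sigma = statistics.mean(recent), statistics.pstdev(recent)
    latest = levels_cm[-1]
    anomalous = sigma > 0 and abs(latest - mu) > k * sigma
    return ("send+raise_rate", ALERT_PERIOD_S) if anomalous else ("store", NORMAL_PERIOD_S)

readings = [102, 101, 103, 102, 104, 103, 102, 105, 103, 104, 102, 147]
print(on_board_decision(readings))            # -> ('send+raise_rate', 60)
```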

Keywords: smart sensing, internet of things, water level sensor, flooding

Procedia PDF Downloads 383
240 Surface Adjustments for Endothelialization of Decellularized Porcine Pericardium

Authors: M. Markova, E. Filova, O. Kaplan, R. Matejka, L. Bacakova

Abstract:

Porcine pericardium is used as a material for cardiac and aortic valve substitutes. Current biological aortic heart valve prostheses have a limited lifetime because they undergo degeneration. In order to make them more biocompatible and prolong their lifetime, it is necessary to reseed the decellularized prostheses with endothelial cells and with valve interstitial cells. The endothelialization of the prosthesis surface may be supported by a suitable chemical surface modification of the prosthesis. The aim of this study is to prepare bioactive fibrin layers which would both support the endothelialization of porcine pericardium and enhance the differentiation and maturation of the seeded endothelial cells. As materials for the surface adjustments, we used layers of fibrin with/without heparin, some of them with adsorbed or chemically bound FGF2, VEGF or their combination. Fibrin assemblies were prepared in a 24-well cell culture plate and were seeded with HSVEC (Human Saphenous Vein Endothelial Cells) at a density of 20,000 cells per well in EGM-2 medium with 0.5% FS and without heparin, FGF2 or VEGF; the medium was supplemented with aprotinin (200 U/mL). As a control surface, polystyrene (PS) was used. Fibrin was also used as a homogeneous impregnation of the decellularized porcine pericardium throughout the scaffolds. The morphology, density and viability of the seeded endothelial cells were observed from micrographs after staining the samples with a LIVE/DEAD cytotoxicity/viability assay kit on days 1, 3 and 7. Endothelial cells were immunocytochemically stained for proteins involved in cell adhesion, i.e. alphaV integrin, vinculin and VE-cadherin; for markers of endothelial cell differentiation and maturation, i.e. von Willebrand factor and CD31; and for extracellular matrix proteins typically produced by endothelial cells, i.e. type IV collagen and laminin. The staining intensities were subsequently quantified using software. HSVEC cells grew better on each of the prepared surfaces than on the control surface, and they reached confluency. The highest cell densities were obtained on the surface of fibrin with heparin and both growth factors used together. The intensity of alphaV integrin staining was highest on samples with a remaining fibrin layer, i.e. on layers with lower cell densities, i.e. on fibrin without heparin. Vinculin staining was apparent, but rather diffuse, on fibrin with both FGF2 and VEGF and on control PS. Endothelial cells on all samples were positively stained for von Willebrand factor and CD31. VE-cadherin receptor clusters were best developed on fibrin with heparin and growth factors. Significantly stronger staining of type IV collagen was observed on fibrin with heparin and both growth factors. Endothelial cells on all samples produced laminin-1. The decellularized pericardium was homogeneously filled with fibrin structures. These fibrin-modified pericardium samples will be further seeded with cells and cultured in a bioreactor. Fibrin layers with/without heparin and with adsorbed or chemically bound FGF2, VEGF or their combination are good surfaces for the endothelialization of cardiovascular prostheses or porcine pericardium-based heart valves. Supported by the Ministry of Health, grants No. 15-29153A and 15-32497A, and the Grant Agency of the Czech Republic, project No. P108/12/G108.

Keywords: aortic valves prosthesis, FGF2, heparin, HSVEC cells, VEGF

Procedia PDF Downloads 266
239 Developing a Product Circularity Index with an Emphasis on Longevity, Repairability, and Material Efficiency

Authors: Lina Psarra, Manogj Sundaresan, Purjeet Sutar

Abstract:

In response to the global imperative for sustainable solutions, this article proposes the development of a comprehensive circularity index applicable to a wide range of products across various industries. The absence of a consensus on a universal metric for assessing circularity performance presents a significant challenge in prioritizing and effectively managing sustainable initiatives. The proposed circularity index serves as a quantitative measure for evaluating the adherence of products, processes and systems to the principles of a circular economy. Unlike traditional, isolated metrics such as recycling rates or material efficiency, this index considers the entire lifecycle of a product in one single metric, also incorporating additional factors such as reusability, scarcity of materials, repairability and recyclability. Through a systematic approach, and by reviewing existing metrics and past methodologies, this work aims to address this gap by formulating a circularity index that can be applied to diverse product portfolios and assist in comparing the circularity of products on a scale of 0%-100%. The project objectives include developing a formula, designing and implementing a pilot tool based on the developed Product Circularity Index (PCI), evaluating the effectiveness of the formula and tool using real product data, and assessing the feasibility of integration into various sustainability initiatives. The research methodology involves an iterative process of comprehensive research, analysis and refinement, whose key steps include defining circularity parameters, collecting relevant product data, applying the developed formula, and testing the tool in a pilot phase to gather insights and make the necessary adjustments. The major findings of the study indicate that the PCI provides a robust framework for evaluating product circularity across various dimensions. The Excel-based pilot tool demonstrated high accuracy and reliability in measuring circularity, and the database proved instrumental in supporting comprehensive assessments. The PCI facilitated the identification of key areas for improvement, enabling more informed decision-making towards circularity and benchmarking across different products, essentially assisting towards better resource management. In conclusion, the development of the Product Circularity Index represents a significant advancement in global sustainability efforts. By providing a standardized metric, the PCI empowers companies and stakeholders to systematically assess product circularity, track progress, identify improvement areas, and make informed decisions about resource management. This project contributes to the broader discourse on sustainable development by offering a practical approach to enhancing circularity within industrial systems, thus paving the way towards a more resilient and sustainable future.
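
Since the abstract does not disclose the exact formula, the sketch below illustrates one plausible form of such an index: a weighted average of normalised lifecycle factors mapped to a 0%-100% scale. The factor set and weights are assumptions for illustration only, not the authors' PCI.

```python
# Hypothetical weighted product circularity index on a 0-100% scale.
WEIGHTS = {            # per-factor weights, summing to 1.0 (assumed)
    "recycled_content": 0.15, "reusability": 0.20, "repairability": 0.20,
    "recyclability": 0.15, "longevity": 0.20, "material_scarcity": 0.10,
}

def pci(scores: dict) -> float:
    """Weighted average of per-factor scores, each normalised to [0, 1]."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9
    return 100 * sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

example = {"recycled_content": 0.4, "reusability": 0.6, "repairability": 0.7,
           "recyclability": 0.5, "longevity": 0.8, "material_scarcity": 0.3}
print(f"PCI = {pci(example):.1f}%")   # -> PCI = 58.5%
```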

Keywords: circular economy, circular metrics, circularity assessment, circularity tool, sustainable product design, product circularity index

Procedia PDF Downloads 30
238 Numerical Analyses of Dynamics of Deployment of PW-Sat2 Deorbit Sail Compared with Results of Experiment under Micro-Gravity and Low Pressure Conditions

Authors: P. Brunne, K. Ciechowska, K. Gajc, K. Gawin, M. Gawin, M. Kania, J. Kindracki, Z. Kusznierewicz, D. Pączkowska, F. Perczyński, K. Pilarski, D. Rafało, E. Ryszawa, M. Sobiecki, I. Uwarowa

Abstract:

The large amount of space debris nowadays constitutes a real threat to operating spacecraft; therefore, the main purpose of the PW-Sat2 team was to create a system that could help cleanse the Earth's orbit after each small satellite mission. After 4 years of development, a motorless, low-energy-consumption and low-weight system has been created. During a series of tests, the system has shown reliable efficiency. The PW-Sat2 deorbit system is a square-shaped sail which covers an area of 4 m². The sail surface is made of 6 µm aluminized Mylar film which is stretched across 4 diagonally placed arms, each consisting of two C-shaped flat springs enveloped in Mylar sleeves. The sail is coiled using a special, custom-designed folding stand that provides automation and repeatability of the sail unwinding tests, and placed in a container with an inner diameter of 85 mm. In the final configuration, the deorbit system weighs ca. 600 g and occupies 0.6U (in accordance with the CubeSat standard). The sail's release system requires a minimal amount of power, being based on a thermal knife that burns through the Dyneema wire which holds the system before deployment. The sail is pushed out of the container to a safe distance (20 cm) from the satellite. The energy for the deployment is provided entirely by the coiled C-shaped flat springs which, during the release, unfold the sail surface. To avoid dynamic effects on the satellite's structure, there is a rotational link between the sail and the satellite's main body. To obtain complete knowledge of the complex dynamics of the deployment, a number of experiments have been performed in varied environments. A numerical model of the dynamics of the sail's deployment has been built and is still under continuous development. Currently, the integration of the flight model and the deorbit sail is being performed. The launch is scheduled for February 2018. At the same time, in cooperation with the United Nations Office for Outer Space Affairs, sail models and the requested facilities are being prepared for the sail deployment experiment under micro-gravity and low pressure conditions at the Bremen Drop Tower, Germany. The results of those tests will provide comprehensive knowledge of deployment in the space environment to which the system will be exposed during its mission. The outcomes of the numerical model and the tests will afterwards be compared, and will help the team build a reliable and correct model of the very complex phenomenon of the deployment of four C-shaped flat springs with an attached surface. The verified model could be used, inter alia, to investigate whether the PW-Sat2 sail is scalable and how far the design can be enlarged when creating systems for bigger satellites.

Keywords: cubesat, deorbitation, sail, space, debris

Procedia PDF Downloads 292
237 Detection of Triclosan in Water Based on Nanostructured Thin Films

Authors: G. Magalhães-Mota, C. Magro, S. Sério, E. Mateus, P. A. Ribeiro, A. B. Ribeiro, M. Raposo

Abstract:

Triclosan [5-chloro-2-(2,4-dichlorophenoxy)phenol], belonging to the class of Pharmaceuticals and Personal Care Products (PPCPs), is a broad-spectrum antimicrobial agent and bactericide. Because of its antimicrobial efficacy, it is widely used in personal health and skin care products, such as soaps, detergents, hand cleansers, cosmetics, toothpastes, etc. However, it has been considered to disrupt the endocrine system, for instance, thyroid hormone homeostasis and possibly the reproductive system. Considering the widespread use of triclosan, it is expected that environmental and food safety problems regarding triclosan will increase dramatically. Triclosan has been found in river water samples in both North America and Europe and is likely widely distributed wherever triclosan-containing products are used. Although significant amounts are removed in sewage plants, considerable quantities remain in the sewage effluent, initiating widespread environmental contamination. Triclosan undergoes bioconversion to methyl-triclosan, which has been demonstrated to bioaccumulate in fish. In addition, triclosan has been found in human urine samples from persons with no known industrial exposure, and in significant amounts in samples of mother's milk, demonstrating its presence in humans. The action of sunlight on river water is known to turn triclosan into dioxin derivatives, raising the possibility of pharmacological dangers not envisioned when the compound was originally utilized. The aim of this work is to detect low concentrations of triclosan in an aqueous complex matrix through the use of a sensor array system, following the electronic tongue concept based on impedance spectroscopy. To achieve this goal, we selected molecules with a high affinity for triclosan and a sensitivity that ensures the detection of concentrations down to at least the nanomolar range. Thin films of organic molecules and oxides were produced by the layer-by-layer (LbL) technique and by sputtering onto glass solid supports already covered with gold interdigitated electrodes. By submerging the films in complex aqueous solutions with different concentrations of triclosan, resistance and capacitance values were obtained at different frequencies. The preliminary results showed that an array of interdigitated electrode sensors, coated or uncoated with different LbL films, can be used to detect TCS traces in aqueous solutions over a wide concentration range, from 10⁻¹² to 10⁻⁶ M. The PCA method was applied to the measured data in order to differentiate the solutions with different concentrations of TCS. Moreover, it was possible to plot the logarithm of the resistance versus the logarithm of the concentration and fit the data points with a decreasing straight line with a slope of 0.022 ± 0.006, which corresponds to the best sensitivity of our sensor. To find the sensor resolution near the smallest concentration used (Cs = 1 pM), note that the smallest change in the logarithm of resistance that can be measured with resolution is 0.006, so ∆logC = 0.006/0.022 = 0.273 and, therefore, C − Cs ≈ 0.9 pM. This leads to a sensor resolution of 0.9 pM near the smallest concentration used, 1 pM. This detection limit is lower than the values reported in the literature.
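
The resolution estimate quoted above can be reproduced directly from the fitted slope, as in this short computation.

```python
# Worked version of the resolution estimate: with a log-log calibration of
# slope m = 0.022 and a log-resistance resolution of 0.006, the smallest
# resolvable step in log-concentration near Cs = 1 pM follows directly.
dm, m = 0.006, 0.022
dlogC = dm / m                       # ~0.273 decades
Cs = 1e-12                           # smallest concentration used, 1 pM
C = Cs * 10 ** dlogC                 # nearest distinguishable concentration
print(f"dlogC = {dlogC:.3f}, resolution ~ {(C - Cs) * 1e12:.2f} pM")  # ~0.9 pM
```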

Keywords: triclosan, layer-by-layer, impedance spectroscopy, electronic tongue

Procedia PDF Downloads 253
236 Effective Affordable Housing Finance in Developing Economies: An Integration of Demand and Supply Solutions

Authors: Timothy Akinwande, Eddie Hui, Karien Dekker

Abstract:

Housing the urban poor remains a persistent challenge, despite evident research attention over many years. It is, therefore, pertinent to investigate affordable housing provision challenges with novel approaches. For innovative solutions to affordable housing constraints, it is apposite to thoroughly examine housing solutions vis-à-vis the key elements of the housing supply value chain (HSVC), which are housing finance, housing construction and land acquisition. A pragmatic analysis will examine affordable housing solutions from demand and supply perspectives to arrive at consolidated solutions from bilateral viewpoints. This study thoroughly examined the informal housing finance strategies of the urban poor and diligently investigated expert opinion on affordable housing finance solutions. The research questions were: (1) What mutual grounds exist between the informal housing finance solutions of the urban poor and housing experts' solutions to affordable housing finance constraints in developing economies? (2) What are effective approaches to affordable housing finance in developing economies from an integrated demand-supply perspective? Semi-structured interviews were conducted in the 5 largest slums of Lagos, Nigeria, with 40 informal settlers for demand-oriented solutions, while a focus group discussion and in-depth interviews were conducted with 12 housing experts in Nigeria for supply-oriented solutions. Following rigorous thematic, content and descriptive analyses of the data using NVivo and Excel, the findings ascertained mutual solutions from both demand and supply standpoints that can be consolidated into more effective affordable housing finance solutions in Nigeria. Deliberate finance models that recognise and include the finance realities of the urban poor were found to be the most significant supply-side housing finance solution, representing 25.4% of total expert responses. The findings also show that 100% of the sampled urban poor engage in vocations where they earn little irregular income or no income, limiting their housing finance capacity and creditworthiness. The survey revealed that the urban poor engage in community savings and employ microfinance institutions within the informal settlements to tackle their housing finance predicaments. These informal finance models of the urban poor reveal common ground between demand and supply solutions for affordable housing financing. An effective affordable housing approach would be to modify, institutionalise and incorporate the informal finance strategies of the urban poor into deliberate government policies. This consolidation of solutions from demand and supply perspectives can eliminate the persistent misalliance between affordable housing demand and affordable housing supply. This study provides insights into mutual housing solutions from demand and supply perspectives, and the findings are informative for effective affordable housing provision approaches in developing countries. The study is novel in consolidating affordable housing solutions from demand and supply viewpoints, especially in relation to housing finance as a key component of the HSVC. The framework for effective affordable housing finance in developing economies from a consolidated viewpoint generated in this study is significant for the achievement of the sustainable development goals, especially Goal 11 on sustainable, resilient and inclusive cities. The findings are vital for future housing studies.

Keywords: affordable housing, affordable housing finance, developing economies, effective affordable housing, housing policy, urban poor, sustainable development goal, sustainable affordable housing

Procedia PDF Downloads 71
235 Revolutionizing Accounting: Unleashing the Power of Artificial Intelligence

Authors: Sogand Barghi

Abstract:

The integration of artificial intelligence (AI) in accounting practices is reshaping the landscape of financial management. This paper explores the innovative applications of AI in the realm of accounting, emphasizing its transformative impact on efficiency, accuracy, decision-making, and financial insights. By harnessing AI's capabilities in data analysis, pattern recognition, and automation, accounting professionals can redefine their roles, elevate strategic decision-making, and unlock unparalleled value for businesses. This paper delves into AI-driven solutions such as automated data entry, fraud detection, predictive analytics, and intelligent financial reporting, highlighting their potential to revolutionize the accounting profession. Artificial intelligence has swiftly emerged as a game-changer across industries, and accounting is no exception. This paper seeks to illuminate the profound ways in which AI is reshaping accounting practices, transcending conventional boundaries, and propelling the profession toward a new era of efficiency and insight-driven decision-making. One of the most impactful applications of AI in accounting is automation. Tasks that were once labor-intensive and time-consuming, such as data entry and reconciliation, can now be streamlined through AI-driven algorithms. This not only reduces the risk of errors but also allows accountants to allocate their valuable time to more strategic and analytical tasks. AI's ability to analyze vast amounts of data in real time enables it to detect irregularities and anomalies that might go unnoticed by traditional methods. Fraud detection algorithms can continuously monitor financial transactions, flagging any suspicious patterns and thereby bolstering financial security. AI-driven predictive analytics can forecast future financial trends based on historical data and market variables. This empowers organizations to make informed decisions, optimize resource allocation, and develop proactive strategies that enhance profitability and sustainability. Traditional financial reporting often involves extensive manual effort and data manipulation. With AI, reporting becomes more intelligent and intuitive. Automated report generation not only saves time but also ensures accuracy and consistency in financial statements. While the potential benefits of AI in accounting are undeniable, there are challenges to address. Data privacy and security concerns, the need for continuous learning to keep up with evolving AI technologies, and potential biases within algorithms demand careful attention. The convergence of AI and accounting marks a pivotal juncture in the evolution of financial management. By harnessing the capabilities of AI, accounting professionals can transcend routine tasks, becoming strategic advisors and data-driven decision-makers. The applications discussed in this paper underline the transformative power of AI, setting the stage for an accounting landscape that is smarter, more efficient, and more insightful than ever before. The future of accounting is here, and it's driven by artificial intelligence.
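
As an illustration of the anomaly-based fraud screening described above, the sketch below flags outlying ledger entries with an Isolation Forest; the model choice and the synthetic transaction amounts are assumptions for illustration, not a method from the paper.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Synthetic ledger: mostly typical entries plus two injected outliers.
rng = np.random.default_rng(1)
amounts = rng.lognormal(mean=4.0, sigma=0.5, size=500)     # typical entries
amounts = np.append(amounts, [5_000.0, 7_200.0])           # suspicious entries
X = amounts.reshape(-1, 1)

# Unsupervised anomaly detector; predict() returns -1 for flagged entries.
clf = IsolationForest(contamination=0.01, random_state=0).fit(X)
flags = clf.predict(X)
print(f"{(flags == -1).sum()} transactions flagged for review")
```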

Keywords: artificial intelligence, accounting, automation, predictive analytics, financial reporting

Procedia PDF Downloads 71
234 Fully Instrumented Small-Scale Fire Resistance Benches for Aeronautical Composites Assessment

Authors: Fabienne Samyn, Pauline Tranchard, Sophie Duquesne, Emilie Goncalves, Bruno Estebe, Serge Boubigot

Abstract:

Stringent fire safety regulations are enforced in the aeronautical industry due to the consequences that a potential fire event on an aircraft might imply. This is so true that the fire issue is considered right from the design of the aircraft structure. Due to the incorporation of an increasing amount of polymer matrix composites in replacement of more conventional materials like metals, the nature of the fire risks is changing. The choice of the materials used is consequently of prime importance, as is the evaluation of their resistance to fire. Fire testing is mostly done using so-called certification tests according to standards such as ISO 2685:1998(E). The latter describes a protocol to evaluate the fire resistance of structures located in fire zones (the ability to withstand fire for 5 min). The test consists in exposing a sample of at least 300x300 mm² to an 1100°C propane flame with a calibrated heat flux of 116 kW/m². This type of test is time-consuming and expensive, and gives access to limited information in terms of the fire behavior of the materials (pass or fail); consequently, it can barely be used for material development purposes. In this context, the laboratory UMET, in collaboration with industrial partners, has developed horizontal and vertical small-scale instrumented fire benches for the characterization of the fire behavior of composites. The benches, using smaller samples (no more than 150x150 mm²), enable costs to be cut and hence increase sampling throughput. However, the main added value of our benches is the instrumentation used to collect useful information for understanding the behavior of the materials. Indeed, measurements of the sample backside temperature are performed using an IR camera in both configurations. In addition, for the vertical set-up, a complete characterization of the degradation process can be achieved via mass loss measurements and quantification of the gases released during the tests. These benches have been used to characterize and study the fire behavior of aeronautical carbon/epoxy composites. The horizontal set-up has been used in particular to study the performance and durability of a protective intumescent coating on 2 mm thick 2D laminates. The efficiency of this approach has been validated, the optimized coating thickness has been determined, and the performance after aging has been assessed. The reduction in performance after aging was attributed to the migration of some of the coating additives. The vertical set-up has enabled the investigation of the degradation process of composites under fire. An isotropic and a unidirectional 4 mm thick laminate have been characterized using the bench and post-fire analyses. The mass loss measurements and the gas phase analyses of the two composites do not present significant differences, unlike the temperature profiles through the thickness of the samples. The differences have been attributed to differences in thermal conductivity, as well as to delamination, which is much more pronounced for the isotropic composite (observed in the IR images). This has been confirmed by X-ray microtomography. The developed benches have proven to be valuable tools for developing fire-safe composites.

Keywords: aeronautical carbon/epoxy composite, durability, intumescent coating, small-scale ‘ISO 2685 like’ fire resistance test, X-ray microtomography

Procedia PDF Downloads 271