Search results for: applied art
533 Quality of Life Responses of Students with Intellectual Disabilities Entering an Inclusive, Residential Post-Secondary Program
Authors: Mary A. Lindell
Abstract:
Adults with intellectual disabilities (ID) are increasingly attending postsecondary institutions, including inclusive residential programs at four-year universities. Legislation, national organizations, and researchers support developing postsecondary education (PSE) options for this historically underserved population. Simultaneously, researchers are assessing quality of life (QOL) indicators for people with ID. This study explores the quality of life characteristics of individuals with ID entering a two-year PSE program. A survey aligned with the PSE program was developed and administered to participants before they began their college program (in future studies, the same survey will be administered 6 months and 1 year after graduating). Employment, income, and housing are frequently cited QOL measures. People with disabilities, and especially people with ID, are more likely to experience unemployment and low wages than people without disabilities. PSE improves adult outcomes (e.g., employment, income, housing) for people with and without disabilities. Similarly, adults with ID who attend PSE are more likely to be employed than their peers who do not attend PSE; however, adults with ID are the least likely, among their typical peers and other students with disabilities, to attend PSE. There is increased attention to providing individuals with ID access to PSE, and more research is needed regarding the characteristics of students attending PSE. This study focuses on the participants of a fully residential two-year program for individuals with ID. Students earn an Applied Skills Certificate while focusing on five benchmarks: self-care, home care, relationships, academics, and employment. To create a QOL measure, the goals of the PSE program were identified, and possible assessment items were initially selected from the National Core Indicators (NCI) and the National Longitudinal Transition Study-2 (NLTS2) that aligned with the five program goals. Program staff and advisory committee members offered input on potential item alignment with program goals and expected value to students with ID in the program. National experts in researching QOL outcomes of people with ID were consulted and concurred that the items selected would be useful in measuring the outcomes of postsecondary students with ID. The measure was piloted, modified, and administered to incoming students with ID. Research questions: (1) In what ways are students with ID entering a two-year PSE program similar to individuals with ID who completed the NCI and NLTS2 surveys? (2) In what ways are students with ID entering a two-year PSE program different from individuals with ID who completed the NCI and NLTS2 surveys? The process of developing a QOL measure specific to a PSE program for individuals with ID revealed that many of the items in comprehensive national QOL measures are not relevant to stakeholders of this two-year residential inclusive PSE program. Specific responses of students with ID entering an inclusive PSE program will be presented, as well as a comparison with similar items on national QOL measures. This study explores the characteristics of students with ID entering a residential, inclusive PSE program. This information is valuable for researchers, educators, and policy makers as PSE programs become more accessible for individuals with ID.
Keywords: intellectual disabilities, inclusion, post-secondary education, quality of life
Procedia PDF Downloads 99
532 An Overview of Bioinformatics Methods to Detect Novel Riboswitches Highlighting the Importance of Structure Consideration
Authors: Danny Barash
Abstract:
Riboswitches are RNA genetic control elements that were originally discovered in bacteria and provide a unique mechanism of gene regulation. They work without the participation of proteins and are believed to represent ancient regulatory systems on the evolutionary timescale. One of the biggest challenges in riboswitch research is that many riboswitches are found in prokaryotes, but only a small percentage of known riboswitches have been found in certain eukaryotic organisms. The few examples of eukaryotic riboswitches were identified using sequence-based bioinformatics search methods that include some slight structural considerations. These pattern-matching methods were the first to be applied for riboswitch detection, and they can be programmed very efficiently using a data structure called affix arrays, making them suitable for genome-wide searches of riboswitch patterns. However, they are limited in their ability to detect harder-to-find riboswitches that deviate from the known patterns. Several methods have been developed since then to tackle this problem. The one most commonly used by practitioners is Infernal, which relies on Hidden Markov Models (HMMs) and Covariance Models (CMs). Profile Hidden Markov Models were also implemented in the pHMM Riboswitch Scanner web application, independently of Infernal. Other computational approaches that have been developed include RMDetect, which uses 3D structural modules, and RNAbor, which utilizes the Boltzmann probability of structural neighbors. We have tried to incorporate more sophisticated secondary structure considerations based on RNA folding prediction using several strategies. The first idea was to utilize window-based methods in conjunction with folding predictions by energy minimization. The moving-window approach is heavily geared towards secondary structure consideration relative to sequence, which is treated as a constraint. However, the method cannot be used genome-wide because each folding prediction by energy minimization in the moving window is computationally expensive, so only the vicinity of genes of interest can be scanned. The second idea was to remedy the inefficiency of the previous approach by constructing a pipeline that consists of inverse RNA folding, which considers RNA secondary structure, followed by a BLAST search that is sequence-based and highly efficient. This approach, which relies on inverse RNA folding in general and on our own in-house fragment-based inverse RNA folding program called RNAfbinv in particular, is capable of finding attractive candidates that are missed by Infernal and other standard methods used for riboswitch detection. We demonstrate attractive candidates found by both the moving-window approach and the inverse RNA folding approach performed together with BLAST. We conclude that structure-based methods like the two strategies outlined above hold considerable promise for detecting riboswitches and other conserved RNAs of functional importance in a variety of organisms.
Keywords: riboswitches, RNA folding prediction, RNA structure, structure-based methods
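As an illustration of the moving-window idea described above, the sketch below folds successive windows of a sequence and flags strongly structured regions by their minimum free energy (MFE). It is a minimal sketch, not the authors' pipeline; it assumes the ViennaRNA Python bindings (the RNA module) are installed, and the window size, step, and MFE threshold are arbitrary illustrative choices.

```python
# A minimal sketch of a moving-window MFE scan (not the authors' code).
# Assumes the ViennaRNA Python bindings are installed (import RNA);
# window, step and the MFE threshold are illustrative, not tuned values.
import RNA

def scan_windows(sequence, window=120, step=10, mfe_threshold=-40.0):
    """Fold each window by energy minimization and keep low-MFE hits."""
    hits = []
    for start in range(0, len(sequence) - window + 1, step):
        subseq = sequence[start:start + window]
        structure, mfe = RNA.fold(subseq)  # MFE secondary structure
        if mfe <= mfe_threshold:           # strongly structured candidate
            hits.append((start, mfe, structure))
    return hits

# Scan only the neighborhood of a gene of interest, since genome-wide
# energy minimization is too expensive (as noted in the abstract).
region = "GGGAAACUCCUUCGGGAGUUUCCC" * 10   # placeholder sequence
for start, mfe, _ in scan_windows(region):
    print(f"window at {start}: MFE = {mfe:.1f} kcal/mol")
```

This reflects why the abstract restricts the window scan to gene neighborhoods: the folding call dominates the cost and scales with the number of windows.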
Procedia PDF Downloads 234
531 Corporate Governance and Disclosure Practices of Listed Companies in the ASEAN: A Conceptual Overview
Authors: Chen Shuwen, Nunthapin Chantachaimongkol
Abstract:
Since the world has moved into a transitional period known as globalization, the business environment is now more complicated than ever before. Corporate information has become a matter of great importance for stakeholders seeking to understand the current situation. As a result, the concept of corporate governance has been broadly introduced to manage and control the affairs of corporations, while businesses are required to disclose both financial and non-financial information to the public via various communication channels such as the annual report, the financial report, the company’s website, etc. However, several other issues related to asymmetric information, such as moral hazard or adverse selection, still occur intensively in workplaces. To prevent such problems in business, it is necessary to understand which factors strengthen transparency, accountability, fairness, and responsibility. Based on the aforementioned arguments, this paper aims to propose a conceptual framework that enables an investigation of how corporate governance mechanisms influence the disclosure efficiency of listed companies in the Association of Southeast Asian Nations (ASEAN) and of the factors that should be considered for further development of good behaviors, particularly in regard to voluntary disclosure practices. To achieve this purpose, an extensive literature review is applied as the research methodology. It is divided into three main steps. Firstly, the theories relevant to both corporate governance and disclosure practices, such as agency theory, contract theory, signaling theory, moral hazard theory, and information asymmetry theory, are examined to provide theoretical background. Secondly, the relevant literature on multiple perspectives of corporate governance, its attributes and their roles in business processes, the influence of corporate governance mechanisms on business performance, and the factors determining corporate governance characteristics and capability is reviewed to outline the parameters that should be included in the proposed model. Thirdly, the well-known OECD principles and previous empirical studies on corporate disclosure procedures are evaluated to identify similarities with, and differences from, the disclosure patterns in the ASEAN. Following the literature review, abundant factors and variables were found. In addition, critical factors that also have an impact on disclosure behaviors are addressed in two groups. The first group comprises factors linked to national characteristics: the quality of the national code, legal origin, culture, the level of economic development, and so forth. The second group comprises findings that refer to firm characteristics: ownership concentration, ownership rights, controlling group, and so on. However, because of research limitations, only selected literature is chosen and summarized to form part of the conceptual framework that explores the relationship between corporate governance and the disclosure practices of listed companies in ASEAN.
Keywords: corporate governance, disclosure practice, ASEAN, listed company
Procedia PDF Downloads 192
530 (De)Motivating Mitigation Behavior: An Exploratory Framing Study Applied to Sustainable Food Consumption
Authors: Youval Aberman, Jason E. Plaks
Abstract:
This research provides initial evidence that the self-efficacy of mitigation behavior – the belief that one’s actions can make a difference to the environment – can be implicitly inferred from the way numerical information is presented in environmental messages. The scientific community sees climate change as a pressing issue, but the general public tends to construe climate change as an abstract phenomenon that is psychologically distant. As such, a main barrier to pro-environmental behavior is that individuals often believe that their own behavior makes little to no difference to the environment. When it comes to communicating how the behavior of billions of individuals affects global climate change, it might appear valuable to aggregate those billions and present the shocking enormity of the resources individuals consume. This research provides initial evidence that, in fact, this strategy is ineffective; presenting large-scale aggregate data dilutes the contribution of the individual and impedes individuals’ motivation to act pro-environmentally. The high-impact, underrepresented behavior of eating a sustainable diet was chosen for the present studies. US participants (total N = 668) were recruited online for a study on ‘meat and the environment’ and received information about some of the resources used in meat production – water, CO2e, and feed – with numerical information that varied in its frame of reference. A ‘Nation’ frame of reference discussed the resources used in the beef industry, such as the billions of CO2e released daily by the industry, while a ‘Meal’ frame of reference presented the resources used in the production of a single beef dish. Participants completed measures of pro-environmental attitudes and behavioral intentions, either immediately (Study 1) or two days (Study 2) after reading the information. In Study 2 (n = 520), participants also indicated whether they consumed less or more meat than usual. Study 2 included an additional control condition that contained no environmental data. In Study 1, participants who read about meat production at a national level, compared to at a meal level, reported lower motivation to make ecologically conscious dietary choices and lower behavioral intention to change their diet. In Study 2, a similar pattern emerged, with the added insight that the Nation condition, but not the Meal condition, deviated from the control condition. Participants across conditions, on average, reduced their meat consumption over the duration of Study 2, except those in the Nation condition, whose consumption remained unchanged. Presenting nation-wide consequences of human behavior is a double-edged sword: framing at a large scale might reveal the relationship between collective actions and environmental issues, but it hinders the belief that individual actions make a difference.
Keywords: climate change communication, environmental concern, meat consumption, motivation
Procedia PDF Downloads 158
529 Stability Assessment of Underground Power House Encountering Shear Zone: Sunni Dam Hydroelectric Project (382 MW), India
Authors: Sanjeev Gupta, Ankit Prabhakar, K. Rajkumar Singh
Abstract:
Sunni Dam Hydroelectric Project (382 MW) is a run-of-river development with an underground powerhouse, proposed to harness the hydel potential of the river Satluj in Himachal Pradesh, India. The project is located in the inner Lesser Himalaya, between the Dhauladhar Range in the south and the Higher Himalaya in the north. The project comprises two large underground caverns, a powerhouse cavern (171 m long, 22.5 m wide and 51.2 m high) and a transformer hall cavern (175 m long, 18.7 m wide and 27 m high), with a 50 m rock pillar between the two caverns. The highly jointed, fractured, anisotropic rock mass is a key challenge for underground structures in Himalayan geology. The concern for the stability of the rock mass increases when weak/shear zones are encountered in the underground structure. In the Sunni Dam project, a 1.7 m to 2 m thick weak/shear zone comprising deformed, weak material with gouge has been encountered in the powerhouse cavern at 70 m, with a dip direction of 325 degrees and a dip amount of 38 degrees; it also intersects the transformer hall in its initial reach. The rock encountered in the powerhouse area is moderately to highly jointed pink quartz arenite belonging to the Khaira Formation, a transition zone comprising alternating grey, pink and white quartz arenite and shale sequences, and dolomite at higher reaches. The rock mass is intersected by mainly three joint sets, excluding bedding joints, and a few random joints. The rock class in the powerhouse mainly varies from poor (class IV) to lower-order fair (class III), and in some reaches very poor rock mass has also been encountered. To study the stability of the underground structure in the weak/shear rock mass, a 3D numerical model analysis has been carried out using RS3 software. Field studies have been interpreted and analysed to derive Bieniawski’s RMR, Barton’s “Q” class and the Geological Strength Index (GSI). The various material parameters and in-situ characteristics have been determined based on tests conducted by the Central Soil and Materials Research Station, New Delhi. The behaviour of the cavern has been studied by assessing the displacement contours, major and minor principal stresses and plastic zones for different stage excavation sequences. For optimisation of the support system, the stability of the powerhouse cavern with different powerhouse orientations has also been studied. The numerical modelling results indicate that the cavern is unlikely to face stress-governed or structural instability with the support system applied to the crown and side walls.
Keywords: 3D analysis, Himalayan geology, shear zone, underground power house
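For readers unfamiliar with the rock-mass classifications named above, the sketch below computes Barton's Q-value from the six standard Q-system ratings. It is a minimal illustration, not the project's assessment; the ratings used are hypothetical, not the Sunni Dam field data.

```python
# Illustrative helper for Barton's Q-system (not the paper's data):
# Q = (RQD/Jn) * (Jr/Ja) * (Jw/SRF), the three factors representing
# block size, inter-block shear strength, and active stress.
def barton_q(rqd, jn, jr, ja, jw, srf):
    return (rqd / jn) * (jr / ja) * (jw / srf)

# hypothetical ratings for a jointed, sheared stretch of rock
q = barton_q(rqd=50, jn=9, jr=1.5, ja=2.0, jw=1.0, srf=5.0)
print(f"Q = {q:.2f}")  # ~0.83 falls in the 'very poor' class (Q = 0.1-1)
```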
Procedia PDF Downloads 88
528 Methodology to Assess the Circularity of Industrial Processes
Authors: Bruna F. Oliveira, Teresa I. Gonçalves, Marcelo M. Sousa, Sandra M. Pimenta, Octávio F. Ramalho, José B. Cruz, Flávia V. Barbosa
Abstract:
The EU Circular Economy Action Plan, launched in 2020, is one of the major initiatives to promote the transition into a more sustainable industry. The circular economy is a popular concept used by many companies nowadays. Some industries are further along this path than others, and the tannery industry is a sector that needs more attention due to its strong environmental impact, caused by its size, intensive resource consumption, the lack of recyclability and second use of its products, and the industrial effluents generated by its manufacturing processes. For these reasons, the zero-waste goal and the European objectives are still far from being achieved. In this context, a need arises for an effective methodology to determine the level of circularity of tannery companies. Given the complexity of the circular economy concept, few factories have a specialist in sustainability to assess the company’s circularity or have the ability to implement circular strategies that could benefit the manufacturing processes. Although there are several methodologies to assess circularity in specific industrial sectors, there is no easy go-to methodology applied in factories aiming for cleaner production. Therefore, a straightforward methodology to assess the level of circularity, in this case of a tannery industry, is presented and discussed in this work, allowing any company to measure the impact of its activities. The methodology developed consists of calculating the Overall Circular Index (OCI) by evaluating the circularity of four key areas - energy, material, economy and social - in a specific factory. The index is a value between 0 and 1, where 0 means a linear economy and 1 a completely circular economy. Each key area has a sub-index, obtained through key performance indicators (KPIs) for each theme, and the OCI reflects the average of the four sub-indexes. Some fieldwork in the appointed company was required in order to obtain all the necessary data. By having separate sub-indexes, one can observe which areas are more linear than others. Thus, it is possible to work on the most critical areas by implementing strategies to increase the OCI. After these strategies are implemented, the OCI is recalculated to check the improvements made and any other changes in the remaining sub-indexes. As such, the methodology under discussion works through continuous improvement, constantly reevaluating and improving the circularity of the factory. The methodology is also flexible enough to be implemented in any industrial sector by adapting the KPIs. This methodology was implemented in a selected Portuguese small and medium-sized enterprise (SME) in the tannery industry and proved to be a relevant tool to measure the circularity level of the factory. It was observed that it makes it easier for non-specialists to evaluate circularity and identify possible solutions to increase its value, as well as to learn how one action can impact their environment. In the end, energy and environmental inefficiencies were identified and corrected, increasing the sustainability and circularity of the company. Through this work, important contributions were provided, helping Portuguese SMEs to achieve the European and UN 2030 sustainability goals.
Keywords: circular economy, circularity index, sustainability, tannery industry, zero-waste
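To make the calculation concrete, the sketch below reproduces the OCI arithmetic described above: each key area's sub-index is aggregated from normalized KPIs (here simply averaged, which is an assumption), and the OCI is the mean of the four sub-indexes. All KPI values are hypothetical placeholders, not the case-study data.

```python
# A minimal sketch of the Overall Circular Index (OCI) calculation: each
# of the four key areas gets a sub-index in [0, 1] averaged from its
# KPIs (averaging is an assumption), and the OCI is the mean of the four
# sub-indexes. KPI values below are hypothetical placeholders.
def sub_index(kpis):
    """Average a list of normalized KPI scores (0 = linear, 1 = circular)."""
    return sum(kpis) / len(kpis)

areas = {
    "energy":   sub_index([0.40, 0.55]),   # e.g. renewable share, heat recovery
    "material": sub_index([0.30, 0.20]),   # e.g. recycled input share, waste valorized
    "economy":  sub_index([0.60]),
    "social":   sub_index([0.70, 0.65]),
}

oci = sum(areas.values()) / len(areas)
print(areas)                  # spot the most linear areas to act on first
print(f"OCI = {oci:.2f}")     # 0 = fully linear economy, 1 = fully circular
```

Because the sub-indexes stay visible alongside the OCI, the most linear areas can be targeted first and the index recalculated after each improvement cycle, matching the continuous-improvement loop described above.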
Procedia PDF Downloads 68
527 Seismic Response of Reinforced Concrete Buildings: Field Challenges and Simplified Code Formulas
Authors: Michel Soto Chalhoub
Abstract:
Building-code-related literature provides recommendations on normalizing approaches to the calculation of the dynamic properties of structures. Most building codes make a distinction among types of structural systems, construction material, and configuration through a numerical coefficient in the expression for the fundamental period. The period is then used in normalized response spectra to compute base shear. The typical parameter used in simplified code formulas for the fundamental period is the overall building height raised to a power determined from analytical and experimental results. However, reinforced concrete buildings, which constitute the majority of built space in less developed countries, pose additional challenges compared to buildings made of a homogeneous material such as steel, or of concrete under stricter quality control. In the present paper, the particularities of reinforced concrete buildings are explored and related to current methods of equivalent static analysis. A comparative study is presented between the Uniform Building Code, commonly used for buildings within and outside the USA, and data from the Middle East used to model 151 reinforced concrete buildings of varying number of bays, number of floors, overall building height, and individual story height. The fundamental period was calculated using eigenvalue matrix computation. The results were also used in a separate regression analysis in which the computed period serves as the dependent variable, while five building properties serve as independent variables. The statistical analysis shed light on important parameters that simplified code formulas need to account for, including individual story height, overall building height, floor plan, number of bays, and concrete properties. Such inclusions are important for reinforced concrete buildings in special conditions due to the level of concrete damage, aging, or materials quality control during construction. Overall, the results of the present analysis show that simplified code formulas for fundamental period and base shear may be applied, but they require revisions to account for multiple parameters. The conclusion above is confirmed by the analytical model, in which fundamental periods were computed using numerical techniques and eigenvalue solutions. This recommendation is particularly relevant to code upgrades in less developed countries, where it is customary to adopt, and mildly adapt, international codes. We also note the necessity of further research using empirical data from buildings in Lebanon that were subjected to severe damage due to impulse loading or accelerated aging. However, we excluded this study from the present paper and left it for future research, as it has its own peculiarities and requires a different type of analysis.
Keywords: seismic behaviour, reinforced concrete, simplified code formulas, equivalent static analysis, base shear, response spectra
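To illustrate the eigenvalue computation of the fundamental period mentioned above, the sketch below assembles the mass and stiffness matrices of an idealized lumped-mass shear building, solves the generalized eigenvalue problem K phi = omega^2 M phi, and converts the lowest frequency into a period. The story mass, stiffness, and floor count are hypothetical, not values from the 151-building dataset.

```python
# A minimal sketch (hypothetical numbers, not the paper's dataset) of the
# eigenvalue route to the fundamental period: for a lumped-mass shear
# building, solve K phi = omega^2 M phi and take T1 = 2*pi/omega_1.
import numpy as np
from scipy.linalg import eigh

m = 200e3          # story mass, kg (assumed)
k = 150e6          # story lateral stiffness, N/m (assumed)
n = 5              # number of floors (assumed)

M = m * np.eye(n)
K = np.zeros((n, n))
for i in range(n):               # classic tridiagonal shear-building stiffness
    K[i, i] = 2 * k if i < n - 1 else k
    if i > 0:
        K[i, i - 1] = K[i - 1, i] = -k

eigvals, _ = eigh(K, M)          # generalized symmetric eigenvalue problem
omega1 = np.sqrt(eigvals[0])     # lowest circular frequency, rad/s
T1 = 2 * np.pi / omega1
print(f"Fundamental period T1 = {T1:.2f} s")
```

The regression described in the abstract would then relate such computed periods to story height, overall height, floor plan, number of bays, and concrete properties.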
Procedia PDF Downloads 232
526 Mental Health Surveys on Community and Organizational Levels: Challenges, Issues, Conclusions and Possibilities
Authors: László L. Lippai
Abstract:
In addition to the fact that mental health bears great significance for the individual, it can also be regarded as an organizational, community and societal resource. Within the Szeged Health Promotion Research Group, we conducted mental health surveys on two levels: the inhabitants of a medium-sized Hungarian town and the students of a Hungarian university with a relatively large headcount were requested to participate in surveys whose goals were to define local government priorities and organization-level health promotion programmes, respectively. To facilitate professional decision-making, we defined three pragmatically relevant groups within the target population: the mentally healthy, the vulnerable and the endangered. In order to determine which group a person actually belongs to, we designed a simple and quick measurement tool, the Mental State Questionnaire, which could even be utilised as a screening method. The validity of the above three categories was verified by analysis of variance against psychological quality of life variables. We demonstrate the pragmatic significance of our method via analyses of the scores of our two mental health surveys. On the town level, during our representative survey in Hódmezővásárhely (N=1839), we found that 38.7% of the participants were mentally healthy, 35.3% were vulnerable, while 16.3% were considered endangered. We were able to identify groups that were in a dramatic state in terms of mental health. For example, one such group consisted of men aged 45 to 64 with only a primary education qualification, among whom the ratios of the mentally healthy, vulnerable and endangered were 4.5, 45.5 and 50%, respectively. It was also astonishing to see to what little extent qualification prevailed as a protective factor in the case of women. Based on our data, the female group aged 18 to 44 with primary education (of whom 20.3% were mentally healthy, 42.4% vulnerable and 37.3% endangered) as well as the female group aged 45 to 64 with a university or college degree (of whom 25% were mentally healthy, 51.3% vulnerable and 23.8% endangered) are to be handled as priority intervention target groups in a similarly difficult position. On the organizational level, our survey involving the students of the University of Szeged (N=1565) provided data to prepare a mental health promotion strategy for a university with a headcount exceeding 20,000. When developing an organizational strategy, it was important to gather information to estimate the proportions of target groups to which mental health promotion methods (for example, life management skills development, detection, psychological consultancy, psychotherapy) would be applied. Our scores show that 46.8% of the student participants were mentally healthy, 42.1% were vulnerable and 11.1% were endangered. These data convey relevant information as to the allocation of organizational resources within a university with a considerable headcount. In conclusion, the Mental State Questionnaire, as a valid screening method, is adequate to describe a community in a plain and informative way in terms of mental health. The application of the method can promote the preparation, design and implementation of mental health promotion interventions.
Keywords: health promotion, mental health promotion, mental state questionnaire, psychological well-being
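The validation step described above lends itself to a short worked example. The sketch below runs a one-way analysis of variance testing whether the three questionnaire categories differ on a psychological quality-of-life score; the group scores are synthetic stand-ins, not the survey data.

```python
# A minimal sketch (synthetic numbers, not the survey data) of the
# validation step the abstract describes: one-way ANOVA testing whether
# the three Mental State Questionnaire categories differ on a
# psychological quality-of-life score.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
healthy    = rng.normal(75, 10, 200)   # hypothetical QOL scores per group
vulnerable = rng.normal(65, 10, 180)
endangered = rng.normal(50, 10, 90)

f_stat, p_value = f_oneway(healthy, vulnerable, endangered)
print(f"F = {f_stat:.1f}, p = {p_value:.3g}")  # small p supports distinct categories
```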
Procedia PDF Downloads 295
525 Using Scilab® as New Introductory Method in Numerical Calculations and Programming for Computational Fluid Dynamics (CFD)
Authors: Nicoly Coelho, Eduardo Vieira Vilas Boas, Paulo Orestes Formigoni
Abstract:
Faced with the remarkable developments in the various segments of modern engineering, driven by increasing technological development, professionals in all educational areas need to help overcome the difficulties faced by those who are starting their academic journey. Aiming to overcome these difficulties, this article provides an introduction to the basic study of numerical methods applied to fluid mechanics and thermodynamics, demonstrating modeling and simulation, with a detailed explanation of the fundamental numerical solution using the finite difference method in SCILAB, free software that is easily accessible and can be used by any research center or university, anywhere, in developed and developing countries alike. It is known that Computational Fluid Dynamics (CFD) is a necessary tool for engineers and professionals who study fluid mechanics; however, the teaching of this area of knowledge in undergraduate programs faces difficulties due to software costs and the degree of difficulty of the mathematical problems involved, so the matter is often treated only in postgraduate courses. This work aims to bring low-cost CFD into the teaching of Transport Phenomena at the undergraduate level, analyzing a small classic case of fundamental thermodynamics with the Scilab® program. The study starts from the basic theory involving the partial differential equation governing the heat transfer problem, which students must master, and the discretization process, which includes the basic principles of Taylor series expansion, responsible for generating a system of equations whose convergence is checked using the Sassenfeld criterion, and which is finally solved by the Gauss-Seidel method. In this work, we demonstrate both simple problems solved manually and complex problems that require computer implementation, for which we use a small algorithm of fewer than 200 lines in Scilab® to study heat transfer in a rectangular plate heated on four sides, with a different temperature on each side, producing a two-dimensional transport simulation with colored graphics. With the spread of computer technology, numerous programs have emerged that demand strong programming skills from researchers. Considering that the ability to program CFD is the main problem to be overcome, both by students and by researchers, we present in this article the use of programs with a less complex interface, reducing the difficulty of producing graphical modeling and simulation for CFD and extending programming experience to undergraduates.
Keywords: numerical methods, finite difference method, heat transfer, Scilab
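The heated-plate case described above follows a standard finite-difference scheme. The sketch below is a minimal Python analogue of it (the abstract's own implementation is in Scilab): the steady two-dimensional heat equation is discretized on a grid, each interior node becomes the average of its four neighbors, and the system is solved by Gauss-Seidel iteration. Grid size, boundary temperatures, and tolerance are illustrative choices.

```python
# A minimal Python analogue (not the authors' Scilab code) of the
# finite-difference / Gauss-Seidel scheme the abstract describes: the
# steady 2-D heat equation on a rectangular plate with a fixed
# temperature on each of the four sides (boundary values assumed).
import numpy as np

nx, ny = 30, 20
T = np.zeros((ny, nx))
T[0, :], T[-1, :] = 100.0, 0.0    # top and bottom edge temperatures
T[:, 0], T[:, -1] = 75.0, 50.0    # left and right edge temperatures

for _ in range(5000):              # Gauss-Seidel sweeps, updating in place
    max_change = 0.0
    for i in range(1, ny - 1):
        for j in range(1, nx - 1):
            new = 0.25 * (T[i+1, j] + T[i-1, j] + T[i, j+1] + T[i, j-1])
            max_change = max(max_change, abs(new - T[i, j]))
            T[i, j] = new
    if max_change < 1e-4:          # convergence criterion
        break

print(T.round(1))                  # interior field approximates steady state
```

In-place updating is what distinguishes Gauss-Seidel from Jacobi iteration here; the Sassenfeld criterion mentioned above is the a-priori check that such a sweep converges.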
Procedia PDF Downloads 387
524 Functional Ingredients from Potato By-Products: Innovative Biocatalytic Processes
Authors: Salwa Karboune, Amanda Waglay
Abstract:
Recent studies indicate that health-promoting functional ingredients and nutraceuticals can help support and improve overall public health, which is timely given the aging of the population and the increasing cost of health care. The development of novel ‘natural’ functional ingredients is increasingly challenging. Biocatalysis offers powerful approaches to achieve this goal. Our recent research has been focusing on the development of innovative biocatalytic approaches towards the isolation of protein isolates from potato by-products and the generation of peptides. Potato is a vegetable whose high-quality proteins are underestimated. In addition to their high proportion of essential amino acids, potato proteins possess angiotensin-converting enzyme-inhibitory potency, an ability to reduce plasma triglycerides associated with a reduced risk of atherosclerosis, and the capacity to stimulate the release of the appetite-regulating hormone CCK. Potato proteins have long been considered not economically feasible due to the low protein content (27% dry matter) found in the tuber (Solanum tuberosum). However, potato ranks as the second-largest protein-supplying crop grown per hectare, following wheat. Potato proteins include patatin (40-45 kDa), protease inhibitors (5-25 kDa), and various high-MW proteins. Non-destructive techniques for the extraction of proteins from potato pulp and for the generation of peptides are needed in order to minimize functional losses and enhance quality. A promising approach for isolating the potato proteins was developed, which involves the use of multi-enzymatic systems containing selected glycosyl hydrolase enzymes that work synergistically to open the plant cell wall network. This enzymatic approach is advantageous due to: (1) the use of milder reaction conditions, (2) the high selectivity and specificity of enzymes, (3) the low cost, and (4) the ability to market natural ingredients. Another major benefit of this enzymatic approach is the elimination of a costly purification step; indeed, these multi-enzymatic systems have the ability to isolate proteins while fractionating them, due to their specificity and selectivity with minimal proteolytic activities. The isolated proteins were used for the enzymatic generation of active peptides. In addition, they were applied in a reduced-gluten cookie formulation, as consumers are placing high demand on easy, ready-to-eat snack foods with high nutritional quality and little to no gluten. The addition of potato protein significantly improved the textural hardness of reduced-gluten cookies, making it more comparable to that of wheat flour alone. The presentation will focus on our recent ‘proof-of-principle’ results illustrating the feasibility and efficiency of new biocatalytic processes for the production of innovative functional food ingredients from potato by-products, whose potential health benefits are increasingly being recognized.
Keywords: biocatalytic approaches, functional ingredients, potato proteins, peptides
Procedia PDF Downloads 379
523 Diversity of Rhopalocera in Different Vegetation Types of PC Hills, Philippines
Authors: Sean E. Gregory P. Igano, Ranz Brendan D. Gabor, Baron Arthur M. Cabalona, Numeriano Amer E. Gutierrez
Abstract:
Distribution patterns and abundance of butterflies respond in the long term to variations in habitat quality. Studying butterfly populations would give evidence of how vegetation types influence their diversity. In this research, the Rhopalocera diversity of PC Hills was assessed to provide information on diversity trends in varying vegetation types. PC Hills, located in Palo, Leyte, Philippines, is a relatively undisturbed area with forests and rivers. Despite being situated near inhabited villages, the area is observed to have a potentially rich butterfly population. To assess Rhopalocera species richness and diversity, a transect sampling technique was applied to monitor and document butterflies. Transects were placed in locations that can be mapped, described and relocated easily. Three transects measuring three hundred meters each, with a 5-meter diameter, were established based on the different vegetation types present. The three main vegetation types identified were the agroecosystem (transect 1), dipterocarp forest (transect 2), and riparian (transect 3). Sample collections were done only from 9:00 A.M. to 3:00 P.M. under warm and bright weather, with no more than moderate winds and when it was not raining. When weather conditions did not permit collection, it was moved to another day. A GPS receiver was used to record the locations of the selected sample sites and the coordinates at which each sample was collected. Morphological analysis was done in the first phase of the study to identify the voucher specimens to the lowest taxonomic level possible, using butterfly identification guides and species lists as references. In the second phase, DNA barcoding will be used to further identify the voucher specimens to the species level. After eight (8) sampling sessions, seven hundred forty-two (742) individuals were seen, and twenty-two (22) Rhopalocera genera were identified through morphological identification. The genus Ypthima of the family Nymphalidae and the genera Eurema and Leptosia of the family Pieridae were the most dominant taxa observed. Twenty (20) of the thirty-one (31) voucher specimens have already been identified to the species level using DNA barcoding. The Shannon-Wiener index showed that the highest diversity level was observed in the third transect (H’ = 2.947), followed by the second transect (H’ = 2.6317), with the lowest in the first transect (H’ = 1.767). This indicates that butterflies are more likely to inhabit dipterocarp and riparian vegetation types than agroecosystems, which influences their species composition and diversity. Moreover, the presence of a river in the riparian vegetation supported its diversity value, since butterflies have a tendency to fly into areas near rivers. Species identification of the other voucher specimens will be done in order to compute the overall species richness of PC Hills. Further butterfly sampling sessions in PC Hills are recommended for a more reliable diversity trend and to discover more butterfly species. Expanding the research by assessing Rhopalocera diversity in other locations should be considered, along with studying factors that affect butterfly species composition other than vegetation types.
Keywords: distribution patterns, DNA barcoding, morphological analysis, Rhopalocera
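The H' values reported above come from the standard Shannon-Wiener formula, H' = -sum(p_i * ln p_i), where p_i is the proportional abundance of species i. The sketch below computes it for two hypothetical transect counts (not the PC Hills data) to show how a more even community yields a higher H'.

```python
# A minimal sketch of the Shannon-Wiener diversity index used to compare
# transects: H' = -sum(p_i * ln(p_i)) over proportional abundances p_i.
# The counts below are hypothetical, not the PC Hills data.
import math

def shannon_wiener(counts):
    total = sum(counts)
    return -sum((n / total) * math.log(n / total) for n in counts if n > 0)

agroecosystem = [120, 40, 10, 5]                 # individuals per species
riparian      = [30, 25, 22, 20, 18, 15, 12, 10]

print(f"H' agroecosystem = {shannon_wiener(agroecosystem):.3f}")
print(f"H' riparian      = {shannon_wiener(riparian):.3f}")  # more even -> higher H'
```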
Procedia PDF Downloads 154
522 Preparation of Activated Carbon From Waste Feedstock: Activation Variables Optimization and Influence
Authors: Oluwagbemi Victor Aladeokin
Abstract:
In the last decade, global peanut cultivation has seen increased demand, attributed to the health benefits of peanuts, rising to ~41.4 MMT in 2019/2020. Peanut and other nutshells are considered waste in various parts of the world and are usually used only for their fuel value. However, this agricultural by-product can be converted into a higher-value product such as activated carbon. For many years, owing to its highly porous structure, activated carbon has been widely and effectively used as an adsorbent in the purification and separation of gases and liquids. Activated carbons used for commercial purposes are primarily made from a range of precursors such as wood, coconut shell, coal, bones, etc. However, due to difficulty in regeneration and high cost, various agricultural residues such as rice husk, corn stalks, apricot stones, almond shells, coffee beans, etc., have been explored to produce activated carbons. In the present study, the potential of peanut shells as a precursor in the production of activated carbon, and the adsorption capacity of the product, are investigated. Usually, precursors used to produce activated carbon have a carbon content above 45%. A typical raw peanut shell has a carbon content of 42 wt.%. To increase the yield, this study employed the chemical activation method using zinc chloride. Zinc chloride is well known for its effectiveness in increasing the porosity of carbonaceous materials. In chemical activation, activation temperature and impregnation ratio are the parameters commonly reported to be the most significant; however, this study also examines the influence of activation time on the development of activated carbon from peanut shells. Activated carbons are applied for different purposes; however, as the application of activated carbon becomes more specific, understanding the influence of the activation variables becomes paramount for better control of the quality of the final product. A traditional approach to experimentally investigating the influence of the activation parameters involves varying one parameter at a time. However, a more efficient way to reduce the number of experimental runs is to apply design of experiments. One of the objectives of this study is to optimize the activation variables. Thus, this work employed the response surface methodology of design of experiments to study the interactions between the activation parameters (temperature, impregnation ratio, and activation time) and consequently optimize them. The optimum activation conditions found were 485 °C temperature, 15 min activation time, and 1.7 impregnation ratio. The optimum conditions resulted in an activated carbon with a relatively high surface area of ca. 1700 m2/g, 47% yield, relatively high density, low ash, and high fixed carbon content. Impregnation ratio and temperature were found to most influence the final characteristics of the activated carbon produced from peanut shells. The results of this study, using the response surface methodology technique, have revealed the potential of peanut shells and the most significant parameters that influence the chemical activation process to produce activated carbon, which can find use in both liquid- and gas-phase adsorption applications.
Keywords: chemical activation, fixed carbon, impregnation ratio, optimum, surface area
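As a rough illustration of the response-surface step, the sketch below fits a full second-order polynomial in temperature, time, and impregnation ratio to a small designed set of runs and reads off the predicted optimum by a grid search. All run data are hypothetical, not the study's measurements, and a dedicated RSM/DOE package would normally replace this hand-rolled fit.

```python
# Hypothetical response-surface fit (not the study's data): a full
# quadratic yield model y = f(T, t, R) fitted by least squares, then a
# grid search for the predicted optimum activation conditions.
import numpy as np

# designed runs: temperature (deg C), time (min), impregnation ratio
X = np.array([[400, 10, 1.0], [400, 10, 2.0], [400, 20, 1.0], [400, 20, 2.0],
              [550, 10, 1.0], [550, 10, 2.0], [550, 20, 1.0], [550, 20, 2.0],
              [400, 15, 1.5], [550, 15, 1.5], [475, 10, 1.5], [475, 20, 1.5],
              [475, 15, 1.5], [475, 15, 1.5], [475, 15, 1.5]])
y = np.array([38.0, 40.0, 39.0, 42.0, 40.0, 41.0, 40.0, 42.0,
              43.0, 44.0, 45.0, 45.5, 47.0, 46.5, 47.2])  # yields, %

def design_matrix(X):
    T, t, R = X.T  # intercept, linear, interaction and quadratic terms
    return np.column_stack([np.ones(len(X)), T, t, R,
                            T*t, T*R, t*R, T**2, t**2, R**2])

beta, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)

cands = np.array([[T, t, R] for T in range(400, 551, 5)
                  for t in range(10, 21) for R in np.arange(1.0, 2.05, 0.1)])
best = cands[np.argmax(design_matrix(cands) @ beta)]
print("predicted optimum (T, t, R):", best)
```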
Procedia PDF Downloads 145
521 Impact of Customer Experience Quality on Loyalty of Mobile and Fixed Broadband Services: Case Study of Telecom Egypt Group
Authors: Nawal Alawad, Passent Ibrahim Tantawi, Mohamed Abdel Salam Ragheb
Abstract:
Providing customers with quality experiences has been confirmed to be a sustainable competitive advantage with a distinct financial impact for companies. The success of service providers now relies on their ability to provide customer-centric services. The importance of perceived service quality and customer experience is widely recognized. The focus of this research is the area of mobile and fixed broadband services. This study is of dual importance, both academic and practical. Academically, this research applies a new model investigating the impact of customer experience quality on loyalty, based on modifying the multiple-item scale for measuring customers’ service experience in a new area, rather than depending on the traditional models. The integrated scale embraces four dimensions: service experience, outcome focus, moments of truth and peace of mind. In addition, it gives a scientific explanation for this relationship, so this research fills a gap, as no previous work has correlated or explained these relations using such an integrated model; this is also the first time such a modified, integrated model has been applied in the telecom field. Practically, this research gives marketers and practitioners insights into improving customer loyalty by evolving the experience quality of broadband customers, which translates into the suggested outcomes: purchase, commitment, repeat purchase and word-of-mouth. This approach is one of the emerging topics in service marketing. Data were collected through 412 questionnaires and analyzed using structural equation modeling. Findings revealed that both outcome focus and moments of truth have a significant impact on loyalty, while both service experience and peace of mind have an insignificant impact on loyalty. In addition, it was found that 72% of the variation occurring in loyalty is explained by the model. The researcher also measured the net promoter score and gave an explanation of the results. Furthermore, customers’ priorities for broadband services were assessed. The researcher recommends that the findings of this research be considered in the future plans of Telecom Egypt Group, and that they be applied in the same industry, especially in developing countries that share similar circumstances and service settings. This research is a positive contribution to service marketing, particularly in the telecom industry, for making marketing more reliable, as managers can relate investments in service experience directly to the performance measures closest to income, for instance, repurchase behavior, positive word of mouth, and commitment. Finally, the researcher recommends that future studies consider this model to explain significant marketing outcomes such as share of wallet and, ultimately, profitability.
Keywords: broadband services, customer experience quality, loyalty, net promoter score
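Since the net promoter score is mentioned without its formula, the sketch below shows the standard calculation: the percentage of promoters (ratings 9-10 on the 0-10 recommendation scale) minus the percentage of detractors (ratings 0-6). The ratings are hypothetical, not the study's responses.

```python
# A minimal sketch of the net promoter score (NPS) calculation:
# % promoters (9-10) minus % detractors (0-6) on the 0-10
# "would you recommend us?" scale. Ratings are hypothetical.
ratings = [10, 9, 9, 8, 7, 6, 10, 3, 9, 8, 5, 10, 7, 9, 2]

promoters  = sum(r >= 9 for r in ratings)
detractors = sum(r <= 6 for r in ratings)
nps = 100 * (promoters - detractors) / len(ratings)
print(f"NPS = {nps:.0f}")  # ranges from -100 to +100
```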
Procedia PDF Downloads 267
520 Predicting Loss of Containment in Surface Pipeline using Computational Fluid Dynamics and Supervised Machine Learning Model to Improve Process Safety in Oil and Gas Operations
Authors: Muhammmad Riandhy Anindika Yudhy, Harry Patria, Ramadhani Santoso
Abstract:
Loss of containment is the primary hazard that process safety management is concerned with in the oil and gas industry. Escalation to more serious consequences all begins with loss of containment, starting with oil and gas release through leakage or spillage from primary containment, resulting in a pool fire, jet fire or even an explosion when the release meets one of the various ignition sources present in operations. Therefore, the heart of process safety management is avoiding loss of containment and mitigating its impact through the implementation of safeguards. The most effective safeguard in this case is an early detection system that alerts Operations to take action before a potential loss of containment occurs. The value of a detection system increases when it is applied to a long surface pipeline, which is naturally difficult to monitor at all times and is exposed to multiple causes of loss of containment, from natural corrosion to illegal tapping. Based on prior research and studies, accurately detecting loss of containment in a surface pipeline is difficult. The trade-off between cost-effectiveness and high accuracy has been the main issue when selecting a traditional detection method. The current best-performing method, the Real-Time Transient Model (RTTM), requires analysis of closely positioned pressure, flow and temperature (PVT) points along the pipeline to be accurate. Having multiple adjacent PVT sensors along the pipeline is expensive, and hence generally not a viable alternative from an economic standpoint. A conceptual approach combining mathematical modeling using computational fluid dynamics with a supervised machine learning model has shown promising results in predicting pipeline leakage. Mathematical modeling is used to generate simulation data, which are then used to train the leak detection and localization models. Mathematical models and simulation software have also been shown to provide results comparable to experimental data with very high levels of accuracy. While the supervised machine learning model requires a large training dataset for the development of accurate models, mathematical modeling has been shown to be able to generate the required datasets, justifying the application of data analytics for the development of model-based leak detection systems for petroleum pipelines. This paper presents a review of key leak detection strategies for oil and gas pipelines, with a specific focus on crude oil applications, and presents the opportunities for the use of data analytics tools and mathematical modeling in the development of a robust real-time leak detection and localization system for surface pipelines. A case study is also presented.
Keywords: pipeline, leakage, detection, AI
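To make the CFD-plus-supervised-learning idea concrete, the sketch below trains a classifier on synthetic pressure/flow/temperature records standing in for simulated pipeline states labeled leak or no-leak. The feature distributions, leak signatures, and the choice of a random forest are all illustrative assumptions, not the paper's actual pipeline model.

```python
# A minimal sketch (synthetic data standing in for CFD simulation output)
# of the supervised step the abstract describes: train a classifier on
# simulated pressure/flow/temperature (PVT) records labeled leak/no-leak.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
n = 2000
pressure = rng.normal(60.0, 2.0, n)          # bar (assumed operating point)
flow     = rng.normal(300.0, 10.0, n)        # m3/h
temp     = rng.normal(35.0, 1.5, n)          # deg C
leak = rng.random(n) < 0.2                   # 20% of simulated records are leaks
pressure[leak] -= rng.normal(4.0, 1.0, leak.sum())   # leaks depress pressure
flow[leak]     -= rng.normal(15.0, 5.0, leak.sum())  # and downstream flow

X = np.column_stack([pressure, flow, temp])
X_tr, X_te, y_tr, y_te = train_test_split(X, leak, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```

In the approach the abstract outlines, the synthetic generator above would be replaced by validated CFD transient simulations, which supply the large labeled dataset that field data alone cannot provide.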
Procedia PDF Downloads 191
519 Focus on the Bactericidal Efficacies of Alkaline Agents in Solid and the Required Time for Bacterial Inactivation
Authors: Hakimullah Hakim, Chiharu Toyofuku, Mari Ota, Mayuko Suzuki, Miyuki Komura, Masashi Yamada, Md. Shahin Alam, Natthanan Sangsriratanakul, Dany Shoham, Kazuaki Takehara
Abstract:
Disinfectants and their application are an essential part of infection control strategies and of the enhancement of biosecurity at farms worldwide. Alkaline agents are well known for their strong and long-term antimicrobial capacities and are most frequently applied at farms for the control and prevention of biological hazards. However, inadequate information regarding such materials’ capacities to inactivate pathogens, and their improper application, prevent farmers from achieving this goal. Thus, their efficacies require further evaluation, under different conditions and in different ways. In this study, we evaluated the bactericidal efficacies of food-additive-grade calcium hydroxide (FdCa(OH)2) powder derived from natural calcium carbonates obtained from limestone (Fine Co., Ltd., Tokyo, Japan) and of bioceramic powder (BCX) derived from chicken feces at pH 13 (NMG environmental development Co., Ltd., Tokyo, Japan), with respect to their capacity to inactivate bacteria in feces. [Materials & Methods] Chicken feces were individually inoculated with 100 µl of Escherichia coli or Salmonella Infantis in Falcon tubes; then FdCa(OH)2 or BCX powder was individually added to final concentrations of 0, 5, 10, 20 and 30% (w/w) in a total weight of 0.5 g, followed by proper mixing and incubation at room temperature for defined periods of time in a dark place. Afterwards, 10 ml of 1 M Tris-HCl (pH 7.2) was added to reduce the pH, in order to stop the powders’ activity and harvest the remaining viable bacteria; using normal medium or dW2 to recover the bacteria leaves the mixture pH high, so the bacteria would soon be inactivated, and the latter practice therefore yields incorrect and misleading results. Samples were then inoculated on DHL agar plates in order to calculate the colony-forming units (CFU)/ml of viable bacteria. [Results and Discussion] FdCa(OH)2 powder at 10% and 5% required exposure times of 3 hr and 6 hr, respectively, while BCX powder at a 20% concentration required a 6 hr exposure time, to kill the mentioned bacteria in feces down to below the detectable level (≤ 3.6 log10 CFU/ml). This study confirmed the capacities of FdCa(OH)2 and BCX powders to inactivate bacteria in feces; both are environmentally friendly materials with no risk to human or animal health. This finding helps farmers to properly apply alkaline agents at appropriate concentrations and exposure times on their farms, in order to prevent and control infectious disease outbreaks and to enhance biosecurity. Finally, this finding may help farmers to implement better strategies for infection control on their livestock farms.
Keywords: bacterial inactivation, bioceramic, biosecurity at livestock farms, chicken feces
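For readers unfamiliar with the CFU/ml figures above, the sketch below shows the standard plate-count arithmetic: colonies counted on a plated dilution are scaled back by the dilution factor and the plated volume, and results are compared on a log10 scale. All numbers are hypothetical examples, not the study's counts.

```python
# A minimal sketch of the plate-count arithmetic behind the CFU/ml
# figures: colonies on a plated dilution are scaled back by the dilution
# factor and plated volume. Numbers are hypothetical examples.
import math

def cfu_per_ml(colonies, dilution_factor, plated_volume_ml):
    return colonies * dilution_factor / plated_volume_ml

before = cfu_per_ml(colonies=85, dilution_factor=10**6, plated_volume_ml=0.1)
after  = cfu_per_ml(colonies=40, dilution_factor=10**2, plated_volume_ml=0.1)

print(f"before treatment: {math.log10(before):.1f} log10 CFU/ml")
print(f"after treatment:  {math.log10(after):.1f} log10 CFU/ml")
# the difference in log10 units is the log reduction achieved
```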
Procedia PDF Downloads 440
518 Embodied Neoliberalism and the Mind as Tool to Manage the Body: A Descriptive Study Applied to Young Australian Amateur Athletes
Authors: Alicia Ettlin
Abstract:
Amid the rise of neoliberalism to the leading economic policy model in Western societies in the 1980s, people have started to internalise a neoliberal way of thinking, whereby the human body has become an entity that can and needs to be precisely managed through free yet rational decision-making processes. The neoliberal citizen has consequently become an entrepreneur of the self who is free, independent, rational, productive and responsible for themselves, their health and wellbeing, as well as their appearance. The focus on individuals as entrepreneurs who manage their bodies through the rationally thinking mind has, however, become increasingly criticised for viewing the social actor as ‘disembodied’, as a detached social actor whose powerful mind governs the passive body. On the other hand, the discourse around embodiment seeks to connect rational decision-making processes to the dominant neoliberal discourse, which creates an embodied understanding that the body, just like other areas of people’s lives, can and should be shaped, monitored and managed through cognitive and rational thinking. This perspective offers an understanding of the body, in its connections with the social environment, that reaches beyond the debates around mind-body binary thinking. Hence, following this argument, body management should be thought of neither as solely guided by embodied discourses nor as merely falling into a mind-body dualism, but rather as both, simultaneously and inseparably. The descriptive, qualitative analysis of semi-structured in-depth interviews conducted with young Australian amateur athletes between the ages of 18 and 24 has shown that most participants are interested in measuring and managing their body to create self-knowledge and self-improvement. The participants thereby connected self-improvement to weight loss, muscle gain or simply staying fit and healthy. Self-knowledge refers to body measurements including weight, BMI or body fat percentage. Self-management and self-knowledge, which rely on one another for rational and well-thought-out decisions, are both characteristic values of the neoliberal doctrine. Many participants also connected this neoliberal way of thinking about and looking after the body to rewarding themselves for their discipline, hard work or the achievement of specific body management goals (e.g., eating chocolate for reaching the daily step count goal). A few participants, however, showed resistance against these neoliberal values and, in particular, against the precise monitoring and management of the body with the help of self-tracking devices. Ultimately, however, it seems that most participants have internalised the dominant discourses around self-responsibility and, by association, a sense of duty to discipline their body in normative ways. Even those who indicated their resistance against body work and body management practices that follow neoliberal thinking and measurement systems are aware of, and have internalised, the concept of the rationally operating mind that needs to, or should, decide how to look after the body in terms of health as well as appearance ideals. The discussion of the collected data thereby shows that embodiment and the mind/body dualism constitute two connected, rather than separate or opposing, concepts.
Keywords: dualism, embodiment, mind, neoliberalism
Procedia PDF Downloads 163
517 Experimental Study of Infill Walls with Joint Reinforcement Subjected to In-Plane Lateral Load
Authors: J. Martin Leal-Graciano, Juan J. Pérez-Gavilán, A. Reyes-Salazar, J. H. Castorena, J. L. Rivera-Salas
Abstract:
The experimental results on the global behavior of twelve 1:2-scaled reinforced concrete frames subjected to in-plane lateral load are presented. The main objective was to generate experimental evidence about the use of steel bars within mortar bed joints as shear reinforcement in infill walls. Similar to the Canadian and New Zealand standards, the Mexican code includes specifications for this type of reinforcement. However, these specifications were obtained through experimental studies of load-bearing walls, mainly confined walls. Little information is found in the existing literature about the effects of joint reinforcement on the seismic behavior of masonry infill walls. Consequently, the Mexican code establishes the same equations to estimate the contribution of joint reinforcement for both confined walls and infill walls. Confined masonry construction and a reinforced concrete frame infilled with masonry walls have similar appearances. However, substantial differences exist between these two construction systems, mainly related to the sequence of construction and to how these structures support vertical and lateral loads. To achieve the established objective, ten reinforced concrete frames with masonry infill walls were built and tested in pairs, with both specimens in each pair having identical characteristics, except that one of them included joint reinforcement. The variables between pairs were the type of units, the size of the columns of the frame, and the aspect ratio of the wall. All cases included tie columns and tie beams on the perimeter of the wall to anchor the joint reinforcement. Also, two bare frames with characteristics identical to those of the infilled frames were tested. The purpose was to investigate the effects of the infill wall on the behavior of the system under in-plane lateral load. In addition, the experimental results were compared with the predictions of the Mexican code. All the specimens were tested as cantilevers under reversible cyclic lateral load. To simulate gravity load, a constant vertical load was applied on the top of the columns. The results indicate that the contribution of the joint reinforcement to lateral strength depends on the size of the columns of the frame. Larger columns produce a failure mode that is predominantly a sliding mode. Sliding inhibits the formation of new inclined cracks, which are necessary to activate (deform) the joint reinforcement. Regarding the effects of joint reinforcement on the performance of confined masonry walls, many findings were confirmed for infill walls. This type of reinforcement increases the lateral strength of the wall, produces more distributed cracking, and reduces the width of the cracks. Moreover, it reduces the ductility demand of the system at maximum strength. The prediction of the lateral strength provided by the Mexican code is appropriate in some cases; however, the effect of the size of the columns on the contribution of joint reinforcement needs to be better understood.
Keywords: experimental study, infill wall, infilled frame, masonry wall
Procedia PDF Downloads 175
516 Mapping the Suitable Sites for Food Grain Crops Using Geographical Information System (GIS) and Analytical Hierarchy Process (AHP)
Authors: Md. Monjurul Islam, Tofael Ahamed, Ryozo Noguchi
Abstract:
Progress continues in the fight against hunger, yet an unacceptably large number of people still lack the food they need for an active and healthy life. Bangladesh is one of the rising countries in South Asia, but many people are still food insecure. In the last few years, Bangladesh has made significant achievements in food grain production, but food security from the national to the individual level remains a matter of major concern. Ensuring food security for all is one of the major challenges that Bangladesh faces today, especially the production of rice in flood- and poverty-prone areas. The northern part is more vulnerable than any other part of Bangladesh. To ensure food security, one of the best ways is to increase domestic production. To increase production, it is necessary to secure lands and achieve optimum utilization of resources. One of the measures is to identify the vulnerable and potential areas using Land Suitability Assessment (LSA) to increase rice production in the poverty-prone areas. Therefore, the aim of the study was to identify suitable sites for the production of the food grain crop rice in the poverty-prone areas located in the northern part of Bangladesh. Lack of knowledge of the best combination of factors that suit the production of rice has contributed to the low production. To fulfill the research objective, a multi-criteria analysis was carried out and a suitability map for crop production was produced with the help of a Geographical Information System (GIS) and the Analytical Hierarchy Process (AHP). Primary and secondary data were collected from ground truth information and relevant offices. The suitability levels for each factor were ranked based on the structure of the FAO land suitability classification as: Permanently Not Suitable (N2), Currently Not Suitable (N1), Marginally Suitable (S3), Moderately Suitable (S2) and Highly Suitable (S1). The suitable sites identified using spatial analysis were compared with a recent raster image from Google Earth Pro® to validate the reliability of the suitability analysis. To produce the suitability map for rice farming using GIS and multi-criteria analysis tools, AHP was used to rank the relevant factors, and the resultant weights were used to create the suitability map using the weighted sum overlay tool in ArcGIS 10.3®, yielding the suitability map for rice production in the study area. The weighted overlay found that 22.74% (1337.02 km2) of the study area was highly suitable, while 28.54% (1678.04 km2) was moderately suitable, 14.86% (873.71 km2) was marginally suitable, and 1.19% (69.97 km2) was currently not suitable for rice farming. On the other hand, 32.67% (1920.87 km2) was permanently not suitable, being occupied by settlements, rivers, water bodies and forests. This research provides information at the local level that could be used by farmers to select suitable fields for rice production, and the approach can then be applied to other crops. It will also be helpful for field workers and policy planners who serve in the agricultural sector.
Keywords: AHP, GIS, spatial analysis, land suitability
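To illustrate the AHP step named above, the sketch below derives factor weights from a pairwise comparison matrix via its principal eigenvector and computes Saaty's consistency ratio. The 4x4 matrix, and the four criteria it implies, are hypothetical, not the study's actual factors.

```python
# A minimal sketch of the AHP weighting step: derive priority weights
# from a pairwise comparison matrix via its principal eigenvector and
# check consistency. The 4x4 matrix below (e.g. soil, rainfall,
# elevation, land use) is hypothetical, not the study's data.
import numpy as np

A = np.array([[1,   3,   5,   7],
              [1/3, 1,   3,   5],
              [1/5, 1/3, 1,   3],
              [1/7, 1/5, 1/3, 1]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                      # normalized priority weights

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)
cr = ci / 0.90                    # 0.90 = Saaty's random index for n = 4
print("weights:", w.round(3), "CR =", round(cr, 3))  # CR < 0.1 is acceptable

# the weighted sum overlay then computes: suitability = sum(w_i * criterion_i)
```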
Procedia PDF Downloads 241
515 Waste Burial to the Pressure Deficit Areas in the Eastern Siberia
Authors: L. Abukova, O. Abramova, A. Goreva, Y. Yakovlev
Abstract:
Important executive decisions on stimulating oil and gas production in Eastern Siberia have recently been taken. Eastern Siberia contains unique and large fields of oil, gas, and gas condensate. The Talakan, Koyumbinskoye, Yurubcheno-Tahomskoye, Kovykta, and Chayadinskoye fields are expected to be developed first, which will result in an abrupt increase in the environmental load on the nature of Eastern Siberia. In Eastern Siberia, the introduction of ecological imperatives into hydrocarbon production is still realistic. Underground water movement is one of the most important factors in managing the condition of ecosystems. Oil and gas production is associated with the forced displacement of huge water masses and the mixing of waters of different composition and origin, which determines the extent of anthropogenic impact on water-drive systems and their protective reaction. An extensive hydrogeological system of the depression type has been identified in the pre-salt deposits here, with reservoir pressure decreasing steadily down to the basement. This decrease of the hydrodynamic potential towards the basement, with such a gradient, resulted in the reformation of the fields during the historical (geological) development of the Nepsko-Botuobinskaya anteclise. The depression hydrodynamic systems are characterized by extremely high isolation and can exist only under such closed conditions. The steady nature of water movement, due to a strictly negative gradient of reservoir pressure, makes it quite possible to inject environmentally harmful liquid substances instead of water. Disposal of the most hazardous wastes is most expedient in the deposits of the crystalline basement, in structures distant from oil and gas fields. The storage period for environmentally harmful liquid substances may be estimated on geological time scales, ensuring that they are completely prevented from being released into the environment or the air, even during strong earthquakes. The disposal of chemical and nuclear industry wastes is a matter of special consideration. The existing methods of waste storage and disposal are very expensive, and the methods currently applied for storing nuclear wastes at depths of several meters, even in the most durable containers, constitute a potential danger. The enormous size of the depression system of the Nepsko-Botuobinskaya anteclise makes it possible to readily identify suitable objects at depths below 1500 m, where nuclear wastes could be stored indefinitely without any environmental impact. Thus, the water-drive system of the Nepsko-Botuobinskaya anteclise is an ideal object for the large-volume injection of environmentally harmful liquid substances, even if there are large oil and gas accumulations in the subsurface. The specific geological and hydrodynamic conditions of the system allow hydrocarbons to be produced from the subsurface simultaneously with the disposal of industrial wastes of the oil and gas, mining, chemical, and nuclear industries, without environmental impact.
Keywords: Eastern Siberia, formation pressure, underground water, waste burial
Procedia PDF Downloads 259
514 Using Convolutional Neural Networks to Distinguish Different Sign Language Alphanumerics
Authors: Stephen L. Green, Alexander N. Gorban, Ivan Y. Tyukin
Abstract:
Within the past decade, using Convolutional Neural Networks (CNNs) to create deep learning systems capable of translating sign language into text has been a breakthrough in breaking the communication barrier for deaf-mute people. Conventional research on this subject has been concerned with training the network to recognize the fingerspelling gestures of a given language and produce their corresponding alphanumerics. One problem with the current technology is that images are scarce, with little variation in the gestures presented to the recognition program, often skewed towards single skin tones and hand sizes, which makes a percentage of the population's fingerspelling harder to detect. In addition, current gesture detection programs are trained on only one fingerspelling language, despite there being one hundred and forty-two known variants so far. All of this limits the traditional exploitation of current technologies such as CNNs, due to their large number of required parameters. This work presents an approach that resolves this issue by combining a pretrained legacy AI system for a generic object recognition task with a corrector method to uptrain the legacy network. This is a computationally efficient procedure that does not require large volumes of data, even when covering a broad range of sign languages such as American Sign Language, British Sign Language, and Chinese Sign Language (Pinyin). Implementing recent results on measure concentration, namely the stochastic separation theorem, the AI system is posed as an operator mapping an input in the set of images u ∈ U to an output in a set of predicted class labels q ∈ Q, identifying the alphanumeric that q represents and the language it comes from. These inputs and outputs, along with internal variables z ∈ Z, represent the system's current state, which implies a mapping that assigns an element x ∈ ℝⁿ to the triple (u, z, q). As all xᵢ are i.i.d. vectors drawn from a product measure distribution, over a period of time the AI generates a large set of measurements S, grouped into two categories: the correct predictions M and the incorrect predictions Y. Once the network has made its predictions, a corrector can be applied by centering S and Y, i.e., subtracting their means. The data are then regularized by applying the Kaiser rule to the resulting eigenmatrix, and then whitened, before being split into pairwise, positively correlated clusters. Each of these clusters produces a unique hyperplane, and if any element x falls outside the region bounded by these hyperplanes, it is reported as an error. As a result of this methodology, a self-correcting recognition process is created that can identify fingerspelling from a variety of sign languages and successfully identify the corresponding alphanumeric and the language the gesture originates from, which no other neural network has been able to replicate.
Keywords: convolutional neural networks, deep learning, shallow correctors, sign language
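A minimal sketch of the corrector pipeline described above (centering, an eigenvalue cut in the spirit of the Kaiser rule, whitening, then a separating hyperplane). All data, the single-cluster simplification, and the decision threshold are hypothetical, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in feature vectors harvested from a legacy network:
# M = features of correct predictions, Y = features of errors.
M = rng.normal(0.0, 1.0, size=(500, 64))
Y = rng.normal(0.8, 1.0, size=(40, 64))

S = np.vstack([M, Y])
mu = S.mean(axis=0)                      # center the pooled measurements

# Eigen-decomposition of the covariance; keep components whose
# eigenvalue exceeds the mean (a covariance analogue of the Kaiser rule).
cov = np.cov(S - mu, rowvar=False)
vals, vecs = np.linalg.eigh(cov)
keep = vals > vals.mean()
P = vecs[:, keep] / np.sqrt(vals[keep])  # project and whiten in one step

Mw = (M - mu) @ P
Yw = (Y - mu) @ P

# Single-cluster case: one Fisher-type hyperplane separating the error
# set from the correct predictions (the full method builds one
# hyperplane per positively correlated error cluster).
w = Yw.mean(axis=0) - Mw.mean(axis=0)
w /= np.linalg.norm(w)
threshold = (Yw @ w).min()               # hypothetical decision cut

def flag_error(x):
    """Report True when a new measurement lands on the error side."""
    return float(((x - mu) @ P) @ w) >= threshold
```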
Procedia PDF Downloads 100
513 Air Pollution on Stroke in Shenzhen, China: A Time-Stratified Case Crossover Study Modified by Meteorological Variables
Authors: Lei Li, Ping Yin, Haneen Khreis
Abstract:
Stroke was the second leading cause of death and the third leading cause of death and disability combined worldwide in 2019. Given the significant role of environmental factors in stroke development and progression, it is essential to investigate the effect of air pollution on stroke occurrence while considering the modifying effects of meteorological variables. This study aimed to evaluate the association between short-term exposure to air pollution and the incidence of stroke subtypes in Shenzhen, China, and to explore the potential interactions of meteorological factors with air pollutants. The study analyzed data from January 1, 2006, to December 31, 2014, covering 88,214 cases of ischemic stroke and 30,433 cases of hemorrhagic stroke among residents of Shenzhen. Using a time-stratified case-crossover design with conditional quasi-Poisson regression, the study estimated the percentage changes in stroke morbidity associated with short-term exposure to nitrogen dioxide (NO₂), sulfur dioxide (SO₂), particulate matter less than 10 μm in aerodynamic diameter (PM10), carbon monoxide (CO), and ozone (O₃). A five-day moving average of air pollution was applied to capture the cumulative effects of air pollution. The estimates were further stratified by sex, age, education level, and season. The additive and multiplicative interactions between air pollutants and meteorological variables were assessed by the relative excess risk due to interaction (RERI) and by adding an interaction term to the main model, respectively. The study found that NO₂ was positively associated with ischemic stroke occurrence throughout the year and in the cold season (November through April), with a stronger effect observed among men. Each 10 μg/m³ increment in the five-day moving average of NO₂ was associated with a 2.38% (95% confidence interval: 1.36% to 3.41%) increase in the risk of ischemic stroke over the whole year and a 3.36% (2.04% to 4.69%) increase in the cold season. A harmful effect of CO on ischemic stroke was observed only in the cold season, with each 1 mg/m³ increment in the five-day moving average of CO increasing the risk by 12.34% (3.85% to 21.51%). There was no statistically significant additive interaction between individual air pollutants and temperature or relative humidity, as assessed by the RERI. The interaction term in the model showed a multiplicative antagonistic effect between NO₂ and temperature (p-value=0.0268). For hemorrhagic stroke, no evidence of an effect of any individual air pollutant was found in the whole population. However, the RERI indicated statistically significant additive and multiplicative interactions of temperature with the effects of PM10 and O₃ on hemorrhagic stroke onset, so this null finding should be interpreted with caution. The study suggests that environmental NO₂ and CO might increase the morbidity of ischemic stroke, particularly during the cold season. These findings could help inform policy decisions aimed at reducing air pollution levels to prevent stroke and other health conditions. The study also provides valuable insight into the interaction between air pollution and meteorological variables, underscoring the need for further research into the complex relationship between environmental factors and health.
Keywords: air pollution, meteorological variables, interactive effect, seasonal pattern, stroke
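A hedged sketch of the design described above: a time-stratified analysis on a five-day moving average, fitted here as a Poisson regression with stratum fixed effects, which for Poisson models coincides with the conditional (within-stratum) estimate. The daily counts, column names, and pollutant series are invented stand-ins, not the study data:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
days = pd.date_range("2006-01-01", "2014-12-31", freq="D")
df = pd.DataFrame({
    "date": days,
    "no2": rng.gamma(4.0, 10.0, len(days)),   # stand-in NO2, µg/m³
    "cases": rng.poisson(27, len(days)),      # stand-in daily strokes
})

# Five-day moving average of NO2 (lags 0-4), as in the study design.
df["no2_ma5"] = df["no2"].rolling(5).mean()

# Time-stratified design: each year x month x day-of-week is a stratum.
df["stratum"] = (df.date.dt.year.astype(str) + "-"
                 + df.date.dt.month.astype(str) + "-"
                 + df.date.dt.dayofweek.astype(str))

fit = smf.glm("cases ~ no2_ma5 + C(stratum)", data=df.dropna(),
              family=sm.families.Poisson()).fit()

# Percentage change in risk per 10 µg/m³ increment, as reported above.
beta = fit.params["no2_ma5"]
print(f"{(np.exp(10 * beta) - 1) * 100:.2f}% per 10 µg/m³ NO2")
```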
Procedia PDF Downloads 88
512 One-Stage Conversion of Adjustable Gastric Band to One-Anastomosis Gastric Bypass Versus Sleeve Gastrectomy: A Single-Center Experience With a Short and Mid-term Follow-up
Authors: Basma Hussein Abdelaziz Hassan, Kareem Kamel, Philobater Bahgat Adly Awad, Karim Fahmy
Abstract:
Background: The laparoscopic adjustable gastric band was one of the most commonly performed bariatric procedures in the last 8 years. However, the failure rate was very high, with approximately 60% of patients not achieving the desired weight loss, and most of these patients sought revisional surgery. We therefore compared two of the most common weight-loss procedures performed today: laparoscopic sleeve gastrectomy and laparoscopic one-anastomosis gastric bypass. Objective: To compare weight loss and postoperative outcomes among patients undergoing conversion to laparoscopic one-anastomosis gastric bypass (cOAGB) versus laparoscopic sleeve gastrectomy (cSG) after a failed laparoscopic adjustable gastric band (LAGB). Patients and Methods: A prospective cohort study was conducted from June 2020 to June 2022 at a single medical center and included 77 patients undergoing single-stage conversion to cOAGB or cSG. Patients were reassessed for weight loss, comorbidity remission, and postoperative complications at 6, 12, and 18 months. Results: Of the 77 patients with failed LAGB, Group I comprised 43 patients who underwent cOAGB and Group II comprised 34 patients who underwent cSG. The mean age was 38.58 years in the cOAGB group and 39.47 years in the cSG group (p=0.389). Of the 77 patients, 10 (12.99%) were male and 67 (87.01%) were female. The mean body mass index (BMI) was 41.06 in the cOAGB group and 40.5 in the cSG group (p=0.042). The two groups were compared postoperatively with respect to excess body weight loss percentage (EBWL%), BMI, and comorbidity remission over 18 months of follow-up. BMI was calculated at three postoperative visits: at 6 months, the mean BMI was 34.34 in the cOAGB group and 35.47 in the cSG group (p=0.229); at 12 months, 32.69 and 33.79, respectively (p=0.2); and at 18 months, 30.02 and 31.79, respectively (p=0.001). The differences were not statistically significant at 6 and 12 months (p=0.229 and p=0.2, respectively); however, at 18 months, patients who underwent cOAGB achieved a significantly lower BMI than those who underwent cSG (p=0.005). Regarding EBWL%, there was a statistically significant difference between the two groups: at 6 months, the mean EBWL% was 35.9% in the cOAGB group and 33.14% in the cSG group; at 12 months, 52.35 and 48.76, respectively (p=0.045); and at 18 months, 62.06 ±8.68 and 55.58 ±10.87, respectively (p=0.005). Regarding comorbidity remission, diabetes mellitus remission was found in 22 (88%) patients in the cOAGB group and 10 (71.4%) patients in the cSG group (p=0.225); hypertension remission in 20 (80%) and 14 (82.4%) patients, respectively (p=1); dyslipidemia remission in 27 (87%) and 17 (70%) patients, respectively (p=0.18); and GERD remission in 15 (88.2%) and 6 (60%) patients, respectively (p=0.47). There were no statistically significant differences between the two groups in the postoperative outcome data.
Conclusion: This study suggests that conversion of LAGB to either cOAGB or cSG can feasibly be performed as a single-stage operation. cOAGB achieved significantly better weight-loss results than cSG at mid-term follow-up; however, there was no significant difference in postoperative complications or in the resolution of comorbidities. cOAGB could therefore provide a reliable alternative, but this needs to be substantiated in future long-term studies.
Keywords: laparoscopic, gastric banding, one-anastomosis gastric bypass, sleeve gastrectomy, revisional surgery, weight loss
Procedia PDF Downloads 62
511 Artificial Neural Network Approach for GIS-Based Soil Macro-Nutrients Mapping
Authors: Shahrzad Zolfagharnassab, Abdul Rashid Mohamed Shariff, Siti Khairunniza Bejo
Abstract:
Conventional methods for soil nutrient mapping are based on laboratory tests of samples obtained from surveys. The time and cost involved in gathering and analyzing soil samples are the reasons researchers use Predictive Soil Mapping (PSM). PSM can be defined as the development of a numerical or statistical model of the relationship among environmental variables and soil properties, which is then applied to a geographic database to create a predictive map. Kriging is a group of geostatistical techniques used to spatially interpolate values at unobserved locations from values observed at nearby locations. The main problem with using kriging as an interpolator is that it is excessively data-dependent and requires a large number of closely spaced data points; hence, there is a need to minimize the number of data points without sacrificing the accuracy of the results. In this paper, an Artificial Neural Network (ANN) scheme was used to predict macronutrient values at unsampled points. ANN has become a popular prediction tool, as it eliminates certain difficulties in soil property prediction, such as non-linear relationships and non-normality. Back-propagation multilayer feed-forward network structures were used to predict nitrogen, phosphorus, and potassium values in the soil of the study area. A limited number of samples was used in the training, validation, and testing phases of the ANN (pattern recognition structures) to classify soil properties, and the trained network was used for prediction. The soil analysis results of samples collected from the soil survey of block C of Sawah Sempadan, Tanjung Karang rice irrigation project, Selangor, Malaysia, were used. Soil maps were produced by the kriging method using 236 samples (values) that combined actual values (obtained from real samples) and virtual values (neural-network-predicted values). For each macronutrient element, three types of maps were generated, with 118 actual and 118 virtual values, 59 actual and 177 virtual values, and 30 actual and 206 virtual values, respectively. To evaluate the performance of the proposed method, for each macronutrient element a base map using 236 actual samples and test maps using 118, 59, and 30 actual samples, respectively, were produced by the kriging method. A set of parameters was defined to measure the similarity of the maps generated with the proposed method, termed the sample reduction method. The results show that the maps generated through the sample reduction method were more accurate than the corresponding test maps produced from a smaller number of real samples. For example, nitrogen maps produced from 118, 59, and 30 real samples showed 78%, 62%, and 41% similarity, respectively, with the base map (236 samples), and the sample reduction method increased the similarity to 87%, 77%, and 71%, respectively. Hence, this method can reduce the number of real samples, substituting ANN-predicted samples, while achieving the specified level of accuracy.
Keywords: artificial neural network, kriging, macro nutrient, pattern recognition, precision farming, soil mapping
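A hedged sketch of the prediction step: a back-propagation multilayer feed-forward network regressing macronutrient values on sample coordinates. The feature set, network size, and data are assumptions for illustration, not the study's configuration:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)

# Hypothetical surveyed points: (easting, northing) -> (N, P, K).
xy_sampled = rng.uniform(0, 1000, size=(118, 2))
npk_sampled = rng.uniform([0.1, 5, 40], [0.4, 60, 300], size=(118, 3))

# Back-propagation multilayer feed-forward network; one multi-output
# network here, though separate networks per nutrient are also possible.
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(20, 10), max_iter=5000,
                 random_state=0),
)
model.fit(xy_sampled, npk_sampled)

# Predict "virtual" samples at unsampled locations; these would be
# pooled with the real samples before kriging interpolation.
xy_virtual = rng.uniform(0, 1000, size=(118, 2))
npk_virtual = model.predict(xy_virtual)
print(npk_virtual[:3].round(2))
```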
Procedia PDF Downloads 70
510 A Case Study on the Development and Application of Media Literacy Education Program Based on Circular Learning
Authors: Kim Hyekyoung, Au Yunkyung
Abstract:
As media play an increasingly important role in our lives, the age at which media use begins is getting younger worldwide. Young children in particular are exposed to media at an early age, making early childhood media literacy education an essential task. However, most existing early childhood media literacy education programs focus solely on teaching children how to use media, and their practical implementation and application remain challenging. Therefore, this study aims to develop a play-based early childhood media literacy education program utilizing topic-based media content and to explore the program's potential application and its impact on young children's media literacy learning. Based on a theoretical and literature review of media literacy education, an analysis of existing educational programs, and a survey on the current status of media literacy education for preschool children and teachers' perceptions of it, this study developed a media literacy education program for preschool children that considers the components of media literacy: understanding media characteristics, self-regulation, self-expression, critical understanding, ethical norms, and social communication. To verify the program's effectiveness, 20 five-year-old children from M Kindergarten in C City were chosen as participants, and the program was implemented from March 28 to July 4, 2022, once a week for a total of 7 sessions. The program was developed based on Gallenstain's (2003) iterative learning model (participation-exploration-explanation-extension-evaluation). A repeated-measures analysis of variance was conducted to explore quantitative changes before and after the program, and qualitative analysis was employed to examine changes observed during the process. After the program, media literacy levels significantly improved across all components: understanding media characteristics, self-regulation, self-expression, critical understanding, ethical norms, and social communication. In terms of observed process changes, the children learned about various topics, expressed their thoughts, and improved their ability to communicate with others using media content. These results indicate that the recursive-learning-based early childhood media literacy education program developed in this study can be applied effectively to young children's media literacy education and can help raise their media literacy levels, going beyond teaching children how to use media to fostering their ability to use media safely and effectively in their media environment. The findings underscore the importance of developing and implementing media literacy education programs and can contribute to empowering young children in their media environment. At the same time, enhancing young children's media literacy and creating a safe media environment will require diverse content and methodologies, along with the continuous development and evaluation of education programs.
Keywords: young children, media literacy, recursive learning, education program
Procedia PDF Downloads 77
509 Deficient Multisensory Integration with Concomitant Resting-State Connectivity in Adult Attention Deficit/Hyperactivity Disorder (ADHD)
Authors: Marcel Schulze, Behrem Aslan, Silke Lux, Alexandra Philipsen
Abstract:
Objective: Patients with Attention Deficit/Hyperactivity Disorder (ADHD) often report being flooded by sensory impressions. Studies investigating sensory processing show hypersensitivity to sensory inputs across the senses in children and adults with ADHD. The auditory modality in particular is affected by deficient acoustic inhibition and modulation of signals. While studying unimodal signal processing is relevant and well suited to a controlled laboratory environment, everyday situations are multimodal: a complex interplay of the senses is necessary to form a unified percept. To achieve this, the unimodal sensory modalities are bound together in a process called multisensory integration (MI). In the current study, we investigate MI in an adult ADHD sample using the McGurk effect, a well-known illusion in which incongruent speech-like phonemes lead, in the case of successful integration, to a newly perceived phoneme via late top-down attentional allocation. In ADHD, neuronal dysregulation at rest, e.g., aberrant within- or between-network functional connectivity, may also account for difficulties in integrating across the senses. The current study therefore includes resting-state functional connectivity to investigate a possible relation between deficient network connectivity and the ability to integrate stimuli. Method: Twenty-five ADHD patients (6 females; age: 30.08 (SD: 9.3) years) and twenty-four healthy controls (9 females; age: 26.88 (SD: 6.3) years) were recruited. MI was examined using the McGurk effect, in which, in the case of successful MI, incongruent speech-like phonemes presented to the visual and auditory modalities lead to the perception of a new phoneme. The Mann-Whitney U test was applied to assess statistical differences between groups. Echo-planar resting-state functional MRI was acquired on a 3.0 Tesla Siemens Magnetom MR scanner, and a seed-to-voxel analysis was realized using the CONN toolbox. Results: Susceptibility to the McGurk effect was significantly lower in ADHD patients (ADHD Mdn: 5.83%, controls Mdn: 44.2%, U=160.5, p=0.022, r=-0.34). When ADHD patients did integrate phonemes, reaction times were significantly longer (ADHD Mdn: 1260 ms, controls Mdn: 582 ms, U=41.0, p<.001, r=-0.56). In functional connectivity, the medio-temporal gyrus (seed) was negatively associated with the primary auditory cortex, inferior frontal gyrus, precentral gyrus, and fusiform gyrus. Conclusion: MI seems to be deficient in ADHD patients for stimuli that require top-down attentional allocation. This finding is supported by stronger functional connectivity from unimodal sensory areas to polymodal MI convergence zones for complex stimuli in ADHD patients.
Keywords: attention-deficit hyperactivity disorder, audiovisual integration, McGurk-effect, resting-state functional connectivity
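To make the reported statistics concrete, here is a hedged sketch of a Mann-Whitney U comparison with the effect size r = Z/√N; the data below are synthetic stand-ins, not the study's measurements:

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(7)

# Synthetic McGurk susceptibility scores (% illusory percepts).
adhd = rng.beta(1.2, 9.0, size=25) * 100
controls = rng.beta(4.0, 5.0, size=24) * 100

u, p = mannwhitneyu(adhd, controls, alternative="two-sided")

# Effect size r = Z / sqrt(N), with Z from the normal approximation.
n1, n2 = len(adhd), len(controls)
mu = n1 * n2 / 2
sigma = np.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
z = (u - mu) / sigma
r = z / np.sqrt(n1 + n2)

print(f"U={u:.1f}, p={p:.3f}, r={r:.2f}")
```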
Procedia PDF Downloads 127
508 Multicomponent Positive Psychology Intervention for Health Promotion of Retirees: A Feasibility Study
Authors: Helen Durgante, Mariana F. Sparremberger, Flavia C. Bernardes, Debora D. DellAglio
Abstract:
Health promotion programmes for retirees, based on Positive Psychology perspectives for the development of strengths and virtues, demand broadened empirical investigation in Brazil. In evidence-based applied research, it is suggested that feasibility studies be conducted prior to efficacy trials of an intervention, in order to identify and rectify possible faults in its design and implementation. The aim of this study was to evaluate the feasibility of a multicomponent Positive Psychology programme for the health promotion of retirees, based on Cognitive Behavioural Therapy and Positive Psychology perspectives. The programme comprised six weekly group sessions (two hours each) encompassing strengths such as Values and self-care, Optimism, Empathy, Gratitude, Forgiveness, and Meaning of life and work. The feasibility criteria evaluated were: demand, acceptability, satisfaction with the programme and with the moderator, comprehension/generalization of contents, evaluation of the moderator (social skills and integrity/fidelity), adherence, and programme implementation. Overall, 11 retirees (F=11), age range 54-75, from the metropolitan region of Porto Alegre-RS, Brazil, took part in the study. The instruments used were: a qualitative admission questionnaire; the moderator's field diary; the Programme Evaluation Form, assessing participants' satisfaction with the programme and with the moderator (a six-item 4-point Likert scale) and comprehension/generalization of contents (a three-item 4-point Likert scale); and the Observers' Evaluation Form, assessing the moderator's social skills (a five-item 4-point Likert scale), integrity/fidelity (a ten-item 4-point Likert scale), and adherence (a nine-item 5-point Likert scale). Qualitative data were analyzed using content analysis. Descriptive statistics and intraclass correlation coefficients (ICC) were used for quantitative data and inter-rater reliability analysis. The results revealed high demand (N = 55 interested people) and acceptability (n = 10 concluded the programme, with an overall attendance rate of 88.3%), satisfaction with the programme and with the moderator (M = 3.76, SD = .34), and participants' self-reported comprehension/generalization of the contents provided in the programme (M = 2.82, SD = .51). For the moderator's social skills (M = 3.93, SD = .40, ICC = .752 [95% CI = .429-.919]), integrity/fidelity (M = 3.93, SD = .31, ICC = .936 [95% CI = .854-.981]), and participants' adherence (M = 4.90, SD = .29, ICC = .906 [95% CI = .783-.969]), evaluated by two independent observers present in each session of the programme, the descriptive and intraclass correlation results were considered adequate. Structural changes were introduced into the intervention design and implementation methods, and items were removed from questionnaires and evaluation forms. The results obtained were satisfactory, allowing changes to be made ahead of further efficacy trials of the programme. The results are discussed taking cultural and contextual demands in Brazil into account.
Keywords: feasibility study, health promotion, positive psychology intervention, programme evaluation, retirees
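A hedged sketch of the inter-rater reliability computation for two independent observers rating each session; the pingouin call shown is one common way to obtain ICCs with confidence intervals, and the data frame here is invented:

```python
import pandas as pd
import pingouin as pg

# Invented ratings: two observers score integrity/fidelity per session.
df = pd.DataFrame({
    "session":  [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6],
    "observer": ["A", "B"] * 6,
    "score":    [3.9, 4.0, 3.8, 3.9, 4.0, 3.9,
                 3.7, 3.8, 3.9, 4.0, 4.0, 4.0],
})

icc = pg.intraclass_corr(data=df, targets="session",
                         raters="observer", ratings="score")
# The table reports ICC1..ICC3k with 95% confidence intervals,
# matching the "ICC = value [95% CI]" style quoted in the abstract.
print(icc[["Type", "ICC", "CI95%"]])
```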
Procedia PDF Downloads 195
507 Temporal Estimation of Hydrodynamic Parameter Variability in Constructed Wetlands
Authors: Mohammad Moezzibadi, Isabelle Charpentier, Adrien Wanko, Robert Mosé
Abstract:
The calibration of hydrodynamic parameters for subsurface constructed wetlands (CWs) is a sensitive process, since highly non-linear equations are involved in unsaturated flow modeling. CWs are engineered systems designed to favour natural treatment processes involving wetland vegetation, soil, and their microbial flora. Their significant efficiency at reducing the ecological impact of urban runoff has recently been proved in the field. Numerical flow modeling in a vertical, variably saturated CW is carried out here by implementing the Richards model by means of a mixed hybrid finite element method (MHFEM), which is particularly well adapted to the simulation of heterogeneous media, together with the van Genuchten-Mualem parametrization. For validation purposes, the MHFEM results were compared to those of HYDRUS (a software package based on a finite element discretization). As the van Genuchten-Mualem soil hydrodynamic parameters depend on water content, their estimation is the subject of considerable experimental and numerical study. In particular, the sensitivity analysis performed with respect to the van Genuchten-Mualem parameters reveals a predominant influence of the shape parameters α and n and of the saturated conductivity of the filter on the piezometric heads, during both saturation and desaturation. Modeling issues arise when the soil reaches oven-dry conditions. Particular attention should also be paid to boundary condition modeling (surface ponding or evaporation) in order to handle different sequences of rainfall-runoff events. Proper parameter identification would require large field datasets. As these are usually not available, notably due to the randomness of storm events, we propose a simple, robust, and low-cost numerical method for the inverse modeling of the soil hydrodynamic properties. Among the available methods, the variational data assimilation technique introduced by Le Dimet and Talagrand is applied. To that end, the technique is implemented by applying automatic differentiation (AD) to augment the computer codes with derivative computations; note that very little effort is needed to obtain the differentiated code using the on-line Tapenade AD engine. Field data were collected over several months for a three-layered CW located in Strasbourg (Alsace, France) at the water's edge of the urban stream Ostwaldergraben. Identification experiments are conducted by comparing measured and computed piezometric heads by means of a least-squares objective function. The temporal variability of the hydrodynamic parameters is then assessed and analyzed.
Keywords: automatic differentiation, constructed wetland, inverse method, mixed hybrid FEM, sensitivity analysis
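For reference, the van Genuchten-Mualem parametrization mentioned above can be written compactly; a minimal sketch follows, where the parameter values are illustrative, not the Ostwaldergraben calibration:

```python
import numpy as np

def van_genuchten_mualem(h, theta_r, theta_s, alpha, n, K_s):
    """Water content theta(h) and conductivity K(h) for pressure head h [m].

    van Genuchten retention combined with the Mualem pore-connectivity
    model (tortuosity L = 0.5); h < 0 denotes unsaturated conditions.
    """
    m = 1.0 - 1.0 / n
    h = np.asarray(h, dtype=float)
    # Effective saturation Se in [0, 1].
    Se = np.where(h < 0, (1.0 + np.abs(alpha * h) ** n) ** (-m), 1.0)
    theta = theta_r + (theta_s - theta_r) * Se
    K = K_s * np.sqrt(Se) * (1.0 - (1.0 - Se ** (1.0 / m)) ** m) ** 2
    return theta, K

# Illustrative sandy-loam-like values (assumed, not site-calibrated).
theta, K = van_genuchten_mualem(h=-1.0, theta_r=0.065, theta_s=0.41,
                                alpha=7.5, n=1.89, K_s=1.2e-5)
print(theta, K)
```

The strong non-linearity of Se in α and n is what makes these shape parameters dominate the sensitivity of the piezometric heads, as noted in the abstract.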
Procedia PDF Downloads 164
506 3D Classification Optimization of Low-Density Airborne Light Detection and Ranging Point Cloud by Parameters Selection
Authors: Baha Eddine Aissou, Aichouche Belhadj Aissa
Abstract:
Light detection and ranging (LiDAR) is an active remote sensing technology used for several applications. Airborne LiDAR is becoming an important technology for the acquisition of highly accurate, dense point clouds. The classification of airborne laser scanning (ALS) point clouds is a very important task that still remains a real challenge for many scientists. The support vector machine (SVM) is one of the most used statistical learning algorithms based on kernels. SVM is a non-parametric method, and it is recommended in cases where the data distribution cannot be well modeled by a standard parametric probability density function. Using a kernel, it performs robust non-linear classification of samples. The data are rarely linearly separable; SVMs are able to map the data into a higher-dimensional space in which they become linearly separable, while performing all the computations in the original space. This is one of the main reasons that SVMs are well suited for high-dimensional classification problems. Only a few training samples, called support vectors, are required. SVM has also shown its potential to cope with uncertainty in data caused by noise and fluctuation, and it is computationally efficient compared to several other methods. Such properties are particularly suited to remote sensing classification problems and explain their recent adoption. In this poster, the SVM classification of ALS LiDAR data is proposed. Firstly, connected component analysis is applied to cluster the point cloud. Secondly, the resulting clusters are fed into the SVM classifier. The radial basis function (RBF) kernel is used because of the small number of parameters (C and γ) that need to be chosen, which decreases the computation time. To optimize the classification rates, parameter selection is explored: it consists of finding the parameters (C and γ) leading to the best overall accuracy using grid search and 5-fold cross-validation. The exploited LiDAR point cloud is provided by the German Society for Photogrammetry, Remote Sensing, and Geoinformation. The ALS data used are characterized by a low density (4-6 points/m²) and cover an urban area located in residential parts of the city of Vaihingen in southern Germany. The ground class and three other classes belonging to roof superstructures are considered, i.e., a total of 4 classes. The training and test sets were selected randomly several times. The obtained results demonstrate that parameter selection can orient the search within a restricted interval of (C, γ) that can be further explored, but it does not systematically lead to the optimal rates. The SVM classifier with selected hyper-parameters is compared with the classifiers most used in the literature for LiDAR data: random forest, AdaBoost, and decision trees. The comparison showed the superiority of the SVM classifier using parameter selection for LiDAR data over the other classifiers.
Keywords: classification, airborne LiDAR, parameters selection, support vector machine
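A hedged sketch of the (C, γ) selection step with an RBF kernel, grid search, and 5-fold cross-validation; the per-point features, grid values, and labels below are stand-ins, not the Vaihingen setup:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(3)

# Stand-in per-point features (e.g., height, echo count, planarity)
# and the 4 labels (ground + three roof-superstructure classes).
X = rng.normal(size=(1000, 6))
y = rng.integers(0, 4, size=1000)

# RBF-kernel SVM; only C and gamma need to be selected.
grid = {
    "svc__C": [0.1, 1, 10, 100],
    "svc__gamma": [1e-3, 1e-2, 1e-1, 1],
}
search = GridSearchCV(
    make_pipeline(StandardScaler(), SVC(kernel="rbf")),
    param_grid=grid,
    cv=5,                 # 5-fold cross-validation, as in the abstract
    scoring="accuracy",   # overall accuracy as the selection criterion
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```

In practice, the best cell of a coarse grid would then be refined with a finer grid over the restricted (C, γ) interval, as the abstract suggests.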
Procedia PDF Downloads 147
505 Optimization of Structures with Mixed Integer Non-linear Programming (MINLP)
Authors: Stojan Kravanja, Andrej Ivanič, Tomaž Žula
Abstract:
This contribution focuses on structural optimization in civil engineering using mixed integer non-linear programming (MINLP). MINLP is a versatile method that can handle continuous and discrete optimization variables simultaneously: continuous variables are used to optimize parameters such as dimensions, stresses, masses, or costs, while discrete variables represent binary decisions that determine the presence or absence of structural elements within a structure and also select discrete materials and standard sections. The optimization process is divided into three main steps. First, a mechanical superstructure is generated, comprising a variety of topology, material, and dimensional alternatives. Next, a MINLP model is formulated to encapsulate the optimization problem. Finally, an optimal solution is sought in the direction of the defined objective function while respecting the structural constraints. The economic (material and labor costs) or mass objective function of the structure is subjected to the constraints known from structural analysis. These constraints include equations for the calculation of internal forces and deflections, as well as equations for the dimensioning of structural components (in accordance with the Eurocode standards). Given the complex, non-convex, and highly non-linear nature of optimization problems in civil engineering, the Modified Outer-Approximation/Equality-Relaxation (OA/ER) algorithm is applied. This algorithm alternately solves non-linear programming (NLP) subproblems and mixed-integer linear programming (MILP) main problems, gradually refining the solution space towards the optimal solution. The NLP corresponds to the continuous optimization of parameters (with fixed topology, discrete materials, and standard dimensions, all determined in the previous MILP), while the MILP involves a global approximation to the superstructure of alternatives, in which a new topology and new materials and standard dimensions are determined. The optimization of a convex problem is stopped when the MILP solution can no longer improve on the best NLP solution; otherwise, it is terminated when the NLP solution can no longer be improved. While the OA/ER algorithm, like all other algorithms, does not guarantee global optimality in the presence of non-convex functions, various modifications, including convexity tests, are implemented in OA/ER to mitigate these difficulties. The effectiveness of the proposed MINLP approach is demonstrated by its application to various structural optimization tasks, such as the mass optimization of steel buildings, the cost optimization of timber halls, composite floor systems, etc., for which dedicated optimization models have been developed. The MINLP optimizations, facilitated by the user-friendly software package MIPSYN, provide insight into mass- or cost-optimal solutions, optimal structural topologies, and optimal material and standard cross-section choices, confirming MINLP as a valuable method for the optimization of structures in civil engineering.
Keywords: MINLP, mixed-integer non-linear programming, optimization, structures
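As a hedged, toy-scale illustration of a structural MINLP (not the MIPSYN formulation): a binary variable switches a member on or off while a continuous variable sizes it. The sketch uses Pyomo with the MindtPy solver, whose outer-approximation strategy mirrors the alternation of NLP subproblems and MILP master problems described above; it assumes glpk and ipopt are installed, and all numbers are invented:

```python
from pyomo.environ import (ConcreteModel, Var, Binary, NonNegativeReals,
                           Objective, Constraint, SolverFactory, minimize)

m = ConcreteModel()

# Hypothetical two-member sketch: y_i selects member i, a_i is its
# cross-sectional area (cm²); all coefficients are illustrative only.
m.y1 = Var(domain=Binary)
m.y2 = Var(domain=Binary)
m.a1 = Var(domain=NonNegativeReals, bounds=(0, 50))
m.a2 = Var(domain=NonNegativeReals, bounds=(0, 50))

# Mass-type objective: member mass grows non-linearly with area.
m.cost = Objective(expr=7.85 * (m.a1**1.2 + m.a2**1.2), sense=minimize)

# Capacity constraint: the selected members must jointly carry the load.
m.capacity = Constraint(expr=20.0 * m.a1 + 12.0 * m.a2 >= 300.0)

# Logical links (big-M style): an unselected member has zero area.
m.link1 = Constraint(expr=m.a1 <= 50 * m.y1)
m.link2 = Constraint(expr=m.a2 <= 50 * m.y2)

# Outer approximation: alternate NLP subproblems and MILP masters.
SolverFactory("mindtpy").solve(m, strategy="OA",
                               mip_solver="glpk", nlp_solver="ipopt")
print(m.y1.value, m.y2.value, m.a1.value, m.a2.value)
```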
Procedia PDF Downloads 46
504 Validation of Mapping Historical Linked Data to International Committee for Documentation (CIDOC) Conceptual Reference Model Using Shapes Constraint Language
Authors: Ghazal Faraj, András Micsik
Abstract:
Shapes Constraint Language (SHACL), a World Wide Web Consortium (W3C) language, expresses well-defined conditions as RDF graphs called "shape graphs". These shape graphs validate other Resource Description Framework (RDF) graphs, which are called "data graphs". The structural features of SHACL permit generating a variety of conditions to evaluate string-matching patterns, value types, and other constraints. Moreover, the SHACL framework supports high-level validation by expressing more complex conditions in languages such as the SPARQL Protocol and RDF Query Language (SPARQL). SHACL consists of two parts: SHACL Core and SHACL-SPARQL. SHACL Core includes the shapes that cover the most frequent constraint components, while SHACL-SPARQL is an extension that allows SHACL to express more complex, customized constraints. Validating the efficacy of dataset mappings is an essential component of data reconciliation mechanisms, as enhancing the linking of different datasets is an ongoing process. The conventional validation methods are a semantic reasoner and SPARQL queries: the former checks formalization errors and data type inconsistencies, while the latter detects data contradictions. After executing SPARQL queries, the retrieved information needs to be checked manually by an expert; this methodology is time-consuming and inaccurate, as it does not test the mapping model comprehensively. There is therefore a serious need for a new methodology that covers all validation aspects of linking and mapping diverse datasets. Our goal is to devise a new approach that achieves optimal validation outcomes. The first step towards this goal is implementing SHACL to validate the mapping between the International Committee for Documentation (CIDOC) conceptual reference model (CRM) and one of its ontologies. To initiate this project successfully, a thorough understanding of both the source and target ontologies was required. Subsequently, the proper environment to run SHACL and its shape graphs was determined. As a case study, we applied SHACL to a CIDOC-CRM dataset after running the Pellet reasoner via the Protégé program. The applied validation falls into multiple categories: a) data type validation, which checks whether the source data are mapped to the correct data type, for instance, whether a birthdate is typed as xsd:dateTime and linked to a Person entity via the crm:P82a_begin_of_the_begin property; and b) data integrity validation, which detects inconsistent data, for instance, by inspecting whether a person's birthdate occurred before the creation dates of any linked events. The expected results of our work are: 1) highlighting validation techniques and categories, and 2) selecting the most suitable techniques for the various categories of validation tasks. The next step is to establish a comprehensive validation model and to generate SHACL shapes automatically.
Keywords: SHACL, CIDOC-CRM, SPARQL, validation of ontology mapping
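A hedged sketch of the data-type check described in example (a), using rdflib and pySHACL; the mini data graph, the E67 Birth modelling, and the shape are invented for illustration:

```python
from rdflib import Graph
from pyshacl import validate

# Tiny invented data graph: a birth event carrying
# crm:P82a_begin_of_the_begin with a deliberately wrong datatype.
data = Graph().parse(data="""
@prefix crm: <http://www.cidoc-crm.org/cidoc-crm/> .
@prefix ex:  <http://example.org/> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

ex:birth1 a crm:E67_Birth ;
    crm:P82a_begin_of_the_begin "1879-03-14"^^xsd:date .
""", format="turtle")

# Shape graph: P82a_begin_of_the_begin values must be xsd:dateTime.
shapes = Graph().parse(data="""
@prefix crm: <http://www.cidoc-crm.org/cidoc-crm/> .
@prefix ex:  <http://example.org/> .
@prefix sh:  <http://www.w3.org/ns/shacl#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

ex:BirthShape a sh:NodeShape ;
    sh:targetClass crm:E67_Birth ;
    sh:property [
        sh:path crm:P82a_begin_of_the_begin ;
        sh:datatype xsd:dateTime ;
    ] .
""", format="turtle")

conforms, _, report = validate(data, shacl_graph=shapes)
print(conforms)   # False: the literal is xsd:date, not xsd:dateTime
print(report)
```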
Procedia PDF Downloads 253