Search results for: mobile game based learning
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 32518

20008 A Collective Intelligence Approach to Safe Artificial General Intelligence

Authors: Craig A. Kaplan

Abstract:

If AGI proves to be a “winner-take-all” scenario where the first company or country to develop AGI dominates, then the first AGI must also be the safest. The safest, and fastest, path to Artificial General Intelligence (AGI) may be to harness the collective intelligence of multiple AI and human agents in an AGI network. This approach has roots in seminal ideas from four of the scientists who founded the field of Artificial Intelligence: Allen Newell, Marvin Minsky, Claude Shannon, and Herbert Simon. Extrapolating key insights from these founders of AI, and combining them with the work of modern researchers, results in a fast and safe path to AGI. The seminal ideas discussed are: 1) Society of Mind (Minsky), 2) Information Theory (Shannon), 3) Problem Solving Theory (Newell & Simon), and 4) Bounded Rationality (Simon). Society of Mind describes a collective intelligence approach that can be used with AI and human agents to create an AGI network. Information theory helps address the critical issue of how an AGI system will increase its intelligence over time. Problem Solving Theory provides a universal framework that AI and human agents can use to communicate efficiently, effectively, and safely. Bounded Rationality helps us better understand not only the capabilities of SuperIntelligent AGI but also how humans can remain relevant in a world where the intelligence of AGI vastly exceeds that of its human creators. Each key idea can be combined with recent work in the fields of Artificial Intelligence, Machine Learning, and Large Language Models to accelerate the development of a working, safe, AGI system.

Keywords: AI Agents, Collective Intelligence, Minsky, Newell, Shannon, Simon, AGI, AGI Safety

Procedia PDF Downloads 71
20007 Empirical Study of Innovative Development of Shenzhen Creative Industries Based on Triple Helix Theory

Authors: Yi Wang, Greg Hearn, Terry Flew

Abstract:

In order to understand how cultural innovation occurs, this paper explores the interaction between universities, creative industries, and government in the creative economy of Shenzhen, China, using the Triple Helix framework. During the past two decades, the Triple Helix has been recognized as a new theory of innovation to inform and guide policy-making in national and regional development. Universities and governments around the world, especially in developing countries, have taken actions to strengthen connections with creative industries to develop regional economies. To date, research based on the Triple Helix model has focused primarily on science and technology collaborations, largely ignoring other fields. Hence, there is an opportunity to better understand how the Triple Helix framework might apply in the field of creative industries and what knowledge might be gleaned from such an undertaking. Since the late 1990s, the concept of ‘creative industries’ has been introduced into policy and academic discourse. The development of creative industries policy by city agencies has improved city wealth creation and economic capital. It claims to generate a ‘new economy’ of enterprise dynamics and activities for urban renewal through the arts and digital media, via knowledge transfer in knowledge-based economies. Creative industries also involve commercial inputs to the creative economy, dynamically reshaping the city into an innovative culture. In particular, this paper concentrates on creative spaces (incubators, digital tech parks, maker spaces, art hubs) where academia, industry and government interact. China has sought to enhance the brand of its manufacturing industry through cultural policy. It aims to shift the image of ‘Made in China’ to ‘Created in China’ and to give Chinese brands more international competitiveness in a global economy. Shenzhen is a notable example of an international knowledge-based city in China following this path. In 2009, the Shenzhen Municipal Government proposed the city slogan ‘Build a Leading Cultural City’ to signal the government’s strong will to develop Shenzhen’s cultural capacity and creativity. The vision of Shenzhen is to become a cultural innovation center, a regional cultural center and an international cultural city. However, there has been a lack of attention to triple helix interactions in the creative industries in China. In particular, there is limited knowledge about how interactions within co-located creative spaces in triple helix networks influence city-based innovation. That is, the roles of the participating institutions need to be better understood. Thus, this paper discusses the interplay between universities, creative industries and government in Shenzhen. Secondary analysis and documentary analysis are used as methods in an effort to practically ground and illustrate this theoretical framework. Furthermore, this paper explores how creative spaces are being used to implement the Triple Helix in the creative industries, in particular the new combinations of resources generated from the consolidation of, and interactions among, these institutions. This study thus provides an innovative lens to understand the components, relationships and functions that exist within creative spaces by applying the Triple Helix framework to the creative industries.

Keywords: cultural policy, creative industries, creative city, triple Helix

Procedia PDF Downloads 182
20006 Comparison of Rumen Microbial Analysis Pipelines Based on 16s rRNA Gene Sequencing

Authors: Xiaoxing Ye

Abstract:

To investigate complex rumen microbial communities, 16S ribosomal RNA (rRNA) gene sequencing is widely used. Here, we evaluated the impact of bioinformatics pipelines on the observation of OTUs and the taxonomic classification of 750 cattle rumen microbial samples by comparing three commonly used pipelines (LotuS, UPARSE, and QIIME) with Usearch. In the LotuS-based analyses, 189 archaeal and 3894 bacterial OTUs were observed. The number of OTUs observed in the Usearch analysis was significantly larger than in the LotuS results. We discovered 1495 OTUs for archaea and 92665 OTUs for bacteria using the Usearch analysis. In addition, taxonomic assignments were made for the rumen microbial samples. All pipelines had consistent taxonomic annotations from the phylum to the genus level. A difference in relative abundance was calculated for all microbial levels, including Bacteroidetes (QIIME: 72.2%, Usearch: 74.09%) and Firmicutes (QIIME: 18.3%, Usearch: 20.20%) for the bacterial phyla, Methanobacteriales (QIIME: 64.2%, Usearch: 45.7%) for the archaeal class, and Methanobacteriaceae (QIIME: 35%, Usearch: 45.7%) and Methanomassiliicoccaceae (QIIME: 35%, Usearch: 31.13%) for the archaeal families. However, the most prevalent archaeal class varied between these two annotation pipelines: Thermoplasmata was the top class according to the QIIME annotation, whereas Methanobacteria was the top class according to Usearch.

Keywords: cattle rumen, rumen microbial, 16S rRNA gene sequencing, bioinformatics pipeline

Procedia PDF Downloads 67
20005 Preparation and Characterization of Pectin Based Proton Exchange Membranes Derived by Solution Casting Method for Direct Methanol Fuel Cells

Authors: Mohanapriya Subramanian, V. Raj

Abstract:

Direct methanol fuel cells (DMFCs) are considered to be one of the most promising candidates for portable and stationary applications in view of their advantages such as high energy density, easy handling, and high efficiency, and because they operate on a liquid fuel that can be used without requiring any fuel-processing units. The electrolyte membrane of a DMFC plays a key role as a proton conductor as well as a separator between electrodes. With increasing concern over environmental protection, biopolymers have gained tremendous interest owing to their eco-friendly, biodegradable nature. Pectin is a natural anionic polysaccharide which plays an essential part in regulating the mechanical behavior of the plant cell wall and is extracted from the outer cells of most plants. The aim of this study is to develop and demonstrate pectin-based polymer composite membranes as methanol-impermeable polymer electrolyte membranes for DMFCs. Pectin-based nanocomposite membranes are prepared by a solution-casting technique wherein pectin is blended with chitosan followed by the addition of an optimal amount of sulphonic acid modified titanium dioxide nanoparticles (S-TiO2). Nanocomposite membranes are characterized by Fourier transform infrared spectroscopy, scanning electron microscopy, and energy dispersive spectroscopy analyses. Proton conductivity and methanol permeability are determined in order to evaluate their suitability for DMFC application. Pectin-chitosan blends provide a flexible polymeric network which is appropriate for dispersing rigid S-TiO2 nanoparticles. The resulting nanocomposite membranes possess adequate thermo-mechanical stability as well as high charge density per unit volume. The pectin-chitosan natural polymeric nanocomposite comprising optimal S-TiO2 exhibits good electrochemical selectivity and is therefore desirable for DMFC application.

Keywords: biopolymers, fuel cells, nanocomposite, methanol crossover

Procedia PDF Downloads 123
20004 Big Data Applications for Transportation Planning

Authors: Antonella Falanga, Armando Cartenì

Abstract:

"Big data" refers to extremely vast and complex sets of data, encompassing extraordinarily large and intricate datasets that require specific tools for meaningful analysis and processing. These datasets can stem from diverse origins like sensors, mobile devices, online transactions, social media platforms, and more. The utilization of big data is pivotal, offering the chance to leverage vast information for substantial advantages across diverse fields, thereby enhancing comprehension, decision-making, efficiency, and fostering innovation in various domains. Big data, distinguished by its remarkable attributes of enormous volume, high velocity, diverse variety, and significant value, represent a transformative force reshaping the industry worldwide. Their pervasive impact continues to unlock new possibilities, driving innovation and advancements in technology, decision-making processes, and societal progress in an increasingly data-centric world. The use of these technologies is becoming more widespread, facilitating and accelerating operations that were once much more complicated. In particular, big data impacts across multiple sectors such as business and commerce, healthcare and science, finance, education, geography, agriculture, media and entertainment and also mobility and logistics. Within the transportation sector, which is the focus of this study, big data applications encompass a wide variety, spanning across optimization in vehicle routing, real-time traffic management and monitoring, logistics efficiency, reduction of travel times and congestion, enhancement of the overall transportation systems, but also mitigation of pollutant emissions contributing to environmental sustainability. Meanwhile, in public administration and the development of smart cities, big data aids in improving public services, urban planning, and decision-making processes, leading to more efficient and sustainable urban environments. Access to vast data reservoirs enables deeper insights, revealing hidden patterns and facilitating more precise and timely decision-making. Additionally, advancements in cloud computing and artificial intelligence (AI) have further amplified the potential of big data, enabling more sophisticated and comprehensive analyses. Certainly, utilizing big data presents various advantages but also entails several challenges regarding data privacy and security, ensuring data quality, managing and storing large volumes of data effectively, integrating data from diverse sources, the need for specialized skills to interpret analysis results, ethical considerations in data use, and evaluating costs against benefits. Addressing these difficulties requires well-structured strategies and policies to balance the benefits of big data with privacy, security, and efficient data management concerns. Building upon these premises, the current research investigates the efficacy and influence of big data by conducting an overview of the primary and recent implementations of big data in transportation systems. Overall, this research allows us to conclude that big data better provide to enhance rational decision-making for mobility choices and is imperative for adeptly planning and allocating investments in transportation infrastructures and services.

Keywords: big data, public transport, sustainable mobility, transport demand, transportation planning

Procedia PDF Downloads 48
20003 Exploring MTB-MLE Practices in Selected Schools in Benguet, Philippines

Authors: Jocelyn L. Alimondo, Juna O. Sabelo

Abstract:

This study explored the MTB-MLE implementation practices of teachers in one monolingual elementary school and one multilingual elementary school in Benguet, Philippines. It used a phenomenological approach employing participant observation, focus group discussion, and individual interviews. Data were gathered using a video camera, an audio recorder, and an FGD guide and were treated through triangulation and coding. From the data collected, varied ways of implementing the MTB-MLE program were noted. These are: teaching using a hybrid first language, teaching using a foreign LOI, using translation and multilingual instruction, and using L2/L3 to unlock L1. However, these practices come with challenges such as a conflict between the mandated LOI and what pupils need, lack of proficiency of teachers in the mandated LOI, facing unreceptive parents, stagnation of knowledge resulting from over-familiarity with input, and zero learning resulting from an incomprehensible language input. From the practices and challenges experienced by the teachers, a model of the MTB-MLE approach to teaching, the 3L-in-one approach, was created to illustrate the practice which teachers claimed to be the best way to address the challenges besetting them while at the same time satisfying the academic needs of their pupils. From the findings, this paper concludes that despite the challenges besetting the teachers, they still displayed creativity in coming up with relevant teaching practices; that the unreceptiveness of some teachers and parents sprang from the fact that they do not understand the real concept of MTB-MLE; that greater challenges are faced by teachers in the multilingual school due to the diverse linguistic backgrounds of their pupils; and that the most effective approach in implementing MTB-MLE is the multilingual approach, allowing the use of the pupils’ mother tongue, L2 (Filipino), L3 (English), and other languages familiar to the students.

Keywords: MTB-MLE Philippines, MTB-MLE model, first language, multilingual instruction

Procedia PDF Downloads 411
20002 Domain Driven Design vs Soft Domain Driven Design Frameworks

Authors: Mohammed Salahat, Steve Wade

Abstract:

This paper presents and compares SSDDD, the Systematic Soft Domain Driven Design framework, with DDD, the Domain Driven Design framework, as a soft systems approach to information systems development. The framework uses SSM as a guiding methodology within which we have embedded a sequence of design tasks based on UML, leading to the implementation of a software system using the Naked Objects framework. This framework has been used in action research projects that have involved the investigation and modelling of business processes using object-oriented domain models and the implementation of software systems based on those domain models. Within this framework, Soft Systems Methodology (SSM) is used as a guiding methodology to explore the problem situation and to develop the domain model using UML for the given business domain. The framework was proposed and evaluated in our previous work; a comparison between SSDDD and DDD is presented in this paper to show how SSDDD improves on DDD as an approach to modelling and implementing business domain perspectives for information systems development. The comparison process, the results, and the improvements are presented in the following sections of this paper.

Keywords: domain-driven design, soft domain-driven design, naked objects, soft language

Procedia PDF Downloads 280
20001 Can the Intervention of SCAMPER Bring about Changes of Neural Activation While Taking Creativity Tasks?

Authors: Yu-Chu Yeh, WeiChin Hsu, Chih-Yen Chang

Abstract:

Substitution, combination, modification, putting to other uses, elimination, and rearrangement (SCAMPER) has been regarded as an effective technique that provides a structured way to help people to produce creative ideas and solutions. Although some neuroscience studies regarding creativity training have been conducted, no study has focused on SCAMPER. This study therefore aimed at examining whether the learning of SCAMPER through video tutorials would result in alterations of neural activation. Thirty college students were randomly assigned to the experimental group or the control group. The experimental group was requested to watch SCAMPER videos, whereas the control group was asked to watch natural-scene videos which were regarded as neutral stimulating materials. Each participant's brain was scanned in a functional magnetic resonance imaging (fMRI) machine while undertaking a creativity test before and after watching the videos. Furthermore, a two-way ANOVA was used to analyze the interaction between groups (the experimental group; the control group) and tasks (C task; M task; X task). The results revealed that the left precuneus was significantly activated in the interaction of groups and tasks, as well as in the main effect of group. Furthermore, compared with the control group, the experimental group had greater activation in the default mode network (left precuneus and left inferior parietal cortex) and the motor network (left postcentral gyrus and left supplementary area). The findings suggest that the SCAMPER training may facilitate creativity through the stimulation of the default mode network and the motor network.

Keywords: creativity, default mode network, neural activation, SCAMPER

Procedia PDF Downloads 90
20000 Finite Elemental Simulation of the Combined Process of Asymmetric Rolling and Plastic Bending

Authors: A. Pesin, D. Pustovoytov, M. Sverdlik

Abstract:

Traditionally, the need for items representing large bodies of rotation (e.g., shrouds of various process units: a converter, a mixer, a scrubber, a steel ladle, etc.) is met by engineering enterprises. At these enterprises, large parts of bodies of rotation are made on stamping units or bending and forming machines. At Nosov Magnitogorsk State Technical University, in alliance with JSC "Magnitogorsk Metal and Steel Works", a technology for producing such items based on a combination of asymmetric rolling and plastic bending under plate mill conditions was suggested and implemented. In this paper, the technology of the combined process of asymmetric rolling and plastic bending has been improved on the basis of finite element mathematical simulation. It is shown that, to obtain the same curvature along the entire length of the metal sheet, it is necessary to introduce additional speed asymmetry when rolling the front and trailing ends of the strip. Production of large bodies of rotation at the 4500 mill of JSC "Magnitogorsk Metal and Steel Works" showed good agreement between the theoretical and experimental values of the curvature of the metal. The economic effect obtained exceeded 1.0 million dollars.

Keywords: asymmetric rolling, plastic bending, combined process, FEM

Procedia PDF Downloads 305
19999 Analysis of Vibratory Signals Based on Local Mean Decomposition (LMD) for Rolling Bearing Fault Diagnosis

Authors: Toufik Bensana, Medkour Mihoub, Slimane Mekhilef

Abstract:

The use of vibration analysis has been established as the most common and reliable method of analysis in the field of condition monitoring and diagnostics of rotating machinery. Rolling bearings are used in a broad range of rotary machines and play a crucial role in the modern manufacturing industry. Unfortunately, the vibration signals collected from a faulty bearing are generally nonstationary, nonlinear and contaminated with strong noise interference, so it is essential to obtain the fault features correctly. In this paper, a novel numerical analysis method based on local mean decomposition (LMD) is proposed. LMD decomposes the signal into a series of product functions (PFs), each of which is the product of an envelope signal and a purely frequency modulated (FM) signal. The envelope of a PF is the instantaneous amplitude (IA), and the derivative of the unwrapped phase of the purely frequency modulated (FM) signal is the instantaneous frequency (IF). After that, the fault characteristic frequency of the roller bearing can be extracted by performing spectrum analysis on the instantaneous amplitude of the PF component containing the dominant fault information. The results show the effectiveness of the proposed technique in fault detection and diagnosis of rolling element bearings.
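To make the decomposition step above concrete, the following is a minimal, simplified Python sketch of extracting a single product function in the spirit of LMD; it is not the authors' implementation, and the smoothing window, stopping rule, and helper names are assumptions.

```python
import numpy as np

def _extrema(x):
    """Indices of local extrema of a 1-D signal, plus both end points."""
    d = np.diff(x)
    idx = np.where(d[:-1] * d[1:] < 0)[0] + 1
    return np.concatenate(([0], idx, [len(x) - 1]))

def lmd_single_pf(x, max_iter=20, win=None):
    """Extract one product function (PF) and its envelope (very simplified sketch)."""
    s = np.asarray(x, dtype=float).copy()
    a_total = np.ones_like(s)
    n = len(s)
    win = win or max(3, n // 100)
    kernel = np.ones(win) / win
    t = np.arange(n)
    for _ in range(max_iter):
        ext = _extrema(s)
        # local means and local magnitudes between successive extrema
        m_pts = (s[ext][:-1] + s[ext][1:]) / 2.0
        a_pts = np.abs(s[ext][:-1] - s[ext][1:]) / 2.0
        # spread the piecewise values over the whole record and smooth them
        m = np.convolve(np.interp(t, ext[:-1], m_pts), kernel, mode="same")
        a = np.maximum(np.convolve(np.interp(t, ext[:-1], a_pts), kernel, mode="same"), 1e-12)
        s = (s - m) / a            # remove the local mean, demodulate the local envelope
        a_total *= a               # accumulate the instantaneous amplitude (IA)
        if np.max(np.abs(np.abs(s[_extrema(s)]) - 1.0)) < 1e-2:  # roughly purely FM
            break
    return a_total * s, a_total    # PF and its envelope (IA)

# The fault characteristic frequency can then be read from the spectrum of the
# envelope, e.g. np.abs(np.fft.rfft(ia - ia.mean())) against np.fft.rfftfreq(len(ia), 1/fs).
```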

Keywords: fault diagnosis, rolling element bearing, local mean decomposition, condition monitoring

Procedia PDF Downloads 378
19998 Reciprocity and Empathy in Motivating Altruism among Sixth Grade Students

Authors: Rylle Evan Gabriel Zamora, Micah Dennise Malia, Abygail Deniese Villabona

Abstract:

The primary motivators of altruism are usually viewed as mutually exclusive. In this study, we wanted to know if the two primary motivators, reciprocity and empathy, can work together in motivating altruism. Therefore, we wanted to find out if there is a significant interaction of effects between reciprocity and empathy. To show how this may occur, we devised the combined altruism model, which is based on Batson’s empathy-altruism hypothesis. A sample of 120 sixth-grade students was randomly selected and then randomly assigned to four treatment groups. A 2x2 between-subjects design was used, which had empathy and reciprocity as independent variables and altruism as the dependent variable. The study made use of effort-based materials, where subjects were required to complete a task or a puzzle to help a person in a given scenario; two videos, one of which primed empathy, were also used. This, along with Witt and Boleman’s adapted Self-Reported Altruism Scale, was used to determine an individual’s altruism. It was found that both variables were significant in motivating altruism, with empathy being the greater of the two. However, there was no significant interaction of effects between the two variables. To explain why this occurred, we turned to the combined altruism model, where it was found that when empathically primed, we tend not to think of ourselves when helping others. Future studies could focus on other variables, especially age, which is said to be one of the greatest factors that influenced the results of the experiment.

Keywords: reciprocity, empathy, altruism, experimental psychology, social psychology

Procedia PDF Downloads 240
19997 Identification of Flooding Attack (Zero Day Attack) at Application Layer Using Mathematical Model and Detection Using Correlations

Authors: Hamsini Pulugurtha, V.S. Lakshmi Jagadmaba Paluri

Abstract:

Distributed denial of service (DDoS) attack is one of the top-rated cyber threats at present. It runs down victim server resources such as bandwidth and buffer size, preventing the server from supplying resources to legitimate clients. In this article, we propose a mathematical model of a DDoS attack; we discuss its relevance to features such as the inter-arrival time or rate of arrival of the attacking clients accessing the server. We further analyze the attack model in the context of the exhausted bandwidth and buffer size of the victim server. The proposed technique uses an unsupervised learning technique, the self-organizing map, to build clusters of identical features. Lastly, the approach applies mathematical correlation and the normal probability distribution to the clusters and analyzes their behavior to detect a DDoS attack. Such systems not only interconnect small devices exchanging personal data but also critical infrastructures reporting the status of nuclear facilities. Although this interconnection brings many benefits and advantages, it also creates new vulnerabilities and threats which can be used to mount attacks. In such sophisticated interconnected systems, the ability to detect attacks as early as possible is of paramount importance.

Keywords: application attack, bandwidth, buffer correlation, DDoS distribution flooding intrusion layer, normal prevention probability size

Procedia PDF Downloads 205
19996 A Brief Trauma Treatment Program for Survivors of Trauma: A Single-Case Design

Authors: Duane Booysen, Ashraf Kagee

Abstract:

There is a high prevalence of violent crime and trauma exposure in South African society. Considering the prevalence of continuous violent crime and traumatization in South Africa, the public mental health sector is required to combat the burgeoning effect of traumatic stress. Trauma counselors, especially, provide important mental health services at the primary health care level to persons affected by traumatic events. Therefore, the evaluation and implementation of evidence-based trauma therapies is essential at a primary health care level in treating traumatic stress. A single-case design was used to evaluate the treatment effect of a Brief Trauma Treatment Programme for persons presenting with symptoms of posttraumatic stress disorder at a primary care trauma centre in Cape Town, South Africa. The sample consisted of six adult participants who presented with symptoms of posttraumatic stress and were assessed at baseline, during treatment, post-intervention, and at 3-month follow-up. All participants received six sessions of trauma therapy. Assessment measures included the Posttraumatic Stress Disorder Symptom Scale Interview for the Diagnostic and Statistical Manual, fifth edition (DSM-5), the Posttraumatic Stress Disorder Checklist for DSM-5, the Beck Depression Inventory, and the Beck Anxiety Inventory. Results demonstrate that participants had noticeably reduced symptoms of traumatic stress, anxiety, and depression despite living in contexts of violent crime and trauma. In conclusion, the article critically reflects on the need to evaluate and implement evidence-based treatments for the South African context, and on how evidence-based treatments are used in developing, socio-economically and culturally diverse contexts with continuous levels of violence and traumatization.

Keywords: psychological interventions, public mental health, traumatic stress, single-case design

Procedia PDF Downloads 143
19995 Precise Identification of Clustered Regularly Interspaced Short Palindromic Repeats-Induced Mutations via Hidden Markov Model-Based Sequence Alignment

Authors: Jingyuan Hu, Zhandong Liu

Abstract:

CRISPR genome editing technology has transformed molecular biology by accurately targeting and altering an organism’s DNA. Despite the state-of-the-art precision of CRISPR genome editing, imprecise mutation outcomes and off-target effects present considerable risks, potentially leading to unintended genetic changes. Targeted deep sequencing, combined with bioinformatics sequence alignment, can detect such unwanted mutations. Nevertheless, the classical Needleman-Wunsch (NW) algorithm may produce false alignment outcomes, resulting in inaccurate mutation identification. The key to precisely identifying CRISPR-induced mutations lies in determining optimal parameters for the sequence alignment algorithm. Hidden Markov models (HMMs) are ideally suited for this task, offering flexibility across CRISPR systems by leveraging forward-backward algorithms for parameter estimation. In this study, we introduce CRISPR-HMM, a statistical software package to precisely call CRISPR-induced mutations. We demonstrate that the software significantly improves precision in identifying CRISPR-induced mutations compared to NW-based alignment, thereby enhancing the overall understanding of the CRISPR gene-editing process.
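For orientation, the core quantity behind the forward-backward parameter estimation mentioned above can be written down in a few lines; the toy two-state model below is an illustrative assumption and not the CRISPR-HMM model itself.

```python
import numpy as np

def forward_log_likelihood(obs, pi, A, B):
    """Scaled forward pass: log P(observations) under a discrete HMM.

    obs : list of observation symbol indices
    pi  : (S,) initial state probabilities
    A   : (S, S) transition matrix, A[i, j] = P(next state j | state i)
    B   : (S, V) emission matrix, B[i, k] = P(symbol k | state i)
    """
    alpha = pi * B[:, obs[0]]
    scale = alpha.sum()
    log_lik = np.log(scale)
    alpha /= scale
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # propagate one step and emit
        scale = alpha.sum()             # rescale to avoid numerical underflow
        log_lik += np.log(scale)
        alpha /= scale
    return log_lik

# Toy example with made-up parameters (two hidden states, three symbols)
pi = np.array([0.6, 0.4])
A = np.array([[0.9, 0.1], [0.2, 0.8]])
B = np.array([[0.7, 0.2, 0.1], [0.1, 0.3, 0.6]])
print(forward_log_likelihood([0, 1, 2, 0], pi, A, B))
```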

Keywords: CRISPR, HMM, sequence alignment, gene editing

Procedia PDF Downloads 34
19994 Multi Tier Data Collection and Estimation, Utilizing Queue Model in Wireless Sensor Networks

Authors: Amirhossein Mohajerzadeh, Abolghasem Mohajerzadeh

Abstract:

In this paper, a target parameter is estimated with desirable precision in hierarchical wireless sensor networks (WSNs), while the proposed algorithm also tries to prolong the network lifetime as much as possible using an efficient data-collection algorithm. The target parameter distribution function is considered unknown. Sensor nodes sense the environment and send the data to the base station, called the fusion center (FC), using a hierarchical data-collection algorithm. The FC reconstructs the underlying phenomenon based on the collected data. Considering the aggregation level x, the goal is to provide the essential infrastructure to find the best value of the aggregation level in order to prolong the network lifetime as much as possible while the desired accuracy is guaranteed (the required sample size depends fully on the desired precision). First, the sample size calculation algorithm is discussed; second, the average queue length based on the M/M[x]/1/K queue model is determined and used for the energy consumption calculation. Nodes can decrease transmission cost by aggregating incoming data. Furthermore, the performance of the new algorithm is evaluated in terms of lifetime and estimation accuracy.
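As a simplified numerical illustration of the queueing component (a plain M/M/1/K system rather than the batch-arrival M/M[x]/1/K model used in the paper; the rates and buffer size below are assumed values):

```python
import numpy as np

def mm1k_average_queue_length(lam, mu, K):
    """Average number of packets in an M/M/1/K system.

    lam : arrival rate, mu : service rate, K : system capacity.
    Computed numerically from the stationary distribution p_n proportional to rho**n.
    """
    rho = lam / mu
    n = np.arange(K + 1)
    p = rho ** n
    p /= p.sum()                  # normalise over the finite state space
    return float(np.dot(n, p))

# e.g. a node receiving 8 packets/s, serving 10 packets/s, with room for 20 packets;
# a longer average queue implies more buffered data, hence higher energy per transmission
print(mm1k_average_queue_length(lam=8.0, mu=10.0, K=20))
```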

Keywords: aggregation, estimation, queuing, wireless sensor network

Procedia PDF Downloads 174
19993 A Study on the Disclosure Experience of Adoptees

Authors: Tsung Chieh Ma, I-Ling Chen

Abstract:

Disclosing family origins to adoptees is an important topic in the adoption process. Adoption agencies usually educate adoptive parents on how to disclose to adoptees, but many adoptive parents worry that the disclosure will affect the parent–child relationship. Thus, how adoptees would like to receive the disclosure and whether they subjectively feel that the parent–child relationship is affected are both topics worthy of further discussion. This research takes a qualitative approach and connects with adoption agencies to interview six adoptees who are now adults. The purpose of the interviews is to learn about their experience receiving disclosures and their subjective feelings after learning of their family origins. The aim is to reveal the changes disclosure brought to the parent–child relationship and whether common concerns are raised due to the adoptive status. We also want to know about factors that affect their identification with their adopted status so that we can consequently give advice to other adoptive families. This study finds that adoptees see disclosure as a process rather than an isolated event. The majority want to be told their family origin as early and proactively as possible and expect to learn the reasons they were given up for adoption and taken in as adoptees. The disclosure does not necessarily influence the parent–child relationship, and adoptees care more about the positive experiences they had with adoptive parents in their childhood. Moreover, adopted children seek contact with their original family mostly to understand why they were given up for adoption. The effects of disclosure depend on how the adoptive parents or other significant people in the lives of adoptees interpret the identity of the adoptees. That is, their response and attitude toward the identity have a lasting impact on the adoptees. The study suggests that early disclosure gives adoptees a chance to internalize the experience in the process and find self-identification.

Keywords: adoption, adoptees, disclosure of family origins, parent–child relationship, self-identity

Procedia PDF Downloads 60
19992 Property Rights and Trade Specialization

Authors: Sarma Binti Aralas

Abstract:

The relationship between property rights and trade specialization is examined for developing and developed countries using panel data analysis. Property rights are measured using the International Property Rights Index, while trade specialization is measured using the comparative advantage index. Cross-country differences in property rights are hypothesized to lead to differences in trade specialization. Based on the argument that weak protection of natural resources implies greater trade in resource-intensive goods, developing countries with less defined property rights are hypothesized to have a comparative advantage in resource-based exports, while countries with more defined property rights will not have an advantage in resource-intensive goods. Evidence suggests that developing countries with a weaker environmental protection index but rich natural resources do specialize in the trade of resource-intensive goods. The finding suggests that institutional frameworks to increase the stringency of environmental protection of resources may be needed to diversify exports away from the trade of resource-intensive goods.
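The comparative advantage index referred to here is commonly operationalised as the Balassa revealed comparative advantage ratio; the abstract does not state the exact form used, so the sketch below (with hypothetical export figures) is only illustrative.

```python
def revealed_comparative_advantage(x_cj, x_c_total, x_wj, x_w_total):
    """Balassa index: a country's export share in good j divided by the world's share.

    Values above 1 indicate specialization (revealed comparative advantage) in good j.
    """
    return (x_cj / x_c_total) / (x_wj / x_w_total)

# Hypothetical numbers: resource-based goods are 30 of 100 export units for the country,
# while such goods make up 15 of 100 units of world exports.
print(revealed_comparative_advantage(30, 100, 15, 100))   # 2.0 -> specialized
```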

Keywords: environmental protection, panel data, renewable resources, trade specialization

Procedia PDF Downloads 433
19991 Determination of Community Based Reference Interval of Aspartate Aminotransferase to Platelet Ratio Index (APRI) among Healthy Populations in Mekelle City Tigray, Northern Ethiopia

Authors: Getachew Belay Kassahun

Abstract:

Background: The aspartate aminotransferase to platelet ratio index (APRI) has recently become a biomarker for screening for liver fibrosis, since the liver biopsy procedure is invasive and subject to variation in pathological interpretation. The Clinical and Laboratory Standards Institute recommends establishing age-, sex- and environment-specific reference intervals for biomarkers in a homogenous population. The current study aimed to derive a community-based reference interval of APRI for people aged between 12 and 60 years old in Mekelle city, Tigray, Northern Ethiopia. Method: Six hundred eighty-eight study participants were recruited from three districts in Mekelle city. The 3 districts were selected through a random sampling technique, and the sample size was distributed to kebeles (small administrative units) in proportion to the number of households in each district. The lottery method was used at the household level if more than 2 study participants for an age partition were found. In this community-based cross-sectional study, a total of 534 study participants, 264 males and 270 females, were included in the final laboratory and data analysis, while around 154 study participants were excluded through the exclusion criteria. Aspartate aminotransferase was analyzed on a Biosystems chemistry analyzer, and a Sysmex machine was used to analyze platelets. The Mann-Whitney U test, a non-parametric statistical tool, was used to assess statistical differences between genders after excluding outliers through box-and-whisker plots. Result: The study found a statistical difference between genders for the APRI reference interval. The combined, male, and female reference intervals in the current study were 0.098-0.390, 0.133-0.428, and 0.090-0.319, respectively. The upper and lower reference limits of males were higher than those of females in all age partitions, and there was no statistical difference (p-value < 0.05) between age partitions. Conclusion: The current study showed that using a sex-specific reference interval is significant for the APRI biomarker in clinical practice for result interpretation.
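For reference, APRI is computed from a simple published formula; a minimal sketch is shown below, where the upper limit of normal (ULN) of 40 IU/L and the example values are assumptions, not data from this study.

```python
def apri(ast_iu_l, ast_uln_iu_l, platelets_10e9_l):
    """Aspartate aminotransferase to platelet ratio index.

    APRI = (AST / upper limit of normal AST) / platelet count (10^9/L) * 100
    """
    return (ast_iu_l / ast_uln_iu_l) / platelets_10e9_l * 100.0

# Example with assumed values: AST 35 IU/L, ULN 40 IU/L, platelets 250 x 10^9/L
print(round(apri(35, 40, 250), 3))   # about 0.35, inside the combined interval reported above
```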

Keywords: reference interval, aspartate aminotransferase to platelet ratio index, Ethiopia, Tigray

Procedia PDF Downloads 92
19990 Coping Mechanisms of Batangueño Families Facing Cancer

Authors: Aiza G. Clanor, Lotlot B. Hernandez, Jonna Marie T. Ibuna

Abstract:

This study aimed to know the coping mechanisms of Batangueño families facing cancer, specifically those from the Cancer Warriors Foundation, Inc., Batangas chapter. The researchers used purposive sampling. This study was limited to the responses provided by the Batangueño families of the cancer patients. A member of the immediate family with a child facing cancer represented the family as a whole. A total of forty-six (46) respondents were given the questionnaires. Upon analysis, most of the respondents came from rural areas and nuclear families and had a family monthly income of Php 5,000 and below. Most of them own their houses and have 3 to 5 members, one of whom is a cancer patient diagnosed more than 2 years ago. The two most frequently utilized coping strategies were mobilizing the family to acquire and accept help, and reframing. Passive appraisal was the least utilized one. There was a significant difference in the coping mechanisms of the family relative to passive appraisal based on the length of time since the illness was first diagnosed. Based on the study, the researchers developed modules with discussions and activities on cancer awareness, ideas on coping, and how to deal with cancer patients, which may help the respondents and other Batangueño families overcome the difficulties of facing cancer. The researchers recommend the modules as they were found to be effective in helping the families relieve stress, reduce anxiety, and improve quality of life.

Keywords: coping with chronic illness, family, psychology, cancer

Procedia PDF Downloads 529
19989 Study on Optimization Design of Pressure Hull for Underwater Vehicle

Authors: Qasim Idrees, Gao Liangtian, Liu Bo, Miao Yiran

Abstract:

In order to improve the efficiency and accuracy of pressure hull structure optimization for underwater vehicles based on response surface methodology, a method for optimizing the design of the pressure hull structure was studied. Five dimensions of the pressure shell were taken as design variables, and the preliminary design was carried out by applying thin shell theory and the Chinese Classification Society (CCS) specification. In order to optimize the variables within the feasible region, different methods were studied and implemented, such as the optimal Latin hypercube design (Opt LHD) method (to determine the design test sample points in the feasible domain space), a parametric ABAQUS solution for the response at each sample point, and a second-order polynomial response surface model of the ultimate load of the structure. Based on the ultimate load of the structure and the mass of the shell, a second-generation genetic algorithm was used to search the response surface, and the Pareto optimal solution set was obtained. The final optimization result was 41.68% higher than that of the initial design, and the shell mass was reduced by about 27.26%. The parametric method can ensure the accuracy of the test and improve the efficiency of optimization.
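The second-order response-surface step described above can be illustrated generically; the design matrix, sample responses, and scikit-learn pipeline below are placeholders and assumptions, not the hull data or the authors' code.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

# Hypothetical DOE matrix: 20 sample points x 5 shell design variables (normalised)
rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, size=(20, 5))
y = rng.uniform(10.0, 20.0, size=20)          # stand-in for FEA ultimate loads (MPa)

# Second-order polynomial response surface fitted to the sample points
surrogate = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
surrogate.fit(X, y)
print(surrogate.predict(X[:3]))               # cheap predictions a genetic algorithm can query
```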

Keywords: parameterization, response surface, structure optimization, pressure hull

Procedia PDF Downloads 217
19988 Human Action Recognition Using Variational Bayesian HMM with Dirichlet Process Mixture of Gaussian Wishart Emission Model

Authors: Wanhyun Cho, Soonja Kang, Sangkyoon Kim, Soonyoung Park

Abstract:

In this paper, we present a human action recognition method using a variational Bayesian HMM with a Dirichlet process mixture (DPM) of the Gaussian-Wishart emission model (GWEM). First, we define the Bayesian HMM based on the Dirichlet process, which allows an infinite number of Gaussian-Wishart components to support continuous emission observations. Second, we consider an efficient variational Bayesian inference method that can be applied to derive the posterior distribution of hidden variables and model parameters for the proposed model based on training data. We then derive the predictive distribution that may be used to classify new actions. Third, the paper proposes a process for extracting appropriate spatio-temporal feature vectors that can be used to recognize a wide range of human behaviors from input video images. Finally, we conducted experiments to evaluate the performance of the proposed method. The experimental results show that the method presented is more efficient at human action recognition than existing methods.

Keywords: human action recognition, Bayesian HMM, Dirichlet process mixture model, Gaussian-Wishart emission model, Variational Bayesian inference, prior distribution and approximate posterior distribution, KTH dataset

Procedia PDF Downloads 341
19987 Vine Growers' Climate Change Adaptation Strategies in Hungary

Authors: Gabor Kiraly

Abstract:

Wine regions are based on equilibria between climate, soil, grape varieties, and farming expertise that define the special character and quality of local vine farming and wine production. Changes in climate conditions may increase the risk of destabilizing this equilibrium. Adaptation decisions, including adjusting practices, processes, and capital in response to climate change stresses, may reduce this risk. However, farmers' adaptive behavior is subject to a wide range of factors and forces, such as the links between climate change implications and production, farm-scale adaptive capacity, and other external forces that might hinder them from responding efficiently to climate change challenges. This paper aims to study the climate change adaptation practices and strategies of grape growers by applying a complex and holistic approach involving theories, methods, and tools from both the environmental and social sciences. It will introduce the field of adaptation studies as an evidence-based discourse by presenting an overview of examples from wine regions where adaptation studies have already reached an advanced stage. This will serve as a theoretical background for preliminary research aimed at examining the feasibility and applicability of such a research approach in the Hungarian context.

Keywords: climate change, adaptation, viticulture, Hungary

Procedia PDF Downloads 220
19986 Research on Natural Lighting Design of Atriums Based on Energy-Saving Aim

Authors: Fan Yu

Abstract:

An atrium is a place of natural climate exchange between the indoor and outdoor spaces of a building, which plays an active role in the overall energy conservation, climate control and environmental purification of buildings. Its greatest contribution is serving as a natural light collector and distributor to solve the problem of natural lighting in large and deep spaces. However, in real situations, the atrium space often wastes energy due to improper design, considering its large size and extensive use of glass. Based on the purpose of energy conservation in buildings, this paper emphasizes the significance of the natural lighting of atriums. Through literature research, case analysis and other methods, four factors, namely the light transmittance through the top of the atrium, the geometric proportion of the atrium space, the size and position of windows, and the material of the wall surfaces in the atrium, were studied, and the influence of different architectural compositions on the natural light distribution of the atrium is discussed. Relying on the analysis of relevant cases, it is proposed that when designing the natural lighting of the atrium, attention should be paid to the height and width of the atrium, the atrium walls should have rough surfaces, and the top-level windows of the atrium need to be minimized in order to introduce more natural light into the building and achieve the purpose of energy conservation.

Keywords: energy conservation, atrium, natural lighting, architectural design

Procedia PDF Downloads 173
19985 Genetic Diversity of Sorghum bicolor (L.) Moench Genotypes as Revealed by Microsatellite Markers

Authors: Maletsema Alina Mofokeng, Hussein Shimelis, Mark Laing, Pangirayi Tongoona

Abstract:

Sorghum is one of the most important cereal crops grown for food, feed and bioenergy. Knowledge of genetic diversity is important for conservation of genetic resources and improvement of crop plants through breeding. The objective of this study was to assess the level of genetic diversity among sorghum genotypes using microsatellite markers. The genetic diversity of a total of 103 sorghum accessions obtained from the Department of Agriculture, Forestry and Fisheries, the African Centre for Crop Improvement, and the Agricultural Research Council-Grain Crops Institute collections in South Africa was estimated using 30 microsatellite markers. For all the loci analysed, 306 polymorphic alleles were detected, with a mean value of 6.4 per locus. The polymorphic information content had an average value of 0.50, with a mean heterozygosity of 0.55, suggesting considerable genetic diversity within the sorghum genotypes used. The unweighted pair group method with arithmetic mean clustering based on Euclidean coefficients revealed two major distinct groups without allocating genotypes based on the source of collection or origin. The genotypes 4154.1.1.1, 2055.1.1.1, 4441.1.1.1, 4442.1.1.1, 4722.1.1.1, and 4606.1.1.1 were the most diverse. The sorghum genotypes with high genetic diversity could serve as important sources of novel alleles for breeding and strategic genetic conservation.
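The per-locus statistics summarised above (expected heterozygosity and polymorphic information content) follow standard formulas; the sketch below uses the usual Botstein formulation with made-up allele frequencies.

```python
def expected_heterozygosity(freqs):
    """He = 1 - sum(p_i^2) for allele frequencies at one locus."""
    return 1.0 - sum(p * p for p in freqs)

def pic(freqs):
    """Polymorphic information content (Botstein et al. formulation)."""
    he = expected_heterozygosity(freqs)
    correction = sum(2 * (freqs[i] ** 2) * (freqs[j] ** 2)
                     for i in range(len(freqs))
                     for j in range(i + 1, len(freqs)))
    return he - correction

# Hypothetical locus with four alleles
alleles = [0.4, 0.3, 0.2, 0.1]
print(round(expected_heterozygosity(alleles), 3), round(pic(alleles), 3))  # 0.7, 0.645
```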

Keywords: Genetic Diversity, Genotypes, Microsatellites, Sorghum

Procedia PDF Downloads 358
19984 Modification Encryption Time and Permutation in Advanced Encryption Standard Algorithm

Authors: Dalal N. Hammod, Ekhlas K. Gbashi

Abstract:

Today, cryptography is used in many applications to achieve high security in data transmission and in real-time communications. AES has long gained global acceptance and is used for securing sensitive data in various industries, but it has suffered from slow processing and takes a long time to transfer data. This paper suggests a method to enhance the Advanced Encryption Standard (AES) algorithm based on time and permutation. The suggested method (MAES) is based on modifying the SubByte and ShiftRows steps in the encryption part and modifying the InvSubByte and InvShiftRows steps in the decryption part. After implementing the proposal and testing the results, the modified AES achieved good results, accomplishing communication with high performance in terms of randomness, encryption time, storage space, and avalanche effect. The proposed method produces good ciphertext randomness because it passed the NIST statistical tests against attacks; also, MAES reduced the encryption time by 10% compared with the original AES, so the modified AES is faster than the original AES. The proposed method also showed good results in memory utilization, where the value is 54.36 for MAES versus 66.23 for the original AES. Also, the avalanche effect used for calculating the diffusion property is 52.08% for the modified AES and 51.82% for the original AES.
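The avalanche-effect percentages quoted above measure how many ciphertext bits flip when a single plaintext bit changes; a hedged sketch of that measurement for standard AES (via the pycryptodome package, not the modified cipher proposed in the paper) is:

```python
from Crypto.Cipher import AES  # pycryptodome

def avalanche_percent(key: bytes, block: bytes, bit_to_flip: int = 0) -> float:
    """Percentage of ciphertext bits that change when one plaintext bit flips."""
    flipped = bytearray(block)
    flipped[bit_to_flip // 8] ^= 1 << (bit_to_flip % 8)
    c1 = AES.new(key, AES.MODE_ECB).encrypt(block)
    c2 = AES.new(key, AES.MODE_ECB).encrypt(bytes(flipped))
    diff_bits = sum(bin(a ^ b).count("1") for a, b in zip(c1, c2))
    return 100.0 * diff_bits / (len(c1) * 8)

# Example with an all-zero 128-bit key and plaintext block (illustrative only)
print(avalanche_percent(bytes(16), bytes(16)))   # typically close to 50%
```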

Keywords: modified AES, randomness test, encryption time, avalanche effects

Procedia PDF Downloads 229
19983 The Effect of Environmental, Social, and Governance (ESG) Disclosure on Firms’ Credit Rating and Capital Structure

Authors: Heba Abdelmotaal

Abstract:

This paper explores the impact of the extent of a company's environmental, social, and governance (ESG) disclosure on its credit rating and capital structure. The analysis is based on a sample of 202 firms from the FTSE 350 firms over the period 2008-2013. The ESG disclosure score is measured using the proprietary Bloomberg score based on the extent of a company's environmental, social, and governance (ESG) disclosure. The credit rating is measured by the QuiScore, which is a measure of the likelihood that a company will become bankrupt in the twelve months following the date of calculation. Capital structure is measured by the long-term debt ratio. Two hypotheses are tested using panel data regression. The results suggest that a higher degree of ESG disclosure leads to a better credit rating. There is a significant negative relationship between ESG disclosure and the long-term debt percentage. The paper includes implications for how the transparency resulting from ESG disclosure could support the monitoring function: the monitoring role of disclosure lies in increasing transparency for credit rating agencies, and it could also affect managers' actions. This study provides empirical evidence on the materiality of ESG disclosure for credit rating changes and firms' capital structure decisions.

Keywords: capital structure, credit rating agencies, ESG disclosure, panel data regression

Procedia PDF Downloads 345
19982 Study on the Prediction of Serviceability of Garments Based on the Seam Efficiency and Selection of the Right Seam to Ensure Better Serviceability of Garments

Authors: Md Azizul Islam

Abstract:

A seam is the line joining two separate fabric layers for functional or aesthetic purposes. Different kinds of seams are used for assembling the different areas or parts of the garment to increase serviceability. To empirically support the importance of seam efficiency for the serviceability of garments, this study focuses on choosing the right type of seam for particular sewing parts of the garment, based on seam efficiency, to ensure better serviceability. Seam efficiency is the ratio of seam strength to fabric strength. Single jersey knitted finished fabrics of four different GSM (grams per square meter) values were used to make the test garment, a T-shirt. Three distinct types of seam (superimposed, lapped, and flat) were applied to the side seams of the T-shirts and sewn with lockstitch (stitch class 301) on a flat-bed plain sewing machine (maximum sewing speed: 5000 rpm) to make (3x4) 12 T-shirts. For experimental purposes, the needle thread count (50/3 Ne), bobbin thread count (50/2 Ne), stitch density (stitches per inch: 8-9), needle size (16 in the Singer system), stitch length (31 cm), and seam allowance (2.5 cm) were kept the same for all specimens. The grab test (ASTM D5034-08) was done on a universal tensile tester to measure the seam strength and fabric strength. The produced T-shirts were given to 12 soccer players who wore the shirts for 20 soccer matches (each match of 90 minutes duration). Serviceability of the shirts was measured by visual inspection on a 5-point scale based on the seam conditions. The study found that T-shirts produced with the lapped seam show better serviceability and T-shirts made with flat seams scored lowest in serviceability. From the calculated seam efficiency (seam strength/fabric strength), it was obvious that the performance (in terms of strength) of the lapped and bound seam is higher than that of the superimposed seam, and the performance of the superimposed seam is far better than that of the flat seam. So it can be predicted that to get a garment of high serviceability, lapped seams could be used instead of superimposed or other types of seam. In addition, less stressed garments can be assembled with other seams such as superimposed or flat seams.
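Seam efficiency as defined here is a simple ratio of grab-test results; a minimal helper with illustrative (assumed) numbers:

```python
def seam_efficiency_percent(seam_strength_n, fabric_strength_n):
    """Seam efficiency = seam strength / fabric strength, expressed as a percentage."""
    return 100.0 * seam_strength_n / fabric_strength_n

# Hypothetical grab-test results in newtons for one specimen
print(seam_efficiency_percent(seam_strength_n=180.0, fabric_strength_n=240.0))  # 75.0
```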

Keywords: seam, seam efficiency, serviceability, T-shirt

Procedia PDF Downloads 185
19981 Formulating Rough Approximations in Information Tables with Possibilistic Information

Authors: Michinori Nakata, Hiroshi Sakai

Abstract:

A rough set, which consists of lower and upper approximations, is formulated in information tables containing possibilistic information. First, lower and upper approximations based on possible world semantics, in the same way as Lipski did in the field of incomplete databases, are shown in order to clarify the fundamentals of rough sets under possibilistic information. Possibility and necessity measures are used, as is done in possibilistic databases. As a result, each object has certain and possible membership degrees to the lower and upper approximations, which are the lower and upper bounds. Therefore, the degree to which an object belongs to the lower and upper approximations is expressed by an interval value, and the complementary property linking the lower and upper approximations holds, as is valid under complete information. Second, the approach based on indiscernibility relations, proposed by Dubois and Prade, is extended in three cases. The first case is that objects used to approximate a set of objects are characterized by possibilistic information. The second case is that objects used to approximate a set of objects with possibilistic information are characterized by complete information. The third case is that objects that are characterized by possibilistic information approximate a set of objects with possibilistic information. The extended approach creates the same results as the approach based on possible world semantics, which justifies our extension.
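As a baseline for the possibilistic extension discussed here, the classical complete-information lower and upper approximations under an indiscernibility relation can be computed directly; in the possibilistic case each object instead receives an interval of membership degrees, which this sketch does not cover.

```python
from collections import defaultdict

def approximations(table, attributes, target):
    """Classical rough-set lower/upper approximations of a target set of objects.

    table      : dict object -> dict attribute -> value (complete information)
    attributes : attributes defining the indiscernibility relation
    target     : set of objects to approximate
    """
    # group objects into indiscernibility classes by their attribute values
    classes = defaultdict(set)
    for obj, row in table.items():
        classes[tuple(row[a] for a in attributes)].add(obj)

    lower, upper = set(), set()
    for eq_class in classes.values():
        if eq_class <= target:      # class entirely inside the target set
            lower |= eq_class
        if eq_class & target:       # class overlaps the target set
            upper |= eq_class
    return lower, upper

# Tiny example: three objects described by one attribute
table = {"o1": {"colour": "red"}, "o2": {"colour": "red"}, "o3": {"colour": "blue"}}
print(approximations(table, ["colour"], {"o1", "o3"}))
# lower = {'o3'} (only the blue class is fully contained), upper = {'o1', 'o2', 'o3'}
```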

Keywords: rough sets, possibilistic information, possible world semantics, indiscernibility relations, lower approximations, upper approximations

Procedia PDF Downloads 310
19980 Pattern Recognition Approach Based on Metabolite Profiling Using In vitro Cancer Cell Line

Authors: Amanina Iymia Jeffree, Reena Thriumani, Mohammad Iqbal Omar, Ammar Zakaria, Yumi Zuhanis Has-Yun Hashim, Ali Yeon Md Shakaff

Abstract:

Metabolite profiling is a strategy approached through pattern recognition methods, focused here on the three types of cancer cell line that cause the most deaths, specifically lung, breast, and colon cancer. The purpose of this study was to discriminate the VOC patterns between the cancerous and control groups based on metabolite profiling. Sampling was executed using the cell culture technique. All culture flasks were incubated for up to 72 hours, and data collection started after 24 hours. Each sample run took 24 minutes to complete. The comparative metabolite patterns were identified by the implementation of headspace solid-phase micro-extraction (HS-SPME) sampling coupled with gas chromatography-mass spectrometry (GC-MS). The optimization of the main experimental variables, such as oven temperature and time, was evaluated by response surface methodology (RSM) to obtain the optimal conditions. Volatiles were identified through the National Institute of Standards and Technology (NIST) mass spectral database and retention time libraries. To improve the reliability of significance, it is of crucial importance to eliminate background noise; hence, data from the 3rd to the 17th minute were selected for statistical analysis. Targeted metabolites, annotated as known compounds with a peak area greater than 0.5 percent, were highlighted and subsequently treated statistically. The volatiles produced contain hundreds to thousands of compounds; therefore, they were reduced by chemometric analysis, such as principal component analysis (PCA), as a preliminary analysis before being subjected to a pattern classifier for the identification of VOC samples. The volatile organic compound profiles were significantly distinguished between the cancerous and control groups based on metabolite profiling.
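The preliminary chemometric step mentioned above (PCA on the retained peak-area features before pattern classification) can be sketched generically with scikit-learn; the matrix shape, scaling choice, and random data below are assumptions, not the study's measurements.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical data: rows = headspace samples, columns = VOC peak areas (> 0.5 %)
rng = np.random.default_rng(0)
X = rng.random((24, 40))                      # 24 samples x 40 retained peaks

X_scaled = StandardScaler().fit_transform(X)  # put peak areas on a common scale
scores = PCA(n_components=2).fit_transform(X_scaled)
print(scores.shape)                           # (24, 2) scores for a 2-D scores plot,
                                              # coloured by group (cancerous vs control)
```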

Keywords: in vitro cancer cell line, metabolite profiling, pattern recognition, volatile organic compounds

Procedia PDF Downloads 353
19979 One-off Separation of Multiple Types of Oil-in-Water Emulsions with Surface-Engineered Graphene-Based Multilevel Structure Materials

Authors: Han Longxiang

Abstract:

In the process of treating industrial oily wastewater with complex components, traditional treatment methods (flotation, coagulation, microwave heating, etc.) often incur high operating costs, secondary pollution, and other problems. In order to solve these problems, materials with high flux and stability applied to the separation of surfactant-stabilized emulsions have gained huge attention in the treatment of oily wastewater. Nevertheless, four stable oil-in-water emulsions can be formed with different surfactants (surfactant-free, anionic surfactant, cationic surfactant, and non-ionic surfactant), and previous advanced materials can only separate one or several of them and cannot effectively separate all of them in one step. Herein, a facile synthesis method for graphene-based multilevel filter materials (GMFM) is presented that can efficiently separate oil-in-water emulsions stabilized by different surfactants through gravity alone. The prepared materials remain stable over 20 cycles and show a high flux of ~5000 L m-2 h-1 with a high separation efficiency of > 99.9 %. GMFM can effectively separate emulsions stabilized by mixed surfactants as well as oily wastewater from factories. The results indicate that GMFM has a wide range of applications in oil-in-water emulsion separation in industry and environmental science.

Keywords: emulsion, filtration, graphene, one-step

Procedia PDF Downloads 69