Search results for: computational thinking
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3122


1982 The Impact of Artificial Intelligence on Food Nutrition

Authors: Antonyous Fawzy Boshra Girgis

Abstract:

Nutrition labels are a diet-related health policy tool. They help individuals improve food-choice decisions and reduce their intake of calories and unhealthy food elements, such as cholesterol. However, many individuals do not pay attention to nutrition labels or fail to understand them appropriately. According to the literature, thinking and cognitive styles can have significant effects on attention to nutrition labels. To the author's knowledge, the effect of global/local processing on attention to nutrition labels has not been previously studied. Global/local processing encourages individuals to attend to the whole/specific parts of an object and can have a significant impact on people's visual attention. In this study, this effect was examined with an experimental design using the eye-tracking technique. The research hypothesis was that individuals with local processing would pay more attention to nutrition labels, including nutrition tables and traffic lights. An experiment was designed with two conditions: global and local information processing. Forty participants were randomly assigned to either the global or the local condition, and their processing style was manipulated accordingly. Results supported the hypothesis for nutrition tables but not for traffic lights.

Keywords: nutrition, public health, SA Harvest, food, eye-tracking, nutrition labelling, global/local information processing, individual differences, mobile computing, cloud computing, nutrition label use, nutrition management, barcode scanning

Procedia PDF Downloads 40
1981 Biomimetic Paradigms in Architectural Conceptualization: Science, Technology, Engineering, Arts and Mathematics in Higher Education

Authors: Maryam Kalkatechi

Abstract:

The application of algorithms in architecture has been realized as geometric forms that are increasingly used by architecture firms. Yet the abstraction of ideas into a formulated algorithm is not always possible: there is still a gap between design innovation and the final built form produced by prescribed formulas, even in the most aesthetic realizations. This paper presents the application of an erudite design process to conceptualize biomimetic paradigms in architecture. The process is customized to material and tectonics. The first part of the paper outlines the design-process elements within four biomimetic pre-concepts. The pre-concepts are chosen from the plant family: the pine leaf, the dandelion flower, the cactus flower, and the sunflower. These were chosen for their material qualities and the natural patterns of their tectonics. The paper then focuses on four versions of tectonic comprehension of one of the biomimetic pre-concepts. The next part of the paper discusses the implementation of STEAM in higher education in architecture, shown through the relations within the design process and the manifestation of the thinking processes. The A in STEAM, in this case, is achieved only through the design process, an engaging event akin to performing arts, in which conceptualization and development are realized in the final built form.

Keywords: biomimetic paradigm, erudite design process, tectonic, STEAM (Science, Technology, Engineering, Arts, Mathematics)

Procedia PDF Downloads 211
1980 Iranian Intellectuals, Localism, Globalization and the Challenge of Rebuilding National Identity

Authors: Mohammad Afghari

Abstract:

Since the inception of intellectual movements in Iran, Iranian thinkers have perennially found themselves at the crossroads of indigenous traditionalism and Western orientation. On the one hand, supporters of indigenous thinking have emphasized the defense of cultural, national, and religious values. On the other hand, Western-leaning intellectuals, often derogatorily labeled as ‘Westoxicated’ by their indigenous counterparts, have been inclined towards embracing non-indigenous ideas and ideologies, primarily of Western origin. In this historical context, the dualistic nature of Iranian intellectuals, evolving amidst the era of globalization and its swift advancements in communication, has not only retained its inherent character but has evolved into a broader duality that can be identified as ‘Iranian-Cosmopolitan’. In this duality, both in its classical form of indigenous-Western and in its contemporary manifestation as Iranian-Cosmopolitan, the Iranian national identity has consistently been a significant part of intellectual discussions. While critically examining this dualism through a historical lens and drawing upon the theories of Anthony Smith, a historical sociologist and British theorist of nationalism, this article delves into the importance of aligning national identity with the prevailing societal transformations, especially globalization. It underscores that Iranian intellectuals, in order to reconstruct national identity in the present age, will find no solution other than discarding this dualism and rebuilding national identity within a global framework.

Keywords: Iran, Iranian intellectuals, globalization, localism, national identity, cosmopolitan

Procedia PDF Downloads 48
1979 Design, Synthesis, and Catalytic Applications of Functionalized Metal Complexes and Nanomaterials for Selective Oxidation and Coupling Reactions

Authors: Roghaye Behroozi

Abstract:

The development of functionalized metal complexes and nanomaterials has gained significant attention due to their potential in catalyzing selective oxidation and coupling reactions. These catalysts play a crucial role in various industrial and pharmaceutical processes, enhancing the efficiency, selectivity, and sustainability of chemical reactions. This research aims to design and synthesize new functionalized metal complexes and nanomaterials and to explore their catalytic applications in the selective oxidation of alcohols and in coupling reactions, focusing on improving yield, selectivity, and catalyst reusability. The study involves the synthesis of a nickel Schiff base complex stabilized within MCM-41 as a heterogeneous catalyst. A Schiff base ligand derived from glycine was used to create a tin(IV) metal complex, characterized through spectroscopic techniques and computational analysis. Additionally, iron-based magnetic nanoparticles functionalized with melamine were synthesized for catalytic evaluation. Lastly, a palladium(IV) complex was prepared, and its oxidative stability was analyzed. The nickel Schiff base catalyst showed high selectivity in converting primary and secondary alcohols to aldehydes and ketones, with yields ranging from 73% to 90%. The tin(IV) complex demonstrated well-defined structural and electronic properties, with consistent results between experimental and computational data. The melamine-functionalized iron nanoparticles exhibited efficient catalytic activity in producing triazoles, with enhanced reaction speed and reusability. The palladium(IV) complex displayed remarkable stability and low reactivity towards C–C bond formation due to its symmetrical structure. The synthesized metal complexes and nanomaterials demonstrated significant potential as efficient, selective, and reusable catalysts for oxidation and coupling reactions.
These findings pave the way for developing environmentally friendly and cost-effective catalytic systems for industrial applications.

Keywords: catalysts, Schiff base complexes, metal-organic frameworks, oxidation reactions, nanoparticles, reusability

Procedia PDF Downloads 15
1978 19th Century Exam, 21st Century Policing: An Examination of the New York State Civil Service and Police Officer Recruitment Efforts

Authors: A. Edwards

Abstract:

The civil service was created to reform the hiring process for public officials, changing the patronage system to a merit-based system. Though exam reforms continued throughout the 20th century, there have been few during the 21st century, particularly in New York State. In the case of police departments, the civil service exam has acted as a hindrance to their ‘21st Century Policing’ goals, and new exam reform efforts have left out officers' voices and concerns. Through in-depth interviews with current and retired police officers and local and state civil service administrators in Albany County, New York, this study seeks to understand police influence and insight regarding the civil service exam, placing some of the voice and input for civil service reform with police departments instead of local and state bureaucrats. The study also looks at the relationship between civil service administrators and police departments. Using practice theory, the study seeks to understand the ways in which the civil service exam was defined in the 20th century and how it is out of step with current thinking, while examining possible changes to the civil service exam that would lead to a more equitable hiring process and more successful police departments.

Keywords: civil service, hiring, merit, policing

Procedia PDF Downloads 203
1977 Enhancing English Language Skills Integratively through Short Stories

Authors: Dinesh Kumar Yadav

Abstract:

Short stories are deeply rooted in language syllabi everywhere, and their relevance to language development is manifold. Short stories have the power to take students to the target culture directly from the classroom, and they work as a crucial factor in enhancing language skills in different ways. This article is the outcome of an experimental study conducted for a month on 12th graders, who were engaged in various creative and critical-thinking activities along with tasks ranging from the knowledge level to the application level. The sole purpose was to build up their confidence in speaking in the classroom as well as to develop all their language skills simultaneously. At the start of the class in August 2021, the students' speaking skill and their confidence in speaking in class were tested. The test was immediately followed by the presentation of a short story from their culture, and the students were engaged in different tasks related to the story. PowerPoint slides, handouts with the story, and photocopied tasks were used as tools whenever needed. A one-month class focused exclusively on speaking skills through sharing stories was found to be very helpful in developing the learners' confidence, and the result was very satisfactory: a large number of students became responsive in class. Their proficiency level was not yet satisfactory; however, their effort to speak in class was a very positive sign of language development.

Keywords: short stories, relevance, language enhancement, language proficiency

Procedia PDF Downloads 94
1976 Emotional Artificial Intelligence and the Right to Privacy

Authors: Emine Akar

Abstract:

The majority of privacy-related regulation has traditionally focused on concepts that are perceived to be well understood or easily describable, such as certain categories of data and personal information or images. In the past century, such regulation appeared reasonably suitable for its purposes. However, technologies such as AI, combined with ever-increasing capabilities to collect, process, and store “big data”, not only require calibration of these traditional understandings but may require re-thinking of entire categories of privacy law. In the presentation, it will be explained, against the background of various emerging technologies under the umbrella term “emotional artificial intelligence”, why modern privacy law will need to embrace human emotions as potentially private subject matter. This argument can be made on a jurisprudential level, given that human emotions can plausibly be accommodated within the various concepts that are traditionally regarded as the underlying foundation of privacy protection, such as, for example, dignity, autonomy, and liberal values. However, the practical reasons for regarding human emotions as potentially private subject matter are perhaps more important (and very likely more convincing from the perspective of regulators). In that respect, it should be regarded as alarming that, according to most projections, the usefulness of emotional data to governments and, particularly, private companies will not only lead to radically increased processing and analysing of such data but, concerningly, to exponential growth in the collection of such data. In light of this, it is also necessary to discuss options for how regulators could address this emerging threat.

Keywords: AI, privacy law, data protection, big data

Procedia PDF Downloads 88
1975 Assessment of the Performance of the Sonoreactors Operated at Different Ultrasound Frequencies, to Remove Pollutants from Aqueous Media

Authors: Gabriela Rivadeneyra-Romero, Claudia del C. Gutierrez Torres, Sergio A. Martinez-Delgadillo, Victor X. Mendoza-Escamilla, Alejandro Alonzo-Garcia

Abstract:

Ultrasonic degradation is currently used in sonochemical reactors to degrade pollutant compounds in aqueous media, such as emerging contaminants (e.g., pharmaceuticals, drugs, and personal care products), because these can have ecological impacts on the environment. For this reason, it is important to develop appropriate water and wastewater treatments able to reduce pollution and increase reuse. Pollutants such as textile dyes, aromatic and phenolic compounds, chlorobenzene, bisphenol-A, carboxylic acids, and other organic pollutants can be removed from wastewaters by sonochemical oxidation. The removal of pollutants depends on the ultrasonic frequency used; however, few studies have addressed the behavior of the fluid in sonoreactors operated at different ultrasonic frequencies. Based on the above, it is necessary to study the hydrodynamic behavior of the liquid generated by ultrasonic irradiation in order to design efficient sonoreactors that reduce treatment times and costs. In this work, the hydrodynamic behavior of the fluid in sonochemical reactors was studied at different frequencies (250 kHz, 500 kHz, and 1000 kHz). The performance of the sonoreactors at those frequencies was simulated using computational fluid dynamics (CFD). Because there is a large sound-speed gradient between the piezoelectric transducer and the fluid, k-ε models were used. The piezoelectric was defined as a vibrating surface in order to evaluate the effect of the different frequencies on the fluid in the sonochemical reactor. Structured hexahedral cells were used to mesh the computational liquid domain, and fine triangular cells were used to mesh the piezoelectric transducers. Unsteady-state conditions were used in the solver. The dissipation rate, flow-field velocities, Reynolds stresses, and turbulent quantities were estimated by CFD and 2D-PIV measurements.
Test results show that there is no necessary correlation between an increase in ultrasonic frequency and pollutant degradation; moreover, the reactor geometry and power density are important factors that should be considered in sonochemical reactor design.
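As a hypothetical illustration (not drawn from the study itself), the acoustic wavelength in water shrinks as frequency rises, which alters the standing-wave pattern inside the reactor; together with the power density noted above, this is one reason frequency alone does not determine degradation efficiency. The sound speed and the 50 W / 2 L figures below are assumed example values.

```python
# Illustrative back-of-envelope quantities for sonoreactor design (assumed values).
C_WATER = 1480.0  # approximate speed of sound in water at ~20 C, in m/s

def wavelength_mm(frequency_hz):
    """Acoustic wavelength in water, in millimetres: lambda = c / f."""
    return C_WATER / frequency_hz * 1000.0

def power_density(power_w, volume_l):
    """Ultrasonic power density in W/L, one of the design factors noted above."""
    return power_w / volume_l

for f_khz in (250, 500, 1000):  # the three frequencies studied in the abstract
    print(f"{f_khz} kHz -> wavelength {wavelength_mm(f_khz * 1000):.2f} mm")
print(f"power density: {power_density(50.0, 2.0):.1f} W/L")  # e.g. 50 W into 2 L
```

At 250 kHz the wavelength is roughly 6 mm, while at 1000 kHz it is about 1.5 mm, so the same reactor geometry hosts very different acoustic fields at the three frequencies compared.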

Keywords: CFD, reactor, ultrasound, wastewater

Procedia PDF Downloads 190
1974 The Symbiotic Relation of Mythical Stories in Transforming Human Lives

Authors: Gayatri Kanwar

Abstract:

The purpose of this research paper is to explore the power of myth in changing human lives: myth establishes patterns in the human psyche and affects ways of thinking, as myths unveil various subjects, ideas, and challenges. Through mythological stories, one comes to understand the images behind emotions and feelings; these images influence a person by changing thought patterns, and their therapeutic value sets the individual on a path of healing and transforms human lives. Every civilization in olden times had a vast store of myths to live by. These were not ordinary stories of everyday life but exemplary cases, narrated through oral traditions in a sacred manner, that revealed the 'way to live life'. The mythical stories had a spiritual touch that brought listeners to an acceptance of suffering or to a solution to their life problems. In modern times, the age-old myths have lost their significance. Yet each of us carries countless stories of our own lives and all their happenings, and each being is therefore a natural narrator. Everybody tells stories about their lives; hence, one tends to know oneself, as well as to seek understanding of others, through them. When people remember their stories, they speak in narratives, and as Jung stated, these narratives grow into a personal mythology one lives by. Nonetheless, there are times when one becomes stuck in one's own stories or myths. Hence, mythology can change one's perception and open pathways to other ways of discovering, feeling, and experiencing one's life.

Keywords: power of myths, significance of myths in modern times, transforming human lives, benefits to society

Procedia PDF Downloads 402
1973 Intervention of Self-Limiting L1 Inner Speech during L2 Presentations: A Study of Bangla-English Bilinguals

Authors: Abdul Wahid

Abstract:

Inner speech, also known as verbal thinking, self-talk, or private speech, is characterized by a subjective language experience in the absence of overt or audible speech. It is a psychological form of verbal activity that is rehearsed without the articulation of any sound wave. In psychology, self-limiting speech means speech that contains information inhibiting the development of the self. People, in most cases, experience inner speech in their first language. This is very frequent in Bangladesh, where Bangla (L1) speaking students lose track of speech during their presentations in English (L2). This paper investigates the long pauses (more than 0.4 seconds) in English (L2) presentations by Bangla-speaking students (18-21 years old) and finds the intervention of Bangla (L1) inner speech to be one of their causes. The presenters' overt speeches were loaded into the Audacity audio editing software, where the lengths of pauses were measured in milliseconds. The Varieties of Inner Speech Questionnaire (VISQ) was administered randomly among the participants, out of whom 20 with a similar phenomenology of inner speech were selected. They were interviewed to describe the type and content of the voices that went on in their heads during the long pauses. The qualitative interview data were then coded and converted into quantitative data. It was observed that in more than 80% of cases, students experience self-limiting inner speech/self-talk during their unwanted pauses in L2 presentations.
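The pause-measurement step described above can be sketched in code. The following is an assumed reconstruction, not the authors' actual procedure: given per-frame loudness values of a recording, it finds silent runs longer than the 0.4-second threshold used in the study. The frame length and silence threshold are illustrative parameters.

```python
# A minimal sketch of long-pause detection in a recorded presentation.
def find_long_pauses(levels, frame_s, silence_thresh=0.01, min_pause_s=0.4):
    """Return (start_time, duration) pairs for silences longer than min_pause_s.

    `levels` is a list of per-frame loudness values; `frame_s` is the frame
    length in seconds (both are illustrative assumptions, not from the paper).
    """
    pauses, run_start = [], None
    for i, level in enumerate(levels + [float("inf")]):  # sentinel closes a final run
        if level < silence_thresh:
            if run_start is None:
                run_start = i  # a silent run begins
        elif run_start is not None:
            duration = (i - run_start) * frame_s
            if duration > min_pause_s:  # keep only pauses longer than 0.4 s
                pauses.append((run_start * frame_s, duration))
            run_start = None
    return pauses

# 10 ms frames: 0.3 s of speech, 0.5 s of silence, then 0.2 s of speech.
levels = [0.2] * 30 + [0.0] * 50 + [0.2] * 20
print(find_long_pauses(levels, 0.01))  # one pause of ~0.5 s starting near 0.3 s
```

The qualitative interview step would then match each detected pause against the speaker's report of what went on in their head during it.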

Keywords: Bangla-English Bilinguals, inner speech, L1 intervention in bilingualism, motor schema, pauses, phonological loop, phonological store, working memory

Procedia PDF Downloads 152
1972 High-Fidelity Materials Screening with a Multi-Fidelity Graph Neural Network and Semi-Supervised Learning

Authors: Akeel A. Shah, Tong Zhang

Abstract:

Computational approaches to learning the properties of materials are commonplace, motivated by the need to screen or design materials for a given application, e.g., semiconductors and energy storage. Experimental approaches can be both time consuming and costly. Unfortunately, computational approaches such as ab-initio electronic structure calculations and classical or ab-initio molecular dynamics can themselves be too slow for the rapid evaluation of materials, which often involves thousands to hundreds of thousands of candidates. Machine learning assisted approaches have been developed to overcome the time limitations of purely physics-based approaches. These approaches, on the other hand, require large volumes of data for training (hundreds of thousands of examples on many standard data sets such as QM7b). This means that they are limited by how quickly such a large data set of physics-based simulations can be established. At high fidelity, such as configuration interaction, composite methods such as G4, and coupled cluster theory, gathering such a large data set can become infeasible, which can compromise the accuracy of the predictions; many applications require high accuracy, for example band structures and energy levels in semiconductor materials and the energetics of charge transfer in energy storage materials. In order to circumvent this problem, multi-fidelity approaches can be adopted, for example the Δ-ML method, which learns a high-fidelity output from a low-fidelity result such as Hartree-Fock or density functional theory (DFT).
The general strategy is to learn a map between the low- and high-fidelity outputs, so that the high-fidelity output is obtained as a simple sum of the physics-based low-fidelity result and the learned correction. Although this requires a low-fidelity calculation, it typically requires far fewer high-fidelity results to learn the correction map; furthermore, the low-fidelity result, such as Hartree-Fock or semi-empirical ZINDO, is typically quick to obtain. For high-fidelity outputs this can amount to an order of magnitude or more of speed-up. In this work, a new multi-fidelity approach is developed, based on a graph convolutional network (GCN) combined with semi-supervised learning. The GCN allows the material or molecule to be represented as a graph, which is known to improve accuracy, as in, for example, SchNet and MEGNet. The graph incorporates information regarding the numbers, types, and properties of atoms; the types of bonds; and bond angles. The key to the accuracy of multi-fidelity methods, however, is the incorporation of the low-fidelity output to learn its high-fidelity equivalent, in this case by learning their difference. Semi-supervised learning is employed to allow for different numbers of low- and high-fidelity training points, by using an additional GCN-based low-fidelity map to predict high-fidelity outputs. It is shown on 4 different data sets that a significant (at least one order of magnitude) increase in accuracy is obtained, using one to two orders of magnitude fewer low- and high-fidelity training points. One of the data sets is developed in this work, pertaining to 1000 simulations of quinone molecules (up to 24 atoms) at 5 different levels of fidelity, furnishing the energy, dipole moment, and HOMO/LUMO.
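The Δ-ML idea described above can be shown with a toy example. This is an illustrative sketch only, not the paper's GCN model: a cheap "low-fidelity" function approximates an expensive "high-fidelity" one, and a small model is fitted to their difference, so the high-fidelity value is predicted as low-fidelity plus learned correction. The functions and descriptor are invented stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=(200, 1))        # a stand-in molecular descriptor
low = np.sin(3 * x) + 0.3 * x                # cheap "low-fidelity" energy
high = np.sin(3 * x) + 0.3 * x + 0.1 * x**2  # costly "high-fidelity" energy

# Fit the *difference* with a small polynomial model: the correction is much
# simpler than `high` itself, so far fewer high-fidelity points are needed.
delta = (high - low).ravel()
coeffs = np.polyfit(x.ravel(), delta, deg=2)

# Predict high fidelity as: low-fidelity calculation + learned correction.
x_new = np.array([0.5])
prediction = (np.sin(3 * x_new) + 0.3 * x_new) + np.polyval(coeffs, x_new)
truth = np.sin(3 * x_new) + 0.3 * x_new + 0.1 * x_new**2
print(float(prediction[0]), float(truth[0]))  # the two agree closely
```

In the paper's setting the polynomial is replaced by a GCN over the molecular graph, and semi-supervised learning handles the mismatch in the numbers of low- and high-fidelity points; the additive structure of the prediction is the part this sketch illustrates.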

Keywords: materials screening, computational materials, machine learning, multi-fidelity, graph convolutional network, semi-supervised learning

Procedia PDF Downloads 41
1971 Design and Construction of an Intelligent Multiplication Table for Enhanced Education and Increased Student Engagement

Authors: Zahra Alikhani Koopaei

Abstract:

In the fifth lesson of the third-grade mathematics book, students are introduced to the concept of multiplication. However, some students showed a lack of interest in learning this topic. To address this, a simple electronic multiplication table was designed with the aim of making the concept of multiplication entertaining and engaging for students, providing them with moments of excitement during the learning process. To achieve this goal, a device was created that produces a bell sound when two wire ends are connected: one wire end is connected to a specific problem in the multiplication table, and the other end is linked to the corresponding answer. Consequently, if the answer is correct, the bell rings. This study employs interactive and engaging methods to teach mathematics, particularly to students who had previously shown little interest in the subject. By integrating game-based learning and critical thinking, we observed an increase in understanding of and interest in learning multiplication compared to before using this method, which further motivated the students. As a result, the intelligent multiplication table was successfully designed, and students, under the instructor's supervision, could easily construct the device during the lesson. Through these activities, the concept of multiplication was firmly established in the students' minds. Engaging multiple intelligences in each student promotes a more stable and improved understanding of the concept of multiplication.
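The logic of the wired board can be expressed as a tiny program. This is a hypothetical software analogue, not part of the device itself: the bell "rings" only when a problem terminal is matched to its correct answer terminal.

```python
# Map each problem terminal (a, b) to the answer terminal it is wired to.
PROBLEMS = {(a, b): a * b for a in range(1, 10) for b in range(1, 10)}

def connect(problem, answer_terminal):
    """Return True (ring the bell) iff the two connected ends match."""
    return PROBLEMS[problem] == answer_terminal

print(connect((7, 8), 56))  # True  -> bell rings
print(connect((7, 8), 54))  # False -> silence
```

On the physical board, the dictionary lookup corresponds to the wiring: a closed circuit between matching terminals completes the bell's path.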

Keywords: intelligent multiplication table, design, construction, education, increased interest, students

Procedia PDF Downloads 69
1970 Study of Morning-Glory Spillway Structure in Hydraulic Characteristics by CFD Model

Authors: Mostafa Zandi, Ramin Mansouri

Abstract:

Spillways are among the most important hydraulic structures of dams, providing the stability of the dam and downstream areas at times of flood. The morning-glory spillway is a common spillway for discharging overflow water from behind dams; this kind of spillway is constructed at dams with small reservoirs. In this research, the hydraulic flow characteristics of a morning-glory spillway are investigated with a CFD model. Two-dimensional unsteady RANS equations were solved numerically using the Finite Volume Method. The PISO scheme was applied for velocity-pressure coupling. The most widely used two-equation turbulence models, k-ε and k-ω, were chosen to model the Reynolds shear stress term. The power-law scheme was used for discretization of the momentum, k, ε, and ω equations. The VOF method (geometric reconstruction algorithm) was adopted for interface simulation. The results show that a fine computational grid, a velocity inlet condition at the flow input boundary, and a pressure outlet at the boundaries in contact with the air provide the best possible results. The standard wall function was chosen for the wall treatment, and the standard k-ε turbulence model gave results most consistent with the experimental ones. As the jet approaches the end of the basin, the differences between the computational and experimental results increase, and the lower profile of the water jet was found to be less sensitive than the upper profile. In the pressure test, it was also found that the numerical pressure values near the lower landing point differ greatly from the experimental results. The characteristics of the complex flows over a morning-glory spillway were studied numerically using a RANS solver. A grid study showed that the numerical results of a 57512-node grid had the best agreement with the experimental values.
The preferred downstream channel length was found to be 1.5 m, and the standard k-ε turbulence model produced the best results for the morning-glory spillway. The numerical free-surface profiles followed the theoretical equations very well.

Keywords: morning-glory spillway, CFD model, hydraulic characteristics, wall function

Procedia PDF Downloads 77
1969 The Impact of Technology on Handicapped and Disability

Authors: George Kamil Kamal Abdelnor

Abstract:

Every major educational institution has incorporated diversity, equity, and inclusion (DEI) principles into its administrative, hiring, and pedagogical practices. Yet these DEI principles rarely incorporate explicit language or critical thinking about disability. Despite the fact that, according to the World Health Organization, one in five people worldwide is disabled, making disabled people the largest minority group in the world, disability remains the neglected stepchild of DEI. Drawing on disability studies and crip theory frameworks, the underlying causes of this exclusion of disability from DEI are examined, such as stigma, shame, invisible disabilities, institutionalization/segregation/separation from family, and competing models and definitions of disability. This paper explores both the ideological and practical shifts necessary to include disability in university DEI initiatives. It offers positive examples as well as conceptual frameworks, such as 'diversability', for doing so. Using Georgetown University’s 2020-2022 DEI initiatives as a case study, this paper describes how curricular infusion, accessibility, identity, community, and diversity administration infused one university’s DEI initiatives with concrete disability-inclusive measures. It concludes with a consideration of how the very framework of DEI itself might be challenged and transformed if disability were to be included.

Keywords: cognitive disability, cognitive diversity, disability, higher education disability, Standardized Index of Diversity of Disability (SIDD), differential and diversity in disability, 60+ population diversity, equity, inclusion, crip theory, accessibility

Procedia PDF Downloads 38
1968 Convolutional Neural Network Based on Random Kernels for Analyzing Visual Imagery

Authors: Ja-Keoung Koo, Kensuke Nakamura, Hyohun Kim, Dongwha Shin, Yeonseok Kim, Ji-Su Ahn, Byung-Woo Hong

Abstract:

The machine learning techniques based on convolutional neural networks (CNNs) have been actively developed and successfully applied to a variety of image analysis tasks, including reconstruction, noise reduction, resolution enhancement, segmentation, motion estimation, and object recognition. Classical visual information processing, ranging from low-level tasks to high-level ones, has been widely developed within the deep learning framework. Deriving visual interpretation from high-dimensional imagery data is generally considered a challenging problem. A CNN is a class of feed-forward artificial neural network that usually consists of deep layers whose connections are established by a series of non-linear operations. The CNN architecture is known to be shift invariant due to its shared weights and translation-invariance characteristics. However, it is often computationally intractable to optimize the network, particularly with a large number of convolution layers, due to the large number of unknowns to be optimized with respect to a training set that generally must be large enough to effectively generalize the model under consideration. It is also necessary to limit the size of the convolution kernels due to the computational expense, despite the recent development of effective parallel processing machinery, which leads to the use of consistently small convolution kernels throughout the deep CNN architecture. However, it is often desirable to consider different scales in the analysis of visual features at different layers in the network. Thus, we propose a CNN model in which convolution kernels of different sizes are applied at each layer based on random projection. We apply random filters of varying sizes and associate the filter responses with scalar weights that correspond to the standard deviation of the random filters. This allows us to use a large number of random filters at the cost of one scalar unknown per filter.
The computational cost of the back-propagation procedure does not increase with larger filter sizes, even though additional computational cost is incurred in the computation of convolutions in the feed-forward procedure. The use of random kernels of varying sizes allows image features to be analyzed effectively at multiple scales, leading to better generalization. The robustness and effectiveness of the proposed CNN based on random kernels are demonstrated by numerical experiments, in which well-known CNN architectures are quantitatively compared with our models, which simply replace the convolution kernels with random filters. The experimental results indicate that our model achieves better performance with fewer unknown weights. The proposed algorithm has high potential for application to a variety of visual tasks within the CNN framework. Acknowledgement: This work was supported by the MISP (Ministry of Science and ICT), Korea, under the National Program for Excellence in SW (20170001000011001) supervised by IITP, and NRF-2014R1A2A1A11051941, NRF2017R1A2B4006023.
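The core idea of one fixed random filter per scale plus a single trainable scalar can be sketched directly. This is our reading of the scheme, not the authors' released code, and the image size, kernel sizes, and cropping rule below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def conv2d_valid(image, kernel):
    """Plain 'valid' 2-D sliding-window correlation (no kernel flipping)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = rng.standard_normal((16, 16))
# Frozen random filters at three scales; these are never trained.
kernels = [rng.standard_normal((k, k)) / k for k in (3, 5, 7)]
# The ONLY trainable parameters: one scalar weight per filter.
alphas = np.ones(len(kernels))

# Multi-scale feature: weighted sum of responses, cropped to a common size.
responses = [conv2d_valid(image, k) for k in kernels]
crop = min(r.shape[0] for r in responses)
feature = sum(a * r[:crop, :crop] for a, r in zip(alphas, responses))
print(feature.shape)  # (10, 10): 16 - 7 + 1 for the largest kernel
```

Because only the `alphas` would be updated during training, back-propagation touches one unknown per filter regardless of kernel size, which is the cost argument made above.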

Keywords: deep learning, convolutional neural network, random kernel, random projection, dimensionality reduction, object recognition

Procedia PDF Downloads 290
1967 Artificial Neural Network Based Model for Detecting Attacks in Smart Grid Cloud

Authors: Sandeep Mehmi, Harsh Verma, A. L. Sangal

Abstract:

Ever since the idea was floated of delivering computing services as a commodity, like other utilities such as electricity and telephony, the scientific community has directed its research toward a new area called utility computing. New paradigms such as cluster computing and grid computing came into existence on the way to utility computing. With the advent of the internet, the demand for anytime, anywhere access to resources that could be provisioned dynamically as a service gave rise to the next-generation computing paradigm known as cloud computing. Today, cloud computing has become one of the fastest-growing computing paradigms, with a rising rate of adoption in IT outsourcing. Besides catering to computational and storage demands, cloud computing has economically benefitted almost all fields: education, research, entertainment, medicine, banking, military operations, weather forecasting, business, and finance, to name a few. The smart grid is another discipline that urgently needs to benefit from the advantages of cloud computing. The smart grid is a new technology that has revolutionized the power sector by automating the transmission and distribution system and integrating smart devices. A cloud-based smart grid can fulfill the storage requirements of the unstructured and uncorrelated data generated by smart sensors, as well as the computational needs of self-healing, load balancing, and demand-response features. However, security issues such as confidentiality, integrity, availability, accountability, and privacy need to be resolved for the development of the smart grid cloud. In recent years, a number of intrusion prevention techniques have been proposed in the cloud, but hackers and intruders still manage to bypass cloud security. Therefore, precise intrusion detection systems need to be developed in order to secure critical information infrastructure like the smart grid cloud.
Considering the success of artificial neural networks in building robust intrusion detection, this research proposes an artificial neural network based model for detecting attacks in smart grid cloud.
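As a rough illustration of the kind of network such a detector rests on, the sketch below trains a small feed-forward neural network on synthetic, hypothetical "traffic features" (real smart grid cloud data would come from network and host audit records, and the architecture here is an assumption, not the authors' model).

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic two-feature data; label 1 = attack, 0 = normal traffic.
X = rng.standard_normal((200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)[:, None]

# One hidden layer, trained with full-batch gradient descent.
W1 = rng.standard_normal((2, 8)) * 0.5
b1 = np.zeros(8)
W2 = rng.standard_normal((8, 1)) * 0.5
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses, lr = [], 0.1
for _ in range(1000):
    h = np.tanh(X @ W1 + b1)           # hidden activations
    p = sigmoid(h @ W2 + b2)           # predicted attack probability
    eps = 1e-9
    losses.append(-np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps)))
    dz2 = (p - y) / len(X)             # cross-entropy gradient w.r.t. logits
    dW2, db2_ = h.T @ dz2, dz2.sum(0)
    dz1 = (dz2 @ W2.T) * (1 - h**2)    # back-propagate through tanh
    dW1, db1_ = X.T @ dz1, dz1.sum(0)
    W1 -= lr * dW1; b1 -= lr * db1_
    W2 -= lr * dW2; b2 -= lr * db2_

preds = (sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2) > 0.5).astype(float)
```

An actual smart grid cloud detector would replace the toy generator with labeled audit data and a correspondingly larger network, but the training loop is the same in outline.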

Keywords: artificial neural networks, cloud computing, intrusion detection systems, security issues, smart grid

Procedia PDF Downloads 318
1966 A Design Approach in Architectural Education: Parasitic Architecture

Authors: Ozlem Senyigit, Nur Yilmaz

Abstract:

Throughout architectural education, the aim is to provide students with the ability to find original solutions to current problems. In this sense, workshops that foster creative thinking in action, let students experience the environment, and demand instant solutions to problems have an important place in the education process. Parasitic architecture, a contemporary design approach on the architectural agenda, comprises small-scale designs integrated into the carrier systems of existing structures within gaps of the existing urban fabric, resembling the host-parasite relationship in biology. The scope of this study is a 12-week experimental workshop on 'parasitic architecture', designed within the Basic Design 2 course of the Department of Architecture of Çukurova University in the 2017-2018 academic year. In this study, parasitic architecture was discussed as a space design method. Students analyzed the campus of Çukurova University and drew sketches to identify gaps in it. During the workshop, the function-form-context relationship was discussed. The output products were evaluated within the context of urban spaces/gaps and functional requirements, and students gained awareness not just of urban occupancy but also of its gaps.

Keywords: design approach, parasitic architecture, experimental workshop, architectural education

Procedia PDF Downloads 157
1965 Numerical Solutions of Generalized Burger-Fisher Equation by Modified Variational Iteration Method

Authors: M. O. Olayiwola

Abstract:

Numerical solutions of the generalized Burger-Fisher equation are obtained using a Modified Variational Iteration Method (MVIM) with minimal computational effort. The computed results have been compared with results from other techniques. The present method is seen to be a very reliable alternative to some existing techniques for such nonlinear problems.
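For readers who want a concrete baseline to compare MVIM results against, here is a plain explicit finite-difference solver for the generalized Burger-Fisher equation u_t + a·u^d·u_x = u_xx + b·u·(1 - u^d). This is not the MVIM of the abstract; the parameter values, grid, and initial front are illustrative assumptions.

```python
import numpy as np

a, b, d = 1.0, 1.0, 1.0            # illustrative parameters
nx, dt, steps = 201, 0.002, 500    # dt < dx**2 / 2 for explicit stability
x = np.linspace(-10.0, 10.0, nx)
dx = x[1] - x[0]
u = 0.5 * (1.0 - np.tanh(x / 4.0))  # smooth front between 1 and 0

for _ in range(steps):
    un = u.copy()
    ux = (un[2:] - un[:-2]) / (2 * dx)              # central first derivative
    uxx = (un[2:] - 2 * un[1:-1] + un[:-2]) / dx**2  # second derivative
    mid = un[1:-1]
    u[1:-1] = mid + dt * (uxx - a * mid**d * ux + b * mid * (1 - mid**d))
    # Dirichlet ends stay at their initial far-field values (≈1 and ≈0)
```

Any semi-analytical solution such as MVIM can be checked against the profile this produces at t = dt·steps.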

Keywords: burger-fisher, modified variational iteration method, lagrange multiplier, Taylor’s series, partial differential equation

Procedia PDF Downloads 430
1964 Learn Better to Earn Better: Importance of CPD in Dentistry

Authors: Junaid Ahmed, Nandita Shenoy

Abstract:

Maintaining lifelong knowledge and skills is essential for safe clinical practice. Continuing Professional Development (CPD) is an established method that can facilitate lifelong learning. It focuses on maintaining or developing knowledge, skills, and relationships to ensure competent practice. To date, relatively little has been done to comprehensively and systematically synthesize evidence identifying the subjects of interest among practising dentists. Hence, the aim of our study was to identify areas of clinical practice that would be favourable for continuing professional dental education amongst practising dentists. Participants of this study were the practising dental surgeons of Mangalore, a city in Dakshina Kannada, Karnataka. 95% of the practitioners felt that a regular one-day update program once every 3-6 months is required to keep them abreast in clinical practice. 60% of subjects felt that CPD programs enrich their theoretical knowledge and help in patient care. 27% of them felt that CPD programs should relate to general dentistry. Most of them felt that CPD programs should cost no more than a nominal one to two thousand rupees. The acronym 'CPD' should be seen in a broader view in which professionals continuously enhance not only their knowledge and skills but also their thinking, understanding, and maturity; they grow not only as professionals but also as persons; and their development is not restricted to their work roles but may also extend to new roles and responsibilities.

Keywords: continuing professional development, competent practice, dental education, practising dentist

Procedia PDF Downloads 260
1963 Stories of Digital Technology and Online Safety: Storytelling as a Tool to Find out Young Children’s Views on Digital Technology and Online Safety

Authors: Lindsey Watson

Abstract:

This research is aimed at facilitating and listening to the voices of younger children, recognising their contributions to research about the things that matter to them. Digital technology increasingly impacts the lives of young children; this study therefore aimed to increase children's agency by recognising and involving their perspectives, helping to build a wider understanding of younger children's perceptions of online safety. Using a phenomenological approach, the paper discusses how storytelling, as a creative methodological approach, enabled an agentic space in which children could express their views, knowledge, and perceptions of their engagement with the digital world. Setting and parental informed consent were gained, alongside an adapted approach to child assent using child-friendly language and emoji stickers, which was also recorded verbally. Findings demonstrate that younger children are thinking about many aspects of digital technology and how it impacts their lives, and that storytelling as a research method is a useful tool for facilitating conversations with young children. The paper thus seeks to recognise and evaluate how creative methodologies can provide insights into children's understanding of online safety, and how this can inform practitioners and parents in supporting younger children in a digital world.

Keywords: early childhood, family, online safety, phenomenology, storytelling

Procedia PDF Downloads 129
1962 Analyzing Sociocultural Factors Shaping Architects’ Construction Material Choices: The Case of Jordan

Authors: Maiss Razem

Abstract:

The construction sector is a major consumer of materials, which undergo extraction, processing, transportation, and maintenance when used in buildings. Several metrics have been devised to capture the environmental impact of the materials consumed during construction using lifecycle thinking. Rarely has the materiality of this sector been explored qualitatively and systemically. This paper explores the socio-cultural forces that drive the use of certain materials in the Jordanian construction industry, using practice theory, more specifically Shove et al.'s three-element model, as a heuristic method of analysis. Drawing on semi-structured interviews with architects, the results unravel contextually embedded routines that determine the qualities of the three materialities highlighted herein: stone, glass, and spatial openness. The study highlights the inadequacy of efficiency as the sole quantitative metric of sustainable materials and argues for the need to link material consumption with socio-economic, cultural, and aesthetic driving forces. Operationalizing practice theory by tracing materials' lifetimes as they integrate with competencies and meanings captures dynamic engagements through the analyzed routines of actors in construction practice. This study can offer policymakers a better-nuanced representation for greening this sector beyond efficiency rhetoric and quantitative metrics.

Keywords: architects' practices, construction materials, Jordan, practice theory

Procedia PDF Downloads 169
1961 Bayesian Parameter Inference for Continuous Time Markov Chains with Intractable Likelihood

Authors: Randa Alharbi, Vladislav Vyshemirsky

Abstract:

Systems biology is an important field of science that focuses on studying the behaviour of biological systems. Modelling is required to produce a detailed description of the elements of a biological system, their functions, and their interactions. A well-designed model requires selecting a suitable mechanism that can capture the main features of the system, defining its essential components, and representing an appropriate law governing the interactions between those components. Complex biological systems exhibit stochastic behaviour; thus, probabilistic models are suitable for describing and analysing them. The Continuous-Time Markov Chain (CTMC) is one such probabilistic model, describing the system as a set of discrete states with continuous-time transitions between them. The system is then characterised by a set of probability distributions describing the transition from one state to another at a given time. The evolution of these probabilities through time is governed by the chemical master equation, which is analytically intractable but can be simulated. Uncertain parameters of such a model can be inferred using methods of Bayesian inference. Yet inference in such a complex system is challenging, as it requires evaluating the likelihood, which is intractable in most cases. Different statistical methods allow simulation from the model despite the intractability of the likelihood. Approximate Bayesian computation is a common approach to inference that relies on simulation of the model to approximate the intractable likelihood. Particle Markov chain Monte Carlo (PMCMC) is another approach, based on using sequential Monte Carlo to estimate the intractable likelihood. However, both methods are computationally expensive. In this paper, we discuss the efficiency and possible practical issues of each method, taking their computational time into account.
We demonstrate likelihood-free inference by analysing a model of the Repressilator using both methods. A detailed investigation is performed to quantify the difference between the methods in terms of efficiency and computational cost.
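To make the ABC idea concrete on a far simpler CTMC than the Repressilator, the sketch below does rejection ABC on a birth-death process simulated exactly with the Gillespie algorithm. The model, prior, summary statistic, and tolerance are all hypothetical choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def gillespie_birth_death(k, g, t_end=10.0):
    """Exact stochastic simulation: production at rate k, degradation at g*n."""
    t, n = 0.0, 0
    while True:
        total = k + g * n
        t += rng.exponential(1.0 / total)   # time to next reaction
        if t > t_end:
            return n
        if rng.random() < k / total:
            n += 1                           # birth event
        else:
            n -= 1                           # death event

def summary(k, g, reps=5):
    """Summary statistic: mean copy number at t_end over a few replicates."""
    return np.mean([gillespie_birth_death(k, g) for _ in range(reps)])

# Hypothetical observed summary (steady-state mean of a true model k=10, g=1).
obs = 10.0
accepted = []
for _ in range(200):
    k_prop = rng.uniform(1.0, 20.0)              # draw from the prior on k
    if abs(summary(k_prop, 1.0) - obs) < 2.0:    # ABC rejection step
        accepted.append(k_prop)
posterior_mean = np.mean(accepted)
```

PMCMC would replace the rejection step with a particle-filter estimate of the likelihood inside an MCMC chain; both approaches share the Gillespie simulator as their expensive inner loop, which is the computational cost the abstract compares.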

Keywords: approximate Bayesian computation (ABC), continuous-time Markov chains, sequential Monte Carlo, particle Markov chain Monte Carlo (PMCMC)

Procedia PDF Downloads 202
1960 Exploration of Cone Foam Breaker Behavior Using Computational Fluid Dynamic

Authors: G. St-Pierre-Lemieux, E. Askari Mahvelati, D. Groleau, P. Proulx

Abstract:

Mathematical modeling has become an important tool for the study of foam behavior. Computational Fluid Dynamics (CFD) can be used to investigate the behavior of foam around foam breakers to better understand the mechanisms leading to the 'destruction' of foam. The focus of this investigation was the simple cone foam breaker, whose performance has been documented in numerous studies. While the optimal pumping angle is known from the literature, the contributions of pressure drop, shearing, and centrifugal forces to foam syneresis are subject to speculation. This work provides a screening of those factors against changes in cone angle and foam rheology. The CFD simulation was performed with the open-source OpenFOAM toolkit on a full three-dimensional model discretized using hexahedral cells. The geometry was generated by a Python script and then meshed with blockMesh. The OpenFOAM Volume of Fluid (VOF) method (interFoam) was used to obtain a detailed description of the interfacial forces, and the k-omega SST model was used to calculate the turbulence fields. The cone configuration allows the use of a rotating wall boundary condition. In each case, a pair of immiscible fluids, foam/air or water/air, was used. The foam was modeled as a shear-thinning (Herschel-Bulkley) fluid. The results were compared to our measurements and to results found in the literature, first by computing the pumping rate of the cone, and second by the liquid break-up at the exit of the cone. A 3D-printed version of the cones, submerged in foam (shaving cream or soap solution) and water at speeds varying between 400 RPM and 1500 RPM, was also used to validate the modeling results by calculating the torque exerted on the shaft. While most of the literature focuses on cone behavior in Newtonian fluids, this work explores its behavior in a shear-thinning fluid, which better reflects the apparent rheology of foam.
These simulations shed new light on the cone's behavior within the foam and allow computation of the shearing, pressure, and velocity of the fluid, enabling a better evaluation of the efficiency of the cones as foam breakers. This study helps clarify, at least in part, the mechanisms behind foam breaker performance using modern CFD techniques.
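The Herschel-Bulkley constitutive law mentioned above is simple enough to sketch directly: tau = tau0 + K·gamma_dot^n, so the apparent viscosity tau/gamma_dot falls as the shear rate rises whenever n < 1. The parameter values below are illustrative, not measured foam properties.

```python
import numpy as np

# Illustrative Herschel-Bulkley parameters (not fitted to any real foam):
tau0, K, n = 1.0, 0.5, 0.4   # yield stress [Pa], consistency, flow index < 1

gamma_dot = np.logspace(-1, 3, 50)   # shear rates [1/s]
tau = tau0 + K * gamma_dot**n        # shear stress [Pa]
mu_app = tau / gamma_dot             # apparent viscosity [Pa*s]

# mu_app = tau0/gamma_dot + K*gamma_dot**(n-1): both terms decrease with
# shear rate for n < 1, which is the shear-thinning behaviour the rotating
# cone imposes on the foam.
```

This is the same rheological model a CFD solver such as OpenFOAM evaluates per cell when the foam phase is assigned a Herschel-Bulkley viscosity.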

Keywords: bioreactor, CFD, foam breaker, foam mitigation, OpenFOAM

Procedia PDF Downloads 204
1959 Cognitive Dissonance in Robots: A Computational Architecture for Emotional Influence on the Belief System

Authors: Nicolas M. Beleski, Gustavo A. G. Lugo

Abstract:

Robotic agents are taking on more numerous and increasingly important roles in society. To make these robots and agents more autonomous and efficient, their systems have grown considerably complex and convoluted. This growth in complexity has led recent researchers to investigate ways of explaining the AI behavior behind these systems, in search of more trustworthy interactions. A current problem in explainable AI concerns the inner workings of the logic inference process and how to conduct a sensitivity analysis of the process of valuation and alteration of beliefs. In a social HRI (human-robot interaction) setup, theory of mind is crucial to easing the intentionality gap; to achieve it, we should be able to reason over observed human behaviors, such as cases of cognitive dissonance. One specific case inspired by human cognition is the role emotions play in our belief system, and the effects caused when observed behavior does not match the expected outcome. In such scenarios, emotions can make a person wrongly assume the antecedent P for an observed consequent Q and, as a result, incorrectly assert that P is true. This form of cognitive dissonance, in which an unproven cause is taken as truth, induces changes in the belief base that can directly affect future decisions and actions. If we aim to be inspired by human thought in applying levels of theory of mind to artificial agents, we must find the conditions under which these observable cognitive mechanisms can be replicated. To achieve this, a computational architecture is proposed to model the modulating effect emotions have on the belief system, and how it affects the logic inference process and, consequently, the decision making of an agent. To validate the model, an experiment based on the prisoner's dilemma is currently under development.
The hypothesis to be tested involves two main points: how emotions, modeled as internal argument-strength modulators, can alter inference outcomes, and how explainable outcomes can be produced under specific forms of cognitive dissonance.

Keywords: cognitive architecture, cognitive dissonance, explainable ai, sensitivity analysis, theory of mind

Procedia PDF Downloads 132
1958 A Multistep Broyden’s-Type Method for Solving Systems of Nonlinear Equations

Authors: M. Y. Waziri, M. A. Aliyu

Abstract:

The paper proposes an approach to improving the performance of Broyden's method for solving systems of nonlinear equations. In this work, we use information from the two preceding iterates, rather than a single preceding iterate, to update the Broyden matrix, producing a better approximation of the Jacobian matrix at each iteration. The numerical results verify that the proposed method clearly enhances the numerical performance of Broyden's method.
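For reference, here is the classic single-step Broyden method that the paper improves upon, applied to a small two-equation system. The multistep variant of the abstract would build its secant update from the two preceding iterates instead of one; the example system and starting point are assumptions for illustration.

```python
import numpy as np

def broyden(F, x0, tol=1e-10, max_iter=50):
    """Classic (single-step) Broyden method with a finite-difference
    initial Jacobian; the paper's variant modifies the update below."""
    x = np.asarray(x0, dtype=float)
    n = len(x)
    eps = 1e-7
    fx = F(x)
    J = np.empty((n, n))
    for j in range(n):                       # forward-difference Jacobian
        e = np.zeros(n); e[j] = eps
        J[:, j] = (F(x + e) - fx) / eps
    for _ in range(max_iter):
        dx = np.linalg.solve(J, -fx)         # quasi-Newton step
        x_new = x + dx
        f_new = F(x_new)
        if np.linalg.norm(f_new) < tol:
            return x_new
        # rank-one secant update: J+ = J + (df - J dx) dx^T / (dx^T dx)
        J += np.outer(f_new - fx - J @ dx, dx) / (dx @ dx)
        x, fx = x_new, f_new
    return x

# Example: x^2 + y^2 = 4 and x = y, whose root is (sqrt(2), sqrt(2)).
F = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0] - v[1]])
root = broyden(F, [1.0, 1.0])
```

The attraction of Broyden-type methods, single- or multistep, is visible here: the Jacobian is never recomputed after the first finite-difference estimate, only cheaply updated.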

Keywords: multi-step Broyden, nonlinear systems of equations, computational efficiency, iterate

Procedia PDF Downloads 638
1957 Numerical Evolution Methods of Rational Form for Diffusion Equations

Authors: Said Algarni

Abstract:

The purpose of this study was to investigate selected numerical methods that demonstrate good performance in solving PDEs. We adopted an alternative approach involving rational polynomials: the Padé time stepping (PTS) method, which is highly stable for the purposes of the present application and is associated with lower computational costs. PTS was further adapted to our study, which focused on diffusion equations. Numerical runs were conducted to obtain the optimal local error control threshold.
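The simplest member of the Padé time stepping family is worth seeing concretely: the (1,1) Padé approximant of exp(z), (1 + z/2)/(1 - z/2), applied to the semi-discrete diffusion equation gives the unconditionally stable Crank-Nicolson step. The sketch below (grid, time step, and test problem are illustrative assumptions, not the paper's setup) solves u_t = D u_xx with homogeneous Dirichlet ends.

```python
import numpy as np

D, nx, dt, steps = 1.0, 51, 1e-3, 100
x = np.linspace(0.0, 1.0, nx)
dx = x[1] - x[0]
u = np.sin(np.pi * x)              # single decaying Fourier mode

m = nx - 2                         # interior points
L = (np.diag(-2.0 * np.ones(m)) + np.diag(np.ones(m - 1), 1)
     + np.diag(np.ones(m - 1), -1)) / dx**2       # discrete Laplacian
I = np.eye(m)
A = I - 0.5 * dt * D * L           # the (1 - z/2) side of the Pade form
B = I + 0.5 * dt * D * L           # the (1 + z/2) side

for _ in range(steps):
    u[1:-1] = np.linalg.solve(A, B @ u[1:-1])

# The mode decays like exp(-D * pi**2 * t) in the exact solution.
exact = np.exp(-D * np.pi**2 * dt * steps) * np.sin(np.pi * x)
```

Higher-order rational approximants follow the same pattern with different polynomial factors on the two sides of the solve, which is where the local error control threshold mentioned above comes into play.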

Keywords: Padé time stepping, finite difference, reaction diffusion equation, PDEs

Procedia PDF Downloads 299
1956 Acceleration Techniques of DEM Simulation for Dynamics of Particle Damping

Authors: Masato Saeki

Abstract:

Presented herein is a novel algorithm for calculating the damping performance of particle dampers. The particle damper is a passive vibration control technique with many practical applications due to its simple design. It consists of granular materials constrained to move between two ends in the cavity of a primary vibrating system. The damping effect results from the exchange of momentum during the impact of the granular materials against the wall of the cavity. This damping has the advantage of being independent of the environment; therefore, particle damping can be applied in extreme temperature environments where most conventional dampers would fail. Many papers have shown experimentally that the efficiency of particle dampers is high in the case of resonant vibration. To use particle dampers effectively, it is necessary to solve the equations of motion for each particle, considering the granularity. The discrete element method (DEM) has been found effective for revealing the dynamics of particle damping. In this method, individual particles are treated as rigid bodies and interparticle collisions are modeled by mechanical elements such as springs and dashpots. However, the computational cost is significant, since the equation of motion for each particle must be solved at each time step. Improving the computational efficiency of the DEM therefore requires new algorithms. In this study, new algorithms are proposed for implementing a high-performance DEM. On the assumption that the granular particles behave identically in each divided area of the damper container, the contact force of the primary system with all particles can be taken as the product of the number of divided areas and the contact force of the primary system with the granular materials in one area. This convenience makes it possible to considerably reduce the calculation time.
The validity of this calculation method was investigated, and the calculated results were compared with experimental ones. This paper also presents the results of experimental studies of the performance of particle dampers. It is shown that the particle radius affects the noise level, and that the particle size and particle material influence damper performance.
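The spring-dashpot contact model at the heart of the DEM can be sketched in a few lines: one particle dropped onto a wall, with a linear spring acting on the overlap and a dashpot dissipating energy during contact. All parameter values are illustrative, not from the paper.

```python
import numpy as np

m, r, g = 0.01, 0.005, 9.81    # particle mass [kg], radius [m], gravity
k, c = 1.0e4, 5.0              # contact stiffness [N/m] and damping [N*s/m]
dt, steps = 1.0e-4, 10000      # 1 s of simulated time

y, v = 0.1, 0.0                # initial height [m] and velocity [m/s]

def energy(y, v):
    """Kinetic + gravitational + elastic contact energy."""
    overlap = max(0.0, r - y)
    return 0.5 * m * v**2 + m * g * y + 0.5 * k * overlap**2

e0 = energy(y, v)
for _ in range(steps):
    overlap = max(0.0, r - y)
    # spring on the overlap; dashpot only while in contact
    f = -m * g + k * overlap - (c * v if overlap > 0 else 0.0)
    v += dt * f / m            # semi-implicit (symplectic) Euler
    y += dt * v
e1 = energy(y, v)
# The dashpot power -c*v**2 is never positive, so total energy only falls:
# this momentum exchange at the wall is the mechanism particle dampers exploit.
```

A full DEM code repeats this force law for every particle-particle and particle-wall pair at every step, which is exactly why the area-averaging shortcut described above pays off.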

Keywords: particle damping, discrete element method (DEM), granular materials, numerical analysis, equivalent noise level

Procedia PDF Downloads 453
1955 Gender Diversity in Early Years Education: An Exploratory Study Applied to Preschool Curriculum System in Romania

Authors: Emilia-Gheorghina Negru

Abstract:

As an EU goal, gender diversity in early years education aims to promote equality of opportunity and respect for the gender peculiarities of the pupils involved in formal educational activities. Early years education, as the first step of the curriculum, impresses on teachers the need to identify the role of the gender dimension at this stage, depending on the age level of preschool children, through effective, complex, innovative, and analytical awareness of gender diversity teaching and management strategies. Through gender educational work we, as teachers, will examine the effectiveness of the PATHS (Promoting Alternative Thinking Strategies) curriculum on the gender development of school-aged children. PATHS, a school-based preventive intervention model, needs to be designed to improve children's ability to discuss and understand equality and gender concepts. Our teachers must create an intervention model and provide PATHS lessons during the school year. Results of the intervention will be effective for both low- and high-risk children: improving the range of maths skills for girls, and vocabulary, fluency, and the emotional dimension for boys, in discussing gender experiences, their efficacy beliefs regarding the management of gender equality, and their developmental understanding of some aspects of gender.

Keywords: gender, gender differences, gender equality, gender role, gender stereotypes

Procedia PDF Downloads 378
1954 Statistical Comparison of Machine and Manual Translation: A Corpus-Based Study of Gone with the Wind

Authors: Yanmeng Liu

Abstract:

This article analyzes and compares the linguistic differences between machine translation and manual translation through a case study of the book Gone with the Wind. As an important carrier of human feeling and thinking, literary translation poses a huge difficulty for machine translation and is expected to exhibit translation features distinct from those of manual translation. To display these linguistic features objectively, quantitative methods drawing on computerized and statistical evidence are applied to the systematic investigation of large-scale translation corpora. This study compiles a bilingual corpus with four Chinese translations of Gone with the Wind: Piao by Chunhai Fan, Piao by Huairen Huang, and translations by Google Translate and Baidu Translate. After processing the corpus with software such as Stanford Segmenter, Stanford POS Tagger, and AntConc, the study analyzes the linguistic data and answers the following questions: 1. How does machine translation differ linguistically from manual translation? 2. Why do these deviances happen? This paper combines translation studies with corpus linguistics and concretizes divergent linguistic dimensions in translated text analysis in order to present linguistic deviances in manual and machine translation. Consequently, this study provides a more accurate and fine-grained understanding of machine translation products, and it proposes several suggestions for the future development of machine translation.
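Two of the simplest corpus statistics used in such comparisons, type-token ratio and mean sentence length, can be computed with a few lines of standard-library Python. The example texts below are toy English stand-ins; the study itself processes the Chinese versions with Stanford Segmenter and AntConc rather than this whitespace-based pipeline.

```python
import re

def corpus_stats(text):
    """Type-token ratio and mean sentence length for a small text sample."""
    sentences = [s for s in text.split('.') if s.strip()]
    tokens = re.findall(r"\w+", text.lower())
    ttr = len(set(tokens)) / len(tokens)         # lexical diversity
    mean_len = len(tokens) / len(sentences)      # tokens per sentence
    return ttr, mean_len

# Hypothetical samples standing in for a manual and a machine translation.
manual = "Tomorrow is another day. She walked away."
machine = "Tomorrow is another day. Tomorrow she walked away from the day."

ttr_manual, msl_manual = corpus_stats(manual)
ttr_machine, msl_machine = corpus_stats(machine)
```

On real corpora, systematic differences in exactly these kinds of statistics are what distinguish machine output (often more repetitive, with different sentence-length profiles) from manual translation.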

Keywords: corpus-based analysis, linguistic deviances, machine translation, statistical evidence

Procedia PDF Downloads 144
1953 Automation of Finite Element Simulations for the Design Space Exploration and Optimization of Type IV Pressure Vessel

Authors: Weili Jiang, Simon Cadavid Lopera, Klaus Drechsler

Abstract:

The fuel cell vehicle has become the most competitive solution for the transportation sector in the hydrogen economy. The Type IV pressure vessel is currently the most popular and widely developed technology for on-board storage, owing to its high reliability and relatively low cost. Due to the stringent requirements on mechanical performance, the pressure vessel requires a great amount of composite material, a major cost driver for hydrogen tanks. Evidently, optimizing the composite layup design shows great potential for reducing overall material usage, yet it requires a comprehensive understanding of the underlying mechanisms as well as the influence of different design parameters on mechanical performance. Given the materials and manufacturing processes by which Type IV pressure vessels are made, their design and optimization are a nuanced subject. The manifold of possible stacking sequences and fiber orientation variations has an outstanding effect on vessel strength, due to the anisotropy of carbon fiber composites, which makes the design space high dimensional. Each variation of the design parameters requires computational resources. Using finite element analysis to evaluate different designs is the most common method; however, the modeling, setup, and simulation process can be very time consuming and result in high computational cost. For this reason, it is necessary to build a reliable automation scheme to set up and analyze the diverse composite layups. In this research, the simulation process for different tank designs with various parameters is conducted and automated in the commercial finite element analysis framework Abaqus. Notably, the model of the composite overwrap is generated automatically via an Abaqus-Python scripting interface.
The prediction of the winding angle of each layer and the corresponding thickness variation in the dome region is the most crucial step of the modeling; it is calculated and implemented using analytical methods. Subsequently, the different composite layups are simulated as axisymmetric models to limit computational complexity and reduce calculation time. Finally, the results are evaluated and compared with respect to ultimate tank strength. By automatically modeling, evaluating, and comparing various composite layups, this system is applicable to the optimization of tank structures. As mentioned above, the mechanical properties of the pressure vessel depend strongly on the composite layup, which demands a large number of simulations. Consequently, automating the simulation process provides a rapid way to compare various designs and indicate the optimal one. Moreover, this automation can also be used to create a data bank of layups and corresponding mechanical properties, with few preliminary configuration steps, for further case analysis; machine learning could subsequently be used to obtain the optimum directly from this data pool without further simulation.
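As a hedged sketch of the kind of analytical dome relations referred to above, the snippet below uses the standard geodesic (Clairaut) winding law and fiber-volume continuity; these are common textbook relations for filament winding, not necessarily the exact formulas implemented in the authors' Abaqus scripts, and all dimensions are assumed.

```python
import numpy as np

R, r0 = 0.2, 0.05      # cylinder radius and polar opening radius [m] (assumed)
t_R = 0.002            # layer thickness on the cylindrical section [m]

r = np.linspace(1.1 * r0, R, 100)      # radial stations on the dome
# Clairaut's relation for a geodesic path: r * sin(alpha) = r0 = const,
# so the winding angle steepens toward the polar opening.
alpha = np.arcsin(r0 / r)
# Fiber-volume continuity: t(r) * r * cos(alpha(r)) is constant along the
# dome, so the layer thickens as it approaches the pole.
t = t_R * (R * np.cos(np.arcsin(r0 / R))) / (r * np.cos(alpha))
```

Feeding angle and thickness profiles like these into the scripted axisymmetric model is what lets each candidate layup be meshed and solved without manual intervention.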

Keywords: type IV pressure vessels, carbon composites, finite element analysis, automation of simulation process

Procedia PDF Downloads 135