Search results for: expert visual evaluations
1610 An Application of Fuzzy Analytical Network Process to Select a New Production Base: An AEC Perspective
Authors: Walailak Atthirawong
Abstract:
By the end of 2015, the Association of Southeast Asian Nations (ASEAN) countries proclaimed their transformation into the next stage of an economic era, a single market and production base called the ASEAN Economic Community (AEC). One objective of the AEC is to establish ASEAN as a single market and one production base, making ASEAN a highly competitive economic region with new mechanisms. As a result, it opens more opportunities to enterprises in both trade and investment, offering a competitive market of US$ 2.6 trillion and over 622 million people. Location decisions play a key role in achieving corporate competitiveness. Hence, it may be necessary for enterprises to redesign their supply chains by establishing a new production base with low labor cost, high labor skill and abundant available labor. This strategy will help companies, especially in the apparel industry, maintain a competitive position in the global market. Therefore, this paper proposes a generic location selection decision model for the Thai apparel industry using the Fuzzy Analytical Network Process (FANP). Myanmar, Vietnam and Cambodia were chosen as alternative locations based on interviews with experts in this industry who plan to expand their businesses into AEC countries. The contribution of this paper lies in proposing an approach that is more practical and trustworthy for top management in making location selection decisions.
Keywords: apparel industry, ASEAN Economic Community (AEC), Fuzzy Analytical Network Process (FANP), location decision
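The abstract does not detail the FANP computation, but one common building block of fuzzy ANP/AHP methods can be sketched: deriving crisp priority weights from a triangular-fuzzy pairwise comparison matrix by centroid defuzzification and normalized row geometric means. All judgments and criteria names below are illustrative assumptions, not the paper's data:

```python
from math import prod

# Sketch: crisp priority weights from a triangular-fuzzy pairwise
# comparison matrix. Judgments are illustrative, not the paper's data;
# centroid defuzzification is one common (simplified) choice.

def defuzzify(tfn):
    """Centroid of a triangular fuzzy number (l, m, u)."""
    l, m, u = tfn
    return (l + m + u) / 3.0

def fuzzy_priority_weights(matrix):
    """Defuzzify each judgment, then normalize the row geometric means."""
    n = len(matrix)
    crisp = [[defuzzify(matrix[i][j]) for j in range(n)] for i in range(n)]
    geo = [prod(row) ** (1.0 / n) for row in crisp]  # row geometric means
    total = sum(geo)
    return [g / total for g in geo]

# Three hypothetical location criteria: labor cost, labor skill,
# labor availability.
judgments = [
    [(1, 1, 1),       (2, 3, 4),     (4, 5, 6)],
    [(1/4, 1/3, 1/2), (1, 1, 1),     (1, 2, 3)],
    [(1/6, 1/5, 1/4), (1/3, 1/2, 1), (1, 1, 1)],
]
weights = fuzzy_priority_weights(judgments)
print([round(w, 3) for w in weights])  # normalized: the weights sum to 1
```

A full FANP model would additionally handle criteria interdependencies via a supermatrix; this sketch covers only the fuzzy weighting step.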
Procedia PDF Downloads 236
1609 Digital Literacy, Assessment and Higher Education
Authors: James Moir
Abstract:
Recent evidence suggests that academic staff face difficulties in applying new technologies as a means of assessing higher order assessment outcomes such as critical thinking, problem solving and creativity. Although higher education institutional mission statements and course unit outlines purport the value of these higher order skills, there is still some question about how well academics are equipped to design curricula and, in particular, assessment strategies accordingly. Despite a rhetoric avowing the benefits of these higher order skills, it has been suggested that academics set assessment tasks up in such a way as to inadvertently lead students on the path towards lower order outcomes. This is a controversial claim, and one that this paper seeks to explore and critique in terms of challenging the conceptual basis of assessing higher order skills through new technologies. It is argued that the use of digital media in higher education is leading to a focus on students’ ability to use and manipulate these products as an index of their flexibility and adaptability to the demands of the knowledge economy. This focus mirrors market flexibility and encourages programmes and courses of study to be rhetorically packaged as such. Curricular content has become a means to procure more or less elaborate aggregates of attributes. Higher education is now charged with producing graduates who are entrepreneurial and creative in order to drive forward economic sustainability. It is argued that critical independent learning can take place through the democratisation afforded by cultural and knowledge digitization and that assessment needs to acknowledge the changing relations between audience and author, expert and amateur, creator and consumer.
Keywords: higher education, curriculum, new technologies, assessment, higher order skills
Procedia PDF Downloads 375
1608 Electronic Transparency in Georgia as a Basis for Development of Electronic Governance
Authors: Lasha Mskhaladze, Guram Burchuladze, Khvicha Datunashvili
Abstract:
Technological changes have an impact not only on the economic but also on the social elements of society, which in turn has created new challenges for states’ political systems and regimes. As a result of the unprecedented growth of information technologies and communications, digital democracy and electronic governance have emerged. Nowadays, effective state functioning cannot be imagined without electronic governance. In Georgia, special attention is paid to the development of such new systems and the establishment of electronic governance. Therefore, in parallel to the intensive development of information technologies, an important priority for the public sector in Georgia is the development of electronic governance. Despite the fact that Georgia's economic indicators satisfy the standards of a western information society, and the major part of its gross domestic product comes from the service sector (59.6%), it still lags behind on the world map in terms of information technologies and electronic governance. E-transparency in Georgia should be based on such parameters as government accountability, whereby the government provides citizens with information about its activities; e-participation, which involves the government's consideration of external expert assessments; and cooperation between officials and citizens in order to solve national problems. In order to improve electronic systems, the government should actively do the following: fully develop electronic programs concerning HR and the exchange of data between public organizations; develop all possible electronic services; improve existing electronic programs; and make electronic services available on different mobile platforms (iPhone, Android, etc.).
Keywords: electronic transparency, electronic services, information technology, information society, electronic systems
Procedia PDF Downloads 278
1607 Audio-Visual Co-Data Processing Pipeline
Authors: Rita Chattopadhyay, Vivek Anand Thoutam
Abstract:
Speech is the most acceptable means of communication, letting us quickly exchange our feelings and thoughts. Quite often, people can communicate orally but cannot interact or work with computers or devices. It is easier and quicker to give speech commands than to type commands to computers, and likewise easier to listen to audio played from a device than to extract output from computers or devices. Especially with robotics being an emerging market with applications in warehouses, the hospitality industry, consumer electronics, assistive technology, etc., speech-based human-machine interaction is emerging as a lucrative feature for robot manufacturers. Considering this factor, the objective of this paper is to design the “Audio-Visual Co-Data Processing Pipeline.” This pipeline is an integrated version of automatic speech recognition, a natural language model for text understanding, object detection, and text-to-speech modules. There are many deep learning models for each of the modules mentioned above, but OpenVINO Model Zoo models are used because the OpenVINO toolkit covers both computer vision and non-computer vision workloads across Intel hardware, maximizes performance, and accelerates application development. A speech command is given as input that carries information about the target objects to be detected and the start and end times for extracting the required interval from the video. Speech is converted to text using the QuartzNet automatic speech recognition model. A summary is extracted from the text using the natural language model Generative Pre-Trained Transformer-3 (GPT-3). Based on the summary, essential frames from the video are extracted, and the You Only Look Once (YOLO) model detects objects in these extracted frames. Frame numbers that contain target objects (the objects specified in the speech command) are saved as text. Finally, this text (the frame numbers) is converted to speech using a text-to-speech model and played from the device. This project is developed for the 80 YOLO labels, and the user can extract frames based on one or two target labels. The pipeline can easily be extended to more than two target labels by making appropriate changes in the object detection module. The project supports four different speech command formats by including sample examples in the prompt used by the GPT-3 model; based on user preference, one can add a new speech command format by including examples of it in the same prompt. This pipeline can be used in many projects, such as human-machine interfaces, human-robot interaction, and surveillance through speech commands. Any object detection project can be upgraded with this pipeline so that one can give speech commands and have the output played from the device.
Keywords: OpenVINO, automatic speech recognition, natural language processing, object detection, text to speech
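The pipeline's data flow (speech → text → summary → frame selection → detection → speech) can be sketched end to end. The stage functions below are stand-ins, not the actual OpenVINO/QuartzNet/GPT-3/YOLO models; each stub only mimics the interface so the orchestration logic is visible:

```python
# Sketch of the pipeline's orchestration. Every stage is a stub standing
# in for the real model (QuartzNet ASR, GPT-3 parsing, YOLO detection,
# TTS); only the data flow between stages is the point here.

def speech_to_text(audio):
    # Stand-in for the QuartzNet ASR model.
    return "find person and car between second 2 and second 4"

def summarize_command(text):
    # Stand-in for GPT-3 prompt-based parsing: extract targets + interval.
    return {"targets": ["person", "car"], "start_s": 2, "end_s": 4}

def detect_objects(frame):
    # Stand-in for YOLO: return the labels present in this frame.
    return frame["labels"]

def text_to_speech(text):
    # Stand-in for the TTS model: return the text that would be spoken.
    return text

def run_pipeline(audio, video_frames, fps=1):
    command = summarize_command(speech_to_text(audio))
    start, end = command["start_s"] * fps, command["end_s"] * fps
    hits = [i for i, f in enumerate(video_frames)
            if start <= i <= end and
            any(t in detect_objects(f) for t in command["targets"])]
    return text_to_speech("target frames: " + ", ".join(map(str, hits)))

frames = [{"labels": []}, {"labels": ["dog"]}, {"labels": ["person"]},
          {"labels": ["car"]}, {"labels": []}, {"labels": ["person"]}]
print(run_pipeline(b"...", frames))  # frames 2 and 3 fall in [2, 4] and match
```

Swapping each stub for its real model call leaves `run_pipeline` unchanged, which is the advantage of structuring the modules this way.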
Procedia PDF Downloads 80
1606 Evaluation of the Shelf Life of Horsetail Stems Stored in Ecological Packaging
Authors: Rosana Goncalves Das Dores, Maira Fonseca, Fernando Finger, Vicente Casali
Abstract:
Equisetum hyemale L. (horsetail, Equisetaceae) is a medicinal plant used and commercialized in simple paper bags or non-ecological packaging in Brazil. The aim of this work was to evaluate the bioactive compounds of horsetail stems stored in ecological packages (multi-ply paper sacks) at room temperature. Stems at the primary and secondary stages were harvested from an organic estate in December 2016, selected, measured (length from the soil to the apex in cm, stem diameter at ground level (DGL, mm) and at breast height (DBH, mm)) and cut into 10 cm segments. For the post-harvest evaluations, stems were stored in multi-ply paper sacks and evaluated daily for respiratory rate, fresh weight loss, pH, presence of fungi/mold, phenolic compounds and antioxidant activity. The analyses were done with four replicates over time (regression) and compared at 1% significance (Tukey test). The measured heights were 103.7 cm and 143.5 cm, DGL was 2.5 mm and 8.4 mm, and DBH was 2.59 and 6.15 mm, respectively, for primary- and secondary-stage stems. At both stages of development, during storage in multi-ply paper sacks, the greatest mass loss occurred at 48 h, continuing to decline up to 120 h and stabilizing at 192 h. The peak increase in respiratory rate occurred at 24 h, coinciding with a change in pH (mean temperature and humidity were 23.5 °C and 55%). No fungi or mold were detected; however, there was loss of color in the stems. The average yields of ethanolic extracts were equivalent (approximately 30%). Phenolic compounds and antioxidant activity were higher in secondary-stage stems for up to 120 h (AATt0 = 20%, AATt30 = 45%), decreasing by the end of the experiment (240 h). The packaging used allows the commercialization of fresh Equisetum stems for up to five days.
Keywords: paper sacks, phenolic content, antioxidant activity, medicinal plants, post-harvest, ecological packages, Equisetum
Procedia PDF Downloads 166
1605 Strategies to Promote Safety and Reduce the Vulnerability of Urban Worn-out Textures to the Potential Risk of Earthquake
Authors: Bahareh Montakhabi
Abstract:
Earthquakes are among the deadliest natural disasters, with a high potential for damage to life and property. Some of Iran's cities were completely destroyed after major earthquakes, and the people of those regions suffered severe mental, financial and psychological damage. Tehran is one of the cities located on a fault line. According to experts, the only city that could be severely damaged (70% destruction) by an earthquake of moderate intensity on the Earthquake Engineering Intensity Scale (EEIS) is Tehran, because it is built directly on the fault. Seismic risk assessment (SRA) of cities at the scale of urban areas and neighborhoods is the first phase of the earthquake crisis management process, which can provide the information required to make optimal use of available resources and facilities in order to reduce the destructive effects and consequences of an earthquake. This study investigated strategies to promote safety and reduce the vulnerability of the worn-out urban textures of District 12 of Tehran to the potential risk of earthquake, aimed at prioritizing the factors affecting the vulnerability of worn-out urban textures to earthquake crises and determining how to reduce them, using the analytical-exploratory method, the analytic hierarchy process (AHP), Expert Choice and the SWOT technique. The results of the SWOT and AHP analysis of the vulnerability of the worn-out textures of District 12 to internal threats (1.70) and external threats (2.40) indicate weak safety of the textures of District 12 regarding internal and external factors and a high possibility of damage.
Keywords: risk management, vulnerability, worn-out textures, earthquake
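The AHP step used to prioritize vulnerability factors can be illustrated with the standard eigenvector approximation (normalized column averages) plus Saaty's consistency ratio. The pairwise judgments below are illustrative, not the study's data:

```python
# Sketch: AHP priority weights and consistency ratio for a 3x3 pairwise
# comparison matrix (judgments are illustrative, not the study's data).

RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}  # Saaty's random indices

def ahp_weights(A):
    n = len(A)
    col_sums = [sum(A[i][j] for i in range(n)) for j in range(n)]
    # Normalize each column, then average across rows (eigenvector approx.).
    w = [sum(A[i][j] / col_sums[j] for j in range(n)) / n for i in range(n)]
    # Estimate lambda_max as the mean of (A w)_i / w_i.
    Aw = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
    lam = sum(Aw[i] / w[i] for i in range(n)) / n
    ci = (lam - n) / (n - 1)           # consistency index
    cr = ci / RI[n] if RI[n] else 0.0  # consistency ratio
    return w, cr

# Hypothetical vulnerability factors, e.g. building age >> density >> access.
A = [[1,   3,   5],
     [1/3, 1,   3],
     [1/5, 1/3, 1]]
w, cr = ahp_weights(A)
print([round(x, 3) for x in w], round(cr, 3))  # CR < 0.1 -> consistent
```

A consistency ratio below 0.1 is the usual acceptance threshold for the expert judgments before the weights are used.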
Procedia PDF Downloads 193
1604 Causes and Consequences of Intuitive Animal Communication: A Case Study at Panthera Africa
Authors: Cathrine Scharning Cornwall-Nyquist, David Rafael Vaz Fernandes
Abstract:
Since its origins, mankind has dreamed of communicating directly with other animals. Past civilizations interacted on different levels with other species and recognized them in their rituals and daily activities. However, recent scientific developments have limited the ability of humans to consider deeper levels of interaction beyond observation and/or physical behavior. In recent years, animal caretakers and facilities such as sanctuaries or rescue centers have been introducing new techniques based on intuition. Most of these initiatives are related to specific cases, such as the incapacity to understand an animal’s behavior. Respected organizations also include intuitive animal communication (IAC) sessions to follow up on past interventions with their animals. Despite the lack of credibility of this discipline, some animal-care facilities have opted to integrate IAC into their daily routines and approaches to animal welfare. At this stage, animal communication will be generally defined as the ability of humans to communicate with animals on an intuitive level. The trend in the field remains to be explored, and the lack of theory and previous research urges the scientific community to improve the description of the phenomenon and its consequences. Considering the current scenario, qualitative approaches may be a suitable pathway to explore this topic. The purpose of this case study is to explore the beliefs behind, and the consequences of, an approach based on intuitive animal communication techniques at Panthera Africa (PA), an ethical sanctuary located in South Africa. Owing to their personal experience, the sanctuary's founders have developed a philosophy based on IAC while respecting the world's highest standards for big cat welfare. This dual approach is reflected in their rescues, daily activities, and healing of animals’ trauma. The case study's main research questions are: (i) Why do they choose to apply IAC in their work? (ii) What consequences does IAC bring to their activities? (iii) What effects do IAC techniques have on their interactions with the outside world? Data will be collected on-site via: (i) complete participation (field notes); (ii) semi-structured interviews (audio transcriptions); (iii) document analysis (internal procedures and policies); and (iv) audio-visual material (communication with third parties). The main researcher will become an active member of the sanctuary during a 30-day period and have full access to the site. Access to documents and audio-visual materials will be granted on a request basis. Interviews are expected to be held with PA founders and staff members and with IAC practitioners related to the facility. The information gathered will enable the researcher to provide an extended description of the phenomenon and explore its internal and external consequences for Panthera Africa.
Keywords: animal welfare, intuitive animal communication, Panthera Africa, rescue
Procedia PDF Downloads 92
1603 Affective Approach to Selected Ingmar Bergman Films
Authors: Grzegorz Zinkiewicz
Abstract:
The paper explores the affective potential implicit in Bergman’s films. This is done through affect theory and the concept of affect in terms of paradigmatic and syntagmatic relations, from both diachronic and synchronic perspectives. Since its inception in the early 2000s, affect theory has been applied to a number of academic fields. In Film Studies, it offers new avenues for discovering deeper, hidden layers of a given film. The aim is to show that the form and content of Ingmar Bergman's films are determined by their inner affects, which function independently of the viewer and, to an extent, are autonomous entities that can be analysed in separation from the auteur and the actual characters. The paper uncovers layers in Bergman's films and focuses on aspects that are often marginalised or studied from other viewpoints, such as the connection between content and the visual side. As a result, a re-evaluation of Bergman's films becomes possible that is more consistent with his original interpretations and the comments included in his lectures, interviews and autobiography.
Keywords: affect theory, experimental cinema, Ingmar Bergman, viewer response
Procedia PDF Downloads 103
1602 Formulation of Sun Screen Cream and Sun Protecting Factor Activity from Standardized–Partition Compound of Mahkota Dewa Leaf (Phaleria macrocarpa (Scheff.) Boerl.)
Authors: Abdul Karim Zulkarnain, Marchaban, Subagus Wahyono, Ratna Asmah Susidarti
Abstract:
Mahkota Dewa contains phalerin, which has sunscreen activity. In this study, 13 oil-in-water (o/w) cream formulations were prepared and tested for their physical characteristics, which were then used to determine the optimum formula. This study aimed to explore the physical stability of the optimized cream formulation and its sun protecting factor (SPF) values using in vitro and in vivo tests. The optimum o/w cream formula was prepared based on the Simplex Lattice Design (SLD) method using the software Design Expert®. The o/w cream formulations were varied in the proportions of cetyl alcohol, mineral oil and tween 80. The difference in physical characteristics between the optimum and predicted formulas was tested using a t-test at a 95% significance level. The optimum o/w cream formula consisted of cetyl alcohol 9.71%, mineral oil 29%, and tween 80 3.29%. Based on the t-test, there was no significant difference in physical characteristics between the optimum and predicted formulations. Viscosity, spread power, adhesive power, and separation volume ratio of the o/w creams at weeks 0-4 were relatively stable, and the creams were relatively stable at extreme temperatures. The o/w creams containing mahkota dewa, phalerin, and benzophenone have SPF values of 21.32, 33.12, and 42.49, respectively. The formulas did not irritate the skin in the in vivo test.
Keywords: cream, stability, in vitro, in vivo
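In-vitro SPF values of the kind reported here are commonly computed with the Mansur equation, SPF = CF × Σ EE(λ)·I(λ)·Abs(λ). The abstract does not name the method, so this is an assumption; the EE×I values below are the standard normalized weights, while the absorbance readings are made up for illustration:

```python
# Sketch: in-vitro SPF via the Mansur equation (assumed method, not
# confirmed by the abstract). EE*I are the standard normalized weights;
# the absorbance spectrum is illustrative, not the study's measurements.

EE_I = {290: 0.0150, 295: 0.0817, 300: 0.2874, 305: 0.3278,
        310: 0.1864, 315: 0.0839, 320: 0.0180}  # weights sum to ~1.0
CF = 10  # correction factor

def mansur_spf(absorbance):
    """absorbance: wavelength (nm) -> measured absorbance of the dilution."""
    return CF * sum(w * absorbance[lam] for lam, w in EE_I.items())

# Hypothetical absorbance spectrum of a diluted cream sample:
abs_readings = {290: 1.9, 295: 2.0, 300: 2.1, 305: 2.2, 310: 2.1,
                315: 2.0, 320: 1.8}
print(round(mansur_spf(abs_readings), 2))
```

Because the weights sum to roughly 1, the computed SPF is essentially CF times the weighted mean absorbance across 290-320 nm.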
Procedia PDF Downloads 229
1601 Bias Minimization in Construction Project Dispute Resolution
Authors: Keyao Li, Sai On Cheung
Abstract:
Incorporation of alternative dispute resolution (ADR) mechanisms has been the main feature of the current trend in construction project dispute resolution (CPDR). ADR approaches have been identified as efficient mechanisms and suitable alternatives to litigation and arbitration. Moreover, the use of ADR in this multi-tiered dispute resolution process often leads to repeated evaluations of the same dispute, and multi-tiered CPDR may become a breeding ground for cognitive biases. When complete knowledge is not available at an early tier of construction dispute resolution, disputing parties may form a preconception of the dispute matter or of the counterpart. This preconception then influences their information processing in the subsequent tier: disputing parties tend to search for and interpret further information in a self-defensive way that confirms their early positions. Their imbalanced information collection boosts their confidence in the assessments they hold, so their attitudes harden and become difficult to compromise. The occurrence of cognitive bias therefore impedes efficient dispute settlement. This study aims to explore ways to minimize bias in CPDR. Based on a comprehensive literature review, three types of bias-minimizing approaches were collected: strategy-based, attitude-based and process-based. These approaches were further operationalized into bias-minimizing measures. To verify the usefulness and practicability of these measures, semi-structured interviews were conducted with ten CPDR third-party neutral professionals, all with at least twenty years of experience in facilitating the settlement of construction disputes. The usefulness, as well as the implications, of the bias-minimizing measures were validated by these experts. There are few studies on cognitive bias in construction management in general and in CPDR in particular; this study is among the first of its type to enhance the efficiency of construction dispute resolution by highlighting strategies to minimize the biases therein.
Keywords: bias, construction project dispute resolution, minimization, multi-tiered, semi-structured interview
Procedia PDF Downloads 186
1600 AI In Health and Wellbeing - A Seven-Step Engineering Method
Authors: Denis Özdemir, Max Senges
Abstract:
There are many examples of AI-supported apps for better health and wellbeing. Generally, these applications help people achieve their goals based on scientific research and input data. Still, they do not always explain how those three are related, e.g., by making implicit assumptions about goals that hold for many people but not for all. We present a seven-step method for designing health and wellbeing AIs, considering goal setting, measurable results, real-time indicators, analytics, visual representations, communication, and feedback. It can guide engineers in developing apps, recommendation algorithms, and interfaces that support humans in their decision-making without patronizing them. To illustrate the method, we create a recommender AI for tiny wellbeing habits and run a small case study, including a survey. From the results, we infer how people perceive the relationship between themselves and the AI and to what extent it helps them achieve their goals. We review our seven-step engineering method and suggest modifications for the next iteration.
Keywords: recommender systems, natural language processing, health apps, engineering methods
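The seven steps can be read as one feedback loop. A purely illustrative sketch of a tiny habit recommender structured around them (step names from the abstract, all logic and numbers assumed):

```python
# Sketch: the seven-step loop for a tiny wellbeing-habit recommender.
# Step names follow the abstract; the logic is an illustrative stand-in.

def recommend(goal, history):
    # 1. Goal setting: the user states a habit with a weekly target.
    target = goal["per_week"]
    # 2. Measurable results: count completed habit entries so far this week.
    done = sum(history)
    # 3. Real-time indicator: progress ratio toward the target.
    progress = done / target
    # 4. Analytics: compare against the pace expected by today.
    expected = target * len(history) / 7
    on_track = done >= expected
    # 5. Visual representation: a simple text progress bar.
    bar = "#" * done + "-" * max(target - done, 0)
    # 6. Communication: phrase a suggestion, not an order (no patronizing).
    msg = "nice pace, keep going" if on_track else "try one tiny session today"
    # 7. Feedback: return everything so the user can adjust the goal itself.
    return {"progress": progress, "bar": bar, "message": msg}

out = recommend({"habit": "stretch", "per_week": 5}, history=[1, 0, 1])
print(out["bar"], "|", out["message"])
```

Keeping the goal an explicit input (step 1) rather than an implicit assumption is exactly the gap in existing apps that the method addresses.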
Procedia PDF Downloads 165
1599 Contribution to the Study of Automatic Epileptiform Pattern Recognition in Long Term EEG Signals
Authors: Christine F. Boos, Fernando M. Azevedo
Abstract:
Electroencephalogram (EEG) is a record of the electrical activity of the brain that has many applications, such as monitoring alertness, coma and brain death; locating damaged areas of the brain after head injury, stroke or tumor; monitoring anesthesia depth; researching physiology and sleep disorders; and researching epilepsy and localizing the seizure focus. Epilepsy is a chronic condition, or rather a group of diseases of high prevalence, still poorly explained by science and whose diagnosis is still predominantly clinical. The EEG recording is considered an important test for epilepsy investigation, and its visual analysis is very often applied for clinical confirmation of the epilepsy diagnosis. Moreover, this EEG analysis can also be used to help define the type of epileptic syndrome, determine the epileptiform zone, assist in the planning of drug treatment and provide additional information about the feasibility of surgical intervention. In the context of diagnosis confirmation, the analysis is made using long-term EEG recordings at least 24 hours long, acquired by a minimum of 24 electrodes, in which neurophysiologists perform a thorough visual evaluation of EEG screens in search of specific electrographic patterns called epileptiform discharges. Considering that an EEG screen usually displays 10 seconds of the recording, the neurophysiologist has to evaluate 360 screens per hour of EEG, or a minimum of 8,640 screens per long-term EEG recording. Analyzing thousands of EEG screens in search of patterns that have a maximum duration of 200 ms is a very time-consuming, complex and exhaustive task. Because of this, over the years several studies have proposed automated methodologies that could facilitate the neurophysiologists’ task of identifying epileptiform discharges, and a large number of these methodologies used neural networks for pattern classification. One of the differences between these methodologies is the type of input stimuli presented to the networks, i.e., how the EEG signal is introduced to the network. Five types of input stimuli are commonly found in the literature: the raw EEG signal, morphological descriptors (i.e., parameters related to the signal’s morphology), the Fast Fourier Transform (FFT) spectrum, Short-Time Fourier Transform (STFT) spectrograms and Wavelet Transform features. This study evaluates the application of these five types of input stimuli and compares the classification results of neural networks implemented with each of these inputs. The performance using the raw signal varied between 43 and 84% efficiency. The results for the FFT spectrum and STFT spectrograms were quite similar, with average efficiencies of 73 and 77%, respectively. The efficiency of Wavelet Transform features varied between 57 and 81%, while the morphological descriptors presented efficiency values between 62 and 93%. After the simulations, we observed that the best results were achieved when either morphological descriptors or Wavelet features were used as input stimuli.
Keywords: artificial neural network, electroencephalogram signal, pattern recognition, signal processing
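Two of the five input-stimulus types, the raw signal with its FFT spectrum and a few morphological descriptors, can be sketched on a synthetic spike-like transient. The signal and the particular descriptors chosen are illustrative, not the study's:

```python
import numpy as np

# Sketch: two of the five input-stimulus types on a synthetic "spike".
# The signal and the descriptor choices are illustrative only.

fs = 256                                   # sampling rate (Hz)
t = np.arange(0, 1, 1 / fs)
signal = np.sin(2 * np.pi * 10 * t)        # 10 Hz background activity
signal[100:115] += 3.0 * np.hanning(15)    # ~60 ms spike-like transient

# Input stimulus 1: the raw signal itself (fed directly to some networks).
raw = signal

# Input stimulus 2: FFT magnitude spectrum.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), 1 / fs)
dominant = freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC bin

# Input stimulus 3: morphological descriptors of the transient.
peak_idx = int(np.argmax(signal))
descriptors = {
    "amplitude": float(signal[peak_idx]),
    "peak_time_s": float(t[peak_idx]),
    "max_slope": float(np.max(np.abs(np.diff(signal)) * fs)),
}
print(round(float(dominant), 1), round(descriptors["amplitude"], 2))
```

STFT spectrograms and Wavelet features would follow the same pattern with windowed FFTs and a wavelet decomposition, respectively, each yielding a different fixed-size vector to present to the network.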
Procedia PDF Downloads 528
1598 Self-Determination and Mental Disorders: Phenomenological Approach
Authors: Neringa Bagdonaite
Abstract:
Background: The main focus of this paper is to explore how self-determination interplays in suicidal and addictive contexts, leading one to autonomously choose self-destructive addictive behaviour or suicidal intentions. Methods: Phenomenological descriptions of the experiential structure of self-determination in addictive and suicidal mental life are used. The phenomenological method describes structures of mental life from the first-person perspective, with a focus on how an experienced object is given in a subject’s conscious experience. Results: A sense of self-determination in the context of suicidal and addictive behaviour is possibly impaired. In the context of suicide, it is proposed that suicide is always experienced as at least minimally self-determined, as it is the last freely discovered self-efficient behaviour in terms of radically changing one's desperate mental state. Suicide can never be experienced as fully self-determined because no future retrospective re-evaluation of the behaviour is possible. Understanding self-determination in addiction is challenging because addicts perceive themselves and experience situations differently depending on: (I) their level of intoxication; (II) whether the situation is experienced in the moment or in retrospect; and (III) the goals set out in that situation. Furthermore, within phenomenology, addiction is described as an embodied custom, which is acquired and established while performing a 'psychotropic technique'. The main goal of performing such a technique is to continue 'floating in a state of indifference' or being 'comfortably numb'. Conclusions: Based on rich phenomenological descriptions of the studied phenomenon, this paper draws on the premise that to experience self-determination in both suicide and addiction, underlying desperate or negative emotional states are needed. Such underlying desperate or negative experiences of mental life are required for one to pre-reflectively evaluate suicidal or addictive behaviours as positive, relieving or effective in terms of changing one's emotional states. These pre-reflective positive evaluations serve as the basis for the continuation of the behaviour and are later identified reflectively.
Keywords: addiction, phenomenology, self-determination, self-effectivity, suicide
Procedia PDF Downloads 160
1597 Application of Change Detection Techniques in Monitoring Environmental Phenomena: A Review
Authors: T. Garba, Y. Y. Babanyara, T. O. Quddus, A. K. Mukatari
Abstract:
Human activities cause environmental parameters to keep changing globally. While some changes are necessary and beneficial to flora and fauna, others have serious consequences, threatening the survival of natural habitats if they are not properly monitored and mitigated. In-situ assessments are characterized by many challenges due to the absence of time-series data, and sometimes the areas to be observed or monitored are inaccessible. Satellite remote sensing provides digital images of the same geographic area at pre-defined intervals, which makes it possible to monitor and detect changes in environmental phenomena. This paper therefore reviews the change detection techniques in common use globally, such as image differencing, image rationing, image regression, vegetation index differencing, change vector analysis, principal components analysis, multidate classification, post-classification comparison, and visual interpretation. The paper concludes by suggesting the use of more than one technique.
Keywords: environmental phenomena, change detection, monitor, techniques
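Two of the reviewed techniques, image differencing and change vector analysis, reduce to a few array operations. A sketch on toy two-band imagery (synthetic data, hand-picked threshold):

```python
import numpy as np

# Sketch: image differencing and change vector analysis (CVA) on toy
# two-band imagery from two dates. Data and threshold are illustrative.

rng = np.random.default_rng(0)
band_t1 = rng.uniform(0.2, 0.4, size=(4, 4, 2))   # date 1, two bands
band_t2 = band_t1.copy()
band_t2[1:3, 1:3] += 0.5                          # a real change in a 2x2 patch

# Image differencing: per-band subtraction, threshold the absolute value.
diff = band_t2 - band_t1
changed_diff = np.abs(diff[..., 0]) > 0.3         # threshold chosen by hand

# Change vector analysis: magnitude of the spectral change vector,
# combining all bands into one change measure per pixel.
magnitude = np.sqrt((diff ** 2).sum(axis=-1))
changed_cva = magnitude > 0.3

print(int(changed_diff.sum()), int(changed_cva.sum()))  # both flag the patch
```

The example also hints at why combining techniques is advised: differencing inspects one band at a time, while CVA pools the evidence across bands, so their disagreements flag pixels worth a closer look.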
Procedia PDF Downloads 274
1596 A Formal Approach for Instructional Design Integrated with Data Visualization for Learning Analytics
Authors: Douglas A. Menezes, Isabel D. Nunes, Ulrich Schiel
Abstract:
Most Virtual Learning Environments do not provide support mechanisms for the integrated planning, construction and follow-up of an Instructional Design supported by Learning Analytics results. The present work presents an authoring tool responsible for constructing the structure of an Instructional Design (ID) without the data being altered during the execution of the course. The visual interface presents the critical situations in this ID, serving as a support tool for course follow-up and possible improvements, which can be made during its execution or in the planning of a new edition of the course. The ID model is based on High-Level Petri Nets, and the visualization forms are determined by the specific kind of data generated by an e-course: a population of students generating sequentially dependent data.
Keywords: educational data visualization, high-level petri nets, instructional design, learning analytics
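A High-Level Petri Net model of a full ID is beyond an abstract, but the firing rule such models build on is compact. A sketch of an ordinary place/transition net tracking a student through two course activities (place and transition names are hypothetical, not the paper's model):

```python
# Sketch: the Petri-net firing rule underlying the ID model.
# Places/transitions model a student moving through two course
# activities; all names are hypothetical, not from the paper.

marking = {"enrolled": 1, "activity1_done": 0, "activity2_done": 0}

# transition -> (pre-places consumed, post-places produced)
transitions = {
    "finish_activity1": ({"enrolled": 1}, {"activity1_done": 1}),
    "finish_activity2": ({"activity1_done": 1}, {"activity2_done": 1}),
}

def enabled(name, marking):
    pre, _ = transitions[name]
    return all(marking[p] >= n for p, n in pre.items())

def fire(name, marking):
    if not enabled(name, marking):
        raise ValueError(f"{name} is not enabled")
    pre, post = transitions[name]
    for p, n in pre.items():
        marking[p] -= n           # consume input tokens
    for p, n in post.items():
        marking[p] += n           # produce output tokens

fire("finish_activity1", marking)
fire("finish_activity2", marking)
print(marking)  # the token has moved to activity2_done
```

A population of students is then a multiset of such tokens, and "critical situations" correspond to markings (e.g., many tokens stuck before a transition) that the visual interface can surface.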
Procedia PDF Downloads 243
1595 Utilizing Bario Rice, a Natural Red-Pigmented Rice from Sarawak, Malaysia, in the Development of Gluten-Free Bread
Authors: Macdalyna Esther Ronie, Hasmadi Mamat, Ahmad Hazim Abdul Aziz, Mohamad Khairi Zainol
Abstract:
Current trends in gluten-free food development are increasingly leaning towards the utilization of pigmented rice flour, with a particular focus on Bario Merah Sederhana (BMS), a red-pigmented rice native to Sarawak, Malaysia. This study delves into the evaluation of the nutritional, textural, and sensory attributes of gluten-free rice bread produced from a blend of BMS rice flour and potato starch. The resulting samples are denoted as F1 (100% BMS rice flour), F2 (90% BMS rice flour and 10% potato starch), F3 (80% BMS rice flour and 20% potato starch), and F4 (70% BMS rice flour and 30% potato starch). Comparatively, these gluten-free rice bread formulations exhibit higher levels of ash and crude fiber, along with lower carbohydrate content when juxtaposed with conventional wheat bread. Notably, the crude protein content of the rice bread diminishes significantly (p<0.05) as the proportion of rice flour decreases, primarily due to the higher protein content found in wheat flour. The crumb of the rice bread appears darker owing to the red pigment in the rice flour, while the crust is lighter than that of the control sample, possibly attributable to a reduced Maillard reaction. Among the various rice bread formulations, F4 stands out with the least dough and bread hardness, accompanied by the highest levels of stickiness and springiness in both dough and bread, respectively. In sensory evaluations, wheat bread garners the highest rating (p<0.05). However, within the realm of rice breads, F4 emerges as a viable and acceptable formulation, as indicated by its commendable scores in color (7.03), flavor (5.73), texture (6.03), and overall acceptability (6.18). These findings underscore the potential of BMS in the creation of gluten-free rice breads, with the formulation consisting of 70% rice flour and 30% potato starch emerging as a well-received and suitable option.
Keywords: gluten-free bread, bario rice, proximate composition, sensory evaluation
Procedia PDF Downloads 242
1594 Knowledge Based Behaviour Modelling and Execution in Service Robotics
Authors: Suraj Nair, Aravindkumar Vijayalingam, Alexander Perzylo, Alois Knoll
Abstract:
In the last decade, robotics research and development activities have grown rapidly, especially in the domain of service robotics. Integrating service robots into human-occupied spaces such as homes, offices, and hospitals has received increasing attention. The primary motive is to ease the daily lives of humans by taking over some of the household/office chores. However, several challenges remain in systematically integrating such systems into human-shared workspaces. In addition to sensing and indoor-navigation challenges, the programmability of such systems is a major hurdle, since the potential user cannot be expected to have knowledge of robotics or similar mechatronic systems. In this paper, we propose a cognitive system for service robotics which allows non-expert users to easily model system behaviour in an underspecified manner through abstract tasks and the objects associated with them. The system uses domain knowledge expressed in the form of an ontology, along with logical reasoning mechanisms, to infer all the missing pieces of information required for executing the tasks. Furthermore, the system is also capable of recovering from failed tasks arising from online disturbances by using the knowledge base and inferring alternate methods to execute the same tasks. The system is demonstrated through a coffee-fetching scenario in an office environment using a mobile robot equipped with sensors and software capabilities for autonomous navigation and human interaction through natural language.
Keywords: cognitive robotics, reasoning, service robotics, task based systems
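The underspecified-task idea described in this abstract can be illustrated with a minimal sketch (this is not the authors' system; the ontology entries, task template, and function names are all invented for illustration): abstract tasks expand into concrete steps, with the missing parameters filled in from domain knowledge.

```python
# Illustrative sketch: a tiny "ontology" of object knowledge, and a resolver
# that fills in the pieces a non-expert user left unspecified in an abstract
# task such as "fetch coffee". All names and facts here are hypothetical.
ONTOLOGY = {
    "coffee": {"type": "beverage", "container": "cup", "source": "kitchen"},
    "cup":    {"type": "container", "graspable": True},
}

TASK_TEMPLATES = {
    # An abstract "fetch X" task expands into concrete subtasks; the missing
    # parameters (where X is, how to carry it) are inferred from the ontology.
    "fetch": ["navigate_to(source)", "grasp(container)", "deliver(container)"],
}

def resolve_task(verb, obj):
    """Expand an underspecified task into a concrete plan using domain knowledge."""
    facts = ONTOLOGY[obj]
    plan = []
    for step in TASK_TEMPLATES[verb]:
        step = step.replace("source", facts["source"])
        step = step.replace("container", facts["container"])
        plan.append(step)
    return plan

plan = resolve_task("fetch", "coffee")
```

A real system would replace the dictionaries with an OWL-style ontology and a logical reasoner, and would re-invoke the resolver on failure to find alternate plans.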
Procedia PDF Downloads 243
1593 Formulation, Evaluation and Statistical Optimization of Transdermal Niosomal Gel of Atenolol
Authors: Lakshmi Sirisha Kotikalapudi
Abstract:
Atenolol, a widely used antihypertensive drug, is ionisable and degrades in the acidic environment of the GIT, reducing its bioavailability. The transdermal route may be selected as an alternative to enhance bioavailability. The drug's half-life of 6-7 hours suggests the need for prolonged release. The present work on a transdermal niosomal gel aims to extend the release of the drug and increase its bioavailability. The ethanol injection method was used for the preparation of niosomes using span-60 and cholesterol at different molar ratios following a central composite design. The prepared niosomes were characterized for size, zeta-potential, entrapment efficiency, drug content and in-vitro drug release. The optimized formulation was selected by statistically analyzing the results using the software Stat-Ease Design Expert. The optimized formulation also showed high drug retention inside the vesicles over a period of three months at a temperature of 4 °C, indicating stability. Niosomes separated as a pellet were dried and incorporated into a hydrogel prepared using chitosan, a natural polymer, as the gelling agent. The effect of various chemical permeation enhancers was also studied on the gel formulations. The prepared formulations were characterized for viscosity, pH, drug release using Franz diffusion cells, and skin irritation, as well as in-vivo pharmacological activities. The atenolol niosomal gel preparations showed prolonged release of the drug and pronounced antihypertensive activity, indicating the suitability of the niosomal gel for topical and systemic delivery of atenolol.
Keywords: atenolol, chitosan, niosomes, transdermal
Procedia PDF Downloads 295
1592 An Integrated Approach for Risk Management of Transportation of HAZMAT: Use of Quality Function Deployment and Risk Assessment
Authors: Guldana Zhigerbayeva, Ming Yang
Abstract:
Transportation of hazardous materials (HAZMAT) is inevitable in the process industries. Statistics show that a significant number of accidents have occurred during the transportation of HAZMAT. This makes risk management of HAZMAT transportation an important topic. Tree-based methods, including fault trees, event trees and cause-consequence analysis, as well as Bayesian networks, have been applied to risk management of HAZMAT transportation. However, there is limited work on the development of a systematic approach. The existing approaches fail to build up the linkages between regulatory requirements and the development of safety measures. Analysis of historical data from past accident report databases limits our focus to specific incidents and their specific causes. Thus, we may overlook some essential elements in risk management, including regulatory compliance, field expert opinions, and suggestions. A systematic approach is needed to translate the regulatory requirements of HAZMAT transportation into specified safety measures (both technical and administrative) to support the risk management process. This study first adapts the House of Quality (HoQ) into a House of Safety (HoS) and proposes a new approach: Safety Function Deployment (SFD). The results of SFD will be used in a multi-criteria decision-support system to find an optimal route for HAZMAT transportation. The proposed approach is demonstrated through a hypothetical transportation case in Kazakhstan.
Keywords: hazardous materials, risk assessment, risk management, quality function deployment
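The House-of-Quality mechanism that SFD adapts can be sketched numerically (a hypothetical illustration only; the requirement names, measures, weights, and relationship strengths below are invented, using the conventional 0/1/3/9 QFD scale): weighted column sums of the relationship matrix rank the candidate safety measures.

```python
# Hypothetical House-of-Safety sketch: regulatory requirements (the "whats")
# carry weights, a relationship matrix scores how strongly each safety
# measure (the "hows") addresses each requirement, and the weighted column
# sums rank the measures. All names and numbers are illustrative.
req_weights = [0.5, 0.3, 0.2]  # e.g. containment, routing, emergency response
measures = ["double-hull tank", "route restriction", "driver training"]
# relationship[i][j]: strength (0/1/3/9, QFD convention) of measure j
# against requirement i
relationship = [
    [9, 1, 0],
    [0, 9, 1],
    [1, 3, 9],
]

scores = [
    sum(req_weights[i] * relationship[i][j] for i in range(len(req_weights)))
    for j in range(len(measures))
]
ranked = sorted(zip(measures, scores), key=lambda p: -p[1])
```

In a full SFD workflow the ranked measures would then feed the multi-criteria routing decision rather than stand alone.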
Procedia PDF Downloads 142
1591 Improving the Performance of Deep Learning in Facial Emotion Recognition with Image Sharpening
Authors: Ksheeraj Sai Vepuri, Nada Attar
Abstract:
We as humans use words with accompanying visual and facial cues to communicate effectively. Classifying facial emotion using computer vision methodologies has been an active research area in the computer vision field. In this paper, we propose a simple method for facial expression recognition that enhances accuracy. We tested our method on the FER-2013 dataset, which contains static images. Instead of using histogram equalization to preprocess the dataset, we used an unsharp mask to emphasize texture and details and sharpen the edges. We also used ImageDataGenerator from the Keras library for data augmentation. We then used a Convolutional Neural Network (CNN) model to classify the images into 7 different facial expressions, yielding an accuracy of 69.46% on the test set. Our results show that using image preprocessing such as the sharpening technique with a CNN model can improve performance, even when the CNN model is relatively simple.
Keywords: facial expression recognition, image preprocessing, deep learning, CNN
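The unsharp-mask step named above is simple enough to sketch directly (a minimal NumPy illustration, not the authors' pipeline; a box blur stands in for the Gaussian blur usually used, and the kernel size and amount are arbitrary):

```python
import numpy as np

def unsharp_mask(img, blur_kernel=3, amount=1.0):
    """Sharpen a 2-D grayscale image as img + amount * (img - blur(img)).
    A box blur stands in for the Gaussian of typical pipelines."""
    k = blur_kernel
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    blurred = np.zeros_like(img, dtype=float)
    # Sum the k*k shifted windows, then divide: a plain box blur.
    for dy in range(k):
        for dx in range(k):
            blurred += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    blurred /= k * k
    sharp = img + amount * (img - blurred)
    return np.clip(sharp, 0.0, 255.0)

# A vertical step edge: sharpening overshoots on both sides of the edge,
# increasing local contrast, which is what helps the downstream CNN.
img = np.zeros((5, 5))
img[:, 3:] = 100.0
sharp = unsharp_mask(img)
```

In the paper's setting this preprocessing would be applied to each FER-2013 image before augmentation and classification.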
Procedia PDF Downloads 143
1590 Investigate the Current Performance of Burger King Ho Chi Minh City in Terms of the Controllable Variables of the Overall Retail Strategy
Authors: Nhi Ngoc Thien
Abstract:
Franchising is a popular trend in the Vietnamese retail industry, especially in the fast food sector. Several famous foreign fast food brands such as KFC, Lotteria, Jollibee and Pizza Hut have invested in this potential market since the 1990s. Following this trend, in 2011, Burger King, the second largest fast food hamburger chain in the world, entered Vietnam with its first store located in Tan Son Nhat International Airport, with the expectation of becoming the leading brand in the country. However, the business performance of Burger King did not go well in the first few years, raising questions about its strategy. The working assumption was that its business performance was affected negatively by its store location selection strategy. This research aims to investigate the current performance of Burger King Vietnam in terms of controllable variables such as store location, as well as to explore the key factors influencing customer decisions to choose Burger King. Therefore, a case study research method was used to examine in depth the opinions and evaluations of 10 of Burger King's customers, Burger King's staff and other fast food experts on Burger King's performance through in-depth interviews, direct observation and documentary analysis. Findings show that there are 8 determinants affecting the decision-making of Burger King's customers: store location, quality of food, service quality, store atmosphere, price, promotion, menu and brand reputation. Moreover, findings show that Burger King's staff and the fast food experts also mentioned the main problems of Burger King, which concern store location and food quality. As a result, some recommendations are made for Burger King Vietnam to improve its performance in the market and attract more Vietnamese target customers by offering suitable promotional activities and differentiating itself from other fast food brands.
Keywords: overall retail strategy, controllable variables, store location, quality of food
Procedia PDF Downloads 344
1589 Saliency Detection Using a Background Probability Model
Authors: Junling Li, Fang Meng, Yichun Zhang
Abstract:
Image saliency detection has long been studied, while several challenging problems remain unsolved, such as detecting saliency inaccurately in complex scenes or suppressing salient objects at the image borders. In this paper, we propose a new saliency detection algorithm to solve these problems. We represent the image as a graph with superpixels as nodes. By considering the appearance similarity between the boundary and the background, the proposed method chooses non-salient boundary nodes as background priors to construct a background probability model. The probability that each node belongs to the model is computed, which measures its similarity with the background. We can thus calculate saliency using the transformed probability as a metric. We compare our algorithm with ten state-of-the-art saliency detection methods on a public database. Experimental results show that our simple and effective approach can tackle those challenging problems that have long baffled image saliency detection.
Keywords: visual saliency, background probability, boundary knowledge, background priors
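The boundary-prior intuition can be shown in a toy form (an illustration only, not the paper's algorithm; the node colors and boundary set are invented, and a color-distance score stands in for the full graph-based probability model): boundary superpixels define a background model, and each node's saliency grows with its distance from that model.

```python
import numpy as np

# Toy sketch of the boundary-background prior: nodes are superpixel mean
# colors; boundary nodes form the background model, and each node's saliency
# is its color distance to that model, normalized to [0, 1]. All values here
# are hypothetical.
colors = np.array([
    [0.20, 0.20, 0.20],  # node 0: boundary, dark background
    [0.25, 0.20, 0.20],  # node 1: boundary, dark background
    [0.90, 0.10, 0.10],  # node 2: interior, red object
    [0.22, 0.21, 0.20],  # node 3: interior, background-like
])
boundary = [0, 1]

background_mean = colors[boundary].mean(axis=0)
dist = np.linalg.norm(colors - background_mean, axis=1)
saliency = dist / dist.max()
```

The full method additionally checks which boundary nodes actually resemble the background before admitting them as priors, which is what protects salient objects touching the image border.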
Procedia PDF Downloads 429
1588 Evaluation of Four Different DNA Targets in Polymerase Chain Reaction for Detection and Genotyping of Helicobacter pylori
Authors: Abu Salim Mustafa
Abstract:
Polymerase chain reaction (PCR) assays targeting genomic DNA segments have been established for the detection of Helicobacter pylori in clinical specimens. However, data on comparative evaluations of various targets for the detection of H. pylori are limited. Furthermore, the frequencies of the vacA (s1 and s2) and cagA genotypes, which are suggested to be involved in the pathogenesis of H. pylori in other parts of the world, are not well studied in Kuwait. The aim of this study was to evaluate PCR assays for the detection and genotyping of H. pylori by targeting the amplification of DNA targets from four genomic segments. Genomic DNA was isolated from 72 clinical isolates of H. pylori and tested in PCR with four pairs of oligonucleotide primers, i.e. ECH-U/ECH-L, ET-5U/ET-5L, CagAF/CagAR and Vac1F/Vac1XR, which were expected to amplify targets of various sizes (471 bp, 230 bp, 183 bp and 176/203 bp, respectively) from the genomic DNA of H. pylori. The PCR-amplified DNA was analyzed by agarose gel electrophoresis. PCR products of the expected size were obtained with all primer pairs using genomic DNA isolated from H. pylori. DNA dilution experiments showed that the most sensitive PCR target was the 471 bp DNA amplified by the primers ECH-U/ECH-L, followed by the targets of Vac1F/Vac1XR (176/203 bp DNA), CagAF/CagAR (183 bp DNA) and ET-5U/ET-5L (230 bp DNA). However, when tested with undiluted genomic DNA isolated from single colonies of all isolates, the Vac1F/Vac1XR target provided the maximum positive results (71/72 (99% positive)), followed by ECH-U/ECH-L (69/72 (93% positive)), ET-5U/ET-5L (51/72 (71% positive)) and CagAF/CagAR (26/72 (46% positive)). The results of the genotyping experiments showed that the vacA s1 (46% positive) and vacA s2 (54% positive) genotypes were almost equally associated with vacA+/cagA- isolates (p > 0.05), but with vacA+/cagA+ isolates, the s1 genotype (92% positive) was detected more frequently than the s2 genotype (8% positive) (p < 0.0001). In conclusion, among the primer pairs tested, Vac1F/Vac1XR provided the best results for the detection of H. pylori. The genotyping experiments showed that the vacA s1 and vacA s2 genotypes were almost equally associated with vacA+/cagA- isolates, but the vacA s1 genotype had a significantly stronger association with vacA+/cagA+ isolates.
Keywords: H. pylori, PCR, detection, genotyping
Procedia PDF Downloads 133
1587 Comprehensive Evaluation of Thermal Environment and Its Countermeasures: A Case Study of Beijing
Authors: Yike Lamu, Jieyu Tang, Jialin Wu, Jianyun Huang
Abstract:
With the development of the economy, science and technology, the urban heat island effect has become more and more serious. Taking Beijing as an example, this paper grades the value of each index influencing heat island intensity and establishes a mathematical model, a neural network system based on a fuzzy comprehensive evaluation index of the heat island effect. After data preprocessing, the weights of each factor affecting the heat island effect are generated; the data of the six indexes affecting heat island intensity for Shenyang, Shanghai, Beijing, and Hangzhou are input, and the result is automatically output by the neural network system. It is of practical significance to show the intensity of the heat island effect by a visual method, which is simple, intuitive and can be dynamically monitored.
Keywords: heat island effect, neural network, comprehensive evaluation, visualization
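The fuzzy comprehensive evaluation step underlying such an index can be sketched as follows (a minimal illustration, not the paper's model; the six index weights, the membership matrix, and the three grade labels are all invented): each index's membership in the intensity grades is aggregated by the weight vector into one overall grade vector.

```python
import numpy as np

# Minimal fuzzy comprehensive evaluation sketch: w holds the weights of six
# hypothetical indexes, each row of R gives one index's membership in the
# grades (weak / moderate / strong heat island), and B = w . R aggregates
# them into an overall grade vector. All numbers are illustrative.
w = np.array([0.25, 0.20, 0.20, 0.15, 0.10, 0.10])  # index weights, sum to 1
R = np.array([                                       # membership matrix
    [0.1, 0.3, 0.6],
    [0.2, 0.5, 0.3],
    [0.0, 0.4, 0.6],
    [0.3, 0.4, 0.3],
    [0.1, 0.6, 0.3],
    [0.2, 0.3, 0.5],
])

B = w @ R                                            # aggregated grade vector
grade = ["weak", "moderate", "strong"][int(np.argmax(B))]
```

In the paper's pipeline the weights would come from the trained neural network rather than being fixed by hand.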
Procedia PDF Downloads 133
1586 Challenges in Translating Malay Idiomatic Expressions: A Study
Authors: Nor Ruba’Yah Binti Abd Rahim, Norsyahidah Binti Jaafar
Abstract:
Translating Malay idiomatic expressions into other languages presents unique challenges due to the deep cultural nuances and linguistic intricacies embedded within these expressions. This study examined these challenges through a two-pronged methodology: a comparative analysis using survey questionnaires and a quiz administered to 50 semester 6 students taking the Translation 1 course, and in-depth interviews with their lecturers. The survey aimed to capture students' experiences and difficulties in translating selected Malay idioms into English, highlighting common errors and misunderstandings. Complementing this, interviews with lecturers provided expert insights into the nuances of these expressions and effective translation strategies. The findings revealed that literal translations often fail to convey the intended meanings, underscoring the importance of cultural competence and contextual awareness. The study also identified key factors that contribute to successful translations, such as the translator's familiarity with both source and target cultures and their ability to adapt expressions creatively. This research contributes to the field of translation studies by offering practical recommendations for improving the translation of idiomatic expressions, thereby enhancing cross-cultural communication. The insights gained from this study are valuable for translators, educators, and students, emphasizing the need for a nuanced approach that respects the cultural richness of the source language while ensuring clarity in the target language.
Keywords: idiomatic expressions, cultural competence, translation strategies, cross-cultural communication, students' difficulties
Procedia PDF Downloads 12
1585 Dislocation Density-Based Modeling of the Grain Refinement in Surface Mechanical Attrition Treatment
Authors: Reza Miresmaeili, Asghar Heydari Astaraee, Fereshteh Dolati
Abstract:
In the present study, an analytical model based on a dislocation density model was developed to simulate grain refinement in surface mechanical attrition treatment (SMAT). The correlation between SMAT time and the development of plastic strain, on the one hand, and dislocation density evolution, on the other, was established to simulate the grain refinement in SMAT. A dislocation density-based constitutive material law was implemented using a VUHARD subroutine. A random sequence of shots is taken into consideration in the multiple-impact model, implemented in the Python programming language using a random function. The simulation technique was to model each impact in a separate run and then transfer the results of each run as initial conditions for the next run (impact). The developed finite element (FE) model of multiple impacts describes the coverage evolution in SMAT. Simulations were run to coverage levels as high as 4500%. It is shown that the coverage implemented in the FE model is equal to the experimental coverage, and that the numerical SMAT coverage parameter adequately conforms to the well-known Avrami model. Comparison between numerical results and experimental measurements of residual stresses and the depth of the deformation layers confirms the performance of the established FE model for surface engineering evaluations in SMA treatment. X-ray diffraction (XRD) studies of grain refinement, including the resultant grain size and dislocation density, were conducted to validate the established model. The full width at half-maximum of XRD profiles can be used to measure the grain size. Numerical results and experimental measurements of grain refinement are in good agreement and show the capability of the established FE model to predict the gradient microstructure in SMA treatment.
Keywords: dislocation density, grain refinement, severe plastic deformation, simulation, surface mechanical attrition treatment
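The Avrami-type coverage evolution mentioned above can be sketched in a few lines (an illustration under stated assumptions, not the paper's FE model: each pass is assumed to impact a fixed area fraction at random, and the 5% area ratio is invented):

```python
# Avrami-style coverage sketch: if each pass of random impacts covers an
# area fraction `area_ratio`, the fraction still untouched after n passes is
# (1 - area_ratio)**n, so coverage saturates toward 100% rather than growing
# linearly. The area ratio here is hypothetical.
def coverage(n_passes, area_ratio=0.05):
    """Percent coverage after n_passes random impact passes."""
    return 100.0 * (1.0 - (1.0 - area_ratio) ** n_passes)
```

This saturating behavior is why treatments are often reported at nominal coverages well above 100% (e.g. the 4500% level simulated above), meaning many multiples of the exposure needed to approach full coverage.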
Procedia PDF Downloads 136
1584 Baby Boomers and Millennials: Creating a Specialized Orientation Program
Authors: K. Rowan
Abstract:
In this paper, the author discusses how developing a specialized orientation has improved nursing satisfaction and decreased the incidence of incivility among staff. With the predicted shortages in nursing, we must provide an environment that reflects the needs of the current workforce while also focusing on the sustainability of nursing. Each generation has different qualities and methods by which he or she prefers to learn. Baby Boomers have a desire to share their knowledge, and they feel that the quality of undergraduate nursing education has declined. Millennials have grown up with 'helicopter parents' and expect the preceptor to behave in the same manner. This information must be shared with the Baby Boomers, as these are the staff members who are passing the torch of perioperative nursing. Currently, nurse fellows are trained with the Association of periOperative Nurses' Periop 101 program, with a didactic and clinical observation component; there is no specialized perioperative preceptor program. In creating a preceptor program, the concept of Novice to Expert, communication techniques, dealing with horizontal violence, and generational-gap education are reviewed with the preceptor. The fellows are taught communication and de-escalation skills, along with information on generational gaps. The groups are then brought together for introductions and teamwork exercises. At the program's core is knowledge of generational differences. The preceptor training has increased preceptor satisfaction, as well as that of the new nurse fellows. The creation of a specialized education program has significantly decreased incivility among our nurses, all while increasing nursing satisfaction and improving nursing retention. This program model can translate to all nursing specialties and assist in overcoming the impending shortage.
Keywords: baby boomers, education, generational gap, millennials, nursing, perioperative
Procedia PDF Downloads 166
1583 The Formulation of R&D Strategy for Biofuel Technology: A Case Study of the Aviation Industry in Iran
Authors: Maryam Amiri, Ali Rajabzade, Gholam Reza Goudarzi, Reza Heidari
Abstract:
Technology growth and environmental change are fast, and therefore companies and industries are strongly inclined to pursue R&D activities for active participation in the market and the achievement of a competitive advantage. The aviation industry and its subdivisions employ high-level technology and play a special role in the economic and social development of countries. Thus, for the aviation industry to acquire new technologies and compete with the aviation industries of other countries, a capability in R&D is required. An appropriate R&D strategy helps ensure that state-of-the-art technologies can be achieved. Biofuel technology is one of the newest technologies to attract worldwide discussion in the aviation industry. The purpose of this research was the formulation of an R&D strategy for biofuel technology in the aviation industry of Iran. After reviewing the theoretical foundations of R&D methods and strategies, we classified R&D strategies into four main categories: internal R&D, collaboration R&D, outsourcing R&D and in-house R&D. After this review, a model for formulating an R&D strategy with the aim of developing biofuel technology in the aviation industry in Iran was offered. With regard to the requirements and characteristics of the industry and technology in the model, we presented an integrated approach to R&D. Based on decision-making techniques and the analysis of structured expert opinion, 4 R&D strategies for different scenarios, aimed at developing biofuel technology in the aviation industry in Iran, were recommended. In this research, based on the common features of the R&D implementation process, a logical classification of these methods is presented as R&D strategies. R&D strategies and their characteristics were then developed according to the experts. In the end, we introduce a model that considers the role of the aviation industry and biofuel technology in R&D strategies, and finally, for the conditions and various scenarios of the aviation industry, we formulate a specific R&D strategy.
Keywords: aviation industry, biofuel technology, R&D, R&D strategy
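The scenario-based selection among the four R&D categories can be sketched as a weighted-scoring exercise (purely illustrative; the criteria, expert scores, and scenario weights below are invented and merely show the mechanism of matching a strategy to a scenario):

```python
# Hypothetical sketch of scenario-based strategy selection: experts score
# each R&D mode against criteria on a 1-9 scale, the criteria weights shift
# per scenario, and the best-scoring mode is recommended for that scenario.
# All numbers are invented.
criteria = ["cost", "speed", "capability building"]
scores = {
    "internal R&D":      [3, 4, 9],
    "collaboration R&D": [6, 7, 6],
    "outsourcing R&D":   [8, 8, 2],
    "in-house R&D":      [4, 5, 8],
}

def recommend(weights):
    """Return the strategy with the highest weighted score for a scenario."""
    total = {s: sum(w * v for w, v in zip(weights, vals))
             for s, vals in scores.items()}
    return max(total, key=total.get)

tight_budget = recommend([0.6, 0.3, 0.1])  # cost-driven scenario
long_term    = recommend([0.1, 0.2, 0.7])  # capability-driven scenario
```

A study like this one would instead derive the weights and scores from structured expert elicitation rather than fixed numbers.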
Procedia PDF Downloads 579
1582 Disease Level Assessment in Wheat Plots Using a Residual Deep Learning Algorithm
Authors: Felipe A. Guth, Shane Ward, Kevin McDonnell
Abstract:
The assessment of disease levels in crop fields is an important and time-consuming task that generally relies on the expert knowledge of trained individuals. Image classification for agricultural problems has historically been based on classical machine learning strategies that make use of hand-engineered features on top of a classification algorithm. This approach tends not to produce results with high accuracy and generalization across the classes classified by the system when the nature of the elements has significant variability. The advent of deep convolutional neural networks has revolutionized the field of machine learning, especially in computer vision tasks. These networks have great learning capacity and have been applied successfully to image classification and object detection tasks in recent years. The objective of this work was to propose a new method, based on deep learning convolutional neural networks, for the task of disease level monitoring. Common RGB images of winter wheat were obtained during a growing season. Five categories of disease level presence were produced, in collaboration with agronomists, for the algorithm to classify. Disease level assessments performed by experts provided ground truth data for the disease score of the same winter wheat plots where the RGB images were acquired. The system had an overall accuracy of 84% in discriminating the disease level classes.
Keywords: crop disease assessment, deep learning, precision agriculture, residual neural networks
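The residual ("skip") connection that gives residual networks their name can be shown in miniature (a NumPy illustration of the mechanism only, not the paper's architecture; dense layers stand in for convolutions and the weights are arbitrary):

```python
import numpy as np

# Residual block sketch: the block learns a correction F(x) and outputs
# relu(x + F(x)), so a block whose weights are near zero falls back to the
# identity. This is what lets deep residual networks train stably.
def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, W1, W2):
    # identity shortcut: x is added back before the final activation
    return relu(x + W2 @ relu(W1 @ x))

rng = np.random.default_rng(0)
x = rng.normal(size=4)
W1 = np.zeros((4, 4))  # zero weights: F(x) = 0, block reduces to relu(x)
W2 = np.zeros((4, 4))
y = residual_block(x, W1, W2)
```

Stacking many such blocks (with learned convolutional weights) yields the kind of residual classifier used for the five disease-level classes above.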
Procedia PDF Downloads 331
1581 Fashion as Identity Architect: Sikhs in Perspective
Authors: Anupreet B. Dugal, Suruchi Mittar
Abstract:
This research explores fashion as a tool to effectively emancipate Sikh identity. The study presents information on how fashion has played a critical and visible role in reflecting and helping to construct identities based on religiosity. It discusses the Sikh identity, its origin, its continuity and its contemporary ambivalence. Fashion has mostly, if not always, been used as a means of establishing identity. This research creates a gateway to discuss the impact that fashion can have on existing socio-cultural and religious models. The study focuses on the Sikhs, a small community of India, with regard to their visual appearance. The research is based on a case study of 1469, a store infusing Sikhism as a style quotient. Subsequently, within the research framework, a sample study will be conducted with Sikh youth (18-25 years old) hailing from New Delhi, the capital city of India. 1469 formulates a striking case study for examining the relationship between fashion and religious and personal identity.
Keywords: fashion, identity, sikh identity, textiles
Procedia PDF Downloads 476