Search results for: advanced analytics
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2441

2141 The Spiritual Distress of Women Coping with the End of Life and Death of Their Spouses

Authors: Szu-Mei Hsiao

Abstract:

Many nurses have concerns about the difficulties of providing spiritual care for ethnic-Chinese patients and family members within their cultural context, due to a lack of knowledge and training. Most family caregivers are female. There has been little research exploring the potential impact of Chinese cultural values on the spiritual distress of couple dyads in Taiwan. This study explores the spiritual issues of Taiwanese women coping with their husband’s advanced cancer from palliative care through to death. Qualitative multiple case studies were used. Data were collected through participant observation and in-depth face-to-face interviews, and the transcribed interview data were analyzed using qualitative content analysis. Three couples were recruited from a community-based rural hospital in Taiwan where the husbands were hospitalized in a medical ward. Four spiritual distress themes emerged from the analysis: (1) a personal conflict in trying to come to terms with love and forgiveness, including the inability to forgive their husband’s mistakes and a lack of family love and support; (2) a feeling of hopelessness due to advanced cancer, such as disappointment in their destiny and karma and doubts about survival; (3) a feeling of uncertainty in facing death peacefully, such as fear of the unknown world; and (4) a feeling of doubt causing them to question the meaning and values of their lives. This research has shown that caregivers needed the support of family, friends, social welfare, and their religion to meet their spiritual needs in coping with the final stages of life and death. The findings of this study could assist health professionals in detecting the spiritual distress of ethnic-Chinese patients and caregivers within their cultural or religious context as early as possible.

Keywords: advanced cancer, Buddhism, Confucianism, Taoism, qualitative research, spiritual distress

Procedia PDF Downloads 157
2140 Control the Flow of Big Data

Authors: Shizra Waris, Saleem Akhtar

Abstract:

Big data is a research area receiving attention from both academia and the IT community. In the digital world, the amount of data produced and stored has grown enormously within a short period of time, and this fast-increasing rate of data has created many challenges. In this paper, we use functionalism and structuralism paradigms to analyze the genesis of big data applications and their current trends. The paper presents a complete discussion of state-of-the-art big data technologies based on group and stream data processing, and analyzes the strengths and weaknesses of these technologies. This study also covers big data analytics techniques, processing methods, some reported case studies from different vendors, several open research challenges, and the opportunities brought about by big data. The similarities and differences of these techniques and technologies, based on their important limitations, are also investigated. Emerging technologies are suggested as a solution for big data problems.

Keywords: computer, IT community, industry, big data

Procedia PDF Downloads 162
2139 Advanced Particle Characterisation of Suspended Sediment in the Danube River Using Automated Imaging and Laser Diffraction

Authors: Flóra Pomázi, Sándor Baranya, Zoltán Szalai

Abstract:

A harmonized monitoring of suspended sediment transport along such a large river as the world’s most international river, the Danube, is a rather challenging task. The traditional monitoring method in Hungary is obsolete, but indirect measurement devices and techniques like optical backscatter sensors (OBS), laser diffraction or acoustic backscatter sensors (ABS) could provide a fast and efficient alternative to direct methods. However, these methods are strongly sensitive to the particle characteristics (i.e. particle shape, particle size and mineral composition). The current method does not provide sufficient information about particle size distribution, mineral analysis is rarely done, and the shape of the suspended sediment particles has not been examined yet. The aims of the study are (1) to characterise the suspended sediment in the Danube River using advanced particle characterisation methods such as laser diffraction and automated imaging, and (2) to perform a sensitivity analysis of the indirect methods in order to determine the impact of suspended particle characteristics. The particle size distribution is determined by laser diffraction. The particle shape and mineral composition analysis is done with the Morphologi G3ID image analyser. The investigated indirect measurement devices are the LISST-Portable|XR, the LISST-ABS (Sequoia Inc.) and the Rio Grande 1200 kHz ADCP (Teledyne Marine). The major findings of this study are (1) the statistical shape of the suspended sediment particles, which is the first research in this context; (2) an updated particle size distribution that can be compared with historical information so that morphological changes can be tracked; (3) the actual mineral composition of the suspended sediment in the Danube River; and (4) increased reliability of the tested indirect methods, based on the results of the sensitivity analysis and the previous findings.

Keywords: advanced particle characterisation, automated imaging, indirect methods, laser diffraction, mineral composition, suspended sediment

Procedia PDF Downloads 114
2138 Profit Share in Income: An Analysis of Its Influence on Macroeconomic Performance

Authors: Alain Villemeur

Abstract:

The relationships between the profit share in income on the one hand and the growth rates of output and employment on the other hand have been studied for 17 advanced economies since 1961. The vast majority (98%) of annual values for the profit share fall between 20% and 40%, with an average value of 33.9%. For the 17 advanced economies, Gross Domestic Product and productivity growth rates tend to fall as the profit share in income rises. For the employment growth rates, the relationships are complex; nevertheless, over long periods (1961-2000), it appears that the most job-creating economies have been Australia, Canada, and the United States, which experienced a profit share close to 1/3. This raises a number of questions, not least the value of 1/3 for the profit share and its role in macroeconomic fundamentals. To explain these facts, an endogenous growth model is developed. This growth and distribution model reconciles the great ideas of Kaldor (economic growth as a chain reaction), of Keynes (effective demand and marginal efficiency of capital) and of Ricardo (importance of the wage-profit distribution) in an economy facing creative destruction. A production function is obtained, depending mainly on the growth of employment, the rate of net investment and the profit share in income. In theory, we show the existence of incentives: an incentive for job creation when the profit share is less than 1/3 and an incentive for job destruction in the opposite case. Thus, increasing the profit share can boost the employment growth rate until it reaches the value of 1/3; beyond that, it lowers the employment growth rate. Three key findings can be drawn from these considerations. The first is that the best GDP and productivity growth rates are obtained with a profit share of less than 1/3. The second is that maximum job growth is associated with a 1/3 profit share, given the existence of incentives to create more jobs when the profit share is less than 1/3 or to destroy more jobs otherwise. The third is the decline in performance (GDP growth rate and productivity growth rate) when the profit share increases. In conclusion, increasing the profit share in income weakens GDP growth or productivity growth as a long-term trend, contrary to the trickle-down hypothesis. The employment growth rate is maximal for a profit share in income of 1/3. All these lessons suggest macroeconomic policies that take the profit share in income into account.
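The threshold argument above lends itself to a small numerical illustration. The Python sketch below is a purely hypothetical, stylised version of the employment-growth incentive around a 1/3 profit share; the functional form, peak growth rate and slope are illustrative assumptions, not the paper's estimated model.

```python
def employment_growth(profit_share, peak_growth=2.0, slope=6.0):
    """Stylised hump-shaped relation from the paper's argument: job creation is
    encouraged below a 1/3 profit share and job destruction dominates above it.
    The functional form and coefficients here are purely illustrative."""
    return peak_growth - slope * abs(profit_share - 1 / 3)

for share in (0.20, 0.25, 1 / 3, 0.38, 0.40):
    print(f"profit share {share:.2f} -> employment growth {employment_growth(share):.2f}%")
```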

Keywords: advanced countries, GDP growth, employment growth, profit share, economic policies

Procedia PDF Downloads 34
2137 University Short Courses Web Application Using ASP.Net

Authors: Ahmed Hariri

Abstract:

E-learning has become a necessity in advanced education: it facilitates communication between students and teachers and speeds up the process with less time and effort. With the rapid development of distance education, institutions must keep up with this age by providing websites that allow students and teachers to take full advantage of advanced education. In this regard, we developed the University Short Courses web application, which is specially designed for the Faculty of Computing and Information Technology, Rabigh, Kingdom of Saudi Arabia. After an elaborate review of the current state-of-the-art methods of teaching and learning, we found that instructors deliver extra short courses and workshops to students to enhance their knowledge, and that this process is completely manual. The prevailing methods of teaching and learning consume a lot of time; in this context, the University Short Courses web application will help make the process easy and user-friendly. The site allows students to view and register online for short courses conducted by instructors and to see course start dates, end dates and locations. It also allows instructors to upload their course materials to the site and to see the students enrolled. Finally, students can print a certificate after finishing a course online. ASP.NET, JavaScript and a SQL Server database will be used to develop the University Short Courses web application.

Keywords: e-learning, short courses, ASP.NET, SQL SERVER

Procedia PDF Downloads 113
2136 Multi-Scale Damage and Mechanical Behavior of Sheet Molding Compound Composites Subjected to Fatigue, Dynamic, and Post-Fatigue Dynamic Loadings

Authors: M. Shirinbayan, J. Fitoussi, N. Abbasnezhad, A. Lucas, A. Tcharkhtchi

Abstract:

Sheet Molding Compounds (SMCs) with special microstructures are very attractive for use in automobile structures, especially when these are accidentally subjected to collision-type accidents, because of their high energy absorption capacity. These materials are designated as standard SMC, Advanced Sheet Molding Compound (A-SMC), Low-Density SMC (LD-SMC), and so on. In this study, testing methods have been performed to compare the mechanical responses and damage phenomena of SMC, LD-SMC, and A-SMC under quasi-static and high strain rate tensile tests. The paper also aims at investigating the effect of an initial pre-damage induced by fatigue on the tensile dynamic behavior of A-SMC. In the case of SMCs and A-SMCs, whatever the fiber orientation and applied strain rate, the first observed damage phenomenon corresponds to decohesion of the fiber-matrix interface, which is followed by coalescence and multiplication of these micro-cracks and their propagation. For LD-SMCs, damage mechanisms depend on the presence of Hollow Glass Microspheres (HGM) and on fiber orientation.

Keywords: SMC, Sheet Molding Compound, LD-SMC, Low-Density SMC, A-SMC, Advanced Sheet Molding Compounds, HGM, Hollow Glass Microspheres, damage

Procedia PDF Downloads 187
2135 The Use of Rule-Based Cellular Automata to Track and Forecast the Dispersal of Classical Biocontrol Agents at Scale, with an Application to the Fopius arisanus Fruit Fly Parasitoid

Authors: Agboka Komi Mensah, John Odindi, Elfatih M. Abdel-Rahman, Onisimo Mutanga, Henri Ez Tonnang

Abstract:

Ecosystems are networks of organisms and populations that form a community of various species interacting within their habitats. Such habitats are defined by abiotic and biotic conditions that establish the initial limits to a population's growth, development, and reproduction. The habitat’s conditions explain the context in which species interact to access resources such as food, water, space, shelter, and mates, allowing for feeding, dispersal, and reproduction. Dispersal is an essential life-history strategy that affects gene flow, resource competition, population dynamics, and species distributions. Despite the importance of dispersal in population dynamics and survival, understanding the mechanism underpinning the dispersal of organisms remains challenging. For instance, when an organism moves into an ecosystem for survival and resource competition, its progression is highly influenced by factors such as its physiological state, climatic variables and its ability to evade predation. Therefore, greater spatial detail is necessary to understand organism dispersal dynamics. Organism dispersal can be addressed using empirical and mechanistic modelling approaches, with the adopted approach depending on the study's purpose. Cellular automata (CA) are one such approach and have been successfully used in biological studies to analyze the dispersal of living organisms. A cellular automaton can be briefly described as a grid of cells, each of which may be occupied by an individual, that evolves according to a set of rules based on the states of neighbouring cells. However, for modelling the dispersal of individual organisms at the landscape scale, we lack user-friendly tools that do not require expertise in mathematical modelling and computing, such as a visual analytics framework for tracking and forecasting the dispersal behaviour of organisms. The term "visual analytics" (VA) describes a semiautomated approach to electronic data processing that is guided by users who can interact with the data via an interface. Essentially, VA converts large amounts of quantitative or qualitative data into graphical formats that can be customized based on the operator's needs. Additionally, this approach can be used to enhance the ability of users from various backgrounds to understand data, communicate results, and disseminate information across a wide range of disciplines. To support effective analysis of organism dispersal at the landscape scale, we therefore designed Pydisp, a free visual analytics tool for spatiotemporal dispersal modelling built in Python. Its user interface allows users to perform quick and interactive spatiotemporal analyses of species dispersal using bioecological and climatic data. Pydisp enables reuse and upgrading through the use of simple principles such as fuzzy cellular automata algorithms. The potential of dispersal modelling is demonstrated in a case study predicting the dispersal of Fopius arisanus (Sonan), an endoparasitoid used to control Bactrocera dorsalis (Hendel) (Diptera: Tephritidae) in Kenya. The results obtained from our example clearly illustrate the parasitoid's dispersal process at the landscape level and confirm that dynamic processes in an agroecosystem are better understood when designed using mechanistic modelling approaches. Furthermore, as demonstrated in the example, the software is highly effective in portraying the dispersal of organisms despite the unavailability of detailed data on the species' dispersal mechanisms.
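To make the cellular-automaton idea concrete, the following minimal Python sketch steps a presence grid forward in time, letting occupied cells colonise their neighbours with a probability scaled by a habitat-suitability layer (a fuzzy membership value in [0, 1]). It is an illustrative toy, not Pydisp itself; the grid size, base rate and random suitability layer are assumptions.

```python
import numpy as np

def dispersal_step(presence, suitability, rng, base_rate=0.3):
    """One cellular-automaton step: occupied cells may colonise their
    4-neighbours with a probability scaled by local habitat suitability."""
    new = presence.copy()
    rows, cols = presence.shape
    for r in range(rows):
        for c in range(cols):
            if presence[r, c]:
                for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < rows and 0 <= nc < cols and not new[nr, nc]:
                        if rng.random() < base_rate * suitability[nr, nc]:
                            new[nr, nc] = True
    return new

rng = np.random.default_rng(42)
suitability = rng.random((50, 50))      # stand-in for bio-ecological/climatic layers
presence = np.zeros((50, 50), dtype=bool)
presence[25, 25] = True                  # hypothetical release point of the parasitoid
for _ in range(20):
    presence = dispersal_step(presence, suitability, rng)
print("occupied cells after 20 steps:", presence.sum())
```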

Keywords: cellular automata, fuzzy logic, landscape, spatiotemporal

Procedia PDF Downloads 55
2134 Investigating the Online Effect of Language on Gesture in Advanced Bilinguals of Two Structurally Different Languages in Comparison to L1 Native Speakers of L2, and Whether Bilinguals Follow Target L2 Patterns in Speech and Co-Speech Gesture

Authors: Armita Ghobadi, Samantha Emerson, Seyda Ozcaliskan

Abstract:

Being bilingual involves mastery of both speech and gesture patterns in a second language (L2). We know from earlier work in first language (L1) production contexts that speech and co-speech gesture form a tightly integrated system: co-speech gesture mirrors the patterns observed in speech, suggesting an online effect of language on the nonverbal representation of events in gesture during the act of speaking (i.e., “thinking for speaking”). Relatively less is known about the online effect of language on gesture in bilinguals speaking structurally different languages. The few existing studies—mostly with small sample sizes—suggest inconclusive findings: some show greater achievement of L2 patterns in gesture with more advanced L2 speech production, while others show preferences for L1 gesture patterns even in advanced bilinguals. In this study, we focus on advanced bilingual speakers of two structurally different languages (Spanish L1 with English L2) in comparison to L1 English speakers. We ask whether bilingual speakers will follow target L2 patterns not only in speech but also in gesture, or alternatively, follow L2 patterns in speech but resort to L1 patterns in gesture. We examined this question by studying speech and gestures produced by 23 advanced adult Spanish (L1)-English (L2) bilinguals (Mage=22; SD=7) and 23 monolingual English speakers (Mage=20; SD=2). Participants were shown 16 animated motion event scenes that included distinct manner and path components (e.g., "run over the bridge"). We recorded and transcribed all participant responses for speech and segmented them into sentence units that included at least one motion verb and its associated arguments. We also coded all gestures that accompanied each sentence unit. We focused on motion event descriptions as they show strong crosslinguistic differences in the packaging of motion elements in speech and co-speech gesture in first language production contexts. English speakers synthesize manner and path into a single clause or gesture (he runs over the bridge; running fingers forward), while Spanish speakers express each component separately (manner-only: el corre=he is running; circle arms next to body conveying running; path-only: el cruza el puente=he crosses the bridge; trace finger forward conveying trajectory). We tallied all responses by group and packaging type, separately for speech and co-speech gesture. Our preliminary results (n=4/group) showed that productions in English L1 and Spanish L1 differed, with a greater preference for conflated packaging in L1 English and separated packaging in L1 Spanish—a pattern that was also largely evident in co-speech gesture. Bilinguals’ production in L2 English, however, followed the patterns of the target language in speech—with greater preference for conflated packaging—but not in gesture. Bilinguals used separated and conflated strategies in gesture at roughly similar rates in their L2 English, showing an effect of both L1 and L2 on co-speech gesture. Our results suggest that online production of an L2 has more limited effects on L2 gestures and that mastery of native-like patterns in L2 gesture might take longer than native-like L2 speech patterns.

Keywords: bilingualism, cross-linguistic variation, gesture, second language acquisition, thinking for speaking hypothesis

Procedia PDF Downloads 49
2133 Blame Classification through N-Grams in E-Commerce Customer Reviews

Authors: Subhadeep Mandal, Sujoy Bhattacharya, Pabitra Mitra, Diya Guha Roy, Seema Bhattacharya

Abstract:

E-commerce firms allow customers to evaluate and review the things they buy as a positive or negative experience. The e-commerce transaction process is made up of a variety of diverse organizations and activities that operate independently but are connected together to complete the transaction (from placing an order to the goods reaching the client). After a negative shopping experience, clients frequently disregard the critical assessment of these individual businesses and submit their feedback on an all-over basis, which benefits certain enterprises but is tedious for others. In this article, we deal solely with negative reviews and attempt to distinguish negative reviews in which the e-commerce firm is explicitly blamed by customers for a bad purchasing experience from other negative reviews.
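As a concrete illustration of the n-gram classification setup described here, the short Python sketch below trains a classifier on word uni- and bi-grams over a handful of toy negative reviews. The reviews, labels and model choice (logistic regression via scikit-learn) are illustrative assumptions; the paper's actual corpus and classifier are not shown in the abstract.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy negative reviews: label 1 = platform explicitly blamed, 0 = other negative review
reviews = [
    "the courier never arrived and customer service ignored my emails",
    "the shirt colour faded after one wash",
    "order cancelled by the seller without any explanation",
    "the phone battery drains too fast",
]
labels = [1, 0, 1, 0]

# Word uni- and bi-grams as features, as in an n-gram classification setup
model = make_pipeline(
    CountVectorizer(ngram_range=(1, 2), min_df=1),
    LogisticRegression(max_iter=1000),
)
model.fit(reviews, labels)
print(model.predict(["delivery was late and the refund was refused"]))
```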

Keywords: e-commerce, online shopping, customer reviews, customer behaviour, text analytics, n-grams classification

Procedia PDF Downloads 228
2132 Advanced Energy Absorbers Used in Blast Resistant Systems

Authors: Martina Drdlová, Michal Frank, Radek Řídký, Jaroslav Buchar, Josef Krátký

Abstract:

The main aim of the presented experiments is to improve the behaviour of sandwich structures under dynamic loading, such as a crash or an explosion. This paper describes an experimental investigation of the response of new advanced materials to low- and high-velocity loading. Blast wave energy absorbers were designed using two types of porous lightweight raw particle materials based on expanded glass and ceramics, with dimensions of 0.5-1 mm, combined with a polymeric binder. The effect of the binder amount on the static and dynamic properties of the designed materials was observed. Prism-shaped specimens were prepared and loaded to obtain the physico-mechanical parameters (bulk density, compressive and flexural strength) under quasistatic load; the dynamic response was determined using a Split Hopkinson Pressure Bar apparatus. Numerical investigation of the material behaviour in a sandwich structure was performed using the implicit/explicit solver LS-Dyna. As the last step, the developed material was used as the interlayer of a blast-resistant litter bin, and its functionality was verified by real field blast tests.

Keywords: blast energy absorber, SHPB, expanded glass, expanded ceramics

Procedia PDF Downloads 434
2131 Developing a Translator Career Path: Based on the Dreyfus Model of Skills Acquisition

Authors: Noha A. Alowedi

Abstract:

This paper proposes a Translator Career Path (TCP) based on the Dreyfus Model of Skills Acquisition as its conceptual framework. In this qualitative study, the methodology to collect and analyze the data takes an inductive approach that draws upon the literature to form the criteria for the different steps in the TCP. This path is based on descriptors of expert translator performance and best employees’ practice documented in the literature. Each translator skill will be graded as novice, advanced beginner, competent, proficient, and expert. Consequently, five levels of translator performance are identified in the TCP as five ranks. The first rank is the intern translator, which is equivalent to the novice level; the second rank is the assistant translator, which is equivalent to the advanced beginner level; the third rank is the associate translator, which is equivalent to the competent level; the fourth rank is the translator, which is equivalent to the proficient level; finally, the fifth rank is the expert translator, which is equivalent to the expert level. The main function of this career path is to guide the processes of translator development in translation organizations. Although it is designed primarily for the needs of in-house translators’ supervisors, the TCP can be used in academic settings by translation trainers and teachers.

Keywords: Dreyfus model, translation organization, translator career path, translator development, translator evaluation, translator promotion

Procedia PDF Downloads 343
2130 An Approach to Autonomous Drones Using Deep Reinforcement Learning and Object Detection

Authors: K. R. Roopesh Bharatwaj, Avinash Maharana, Favour Tobi Aborisade, Roger Young

Abstract:

Presently, there are few cases of complete automation of drones and their allied intelligence capabilities; in essence, the potential of the drone has not yet been fully utilized. This paper presents feasible methods to build an intelligent drone with smart capabilities such as self-driving and obstacle avoidance. It does this through advanced reinforcement learning techniques and performs object detection using recent algorithms capable of processing lightweight models with fast training in real time. For the scope of this paper, after researching the various algorithms and comparing them, we implemented the Deep Q-Network (DQN) algorithm in the AirSim simulator. In future work, we plan to implement further advanced self-driving and object detection algorithms, as well as voice-based speech recognition for the entire drone operation, which would provide an option of speech communication between users (people) and the drone in unavoidable circumstances, thus making the drone an interactive, intelligent, robotic, voice-enabled service assistant. The proposed drone has a wide scope of usability and is applicable in scenarios such as disaster management, air transport of essentials, agriculture, manufacturing, monitoring people's movements in public areas, and defense. Also discussed is the entire drone communication system based on satellite broadband Internet technology for faster computation and seamless communication during disasters and remote-location operations. This paper explains the feasible algorithms required to achieve this goal and serves as a reference for future researchers going down this path.
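The core of the DQN approach mentioned above is a Q-network trained against a target network on replayed transitions. The following self-contained PyTorch sketch shows one such update on synthetic transitions that stand in for simulator roll-outs; the network size, state and action dimensions, and hyperparameters are illustrative assumptions, and no AirSim calls are shown.

```python
import random
from collections import deque

import torch
import torch.nn as nn

class QNet(nn.Module):
    """Small MLP Q-network: state -> one Q-value per discrete action."""
    def __init__(self, state_dim, n_actions):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )
    def forward(self, x):
        return self.net(x)

state_dim, n_actions, gamma = 8, 4, 0.99
q_net, target_net = QNet(state_dim, n_actions), QNet(state_dim, n_actions)
target_net.load_state_dict(q_net.state_dict())
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)

# Fill the buffer with synthetic transitions standing in for simulator roll-outs
for _ in range(1_000):
    s, s2 = torch.randn(state_dim), torch.randn(state_dim)
    a = random.randrange(n_actions)
    r = random.uniform(-1.0, 1.0)
    done = random.random() < 0.05
    replay.append((s, a, r, s2, done))

# One DQN update on a sampled mini-batch
batch = random.sample(list(replay), 64)
s, a, r, s2, d = (torch.stack([b[0] for b in batch]),
                  torch.tensor([b[1] for b in batch]),
                  torch.tensor([b[2] for b in batch]),
                  torch.stack([b[3] for b in batch]),
                  torch.tensor([float(b[4]) for b in batch]))
q = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
with torch.no_grad():
    target = r + gamma * (1 - d) * target_net(s2).max(dim=1).values
loss = nn.functional.mse_loss(q, target)
optimizer.zero_grad(); loss.backward(); optimizer.step()
print("DQN loss:", loss.item())
```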

Keywords: convolutional neural network, natural language processing, obstacle avoidance, satellite broadband technology, self-driving

Procedia PDF Downloads 218
2129 The Effects of Three Pre-Reading Activities (Text Summary, Vocabulary Definition, and Pre-Passage Questions) on the Reading Comprehension of Iranian EFL Learners

Authors: Leila Anjomshoa, Firooz Sadighi

Abstract:

This study investigated the effects of three types of pre-reading activities (vocabulary definitions, text summary and pre-passage questions) on EFL learners’ English reading comprehension. On the basis of the results of a placement test administered to two hundred and thirty English students at Kerman Azad University, 200 subjects (one hundred intermediate and one hundred advanced) were selected. Four texts, two of them at intermediate level and two of them at advanced level, were chosen. The data gathered were subjected to the statistical procedures of ANOVA. A close examination of the results through Tukey’s HSD showed that the experimental groups performed better than the control group, highlighting the effect of the treatment on them. Also, experimental group C (text summary) performed remarkably better than the other three groups (both experimental and control). Group B subjects (vocabulary definitions) performed better than groups A and D. The pre-passage questions group (D) scored higher than the control condition.
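For readers unfamiliar with the analysis pipeline, the Python sketch below runs a one-way ANOVA followed by Tukey's HSD on synthetic comprehension scores for four conditions. The group names, score distributions and sample sizes are invented for illustration and are not the study's data.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
# Hypothetical comprehension scores for the four conditions in the design
groups = {
    "control": rng.normal(60, 8, 50),
    "vocabulary": rng.normal(65, 8, 50),
    "summary": rng.normal(70, 8, 50),
    "questions": rng.normal(63, 8, 50),
}

# Omnibus test across all four groups
f_stat, p_value = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Post-hoc pairwise comparisons with Tukey's HSD
scores = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(scores, labels, alpha=0.05))
```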

Keywords: pre-reading activities, text summary, vocabulary definition, pre-passage questions, reading comprehension

Procedia PDF Downloads 322
2128 Hybrid Treatment Method for Decolorization of Mixed Dyes: Rhodamine-B, Brilliant Green and Congo Red

Authors: D. Naresh Yadav, K. Anand Kishore, Bhaskar Bethi, Shirish H. Sonawane, D. Bhagawan

Abstract:

Untreated industrial wastewater discharged into the environment causes contamination of soil, water and air. Advanced treatment methods for enhanced wastewater treatment are attracting substantial interest among the unit processes currently employed in wastewater treatment. The textile industry is one of the predominant producers of wastewater in the current industrialized situation, and the dyes rejected by the textile industry need to be treated properly before discharge into water bodies. In the present investigation, a hybrid treatment process has been developed for the treatment of synthetic mixed dye wastewater. Photocatalysis and a ceramic nanoporous membrane are used in an integrated process to minimize fouling and increase the flux. Commercial semiconducting powders (TiO2 and ZnO) have been used as nano-photocatalysts for the degradation of the mixed dye in the hybrid system, and commercial ceramic nanoporous tubular membranes have been used for the rejection of dye and suspended catalysts. Photocatalysis with the catalyst showed an average of 34% decolorization (RB-32%, BG-34% and CR-36%), whereas ceramic nanofiltration showed 56% (RB-54%, BG-56% and CR-58%) decolorization. Integration of photocatalysis and ceramic nanofiltration showed 96% (RB-94%, BG-96% and CR-98%) dye decolorization over 90 min of operation.

Keywords: photocatalysis, ceramic nanoporous membrane, wastewater treatment, advanced oxidation process, process integration

Procedia PDF Downloads 235
2127 Video Analytics on Pedagogy Using Big Data

Authors: Jamuna Loganath

Abstract:

Education is the key to the development of any individual’s personality, and today’s students will be tomorrow’s citizens of the global society. The education of the student is the edifice on which his/her future will be built. Schools should therefore provide an all-round development of students so as to foster a healthy society. The behaviors and attitudes of students in school play an essential role in the success of the education process. Frequent reports of misbehaviors such as clowning, harassing classmates and verbal insults are becoming common in schools today. If this issue is left unattended, it may foster negative attitudes and increase delinquent behavior. So, the need of the hour is to find a solution to this problem. To solve this issue, it is important to monitor students’ behaviors in school, give the necessary feedback and mentor them to develop a positive attitude and help them become successful adults. Nevertheless, measuring students’ behavior and attitude is extremely challenging. No present technology has proven effective in this measurement process because the actions, reactions, interactions and responses of the students are rarely captured in the data due to their complexity. The purpose of this proposal is to recommend an effective supervising system, after carrying out a feasibility study, by measuring the behavior of the students. This can be achieved by equipping schools with CCTV cameras. The CCTV cameras installed in various schools around the world capture the facial expressions and interactions of the students inside and outside their classrooms. The real-time raw videos captured from the CCTV can be uploaded to the cloud over a network. The video feeds are distributed across various nodes in the same rack or in different racks of the same cluster in Hadoop HDFS. The video feeds are converted into small frames and analyzed using various pattern recognition algorithms and the MapReduce paradigm. The video frames are then compared with a benchmark database (good behavior). When misbehavior is detected, an alert message can be sent to the counseling department, which helps in mentoring the students. This will help improve the effectiveness of the education process. As video feeds come from multiple geographical areas (schools in different parts of the world), big data helps in real-time analysis, as it computationally reveals patterns, trends and associations, especially relating to human behavior and interactions. It also handles data that cannot be analyzed by traditional software applications such as RDBMS and OODBMS, and has proven successful in handling human reactions with ease. Therefore, big data could certainly play a vital role in handling this issue. Thus, the effectiveness of the education process can be enhanced with the help of video analytics using the latest big data technology.
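To illustrate the MapReduce step in the pipeline described above, the Python sketch below mimics the classic word-count pattern on per-frame behaviour labels, counting detections per school and behaviour. The records, label names and single-process map/reduce are illustrative stand-ins for the Hadoop cluster described in the abstract.

```python
from collections import Counter
from functools import reduce

# Each record stands in for one analysed video frame: (school_id, detected_behaviour)
frames = [
    ("school_A", "normal"), ("school_A", "harassment"),
    ("school_B", "normal"), ("school_A", "normal"),
    ("school_B", "verbal_insult"), ("school_B", "normal"),
]

# Map phase: emit ((school, behaviour), 1) for every frame
mapped = [((school, behaviour), 1) for school, behaviour in frames]

# Reduce phase: sum counts per key, mimicking the MapReduce word-count pattern
def reducer(acc, kv):
    key, count = kv
    acc[key] += count
    return acc

totals = reduce(reducer, mapped, Counter())
for (school, behaviour), n in sorted(totals.items()):
    print(school, behaviour, n)
```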

Keywords: big data, cloud, CCTV, education process

Procedia PDF Downloads 219
2126 Postharvest Studies Beyond Fresh Market Eating Quality: Phytochemical Changes in Peach Fruit During Ripening and Advanced Senescence

Authors: Mukesh Singh Mer, Brij Lal Attri, Raj Narayan, Anil Kumar

Abstract:

Postharvest studies were conducted under the concept that fruit that do not qualify for the fresh market may be used as a source of bioactive compounds. One peach cultivar (Prunus persica cv. Red June) was evaluated for its phytochemical content and antioxidant capacity during the ripening and over-ripening (advanced senescence) periods, for 12 and 15 d, respectively. Firmness decreased rapidly during this period, from an initial pre-ripe value of 5.85 lb/in2, until the fruit reached the fully ripe stage. In this study we evaluated varietal performance with respect to quality beyond fresh-market eating and nutrition levels. The varieties are T-1 F-16-23, T-2 Florda King, T-3 Nectarine and T-4 Red June. The results show that the highest fruit length (68.50 mm), fruit breadth (71.38 mm) and fruit weight (186.11 g) were found in T-4 Red June, and the highest fruit firmness (8.74 lb/in2) in T-3 Nectarine. The highest acidity (1.66 %), ascorbic acid (440 mg/100 g), reducing sugar (19.77 %) and total sugar (51.73 %) were found in T-4 Red June, T-2 Florda King and T-3 Nectarine at harvesting time, but fruit length (60.81 mm), fruit breadth (51.84 mm) and fruit weight (143.03 g) decreased in T-4 Red June, and fruit firmness decreased (6.29 lb/in2) in T-3 Nectarine. Acidity (0.80 %), ascorbic acid (329.50 mg/100 g), reducing sugar (34.03 %) and total sugar (26.97 %) were found in T-1 F-16-23, T-2 Florda King, T-1 F-16-23 and T-3 Nectarine after 15 days under freeze conditions, by which time the fruit had passed beyond the fresh-market stage. The study reveals that size and yield are good in Red June, while the nutritional value is higher in Florda King and Nectarine peach. Fruit firmness remained unchanged afterwards. In addition, total soluble solids in peach were basically similar during the ripening and over-ripening periods. Further research on secondary metabolism regulation during ripening and advanced senescence is needed to obtain fruit as enriched dietary sources of bioactive compounds or for its use in alternative high-value health markets including dietary supplements, functional foods, cosmetics and pharmaceuticals.

Keywords: metabolism, acidity, ascorbic acid, pharmaceuticals

Procedia PDF Downloads 524
2125 Research on Thermal Runaway Reaction of Ammonium Nitrate with Incompatible Substances

Authors: Weic-Ting Chen, Jo-Ming Tseng

Abstract:

Ammonium nitrate (AN) has caused many accidents around the world, which have claimed a large number of lives and caused serious economic losses. In this study, the safety of the AN production process was examined in depth, and the influence of incompatible substances was estimated from the change in heat value when AN was mixed with them, using thermal analysis techniques; safety parameters were then calculated from the kinetic parameters. Differential scanning calorimetry (DSC) was applied for the temperature-rise tests, and adiabatic thermal analysis was performed in combination with the Advanced Reactive System Screening Tool (ARSST). The research results could contribute to the safety of the ammonium nitrate production process: manufacturers can better understand the possibility of chemical heat release and the operating conditions that will cause a chemical reaction to run out of control when storing or adding new substances, which is why safety parameters were researched for these complex reactions. The results of this study will benefit the AN production process and the relevant staff, who will also gain safety protection in the working environment.

Keywords: ammonium nitrate, incompatible substances, differential scanning calorimeters, advanced reactive system screening tool, safety parameters

Procedia PDF Downloads 67
2124 Design and Implementation of Machine Learning Model for Short-Term Energy Forecasting in Smart Home Management System

Authors: R. Ramesh, K. K. Shivaraman

Abstract:

The main aim of this paper is to handle the energy requirement in an efficient manner by merging advanced digital communication and control technologies for smart grid applications. In order to reduce the user's home load during peak load hours, utilities apply several incentives, such as real-time pricing, time of use and demand response, to residential customers through smart meters. However, this approach is inconvenient in the sense that the user needs to respond manually to prices that vary in real time. To overcome this inconvenience, this paper proposes a convolutional neural network (CNN) with a k-means clustering machine learning model that has the ability to forecast the energy requirement in the short term, i.e., the hour of the day or the day of the week. Integrating the proposed technique with home energy management based on Bluetooth Low Energy provides predicted values to the user for scheduling appliances in advance. This paper describes in detail the CNN configuration and the k-means clustering algorithm for short-term energy forecasting.
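A minimal sketch of the two-stage idea (cluster daily load profiles with k-means, then feed a small 1-D CNN that also sees the cluster label) is shown below in Python with scikit-learn and PyTorch. The synthetic load data, network size and training loop are assumptions for illustration, not the configuration proposed in the paper.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Synthetic hourly household load: 200 days x 24 hours
load = np.abs(rng.normal(1.0, 0.3, (200, 24)) + np.sin(np.linspace(0, 2 * np.pi, 24)))

# Step 1: k-means groups days into typical consumption profiles
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(load)

# Step 2: a small 1-D CNN maps the past 24 hours (plus cluster id) to the next value
class LoadCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(8, 8, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.head = nn.Linear(8 * 24 + 1, 1)   # +1 for the cluster label

    def forward(self, x, cluster_id):
        h = self.conv(x).flatten(1)
        return self.head(torch.cat([h, cluster_id], dim=1))

model = LoadCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.tensor(load[:-1], dtype=torch.float32).unsqueeze(1)      # (199, 1, 24)
c = torch.tensor(clusters[:-1], dtype=torch.float32).unsqueeze(1)  # (199, 1)
y = torch.tensor(load[1:, 0], dtype=torch.float32).unsqueeze(1)    # next day's first hour

for _ in range(50):  # short illustrative training loop
    pred = model(x, c)
    loss = nn.functional.mse_loss(pred, y)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
print("training MSE:", loss.item())
```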

Keywords: convolutional neural network, fuzzy logic, k-means clustering approach, smart home energy management

Procedia PDF Downloads 281
2123 A Methodology of Using Fuzzy Logics and Data Analytics to Estimate the Life Cycle Indicators of Solar Photovoltaics

Authors: Thor Alexis Sazon, Alexander Guzman-Urbina, Yasuhiro Fukushima

Abstract:

This study outlines a method for developing a surrogate life cycle model based on fuzzy logic, using three fuzzy inference methods: (1) the conventional Fuzzy Inference System (FIS), (2) the hybrid system of Data Analytics and Fuzzy Inference (DAFIS), which uses data clustering for defining the membership functions, and (3) the Adaptive-Neuro Fuzzy Inference System (ANFIS), a combination of fuzzy inference and artificial neural network. These methods were demonstrated with a case study where the Global Warming Potential (GWP) and the Levelized Cost of Energy (LCOE) of solar photovoltaic (PV) were estimated using Solar Irradiation, Module Efficiency, and Performance Ratio as inputs. The effects of using different fuzzy inference types, either Sugeno- or Mamdani-type, and of changing the number of input membership functions on the error between the calibration data and the model-generated outputs were also illustrated. The solution spaces of the three methods were consequently examined with a sensitivity analysis. ANFIS exhibited the lowest error, while DAFIS gave slightly lower errors compared to FIS. Increasing the number of input membership functions helped with error reduction in some cases but, at times, resulted in the opposite. Sugeno-type models gave errors that are slightly lower than those of the Mamdani-type. While ANFIS is superior in terms of error minimization, it could generate solutions that are questionable, e.g. the negative GWP values of the Solar PV system when the inputs were all at the upper end of their range. This shows that the applicability of the ANFIS models highly depends on the range of cases at which it was calibrated. FIS and DAFIS generated more intuitive trends in the sensitivity runs. DAFIS demonstrated an optimal design point wherein increasing the input values does not improve the GWP and LCOE anymore. In the absence of data that could be used for calibration, conventional FIS presents a knowledge-based model that could be used for prediction. In the PV case study, conventional FIS generated errors that are just slightly higher than those of DAFIS. The inherent complexity of a life cycle study often hinders its widespread use in the industry and policy-making sectors. While the methodology does not guarantee a more accurate result compared to those generated by the Life Cycle Methodology, it does provide a relatively simpler way of generating knowledge- and data-based estimates that could be used during the initial design of a system.
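To give a feel for the conventional Mamdani-type FIS referred to above, the following hand-rolled Python sketch evaluates two rules with triangular membership functions and centroid defuzzification for a single input. The membership breakpoints, rules and output universe are invented for illustration and bear no relation to the calibrated PV model.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b."""
    return np.clip(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0)

# Input: solar irradiation (kWh/m2/yr); output: a generic life-cycle indicator score
irradiation = 1600.0
low_in  = tri(irradiation, 800, 1000, 1400)
high_in = tri(irradiation, 1200, 1800, 2200)

y = np.linspace(0, 100, 501)        # output universe (arbitrary units)
high_out = tri(y, 50, 80, 100)      # "high impact" output set
low_out  = tri(y, 0, 20, 50)        # "low impact" output set

# Two Mamdani rules: low irradiation -> high impact; high irradiation -> low impact
clipped = np.maximum(np.minimum(low_in, high_out), np.minimum(high_in, low_out))

# Centroid defuzzification gives the crisp estimate
crisp = np.sum(y * clipped) / np.sum(clipped)
print(f"estimated indicator: {crisp:.1f}")
```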

Keywords: solar photovoltaic, fuzzy logic, inference system, artificial neural networks

Procedia PDF Downloads 131
2122 Data Quality as a Pillar of Data-Driven Organizations: Exploring the Benefits of Data Mesh

Authors: Marc Bachelet, Abhijit Kumar Chatterjee, José Manuel Avila

Abstract:

Data quality is a key component of any data-driven organization. Without data quality, organizations cannot effectively make data-driven decisions, which often leads to poor business performance. Therefore, it is important for an organization to ensure that the data they use is of high quality. This is where the concept of data mesh comes in. Data mesh is an organizational and architectural decentralized approach to data management that can help organizations improve the quality of data. The concept of data mesh was first introduced in 2020. Its purpose is to decentralize data ownership, making it easier for domain experts to manage the data. This can help organizations improve data quality by reducing the reliance on centralized data teams and allowing domain experts to take charge of their data. This paper intends to discuss how a set of elements, including data mesh, are tools capable of increasing data quality. One of the key benefits of data mesh is improved metadata management. In a traditional data architecture, metadata management is typically centralized, which can lead to data silos and poor data quality. With data mesh, metadata is managed in a decentralized manner, ensuring accurate and up-to-date metadata, thereby improving data quality. Another benefit of data mesh is the clarification of roles and responsibilities. In a traditional data architecture, data teams are responsible for managing all aspects of data, which can lead to confusion and ambiguity in responsibilities. With data mesh, domain experts are responsible for managing their own data, which can help provide clarity in roles and responsibilities and improve data quality. Additionally, data mesh can also contribute to a new form of organization that is more agile and adaptable. By decentralizing data ownership, organizations can respond more quickly to changes in their business environment, which in turn can help improve overall performance by allowing better business insights through improved reports and visualization tools. Monitoring and analytics are also important aspects of data quality. With data mesh, monitoring and analytics are decentralized, allowing domain experts to monitor and analyze their own data. This helps in identifying and addressing data quality problems quickly, leading to improved data quality. Data culture is another major aspect of data quality. With data mesh, domain experts are encouraged to take ownership of their data, which can help create a data-driven culture within the organization. This can lead to improved data quality and better business outcomes. Finally, the paper explores the contribution of AI in the coming years. AI can help enhance data quality by automating many data-related tasks, such as data cleaning and data validation. By integrating AI into data mesh, organizations can further enhance the quality of their data. The concepts mentioned above are illustrated by feedback from AEKIDEN's experience. AEKIDEN is an international data-driven consultancy that has successfully implemented a data mesh approach. By sharing their experience, AEKIDEN can help other organizations understand the benefits and challenges of implementing data mesh and improving data quality.

Keywords: data culture, data-driven organization, data mesh, data quality for business success

Procedia PDF Downloads 98
2121 Diversity and Equality in Four Finnish and Italian Energy Companies' Open Access Material

Authors: Elisa Bertagna

Abstract:

A frame analysis of the work done by various multinational energy companies concerning diversity issues and gender equality is presented. Documents from four multinational companies - two from Finland and two from Italy - have been studied. The array of companies’ documents includes data from their websites, policies and so on. The Finnish and Italian contexts have been chosen as a sample of Northern and Southern Europe, of the 'advanced' and the 'less advanced'. The aim of the analysis is to understand if and how human resource and diversity management in Finnish and Italian multinational energy companies communicate their activity towards employees. Attention is given to how employees react in their roles and to the consequences of this social positioning. The findings of this essay are crucially important. They show how the companies in question tend to focus HR and DM positive actions on female employees’ struggles, since the industry is characterized by multinationals with male-dominated workforces. In this way, other categories that are also depicted as sensitive, such as young and elderly people or foreigners, do not receive the same amount of attention. Consequently, power hierarchies can be found: 'women' as a social category are given more importance and space in the companies’ data than others. The present analysis therefore reflects on possible struggles that such companies might be facing concerning gender biases and further diversity issues.

Keywords: energy, diversity, gender, multinationals, power hierarchies

Procedia PDF Downloads 114
2120 Sustainable Engineering: Synergy of BIM and Environmental Assessment Tools in Hong Kong Construction Industry

Authors: Kwok Tak Kit

Abstract:

The construction industry plays an important role in environmental impact and carbon emissions, as it consumes a huge amount of natural resources and energy. Sustainable engineering involves the process of planning, design, procurement, construction and delivery, in which the whole building and construction process can be effectively and sustainably managed to optimise the use of natural resources. This includes implementing sustainable technology development and innovation, adopting advanced construction processes, and facilitating facilities management so that energy and waste control can be implemented more accurately and effectively. Studies and research on the relationship between BIM and environmental assessment tools lack a clear discussion. In this paper, we focus on the synergy of BIM technology and sustainable engineering in the AEC industry, outline the key factors that enhance the use of advanced innovation, technology and methods, and define the role of stakeholders in achieving zero carbon emissions in line with the Paris Agreement goal of limiting global warming to well below 2°C above pre-industrial levels. A case study of the adoption of Building Information Modeling (BIM) and environmental assessment tools in Hong Kong is discussed.

Keywords: sustainability, sustainable engineering, BIM, LEED

Procedia PDF Downloads 121
2119 Predicting the Lack of GDP Growth: A Logit Model for 40 Advanced and Developing Countries

Authors: Hamidou Diallo, Marianne Guille

Abstract:

This paper identifies leading triggers of deficient episodes in terms of GDP growth, based on a sample of countries at different stages of development over 1994-2017. Using logit models, we build early warning systems (EWS), and our results show important differences between developing countries (DCs) and advanced economies (AEs). For AEs, the main predictors of the probability of entering a GDP-growth-deficient episode are the deterioration of external imbalances and the vulnerability of the fiscal position, while DCs face different challenges that need to be considered. The key indicators for them are, first, a low ability to pay their debts and, second, whether or not they belong to a common currency area. We also build homogeneous pools of countries within AEs and DCs. The evolution of the proportion of AE countries in the riskiest pool is marked, first, by three distinct peaks just after the high-tech bubble burst, the global financial crisis and the European sovereign debt crisis, and second, by a very low minimum level in 2006 and 2007. In contrast, the situation of DCs is characterized first by the relative stability of this proportion and then by an upward trend from 2006, which can be explained by a more unfavorable socio-political environment leading to shortcomings in fiscal consolidation.
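A stripped-down version of such a logit early warning system is sketched below in Python with statsmodels: candidate indicators are regressed on a binary "growth-deficient episode" label and observations above a probability threshold are flagged. The panel is synthetic and the indicator names, coefficients and threshold are illustrative assumptions, not the paper's estimates.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 400
# Hypothetical country-year panel: candidate early-warning indicators
current_account = rng.normal(0, 3, n)    # external balance, % of GDP
fiscal_balance  = rng.normal(-2, 3, n)   # fiscal position, % of GDP
debt_service    = rng.normal(15, 8, n)   # debt service ratio, %

# Synthetic "growth-deficient episode" label, driven by the indicators
logit_true = -1.0 - 0.3 * current_account - 0.2 * fiscal_balance + 0.05 * debt_service
y = rng.binomial(1, 1 / (1 + np.exp(-logit_true)))

X = sm.add_constant(np.column_stack([current_account, fiscal_balance, debt_service]))
model = sm.Logit(y, X).fit(disp=False)
print(model.summary(xname=["const", "current_account", "fiscal_balance", "debt_service"]))

# Flag observations whose predicted probability exceeds a warning threshold
warnings = model.predict(X) > 0.5
print("share flagged by the EWS:", warnings.mean())
```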

Keywords: currency area, early warning system, external imbalances, fiscal vulnerability, GDP growth, public debt

Procedia PDF Downloads 95
2118 A Design Framework for an Open Market Platform of Enriched Card-Based Transactional Data for Big Data Analytics and Open Banking

Authors: Trevor Toy, Josef Langerman

Abstract:

Around a quarter of the world’s data is generated by the financial sector, with global non-cash transactions estimated to have reached 708.5 billion from 2018 onwards. With Open Banking still a rapidly developing concept within the financial industry, there is an opportunity to create a secure mechanism for connecting its stakeholders to openly, legitimately and consensually share the data required to enable it. Integration and sharing of anonymised transactional data are still operated in silos and centralised between the large corporate entities in the ecosystem that have the resources to do so; smaller fintechs generating data and businesses looking to consume data are largely excluded from the process. There is therefore a growing demand for accessible transactional data for analytical purposes and also to support the rapid global adoption of Open Banking. The following research provides a solution framework that aims to offer a secure decentralised marketplace for (1) data providers to list their transactional data, (2) data consumers to find and access that data, and (3) data subjects (the individuals making the transactions that generate the data) to manage and sell the data that relates to themselves. The platform also provides an integrated system for downstream transaction-related data from merchants, enriching the data product available to build a comprehensive view of a data subject’s spending habits. A robust and sustainable data market can be developed by providing a more accessible mechanism for data producers to monetise their data investments and by encouraging data subjects to share their data through the same financial incentives. At the centre of the platform is the market mechanism that connects the data providers and their data subjects to the data consumers. This core component of the platform is developed as a decentralised blockchain contract with a market layer that manages the transaction, user, pricing, payment, tagging, contract, control and lineage features that pertain to user interactions on the platform. One of the platform’s key features is enabling the participation and management of personal data by the individuals from whom the data is generated. A proof-of-concept was developed on the Ethereum blockchain in which an individual can securely manage access to their own personal data and that individual’s identifiable relationship to the card-based transaction data provided by financial institutions. This gives data consumers access to a complete view of transactional spending behaviour in correlation with key demographic information. This platform solution can ultimately support the growth, prosperity and development of economies, businesses, communities and individuals by providing accessible and relevant transactional data for big data analytics and open banking.
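The marketplace logic (providers list data, data subjects grant consent, consumers can only buy consented records) can be mocked off-chain in a few lines of Python, as below. This is a plain in-memory sketch of the flow, not the Ethereum contract itself; every class, identifier and price is hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Listing:
    provider: str
    description: str
    price: float
    consented_subjects: set = field(default_factory=set)

class DataMarket:
    """Off-chain mock of the marketplace logic: providers list data,
    data subjects grant consent, consumers may only buy consented data."""
    def __init__(self):
        self.listings = {}
        self.sales = []

    def list_data(self, listing_id, provider, description, price):
        self.listings[listing_id] = Listing(provider, description, price)

    def grant_consent(self, listing_id, subject_id):
        self.listings[listing_id].consented_subjects.add(subject_id)

    def purchase(self, listing_id, consumer, subject_id):
        listing = self.listings[listing_id]
        if subject_id not in listing.consented_subjects:
            raise PermissionError("data subject has not consented")
        self.sales.append((listing_id, consumer, subject_id, listing.price))
        return f"{consumer} bought subject {subject_id}'s records for {listing.price}"

market = DataMarket()
market.list_data("cards-2023", "BankA", "anonymised card transactions", 10.0)
market.grant_consent("cards-2023", "subject-42")
print(market.purchase("cards-2023", "fintech-X", "subject-42"))
```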

Keywords: big data markets, open banking, blockchain, personal data management

Procedia PDF Downloads 48
2117 Comparison of 18F-FDG and 11C-Methionine PET-CT for Assessment of Response to Neoadjuvant Chemotherapy in Locally Advanced Breast Carcinoma

Authors: Sonia Mahajan Dinesh, Anant Dinesh, Madhavi Tripathi, Vinod Kumar Ramteke, Rajnish Sharma, Anupam Mondal

Abstract:

Background: Neo-adjuvant chemotherapy plays an important role in the treatment of breast cancer by decreasing the tumour load, and it offers an opportunity to evaluate the response of the primary tumour to chemotherapy. Standard anatomical imaging modalities are unable to accurately reflect the response to chemotherapy until several cycles of drug treatment have been completed. Metabolic imaging using tracers like 18F-fluorodeoxyglucose (FDG) as a marker of glucose metabolism, or amino acid tracers like L-methyl-11C methionine (MET), has a potential role in the measurement of treatment response. In this study, our objective was to compare these two PET tracers for assessment of response to neoadjuvant chemotherapy in locally advanced breast carcinoma. Methods: In our prospective study, 20 female patients with histologically proven locally advanced breast carcinoma underwent PET-CT imaging using FDG and MET before and after three cycles of neoadjuvant chemotherapy (CAF regimen). Thereafter, all patients were taken for MRM and the resected specimen was sent for histopathological analysis. Tumour response to the neoadjuvant chemotherapy was evaluated by PET-CT imaging using PERCIST criteria and correlated with histological results. The calculated responses were compared for statistical significance using a paired t-test. Results: Mean SUVmax for the primary lesion on FDG PET and MET PET was 15.88±11.12 and 5.01±2.14 respectively (p<0.001), and for axillary lymph nodes was 7.61±7.31 and 2.75±2.27 respectively (p=0.001). A statistically significant response in the primary tumour and axilla was noted on both FDG and MET PET after three cycles of NAC. A complete response in the primary tumour was seen in only 1 patient on FDG and 7 patients on MET PET (p=0.001), whereas there was no histological complete resolution of the tumour in any patient. The response to therapy in axillary nodes noted on both PET scans was similar (p=0.45) and correlated well with histological findings. Conclusions: For the primary breast tumour, FDG PET has higher sensitivity and accuracy than MET PET, and for the axilla both have comparable sensitivity and specificity. FDG PET shows higher target-to-background ratios, so response is better predicted for the primary breast tumour and axilla. Also, FDG-PET is widely available and has the advantage of a whole-body evaluation in one study.
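The statistical comparison boils down to a paired t-test on per-patient response measures from the two tracers. The Python sketch below runs such a test on synthetic SUVmax values roughly scaled to the means reported above; the simulated numbers are illustrative only and do not reproduce the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_patients = 20

# Hypothetical SUVmax of the primary tumour before and after three NAC cycles
fdg_pre, fdg_post = rng.normal(15.9, 5, n_patients), rng.normal(6.0, 3, n_patients)
met_pre, met_post = rng.normal(5.0, 1.5, n_patients), rng.normal(2.5, 1.0, n_patients)

# PERCIST-style percentage change in SUV for each tracer
delta_fdg = 100 * (fdg_post - fdg_pre) / fdg_pre
delta_met = 100 * (met_post - met_pre) / met_pre

# Paired t-test: does the measured response differ between the two tracers?
t_stat, p_value = stats.ttest_rel(delta_fdg, delta_met)
print(f"paired t-test on % SUV change: t = {t_stat:.2f}, p = {p_value:.3f}")
```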

Keywords: 11C-methionine, 18F-FDG, breast carcinoma, neoadjuvant chemotherapy

Procedia PDF Downloads 480
2116 Analyzing Large Scale Recurrent Event Data with a Divide-And-Conquer Approach

Authors: Jerry Q. Cheng

Abstract:

Currently, in analyzing large-scale recurrent event data, there are many challenges such as memory limitations, unscalable computing time, etc. In this research, a divide-and-conquer method is proposed using parametric frailty models. Specifically, the data is randomly divided into many subsets, and the maximum likelihood estimator from each individual data set is obtained. Then a weighted method is proposed to combine these individual estimators as the final estimator. It is shown that this divide-and-conquer estimator is asymptotically equivalent to the estimator based on the full data. Simulation studies are conducted to demonstrate the performance of this proposed method. This approach is applied to a large real dataset of repeated heart failure hospitalizations.
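The divide-and-conquer recipe (split the data, fit each piece, then combine the subset estimators with weights) can be illustrated with a deliberately simple stand-in model. The Python sketch below uses an exponential-rate MLE with inverse-variance weighting instead of the paper's parametric frailty models; the data, number of splits and weighting choice are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(11)

# Full data: exponentially distributed gap times; the rate is the parameter of interest
full_data = rng.exponential(scale=1 / 2.0, size=1_000_000)   # true rate = 2.0

def mle_and_variance(x):
    """MLE of an exponential rate and the (approximate) variance of that estimate."""
    rate = 1.0 / x.mean()
    return rate, rate ** 2 / len(x)

# Divide: split randomly into K subsets and estimate on each piece
K = 20
estimates = [mle_and_variance(chunk)
             for chunk in np.array_split(rng.permutation(full_data), K)]

# Conquer: inverse-variance weighted combination of the subset estimators
rates = np.array([e[0] for e in estimates])
weights = 1.0 / np.array([e[1] for e in estimates])
combined = np.sum(weights * rates) / np.sum(weights)

print("combined estimate:", round(combined, 4),
      "| full-data MLE:", round(1 / full_data.mean(), 4))
```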

Keywords: big data analytics, divide-and-conquer, recurrent event data, statistical computing

Procedia PDF Downloads 137
2115 Application of Intelligent City and Hierarchy Intelligent Buildings in Kuala Lumpur

Authors: Jalalludin Abdul Malek, Zurinah Tahir

Abstract:

When the Multimedia Super Corridor (MSC) was launched in 1995, it became the catalyst for the implementation of the intelligent city concept in an area of about 15 x 50 kilometres covering Kuala Lumpur City Centre (KLCC), Putrajaya and Kuala Lumpur International Airport (KLIA). The intelligent city concept means that the city has advanced infrastructure and infostructure, such as information technology, advanced telecommunication systems, electronic technology and mechanical technology, to be utilized for the development of urban elements such as industries, health, services, transportation and communications. For example, the Golden Triangle of Kuala Lumpur has many intelligent buildings developed by the private sector, such as the KLCC Tower, to implement the intelligent city concept. Consequently, the intelligent buildings in the Golden Triangle can be linked directly to the Putrajaya and Cyberjaya intelligent cities within the confines of the MSC. However, the reality is that not many buildings within the Golden Triangle of Kuala Lumpur can be considered high-standard intelligent buildings as defined by the building Intelligence Quotient (IQ) standard. This increases the need to implement the real ‘intelligent city’ concept. This paper aims to show the strengths and weaknesses of the intelligent buildings in the Golden Triangle by taking into account aspects of 'intelligence' in the areas of building technology and infrastructure.

Keywords: intelligent city concepts, intelligent building, Golden Triangle, Kuala Lumpur

Procedia PDF Downloads 262
2114 Inferring Cognitive Skill in Concept Space

Authors: Rania A. Aboalela, Javed I. Khan

Abstract:

This research presents a learning assessment theory of Cognitive Skill in Concept Space (CS2) to measure assessed knowledge in terms of the cognitive skill levels of concepts. The cognitive skill levels refer to whether a student has acquired a concept at the level of understanding, applying, analyzing, and so on. The theory comprises three constructs: a graph paradigm of a semantic/ontological scheme, the concept states of the theory, and the assessment analytics, which is the process of estimating the sets of concept states at a certain skill level. A concept state indicates whether a student has already learned, is ready to learn, or is not ready to learn a concept at a certain skill level. An experiment is conducted to validate the CS2 theory.
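The concept-state idea lends itself to a tiny prerequisite-graph sketch in Python: a concept is "learned" if already demonstrated, "ready to learn" if all its prerequisites are learned, and "not ready to learn" otherwise. The toy concept graph and the learned set below are invented for illustration and are not part of the CS2 theory's ontology.

```python
# Toy concept graph: each concept lists its prerequisite concepts
prerequisites = {
    "variables": [],
    "loops": ["variables"],
    "functions": ["variables"],
    "recursion": ["functions", "loops"],
}

# Concepts the student has demonstrated at the required cognitive skill level
learned = {"variables", "loops"}

def concept_state(concept):
    """Classify a concept as learned, ready-to-learn, or not-ready-to-learn."""
    if concept in learned:
        return "learned"
    if all(p in learned for p in prerequisites[concept]):
        return "ready to learn"
    return "not ready to learn"

for c in prerequisites:
    print(f"{c}: {concept_state(c)}")
```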

Keywords: cognitive skill levels, concept states, concept space, knowledge assessment theory

Procedia PDF Downloads 289
2113 A Case Study on the Impact of Technology Readiness in a Department of Clinical Nurses

Authors: Julie Delany

Abstract:

To thrive in today’s digital climate, it is vital that organisations adopt new technology and prepare for rising digital trends. This proves more difficult in government where, traditionally, people lack change readiness. While individuals may have a desire to work smarter, this does not necessarily mean embracing technology. This paper discusses the rollout of an application into a small department of highly experienced nurses. The goal was to both streamline the department's workflow and provide a platform for gathering essential business metrics. The biggest challenges were adoption and motivating the nurses to change their routines and learn new computer skills. Two-thirds struggled with the change, and as a result, some jeopardised the validity of the business metrics. In conclusion, there are lessons learned and recommendations for similar projects.

Keywords: change ready, information technology, end-user, iterative method, rollout plan, data analytics

Procedia PDF Downloads 110
2112 Computer Aided Analysis of Breast Based Diagnostic Problems from Mammograms Using Image Processing and Deep Learning Methods

Authors: Ali Berkan Ural

Abstract:

This paper presents the analysis, evaluation, and pre-diagnosis of early-stage breast-based diagnostic problems (breast cancer, nodules or lumps) by a Computer-Aided Diagnosis (CAD) system from mammogram radiological images. According to the statistics, the time factor is crucial for discovering the disease in the patient (especially in women) as early and as fast as possible. In this study, a new algorithm is developed using advanced image processing and deep learning methods to detect and classify the problem at an early stage with more accuracy. The system first applies image processing methods (image acquisition, noise removal, region growing segmentation, morphological operations, breast border extraction, advanced segmentation, obtaining regions of interest (ROIs), etc.) to segment the area of interest of the breast, and then analyzes these partially obtained areas for cancer/lump detection in order to diagnose the disease. After segmentation, using the spectrogram images, five different deep learning-based methods (the Convolutional Neural Network (CNN)-based AlexNet, ResNet50, VGG16, DenseNet, and Xception) are applied to classify the breast-based problems.
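As an illustration of the region-growing segmentation step listed above, the Python sketch below grows a 4-connected region from a seed pixel on a synthetic image, keeping pixels whose intensity stays close to the seed value. The synthetic "mammogram", seed location and tolerance are assumptions for demonstration, not the paper's actual pipeline.

```python
import numpy as np
from collections import deque

def region_grow(image, seed, tolerance=0.1):
    """Simple region growing: collect 4-connected pixels whose intensity stays
    within `tolerance` of the seed value (a stand-in for the ROI extraction step)."""
    rows, cols = image.shape
    seed_value = image[seed]
    region = np.zeros_like(image, dtype=bool)
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        if region[r, c]:
            continue
        region[r, c] = True
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols and not region[nr, nc]:
                if abs(image[nr, nc] - seed_value) <= tolerance:
                    queue.append((nr, nc))
    return region

rng = np.random.default_rng(5)
mammogram = rng.random((128, 128)) * 0.3      # synthetic background tissue
mammogram[40:60, 50:70] += 0.6                # bright synthetic lesion
roi = region_grow(mammogram, seed=(50, 60), tolerance=0.2)
print("segmented ROI pixels:", roi.sum())
```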

Keywords: computer aided diagnosis, breast cancer, region growing, segmentation, deep learning

Procedia PDF Downloads 65