Search results for: powder processing
2981 Integrated Life Skill Training and Executive Function Strategies in Children with Autism Spectrum Disorder in Qatar: A Study Protocol for a Randomized Controlled Trial
Authors: Bara M Yousef, Naresh B Raj, Nadiah W Arfah, Brightlin N Dhas
Abstract:
Background: Executive function (EF) impairment is common in children with autism spectrum disorder (ASD). EF strategies are considered effective in improving the therapeutic outcomes of children with ASD. Aims: This study primarily aims to explore whether integrating EF strategies with regular occupational therapy intervention is more effective in improving daily life skills (DLS) and sensory integration/processing (SI/SP) skills than regular occupational therapy alone in children with ASD; it secondarily aims to assess treatment outcomes for visual motor integration (VMI) skills. Procedures: A total of 92 children with ASD will be recruited and, following baseline assessments, randomly assigned to either the treatment group (45-min once-weekly individual occupational therapy plus EF strategies) or the control group (45-min once-weekly individual therapy sessions alone). Results and Outcomes: All children will be evaluated systematically by assessing SI/SP, DLS, and VMI skills at baseline and after 7 and 14 weeks of treatment. Data will be analyzed using ANCOVA and t-tests. Conclusions and Implications: This single-blind, randomized controlled trial will provide empirical evidence for the effectiveness of EF strategies when combined with regular occupational therapy programs. Based on the trial results, EF strategies could be recommended in multidisciplinary programs for children with ASD. Trial Registration: The trial has been registered at ClinicalTrials.gov (protocol ID: MRC-01-22-509; identifier: NCT05829577), registered 25 April 2023.
Keywords: autism spectrum disorder, executive function strategies, daily life skills, sensory integration/processing, visual motor integration, occupational therapy, effectiveness
Procedia PDF Downloads 122
2980 Entrepreneurial Orientation and Business Performance: The Case of Micro Scale Food Processors Operating in a War-Recovery Environment
Authors: V. Suganya, V. Balasuriya
Abstract:
The functioning of Micro and Small Scale (MSS) businesses in the northern part of Sri Lanka was undermined by three decades of internal conflict, and the subsequent post-war economic opening has created new market prospects for MSS businesses. MSS businesses survive and operate with limited resources and struggle to access finance, raw materials, markets, and technology. This study attempts to identify how entrepreneurial orientation is put into practice by business operators to overcome these challenges. Business operators in the traditional food processing sector were chosen for this study, as this sub-sector of the food industry is developing at a rapid pace. A literature review was conducted to establish the concept of entrepreneurial orientation, the definition of MSS businesses, and the manner in which business performance is measured. Data were collected through direct interviews supported by a structured questionnaire from 80 respondents selected using a fixed-interval random sampling technique. This study reveals that more than half of the business operators commenced their ventures after identifying a market opportunity. On a scale of 1 to 5, 41 per cent of the business operators are highly entrepreneurially oriented. Entrepreneurial orientation shows a significant relationship with, and is strongly correlated with, business performance. Pro-activeness, innovativeness, and competitive aggressiveness show a significant relationship with business performance, while risk-taking is negatively related and autonomy is not significantly related to business performance. It is evident that entrepreneurially oriented business practices contribute to better business performance, even though 70 per cent of operators prefer the ideas/views of the support agencies over those of other stakeholders when making business decisions.
It is recommended that appropriate training be introduced to develop entrepreneurial skills, focused on improving business networks so that new business opportunities and innovative business practices are identified.
Keywords: Micro and Small Scale (MSS) businesses, entrepreneurial orientation (EO), food processing, business operators
Procedia PDF Downloads 495
2979 VIAN-DH: Computational Multimodal Conversation Analysis Software and Infrastructure
Authors: Teodora Vukovic, Christoph Hottiger, Noah Bubenhofer
Abstract:
The development of VIAN-DH aims at bridging two linguistic approaches: conversation analysis/interactional linguistics (IL), so far a dominantly qualitative field, and computational/corpus linguistics with its quantitative and automated methods. Contemporary IL investigates the systematic organization of conversations and interactions composed of speech, gaze, gestures, and body positioning, among others. This highly integrated multimodal behaviour is analysed based on video data, aimed at uncovering so-called “multimodal gestalts”: patterns of linguistic and embodied conduct that recur in specific sequential positions and are employed for specific purposes. Multimodal analyses (and other disciplines using video) have so far depended on time- and resource-intensive manual transcription of each component of the video materials. Automating these tasks requires advanced programming skills, which are often beyond the scope of IL. Moreover, the use of different tools makes the integration and analysis of different formats challenging. Consequently, IL research often deals with relatively small samples of annotated data, which are suitable for qualitative analysis but not sufficient for making generalized empirical claims derived quantitatively. VIAN-DH aims to create a workspace where the many annotation layers required for the multimodal analysis of videos can be created, processed, and correlated in one platform. VIAN-DH will provide a graphical interface that operates state-of-the-art tools for automating parts of the data processing. The integration of tools that already exist in computational linguistics and computer vision facilitates data processing for researchers lacking programming skills, speeds up the overall research process, and enables the processing of large amounts of data.
The main features to be introduced are automatic speech recognition for the transcription of language, automatic image recognition for the extraction of gestures and other visual cues, and grammatical annotation for adding morphological and syntactic information to the verbal content. In the current instance of VIAN-DH, we focus on gesture extraction (pointing gestures in particular), making use of existing models created for sign language and adapting them for this specific purpose. To allow the data to be viewed and searched, VIAN-DH will provide a unified format, enable the import of the main existing formats of annotated video data and export to other formats used in the field, and integrate different data-source formats so that they can be combined in research. VIAN-DH will adapt querying methods from corpus linguistics to enable parallel search across many annotation levels, combining token-level and chronological search for various types of data. VIAN-DH strives to bring crucial and potentially revolutionary innovation to the field of IL (one that can also extend to other fields using video materials). It will allow large amounts of data to be processed automatically and quantitative analyses to be implemented alongside the qualitative approach. It will facilitate the investigation of correlations between linguistic patterns (lexical or grammatical) and conversational aspects (turn-taking or gestures). Users will be able to automatically transcribe and annotate visual, spoken, and grammatical information from videos, correlate those different levels, and perform queries and analyses.
Keywords: multimodal analysis, corpus linguistics, computational linguistics, image recognition, speech recognition
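The parallel, time-aligned querying described above can be sketched with a toy example. The data model below (annotation layers as lists of `(start, end, label)` spans in seconds, and an `overlapping` helper) is our own illustrative assumption, not VIAN-DH's actual format or API:

```python
# Hypothetical sketch of chronological, cross-layer querying: each annotation
# layer is a list of (start, end, label) spans, and a query returns pairs of
# annotations from two layers whose time spans overlap.
def overlapping(layer_a, layer_b):
    """Pairs of annotations from two layers whose time spans overlap."""
    return [
        (a, b)
        for a in layer_a
        for b in layer_b
        if a[0] < b[1] and b[0] < a[1]  # standard interval-overlap test
    ]

speech = [(0.0, 0.8, "over"), (0.9, 1.4, "there")]
gesture = [(0.7, 1.6, "pointing")]
print(overlapping(speech, gesture))
# both spoken tokens co-occur in time with the pointing gesture
```

A real implementation would index the spans (e.g. an interval tree) rather than compare all pairs, but the interval-overlap predicate is the core of combining token-level and chronological search.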
Procedia PDF Downloads 108
2978 Effect of Peganum harmala Seeds on Blood Factors, Immune Response and Intestinal Selected Bacterial Population in Broiler Chickens
Authors: Majid Goudarzi
Abstract:
This experiment was designed to study the effects of feeding different levels of Peganum harmala seeds (PHS) and an antibiotic on serum biochemical parameters, immune response, and intestinal microflora composition in Ross broiler chickens. A total of 240 one-day-old unsexed broiler chickens were randomly allocated to four treatment groups, each with four replicate pens of 15 chicks. The dietary treatments were a control (C) without PHS or antibiotic, a diet containing 300 mg/kg Lincomycin 0.88% (A), and diets containing 2 g/kg (H1) or 4 g/kg (H2) PHS. The chicks were raised on floor pens and received diets and water ad libitum for six weeks. Blood samples were taken to determine the antibody titer against Newcastle disease on days 14 and 21 and the biochemical parameters on day 42 of age. The populations of Lactobacilli spp. and Escherichia coli in the ileum were enumerated by conventional microbiological techniques using selective agar media. Inclusion of PHS in the diet resulted in a significant decrease in total cholesterol and a significant increase in HDL relative to the control and antibiotic groups. The antibody titer against NDV was not affected by the experimental treatments. The E. coli population in birds supplemented with antibiotic or PHS was significantly lower than in the control, but the Lactobacilli spp. population was increased only by the antibiotic, not by PHS. In conclusion, the results of this study showed that the addition of PHS powder seems to have a positive influence on some biochemical parameters and the gastrointestinal microflora.
Keywords: antibiotic, biochemical parameters, immune system, Peganum harmala
Procedia PDF Downloads 362
2977 Examining the Influence of Firm Internal Level Factors on Performance Variations among Micro and Small Enterprises: Evidence from Tanzanian Agri-Food Processing Firms
Authors: Pulkeria Pascoe, Hawa P. Tundui, Marcia Dutra de Barcellos, Hans de Steur, Xavier Gellynck
Abstract:
A majority of Micro and Small Enterprises (MSEs) experience low or no growth. Understanding of their performance remains incomplete and disjointed, as there is no consensus on the factors influencing it, especially in developing countries. Using the Resource-Based View (RBV) as the theoretical background, this cross-sectional study employed four regression models to examine the influence of firm-level factors (firm-specific characteristics, firm resources, manager socio-demographic characteristics, and selected management practices) on the overall performance variations among 442 Tanzanian micro and small agri-food processing firms. The results confirmed the RBV argument that intangible resources make a larger contribution to overall performance variations among firms than tangible resources do. Firms' tangible and intangible resources explained 34.5% of overall performance variations (intangible resources explained 19.4% of the variability, compared to 15.1% for tangible resources), ranking first in explaining the overall performance variance. Firm-specific characteristics ranked second, influencing variations in overall performance by 29.0%. Selected management practices ranked third (6.3%), while the manager's socio-demographic factors were last, influencing the overall performance variability among firms by only 5.1%. The study also found that firms that focus on the proper utilization of tangible resources (financial and physical), set targets, and undertake better working-capital management practices performed better than their counterparts (low and average performers).
Furthermore, the accumulation and proper utilization of intangible resources (relational, organizational, and reputational), the undertaking of performance-monitoring practices, the age of the manager, and the choice of firm location and activity were the dominant significant factors influencing the variations among average and high performers relative to low performers. Entrepreneurial background was a significant factor influencing variations between average and low-performing firms, indicating that entrepreneurial skills are crucial to achieving average levels of performance. Firm age, size, legal status, source of start-up capital, and the gender, education level, and total business experience of the manager were not statistically significant variables influencing the overall performance variations among the agri-food processors under study. The study has identified both significant and non-significant factors influencing performance variations among low-, average-, and high-performing micro and small agri-food processing firms in Tanzania. Results from this study will therefore help managers, policymakers, and researchers to identify areas where more attention should be placed in order to improve the overall performance of MSEs in the agri-food industry.
Keywords: firm-level factors, micro and small enterprises, performance, regression analysis, resource-based view
Procedia PDF Downloads 86
2976 Processing Big Data: An Approach Using Feature Selection
Authors: Nikat Parveen, M. Ananthi
Abstract:
Big data is an emerging technology in which data are collected from various sensors and used in many fields. Data retrieval is a major issue, as the exact data required for a task must be extracted. In this paper, a large data set is processed using feature selection. Feature selection helps to choose the data that are actually needed to process and execute the task. The key value is what points to the exact data available in the storage space. Here, the available data is streamed, and R-Center is proposed to achieve this task.
Keywords: big data, key value, feature selection, retrieval, performance
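The abstract does not specify which feature-selection criterion is used, so the sketch below uses a simple variance threshold as a stand-in: columns that barely vary carry little information for the downstream task and are dropped.

```python
# Hedged sketch of feature selection (our illustrative criterion, not the
# paper's method): keep only the feature columns whose variance exceeds a
# threshold, so near-constant columns are excluded from further processing.
def select_features(rows, min_variance=0.001):
    """Return indices of feature columns whose variance exceeds the threshold."""
    n = len(rows)
    selected = []
    for j in range(len(rows[0])):
        col = [row[j] for row in rows]
        mean = sum(col) / n
        var = sum((x - mean) ** 2 for x in col) / n
        if var > min_variance:
            selected.append(j)
    return selected

data = [
    [1.0, 5.0, 0.5],
    [1.0, 3.0, 0.6],
    [1.0, 4.0, 0.4],
]
print(select_features(data))  # → [1, 2]  (the constant column 0 is dropped)
```

In a streaming setting, the mean and variance would be maintained incrementally (e.g. with Welford's algorithm) rather than recomputed over the full column.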
Procedia PDF Downloads 341
2975 An Innovative High Energy Density Power Pack for Portable and Off-Grid Power Applications
Authors: Idit Avrahami, Alex Schechter, Lev Zakhvatkin
Abstract:
This research focuses on developing a compact, light Hydrogen Generator (HG) coupled with fuel cells (FC) to provide a High-Energy-Density Power-Pack (HEDPP) solution whose energy density is roughly 10 times that of Li-ion batteries. The HEDPP is designed for portable and off-grid power applications such as drones, UAVs, stationary off-grid power sources, unmanned marine vehicles, and more. The hydrogen is stored in the safest way, as a chemical powder at room temperature and ambient pressure, and is activated only when the power is on. Hydrogen generation is based on a stabilized chemical reaction of sodium borohydride (SBH) and water. The proposed solution enables a 'no-storage' hydrogen-based power pack: hydrogen is produced and consumed on the spot during operation, so there is no need for high-pressure hydrogen tanks, which are large, heavy, and unsafe. In addition to its high energy density, ease of use, and safety, the presented power pack has the significant advantage of versatility, with deployment possible in numerous applications and at numerous scales. This patented HG was demonstrated using several prototypes in our lab and proved to be feasible and highly efficient for several applications. For example, in applications where water is available (such as marine vehicles, water and sewage infrastructure, and stationary applications), the energy density of the suggested power pack may reach 2700-3000 Wh/kg, again more than 10 times higher than conventional lithium-ion batteries. In other applications (e.g., UAVs or small vehicles), the energy density may exceed 1000 Wh/kg.
Keywords: hydrogen energy, sodium borohydride, fixed-wing UAV, energy pack
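As a back-of-the-envelope check on the quoted energy-density figure, one can work through the stoichiometry of the textbook SBH hydrolysis reaction, NaBH4 + 2 H2O → NaBO2 + 4 H2. The 45% fuel-cell efficiency below is our assumption; the abstract does not state one:

```python
# Sanity check on the claimed ~2700-3000 Wh/kg for the water-available case,
# where only the NaBH4 powder counts toward the pack mass.
M_NABH4 = 22.990 + 10.811 + 4 * 1.008   # g/mol sodium borohydride
M_H2 = 2 * 1.008                         # g/mol hydrogen
H2_LHV_WH_PER_KG = 33_300                # lower heating value of H2, Wh/kg
FC_EFFICIENCY = 0.45                     # assumed fuel-cell efficiency

# 4 mol H2 released per mol NaBH4 (water supplied externally)
h2_per_kg_sbh = 4 * M_H2 / M_NABH4       # kg H2 per kg NaBH4
wh_per_kg_sbh = h2_per_kg_sbh * H2_LHV_WH_PER_KG * FC_EFFICIENCY
print(f"{h2_per_kg_sbh:.3f} kg H2 per kg NaBH4 -> {wh_per_kg_sbh:.0f} Wh/kg")
```

For fuel-cell efficiencies in the 40-45% range this lands at roughly 2.8-3.2 kWh per kg of SBH, consistent with the 2700-3000 Wh/kg quoted for applications where water is available externally; counting the stoichiometric water in the mass budget roughly halves the figure.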
Procedia PDF Downloads 83
2974 Verbal Working Memory in Sequential and Simultaneous Bilinguals: An Exploratory Study
Authors: Archana Rao R., Deepak P., Chayashree P. D., Darshan H. S.
Abstract:
Cognitive abilities in bilinguals have been widely studied over the last few decades. Bilingualism has been found to extensively facilitate the ability to store and manipulate information in Working Memory (WM). The mechanism of WM includes primary memory, attentional control, and secondary memory, each of which contributes to WM. Much research has attempted to measure WM capabilities through both verbal (phonological) and nonverbal (visuospatial) tasks. Since there is much speculation regarding the relationship between WM and bilingualism, further investigation is required to understand the nature of WM in bilinguals, i.e., with respect to sequential and simultaneous bilinguals. Hence, the present study aimed to highlight the verbal working memory abilities of sequential and simultaneous bilinguals with respect to the processing and recall of nouns and verbs. Two groups of bilinguals aged between 18 and 30 years were considered for the study. Group 1 consisted of 20 (10 male and 10 female) sequential bilinguals who had acquired L1 (Kannada) before the age of 3 and had been exposed to L2 (English) for a period of 8-10 years. Group 2 consisted of 20 (10 male and 10 female) simultaneous bilinguals who had acquired both L1 and L2 before the age of 3. Working memory abilities were assessed using two tasks, with a set of stimuli presented in gradations of complexity; the stimuli included frequent and infrequent nouns and verbs. The tasks required the participants to judge the correctness of each sentence while remembering its last word, and participants were instructed to recall those words at the end of each set. The results indicated no significant difference between sequential and simultaneous bilinguals in processing nouns and verbs, which could be attributed to the participants' proficiency in L1 and to similar cognitive abilities across the groups.
Recall of nouns was better than that of verbs, possibly because of the more complex argument structure of verbs. The authors also found that the frequency of occurrence of nouns and verbs had an effect on WM abilities. Differences were also found across gradations of complexity, due to the load imposed on the central executive function and the phonological loop.
Keywords: bilinguals, nouns, verbs, working memory
Procedia PDF Downloads 129
2973 High Pressure Delignification Process for Nanocrystalline Cellulose Production from Agro-Waste Biomass
Authors: Sakinul Islam, Nhol Kao, Sati Bhattacharya, Rahul Gupta
Abstract:
Nanocrystalline cellulose (NCC) has been widely used for miscellaneous applications due to its properties, which are superior to those of other nanomaterials. However, the major problems associated with the production of NCC are long reaction times, low production rates, and inefficient processes. The mass production of NCC within a short period of time remains a great challenge. The main objective of this study is to produce NCC from rice husk agro-waste biomass using a high-pressure delignification process (HPDP), followed by bleaching and hydrolysis. The HPDP has not previously been explored for NCC production from rice husk biomass (RHB). To produce NCC, powdered rice husk (PRH) was placed in a stainless steel reactor at 80 ˚C under 5 bar. An aqueous solution of NaOH (4M) was used to dissolve lignin and other amorphous impurities from the PRH. After set treatment times (1 h, 3.5 h, and 6 h), bleaching and hydrolysis were carried out on the delignified samples, using NaOCl (20%) and H2SO4 (4M) solutions, respectively. The NCC suspension from hydrolysis was sonicated and neutralized with buffer solution for the various characterisations. Finally, the NCC suspension was dried and analyzed by FTIR, XRD, SEM, AFM, and TEM. The chemical compositions of the NCC and PRH were estimated by TAPPI (Technical Association of the Pulp and Paper Industry) standard methods to assess product purity. It was found that 6 h of HPDP was more efficient at producing good-quality NCC than 1 h or 3.5 h, which gave poor separation of non-cellulosic components from the RHB. The analyses indicated a crystallinity of 71% and particles 20-50 nm in diameter and 100-200 nm in length.
Keywords: nanocrystalline cellulose, NCC, high pressure delignification, bleaching, hydrolysis, agro-waste biomass
Procedia PDF Downloads 264
2972 Establishment of Precision System for Underground Facilities Based on 3D Absolute Positioning Technology
Authors: Yonggu Jang, Jisong Ryu, Woosik Lee
Abstract:
The study aims to address the limitations of existing underground facility exploration equipment in terms of exploration depth range, relative depth measurement, data-processing time, and human-centered interpretation of ground-penetrating radar (GPR) images. It proposes the use of 3D absolute positioning technology to establish a precision underground facility exploration system that can accurately survey to a depth of 5 m and measure the 3D absolute locations of underground facilities. The study developed both software and hardware technologies to build the system. The software technologies include absolute positioning, ground-surface location synchronization for the GPR exploration equipment, AI interpretation of GPR exploration images, and composite data processing based on integrated underground space maps. The hardware comprises a vehicle-type exploration system and a cart-type exploration system. Data were collected using the developed system, the GPR exploration images were analyzed using AI technology, and the three-dimensional location information of the surveyed underground facilities was compared against the integrated underground space map. The study thereby successfully developed a precision underground facility exploration system based on 3D absolute positioning technology that surveys accurately to a depth of 5 m.
The system comprises software technologies that build a precise 3D DEM, synchronize the GPR sensor's 3D ground-surface location coordinates, automatically analyze and detect underground facility information in GPR exploration images, and improve accuracy through comparative analysis of the three-dimensional location information, together with the vehicle-type and cart-type hardware systems. These findings and technological advancements are essential for underground safety management in Korea: the proposed system contributes significantly to establishing precise location information for underground facilities, and improves the accuracy and efficiency of exploration. In summary, the study addressed the limitations of existing equipment for exploring underground facilities, proposed a precision exploration system based on 3D absolute positioning technology, developed the software and hardware for that system, and contributed to underground safety management by providing precise location information for underground facilities to a depth of 5 m.
Keywords: 3D absolute positioning, AI interpretation of GPR exploration images, complex data processing, integrated underground space maps, precision exploration system for underground facilities
Procedia PDF Downloads 62
2971 A Generalized Framework for Adaptive Machine Learning Deployments in Algorithmic Trading
Authors: Robert Caulk
Abstract:
A generalized framework for adaptive machine learning deployments in algorithmic trading is introduced, tested, and released as open-source code. The presented software aims to test the hypothesis that recent data contain enough information to form a probabilistically favorable short-term price prediction. Further, the framework contains various adaptive machine learning techniques geared toward generating profit during strong trends and minimizing losses during trend changes. Results demonstrate that this adaptive machine learning approach is capable of capturing trends and generating profit. The presentation also discusses the importance of defining the parameter space associated with the dynamic training dataset and using that parameter space to identify and remove outliers from prediction data points. Meanwhile, the generalized architecture enables common users to exploit the powerful machinery while focusing on high-level feature engineering and model testing. The presentation also highlights common strengths and weaknesses associated with the presented technique and presents a broad range of well-tested starting points for feature-set construction, target setting, and statistical methods for enforcing risk management and maintaining probabilistically favorable entry and exit points. The presentation also describes the end-to-end data processing tools associated with FreqAI, including automatic data fetching, data aggregation, feature engineering, safe and robust data pre-processing, outlier detection, custom machine learning and statistical tools, data post-processing, adaptive-training backtest emulation, and deployment of adaptive training in live environments. Finally, the generalized user interface is also discussed. Feature engineering is simplified so that users can seed their feature sets with common indicator libraries (e.g., TA-Lib, pandas-ta).
The user also supplies data-expansion parameters to fill out a large feature set for the model, which can contain as many as 10,000+ features. The presentation describes the various object-oriented programming techniques employed to make FreqAI agnostic to third-party libraries and external data sources. In other words, the back-end is constructed in such a way that users can leverage a broad range of common regression libraries (CatBoost, LightGBM, scikit-learn, etc.) as well as common neural network libraries (TensorFlow, PyTorch) without worrying about the logistical complexities associated with data handling and API interactions. The presentation finishes by drawing conclusions about the most important parameters associated with a live deployment of the adaptive learning framework and provides a roadmap for future development in FreqAI.
Keywords: machine learning, market trend detection, open-source, adaptive learning, parameter space exploration
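The adaptive-retraining idea above can be caricatured in a few lines. This is a deliberately trivial stand-in and not FreqAI's actual code or API: the "model" is a rolling mean refit on the most recent window before every prediction, and a crude range check stands in for the parameter-space outlier removal the abstract describes:

```python
# Toy sketch of adaptive retraining with outlier rejection (illustrative only):
# refit on the latest `window` points, then refuse to predict when the newest
# observation falls outside the range of values seen during that training.
def adaptive_predict(series, window=3):
    preds = []
    for t in range(window, len(series)):
        train = series[t - window:t]          # "retrain" on recent data only
        model = sum(train) / len(train)       # trivial model: rolling mean
        x = series[t]                          # newest observation
        if min(train) <= x <= max(train):
            preds.append(model)
        else:
            preds.append(None)                 # outlier: outside training range
    return preds

prices = [10.0, 11.0, 10.5, 10.8, 25.0, 10.9]
print(adaptive_predict(prices))
# the 25.0 spike is flagged (None) rather than predicted on
```

A production system would use a real regressor, a multidimensional feature space, and a statistical outlier test (e.g. a distance in standardized feature space) rather than a one-dimensional range check, but the retrain-per-step structure is the same.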
Procedia PDF Downloads 89
2970 Solid-State Synthesis Approach and Optical Study of Red-Emitting Phosphors Li₃BaSrₓCa₁₋ₓEu₂.₇Gd₀.₃(MoO₄)₈ for White LEDs
Authors: Priyansha Sharma, Sibani Mund, Sivakumar Vaidyanathan
Abstract:
Solid-state synthesis was used to prepare pure red-emissive Li₃BaSrₓCa₁₋ₓEu₂.₇Gd₀.₃(MoO₄)₈ (x = 0.0 to 1.0) phosphors. XRD, SEM, and FTIR spectra were used to characterize the materials, and their optical properties were thoroughly investigated. PL was examined at different excitation wavelengths: 230 nm, 275 nm, 395 nm, and 465 nm. All the spectra show similar emissions, with the strongest transition at 616 nm due to the electric dipole (ED) transition. The phosphor Li₃BaSr₀.₂₅Ca₀.₇₅Eu₂.₇Gd₀.₃(MoO₄)₈ shows the highest intensity and was therefore chosen for the temperature-dependent and quantum yield studies. According to the PL investigation, the Eu³⁺-containing phosphor emits red light due to the ⁵D₀ → ⁷F₂ transition. The excitation analysis shows that all of the Eu³⁺-activated phosphors exhibited broad absorption due to the O²⁻-Mo⁶⁺ and O²⁻-Eu³⁺ charge-transfer bands, as well as narrow absorption bands related to the Eu³⁺ ion's 4f-4f electronic transitions; the charge-transfer band at 275 nm shows the highest intensity. The primary band in the spectra indicates that the Eu³⁺ ions occupy a non-centrosymmetric site in the lattice. All of the compositions have a monoclinic crystal structure with space group C2/c and match the reference powder patterns. The thermal stability of the Li₃BaSr₀.₂₅Ca₀.₇₅Eu₂.₇Gd₀.₃(MoO₄)₈ phosphor was investigated from 300 K to 500 K, as well as at low temperature (20 K to 275 K), with a view to its use in red and white LED fabrication. The decay lifetimes of all the phosphors were measured, and the best phosphor was used for white and red LED fabrication.
Keywords: PL, phosphor, quantum yield, white LED
Procedia PDF Downloads 74
2969 Arabic Light Word Analyser: Roles with Deep Learning Approach
Authors: Mohammed Abu Shquier
Abstract:
This paper introduces a word segmentation method using a novel BP-LSTM-CRF architecture for processing semantic output training. The objective of web morphological analysis tools is to link a formal morpho-syntactic description to a lemma, along with morpho-syntactic information, a vocalized form, a vocalized analysis with morpho-syntactic information, and a list of paradigms. A key objective is to continuously enhance the proposed system through an inductive learning approach that considers semantic influences. The system is currently under construction and development based on data-driven learning. To evaluate the tool, an experiment on homograph analysis was conducted. The tool also addresses the assumptions behind deep binary segmentation hypotheses, the arbitrary choice of trigram or n-gram continuation probabilities, language limitations, and the morphology of both Modern Standard Arabic (MSA) and Dialectal Arabic (DA), all of which justify updating this system. Most Arabic word analysis systems are based on phonotactic morpho-syntactic analysis of a word using lexical rules, as mainly used in MENA language technology tools, without taking into account contextual or semantic morphological implications. It is therefore necessary to have an automatic analysis tool that takes into account the word sense and not only the morpho-syntactic category. Moreover, such systems are also based on statistical/stochastic models; these models, such as HMMs, have shown their effectiveness in different NLP applications: part-of-speech tagging, machine translation, speech recognition, etc.
As an extension, we focus on language modeling using Recurrent Neural Networks (RNN). Given that morphological analysis coverage has been very low for dialectal Arabic, it is important to investigate in depth how dialect data influence the accuracy of these approaches by developing dialectal morphological processing tools, showing that handling dialectal variability can help improve analysis.
Keywords: NLP, DL, ML, analyser, MSA, RNN, CNN
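The CRF decoding step of such an LSTM-CRF segmenter can be illustrated with a toy Viterbi search over B (begin-word) / I (inside-word) tags. The emission and transition scores below are invented for illustration; in the real system they would come from the trained LSTM and CRF layers:

```python
# Toy Viterbi decoding over a B/I tag set, the inference step of a
# (BP-)LSTM-CRF segmenter. All scores here are made up for illustration.
TAGS = ["B", "I"]
TRANS = {("B", "B"): 0.2, ("B", "I"): 1.0,   # transition scores between tags
         ("I", "B"): 0.8, ("I", "I"): 0.5}

def viterbi(emissions):
    """emissions: one {tag: score} dict per character; returns the best tag path."""
    best = {t: (emissions[0][t], [t]) for t in TAGS}
    for em in emissions[1:]:
        new = {}
        for t in TAGS:
            score, path = max(
                (best[p][0] + TRANS[(p, t)] + em[t], best[p][1] + [t])
                for p in TAGS
            )
            new[t] = (score, path)
        best = new
    return max(best.values())[1]

# four characters forming two 2-character words should decode as B I B I
ems = [{"B": 2.0, "I": 0.1}, {"B": 0.3, "I": 1.5},
       {"B": 1.8, "I": 0.2}, {"B": 0.1, "I": 1.9}]
print(viterbi(ems))  # → ['B', 'I', 'B', 'I']
```

The dynamic program keeps, for each tag, the best-scoring path ending in that tag, so the search over all tag sequences is linear in sentence length rather than exponential.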
Procedia PDF Downloads 42
2968 Antioxidant Face Mask from Purple Sweet Potato (Ipomoea batatas) with Oleum Citrus
Authors: Lilis Kistriyani, Dine Olisvia, Lutfa Rahmawati
Abstract:
A facial mask is an important part of every beauty treatment because it gives a smooth and gentle effect on the face. This research aims to make an edible film to be applied as a face mask. The main ingredient of this edible film is purple sweet potato powder, with glycerol added as a plasticizer. Among the constituents of purple sweet potato are flavonoid compounds. The purpose of this study was to determine the effect of increasing the amount of glycerol on flavonoid release and on the physical and biological properties of the edible film produced. The stages of this research are the making of the edible film, followed by several analyses: UV-vis spectrophotometry to determine how much flavonoid can be released onto the facial skin, tensile strength and elongation at break analysis, biodegradability analysis, and microbiological analysis. The films varied in the volume of glycerol added: 1 ml, 2 ml, or 3 ml. The UV-vis spectrophotometry results showed that the highest flavonoid release concentration, 20.33 ppm, occurred in the 2 ml glycerol variation. The best tensile strength was 8.502 N, and the greatest elongation at break was 14%, both in the 1 ml glycerol variation. In the biodegradability test, the more glycerol added, the faster the edible film degraded. The microbiological analysis showed that purple sweet potato extract can inhibit the growth of Propionibacterium acnes, as seen from an inhibition zone of 18.9 mm.
Keywords: face mask, edible film, plasticizer, flavonoid
Procedia PDF Downloads 176
2967 Intelligent Process and Model Applied for E-Learning Systems
Authors: Mafawez Alharbi, Mahdi Jemmali
Abstract:
E-learning is a developing area, especially in education, and can provide several benefits to learners. An intelligent system that collects all components satisfying user preferences is therefore important. This research presents an approach capable of personalizing e-information and meeting users' needs according to their preferences. The proposal can build knowledge from successive evaluations made by the user and, in addition, can learn from the user's habits. Finally, we show a walk-through to demonstrate how the intelligent process works.
Keywords: artificial intelligence, architecture, e-learning, software engineering, processing
Procedia PDF Downloads 191
2966 Productive Performance of Lactating Sows Fed with Cull Chickpea
Authors: J. M. Uriarte, H. R. Guemez, J. A. Romo, R. Barajas, J. M. Romo
Abstract:
This research was carried out with the objective of assessing the productive performance of lactating sows fed diets containing cull chickpea instead of corn and soybean meal. Thirty-six (Landrace x Yorkshire) lactating sows were divided into three treatments with 12 sows per treatment. On day 107 of gestation, sows were moved into environmentally regulated farrowing crates (2.2 × 0.6 m) containing an area (2.2 × 0.5 m) for newborn pigs on each side; all diets were provided as a dry powder, and the sows had free access to water throughout the experimental period. After farrowing, the daily feed allowance increased gradually, and sows had ad libitum access to feed by day four. They were fed diets containing 0% (CONT), 15% (CHP15), or 30% (CHP30) cull chickpea for 28 days. The diets contained the same calculated levels of crude protein and metabolizable energy, and contained vitamins and minerals exceeding the National Research Council (1998) recommendations; sows were fed three times daily. On day 28, piglets were weaned, and the performance of lactating sows and nursery piglets was recorded. All data in this experiment were analyzed in accordance with a completely randomized design. Results indicated that the average daily feed intake of sows (5.61, 5.59, and 5.46 kg for CONT, CHP15, and CHP30, respectively) was not affected (P > 0.05) by dietary treatment. There was no difference (P > 0.05) in the average body weight of piglets on the day of birth (1.35, 1.30, and 1.32 kg) or on day 28 (7.10, 6.80, and 6.92 kg) between treatments. The number of weaned piglets (10.65 on average) was not affected by treatments. It is concluded that the use of cull chickpea at 30% of the diet does not affect the productive performance of lactating sows.
Keywords: cull chickpea, lactating sow, performance, pigs
Procedia PDF Downloads 142
2965 Integrating Natural Language Processing (NLP) and Machine Learning in Lung Cancer Diagnosis
Authors: Mehrnaz Mostafavi
Abstract:
The assessment and categorization of incidental lung nodules present a considerable challenge in healthcare, often necessitating resource-intensive multiple computed tomography (CT) scans for growth confirmation. This research addresses this issue by introducing a distinct computational approach leveraging radiomics and deep-learning methods. However, understanding local services is essential before implementing these advancements. With diverse tracking methods in place, there is a need for efficient and accurate identification approaches, especially in the context of managing lung nodules alongside pre-existing cancer scenarios. This study explores the integration of text-based algorithms in medical data curation, indicating their efficacy in conjunction with machine learning and deep-learning models for identifying lung nodules. Combining medical images with text data has demonstrated superior data retrieval compared to using each modality independently. While deep learning and text analysis show potential in detecting previously missed nodules, challenges persist, such as increased false positives. The presented research introduces a Structured-Query-Language (SQL) algorithm designed for identifying pulmonary nodules in a tertiary cancer center, externally validated at another hospital. Leveraging natural language processing (NLP) and machine learning, the algorithm categorizes lung nodule reports based on sentence features, aiming to facilitate research and assess clinical pathways. The hypothesis posits that the algorithm can accurately identify lung nodule CT scans and predict concerning nodule features using machine-learning classifiers. Through a retrospective observational study spanning a decade, CT scan reports were collected, and an algorithm was developed to extract and classify data. Results underscore the complexity of lung nodule cohorts in cancer centers, emphasizing the importance of careful evaluation before assuming a metastatic origin. 
The SQL and NLP algorithms demonstrated high accuracy in identifying lung nodule sentences, indicating potential for local service evaluation and research dataset creation. Machine-learning models exhibited strong accuracy in predicting concerning changes in lung nodule scan reports. While limitations include variability in disease group attribution, the potential for correlation rather than causality in clinical findings, and the need for further external validation, the algorithm's accuracy and potential to support clinical decision-making and healthcare automation represent a significant stride in lung nodule management and research.
Keywords: lung cancer diagnosis, structured-query-language (SQL), natural language processing (NLP), machine learning, CT scans
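As an illustration of sentence-level classification of radiology reports, the following minimal Python sketch labels report sentences with simple keyword patterns; the patterns, labels, and example sentences are hypothetical stand-ins, not the study's actual sentence features.

```python
import re

# Hypothetical patterns; the study's real sentence features are not public.
NODULE_PATTERN = re.compile(r"\b(nodule|nodular opacity|pulmonary mass)\b", re.IGNORECASE)
CONCERN_PATTERN = re.compile(r"\b(enlarg\w*|spiculat\w*|suspicious|growth)\b", re.IGNORECASE)

def classify_sentence(sentence: str) -> str:
    """Label a report sentence as 'concerning nodule', 'nodule', or 'other'."""
    if NODULE_PATTERN.search(sentence):
        if CONCERN_PATTERN.search(sentence):
            return "concerning nodule"
        return "nodule"
    return "other"

report = [
    "A 6 mm nodule is seen in the right upper lobe.",
    "The nodule has enlarged since the prior scan.",
    "The heart size is normal.",
]
labels = [classify_sentence(s) for s in report]
print(labels)  # -> ['nodule', 'concerning nodule', 'other']
```

In a real system these labels would become training targets or features for the machine-learning classifiers the abstract describes.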
Procedia PDF Downloads 101
2964 Multiscale Process Modeling Analysis for the Prediction of Composite Strength Allowables
Authors: Marianna Maiaru, Gregory M. Odegard
Abstract:
During the processing of high-performance thermoset polymer matrix composites, chemical reactions occur during elevated pressure and temperature cycles, causing the constituent monomers to crosslink and form a molecular network that gradually can sustain stress. As the crosslinking process progresses, the material naturally experiences a gradual shrinkage due to the increase in covalent bonds in the network. Once the cured composite completes the cure cycle and is brought to room temperature, the thermal expansion mismatch of the fibers and matrix causes additional residual stresses to form. These compounded residual stresses can compromise the reliability of the composite material and affect the composite strength. Composite process modeling is greatly complicated by the multiscale nature of the composite architecture. At the molecular level, the degree of cure controls the local shrinkage and thermal-mechanical properties of the thermoset. At the microscopic level, the local fiber architecture and packing affect the magnitudes and locations of residual stress concentrations. At the macroscopic level, the layup sequence controls the nature of crack initiation and propagation due to residual stresses. The goal of this research is to use molecular dynamics (MD) and finite element analysis (FEA) to predict the residual stresses in composite laminates and the corresponding effect on composite failure. MD is used to predict the polymer shrinkage and thermomechanical properties as a function of degree of cure. This information is used as input into FEA to predict the residual stresses on the microscopic level resulting from the complete cure process. Virtual testing is subsequently conducted to predict strength allowables. Experimental characterization is used to validate the modeling.
Keywords: molecular dynamics, finite element analysis, process modeling, multiscale modeling
Procedia PDF Downloads 92
2963 Non-Destructive Testing of Selective Laser Melting Products
Authors: Luca Collini, Michele Antolotti, Diego Schiavi
Abstract:
At present, complex geometries, shrinking production times, rapidly increasing demand, and high quality-standard requirements make non-destructive (ND) control of additively manufactured components an indispensable means. On the other hand, a technology gap and the lack of standards regulating the methods and the acceptance criteria make the NDT of these components a stimulating field still to be fully explored. To date, penetrant testing, acoustic wave, tomography, radiography, and semi-automated ultrasound methods have been tested on metal-powder-based products. External defects, distortion, surface porosity, roughness, texture, internal porosity, and inclusions are the typical defects in the focus of testing. Detection of density and layer compactness has also been attempted on stainless steels by the ultrasonic scattering method. In this work, the authors present and discuss radiographic and ultrasound ND testing on additively manufactured Ti₆Al₄V and Inconel parts obtained by the selective laser melting (SLM) technology. In order to test the possibilities given by the radiographic method, both X-rays and γ-rays are applied to a set of specifically designed specimens realized by SLM. The specimens contain a family of defects representing those most commonly found, such as cracks and lack of fusion. The tests are also applied to real parts of various complexity and thickness. A set of practical indications and acceptance criteria is finally drawn.
Keywords: non-destructive testing, selective laser melting, radiography, UT method
Procedia PDF Downloads 146
2962 Experimental Investigation of Recycling Cementitious Materials in Low Strength Range for Sustainability and Affordability
Authors: Mulubrhan Berihu
Abstract:
Due to its design versatility, availability, and cost efficiency, concrete continues to be the most used construction material on earth. However, the production of Portland cement, the primary component of the concrete mix, has serious environmental and economic impacts. This shows there is a need to study the use of supplementary cementitious materials (SCMs). The most commonly used supplementary cementitious materials are wastes, and the use of these industrial waste products has technical, economic, and environmental benefits besides the reduction of CO2 emissions from cement production. This paper aims to document the effect on the strength of concrete of using low cement contents by maximizing supplementary cementitious materials such as fly ash. The cement content was below 250 kg/m3, and in all the mixes, the quantity of powder (cement + fly ash) was kept at about 500 kg. Accordingly, seven different cement contents (250 kg/m3, 195 kg/m3, 150 kg/m3, 125 kg/m3, 100 kg/m3, 85 kg/m3, 70 kg/m3) with different amounts of SCM replacement were tested. The mix proportions were prepared by keeping the water content constant and varying the cement content, SCMs, and water-to-binder ratio. Based on the different mix proportions of fly ash, a range of mix designs was formulated. The test results showed that using as little as 85 kg/m3 of cement is possible for plain concrete works, such as hollow block concrete, to achieve 9.8 MPa, and the experimental results indicate that strength is a function of the water-to-binder (w/b) ratio. The results show a big difference in the gain of compressive strength from 7 days to 28 days, which reflects the slow rate of hydration of fly ash concrete. As the w/b ratio increases, the strength decreases significantly. At the same time, higher permeability was seen in the specimens tested for three hours than in those tested for one hour.
Keywords: efficiency factor, cement content, compressive strength, mix proportion, w/c ratio, water permeability, SCMs
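The observed dependence of strength on the w/b ratio is often summarized by Abrams'-law-type relations of the form f = A / B^(w/b); the sketch below illustrates the general shape with purely illustrative constants, not values fitted to this study.

```python
def abrams_strength(w_b: float, A: float = 96.5, B: float = 8.2) -> float:
    """Estimate 28-day compressive strength (MPa) via Abrams' law: f = A / B**(w/b).
    A and B are empirical constants that vary with materials; the defaults here
    are illustrative only, not fitted to this study's mixes."""
    return A / B ** w_b

# Strength falls monotonically as the water-to-binder ratio rises.
for w_b in (0.4, 0.5, 0.6, 0.7):
    print(f"w/b = {w_b:.1f} -> {abrams_strength(w_b):.1f} MPa")
```

The monotonic decrease the loop prints mirrors the trend reported in the abstract: as w/b increases, strength drops significantly.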
Procedia PDF Downloads 43
2961 Analyzing Data Protection in the Era of Big Data under the Framework of Virtual Property Layer Theory
Authors: Xiaochen Mu
Abstract:
Data rights confirmation, as a key legal issue in the development of the digital economy, is undergoing a transition from a traditional rights paradigm to a more complex private-economic paradigm. In this process, data rights confirmation has evolved from a simple claim of rights to a complex structure encompassing multiple dimensions of personality rights and property rights. Current data rights confirmation practices are primarily reflected in two models: holistic rights confirmation and process rights confirmation. The holistic rights confirmation model continues the traditional "one object, one right" theory, while the process rights confirmation model, through contractual relationships in the data processing process, recognizes rights that are more adaptable to the needs of data circulation and value release. In the design of the data property rights system, there is a hierarchical characteristic aimed at decoupling from raw data to data applications through horizontal stratification and vertical staging. This design not only respects the ownership rights of data originators but also, based on the usufructuary rights of enterprises, constructs a corresponding rights system for different stages of data processing activities. The subjects of data property rights include both data originators, such as users, and data producers, such as enterprises, who enjoy different rights at different stages of data processing. The intellectual property rights system, with the mission of incentivizing innovation and promoting the advancement of science, culture, and the arts, provides a complete set of mechanisms for protecting innovative results. 
However, unlike traditional private property rights, the granting of intellectual property rights is not an end in itself; the purpose of the intellectual property system is to balance the exclusive rights of the rights holders with the prosperity and long-term development of society's public learning and the entire field of science, culture, and the arts. Therefore, the intellectual property granting mechanism provides both protection and limitations for the rights holder. This perfectly aligns with the dual attributes of data. In terms of achieving the protection of data property rights, the granting of intellectual property rights is an important institutional choice that can enhance the effectiveness of the data property exchange mechanism. Although this is not the only path, the granting of data property rights within the framework of the intellectual property rights system helps to establish fundamental legal relationships and rights confirmation mechanisms and is more compatible with the classification and grading system of data. The modernity of the intellectual property rights system allows it to adapt to the needs of big data technology development through special clauses or industry guidelines, thus promoting the comprehensive advancement of data intellectual property rights legislation. This paper analyzes data protection under the virtual property layer theory and a two-fold virtual property rights system. Based on the "bundle of rights" theory, this paper establishes specific three-level data rights. This paper analyzes the cases Google v Vidal-Hall, Halliday v Creation Consumer Finance, Douglas v Hello! Ltd, Campbell v MGN, and Imerman v Tchenguiz. This paper concludes that recognizing property rights over personal data and protecting data under the framework of intellectual property will be beneficial for establishing the tort of misuse of personal information.
Keywords: data protection, property rights, intellectual property, big data
Procedia PDF Downloads 39
2960 Effect of Jatropha curcas Leaf Extract on Castor Oil Induced Diarrhea in Albino Rats
Authors: Fatima U. Maigari, Musa Halilu, M. Maryam Umar, Rabiu Zainab
Abstract:
Plants as therapeutic agents are used as drugs in many parts of the world. Medicinal plants are mostly used in developing countries due to cultural acceptability, beliefs, or lack of easy access to primary health care services. Jatropha curcas is a plant from the Euphorbiaceae family which is widely used in Northern Nigeria as an anti-diarrheal agent. This study was conducted to determine the anti-diarrheal effect of the leaf extract on castor oil induced diarrhea in albino rats. The leaves of J. curcas were collected from Balanga Local Government in Gombe State, north-eastern Nigeria, owing to their availability there. The leaves were air-dried at room temperature and ground to powder. Phytochemical screening was done, and different concentrations of the extract were prepared and administered to the different categories of experimental animals. From the results, aqueous leaf extract of Jatropha curcas at doses of 200 mg/kg and 400 mg/kg was found to reduce the mean stool score as compared to control rats; however, maximum reduction was achieved with the standard drug loperamide (5 mg/kg). Treatment of diarrhea with 200 mg/kg of the extract did not produce any significant decrease in stool fluid content, but the decrease was significant in rats treated with 400 mg/kg of the extract at 2 hours (0.05±0.02) and 4 hours (0.01±0.01). The significant reduction of diarrhea in the experimental animals indicates that the extract possesses anti-diarrheal activity.
Keywords: anti-diarrhea, diarrhea, Jatropha curcas, loperamide
Procedia PDF Downloads 331
2959 Building Atmospheric Moisture Diagnostics: Environmental Monitoring and Data Collection
Authors: Paula Lopez-Arce, Hector Altamirano, Dimitrios Rovas, James Berry, Bryan Hindle, Steven Hodgson
Abstract:
Efficient mould remediation and accurate diagnostics of the moisture that leads to condensation and mould growth in dwellings are largely untapped. A number of factors contribute to the rising trend of excessive moisture in homes, mainly linked with modern living, increased levels of occupation, and rising fuel costs, as well as with making homes more energy efficient. Environmental monitoring by means of data collection through logger sensors and survey forms has been performed in a range of buildings from different UK regions. Air and surface temperature and relative humidity values of residential areas affected by condensation and/or mould issues were recorded. Additional measurements were taken through different trials changing the type, location, and position of the loggers. In some instances, IR thermal images and ventilation rates have also been acquired. Results have been interpreted together with environmental key parameters by processing and connecting data from loggers and survey questionnaires, both in buildings with and without moisture issues. Monitoring exercises carried out during winter and spring show the importance of developing and following accurate protocols for guidance to obtain consistent, repeatable, and comparable results and to improve the performance of environmental monitoring. A model and a protocol are being developed to build a diagnostic tool with the goal of performing a simple but precise residential atmospheric moisture diagnosis to distinguish the cause entailing condensation and mould generation, i.e., a ventilation, insulation, or heating-system issue. This research shows the relevance of monitoring and processing environmental data to assign moisture risk levels and determine the origin of condensation or mould when dealing with a building's atmospheric moisture excess.
Keywords: environmental monitoring, atmospheric moisture, protocols, mould
Procedia PDF Downloads 139
2958 AI-Based Techniques for Online Social Media Network Sentiment Analysis: A Methodical Review
Authors: A. M. John-Otumu, M. M. Rahman, O. C. Nwokonkwo, M. C. Onuoha
Abstract:
Online social media networks have long served as a primary arena for group conversations, gossip, and text-based information sharing and distribution. The use of natural language processing techniques for text classification and unbiased decision-making is not far-fetched, but proper classification of this textual information in a given context has been very difficult. As a result, we conducted a systematic review of previous literature on sentiment classification and the AI-based techniques that have been used, in order to gain a better understanding of how to design and develop a robust and more accurate sentiment classifier that can correctly classify social media textual information of a given context between hate speech and inverted compliments with a high level of accuracy. We evaluated over 250 articles from digital sources like ScienceDirect, ACM, Google Scholar, and IEEE Xplore and whittled the number down to 31 studies. Findings revealed that deep learning approaches such as CNN, RNN, BERT, and LSTM outperformed various machine learning techniques in terms of performance accuracy. A large dataset is also necessary for developing a robust sentiment classifier and can be obtained from sources like Twitter, movie reviews, Kaggle, SST, and SemEval Task 4. Hybrid deep learning techniques like CNN+LSTM, CNN+GRU, and CNN+BERT outperformed single deep learning techniques and machine learning techniques. The Python programming language outperformed Java for sentiment analyzer development due to its simplicity and AI-oriented library functionality. Based on some of the important findings from this study, we made recommendations for future research.
Keywords: artificial intelligence, natural language processing, sentiment analysis, social network, text
Procedia PDF Downloads 115
2957 Cocrystal of Mesalamine for Enhancement of Its Biopharmaceutical Properties, Utilizing Supramolecular Chemistry Approach
Authors: Akshita Jindal, Renu Chadha, Maninder Karan
Abstract:
Supramolecular chemistry has gained recent eminence in a flurry of research documents demonstrating the formation of new crystalline forms with potentially advantageous characteristics. Mesalamine (5-aminosalicylic acid) belongs to the anti-inflammatory class of drugs and is used to treat ulcerative colitis and Crohn's disease. Unfortunately, mesalamine suffers from poor solubility and therefore very low bioavailability. This work is focused on the preparation and characterization of a cocrystal of mesalamine with nicotinamide (MNIC), a coformer of GRAS status. Cocrystallization was achieved by solvent drop grinding in a stoichiometric ratio of 1:1 using acetonitrile as solvent, and the product was characterized by various techniques including differential scanning calorimetry (DSC), powder X-ray diffraction (PXRD), and Fourier transform infrared spectroscopy (FTIR). The cocrystal showed a single endothermic transition (254°C) that differed from the melting peaks of both the drug (288°C) and the coformer (128°C), indicating the formation of a new solid phase. PXRD patterns and FTIR spectra of the cocrystal different from those of the individual components confirm the formation of the new phase. Apparent solubility and intrinsic dissolution studies showed the effectiveness of this cocrystal. Further improvement in the pharmacokinetic profile has also been observed, with a two-fold increase in bioavailability. To conclude, our results show that the application of nicotinamide as a coformer is a viable approach towards the preparation of cocrystals of potential drug molecules having limited solubility.
Keywords: cocrystal, mesalamine, nicotinamide, solvent drop grinding
Procedia PDF Downloads 177
2956 Evaluation of Modern Natural Language Processing Techniques via Measuring a Company's Public Perception
Authors: Burak Oksuzoglu, Savas Yildirim, Ferhat Kutlu
Abstract:
Opinion mining (OM) is one of the natural language processing (NLP) problems of determining the polarity of opinions, mostly represented on a positive-neutral-negative axis. The data for OM are usually collected from various social media platforms. In an era where social media has considerable control over companies' futures, it is worth understanding social media and taking action accordingly. OM comes to the fore here as the scale of the discussion about companies increases and it becomes unfeasible to gauge opinion at the individual level. Thus, companies opt to automate this process by applying machine learning (ML) approaches to their data. For the last two decades, OM, or sentiment analysis (SA), has mainly been performed by applying ML classification algorithms such as support vector machines (SVM) and Naïve Bayes to bag-of-n-gram representations of textual data. With the advent of deep learning and its apparent success in NLP, traditional methods have become obsolete. The transfer learning paradigm commonly used in computer vision (CV) problems has lately started to shape NLP approaches and language models (LMs). This gave a sudden rise to the usage of pretrained language models (PTMs), which contain language representations obtained by training on large datasets using self-supervised learning objectives. The PTMs are further fine-tuned on a specialized downstream task dataset to produce efficient models for various NLP tasks such as OM, NER (named-entity recognition), question answering (QA), and so forth. In this study, traditional and modern NLP approaches have been evaluated for OM using a sizable corpus belonging to a large private company containing about 76,000 comments in Turkish: SVM with a bag of n-grams, and two chosen pre-trained models, the multilingual universal sentence encoder (MUSE) and bidirectional encoder representations from transformers (BERT).
The MUSE model is a multilingual model that supports 16 languages, including Turkish, and is based on convolutional neural networks. BERT, a monolingual model in our case, is based on transformer neural networks; it uses a masked language model and a next-sentence-prediction task that allow bidirectional training of the transformers. During the training phase, pre-processing operations such as morphological parsing, stemming, and spelling correction were not used, since experiments showed that their contribution to model performance was insignificant even though Turkish is a highly agglutinative and inflective language. The results show that deep learning methods with pre-trained models and fine-tuning achieve about an 11% improvement over SVM for OM. The BERT model achieved around 94% prediction accuracy, while the MUSE model achieved around 88% and the SVM around 83%. The MUSE multilingual model shows better results than the SVM, but it still performs worse than the monolingual BERT model.
Keywords: BERT, MUSE, opinion mining, pretrained language model, SVM, Turkish
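For readers unfamiliar with the bag-of-n-grams representation used for the SVM baseline, the following minimal Python sketch builds such a feature map from raw text; the whitespace tokenizer and the example sentence are illustrative only, not the study's pipeline.

```python
from collections import Counter
from itertools import chain

def ngrams(tokens, n):
    """All contiguous word n-grams of length n."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bag_of_ngrams(text, n_max=2):
    """Counter over word uni- to n_max-grams -- the sparse feature map
    a linear classifier such as an SVM would consume."""
    tokens = text.lower().split()
    return Counter(chain.from_iterable(ngrams(tokens, n) for n in range(1, n_max + 1)))

features = bag_of_ngrams("the service was not good not good at all")
print(features[("not", "good")])  # -> 2
```

Note how the bigram ("not", "good") captures a negation cue that the unigram "good" alone would miss, which is exactly why n > 1 helps bag-of-words sentiment baselines.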
Procedia PDF Downloads 146
2955 Validation of Escherichia coli O157:H7 Inactivation on Apple-Carrot Juice Treated with Manothermosonication by Kinetic Models
Authors: Ozan Kahraman, Hao Feng
Abstract:
Several models, such as the Weibull, modified Gompertz, biphasic linear, and log-logistic models, have been proposed to describe non-linear inactivation kinetics and have been used to fit non-linear inactivation data of several microorganisms for inactivation by heat, high pressure processing, or pulsed electric field. First-order kinetic parameters (D-values and z-values) have often been used to describe the reduction in microbial survival counts under non-thermal processing methods such as ultrasound, and most ultrasonic inactivation studies have employed them. This study was conducted to analyze E. coli O157:H7 inactivation data using five microbial survival models (first-order, Weibull, modified Gompertz, biphasic linear, and log-logistic) fitted to the inactivation curves of Escherichia coli O157:H7. The residual sum of squares and the total sum of squares criteria were used to evaluate the models. The statistical indices of the kinetic models were used to fit inactivation data for E. coli O157:H7 treated by MTS at three temperatures (40, 50, and 60 °C) and three pressures (100, 200, and 300 kPa). Based on the statistical indices and visual observations, the Weibull and biphasic models fitted the MTS data best, as shown by high R2 values. The non-linear kinetic models, including the modified Gompertz, first-order, and log-logistic models, did not provide any better fit to the MTS data than the Weibull and biphasic models. It was observed that the data found in this study did not follow first-order kinetics, possibly because cells sensitive to ultrasound treatment were inactivated first, resulting in a fast initial inactivation period, while those resistant to ultrasound were killed slowly. The Weibull and biphasic models were found to be more flexible for describing the survival curves of E. coli O157:H7 treated by MTS in apple-carrot juice.
Keywords: Weibull, biphasic, MTS, kinetic models, E. coli O157:H7
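As a sketch of how the Weibull survival model log10(N/N0) = -(t/δ)^p can be fitted to inactivation data, the following pure-Python example recovers p and δ from noiseless synthetic data via a log-log linearization; the data and parameter values are illustrative, not the study's measurements.

```python
import math

def fit_weibull(times, log_survival):
    """Fit log10(N/N0) = -(t/delta)**p by least squares on the linearized
    form ln(-log10 S) = p*ln(t) - p*ln(delta)."""
    xs = [math.log(t) for t in times]
    ys = [math.log(-s) for s in log_survival]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # Slope of the regression line is p; the intercept yields delta.
    p = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    intercept = my - p * mx
    delta = math.exp(-intercept / p)
    return p, delta

# Noiseless synthetic survival data generated with p = 0.6, delta = 2.0 (illustrative).
times = [0.5, 1.0, 2.0, 4.0, 8.0]
log_s = [-(t / 2.0) ** 0.6 for t in times]
p, delta = fit_weibull(times, log_s)
print(round(p, 3), round(delta, 3))  # -> 0.6 2.0
```

A shape parameter p < 1, as here, produces the concave-upward survival curve the abstract describes: a fast initial kill of sensitive cells followed by a slow tail of resistant ones.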
Procedia PDF Downloads 366
2954 Digi-Buddy: A Smart Cane with Artificial Intelligence and Real-Time Assistance
Authors: Amaladhithyan Krishnamoorthy, Ruvaitha Banu
Abstract:
Vision is considered the most important sense in humans, without which leading a normal life can often be difficult. There are many existing smart canes for the visually impaired with obstacle detection using an ultrasonic transducer to help them navigate. Though the basic smart cane increases the safety of the users, it does not help in filling the void of visual loss. This paper introduces the concept of Digi-Buddy, an evolved smart cane for the visually impaired. The cane consists of several modules. Apart from the basic obstacle detection features, Digi-Buddy assists the user by capturing video/images with a wide-angled camera and streaming them to the server, which then detects the objects using a deep convolutional neural network. In addition to determining what the particular image/object is, the distance of the object is assessed by the ultrasonic transducer. The sound generation application, modelled with the help of natural language processing, is used to convert the processed images/objects into audio. The detected object is signified by its name, which is transmitted to the user through Bluetooth earphones. The object detection is extended to facial recognition, which maps the faces of the persons the user meets against a database of face images and alerts the user about the person. Another crucial function is an automatic intimation alarm, which is triggered when the user is in an emergency. If the user recovers within a set time, a button provisioned in the cane stops the alarm; otherwise, an automatic intimation is sent to friends and family about the whereabouts of the user using GPS. In addition to the safety and security provided by existing smart canes, the proposed concept, to be implemented as a prototype, helps the visually impaired visualize their surroundings through audio in a more amicable way.
Keywords: artificial intelligence, facial recognition, natural language processing, internet of things
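Ultrasonic obstacle ranging of the kind these canes use rests on a simple time-of-flight calculation, sketched below; the echo time and speed-of-sound values are illustrative.

```python
def ultrasonic_distance_cm(echo_time_s: float, speed_of_sound_m_s: float = 343.0) -> float:
    """Obstacle distance from an ultrasonic transducer's echo time.
    The pulse travels to the obstacle and back, so the path is halved."""
    return (echo_time_s * speed_of_sound_m_s / 2.0) * 100.0

# A 5.83 ms round-trip echo corresponds to roughly one metre.
print(round(ultrasonic_distance_cm(0.00583), 1))  # -> 100.0
```

The default 343 m/s is the speed of sound in air at about 20 °C; a real cane would either accept this approximation or correct for temperature.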
Procedia PDF Downloads 355
2953 Low-Temperature Fabrication of Reaction Bonded Composites, Based on SiC and (SiC+B4C) Mixture, Infiltrated with Si-Al Alloy
Authors: Helen Dilman, Eyal Oz, Shmuel Hayun, Nahum Frage
Abstract:
The conventional approach for manufacturing silicon carbide and boron carbide reaction bonded composites is based on infiltrating a porous ceramic preform with molten silicon. The relatively high melting temperature of the silicon infiltrating medium is a drawback of the process. The present contribution is concerned with an approach that allows obtaining reaction bonded composites by pressure-less infiltration in a significantly lower temperature range (850-1000°C). This approach was applied to the fabrication of fully dense SiC/(Si-Al) and (SiC+B4C)/(Si-Al) composites. The key feature of the approach is the use of Si alloys with a low melting temperature and a Mg-vapor atmosphere, under which adequate wetting between the ceramics and the liquid alloys is achieved for the infiltration process. In the first set of experiments, ceramic preforms compacted from multimodal SiC powders (with a green density of about 27 vol. %) without free carbon addition were infiltrated with a Si-20%Al alloy at 950°C. In the second set, 19 vol. % of a fine boron carbide powder was added to the SiC powders as a source of carbon. The green density of the SiC-B4C preforms was about 23-25 vol. %. In both cases, successful infiltration was achieved, and the composites were fully dense. The density of the composites was about 3 g/cm3. For the SiC-based composites, the hardness value was 750±150 HV, the Young's modulus 280 GPa, and the bending strength 240±30 MPa. These values for the (SiC-B4C)/(Si-Al) composites (1460±200 HV, 317 GPa, and 360±20 MPa) were significantly higher due to the formation of novel ceramic phases. Microstructural characteristics of the composites and their phase composition will be discussed.
Keywords: boron carbide, composites, infiltration, low temperatures, silicon carbide
Procedia PDF Downloads 547
2952 Audio-Visual Co-Data Processing Pipeline
Authors: Rita Chattopadhyay, Vivek Anand Thoutam
Abstract:
Speech is the most natural means of communication, allowing us to exchange thoughts and feelings quickly. Quite often, people who can communicate orally still cannot interact or work with computers or devices. It is easier and quicker to give speech commands than to type commands into a computer, and likewise easier to listen to audio played from a device than to read its output. With robotics being an emerging market, with applications in warehouses, the hospitality industry, consumer electronics, assistive technology, etc., speech-based human-machine interaction is emerging as a lucrative feature for robot manufacturers. Considering this, the objective of this paper is to design the “Audio-Visual Co-Data Processing Pipeline.” This pipeline integrates automatic speech recognition, a natural language model for text understanding, object detection, and text-to-speech modules. Many deep learning models exist for each of these modules, but OpenVINO Model Zoo models are used because the OpenVINO toolkit covers both computer vision and non-computer vision workloads across Intel hardware, maximizes performance, and accelerates application development. A speech command is given as input containing the target objects to be detected and the start and end times of the interval to be extracted from the video. Speech is converted to text using the QuartzNet automatic speech recognition model. A summary is extracted from the text using the Generative Pre-Trained Transformer-3 (GPT-3) natural language model. Based on the summary, the relevant frames are extracted from the video, and the You Only Look Once (YOLO) object detection model detects objects in these extracted frames. The numbers of the frames containing the target objects (the objects specified in the speech command) are saved as text.
Finally, this text (the frame numbers) is converted to speech using a text-to-speech model and played from the device. The project is developed for the 80 YOLO labels, and the user can extract frames based on one or two target labels; the pipeline can easily be extended to more than two target labels by making appropriate changes in the object detection module. The project supports four different speech command formats, implemented by including sample examples in the prompt used by the GPT-3 model; based on user preference, a new speech command format can be added by including examples of that format in the prompt. This pipeline can be used in many applications, such as human-machine interfaces, human-robot interaction, and surveillance through speech commands. Any object detection project can be upgraded with this pipeline so that speech commands can be given and the output is played from the device. Keywords: OpenVINO, automatic speech recognition, natural language processing, object detection, text to speech
Procedia PDF Downloads 80
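The core selection logic of the pipeline above (mapping the start/end times from the speech command to frame indices, then keeping only the frames whose detections contain every target label) can be sketched in plain Python. This is a minimal illustration, not the authors' implementation: the ASR, GPT-3, and YOLO stages are omitted, and all names here (`interval_frames`, `frames_with_targets`, `Detection`) are hypothetical, not OpenVINO API calls.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """Per-frame result a YOLO-style detector might produce: frame index + label set."""
    frame: int
    labels: set

def interval_frames(start_s: float, end_s: float, fps: float) -> range:
    """Frame indices covering the [start_s, end_s] interval at the given frame rate."""
    return range(int(start_s * fps), int(end_s * fps) + 1)

def frames_with_targets(detections: list, targets: list) -> list:
    """Frame numbers whose detected labels include every requested target label."""
    wanted = set(targets)
    return [d.frame for d in detections if wanted <= d.labels]

# Pretend the detector already ran on the four frames of the extracted interval:
detections = [
    Detection(0, {"person"}),
    Detection(1, {"person", "dog"}),
    Detection(2, {"dog"}),
    Detection(3, set()),
]
print(list(interval_frames(0.0, 0.1, 30)))                 # frames 0..3
print(frames_with_targets(detections, ["person", "dog"]))  # only frame 1 has both
```

Supporting more than two target labels, as the abstract suggests, then reduces to passing a longer `targets` list, since the subset test is label-count agnostic.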