Search results for: cyclostationary signal processing
511 Generative Pre-Trained Transformers (GPT-3) and Their Impact on Higher Education
Authors: Sheelagh Heugh, Michael Upton, Kriya Kalidas, Stephen Breen
Abstract:
This article aims to create awareness of the opportunities and issues the artificial intelligence (AI) tool GPT-3 (Generative Pre-trained Transformer-3) brings to higher education. Technological disruptors have featured in higher education (HE) since Konrad Zuse developed the first functional programmable automatic digital computer. The flurry of technological advances, such as personal computers, smartphones, the world wide web, search engines, and artificial intelligence (AI), have regularly caused disruption and discourse across the educational landscape around harnessing the change for the good. Accepting that AI influences are inevitable, we took a mixed-methods approach through participatory action research and evaluation. Joining HE communities, reviewing the literature, and conducting our own research around Chat GPT-3, we reviewed our institutional approach to changing our current practices and developing policy linked to assessments and the use of Chat GPT-3. We review the impact on HE of GPT-3, a high-powered natural language processing (NLP) system first seen in 2020. Historically, HE has flexed and adapted with each technological advancement, and the latest debates among educationalists focus on the issues around this version of AI, which creates natural human language text from prompts and can also generate code and images. This paper explores how Chat GPT-3 affects the current educational landscape: we debate current views around plagiarism, research misconduct, and the credibility of assessment and determine the tool's value in developing skills for the workplace and enhancing critical analysis skills. These questions led us to review our institutional policy and explore the effects on our current assessments and the development of new assessments. Conclusions: After exploring the pros and cons of Chat GPT-3, it is evident that this form of AI cannot be un-invented. Technology needs to be harnessed for positive outcomes in higher education. We have observed materials developed through AI and their potential effects on our development of future assessments and teaching methods. Materials developed through Chat GPT-3 can still aid student learning but have led us to redevelop our institutional policy around plagiarism and academic integrity.
Keywords: artificial intelligence, Chat GPT-3, intellectual property, plagiarism, research misconduct
Procedia PDF Downloads 89
510 The Role of Artificial Intelligence in Creating Personalized Health Content for Elderly People: A Systematic Review Study
Authors: Mahnaz Khalafehnilsaz, Rozina Rahnama
Abstract:
Introduction: The elderly population is growing rapidly, and with this growth comes an increased demand for healthcare services. Artificial intelligence (AI) has the potential to revolutionize the delivery of healthcare services to the elderly population. In this study, the various ways in which AI is used to create health content for elderly people and its transformative impact on the healthcare industry will be explored. Method: A systematic review of the literature was conducted to identify studies that have investigated the role of AI in creating health content specifically for elderly people. Several databases, including PubMed, Scopus, and Web of Science, were searched for relevant articles published between 2000 and 2022. The search strategy employed a combination of keywords related to AI, personalized health content, and the elderly. Studies that utilized AI to create health content for elderly individuals were included, while those that did not meet the inclusion criteria were excluded. A total of 20 articles that met the inclusion criteria were identified. Findings: The findings of this review highlight the diverse applications of AI in creating health content for elderly people. One significant application is the use of natural language processing (NLP), which involves the creation of chatbots and virtual assistants capable of providing personalized health information and advice to elderly patients. AI is also utilized in the field of medical imaging, where algorithms analyze medical images such as X-rays, CT scans, and MRIs to detect diseases and abnormalities. Additionally, AI enables the development of personalized health content for elderly patients by analyzing large amounts of patient data to identify patterns and trends that can inform healthcare providers in developing tailored treatment plans. Conclusion: AI is transforming the healthcare industry by providing a wide range of applications that can improve patient outcomes and reduce healthcare costs. From creating chatbots and virtual assistants to analyzing medical images and developing personalized treatment plans, AI is revolutionizing the way healthcare is delivered to elderly patients. Continued investment in this field is essential to ensure that elderly patients receive the best possible care.
Keywords: artificial intelligence, health content, older adult, healthcare
Procedia PDF Downloads 69
509 Inquiry on Regenerative Tourism in an Avian Destination: A Case Study of Kaliveli in Tamil Nadu, India
Authors: Anu Chandran, Reena Esther Rani
Abstract:
Background of the Study: Dotted with multiple Unique Destination Propositions (UDPs), Tamil Nadu is an established tourism brand as regards leisure, MICE, culture, and ecological flavors. However, the enchanting destination possesses distinctive attributes and resources yet to be tapped for better competitive advantage. Being a destination that allures an incredible variety of migratory birds, Tamil Nadu is deemed to be an ornithologist’s paradise. This study primarily explores the prospects of developing Kaliveli, recognized as a bird sanctuary in the Tindivanam forest division of the Villupuram district in the State. Kaliveli is an ideal nesting site for migratory birds and is currently apt for a prospective analysis of regenerative tourism. Objectives of the study: This research lays an accent on avian tourism as part and parcel of sustainable tourism ventures. The impacts of projects like the Ornithological Conservation Centre on tourists have been gauged in the present paper. It maps the futuristic proactive propositions linked to regenerative tourism on the site. How far technological innovations such as Artificial Intelligence, Smart Tourism, and similar recent coinages can do a world of good in Kaliveli by enticing genuine eco-tourists has also been conceptualized. The experiential dimensions of resource stewardship, in terms of helping tourists relish the offerings in a sustainable manner, are at the crux of this work. Methodology: Modeled as a case study, this work deliberates on the impact of existing projects attributed to the avian fauna in Kaliveli. Conducted in a qualitative research design, the case study method was adopted for the processing and presentation of study results, drawn by applying thematic content analysis to the data collected from the field. Result and discussion: One of the key findings relates to the kind of nature trails that can be a regenerative dynamic for eco-friendly tourism in Kaliveli. Field visits were conducted to assess the niche tourism aspects which could be incorporated into the regenerative tourism model to be framed as part of the study.
Keywords: regenerative tourism, Kaliveli bird sanctuary, sustainable development, resource stewardship, ornithology, avian fauna
Procedia PDF Downloads 79
508 Spatial Analysis as a Tool to Assess Risk Management in Peru
Authors: Josué Alfredo Tomas Machaca Fajardo, Jhon Elvis Chahua Janampa, Pedro Rau Lavado
Abstract:
A flood vulnerability index was developed for the Piura River watershed in northern Peru using Principal Component Analysis (PCA) to assess flood risk. The official methodology to assess risk from natural hazards in Peru was introduced in 1980 and proved effective for aiding complex decision-making. This method relies in part on decision-makers defining subjective correlations between variables to identify high-risk areas. While risk identification and ensuing response activities benefit from a qualitative understanding of influences, this method does not take advantage of the advent of national and international data collection efforts, which can supplement our understanding of risk. Furthermore, this method does not take advantage of broadly applied statistical methods such as PCA, which highlight central indicators of vulnerability. Nowadays, information processing is much faster and allows for more objective decision-making tools, such as PCA. The approach presented here develops a tool to improve the current flood risk assessment in the Peruvian basin. Hence, the spatial analysis of the census and other datasets provides a better understanding of the current land occupation and a basin-wide distribution of services and human populations, a necessary step toward ultimately reducing flood risk in Peru. PCA allows the simplification of a large number of variables into a few factors covering the social, economic, physical, and environmental dimensions of vulnerability. There is a correlation between where people settle and water availability, found mainly along rivers. For this reason, a comprehensive view of the population's location around the river basin is necessary to establish flood prevention policies. Grouping the territory into 5x5 km gridded areas allows the spatial analysis of flood risk rather than an assessment by political divisions. The index was applied to the Peruvian region of Piura, where several flood events occurred in recent years, it being one of the regions most affected during ENSO events in Peru. The analysis evidenced inequalities in access to basic services, such as water, electricity, internet, and sewage, between rural and urban areas.
Keywords: risk assessment, flood risk, indicators of vulnerability, principal component analysis
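As a rough illustration of the approach described above, the sketch below assembles a PCA-based vulnerability index from gridded indicators using scikit-learn; the indicator names, values, and the two-component choice are illustrative assumptions, not the study's actual data.

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

# Illustrative input: one row per 5x5 km grid cell, columns are census
# indicators of access to services (values are made up).
grid = pd.DataFrame({
    "pct_no_water":  [0.10, 0.55, 0.80, 0.05, 0.35],
    "pct_no_power":  [0.08, 0.40, 0.75, 0.02, 0.30],
    "pct_no_sewage": [0.20, 0.60, 0.90, 0.10, 0.45],
    "pop_density":   [1200, 300, 80, 2500, 600],
})

# Standardize so each indicator contributes on a comparable scale.
X = StandardScaler().fit_transform(grid)

# Reduce the many indicators to a few principal components.
pca = PCA(n_components=2)
scores = pca.fit_transform(X)

# Weight components by explained variance to form a single index,
# then rescale to [0, 1] for mapping.
index = scores @ pca.explained_variance_ratio_
index = (index - index.min()) / (index.max() - index.min())
print(grid.assign(vulnerability_index=index.round(3)))
```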
Procedia PDF Downloads 186
507 AI Predictive Modeling of Excited State Dynamics in OPV Materials
Authors: Pranav Gunhal, Krish Jhurani
Abstract:
This study tackles the significant computational challenge of predicting excited state dynamics in organic photovoltaic (OPV) materials—a pivotal factor in the performance of solar energy solutions. Time-dependent density functional theory (TDDFT), though effective, is computationally prohibitive for larger and more complex molecules. As a solution, the research explores the application of transformer neural networks, a type of artificial intelligence (AI) model known for its superior performance in natural language processing, to predict excited state dynamics in OPV materials. The methodology involves a two-fold process. First, the transformer model is trained on an extensive dataset comprising over 10,000 TDDFT calculations of excited state dynamics from a diverse set of OPV materials. Each training example includes a molecular structure and the corresponding TDDFT-calculated excited state lifetimes and key electronic transitions. Second, the trained model is tested on a separate set of molecules, and its predictions are rigorously compared to independent TDDFT calculations. The results indicate a remarkable degree of predictive accuracy. Specifically, for a test set of 1,000 OPV materials, the transformer model predicted excited state lifetimes with a mean absolute error of 0.15 picoseconds, a negligible deviation from TDDFT-calculated values. The model also correctly identified key electronic transitions contributing to the excited state dynamics in 92% of the test cases, signifying a substantial concordance with the results obtained via conventional quantum chemistry calculations. The practical integration of the transformer model with existing quantum chemistry software was also realized, demonstrating its potential as a powerful tool in the arsenal of materials scientists and chemists. The implementation of this AI model is estimated to reduce the computational cost of predicting excited state dynamics by two orders of magnitude compared to conventional TDDFT calculations. The successful utilization of transformer neural networks to accurately predict excited state dynamics provides an efficient computational pathway for the accelerated discovery and design of new OPV materials, potentially catalyzing advancements in the realm of sustainable energy solutions.
Keywords: transformer neural networks, organic photovoltaic materials, excited state dynamics, time-dependent density functional theory, predictive modeling
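To make the modeling setup concrete, here is a minimal PyTorch sketch of a transformer encoder regressing excited-state lifetimes from tokenized molecular structures; the vocabulary, architecture sizes, and dummy data are illustrative assumptions, not the authors' trained model. Mean pooling over token embeddings is one simple way to reduce a variable-length molecular encoding to a single regression target.

```python
import torch
import torch.nn as nn

class LifetimeRegressor(nn.Module):
    """Toy transformer encoder mapping a tokenized molecule (e.g., SMILES
    characters) to a predicted excited-state lifetime (regression)."""
    def __init__(self, vocab_size=64, d_model=128, nhead=4, nlayers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, nlayers)
        self.head = nn.Linear(d_model, 1)

    def forward(self, tokens):            # tokens: (batch, seq_len) int64
        h = self.encoder(self.embed(tokens))
        return self.head(h.mean(dim=1)).squeeze(-1)  # mean-pool, then regress

# Dummy batch: 8 molecules, 32 tokens each, fake lifetimes in picoseconds.
tokens = torch.randint(0, 64, (8, 32))
lifetimes = torch.rand(8) * 2.0

model = LifetimeRegressor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss = nn.L1Loss()(model(tokens), lifetimes)   # MAE, the metric reported above
loss.backward()
opt.step()
print(f"MAE on dummy batch: {loss.item():.3f} ps")
```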
Procedia PDF Downloads 118
506 Interferon-Induced Transmembrane Protein-3 rs12252-CC Associated with the Progress of Hepatocellular Carcinoma by Up-Regulating the Expression of Interferon-Induced Transmembrane Protein 3
Authors: Yuli Hou, Jianping Sun, Mengdan Gao, Hui Liu, Ling Qin, Ang Li, Dongfu Li, Yonghong Zhang, Yan Zhao
Abstract:
Background and Aims: Interferon-induced transmembrane protein 3 (IFITM3) is a member of the interferon-stimulated gene (ISG) family. IFITM3 has been recognized as a key signal molecule regulating cell growth in some tumors. However, the function of the IFITM3 rs12252-CC genotype in hepatocellular carcinoma (HCC) remains unknown to the authors' best knowledge. A cohort study was employed to clarify the relationship between the IFITM3 rs12252-CC genotype and HCC progression, and cellular experiments were used to investigate the correlation between the function of IFITM3 and the progress of HCC. Methods: 336 candidates were enrolled in the study, including 156 with HBV-related HCC and 180 with chronic hepatitis B infection or liver cirrhosis. Polymerase chain reaction (PCR) was employed to determine the gene polymorphism of IFITM3. The functions of IFITM3 were examined in PLC/PRF/5 cells under different treatments: LV-IFITM3, transfected with lentivirus to knock down the expression of IFITM3, and LV-NC, transfected with empty lentivirus as a negative control. IFITM3 expression, proliferation, and migration were detected by quantitative reverse transcription polymerase chain reaction (qRT-PCR), QuantiGene Plex 2.0 assay, western blotting, immunohistochemistry, Cell Counting Kit (CCK)-8, and wound healing assays, respectively. Six PLC/PRF/5 samples (three infected with empty lentivirus as the control group; three infected with LV-IFITM3 lentivirus as the experimental group) were sequenced at BGI (Beijing Genomics Institute, Shenzhen, China) using RNA-seq technology to identify IFITM3-related signaling pathways, and the PI3K/AKT pathway was chosen as the related signaling pathway to verify. Results: The patients with HCC had a significantly higher proportion of IFITM3 rs12252-CC compared with the patients with chronic HBV infection or liver cirrhosis. The distribution of the CC genotype in HCC patients with low differentiation was significantly higher than in those with high differentiation. Patients with the CC genotype presented with larger tumor size, a higher percentage of vascular thrombosis, a higher distribution of low differentiation, and a higher 5-year relapse rate than those with CT/TT genotypes. The expression of IFITM3 was higher in HCC tissues than in adjacent normal tissues, and the level of IFITM3 was higher in HCC tissues with low differentiation and metastasis than in those with high/medium differentiation and without metastasis. A higher IFITM3 RNA level was found in the CC genotype than in the TT genotype. In PLC/PRF/5 cells with knockdown, proliferation and migration were inhibited. Analysis of the RNA sequencing data and RT-PCR verification showed that the phosphatidylinositol 3-kinase/protein kinase B/mammalian target of rapamycin (PI3K/AKT/mTOR) pathway was associated with IFITM3 knockdown. With the inhibition of IFITM3, the PI3K/AKT/mTOR signaling pathway was blocked and the expression of vimentin decreased. Conclusions: IFITM3 rs12252-CC, with its higher expression, plays a vital role in the progress of HCC by regulating HCC cell proliferation and migration. These effects are associated with the PI3K/AKT/mTOR signaling pathway.
Keywords: IFITM3, interferon-induced transmembrane protein 3, HCC, hepatocellular carcinoma, PI3K/AKT/mTOR, phosphatidylinositol 3-kinase/protein kinase B/mammalian target of rapamycin
Procedia PDF Downloads 124
505 An Extended Domain-Specific Modeling Language for Marine Observatory Relying on Enterprise Architecture
Authors: Charbel Aoun, Loic Lagadec
Abstract:
A Sensor Network (SN) can be considered as operating in two phases: (1) observation/measuring, which means the accumulation of the gathered data at each sensor node; (2) transferring the collected data to some processing center (e.g., fusion servers) within the SN. Therefore, an underwater sensor network can be defined as a sensor network deployed underwater that monitors underwater activity. The deployed sensors, such as hydrophones, are responsible for registering underwater activity and transferring it to more advanced components. The process of data exchange between the aforementioned components perfectly defines the Marine Observatory (MO) concept, which provides information on ocean state, phenomena, and processes. The first step towards the implementation of this concept is defining the environmental constraints and the required tools and components (marine cables, smart sensors, data fusion server, etc.). The logical and physical components that are used in these observatories perform some critical functions such as the localization of underwater moving objects. These functions can be orchestrated with other services (e.g., military or civilian reaction). In this paper, we present an extension to our MO meta-model that is used to generate a design tool (ArchiMO). We propose new constraints to be taken into consideration at design time. We illustrate our proposal with an example from the MO domain. Additionally, we generate the corresponding simulation code using our self-developed domain-specific model compiler. On the one hand, this illustrates our approach of relying on an Enterprise Architecture (EA) framework that respects multiple views, the perspectives of stakeholders, and domain specificity. On the other hand, it helps reduce both the complexity and the time spent in the design activity, while preventing design modeling errors when porting this activity to the MO domain. In conclusion, this work aims to demonstrate that we can improve the design activity of complex systems based on the use of MDE technologies and a domain-specific modeling language with the associated tooling. The major improvement is to provide an early validation step via models and a simulation approach to consolidate the system design.
Keywords: smart sensors, data fusion, distributed fusion architecture, sensor networks, domain specific modeling language, enterprise architecture, underwater moving object, localization, marine observatory, NS-3, IMS
Procedia PDF Downloads 177
504 A Multi-Role Oriented Collaboration Platform for Distributed Disaster Reduction in China
Authors: Linyao Qiu, Zhiqiang Du
Abstract:
With the rapid urbanization, economic development, and steady population growth in China, the widespread devastation, economic damage, and loss of human life caused by numerous forms of natural disasters are becoming increasingly serious every year. Disaster management requires the available and effective cooperation of different roles and organizations in the whole process, including mitigation, preparedness, response, and recovery. Due to the imbalance of regional development in China, the disaster management capabilities of national and provincial disaster reduction centers are uneven. When an undeveloped area suffers a disaster, the local reduction department can neither obtain first-hand information, such as high-resolution remote sensing images from satellites and aircraft, independently, nor rely on a sharing mechanism to access data resources deployed elsewhere. Most existing disaster management systems operate in a typical passive, data-centric mode and serve a single department, so resources cannot be fully shared. This impediment blocks local departments and groups from quick emergency response and decision-making. In this paper, we introduce a collaborative platform for distributed disaster reduction. To address the imbalance in shared data sources and technology in the process of disaster reduction, we propose a multi-role oriented collaboration business mechanism, capable of scheduling and allocating multiple resources for optimum utilization, to link various roles for collaborative reduction business in different places. The platform fully considers the differences in equipment conditions across provinces and provides several service modes to satisfy technology needs in disaster reduction. An integrated collaboration system based on a focusing services mechanism is designed and implemented for resource scheduling, functional integration, data processing, task management, collaborative mapping, and visualization. Actual applications illustrate that the platform can effectively support data sharing and business collaboration between national and provincial departments. It could significantly improve the capability of disaster reduction in China.
Keywords: business collaboration, data sharing, distributed disaster reduction, focusing service
Procedia PDF Downloads 295
503 Realistic Modeling of the Preclinical Small Animal Using Commercial Software
Authors: Su Chul Han, Seungwoo Park
Abstract:
With the increasing incidence of cancer, radiotherapy technologies and modalities have advanced, and the importance of preclinical models is increasing in cancer research. Furthermore, small animal dosimetry is an essential part of evaluating the relationship between the absorbed dose in a preclinical small animal and the biological effect in a preclinical study. In this study, we carried out realistic modeling of a preclinical small animal phantom that makes it possible to verify the irradiated dose, using commercial software. The small animal phantom was modeled from the Moby 4D digital mouse whole-body phantom. To manipulate the Moby phantom in commercial software (Mimics, Materialise, Leuven, Belgium), we converted it to DICOM CT image files with Matlab; the two-dimensional CT images were then converted to a three-dimensional image, making it possible to segment and crop the CT images in sagittal, coronal, and axial views. The CT images of the small animal were modeled by the following process. Based on the profile line values, thresholding was carried out to make a mask connecting all regions within the same threshold range. Using the thresholding method, we segmented the images into three parts (bone, body tissue, and lung); to separate neighboring pixels between lung and body tissue, we used the region-growing function of the Mimics software. We acquired a 3D object by 3D calculation on the segmented images. The generated 3D object was smoothed by a remeshing operation with a smoothing factor of 0.4 and 5 iterations. Edge mode was selected to perform triangle reduction, with a tolerance of 0.1 mm, an edge angle of 15 degrees, and 5 iterations. The processed 3D object file was converted to an STL file for output on a 3D printer. We modified the 3D small animal file using 3-Matic Research (Materialise, Leuven, Belgium) to make space for radiation dosimetry chips. We thereby acquired a 3D object of a realistic small animal phantom. The width of the small animal phantom was 2.631 cm, the thickness was 2.361 cm, and the length was 10.817 cm. The Mimics software provided efficient 3D object generation and convenient conversion to STL files. The development of a preclinical small animal phantom should increase the reliability of absorbed dose verification in small animals for preclinical studies.
Keywords: mimics, preclinical small animal, segmentation, 3D printer
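Since Mimics is proprietary, the sketch below reproduces the thresholding and region-growing-style segmentation steps with open tooling (NumPy/scikit-image) on a synthetic volume; the Hounsfield cutoffs, the border heuristic for the lung, and the data are illustrative assumptions, not the study's actual workflow.

```python
import numpy as np
from skimage import measure

# Synthetic CT volume in Hounsfield units (z, y, x); a real study would
# load the DICOM series exported from the Moby phantom.
rng = np.random.default_rng(0)
ct = rng.integers(-1000, 1500, size=(32, 64, 64)).astype(np.int16)

# Thresholding into three masks; the HU cutoffs are illustrative.
bone = ct > 300
air_like = ct < -400
body = ~bone & ~air_like

# Region-growing-style step via connected components: keep only low-HU
# regions that do not touch the volume border (i.e., air enclosed inside
# the body, a stand-in for the lung).
labels = measure.label(air_like)
lung = np.zeros_like(air_like)
for region in measure.regionprops(labels):
    zmin, ymin, xmin, zmax, ymax, xmax = region.bbox
    inside = (zmin > 0 and ymin > 0 and xmin > 0 and
              zmax < ct.shape[0] and ymax < ct.shape[1] and xmax < ct.shape[2])
    if inside:
        lung[labels == region.label] = True

# Surface mesh of the bone mask; the vertices/faces could then be
# written to an STL file for 3D printing.
verts, faces, _, _ = measure.marching_cubes(bone.astype(np.uint8), level=0.5)
print(f"bone surface: {len(verts)} vertices, {len(faces)} faces")
print(f"voxels - bone: {bone.sum()}, body: {body.sum()}, lung: {lung.sum()}")
```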
Procedia PDF Downloads 366
502 Experimental Quantification of the Intra-Tow Resin Storage Evolution during RTM Injection
Authors: Mathieu Imbert, Sebastien Comas-Cardona, Emmanuelle Abisset-Chavanne, David Prono
Abstract:
Short cycle time Resin Transfer Molding (RTM) applications appear to be of great interest for the mass production of automotive or aeronautical lightweight structural parts. During the RTM process, the two components of a resin are mixed on-line and injected into the cavity of a mold where a fibrous preform has been placed. Injection and polymerization occur simultaneously in the preform, inducing evolutions of temperature, degree of cure, and viscosity that in turn affect flow and curing. In order to adjust the processing conditions to reduce the cycle time, it is, therefore, essential to understand and quantify the physical mechanisms occurring in the part during injection. In a previous study, a dual-scale simulation tool was developed to help determine the optimum injection parameters. This tool allows fine tracking of the distribution of the resin and the evolution of its properties during reactive injections with on-line mixing. Tows and channels of the fibrous material are considered separately to deal with the consequences of the dual-scale morphology of continuous fiber textiles. The simulation tool reproduces the unsaturated area at the flow front, generated by the tow/channel difference in permeability. Resin “storage” in the tows after saturation is also taken into account, as it may significantly affect the distribution and evolution of the temperature, degree of cure, and viscosity in the part during reactive injections. The aim of the current study is to use experiments to understand and quantify the “storage” evolution in the tows, in order to adjust and validate the numerical tool. The presented study is based on four experimental repeats conducted on three different types of textiles: a unidirectional Non Crimp Fabric (NCF), a triaxial NCF, and a satin weave. Model fluids, dyes, and image analysis are used to study quantitatively the resin flow in the saturated area of the samples. Textile characteristics affecting the resin “storage” evolution in the tows are also analyzed. Finally, fully coupled on-line mixing reactive injections are conducted to validate the numerical model.
Keywords: experimental, on-line mixing, high-speed RTM process, dual-scale flow
Procedia PDF Downloads 165
501 Treating Voxels as Words: Word-to-Vector Methods for fMRI Meta-Analyses
Authors: Matthew Baucum
Abstract:
With the increasing popularity of fMRI as an experimental method, psychology and neuroscience can greatly benefit from advanced techniques for summarizing and synthesizing large amounts of data from brain imaging studies. One promising avenue is automated meta-analyses, in which natural language processing methods are used to identify the brain regions consistently associated with certain semantic concepts (e.g. “social”, “reward”) across large corpora of studies. This study builds on this approach by demonstrating how, in fMRI meta-analyses, individual voxels can be treated as vectors in a semantic space and evaluated for their “proximity” to terms of interest. In this technique, a low-dimensional semantic space is built from brain imaging study texts, allowing words in each text to be represented as vectors (where words that frequently appear together are near each other in the semantic space). Consequently, each voxel in a brain mask can be represented as a normalized vector sum of all of the words in the studies that showed activation in that voxel. The entire brain mask can then be visualized in terms of each voxel’s proximity to a given term of interest (e.g., “vision”, “decision making”) or collection of terms (e.g., “theory of mind”, “social”, “agent”), as measured by the cosine similarity between the voxel’s vector and the term vector (or the average of multiple term vectors). Analysis can also proceed in the opposite direction, allowing word cloud visualizations of the nearest semantic neighbors for a given brain region. This approach allows for continuous, fine-grained metrics of voxel-term associations, and relies on state-of-the-art “open vocabulary” methods that go beyond mere word-counts. An analysis of over 11,000 neuroimaging studies from an existing meta-analytic fMRI database demonstrates that this technique can be used to recover known neural bases for multiple psychological functions, suggesting this method’s utility for efficient, high-level meta-analyses of localized brain function. While automated text analytic methods are no replacement for deliberate, manual meta-analyses, they seem to show promise for the efficient aggregation of large bodies of scientific knowledge, at least on a relatively general level.
Keywords: fMRI, machine learning, meta-analysis, text analysis
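A minimal sketch of the voxel-as-vector computation on toy data follows: the word vectors are assumed precomputed (in practice, learned from the study corpus), and each voxel's vector is the normalized sum of the word vectors from the studies that report activation there, scored against a term of interest by cosine similarity.

```python
import numpy as np

# Illustrative 4-dimensional word embeddings (in practice, learned from the corpus).
word_vecs = {
    "vision":   np.array([0.9, 0.1, 0.0, 0.2]),
    "reward":   np.array([0.1, 0.8, 0.3, 0.0]),
    "social":   np.array([0.0, 0.3, 0.9, 0.1]),
    "decision": np.array([0.2, 0.7, 0.4, 0.3]),
}

# Studies: the words in each study text and the voxels each study activates.
studies = [
    {"words": ["vision", "decision"], "voxels": [101, 102]},
    {"words": ["reward", "decision"], "voxels": [102, 203]},
    {"words": ["social"],             "voxels": [203]},
]

def normalize(v):
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

# Each voxel's vector is the normalized sum of the word vectors of all
# studies showing activation at that voxel.
voxel_vecs = {}
for study in studies:
    study_vec = sum(word_vecs[w] for w in study["words"])
    for voxel in study["voxels"]:
        voxel_vecs[voxel] = voxel_vecs.get(voxel, np.zeros(4)) + study_vec
voxel_vecs = {v: normalize(vec) for v, vec in voxel_vecs.items()}

# Proximity of each voxel to a term of interest = cosine similarity.
term = normalize(word_vecs["decision"])
for voxel, vec in sorted(voxel_vecs.items()):
    print(voxel, round(float(vec @ term), 3))
```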
Procedia PDF Downloads 449
500 Strengthening Strategy across Languages: A Cognitive and Grammatical Universal Phenomenon
Authors: Behnam Jay
Abstract:
In this study, the phenomenon called “Strengthening” in human language refers to the strategic use of multiple linguistic elements to intensify specific grammatical or semantic functions. This study explores cross-linguistic evidence demonstrating how strengthening appears in various grammatical structures. In French and Spanish, double negatives are used not to cancel each other out but to intensify the negation, challenging the conventional understanding that double negatives result in an affirmation. For example, in French, il ne sait pas (He doesn't know.) uses both “ne” and “pas” to strengthen the negation. Similarly, in Spanish, No vio a nadie. (He didn't see anyone.) uses “no” and “nadie” to achieve a stronger negative meaning. In Japanese, double honorifics, often perceived as erroneous, are reinterpreted as intentional efforts to amplify politeness, as seen in forms like ossharareru (to say, honorific). Typically, an honorific morpheme appears only once in a predicate, but native speakers often use double forms to reinforce politeness. In Turkish, the word eğer (indicating a condition) is sometimes used together with the conditional suffix -se(sa) within the same sentence to strengthen the conditional meaning, as in Eğer yağmur yağarsa, o gelmez. (If it rains, he won't come.) Furthermore, the combination of question words with rising intonation in various languages serves to enhance interrogative force. These instances suggest that strengthening is a cross-linguistic strategy that may reflect a broader cognitive mechanism in language processing. This paper investigates these cases in detail, providing insights into why languages may adopt such strategies. No corpus was used for collecting examples from different languages. Instead, the examples were gathered from languages the author encountered during their research, focusing on specific grammatical and morphological phenomena relevant to the concept of strengthening. Due to the complexity of employing a comparative method across multiple languages, this approach was chosen to illustrate common patterns of strengthening based on available data. It is acknowledged that different languages may have different strengthening strategies in various linguistic domains. While the primary focus is on grammar and morphology, it is recognized that the strengthening phenomenon may also appear in phonology. Future research should aim to include a broader range of languages and utilize more comprehensive comparative methods where feasible to enhance methodological rigor and explore this phenomenon more thoroughly.
Keywords: strengthening, cross-linguistic analysis, syntax, semantics, cognitive mechanism
Procedia PDF Downloads 24
499 Predicting Wealth Status of Households Using Ensemble Machine Learning Algorithms
Authors: Habtamu Ayenew Asegie
Abstract:
Wealth, as opposed to income or consumption, implies a more stable and permanent status. Due to natural and human-made difficulties, household economies can be diminished and their well-being can fall into trouble. Hence, governments and humanitarian agencies offer considerable resources for poverty and malnutrition reduction efforts. One key factor in the effectiveness of such efforts is the accuracy with which low-income or poor populations can be identified. As a result, this study aims to predict a household’s wealth status using ensemble machine learning (ML) algorithms. In this study, design science research methodology (DSRM) is employed, and four ML algorithms, Random Forest (RF), Adaptive Boosting (AdaBoost), Light Gradient Boosted Machine (LightGBM), and Extreme Gradient Boosting (XGBoost), have been used to train models. The Ethiopian Demographic and Health Survey (EDHS) dataset was accessed for this purpose from the Central Statistical Agency (CSA)'s database. Various data pre-processing techniques were employed, and the model training was conducted using scikit-learn Python library functions. Model evaluation was executed using various metrics such as accuracy, precision, recall, F1-score, the area under the receiver operating characteristic curve (AUC-ROC), and subjective evaluations by domain experts. An optimal subset of hyper-parameters for the algorithms was selected through the grid search function for the best prediction. The RF model performed better than the rest of the algorithms, achieving an accuracy of 96.06%, and is better suited as a solution model for our purpose. Following RF, the LightGBM, XGBoost, and AdaBoost algorithms achieved accuracies of 91.53%, 88.44%, and 58.55%, respectively. The findings suggest that features like ‘Age of household head’, ‘Total children ever born’ in a family, ‘Main roof material’ of their house, ‘Region’ they lived in, whether a household uses ‘Electricity’ or not, and ‘Type of toilet facility’ of a household are determinant factors and should be a focal point for economic policymakers. The determinant risk factors, extracted rules, and designed artifact achieved 82.28% in the domain experts’ evaluation. Overall, the study shows ML techniques are effective in predicting the wealth status of households.
Keywords: ensemble machine learning, household wealth status, predictive model, wealth status prediction
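A minimal sketch of the training and evaluation loop described above, using scikit-learn on a synthetic stand-in for the EDHS table; the feature set, sample sizes, and hyper-parameter grid are illustrative assumptions, not the study's configuration.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, classification_report, roc_auc_score
from sklearn.model_selection import GridSearchCV, train_test_split

# Synthetic stand-in for the EDHS household table (binary wealth status).
X, y = make_classification(n_samples=2000, n_features=12, n_informative=6,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Grid search over an illustrative hyper-parameter subset, as in the study.
grid = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [100, 300], "max_depth": [None, 10, 20]},
    scoring="accuracy", cv=5,
)
grid.fit(X_tr, y_tr)

best = grid.best_estimator_
pred = best.predict(X_te)
proba = best.predict_proba(X_te)[:, 1]
print("best params:", grid.best_params_)
print("accuracy:", accuracy_score(y_te, pred))
print("AUC-ROC:", roc_auc_score(y_te, proba))
print(classification_report(y_te, pred))  # precision, recall, F1-score
```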
Procedia PDF Downloads 39
498 Impact of Different Rearing Diets on the Performance of Adult Mealworms Tenebrio molitor
Authors: Caroline Provost, Francois Dumont
Abstract:
Production of insects for human and animal consumption is an increasingly important activity in Canada. Protein production using insect rearing is more efficient and less harmful to the environment than traditional livestock, poultry, and fish farms. Insects are rich in essential amino acids, essential fatty acids, and trace elements. Thus, insect-based products could be used as a food supplement for livestock and domestic animals and may even find their way into the diets of high-performing athletes or fine dining. Nevertheless, several parameters remain to be determined to ensure efficient and profitable production that meets the potential of these sectors. This project proposes to improve the production processes, rearing diets, and processing methods for three species with valuable gastronomic and nutritional potential: the common mealworm (Tenebrio molitor), the small mealworm (Alphitobius diaperinus), and the giant mealworm (Zophobas morio). The general objective of the project is to acquire specific knowledge for the mass rearing of insects dedicated to animal and human consumption, in order to respond to current market opportunities and meet a growing demand for these products. Mass rearings of the three mealworm species were established to provide the individuals needed for the experiments. Mealworms eat flour from different cereals (e.g., wheat, barley, buckwheat). These cereals vary in their composition (protein, carbohydrates, fiber, vitamins, antioxidants, etc.), but also in their purchase cost. Seven different diets were compared to optimize the yield of the rearing. Diets were composed of cereal flours (e.g., wheat, barley), either mixed or alone. Female fecundity, larval mortality, and growth curves were observed. Some flour diets had positive effects on female fecundity and larval performance, while each mealworm species was found to have specific diet requirements. Trade-offs between mealworm performance and costs need to be considered. Experiments on the effect of flour composition on several parameters related to performance and to nutritional and gastronomic value led to the identification of a more appropriate diet for each mealworm.
Keywords: mass rearing, mealworm, human consumption, diet
Procedia PDF Downloads 147
497 Oxidovanadium(IV) and Dioxidovanadium(V) Complexes: Efficient Catalysts for Peroxidase Mimetic Activity and Oxidation
Authors: Mannar R. Maurya, Bithika Sarkar, Fernando Avecilla
Abstract:
Peroxidase activity is successfully used in different industrial processes in medicine, the chemical industry, food processing, and agriculture. However, natural peroxidases bear some intrinsic drawbacks associated with denaturation by proteases, special storage requirements, and cost. Nowadays, artificial enzyme mimics are becoming a research interest because of their advantages over conventional enzymes: ease of preparation, low price, and good stability of activity, overcoming the drawbacks of natural enzymes, e.g., serine proteases. At present, a large number of artificial enzymes have been synthesized by assimilating a catalytic center into a variety of Schiff base complexes, ligand anchors, supramolecular complexes, hematin, porphyrins, and nanoparticles to mimic natural enzymes. In recent years, a number of vanadium complexes have been reported, reflecting a continuing increase in interest in bioinorganic chemistry. To the best of our knowledge, however, vanadium complexes as artificial enzyme mimics remain little explored. Recently, our group has reported synthetic vanadium Schiff base complexes capable of mimicking peroxidases. Herein, we have synthesized oxidovanadium(IV) and dioxidovanadium(V) complexes of pyrazolone derivatives (extensively studied on account of their broad range of pharmacological applications). All these complexes are characterized by various spectroscopic techniques such as FT-IR, UV-visible, NMR (1H, 13C and 51V), elemental analysis, thermal studies, and single crystal analysis. The peroxidase mimetic activity has been studied towards the oxidation of pyrogallol to purpurogallin with hydrogen peroxide at pH 7, followed by measurement of kinetic parameters. The Michaelis-Menten behavior shows an excellent catalytic activity over the natural counterparts, e.g., V-HPO and HRP. The obtained kinetic parameters (Vmax, kcat) were also compared with peroxidase and haloperoxidase enzymes, making these complexes promising peroxidase mimics. The catalytic activity has also been studied towards the oxidation of 1-phenylethanol in the presence of H2O2 as an oxidant. Various parameters, such as the amount of catalyst and oxidant, reaction time, reaction temperature, and solvent, have been taken into consideration to obtain the maximum yield of oxidation products of 1-phenylethanol.
Keywords: oxovanadium(IV)/dioxidovanadium(V) complexes, NMR spectroscopy, crystal structure, peroxidase mimetic activity towards oxidation of pyrogallol, oxidation of 1-phenylethanol
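A minimal sketch of how Vmax, Km, and kcat can be extracted from initial-rate data by a Michaelis-Menten fit with SciPy; the concentrations, rates, and catalyst loading below are illustrative, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(s, vmax, km):
    """Initial rate v as a function of substrate concentration [S]."""
    return vmax * s / (km + s)

# Illustrative pyrogallol concentrations (mM) and initial rates (mM/min).
s = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])
v = np.array([0.8, 1.4, 2.2, 3.3, 4.0, 4.4])

(vmax, km), _ = curve_fit(michaelis_menten, s, v, p0=[5.0, 2.0])

catalyst = 1e-3               # catalyst concentration in mM (illustrative)
kcat = vmax / catalyst        # turnover number, min^-1

print(f"Vmax = {vmax:.2f} mM/min, Km = {km:.2f} mM, kcat = {kcat:.0f} min^-1")
```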
Procedia PDF Downloads 341
496 The Impact of the Method of Extraction on 'Chemchali' Olive Oil Composition in Terms of Oxidation Index, and Chemical Quality
Authors: Om Kalthoum Sallem, Saidakilani, Kamiliya Ounaissa, Abdelmajid Abid
Abstract:
Introduction and purposes: Olive oil is the main oil used in the Mediterranean diet. Virgin olive oil is valued for its organoleptic and nutritional characteristics and is resistant to oxidation due to its high content of monounsaturated fatty acids (MUFAs), its low content of polyunsaturated fatty acids (PUFAs), and the presence of natural antioxidants such as phenols, tocopherols, and carotenoids. The fatty acid composition, especially the MUFA content, and the natural antioxidants provide advantages for health. The aim of the present study was to examine the impact of the method of extraction on the chemical profile of the ‘Chemchali’ olive oil variety, which is cultivated in the city of Gafsa, and to compare it with the ‘Chetoui’ and ‘Chemlali’ varieties. Methods: Our study is a qualitative prospective study that deals with the ‘Chemchali’ olive oil variety. Analyses were conducted over three months (from December to February) in different oil mills in the city of Gafsa. We compared ‘Chemchali’ olive oil obtained by the continuous method to that obtained by the super-press method. We then analyzed quality index parameters, including free fatty acid content (FFA), acidity, and UV spectrophotometric characteristics, and other physico-chemical data [oxidative stability, ß-carotene, and chlorophyll pigment composition]. Results: Olive oil from the super-press method, compared with the continuous method, is less acidic (0.6120 vs. 0.9760), less oxidizable (K232: 2.478 vs. 2.592; K270: 0.216 vs. 0.228), richer in oleic acid (61.61% vs. 66.99%), less rich in linoleic acid (13.38% vs. 13.98%), and richer in total chlorophyll pigments (6.22 ppm vs. 3.18 ppm) and ß-carotene (3.128 mg/kg vs. 1.73 mg/kg). ‘Chemchali’ olive oil showed a more balanced total fatty acid content compared with the ‘Chemlali’ and ‘Chetoui’ varieties. Gafsa’s variety ‘Chemchali’ has significantly less saturated and polyunsaturated fatty acids, whereas it has a higher content of the monounsaturated fatty acid C18:1, compared with the two other varieties. Conclusion: The use of the super-press method had beneficial effects on the general chemical characteristics of ‘Chemchali’ olive oil, maintaining the highest quality according to the Ecocert legal standards. In light of the results obtained in this study, a more detailed study is required to establish whether the differences in the chemical properties of the oils are mainly due to agronomic and climate variables or to the processing employed in the oil mills.
Keywords: olive oil, extraction method, fatty acids, chemchali olive oil
Procedia PDF Downloads 383
495 Analysis of Constraints and Opportunities in Dairy Production in Botswana
Authors: Som Pal Baliyan
Abstract:
Dairy enterprise has been a major source of employment and income generation in most economies worldwide. The Botswana government has also identified dairy as one of the agricultural sectors for diversification of the country's mineral-dependent economy. The huge gap between local demand and supply of milk and milk products indicated that not only constraints but also opportunities exist in this sub-sector of agriculture. Therefore, this study was an attempt to identify constraints and opportunities in dairy production in Botswana. Possible ways to mitigate the constraints were also identified. The findings should assist stakeholders, especially policy makers, in the formulation of effective policies for the growth of the dairy sector in the country. This quantitative study adopted a survey research design. A pilot survey followed by a final survey was conducted for data collection. The purpose of the pilot survey was to collect basic information on the nature and extent of the constraints, the opportunities, and ways to mitigate the constraints in dairy production. Based on the information from the pilot survey, a four-point Likert-type questionnaire was constructed, validated, and tested for its reliability. The data for the final survey were collected from twenty-five purposively selected dairy farms. Descriptive statistical tools were employed to analyze the data. Among the twelve constraints identified, high feed costs, feed shortage and availability, lack of technical support, lack of skilled manpower, high prevalence of pests and diseases, and lack of dairy-related technologies were the six major constraints in dairy production. Grain feed production, roughage feed production, manufacturing of dairy feed, establishment of a milk processing industry, and development of transportation systems were the five major opportunities among the eight identified. Increasing local production of animal feed, increasing local roughage feed production, provision of subsidies on animal feed, easy access to sufficient financial support, training of farmers, and effective control of pests and diseases were identified as the six major ways to mitigate the constraints. It was recommended that the identified constraints and opportunities, as well as the ways to mitigate the constraints, be carefully considered by stakeholders, especially policy makers, during the formulation and implementation of policies for the development of the dairy sector in Botswana.
Keywords: dairy enterprise, milk production, opportunities, production constraints
Procedia PDF Downloads 405
494 Incorporation of Noncanonical Amino Acids into Hard-to-Express Antibody Fragments: Expression and Characterization
Authors: Hana Hanaee-Ahvaz, Monika Cserjan-Puschmann, Christopher Tauer, Gerald Striedner
Abstract:
Incorporation of noncanonical amino acids (ncAAs) into proteins has become an interesting topic, as proteins featuring ncAAs offer a wide range of different applications. Nowadays, technologies and systems exist that allow for the site-specific introduction of ncAAs in vivo, but the efficient production of proteins modified this way is still a big challenge. This is especially true for 'hard-to-express' proteins, where low yields are encountered even with the native sequence. In this study, site-specific incorporation of azido-ethoxy-carbonyl-lysine (azk) into an anti-tumor-necrosis-factor-α Fab (FTN2) was investigated. According to well-established parameters, possible positions for ncAA incorporation were determined, and the corresponding FTN2 genes were constructed. Each of the modified FTN2 variants has one amber codon for azk incorporated either in its heavy or its light chain. The expression level of all variants produced was determined by ELISA, and all azk variants could be produced with a satisfactory yield in the range of 50-70% of the original FTN2 variant. In terms of expression yield, neither the azk incorporation position nor the subunit modified (heavy or light chain) had a significant effect. We confirmed correct protein processing and azk incorporation by mass spectrometry analysis, and antigen-antibody interaction was determined by surface plasmon resonance analysis. The next step is to characterize the effect of azk incorporation on protein stability and aggregation tendency via differential scanning calorimetry and light scattering, respectively. In summary, the incorporation of ncAAs into our Fab candidate FTN2 worked better than expected. The quantities produced allowed a detailed characterization of the variants in terms of their properties, and we can now turn our attention to potential applications. By using click chemistry, we can equip the Fabs with additional functionalities and make them suitable for a wide range of applications. We will now use this option in a first approach and develop an assay that will allow us to follow the degradation of the recombinant target protein in vivo. Special focus will be laid on the proteolytic activity in the periplasm and how it is influenced by cultivation/induction conditions.
Keywords: degradation, FTN2, hard-to-express protein, non-canonical amino acids
Procedia PDF Downloads 233
493 Attention and Creative Problem-Solving: Cognitive Differences between Adults with and without Attention Deficit Hyperactivity Disorder
Authors: Lindsey Carruthers, Alexandra Willis, Rory MacLean
Abstract:
Introduction: It has been proposed that distractibility, a key diagnostic criterion of Attention Deficit Hyperactivity Disorder (ADHD), may be associated with higher creativity levels in some individuals. Anecdotal and empirical evidence has shown that ADHD is therefore beneficial to creative problem-solving and the generation of new ideas and products. Previous studies have only used one or two measures of attention, which is insufficient given that it is a complex cognitive process. The current study aimed to determine in which ways performance on creative problem-solving tasks and a range of attention tests may be related, and whether performance differs between adults with and without ADHD. Methods: 150 adults, 47 males and 103 females (mean age = 28.81 years, S.D. = 12.05 years), were tested at Edinburgh Napier University. Of this set, 50 participants had ADHD, and 100 did not, forming the control group. Each participant completed seven attention tasks, assessing focussed, sustained, selective, and divided attention. Creative problem-solving was measured using divergent thinking tasks, which require multiple original solutions for one given problem. Two types of divergent thinking task were used: verbal (requiring written responses) and figural (requiring drawn responses). Each task is scored for idea originality, with higher scores indicating more creative responses. Correlational analyses were used to explore relationships between attention and creative problem-solving, and t-tests were used to study the between-group differences. Results: The control group scored higher on originality for figural divergent thinking (t(148) = 3.187, p < .01), whereas the ADHD group had more original ideas for the verbal divergent thinking task (t(148) = -2.490, p < .05). Within the control group, figural divergent thinking scores were significantly related to both selective (r = -.295 to -.285, p < .01) and divided attention (r = .206 to .290, p < .05). Alternatively, within the ADHD group, both selective (r = -.390 to -.356, p < .05) and divided (r = .328 to .347, p < .05) attention were related to verbal divergent thinking. Conclusions: Selective and divided attention are both related to divergent thinking; however, the performance patterns differ between the two groups, which may point to cognitive variance in the processing of these problems and how they are managed. The creative differences previously found between those with and without ADHD may be dependent on task type, which, to the author's knowledge, has not been distinguished previously. It appears that ADHD does not specifically lead to higher creativity but may provide an explanation for creative differences when compared to those without the disorder.
Keywords: ADHD, attention, creativity, problem-solving
Procedia PDF Downloads 456
492 Impact of Intelligent Transportation System on Planning, Operation and Safety of Urban Corridor
Authors: Sourabh Jain, S. S. Jain
Abstract:
Intelligent transportation systems (ITS) are the application of technologies for developing a user-friendly transportation system to extend the safety and efficiency of urban transportation systems in developing countries. These systems involve vehicles, drivers, passengers, road operators, and managers of transport services, all interacting with each other and the surroundings to boost the security and capacity of road systems. The goal of urban corridor management using ITS in road transport is to achieve improvements in mobility, safety, and the productivity of the transportation system within the available facilities, through the integrated application of advanced monitoring, communications, computer, display, and control process technologies, both in the vehicle and on the road. Intelligent transportation systems are a product of the revolution in information and communications technologies that is the hallmark of the digital age. Basic ITS technology is oriented along three main directions: communications, information, and integration. Information acquisition (collection), processing, integration, and sorting are the basic activities of ITS. In this paper, an attempt has been made to interpret and evaluate the performance of a 27.4 km long study corridor having eight intersections and four flyovers. The corridor consists of six-lane as well as eight-lane divided road sections. Two categories of data were collected: traffic data (traffic volume, spot speed, delay) and road characteristics data (number of lanes, lane width, bus stops, mid-block sections, intersections, flyovers). The instruments used for collecting the data were a video camera, stopwatch, radar gun, and mobile GPS (GPS Tracker Lite). From the analysis, the performance interpretations incorporated were the identification of peak and off-peak hours, congestion and level of service (LOS) at mid-block sections, and delay, followed by plotting of the speed contours. The paper proposes urban corridor management strategies, based on sensors integrated into both vehicles and roads, that have to be efficiently executable, cost-effective, and familiar to road users. They will be useful in reducing congestion, fuel consumption, and pollution so as to provide comfort, safety, and efficiency to users.
Keywords: ITS strategies, congestion, planning, mobility, safety
Procedia PDF Downloads 179
491 Detection and Classification of Strabismus Using Convolutional Neural Network and Spatial Image Processing
Authors: Anoop T. R., Otman Basir, Robert F. Hess, Eileen E. Birch, Brooke A. Koritala, Reed M. Jost, Becky Luu, David Stager, Ben Thompson
Abstract:
Strabismus refers to a misalignment of the eyes. Early detection and treatment of strabismus in childhood can prevent the development of permanent vision loss due to abnormal development of visual brain areas. We developed a two-stage method for strabismus detection and classification based on photographs of the face. The first stage detects the presence or absence of strabismus, and the second stage classifies the type of strabismus. The first stage comprises face detection using a Haar cascade, facial landmark estimation, face alignment, aligned-face landmark detection, segmentation of the eye region, and detection of strabismus using a VGG16 convolutional neural network. Face alignment transforms the face to a canonical pose to ensure consistency in subsequent analysis. Using the facial landmarks, the eye region is segmented from the aligned face and fed into a VGG16 CNN model, which has been trained to classify strabismus. The CNN determines whether strabismus is present and classifies the type of strabismus (exotropia, esotropia, and vertical deviation). If stage 1 detects strabismus, the eye region image is fed into stage 2, which starts with the estimation of pupil center coordinates using a Mask R-CNN deep neural network. Then, the distance between the pupil coordinates and the eye landmarks is calculated, along with the angle that the pupil coordinates make with the horizontal and vertical axes. The distance and angle information is used to characterize the degree and direction of the strabismic eye misalignment. This model was tested on 100 clinically labeled images of children with (n = 50) and without (n = 50) strabismus. The True Positive Rate (TPR) and False Positive Rate (FPR) of the first stage were 94% and 6%, respectively. The classification stage produced a TPR of 94.73%, 94.44%, and 100% for esotropia, exotropia, and vertical deviations, respectively. The method also had an FPR of 5.26%, 5.55%, and 0% for esotropia, exotropia, and vertical deviation, respectively. The addition of one more feature related to the location of corneal light reflections may reduce the FPR, which was primarily due to children with pseudo-strabismus (the appearance of strabismus due to a wide nasal bridge or skin folds on the nasal side of the eyes).
Keywords: strabismus, deep neural networks, face detection, facial landmarks, face alignment, segmentation, VGG16, Mask R-CNN, pupil coordinates, angle deviation, horizontal and vertical deviation
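As a rough sketch of how such a pipeline can be wired together with common tooling (OpenCV's bundled Haar cascade plus a torchvision VGG16), the snippet below compresses the two stages into one classifier over a geometric eye crop; the input path, crop heuristic, four-class head, and untrained weights are illustrative assumptions, not the study's trained model, which also uses facial landmarks, face alignment, and Mask R-CNN pupil localization.

```python
import cv2
import torch
from torchvision import models, transforms

# Haar-cascade face detection; the cascade XML ships with OpenCV.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("child_face.jpg")                  # illustrative input path
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
x, y, w, h = face_cascade.detectMultiScale(gray, 1.1, 5)[0]  # assumes a face is found

# Crude eye-region crop from face geometry; the full pipeline would use
# facial landmarks and face alignment instead.
eyes = img[y + h // 5 : y + h // 2, x : x + w]

# VGG16 with a 4-class head (none / esotropia / exotropia / vertical);
# weights here are untrained, standing in for the study's trained model.
model = models.vgg16(weights=None)
model.classifier[6] = torch.nn.Linear(4096, 4)
model.eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Resize((224, 224)),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
batch = preprocess(cv2.cvtColor(eyes, cv2.COLOR_BGR2RGB)).unsqueeze(0)

with torch.no_grad():
    probs = torch.softmax(model(batch), dim=1)
print("class probabilities:", probs.numpy().round(3))
```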
Procedia PDF Downloads 93
490 Multi-Criteria Decision Making Tool for Assessment of Biorefinery Strategies
Authors: Marzouk Benali, Jawad Jeaidi, Behrang Mansoornejad, Olumoye Ajao, Banafsheh Gilani, Nima Ghavidel Mehr
Abstract:
The Canadian forest industry is seeking to identify and implement transformational strategies for enhanced financial performance through the emerging bioeconomy, or more specifically through the concept of the biorefinery. For example, processing forest residues or the surplus biomass available on mill sites for the production of biofuels, biochemicals, and/or biomaterials is one of the attractive strategies, alongside traditional wood and paper products and cogenerated energy. There are many possible process-product biorefinery pathways, each associated with specific product portfolios with different levels of risk. Thus, it is not obvious which strategy the forest industry should select and implement. Therefore, there is a need for analytical and design tools that enable evaluating biorefinery strategies based on a set of criteria from a perspective of sustainability over the short and long terms, while selecting the existing core products as well as the new product portfolio. In addition, it is critical to assess the manufacturing flexibility needed to internalize the risk from market price volatility of each targeted bio-based product in the product portfolio, prior to investing heavily in any biorefinery strategy. The proposed paper will focus on introducing a systematic methodology for designing integrated biorefineries using process systems engineering tools, as well as a multi-criteria decision making framework to put forward the most effective biorefinery strategies that fulfill the needs of the forest industry. Topics to be covered will include market analysis, techno-economic assessment, cost accounting, energy integration analysis, life cycle assessment, and supply chain analysis. This will be followed by a description of the vision as well as the key features and functionalities of the I-BIOREF software platform, developed by CanmetENERGY of Natural Resources Canada. Two industrial case studies will be presented to support the robustness and flexibility of the I-BIOREF software platform: i) an integrated Canadian Kraft pulp mill with a lignin recovery process (namely, LignoBoost™); ii) a standalone biorefinery based on an ethanol-organosolv process.
Keywords: biorefinery strategies, bioproducts, co-production, multi-criteria decision making, tool
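To illustrate the kind of scoring step a multi-criteria decision framework performs, here is a minimal weighted-sum sketch; the strategies, criteria, weights, and scores are illustrative and do not represent I-BIOREF's actual method or data.

```python
import numpy as np

# Illustrative decision matrix: rows = biorefinery strategies,
# columns = criteria (IRR, CO2 reduction, market risk, flexibility).
strategies = ["lignin recovery", "ethanol-organosolv", "status quo"]
scores = np.array([
    [0.14, 0.30, 0.40, 0.6],
    [0.11, 0.45, 0.55, 0.8],
    [0.09, 0.05, 0.20, 0.2],
])
benefit = np.array([True, True, False, True])   # market risk is a cost criterion
weights = np.array([0.4, 0.2, 0.2, 0.2])        # stakeholder-chosen, sum to 1

# Min-max normalize each criterion; invert cost criteria so that
# "higher is better" holds for every column.
norm = (scores - scores.min(axis=0)) / (scores.max(axis=0) - scores.min(axis=0))
norm[:, ~benefit] = 1.0 - norm[:, ~benefit]

ranking = norm @ weights
for name, s in sorted(zip(strategies, ranking), key=lambda t: -t[1]):
    print(f"{name}: {s:.3f}")
```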
Procedia PDF Downloads 232
489 Microstructure and Mechanical Properties Evaluation of Graphene-Reinforced AlSi10Mg Matrix Composite Produced by Powder Bed Fusion Process
Authors: Jitendar Kumar Tiwari, Ajay Mandal, N. Sathish, A. K. Srivastava
Abstract:
Over the last decade, graphene has attracted great attention for the development of multifunctional metal matrix composites, which are in high demand in industry for building energy-efficient systems. This study covers two advanced aspects of current scientific endeavor: graphene as a reinforcement in metallic materials and additive manufacturing (AM) as a processing technology. Herein, high-quality graphene and AlSi10Mg powder were mechanically mixed by very low energy ball milling at 0.1 wt. % and 0.2 wt. % graphene. The mixed powder was directly subjected to the powder bed fusion process, an AM technique, to produce composite samples along with a bare counterpart. The effects of graphene on porosity, microstructure, and mechanical properties were examined. The volumetric distribution of pores was observed under X-ray computed tomography (CT). Relative density measurements by X-ray CT showed that porosity increases after graphene addition and that pore morphology transforms from spherical pores to enlarged flaky pores due to improper melting of the composite powder. Furthermore, the microstructure indicates grain refinement after graphene addition. The columnar grains were able to cross the melt pool boundaries in the bare sample, unlike in the composite samples; smaller columnar grains formed in the composites due to heterogeneous nucleation by graphene platelets during solidification. The tensile properties were affected by the induced porosity irrespective of graphene reinforcement. The optimum tensile properties were achieved at 0.1 wt. % graphene: yield strength and ultimate tensile strength increased by 22% and 10%, respectively, compared with the bare counterpart, while elongation decreased by 20%. Hardness indentations were taken mostly on solid regions in order to avoid the collapse of pores. The hardness of the composite increased progressively with graphene content, with an increment of around 30% after the addition of 0.2 wt. % graphene. Therefore, powder bed fusion can be adopted as a suitable technique to develop graphene-reinforced AlSi10Mg composites, though further process modification is required to avoid the porosity induced by graphene addition, which can be addressed in future work.
Keywords: graphene, hardness, porosity, powder bed fusion, tensile properties
Procedia PDF Downloads 128
488 Multi-Scale Damage Modelling for Microstructure Dependent Short Fiber Reinforced Composite Structure Design
Authors: Joseph Fitoussi, Mohammadali Shirinbayan, Abbas Tcharkhtchi
Abstract:
Due to material flow during processing, short fiber reinforced composite (SFRC) structures obtained by injection or compression molding generally present strong spatial variation of the microstructure. On the other hand, the quasi-static, dynamic, and fatigue behavior of these materials is highly dependent on microstructure parameters such as the fiber orientation distribution. Indeed, because of the complex damage mechanisms involved, the design of SFRC structures is a key challenge for safety and reliability. In this paper, we propose a micromechanical model allowing prediction of the damage behavior of real structures as a function of the spatial distribution of the microstructure. To this aim, a statistical damage criterion including strain rate and fatigue effects at the local scale is introduced into a Mori–Tanaka model. A critical local damage state is identified, allowing fatigue life prediction. Moreover, the multi-scale model is coupled with an experimentally established intrinsic link between damage under monotonic loading and fatigue life in order to build an abacus giving the Tsai-Wu failure criterion parameters as a function of microstructure and targeted fatigue life. On the other hand, the micromechanical damage model gives access to the evolution of the anisotropic stiffness tensor of SFRCs submitted to complex thermomechanical loading, including quasi-static, dynamic, and cyclic loading with temperature and amplitude variations. The latter is then used to fill out microstructure-dependent material cards in finite element analysis for design optimization under a complex loading history. The proposed methodology is illustrated in the case of a real automotive component made of sheet molding compound (the PSA 3008 tailgate). The obtained results emphasize how the proposed micromechanical methodology opens a new path for the automotive industry to lighten vehicle bodies and thereby save energy and reduce gas emissions.
Keywords: short fiber reinforced composite, structural design, damage, micromechanical modelling, fatigue, strain rate effect
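For reference, the plane-stress Tsai-Wu failure index named in the abstract can be evaluated as below; the strength values in the usage line are purely illustrative, and the common default F12 interaction term is assumed rather than taken from the paper's abacus.

```python
import math

def tsai_wu_index(s1, s2, t12, Xt, Xc, Yt, Yc, S):
    # Plane-stress Tsai-Wu failure index; failure is predicted when the index >= 1.
    # Xt/Xc and Yt/Yc are tensile/compressive strength magnitudes along and
    # transverse to the fibers; S is the in-plane shear strength.
    F1, F2 = 1 / Xt - 1 / Xc, 1 / Yt - 1 / Yc
    F11, F22, F66 = 1 / (Xt * Xc), 1 / (Yt * Yc), 1 / S**2
    F12 = -0.5 * math.sqrt(F11 * F22)  # common default for the interaction term
    return (F1 * s1 + F2 * s2 + F11 * s1**2 + F22 * s2**2
            + F66 * t12**2 + 2 * F12 * s1 * s2)

# Illustrative stresses and strengths (MPa), not values from the paper.
print(tsai_wu_index(s1=120.0, s2=15.0, t12=25.0,
                    Xt=300.0, Xc=250.0, Yt=40.0, Yc=90.0, S=60.0))
```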
Procedia PDF Downloads 107
487 Prevalence of Foodborne Pathogens in Pig and Cattle Carcass Samples Collected from Korean Slaughterhouses
Authors: Kichan Lee, Kwang-Ho Choi, Mi-Hye Hwang, Young Min Son, Bang-Hun Hyun, Byeong Yeal Jung
Abstract:
Recently, food safety authorities worldwide have been strengthening food hygiene in order to curb foodborne illness outbreaks. The hygiene status of Korean slaughterhouses is monitored annually by the Animal and Plant Quarantine Agency and provincial governments through the investigation of foodborne pathogens in slaughtered pig and cattle meats. This study presents the prevalence of foodborne pathogens in Korean slaughterhouses from 2014 to 2016. Sampling, microbiological examinations, and analysis of results were performed in accordance with the 'Processing Standards and Ingredient Specifications for Livestock Products'. In total, swab samples from 337 pig carcasses (100 samples in 2014, 135 in 2015, 102 in 2016) and 319 cattle carcasses (100 samples in 2014, 119 in 2015, 100 in 2016) from twenty slaughterhouses were examined for Listeria monocytogenes, Campylobacter jejuni, Campylobacter coli, Salmonella spp., Staphylococcus aureus, Clostridium perfringens, Yersinia enterocolitica, Escherichia coli O157:H7, and non-O157 enterohemorrhagic E. coli (EHEC, serotypes O26, O45, O103, O104, O111, O121, O128, and O145) as foodborne pathogens. The samples were analyzed using cultural and PCR-based methods. Foodborne pathogens were isolated from 78 (23.1%) of the 337 pig samples. In 2014, S. aureus (n=17) was predominant, followed by Y. enterocolitica (n=7), C. perfringens (n=2), and L. monocytogenes (n=2). In 2015, C. coli (n=14) was the most prevalent, followed by L. monocytogenes (n=4), S. aureus (n=3), and C. perfringens (n=2). In 2016, S. aureus (n=16) was the most prevalent, followed by C. coli (n=13), L. monocytogenes (n=2), and C. perfringens (n=1). In the case of cattle carcasses, foodborne bacteria were detected in 41 (12.9%) of the 319 samples. In 2014, S. aureus (n=16) was the most prevalent, followed by Y. enterocolitica (n=3), C. perfringens (n=3), and L. monocytogenes (n=2). In 2015, L. monocytogenes was isolated from four samples, S. aureus from three, and C. perfringens, Y. enterocolitica, and Salmonella spp. from one each. In 2016, L. monocytogenes (n=6) was the most prevalent, followed by C. perfringens (n=3) and C. jejuni (n=1). Ten carcass samples (4 cattle and 6 pigs) were contaminated with two of the tested bacterial pathogens. Interestingly, foodborne pathogens were detected more often in pig carcasses than in cattle carcasses. Although S. aureus was predominantly detected in this study, other foodborne pathogens were also isolated from slaughtered meats. The results of this study highlight the risk of foodborne pathogen infection for humans from slaughtered meats. Therefore, the authors stress the importance of enhancing the hygiene level of slaughterhouses in accordance with Hazard Analysis and Critical Control Point (HACCP) principles.
Keywords: carcass, cattle, foodborne, Korea, pathogen, pig
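The abstract does not report a significance test, but one way to check the pig-versus-cattle prevalence difference is a chi-square test on the reported counts, as sketched below.

```python
from scipy.stats import chi2_contingency

# Positive vs. negative carcass counts reported in the abstract.
table = [[78, 337 - 78],   # pigs: 78/337 positive (23.1%)
         [41, 319 - 41]]   # cattle: 41/319 positive (12.9%)
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```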
Procedia PDF Downloads 344
486 Impact of Elements of Rock and Water Combination on Landscape Perception: A Visual Landscape Quality Assessment on Kaludiya Pokuna in Sri Lanka
Authors: Clarence Dissanayake, Anishka A. Hettiarachchi
Abstract:
Landscape architecture needs to encompass a placemaking process that carefully composes and manipulates landscape elements to address the perceptual needs of humans, especially aesthetic, psychological, and spiritual needs. The objective of this qualitative investigation is to inquire into the impact of the combination of rock and water elements on landscape perception and related feelings, emotions, and behavior. Past empirical studies have assessed the impact of landscape elements in isolation on user preference, yet the combined effect of elements has received less attention. This research was conducted with reference to the variety of qualities of water and rock through a visual landscape quality assessment focusing on landscape qualities derived from five visual concepts (coherence, historicity, imageability, naturalness, and ephemera). The 'Kaludiya Pokuna' archeological site in Anuradhapura was investigated with a sample of university students (n=19; 14 male, 5 female; age 20-25) using a five-point Likert scale via a perception-based questionnaire and a visitor-employed photographic survey (VEP). Two hypothetical questions were investigated concerning the biophilic (naturalness) and topophilic (historicity) dispositions of humans to prefer a landscape with rock and water. The findings revealed that this combination encourages both biophilic and topophilic responses, but to varying degrees. The identified hierarchy of visual concepts based on visitors' preference signifies coherence (93%), historicity (89%), imageability (79%), naturalness (75%), and ephemera (70%), respectively. It was further revealed that this combination creates scenery that is more coherent, so the information-processing aspect of landscape perception dominates over the biophilic and topophilic aspects. Different characteristics and secondary landscape effects generated by the rock and water combination were found to transform a space into a place, fulfilling the aesthetic and spiritual needs of visitors. These findings inform placemaking for people, resource management, and historical landscape conservation. Balancing gender-based participation, taking more diverse cases, and increasing the sample size with more analytical photographic analysis are recommended to enhance the quality of further research.
Keywords: landscape perception, visitor's preference, rock and water combination, visual concepts
Procedia PDF Downloads 226
485 Detection of Abnormal Process Behavior in Copper Solvent Extraction by Principal Component Analysis
Authors: Kirill Filianin, Satu-Pia Reinikainen, Tuomo Sainio
Abstract:
Frequent measurements of product stream quality create a data overload that becomes more and more difficult to handle. In the current study, plant history data with multiple variables was successfully treated by principal component analysis to detect abnormal process behavior, in particular in copper solvent extraction. The multivariate model is based on the concentration levels of the main process metals recorded by an industrial on-stream X-ray fluorescence analyzer. After mean-centering and normalization of the concentration data set, a two-dimensional multivariate model was constructed using the principal component analysis algorithm. Normal operating conditions were defined through control limits assigned to the squared score values on the x-axis and to the residual values on the y-axis. Eighty percent of the data set was taken as the training set, and the multivariate model was tested with the remaining 20 percent of the data. Model testing showed that the control limits successfully detect abnormal behavior of the copper solvent extraction process as early warnings. Compared to the conventional technique of analyzing one variable at a time, the proposed model allows on-line detection of a process failure using information from all process variables simultaneously. Complex industrial equipment combined with advanced mathematical tools may be used for on-line monitoring of both process stream composition and final product quality. Defining the normal operating conditions of the process supports reliable decision making in the process control room. Thus, industrial X-ray fluorescence analyzers equipped with an integrated data processing toolbox allow more flexibility in copper plant operation. It is recommended to apply the additional multivariate process control and monitoring procedures separately for the major components and for the impurities. Principal component analysis may be utilized not only for controlling the content of major elements in process streams but also for continuous monitoring of the plant feed. The proposed approach has potential in on-line instrumentation, providing a fast, robust, and cheap application with automation capabilities.
Keywords: abnormal process behavior, failure detection, principal component analysis, solvent extraction
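A minimal sketch of the described scheme is given below, assuming synthetic data in place of the plant history: mean-center and scale on an 80% training split, fit a two-component PCA, and flag samples whose squared-score or residual statistic exceeds an empirical control limit. The 99th-percentile limits are an assumption; the authors' exact limit definitions are not specified in the abstract.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))   # stand-in for on-stream XRF metal concentrations

# 80/20 split; mean-centering and scaling are fitted on the training part only.
n_train = int(0.8 * len(X))
mu, sd = X[:n_train].mean(axis=0), X[:n_train].std(axis=0)
Z_train, Z_test = (X[:n_train] - mu) / sd, (X[n_train:] - mu) / sd

pca = PCA(n_components=2).fit(Z_train)

def t2_and_q(Z):
    scores = pca.transform(Z)
    # Squared-score (Hotelling T^2-type) statistic in the model plane.
    t2 = ((scores / np.sqrt(pca.explained_variance_)) ** 2).sum(axis=1)
    # Residual (squared prediction error) statistic off the model plane.
    resid = Z - pca.inverse_transform(scores)
    q = (resid ** 2).sum(axis=1)
    return t2, q

t2_lim, q_lim = (np.percentile(s, 99) for s in t2_and_q(Z_train))  # empirical limits
t2, q = t2_and_q(Z_test)
alarms = (t2 > t2_lim) | (q > q_lim)
print(f"{alarms.sum()} of {len(alarms)} test samples flagged as abnormal")
```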
Procedia PDF Downloads 310
484 Measuring Fluctuating Asymmetry in Human Faces Using High-Density 3D Surface Scans
Authors: O. Ekrami, P. Claes, S. Van Dongen
Abstract:
Fluctuating asymmetry (FA) has been studied for many years as an indicator of developmental stability or 'genetic quality', based on the assumption that perfect symmetry is ideally the expected outcome for a bilateral organism. Further studies have also investigated the possible link between FA and attractiveness or levels of masculinity or femininity. These hypotheses have mostly been examined using 2D images, with the structure of interest usually represented by a limited number of landmarks. Such methods have the downside of simplifying and reducing the dimensionality of the structure, which in turn increases the error of the analysis. In an attempt to reach more conclusive and accurate results, in this study we used high-resolution 3D scans of human faces and developed an algorithm to measure and localize FA, taking a spatially dense approach. A symmetric, spatially dense anthropometric mask with paired vertices is non-rigidly mapped onto target faces using an Iterative Closest Point (ICP) registration algorithm. A set of 19 manually indicated landmarks was used to examine the precision of the mapping step. The protocol's accuracy in measuring and localizing FA is assessed using simulated faces with known amounts of asymmetry added to them. The validation results show that the algorithm is fully capable of locating and measuring FA in 3D simulated faces. With such an algorithm, the additional information captured on asymmetry can be used to improve studies of FA as an indicator of fitness or attractiveness. The algorithm can be of particular benefit in studies with high numbers of subjects due to its automated and time-efficient nature. Additionally, the spatially dense approach provides information about the locality of FA, which is impossible to obtain using conventional methods. It also enables us to analyze the asymmetry of morphological structures in a multivariate manner; this can be achieved by using methods such as Principal Component Analysis (PCA) or factor analysis, which can be a step towards understanding the underlying processes of asymmetry. The method can also be used in combination with genome-wide association studies to help unravel the genetic bases of FA. To conclude, we introduce an algorithm to study and analyze asymmetry in human faces, with the possibility of extending the application to other morphological structures, in an automated, accurate, and multivariate framework.
Keywords: developmental stability, fluctuating asymmetry, morphometrics, 3D image processing
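A simplified sketch of the asymmetry measurement is given below: the mapped mask is mirrored, rigidly re-aligned, and compared vertex by vertex. A rigid Procrustes (Kabsch) step stands in for the paper's non-rigid ICP mapping, and the paired-vertex indexing is assumed to come from the symmetric anthropometric mask.

```python
import numpy as np

def asymmetry_map(verts, pair_idx):
    # verts: (n, 3) vertex coordinates of the mapped anthropometric mask;
    # pair_idx: index of each vertex's bilateral counterpart (paired vertices).
    mirrored = verts[pair_idx] * np.array([-1.0, 1.0, 1.0])  # reflect across x = 0

    # Rigid Procrustes (Kabsch) alignment of the mirrored copy onto the original;
    # this stands in for the non-rigid ICP mapping described in the abstract.
    A = mirrored - mirrored.mean(axis=0)
    B = verts - verts.mean(axis=0)
    U, _, Vt = np.linalg.svd(A.T @ B)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # keep a proper rotation
    aligned = A @ (U @ D @ Vt) + verts.mean(axis=0)

    # Per-vertex distance between the face and its aligned mirror image:
    # the local fluctuating-asymmetry map.
    return np.linalg.norm(verts - aligned, axis=1)

# Tiny demo with a 4-vertex mask whose bilateral pairs are (0<->1, 2<->3).
verts = np.array([[-30.0, 0, 50], [31.0, 1, 50], [-10.0, -20, 80], [10.0, -20, 80]])
print(asymmetry_map(verts, pair_idx=np.array([1, 0, 3, 2])))
```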
Procedia PDF Downloads 140
483 Modelling Distress Sale in Agriculture: Evidence from Maharashtra, India
Authors: Disha Bhanot, Vinish Kathuria
Abstract:
This study focuses on the issue of distress sale in the horticulture sector in India, which faces unique challenges given the perishable nature of horticulture crops, seasonal production, and the paucity of post-harvest produce management links. Distress sale, from a farmer's perspective, may be defined as the urgent sale of normal or distressed goods at deeply discounted prices (well below the cost of production), usually characterized by unfavorable conditions for the seller (farmer). Small and marginal farmers, often engaged in subsistence farming, stand to lose substantially if they receive prices lower than expected (typically framed in relation to the cost of production). Distress sale maximizes the price uncertainty of produce, leading to substantial income loss; and with rising input costs of farming, the high variability in harvest prices severely affects farmers' profit margins, thereby affecting their survival. The objective of this study is to model the occurrence of distress sale by tomato cultivators in the Indian state of Maharashtra, against the background of differential access to a set of factors such as capital, irrigation facilities, warehousing, storage and processing facilities, and institutional arrangements for procurement. Data are being collected through a primary survey of over 200 farmers in key tomato-growing areas of Maharashtra, seeking information on the above factors in addition to the cost of cultivation, selling price, time gap between harvesting and selling, the role of middlemen in selling, and other socio-economic variables. Farmers selling their produce far below the cost of production would indicate an occurrence of distress sale. The occurrence of distress sale would then be modelled as a function of farm, household, and institutional characteristics. A Heckman two-stage model would be applied to estimate the probability of a farmer falling into distress sale as well as to ascertain how the extent of distress sale varies with the presence or absence of various factors. The findings of the study would recommend suitable interventions and promote strategies that help farmers better manage price uncertainties, avoid distress sale, and increase profit margins, with direct implications for poverty.
Keywords: distress sale, horticulture, income loss, India, price uncertainty
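A minimal sketch of the two-stage Heckman procedure on simulated data is shown below: a probit selection equation for the occurrence of distress sale, followed by an outcome regression augmented with the inverse Mills ratio. The covariates and coefficients are invented for illustration, not taken from the survey.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(1)
n = 200

# Invented covariates (e.g., access to storage/credit, farm size) plus a constant.
X = sm.add_constant(rng.normal(size=(n, 2)))
u, e = rng.normal(size=n), rng.normal(size=n)
distress = (X @ [0.2, -0.8, -0.5] + u > 0).astype(int)  # selection: distress sale occurs
price_gap = X @ [1.0, 0.5, 0.3] + 0.7 * u + e           # outcome, observed if distress

# Stage 1: probit model for the probability of distress sale.
probit = sm.Probit(distress, X).fit(disp=0)
xb = X @ probit.params
imr = norm.pdf(xb) / norm.cdf(xb)                       # inverse Mills ratio

# Stage 2: OLS on the selected sample with the IMR as a selection-correction term.
sel = distress == 1
ols = sm.OLS(price_gap[sel], np.column_stack([X[sel], imr[sel]])).fit()
print(ols.params)                                       # last coefficient = correction term
```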
Procedia PDF Downloads 243
482 Recommendations for Teaching Word Formation for Students of Linguistics Using Computer Terminology as an Example
Authors: Svetlana Kostrubina, Anastasia Prokopeva
Abstract:
This research presents a comprehensive study of the word formation processes in computer terminology in English and Russian and provides learners with a system of exercises for training these skills. Its originality lies in its comparative approach, which reveals both general patterns and specific features of English and Russian computer term formation. The key contribution is the development of a system of exercises for training computer terminology based on Bloom's taxonomy. The data contain 486 units (228 English terms from the Glossary of Computer Terms and 258 Russian terms from the Terminological Dictionary-Reference Book). The objective is to identify the main affixation models in English and Russian computer term formation and to develop exercises. To achieve this goal, the authors employed Bloom's taxonomy as a methodological framework to create a systematic exercise program aimed at enhancing students' cognitive skills in analyzing, applying, and evaluating computer terms. The exercises are appropriate for various levels of learning, from basic recall of definitions to higher-order thinking skills, such as synthesizing new terms and critically assessing their usage in different contexts. The methodology also includes: a method of scientific and theoretical analysis for systematization of linguistic concepts and clarification of the conceptual and terminological apparatus; a method of nominative and derivative analysis for identifying word-formation types; a method of word-formation analysis for organizing linguistic units; a classification method for determining the structural types of abbreviations applicable to the field of computer communication; a quantitative analysis technique for determining the productivity of methods for forming abbreviations of computer vocabulary based on the English and Russian computer terms; a technique of tabular data processing for a visual presentation of the results obtained; and a technique of interlingual comparison for identifying common and distinct features of abbreviations of computer terms in Russian and English. The research shows that affixation retains its productivity in English and Russian computer term formation. Bloom's taxonomy allows us to plan a training program and predict the effectiveness of the compiled program based on an assessment of the teaching methods used.
Keywords: word formation, affixation, computer terms, Bloom's taxonomy
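As a toy illustration of the quantitative step (measuring the productivity of affixation models), the sketch below tallies suffix models over a small invented term list; the study's actual 486-term data set is not reproduced here.

```python
from collections import Counter

# Invented toy term list; the study's data are 228 English and 258 Russian terms.
terms = ["compiler", "encryption", "loader", "virtualization",
         "debugger", "compression", "router", "serialization"]

def affix_model(term):
    for suffix in ("-ization", "-tion", "-er"):   # check most specific suffix first
        if term.endswith(suffix.lstrip("-")):
            return suffix
    return "other"

counts = Counter(affix_model(t) for t in terms)
print(counts.most_common())   # crude productivity ranking of suffixation models
```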
Procedia PDF Downloads 14