Search results for: fully spatial signal processing
512 Development and Compositional Analysis of Functional Bread and Biscuit from Soybean, Peas and Rice Flour
Authors: Jean Paul Hategekimana, Bampire Claudine, Niyonsenga Nadia, Irakoze Josiane
Abstract:
Peas, soybeans and rice are crops grown in Rwanda and available in rural and urban local markets; they contribute to reducing health problems, especially malnutrition and food insecurity. Several studies have examined how cereal flours can be blended with legume flours to develop baked products rich in the protein, fiber and minerals found in legumes. However, this had not yet been well studied in Rwanda. The aim of the present study was to develop bread and biscuit products from peas, soybeans and rice as functional ingredients combined with wheat flour, and then to analyze the nutritional content and consumer acceptability of the newly developed products. Malnutrition can be reduced by producing bread and biscuits that are rich in protein and accessible to every individual. Bread and biscuits were processed by mixing pea, soybean and rice flour with wheat flour and other ingredients, forming a dough and then baking it. For bread, two kinds of products were processed; for each product, one control and three experimental samples were prepared at three different ratios of peas and rice. These ratios were 95:5, 90:10 and 80:20 for bread from peas, and 85:5:10, 80:10:10 and 70:10:20 for bread from peas and rice. For biscuits, two kinds of products were also processed; for each product, one control sample and three experimental samples were prepared at three different ratios. These ratios were 90:5:5, 80:10:10 and 70:10:20 for biscuits from peas and rice, and 90:5:5, 80:10:10 and 70:10:20 for biscuits from soybeans and rice. All samples, including the controls, were analyzed for consumer acceptability (sensory attributes) and nutritional composition.
In the sensory analysis, bread from pea and rice flour with wheat flour at a ratio of 85:5:10, bread with peas only as the functional ingredient with wheat flour at a ratio of 95:5, biscuits made from soybeans and rice at a ratio of 90:5:5, and biscuits made from peas and rice at a ratio of 90:5:5 were the most acceptable compared with the control sample and the other ratios. The moisture, protein, fat, fiber and mineral (sodium and iron) contents were analyzed; bread from peas at all ratios was found to be richer in protein and fiber than the control sample, and biscuits from soybeans and rice at all ratios were likewise richer in protein and fiber than the control sample.
Keywords: bakery products, peas and rice flour, wheat flour, sensory evaluation, proximate composition
Procedia PDF Downloads 645
511 Insight2OSC: Using Electroencephalography (EEG) Rhythms from the Emotiv Insight for Musical Composition via Open Sound Control (OSC)
Authors: Constanza Levicán, Andrés Aparicio, Rodrigo F. Cádiz
Abstract:
The artistic usage of brain-computer interfaces (BCIs), initially intended for medical purposes, has increased in the past few years as they have become more affordable and available to the general population. One interesting question that arises from this practice is whether it is possible to compose or perform music using only the brain as a musical instrument. In order to approach this question, we propose a BCI for musical composition based on the representation of some mental states as the musician thinks about sounds. We developed software, called Insight2OSC, that allows the usage of the Emotiv Insight device as a musical instrument by sending the EEG data to audio processing software such as MaxMSP through the OSC protocol. We provide two compositional applications bundled with the software, which we call Mapping your Mental State and Thinking On. The signals produced by the brain have different frequencies (or rhythms) depending on the level of activity, and they are classified as one of the following waves: delta (0.5-4 Hz), theta (4-8 Hz), alpha (8-13 Hz), beta (13-30 Hz), gamma (30-50 Hz). These rhythms have been found to be related to some recognizable mental states. For example, the delta rhythm is predominant during deep sleep, while beta and gamma rhythms have higher amplitudes when the person is awake and very concentrated. Our first application (Mapping your Mental State) produces different sounds representing the mental state of the person: focused, active, relaxed or in a state similar to deep sleep, by selecting the dominant rhythms provided by the EEG device. The second application relies on the physiology of the brain, which is divided into several lobes: frontal, temporal, parietal and occipital.
The frontal lobe is related to abstract thinking and high-level functions, the parietal lobe conveys stimuli from the body senses, the occipital lobe contains the primary visual cortex and processes visual stimuli, and the temporal lobe processes auditory information and is important for memory tasks. In consequence, our second application (Thinking On) processes the audio output depending on the user's brain activity, as it activates a specific area of the brain that can be measured using the Insight device.
Keywords: BCI, music composition, Emotiv Insight, OSC
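The band boundaries quoted in the abstract map directly onto a simple frequency classifier. A minimal sketch follows; the function name and structure are illustrative, not part of Insight2OSC:

```python
def eeg_band(freq_hz: float) -> str:
    """Classify an EEG rhythm by frequency, using the band edges quoted above."""
    bands = [
        ("delta", 0.5, 4.0),
        ("theta", 4.0, 8.0),
        ("alpha", 8.0, 13.0),
        ("beta", 13.0, 30.0),
        ("gamma", 30.0, 50.0),
    ]
    for name, lo, hi in bands:
        if lo <= freq_hz < hi:
            return name
    return "out of range"

print(eeg_band(10.0))   # alpha
print(eeg_band(35.0))   # gamma
```

In a pipeline like the one described, the dominant-band label for each electrode would then be mapped to an OSC message rather than printed.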
Procedia PDF Downloads 322
510 Recovery of Physical Performance in Postpartum Women: An Effective Physical Education Program
Authors: Julia A. Ermakova
Abstract:
This study aimed to investigate the efficacy of a physical rehabilitation program for postpartum women. The program was developed with the purpose of restoring physical performance in women during the postpartum period. The research employed a variety of methods, including an analysis of scientific literature, pedagogical testing and experimentation, mathematical processing of study results, and physical performance assessment using a range of tests. The program recommends refraining from abdominal exercises during the first 6-8 months following a cesarean section and avoiding exercises with weights. Instead, a feasible training regimen that gradually increases in intensity several times a week is recommended, along with moderate cardio exercises such as walking, bodyweight training, and a separate workout component that targets posture improvement. Stretching after strength training is also encouraged. The necessary equipment includes comfortable sports attire with a chest support top, mat, push-ups, resistance band, timer, and clock. The motivational aspect of the program is paramount, and the mentee's positive experience with the workout regimen includes feelings of lightness in the body, increased energy, and positive emotions. The gradual reduction of body size and weight loss due to an improved metabolism also serves as positive reinforcement. The mentee's progress can be measured through various means, including an external assessment of her form, body measurements, weight, BMI, and the presence or absence of slouching in everyday life. The findings of this study reveal that the program is effective in restoring physical performance in postpartum women. The mentee achieved weight loss and almost regained her pre-pregnancy shape while her self-esteem improved. Her waist, shoulder, and hip measurements decreased, and she displayed less slouching in her daily life. 
In conclusion, the developed physical rehabilitation program for postpartum women is an effective means of restoring physical performance. It is crucial to follow the recommended training regimen and equipment to avoid limitations and ensure safety during the postpartum period. The motivational component of the program is also fundamental in encouraging positive reinforcement and improving self-esteem.
Keywords: physical rehabilitation, postpartum, methodology, postpartum recovery, rehabilitation
Procedia PDF Downloads 75
509 Adversarial Attacks and Defenses on Deep Neural Networks
Authors: Jonathan Sohn
Abstract:
Deep neural networks (DNNs) have shown state-of-the-art performance for many applications, including computer vision, natural language processing, and speech recognition. Recently, adversarial attacks have been studied in the context of deep neural networks; these attacks aim to alter the results of deep neural networks by modifying the inputs slightly. For example, an adversarial attack on a DNN used for object detection can cause the DNN to miss certain objects. As a result, the reliability of DNNs is undermined by their lack of robustness against adversarial attacks, raising concerns about their use in safety-critical applications such as autonomous driving. In this paper, we focus on studying adversarial attacks and defenses on DNNs for image classification. Two types of adversarial attacks are studied: the fast gradient sign method (FGSM) attack and the projected gradient descent (PGD) attack. A DNN forms decision boundaries that separate the input images into different categories. An adversarial attack slightly alters the image to move it over the decision boundary, causing the DNN to misclassify the image. The FGSM attack obtains the gradient with respect to the image and updates the image once, based on the sign of the gradient, to cross the decision boundary. The PGD attack, instead of taking one big step, repeatedly modifies the input image with multiple small steps. There is also another type of attack called the targeted attack, which is designed to make the machine classify an image as a class chosen by the attacker. We can defend against adversarial attacks by incorporating adversarial examples in training. Specifically, instead of training the neural network with clean examples only, we can explicitly let the neural network learn from adversarial examples. In our experiments, the digit recognition accuracy on the MNIST dataset drops from 97.81% to 39.50% and 34.01% when the DNN is attacked by FGSM and PGD attacks, respectively.
If we utilize FGSM training as a defense method, the classification accuracy greatly improves from 39.50% to 92.31% for FGSM attacks and from 34.01% to 75.63% for PGD attacks. To further improve the classification accuracy under adversarial attacks, we can also use the stronger PGD training method. PGD training improves the accuracy by 2.7% under FGSM attacks and 18.4% under PGD attacks over FGSM training. It is worth mentioning that neither FGSM nor PGD training affects the accuracy on clean images. In summary, we find that PGD attacks can greatly degrade the performance of DNNs, and PGD training is a very effective way to defend against such attacks. PGD attacks and defenses are overall significantly more effective than FGSM methods.
Keywords: deep neural network, adversarial attack, adversarial defense, adversarial machine learning
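The one-step FGSM update and the multi-step projected PGD update described in this abstract can be sketched on a toy model. The following is a minimal illustration on a linear (logistic) classifier with synthetic data, not the paper's DNN or its MNIST setup; all names and values here are placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=16)          # fixed "trained" weights of a toy logistic model
b = 0.0
x = rng.normal(size=16)          # a clean input "image"
y = 1.0                          # true label in {0, 1}

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_grad_x(x_in, y_true):
    # Gradient of binary cross-entropy w.r.t. the input: (p - y) * w
    p = sigmoid(w @ x_in + b)
    return (p - y_true) * w

eps = 0.25

# FGSM: one signed-gradient step of size eps
x_fgsm = x + eps * np.sign(loss_grad_x(x, y))

# PGD: several smaller signed steps, each projected back into the eps-ball around x
x_pgd = x.copy()
for _ in range(10):
    x_pgd = x_pgd + (eps / 4) * np.sign(loss_grad_x(x_pgd, y))
    x_pgd = np.clip(x_pgd, x - eps, x + eps)

# Both perturbed inputs lower the model's confidence in the true class
print(sigmoid(w @ x + b), sigmoid(w @ x_fgsm + b), sigmoid(w @ x_pgd + b))
```

Adversarial training, as used in the paper's defense, would simply feed `x_fgsm` or `x_pgd` (with label `y`) back into the training loop instead of the clean `x`.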
Procedia PDF Downloads 195
508 Particle Size Characteristics of Aerosol Jets Produced by a Low Powered E-Cigarette
Authors: Mohammad Shajid Rahman, Tarik Kaya, Edgar Matida
Abstract:
Electronic cigarettes, also known as e-cigarettes, may have become a tool to improve smoking cessation due to their ability to provide nicotine at a selected rate. Unlike traditional cigarettes, which produce toxic elements from tobacco combustion, e-cigarettes generate aerosols by heating a liquid solution (commonly a mixture of propylene glycol, vegetable glycerin, nicotine and some flavoring agents). However, caution still needs to be taken when using e-cigarettes due to the presence of addictive nicotine and some harmful substances produced by the heating process. The particle size distribution (PSD) and associated velocities generated by e-cigarettes have a significant influence on aerosol deposition in different regions of the human respiratory tract. On another note, low actuation power is beneficial in aerosol-generating devices, since it results in reduced emission of toxic chemicals. In the case of e-cigarettes, heating powers below 10 W can be considered low compared with the wide range of powers (0.6 to 70.0 W) studied in the literature. Given its importance for inhalation risk reduction, a deeper understanding of the particle size characteristics of e-cigarettes demands thorough investigation. However, comprehensive studies of the PSD and velocities of e-cigarettes under a standard testing condition at relatively low heating powers are still lacking. The present study aims to measure the particle number count and size distribution of undiluted aerosols of a recent fourth-generation e-cigarette at low powers, up to 6.5 W, using a real-time particle counter (time-of-flight method). Also, the temporal and spatial evolution of the particle size and velocity distribution of the aerosol jets is examined using the phase Doppler anemometry (PDA) technique. To the authors' best knowledge, the application of PDA to e-cigarette aerosol measurement is rarely reported.
In the present study, preliminary results on the particle number count of undiluted aerosols measured by the time-of-flight method showed that an increase of heating power from 3.5 W to 6.5 W resulted in an enhanced asymmetry in the PSD, deviating from a log-normal distribution. This can be considered an artifact of the rapid vaporization, condensation and coagulation processes acting on the aerosols at higher heating power. A novel mathematical expression, combining exponential, Gaussian and polynomial (EGP) distributions, was proposed to describe the asymmetric PSD successfully. The count median aerodynamic diameter and geometric standard deviation lay within ranges of about 0.67 μm to 0.73 μm and 1.32 to 1.43, respectively, while the power varied from 3.5 W to 6.5 W. Laser Doppler velocimetry (LDV) and PDA measurements suggested a typical centerline streamwise mean velocity decay of the aerosol jet along with a reduction in particle sizes. In the final submission, a thorough literature review, a detailed description of the experimental procedure and a discussion of the results will be provided. The particle size and turbulence characteristics of the aerosol jets will be further examined by analyzing the arithmetic mean diameter, volumetric mean diameter, volume-based mean diameter, streamwise mean velocity and turbulence intensity. The present study has potential implications for PSD simulation and the validation of aerosol dosimetry models, leading to improvements in related aerosol-generating devices.
Keywords: e-cigarette aerosol, laser Doppler velocimetry, particle size distribution, particle velocity, phase Doppler anemometry
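For a log-normal PSD, the count median diameter (CMD) and geometric standard deviation (GSD) quoted above are simply the geometric mean and geometric standard deviation of the particle diameters. A minimal sketch with synthetic (not measured) diameters, using values inside the reported ranges:

```python
import numpy as np

# Synthetic illustration only: draw diameters from a log-normal distribution
# with CMD and GSD chosen within the ranges reported in the abstract.
rng = np.random.default_rng(42)
cmd_true, gsd_true = 0.70, 1.38                    # micrometres, dimensionless
d = rng.lognormal(mean=np.log(cmd_true), sigma=np.log(gsd_true), size=100_000)

cmd_est = np.exp(np.mean(np.log(d)))   # geometric mean = CMD for a log-normal PSD
gsd_est = np.exp(np.std(np.log(d)))    # geometric standard deviation

print(f"CMD ~ {cmd_est:.2f} um, GSD ~ {gsd_est:.2f}")
```

The asymmetric (non-log-normal) distributions observed at higher power would need the EGP-type fit proposed in the study, whose exact form is not given in this abstract.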
Procedia PDF Downloads 49
507 Industrial and Technological Applications of Brewer’s Spent Malt
Authors: Francielo Vendruscolo
Abstract:
During the industrial processing of raw materials of animal and vegetable origin, large amounts of solid, liquid and gaseous wastes are generated. Solid residues are usually materials rich in carbohydrates, protein, fiber and minerals. Brewer’s spent grain (BSG) is the main waste generated in the brewing industry, representing 85% of the waste generated in this industry. It is estimated that world BSG generation is approximately 38.6 × 10⁶ t per year, representing 20-30% (w/w) of the initial mass of added malt and resulting in a by-product of low commercial value; although it has little economic value, it must be removed from the brewery, as its spontaneous fermentation can attract insects and rodents. For every 100 g on a dry basis, BSG has approximately 68 g of total fiber, divided into 3.5 g of soluble fiber and 64.3 g of insoluble fiber (cellulose, hemicellulose and lignin). In addition to dietary fiber, depending on the efficiency of the grinding and mashing processes, BSG may also contain starch, reducing sugars, lipids, phenolics and antioxidants, with the caveat that its composition will depend on the barley variety, cultivation conditions, malting and the technology involved in the production of the beer. BSG demands space for storage, but studies have proposed alternatives such as drying, extrusion, pressing with superheated steam, and grinding to facilitate storage. Other important characteristics that enhance its applicability in bioremediation, effluent treatment and biotechnology are its BET surface area (S_BET) of 1.748 m² g⁻¹, total pore volume of 0.0053 cm³ g⁻¹ and mean pore diameter of 121.784 Å; it is thus characterized as a macroporous material with limited adsorption capacity but a great ability to trap suspended solids for separation from liquid solutions. It has low economic value; however, it has enormous potential for technological applications that can improve or add value to this agro-industrial waste.
Due to its composition, this material has been used in several industrial applications, such as the production of food ingredients, fiber enrichment through its addition to foods such as breads and cookies, bioremediation processes, substrates for microorganisms and the production of biomolecules, bioenergy generation, and civil construction, among others. Therefore, the use of this waste or by-product becomes essential, aiming to reduce the amount of organic waste in different industrial processes, especially in breweries.
Keywords: brewer’s spent malt, agro-industrial residue, lignocellulosic material, waste generation
Procedia PDF Downloads 208
506 Advanced Compound Coating for Delaying Corrosion of Fast-Dissolving Alloy in High Temperature and Corrosive Environment
Authors: Lei Zhao, Yi Song, Tim Dunne, Jiaxiang (Jason) Ren, Wenhan Yue, Lei Yang, Li Wen, Yu Liu
Abstract:
Fast-dissolving magnesium (DM) alloy technology has contributed significantly to the “Shale Revolution” in the oil and gas industry. This application requires DM downhole tools to dissolve initially at a slow rate and then rapidly accelerate to a high rate after a certain period of operation (typically 8 h to 2 days), a contradictory requirement that can hardly be addressed by traditional Mg alloying or processing alone. Premature disintegration of downhole DM tools has been broadly reported from field trials. To address this issue, “temporary” thin polymer coatings of various formulations are currently applied to the DM surface to delay its initial dissolution. Due to conveying parts, harsh downhole conditions, and the high dissolving rate of the base material, the current delay coatings relying on pure polymers are found to perform well only at low temperatures (typically < 100 ℃) and on parts without sharp edges or corners, as severe geometries prevent high-quality thin-film coatings from forming effectively. In this study, a coating technology combining plasma electrolytic oxide (PEO) coatings with advanced thin-film deposition has been developed, which can delay the dissolution of complex DM parts (with sharp corners) in corrosive fluid at 150 ℃ for over 2 days. Synergistic effects between the porous hard PEO coating and the chemically inert elastic-polymer sealing lead to the improved delay of dissolution, and strong chemical/physical bonding between these two layers has been found to play an essential role. The microstructure of this advanced coating and the compatibility between PEO and various polymer selections have been thoroughly investigated, and a model is also proposed to explain its delaying performance.
This study could not only benefit the oil and gas industry in unplugging High Temperature High Pressure (HTHP) unconventional resources inaccessible before, but also potentially provides a technical route for other industries (e.g., bio-medical, automobile, aerospace) where primer anti-corrosive protection on light Mg alloys is in high demand.
Keywords: dissolvable magnesium, coating, plasma electrolytic oxide, sealer
Procedia PDF Downloads 111
505 Generative Pre-Trained Transformers (GPT-3) and Their Impact on Higher Education
Authors: Sheelagh Heugh, Michael Upton, Kriya Kalidas, Stephen Breen
Abstract:
This article aims to create awareness of the opportunities and issues the artificial intelligence (AI) tool GPT-3 (Generative Pre-trained Transformer 3) brings to higher education. Technological disruptors have featured in higher education (HE) since Konrad Zuse developed the first functional programmable automatic digital computer. The flurry of technological advances, such as personal computers, smartphones, the world wide web, search engines, and artificial intelligence (AI), has regularly caused disruption and discourse across the educational landscape around harnessing the change for good. Accepting that AI's influence is inevitable, we took a mixed-methods approach through participatory action research and evaluation. Joining HE communities, reviewing the literature, and conducting our own research around Chat GPT-3, we reviewed our institutional approach to changing our current practices and developing policy linked to assessments and the use of Chat GPT-3. We review the impact on HE of GPT-3, a high-powered natural language processing (NLP) system first seen in 2020. Historically, HE has flexed and adapted with each technological advancement, and the latest debates among educationalists focus on the issues around this version of AI, which creates natural human-language text from prompts and other inputs and can generate code and images. This paper explores how Chat GPT-3 affects the current educational landscape: we debate current views around plagiarism, research misconduct, and the credibility of assessment, and determine the tool's value in developing skills for the workplace and enhancing critical analysis skills. These questions led us to review our institutional policy and explore the effects on our current assessments and the development of new assessments. Conclusions: After exploring the pros and cons of Chat GPT-3, it is evident that this form of AI cannot be un-invented. Technology needs to be harnessed for positive outcomes in higher education.
We have observed the materials developed through AI and their potential effects on our development of future assessments and teaching methods. Materials developed through Chat GPT-3 can still aid student learning, but they have led us to redevelop our institutional policy around plagiarism and academic integrity.
Keywords: artificial intelligence, Chat GPT-3, intellectual property, plagiarism, research misconduct
Procedia PDF Downloads 89
504 The Role of Artificial Intelligence in Creating Personalized Health Content for Elderly People: A Systematic Review Study
Authors: Mahnaz Khalafehnilsaz, Rozina Rahnama
Abstract:
Introduction: The elderly population is growing rapidly, and with this growth comes an increased demand for healthcare services. Artificial intelligence (AI) has the potential to revolutionize the delivery of healthcare services to the elderly population. In this study, the various ways in which AI is used to create health content for elderly people, and its transformative impact on the healthcare industry, will be explored. Method: A systematic review of the literature was conducted to identify studies that have investigated the role of AI in creating health content specifically for elderly people. Several databases, including PubMed, Scopus, and Web of Science, were searched for relevant articles published between 2000 and 2022. The search strategy employed a combination of keywords related to AI, personalized health content, and the elderly. Studies that utilized AI to create health content for elderly individuals were included, while those that did not meet the inclusion criteria were excluded. A total of 20 articles that met the inclusion criteria were identified. Findings: The findings of this review highlight the diverse applications of AI in creating health content for elderly people. One significant application is the use of natural language processing (NLP), which involves the creation of chatbots and virtual assistants capable of providing personalized health information and advice to elderly patients. AI is also utilized in the field of medical imaging, where algorithms analyze medical images such as X-rays, CT scans, and MRIs to detect diseases and abnormalities. Additionally, AI enables the development of personalized health content for elderly patients by analyzing large amounts of patient data to identify patterns and trends that can inform healthcare providers in developing tailored treatment plans.
Conclusion: AI is transforming the healthcare industry by providing a wide range of applications that can improve patient outcomes and reduce healthcare costs. From creating chatbots and virtual assistants to analyzing medical images and developing personalized treatment plans, AI is revolutionizing the way healthcare is delivered to elderly patients. Continued investment in this field is essential to ensure that elderly patients receive the best possible care.
Keywords: artificial intelligence, health content, older adult, healthcare
Procedia PDF Downloads 69
503 Inquiry on Regenerative Tourism in an Avian Destination: A Case Study of Kaliveli in Tamil Nadu, India
Authors: Anu Chandran, Reena Esther Rani
Abstract:
Background of the Study: Dotted with multiple Unique Destination Propositions (UDPs), Tamil Nadu is an established tourism brand as regards leisure, MICE, culture, and ecological flavors. Still, the enchanting destination possesses distinctive attributes and resources yet to be tapped for better competitive advantage. As a destination that allures an incredible variety of migratory birds, Tamil Nadu is deemed an ornithologist's paradise. This study primarily explores the prospects of developing Kaliveli, recognized as a bird sanctuary in the Tindivanam forest division of the Villupuram district of the State. Kaliveli is an ideal nesting site for migratory birds and is currently apt for a prospective analysis of regenerative tourism. Objectives of the Study: This research lays an accent on avian tourism as part and parcel of sustainable tourism ventures. The impacts on tourists of projects like the Ornithological Conservation Centre have been gauged in the present paper. It maps the futuristic, proactive propositions linked to regenerative tourism on the site, and conceptualizes how far technological innovations such as artificial intelligence, smart tourism, and similar recent coinages can do a world of good in Kaliveli by enticing real eco-tourists. The experiential dimensions of resource stewardship as regards facilitating tourists to relish the offerings in a sustainable manner are at the crux of this work. Methodology: Modeled as a case study, this work deliberates on the impact of existing projects attributed to the avian fauna of Kaliveli. Conducted in the qualitative research design mode, the case study method was adopted for the processing and presentation of study results, drawn by applying thematic content analysis to data collected from the field. Results and Discussion: One of the key findings relates to the kind of nature trails that can be a regenerative dynamic for eco-friendly tourism in Kaliveli.
Field visits were conducted to assess the niche tourism aspects that could be incorporated into the regenerative tourism model to be framed as part of the study.
Keywords: regenerative tourism, Kaliveli bird sanctuary, sustainable development, resource stewardship, ornithology, avian fauna
Procedia PDF Downloads 79
502 AI Predictive Modeling of Excited State Dynamics in OPV Materials
Authors: Pranav Gunhal, Krish Jhurani
Abstract:
This study tackles the significant computational challenge of predicting excited state dynamics in organic photovoltaic (OPV) materials—a pivotal factor in the performance of solar energy solutions. Time-dependent density functional theory (TDDFT), though effective, is computationally prohibitive for larger and more complex molecules. As a solution, the research explores the application of transformer neural networks, a type of artificial intelligence (AI) model known for its superior performance in natural language processing, to predict excited state dynamics in OPV materials. The methodology involves a two-fold process. First, the transformer model is trained on an extensive dataset comprising over 10,000 TDDFT calculations of excited state dynamics from a diverse set of OPV materials. Each training example includes a molecular structure and the corresponding TDDFT-calculated excited state lifetimes and key electronic transitions. Second, the trained model is tested on a separate set of molecules, and its predictions are rigorously compared to independent TDDFT calculations. The results indicate a remarkable degree of predictive accuracy. Specifically, for a test set of 1,000 OPV materials, the transformer model predicted excited state lifetimes with a mean absolute error of 0.15 picoseconds, a negligible deviation from TDDFT-calculated values. The model also correctly identified key electronic transitions contributing to the excited state dynamics in 92% of the test cases, signifying a substantial concordance with the results obtained via conventional quantum chemistry calculations. The practical integration of the transformer model with existing quantum chemistry software was also realized, demonstrating its potential as a powerful tool in the arsenal of materials scientists and chemists. 
The implementation of this AI model is estimated to reduce the computational cost of predicting excited state dynamics by two orders of magnitude compared to conventional TDDFT calculations. The successful utilization of transformer neural networks to accurately predict excited state dynamics provides an efficient computational pathway for the accelerated discovery and design of new OPV materials, potentially catalyzing advancements in the realm of sustainable energy solutions.
Keywords: transformer neural networks, organic photovoltaic materials, excited state dynamics, time-dependent density functional theory, predictive modeling
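The two headline metrics in this abstract, mean absolute error on predicted lifetimes and the fraction of correctly identified key transitions, are straightforward to compute once predictions and TDDFT references are in hand. A sketch with synthetic placeholder numbers (not the study's data):

```python
import numpy as np

# Hypothetical reference lifetimes from TDDFT and model predictions, in picoseconds
tddft = np.array([1.20, 0.85, 2.10, 1.55])
pred  = np.array([1.05, 0.90, 2.30, 1.40])

# Mean absolute error over the test molecules
mae = np.mean(np.abs(pred - tddft))

# Agreement on the dominant electronic transition (one index per molecule)
tddft_top = np.array([0, 2, 1, 0])
pred_top  = np.array([0, 2, 1, 1])
agreement = np.mean(tddft_top == pred_top)

print(f"MAE = {mae:.3f} ps, transition agreement = {agreement:.0%}")
```

In the study, the same metrics over 1,000 test materials yield the reported 0.15 ps MAE and 92% transition agreement.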
Procedia PDF Downloads 118
501 Interferon-Induced Transmembrane Protein-3 rs12252-CC Associated with the Progress of Hepatocellular Carcinoma by Up-Regulating the Expression of Interferon-Induced Transmembrane Protein 3
Authors: Yuli Hou, Jianping Sun, Mengdan Gao, Hui Liu, Ling Qin, Ang Li, Dongfu Li, Yonghong Zhang, Yan Zhao
Abstract:
Background and Aims: Interferon-induced transmembrane protein 3 (IFITM3) is a component of the interferon-stimulated gene (ISG) family. IFITM3 has been recognized as a key signaling molecule regulating cell growth in some tumors. However, the function of the IFITM3 rs12252-CC genotype in hepatocellular carcinoma (HCC) remains unknown to the authors' best knowledge. A cohort study was employed to clarify the relationship between the IFITM3 rs12252-CC genotype and HCC progression, and cellular experiments were used to investigate the correlation between the function of IFITM3 and the progress of HCC. Methods: 336 candidates were enrolled in the study, including 156 with HBV-related HCC and 180 with chronic hepatitis B infection or liver cirrhosis. Polymerase chain reaction (PCR) was employed to determine the gene polymorphism of IFITM3. The functions of IFITM3 were examined in PLC/PRF/5 cells under different treatments: LV-IFITM3 cells transfected with lentivirus to knock down the expression of IFITM3, and LV-NC cells transfected with empty lentivirus as a negative control. IFITM3 expression, proliferation and migration were detected by quantitative reverse transcription polymerase chain reaction (qRT-PCR), the QuantiGene Plex 2.0 assay, western blotting, immunohistochemistry, the Cell Counting Kit (CCK)-8 assay and wound healing, respectively. Six PLC/PRF/5 samples (three infected with empty lentivirus as control; three infected with LV-IFITM3 vector lentivirus as the experimental group) were sequenced at BGI (Beijing Genomics Institute, Shenzhen, China) using RNA-seq technology to identify IFITM3-related signaling pathways, and the PI3K/AKT pathway was chosen as the related signaling pathway for verification. Results: The patients with HCC had a significantly higher proportion of IFITM3 rs12252-CC compared with the patients with chronic HBV infection or liver cirrhosis. The distribution of the CC genotype in HCC patients with low differentiation was significantly higher than in those with high differentiation.
Patients with CC genotype found with bigger tumor size, higher percentage of vascular thrombosis, higher distribution of low differentiation and higher 5-year relapse rate than those with CT/TT genotypes. The expression of IFITM3 was higher in HCC tissues than adjacent normal tissues, and the level of IFITM3 was higher in HCC tissues with low differentiation and metastatic than high/medium differentiation and without metastatic. Higher RNA level of IFITM3 was found in CC genotype than TT genotype. In PLC/PRF/5 cell with knockdown, the ability of cell proliferation and migration was inhibited. Analysis RNA sequencing and verification of RT-PCR found out the phosphatidylinositol 3-kinase/protein kinase B/mammalian target of rapamycin(PI3K/AKT/mTOR) pathway was associated with knockdown IFITM3.With the inhibition of IFITM3, the expression of PI3K/AKT/mTOR signaling pathway was blocked and the expression of vimentin was decreased. Conclusions: IFITM3 rs12252-CC with the higher expression plays a vital role in the progress of HCC by regulating HCC cell proliferation and migration. These effects are associated with PI3K/AKT/mTOR signaling pathway.Keywords: IFITM3, interferon-induced transmembrane protein 3, HCC, hepatocellular carcinoma, PI3K/ AKT/mTOR, phosphatidylinositol 3-kinase/protein kinase B/mammalian target of rapamycin
Procedia PDF Downloads 124
500 An Extended Domain-Specific Modeling Language for Marine Observatory Relying on Enterprise Architecture
Authors: Charbel Aoun, Loic Lagadec
Abstract:
A Sensor Network (SN) operates in two phases: (1) observation/measurement, i.e., the accumulation of gathered data at each sensor node; (2) transfer of the collected data to a processing center (e.g., a fusion server) within the SN. An underwater sensor network is therefore a sensor network deployed underwater to monitor underwater activity. The deployed sensors, such as hydrophones, are responsible for registering underwater activity and transferring it to more advanced components. The data exchange between these components defines the Marine Observatory (MO) concept, which provides information on ocean state, phenomena and processes. The first step towards implementing this concept is defining the environmental constraints and the required tools and components (marine cables, smart sensors, data fusion servers, etc.). The logical and physical components used in these observatories perform critical functions such as the localization of underwater moving objects, and these functions can be orchestrated with other services (e.g., military or civilian response). In this paper, we present an extension to our MO meta-model that is used to generate a design tool (ArchiMO). We propose new constraints to be taken into consideration at design time and illustrate our proposal with an example from the MO domain. Additionally, we generate the corresponding simulation code using our self-developed domain-specific model compiler. On the one hand, this illustrates our approach of relying on an Enterprise Architecture (EA) framework that respects multiple views, stakeholder perspectives, and domain specificity. On the other hand, it helps reduce both the complexity of and the time spent on the design activity, while preventing design modeling errors when porting this activity to the MO domain.
In conclusion, this work aims to demonstrate that the design activity for complex systems can be improved through the use of MDE technologies and a domain-specific modeling language with its associated tooling. The major improvement is an early validation step, via models and simulation, that consolidates the system design. Keywords: smart sensors, data fusion, distributed fusion architecture, sensor networks, domain specific modeling language, enterprise architecture, underwater moving object, localization, marine observatory, NS-3, IMS
Procedia PDF Downloads 177
499 Realistic Modeling of the Preclinical Small Animal Using Commercial Software
Authors: Su Chul Han, Seungwoo Park
Abstract:
With the increasing incidence of cancer, radiotherapy technology and modalities have advanced, and preclinical models have become increasingly important in cancer research. Furthermore, small-animal dosimetry is an essential part of evaluating the relationship between the absorbed dose in a preclinical small animal and the biological effect observed in a preclinical study. In this study, we carried out realistic modeling of a preclinical small-animal phantom that makes it possible to verify the irradiated dose using commercial software. The small-animal phantom was modeled from the 4D digital Moby mouse whole-body phantom. To manipulate the Moby phantom in commercial software (Mimics, Materialise, Leuven, Belgium), we converted it to DICOM CT image files using Matlab; the two-dimensional CT images were then converted to a three-dimensional image, which can be segmented and cropped in sagittal, coronal and axial views. The CT images of the small animal were modeled by the following process. Based on the profile line values, thresholding was carried out to create a mask connecting all regions within the same threshold range. Using this thresholding method, we segmented the images into three parts (bone, body tissue, and lung); to separate neighboring pixels between lung and body tissue, we used the region-growing function of the Mimics software. We acquired a 3D object by 3D calculation on the segmented images. The generated 3D object was smoothed by a remeshing operation with a smoothing factor of 0.4 and 5 iterations. The edge mode was selected to perform triangle reduction, with a tolerance of 0.1 mm, an edge angle of 15 degrees and 5 iterations. The processed 3D object was converted to an STL file for output on a 3D printer. We then modified the small-animal file using 3-Matic Research (Materialise, Leuven, Belgium) to make space for radiation dosimetry chips, yielding a 3D object of a realistic small-animal phantom.
The width of the small-animal phantom was 2.631 cm, its thickness 2.361 cm, and its length 10.817 cm. The Mimics software provided efficient 3D object generation and convenient conversion to STL files. The development of a small preclinical animal phantom would increase the reliability of absorbed-dose verification in small animals for preclinical studies. Keywords: mimics, preclinical small animal, segmentation, 3D printer
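The thresholding step of the segmentation workflow above can be sketched in a few lines. The snippet below is an illustrative stand-in only: the tiny array and the Hounsfield-unit cutoffs are invented for demonstration and are not the Mimics defaults used in the study.

```python
import numpy as np

# Toy CT slice in Hounsfield units; real data would come from the DICOM stack.
ct = np.array([[-800, -750,   40],
               [  30,   45, -760],
               [ 400,  900,   35]])

# Approximate, illustrative HU threshold ranges for the three-part segmentation.
lung_mask   = ct < -500                  # air-filled lung
tissue_mask = (ct >= -500) & (ct < 300)  # soft (body) tissue
bone_mask   = ct >= 300                  # bone

# The three masks partition the volume: no voxel belongs to more than one part.
assert not np.any(lung_mask & bone_mask)
print(lung_mask.sum(), tissue_mask.sum(), bone_mask.sum())  # 3 4 2
```

In Mimics the neighboring lung/tissue voxels are then separated with region growing; with open tools the same step could be done with a connected-component pass over each mask.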
Procedia PDF Downloads 366
498 Treating Voxels as Words: Word-to-Vector Methods for fMRI Meta-Analyses
Authors: Matthew Baucum
Abstract:
With the increasing popularity of fMRI as an experimental method, psychology and neuroscience can greatly benefit from advanced techniques for summarizing and synthesizing large amounts of data from brain imaging studies. One promising avenue is automated meta-analyses, in which natural language processing methods are used to identify the brain regions consistently associated with certain semantic concepts (e.g., “social”, “reward”) across large corpora of studies. This study builds on this approach by demonstrating how, in fMRI meta-analyses, individual voxels can be treated as vectors in a semantic space and evaluated for their “proximity” to terms of interest. In this technique, a low-dimensional semantic space is built from brain imaging study texts, allowing words in each text to be represented as vectors (where words that frequently appear together are near each other in the semantic space). Consequently, each voxel in a brain mask can be represented as a normalized vector sum of all of the words in the studies that showed activation in that voxel. The entire brain mask can then be visualized in terms of each voxel’s proximity to a given term of interest (e.g., “vision”, “decision making”) or collection of terms (e.g., “theory of mind”, “social”, “agent”), as measured by the cosine similarity between the voxel’s vector and the term vector (or the average of multiple term vectors). Analysis can also proceed in the opposite direction, allowing word cloud visualizations of the nearest semantic neighbors for a given brain region. This approach allows for continuous, fine-grained metrics of voxel-term associations, and relies on state-of-the-art “open vocabulary” methods that go beyond mere word counts.
An analysis of over 11,000 neuroimaging studies from an existing meta-analytic fMRI database demonstrates that this technique can be used to recover known neural bases for multiple psychological functions, suggesting this method’s utility for efficient, high-level meta-analyses of localized brain function. While automated text analytic methods are no replacement for deliberate, manual meta-analyses, they seem to show promise for the efficient aggregation of large bodies of scientific knowledge, at least on a relatively general level. Keywords: fMRI, machine learning, meta-analysis, text analysis
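The voxel-as-vector technique lends itself to a compact sketch. In the toy example below, the four-dimensional embeddings are hypothetical stand-ins for a semantic space learned from study texts; a voxel is represented as the normalized vector sum of the words from studies reporting activation there, and scored by cosine similarity against a term vector.

```python
import numpy as np

# Hypothetical 4-D "semantic space"; in practice these vectors would come
# from a model trained on the corpus of study texts (e.g., word2vec-style).
embeddings = {
    "vision": np.array([1.0, 0.1, 0.0, 0.0]),
    "visual": np.array([0.9, 0.2, 0.1, 0.0]),
    "reward": np.array([0.0, 1.0, 0.1, 0.0]),
    "social": np.array([0.0, 0.1, 1.0, 0.2]),
}

def normalize(v):
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

def voxel_vector(words):
    """A voxel is the normalized vector sum of all words appearing in
    studies that reported activation in that voxel."""
    return normalize(np.sum([embeddings[w] for w in words], axis=0))

def cosine(a, b):
    return float(np.dot(normalize(a), normalize(b)))

# A voxel mentioned mostly by vision-related studies:
v1 = voxel_vector(["vision", "visual", "visual"])
print(cosine(v1, embeddings["vision"]))  # high proximity to "vision"
print(cosine(v1, embeddings["social"]))  # low proximity to "social"
```

Mapping every voxel through `cosine` against a term vector (or an average of several term vectors) yields the continuous brain-wide proximity maps described in the abstract.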
Procedia PDF Downloads 449
497 Strengthening Strategy across Languages: A Cognitive and Grammatical Universal Phenomenon
Authors: Behnam Jay
Abstract:
In this study, “strengthening” refers to the strategic use of multiple linguistic elements to intensify a specific grammatical or semantic function. The study explores cross-linguistic evidence demonstrating how strengthening appears in various grammatical structures. In French and Spanish, double negatives are used not to cancel each other out but to intensify the negation, challenging the conventional understanding that double negatives result in an affirmation. For example, in French, il ne sait pas (“He doesn't know”) uses both “ne” and “pas” to strengthen the negation. Similarly, in Spanish, No vio a nadie (“He didn't see anyone”) uses “no” and “nadie” to achieve a stronger negative meaning. In Japanese, double honorifics, often perceived as erroneous, are reinterpreted as intentional efforts to amplify politeness, as seen in forms like ossharareru (an honorific form of “to say”). Typically, an honorific morpheme appears only once in a predicate, but native speakers often use double forms to reinforce politeness. In Turkish, the word eğer (marking a condition) is sometimes used together with the conditional suffix -se/-sa within the same sentence to strengthen the conditional meaning, as in Eğer yağmur yağarsa, o gelmez (“If it rains, he won't come”). Furthermore, the combination of question words with rising intonation in various languages serves to enhance interrogative force. These instances suggest that strengthening is a cross-linguistic strategy that may reflect a broader cognitive mechanism in language processing. This paper investigates these cases in detail, providing insights into why languages may adopt such strategies. No corpus was used to collect the examples; instead, they were gathered from languages the author encountered during their research, focusing on specific grammatical and morphological phenomena relevant to the concept of strengthening.
Due to the complexity of employing a comparative method across multiple languages, this approach was chosen to illustrate common patterns of strengthening based on available data. It is acknowledged that different languages may have different strengthening strategies in various linguistic domains. While the primary focus is on grammar and morphology, it is recognized that the strengthening phenomenon may also appear in phonology. Future research should aim to include a broader range of languages and utilize more comprehensive comparative methods where feasible to enhance methodological rigor and explore this phenomenon more thoroughly. Keywords: strengthening, cross-linguistic analysis, syntax, semantics, cognitive mechanism
Procedia PDF Downloads 25
496 Predicting Wealth Status of Households Using Ensemble Machine Learning Algorithms
Authors: Habtamu Ayenew Asegie
Abstract:
Wealth, as opposed to income or consumption, implies a more stable and permanent status. Due to natural and human-made difficulties, household economies can be diminished and their well-being put at risk. Hence, governments and humanitarian agencies devote considerable resources to poverty and malnutrition reduction efforts. One key factor in the effectiveness of such efforts is the accuracy with which low-income or poor populations can be identified. Accordingly, this study aims to predict a household’s wealth status using ensemble machine learning (ML) algorithms. Design science research methodology (DSRM) is employed, and four ML algorithms, Random Forest (RF), Adaptive Boosting (AdaBoost), Light Gradient Boosted Machine (LightGBM), and Extreme Gradient Boosting (XGBoost), have been used to train models. The Ethiopian Demographic and Health Survey (EDHS) dataset was accessed for this purpose from the Central Statistical Agency (CSA)'s database. Various data pre-processing techniques were employed, and model training was conducted using scikit-learn Python library functions. Model evaluation used metrics such as accuracy, precision, recall, F1-score, area under the receiver operating characteristic curve (AUC-ROC), and subjective evaluations by domain experts. An optimal subset of hyper-parameters for each algorithm was selected through grid search for the best prediction. The RF model performed better than the other algorithms, achieving an accuracy of 96.06%, and is better suited as a solution model for our purpose. Following RF, the LightGBM, XGBoost, and AdaBoost algorithms achieved accuracies of 91.53%, 88.44%, and 58.55%, respectively.
The findings suggest that features such as ‘Age of household head’, ‘Total children ever born’ in a family, ‘Main roof material’ of the house, ‘Region’ of residence, whether a household uses ‘Electricity’, and ‘Type of toilet facility’ are determinant factors and a useful focal point for economic policymakers. The determinant risk factors, extracted rules, and designed artifact achieved 82.28% in the domain experts' evaluation. Overall, the study shows that ML techniques are effective in predicting the wealth status of households. Keywords: ensemble machine learning, household wealth status, predictive model, wealth status prediction
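A minimal sketch of the grid-searched modeling pipeline described above, using synthetic data in place of the EDHS features (which are not reproduced here) and a small illustrative hyper-parameter grid:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score
from sklearn.model_selection import GridSearchCV, train_test_split

# Synthetic stand-in for the EDHS features (binary wealth-status target).
X, y = make_classification(n_samples=1000, n_features=10, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

# Grid search over a small, illustrative hyper-parameter subset.
grid = GridSearchCV(
    RandomForestClassifier(random_state=42),
    param_grid={"n_estimators": [100, 200], "max_depth": [None, 10]},
    scoring="accuracy",
    cv=5,
)
grid.fit(X_tr, y_tr)

# Evaluate on the held-out set with the metrics named in the abstract.
pred = grid.predict(X_te)
proba = grid.predict_proba(X_te)[:, 1]
print(f"accuracy={accuracy_score(y_te, pred):.3f}",
      f"f1={f1_score(y_te, pred):.3f}",
      f"auc={roc_auc_score(y_te, proba):.3f}")
```

The same scaffold extends to the boosting models (AdaBoost from scikit-learn; LightGBM and XGBoost from their own packages) by swapping the estimator and grid.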
Procedia PDF Downloads 39
495 Impact of Different Rearing Diets on the Performance of Adult Mealworms Tenebrio molitor
Authors: Caroline Provost, Francois Dumont
Abstract:
The production of insects for human and animal consumption is an increasingly important activity in Canada. Protein production through insect rearing is more efficient and less harmful to the environment than traditional livestock, poultry and fish farming. Insects are rich in essential amino acids, essential fatty acids and trace elements. Thus, insect-based products could be used as a food supplement for livestock and domestic animals and may even find their way into the diets of high-performing athletes or fine dining. Nevertheless, several parameters remain to be determined to ensure efficient and profitable production that meets the potential of these sectors. This project proposes to improve the production processes, rearing diets and processing methods for three species with valuable gastronomic and nutritional potential: the common mealworm (Tenebrio molitor), the small mealworm (Alphitobius diaperinus), and the giant mealworm (Zophobas morio). The general objective of the project is to acquire specific knowledge for the mass rearing of insects dedicated to animal and human consumption, in order to respond to current market opportunities and meet a growing demand for these products. Mass rearing of the three mealworm species provided the individuals needed for the experiments. Mealworms eat flour from different cereals (e.g., wheat, barley, buckwheat). These cereals vary in their composition (protein, carbohydrates, fiber, vitamins, antioxidants, etc.) but also in their purchase cost. Seven different diets were compared to optimize rearing yield. Diets were composed of cereal flours (e.g., wheat, barley), used either alone or mixed. Female fecundity, larval mortality and growth curves were observed. Some flour diets had positive effects on female fecundity and larval performance, and each mealworm species was found to have specific dietary requirements.
Trade-offs between mealworm performance and costs need to be considered. Experiments on the effect of flour composition on several parameters related to performance and to nutritional and gastronomic value led to the identification of a more appropriate diet for each mealworm species. Keywords: mass rearing, mealworm, human consumption, diet
Procedia PDF Downloads 147
494 Oxidovanadium(IV) and Dioxidovanadium(V) Complexes: Efficient Catalyst for Peroxidase Mimetic Activity and Oxidation
Authors: Mannar R. Maurya, Bithika Sarkar, Fernando Avecilla
Abstract:
Peroxidases are used successfully in a range of industrial processes in medicine, the chemical industry, food processing and agriculture. However, they carry intrinsic drawbacks: denaturation by proteases, special storage requirements, and cost. Artificial enzyme mimics have therefore become a research focus, offering significant advantages over conventional enzymes, such as ease of preparation, low price and good stability in activity, while overcoming the drawbacks of natural enzymes, e.g., serine proteases. At present, a large number of artificial enzymes have been synthesized by incorporating a catalytic center into a variety of Schiff base complexes, anchored ligands, supramolecular complexes, hematin, porphyrins and nanoparticles to mimic natural enzymes. In recent years, a number of vanadium complexes have been reported, reflecting a continuing increase in interest in bioinorganic chemistry. To the best of our knowledge, however, vanadium complexes remain little explored as artificial enzyme mimics. Recently, our group reported synthetic vanadium Schiff base complexes capable of mimicking peroxidases. Herein, we have synthesized oxidovanadium(IV) and dioxidovanadium(V) complexes of pyrazolone derivatives (extensively studied on account of their broad range of pharmacological applications). All these complexes were characterized by various spectroscopic techniques, including FT-IR, UV-visible, NMR (1H, 13C and 51V), elemental analysis, thermal studies and single-crystal analysis. The peroxidase-mimetic activity was studied for the oxidation of pyrogallol to purpurogallin with hydrogen peroxide at pH 7, followed by measurement of kinetic parameters. The Michaelis-Menten behavior shows excellent catalytic activity relative to the natural counterparts, e.g., V-HPO and HRP.
The obtained kinetic parameters (Vmax, kcat) were also compared with those of peroxidase and haloperoxidase enzymes, making these complexes promising peroxidase-mimicking catalysts. The catalytic activity was also studied for the oxidation of 1-phenylethanol in the presence of H2O2 as an oxidant. Various parameters, such as the amounts of catalyst and oxidant, reaction time, reaction temperature and solvent, were varied to maximize the yield of oxidation products of 1-phenylethanol. Keywords: oxidovanadium(IV)/dioxidovanadium(V) complexes, NMR spectroscopy, crystal structure, peroxidase mimetic activity towards oxidation of pyrogallol, oxidation of 1-phenylethanol
Procedia PDF Downloads 341
493 The Impact of the Method of Extraction on 'Chemchali' Olive Oil Composition in Terms of Oxidation Index, and Chemical Quality
Authors: Om Kalthoum Sallem, Saidakilani, Kamiliya Ounaissa, Abdelmajid Abid
Abstract:
Introduction and purposes: Olive oil is the main oil used in the Mediterranean diet. Virgin olive oil is valued for its organoleptic and nutritional characteristics and is resistant to oxidation due to its high monounsaturated fatty acid (MUFA) content, low polyunsaturated fatty acid (PUFA) content, and the presence of natural antioxidants such as phenols, tocopherols and carotenoids. The fatty acid composition, especially the MUFA content, and the natural antioxidants provide health advantages. The aim of the present study was to examine the impact of the extraction method on the chemical profile of the 'Chemchali' olive oil variety, which is cultivated in the city of Gafsa, and to compare it with the 'Chetoui' and 'Chemlali' varieties. Methods: Our study is a qualitative prospective study of the 'Chemchali' olive oil variety. Analyses were conducted over three months (December to February) in different oil mills in the city of Gafsa. We compared 'Chemchali' olive oil obtained by the continuous method with that obtained by the super-press method. We then analyzed quality-index parameters, including free fatty acid (FFA) content, acidity, and UV spectrophotometric characteristics, as well as other physico-chemical data (oxidative stability, ß-carotene, and chlorophyll pigment composition). Results: Olive oil from the super-press method, compared with the continuous method, was less acidic (0.6120 vs. 0.9760), less oxidizable (K232: 2.478 vs. 2.592; K270: 0.216 vs. 0.228), richer in oleic acid (61.61% vs. 66.99%), less rich in linoleic acid (13.38% vs. 13.98%), richer in total chlorophyll pigments (6.22 ppm vs. 3.18 ppm) and richer in ß-carotene (3.128 mg/kg vs. 1.73 mg/kg). 'Chemchali' olive oil showed a more balanced total fatty acid content than the 'Chemlali' and 'Chetoui' varieties. Gafsa's 'Chemchali' variety has significantly fewer saturated and polyunsaturated fatty acids.
However, it has a higher content of the monounsaturated fatty acid C18:1 (oleic acid) than the two other varieties. Conclusion: The use of the super-press method had beneficial effects on the general chemical characteristics of 'Chemchali' olive oil, maintaining the highest quality according to the Ecocert legal standards. In light of the results obtained in this study, a more detailed study is required to establish whether the differences in the chemical properties of the oils are mainly due to agronomic and climate variables or to the processing employed in the oil mills. Keywords: olive oil, extraction method, fatty acids, chemchali olive oil
Procedia PDF Downloads 383
492 Raman Spectral Fingerprints of Healthy and Cancerous Human Colorectal Tissues
Authors: Maria Karnachoriti, Ellas Spyratou, Dimitrios Lykidis, Maria Lambropoulou, Yiannis S. Raptis, Ioannis Seimenis, Efstathios P. Efstathopoulos, Athanassios G. Kontos
Abstract:
Colorectal cancer is the third most common cancer diagnosed in Europe, according to the latest incidence data provided by the World Health Organization (WHO), and early diagnosis has proved to be key in reducing cancer-related mortality. In cases where surgical intervention is required for cancer treatment, accurate discrimination between healthy and cancerous tissues is critical for the postoperative care of the patient. The current study focuses on the ex vivo handling of surgically excised colorectal specimens and the acquisition of their spectral fingerprints using Raman spectroscopy. Acquired data were analyzed in an effort to discriminate, at the microscopic scale, between healthy and malignant margins. Raman spectroscopy is a spectroscopic technique with high detection sensitivity and a spatial resolution of a few micrometers. The spectral fingerprint produced during laser-tissue interaction is unique and characterizes the biostructure and its inflammatory or cancerous state. Numerous published studies have demonstrated the potential of the technique as a tool for discriminating between healthy and malignant tissues/cells, either ex vivo or in vivo. However, the handling of excised human specimens and the Raman measurement conditions remain challenging, unavoidably affecting measurement reliability and repeatability, as well as the technique's overall accuracy and sensitivity. Therefore, tissue handling has to be optimized and standardized to ensure preservation of cell integrity and hydration level. Various strategies have been implemented in the past, including the use of balanced salt solutions, small humidifiers or pump-reservoir-pipette systems. In the current study, human colorectal specimens of 10 × 5 mm have so far been collected from 5 patients who underwent open surgery for colorectal cancer. A novel, non-toxic zinc-based fixative (Z7) was used for tissue preservation.
Z7 demonstrates excellent protein preservation and protection against tissue autolysis. Micro-Raman spectra were recorded with a Renishaw inVia spectrometer from successive random 2-micrometer spots upon excitation at 785 nm, to decrease the fluorescence background and avoid tissue photodegradation. A temperature-controlled approach was adopted to stabilize the tissue at 2 °C, thus minimizing dehydration effects and the consequent focus drift during measurement. A broad spectral range, 500-3200 cm-1, was covered with five consecutive full scans lasting 20 minutes in total. The averaged spectra were used for least-squares fitting analysis of the Raman modes. Subtle Raman differences were observed between normal and cancerous colorectal tissues, mainly in the intensities of the 1556 cm-1 and 1628 cm-1 Raman modes, which correspond to ν(C=C) vibrations in porphyrins, as well as in the range of 2800-3000 cm-1, due to CH2 stretching of lipids and CH3 stretching of proteins. The Raman spectra evaluation was supported by histological findings from twin specimens. This study demonstrates that Raman spectroscopy may constitute a promising tool for real-time verification of clear margins in colorectal cancer open surgery. Keywords: colorectal cancer, Raman spectroscopy, malignant margins, spectral fingerprints
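The least-squares fitting of individual Raman modes can be illustrated with a single Lorentzian line on synthetic data; the peak position, width and amplitude below are invented for illustration, not the measured values.

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(x, a, x0, w, c):
    """Single Lorentzian line of height a, center x0, FWHM w, plus baseline c."""
    return a * (w / 2) ** 2 / ((x - x0) ** 2 + (w / 2) ** 2) + c

# Synthetic mode near 1556 cm^-1 with additive noise (illustrative stand-in
# for an averaged measured spectrum).
x = np.linspace(1500, 1600, 200)
rng = np.random.default_rng(0)
y = lorentzian(x, 100.0, 1556.0, 8.0, 5.0) + rng.normal(0.0, 1.0, x.size)

# Least-squares fit from a rough initial guess.
popt, _ = curve_fit(lorentzian, x, y, p0=[80.0, 1550.0, 10.0, 0.0])
a_fit, x0_fit, w_fit, c_fit = popt
print(f"center={x0_fit:.1f} cm^-1, FWHM={w_fit:.1f} cm^-1, height={a_fit:.1f}")
```

Fitting both the 1556 cm-1 and 1628 cm-1 modes this way yields the per-mode intensities whose differences separate normal from cancerous tissue.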
Procedia PDF Downloads 91
491 Analysis of Constraints and Opportunities in Dairy Production in Botswana
Authors: Som Pal Baliyan
Abstract:
The dairy enterprise has been a major source of employment and income generation in most economies worldwide. The Botswana government has also identified dairy as one of the agricultural sectors for diversifying the country's mineral-dependent economy. The huge gap between local demand and supply of milk and milk products indicates that not only constraints but also opportunities exist in this agricultural sub-sector. Therefore, this study attempted to identify constraints and opportunities in the dairy production industry in Botswana, along with possible ways to mitigate the constraints. The findings should assist stakeholders, especially policymakers, in formulating effective policies for the growth of the dairy sector in the country. This quantitative study adopted a survey research design. A pilot survey followed by a final survey was conducted for data collection. The purpose of the pilot survey was to collect basic information on the nature and extent of the constraints, opportunities and ways to mitigate the constraints in dairy production. Based on information from the pilot survey, a four-point Likert-scale questionnaire was constructed, validated and tested for reliability. The data for the final survey were collected from twenty-five purposively selected dairy farms and analyzed with descriptive statistical tools. Among the twelve constraints identified, high feed costs, feed shortage and availability, lack of technical support, lack of skilled manpower, high prevalence of pests and diseases, and lack of dairy-related technologies were the six major constraints on dairy production. Grain feed production, roughage feed production, manufacturing of dairy feed, establishment of a milk processing industry, and development of transportation systems were the five major opportunities among the eight identified.
Increasing local production of animal feed, increasing local roughage feed production, providing subsidies on animal feed, easy access to sufficient financial support, training of farmers, and effective control of pests and diseases were identified as the six major ways to mitigate the constraints. It was recommended that the identified constraints and opportunities, as well as the ways to mitigate the constraints, be carefully considered by stakeholders, especially policymakers, during the formulation and implementation of policies for the development of the dairy sector in Botswana. Keywords: dairy enterprise, milk production, opportunities, production constraints
Procedia PDF Downloads 405
490 Incorporation of Noncanonical Amino Acids into Hard-to-Express Antibody Fragments: Expression and Characterization
Authors: Hana Hanaee-Ahvaz, Monika Cserjan-Puschmann, Christopher Tauer, Gerald Striedner
Abstract:
The incorporation of noncanonical amino acids (ncAAs) into proteins has become a topic of interest, as proteins featuring ncAAs offer a wide range of applications. Technologies and systems now exist that allow the site-specific introduction of ncAAs in vivo, but the efficient production of proteins modified this way remains a major challenge. This is especially true for 'hard-to-express' proteins, where low yields are encountered even with the native sequence. In this study, the site-specific incorporation of azido-ethoxy-carbonyl-lysine (azk) into an anti-tumor-necrosis-factor-α Fab (FTN2) was investigated. Possible positions for ncAA incorporation were determined according to well-established parameters, and the corresponding FTN2 genes were constructed. Each modified FTN2 variant carries one amber codon for azk incorporation in either its heavy or light chain. The expression level of each variant was determined by ELISA, and all azk variants could be produced in satisfactory yield, in the range of 50-70% of the original FTN2 variant. In terms of expression yield, neither the azk incorporation position nor the modified subunit (heavy or light chain) had a significant effect. We confirmed correct protein processing and azk incorporation by mass spectrometry, and antigen-antibody interaction was determined by surface plasmon resonance analysis. The next step is to characterize the effect of azk incorporation on protein stability and aggregation tendency via differential scanning calorimetry and light scattering, respectively. In summary, the incorporation of ncAAs into our Fab candidate FTN2 worked better than expected. The quantities produced allowed a detailed characterization of the variants, and we can now turn our attention to potential applications. Using click chemistry, we can equip the Fabs with additional functionalities and make them suitable for a wide range of applications.
In a first approach, we will now use this option to develop an assay that allows us to follow the degradation of the recombinant target protein in vivo. Special focus will be laid on proteolytic activity in the periplasm and how it is influenced by cultivation/induction conditions. Keywords: degradation, FTN2, hard-to-express protein, non-canonical amino acids
Procedia PDF Downloads 234
489 Attention and Creative Problem-Solving: Cognitive Differences between Adults with and without Attention Deficit Hyperactivity Disorder
Authors: Lindsey Carruthers, Alexandra Willis, Rory MacLean
Abstract:
Introduction: It has been proposed that distractibility, a key diagnostic criterion of Attention Deficit Hyperactivity Disorder (ADHD), may be associated with higher creativity levels in some individuals. Anecdotal and empirical evidence suggests that ADHD may therefore benefit creative problem-solving and the generation of new ideas and products. Previous studies have used only one or two measures of attention, which is insufficient given that attention is a complex cognitive process. The current study aimed to determine in which ways performance on creative problem-solving tasks and a range of attention tests may be related, and whether performance differs between adults with and without ADHD. Methods: 150 adults, 47 males and 103 females (mean age = 28.81 years, S.D. = 12.05 years), were tested at Edinburgh Napier University. Of this set, 50 participants had ADHD, and 100 did not, forming the control group. Each participant completed seven attention tasks, assessing focussed, sustained, selective, and divided attention. Creative problem-solving was measured using divergent thinking tasks, which require multiple original solutions to one given problem. Two types of divergent thinking task were used: verbal (requiring written responses) and figural (requiring drawn responses). Each task is scored for idea originality, with higher scores indicating more creative responses. Correlational analyses were used to explore relationships between attention and creative problem-solving, and t-tests were used to study between-group differences. Results: The control group scored higher on originality for figural divergent thinking (t(148) = 3.187, p < .01), whereas the ADHD group had more original ideas on the verbal divergent thinking task (t(148) = -2.490, p < .05). Within the control group, figural divergent thinking scores were significantly related to both selective (r = -.295 to -.285, p < .01) and divided attention (r = .206 to .290, p < .05).
In contrast, within the ADHD group, both selective (r= -.390 to -.356, p < .05) and divided (r= .328 to .347, p < .05) attention were related to verbal divergent thinking. Conclusions: Selective and divided attention are both related to divergent thinking; however, the performance patterns differ between the groups, which may point to cognitive variance in how these problems are processed and managed. The creative differences previously found between those with and without ADHD may depend on task type, which, to the authors' knowledge, has not been distinguished previously. It appears that ADHD does not specifically lead to higher creativity, but may provide an explanation for creative differences when compared to those without the disorder.
Keywords: ADHD, attention, creativity, problem-solving
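A minimal sketch of the independent-samples t statistic underlying the kind of between-group comparisons reported above; the pooled-variance formula is the standard Student's t, and the originality scores below are invented for illustration, not taken from the study.

```python
import math
from statistics import mean, variance

def pooled_t(group_a, group_b):
    """Independent-samples t statistic with pooled variance
    (the classic Student's t for equal-variance group comparisons)."""
    na, nb = len(group_a), len(group_b)
    sp2 = ((na - 1) * variance(group_a) + (nb - 1) * variance(group_b)) / (na + nb - 2)
    return (mean(group_a) - mean(group_b)) / math.sqrt(sp2 * (1 / na + 1 / nb))

# Invented originality scores for two small groups
control = [1, 2, 3, 4, 5]
adhd = [2, 3, 4, 5, 6]

print(round(pooled_t(control, adhd), 2))  # -1.0
```

A negative t here simply means the second group scored higher on average, mirroring the sign convention in the verbal divergent thinking result above.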
Procedia PDF Downloads 456
488 Impact of Intelligent Transportation System on Planning, Operation and Safety of Urban Corridor
Authors: Sourabh Jain, S. S. Jain
Abstract:
Intelligent transportation system (ITS) is the application of technologies for developing a user-friendly transportation system to extend the safety and efficiency of urban transportation systems in developing countries. These systems involve vehicles, drivers, passengers, road operators, and managers of transport services, all interacting with each other and the surroundings to boost the security and capacity of road systems. The goal of urban corridor management using ITS in road transport is to achieve improvements in mobility, safety, and the productivity of the transportation system within the available facilities through the integrated application of advanced monitoring, communications, computer, display, and control process technologies, both in the vehicle and on the road. Intelligent transportation system is a product of the revolution in information and communications technologies that is the hallmark of the digital age. The basic ITS technology is oriented in three main directions: communications, information, and integration. Information acquisition (collection), processing, integration, and sorting are the basic activities of ITS. In this paper, an attempt has been made to interpret and evaluate the performance of the 27.4 km long study corridor, which has eight intersections and four flyovers and consists of six-lane and eight-lane divided road sections. Two categories of data were collected: traffic data (traffic volume, spot speed, delay) and road characteristics data (number of lanes, lane width, bus stops, mid-block sections, intersections, flyovers). The instruments used for collecting the data were a video camera, stopwatch, radar gun, and mobile GPS (GPS Tracker Lite). From the analysis, the performance interpretations incorporated were the identification of peak and off-peak hours, congestion and level of service (LOS) at mid-block sections, and delay, followed by plotting the speed contours.
The paper proposes urban corridor management strategies, based on sensors integrated into both vehicles and the road infrastructure, that are efficiently executable, cost-effective, and familiar to road users. They will be useful in reducing congestion, fuel consumption, and pollution so as to provide comfort, safety, and efficiency to the users.
Keywords: ITS strategies, congestion, planning, mobility, safety
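An illustrative sketch (not from the paper) of the simplest step in the analysis described above: identifying the peak hour of a corridor from hourly traffic counts. The volume figures below are hypothetical.

```python
def peak_hour(hourly_volumes):
    """Return the (hour, volume) pair with the highest traffic count."""
    return max(hourly_volumes.items(), key=lambda kv: kv[1])

# Hypothetical mid-block counts (vehicles per hour) for part of a day
volumes = {
    "07:00-08:00": 3200,
    "08:00-09:00": 4650,  # morning peak
    "09:00-10:00": 3900,
    "12:00-13:00": 2800,
    "17:00-18:00": 4400,
}

hour, volume = peak_hour(volumes)
print(hour, volume)  # 08:00-09:00 4650
```

In practice the same per-hour volumes, together with capacity, feed the LOS classification and speed contours the abstract mentions.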
Procedia PDF Downloads 179
487 Multi-Criteria Decision Making Tool for Assessment of Biorefinery Strategies
Authors: Marzouk Benali, Jawad Jeaidi, Behrang Mansoornejad, Olumoye Ajao, Banafsheh Gilani, Nima Ghavidel Mehr
Abstract:
The Canadian forest industry is seeking to identify and implement transformational strategies for enhanced financial performance through the emerging bioeconomy or, more specifically, through the concept of the biorefinery. For example, processing forest residues or the surplus of biomass available on mill sites for the production of biofuels, biochemicals, and/or biomaterials is one of the attractive strategies, along with traditional wood and paper products and cogenerated energy. There are many possible process-product biorefinery pathways, each associated with specific product portfolios with different levels of risk. Thus, it is not obvious which strategy the forest industry should select and implement. Therefore, there is a need for analytical and design tools that enable evaluating biorefinery strategies based on a set of criteria considering a perspective of sustainability over the short and long terms, while selecting both the existing core products and the new product portfolio. In addition, it is critical to assess the manufacturing flexibility needed to internalize the risk from market price volatility of each targeted bio-based product in the product portfolio, prior to investing heavily in any biorefinery strategy. The proposed paper focuses on introducing a systematic methodology for designing integrated biorefineries using process systems engineering tools, as well as a multi-criteria decision making framework to put forward the most effective biorefinery strategies that fulfill the needs of the forest industry. Topics to be covered include market analysis, techno-economic assessment, cost accounting, energy integration analysis, life cycle assessment, and supply chain analysis. This will be followed by a description of the vision as well as the key features and functionalities of the I-BIOREF software platform, developed by CanmetENERGY of Natural Resources Canada.
Two industrial case studies will be presented to support the robustness and flexibility of the I-BIOREF software platform: i) an integrated Canadian Kraft pulp mill with a lignin recovery process (namely, LignoBoost™); ii) a standalone biorefinery based on an ethanol-organosolv process.
Keywords: biorefinery strategies, bioproducts, co-production, multi-criteria decision making, tool
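An illustrative sketch (not the I-BIOREF implementation) of the simplest form of multi-criteria decision making the abstract describes: a weighted-sum ranking of candidate biorefinery strategies. The strategy names, criteria, weights, and scores below are all invented; real tools use far richer criteria sets and normalization schemes.

```python
# Criterion weights summing to 1.0 (hypothetical)
CRITERIA_WEIGHTS = {"economic": 0.4, "environmental": 0.3, "market_risk": 0.3}

# Hypothetical per-criterion scores, normalized to [0, 1]; higher is better
strategies = {
    "lignin_recovery": {"economic": 0.8, "environmental": 0.6, "market_risk": 0.7},
    "ethanol_organosolv": {"economic": 0.6, "environmental": 0.9, "market_risk": 0.5},
    "cogeneration_only": {"economic": 0.5, "environmental": 0.4, "market_risk": 0.9},
}

def weighted_score(scores, weights=CRITERIA_WEIGHTS):
    """Weighted-sum aggregate of one strategy's criterion scores."""
    return sum(weights[c] * scores[c] for c in weights)

# Rank strategies from best to worst aggregate score
ranked = sorted(strategies, key=lambda s: weighted_score(strategies[s]), reverse=True)
print(ranked)  # ['lignin_recovery', 'ethanol_organosolv', 'cogeneration_only']
```

The weighted sum is only one aggregation rule; sensitivity of the ranking to the weights is usually checked before any recommendation is made.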
Procedia PDF Downloads 232
486 Microstructure and Mechanical Properties Evaluation of Graphene-Reinforced AlSi10Mg Matrix Composite Produced by Powder Bed Fusion Process
Authors: Jitendar Kumar Tiwari, Ajay Mandal, N. Sathish, A. K. Srivastava
Abstract:
Over the last decade, graphene has attracted great attention for the development of multifunctional metal matrix composites, which are in high demand in industry for building energy-efficient systems. This study covers two advanced aspects of the latest scientific endeavor, i.e., graphene as reinforcement in metallic materials and additive manufacturing (AM) as a processing technology. Herein, high-quality graphene and AlSi10Mg powder were mechanically mixed by very-low-energy ball milling at 0.1 wt.% and 0.2 wt.% graphene. The mixed powder was directly subjected to the powder bed fusion process, an AM technique, to produce composite samples along with a bare counterpart. The effects of graphene on porosity, microstructure, and mechanical properties were examined in this study. The volumetric distribution of pores was observed under X-ray computed tomography (CT). On the basis of relative density measurement by X-ray CT, it was observed that porosity increased after graphene addition, and the pore morphology also transformed from spherical pores to enlarged flaky pores due to improper melting of the composite powder. Furthermore, the microstructure suggests grain refinement after graphene addition. The columnar grains were able to cross the melt pool boundaries in the bare sample, unlike in the composite samples. Smaller columnar grains formed in the composites due to heterogeneous nucleation by graphene platelets during solidification. The tensile properties were affected by the induced porosity irrespective of graphene reinforcement. The optimized tensile properties were achieved at 0.1 wt.% graphene: the increments in yield strength and ultimate tensile strength were 22% and 10%, respectively, for the 0.1 wt.% graphene-reinforced sample in comparison to the bare counterpart, while elongation decreased by 20% for the same sample. The hardness indentations were taken mostly on the solid region in order to avoid the collapse of the pores.
The hardness of the composite increased progressively with graphene content; around 30% increment in hardness was achieved after the addition of 0.2 wt.% graphene. Therefore, it can be concluded that powder bed fusion can be adopted as a suitable technique to develop graphene-reinforced AlSi10Mg composites, though some further process modification is required to avoid the porosity induced by the addition of graphene, which can be addressed in future work.
Keywords: graphene, hardness, porosity, powder bed fusion, tensile properties
Procedia PDF Downloads 128
485 Future Research on the Resilience of Tehran’s Urban Areas Against Pandemic Crises Horizon 2050
Authors: Farzaneh Sasanpour, Saeed Amini Varaki
Abstract:
Resilience is an important goal for cities, as urban areas face an increasing range of challenges in the 21st century; given the characteristics of these risks, adopting an approach that responds to sensitive conditions in the risk management process is therefore essential to urban resilience. Meanwhile, most resilience assessments have dealt with natural hazards, and less attention has been paid to pandemics. In the Covid-19 pandemic, Iran, and especially the metropolis of Tehran, was not immune from the crisis caused by its effects and consequences and faced many challenges. One of the methods that can increase the resilience of the Tehran metropolis against possible future crises is futures studies. This research is applied in type. The general pattern of the research is descriptive-analytical, and since it seeks to relate the components, provide urban resilience indicators for pandemic crises, and explain the scenarios, its futures-studies method is exploratory. In order to extract and determine the key factors and driving forces affecting the resilience of Tehran's urban areas against pandemic crises (Covid-19), cross-impact structural analysis and the MICMAC software were used. On the basis of this analysis, the primary factors and variables affecting the resilience of Tehran's urban areas were organized into five main factors, including physical-infrastructural (transportation, spatial and physical organization, streets and roads, multi-purpose development), with 39 variables in total. Finally, the key factors and variables were categorized into five main areas: managerial-institutional, with five variables; technological (smartness), with three variables; economic, with two variables; socio-cultural, with three variables; and physical-infrastructural, with seven variables.
These key factors and driving forces affecting the resilience of Tehran's urban areas against pandemic crises (Covid-19) were then used in explaining and developing scenarios. To develop the scenarios, intuitive logic, scenario planning as a futures-research method, and the Global Business Network (GBN) model were used. Finally, four scenarios were drawn and selected, named creatively using the metaphor of weather conditions, each sketching the general condition of the Tehran metropolis in that situation: 1) the solar scenario (favorable governance and management, leader in smart technology); 2) the cloud scenario (favorable governance and management, follower in smart technology); 3) the dark scenario (unfavorable governance and management, leader in smart technology); 4) the storm scenario (unfavorable governance and management, follower in smart technology). The solar scenario shows the best situation and the storm scenario the worst for the Tehran metropolis. According to the findings of this research, city managers can work toward a better tomorrow for the Tehran metropolis by applying futures-research methods to all the factors and components of urban resilience against pandemic crises: forming a coherent picture with the long-term horizon of 2050, charting a path for the urban resilience movement, and building platforms for upgrading and increasing the capacity to deal with crises, so as to enable the realization, development, and evolution of Tehran's urban areas in a way that guarantees long-term balance and stability in all dimensions and at all levels.
Keywords: future research, resilience, crisis, pandemic, covid-19, Tehran
Procedia PDF Downloads 67
484 An Integrated Real-Time Hydrodynamic and Coastal Risk Assessment Model
Authors: M. Reza Hashemi, Chris Small, Scott Hayward
Abstract:
The Northeast Coast of the US faces damaging effects of coastal flooding and winds due to Atlantic tropical and extratropical storms each year. Historically, several large storm events have produced substantial levels of damage to the region, the most notable of which were the Great Atlantic Hurricane of 1938, Hurricane Carol, Hurricane Bob, and recently Hurricane Sandy (2012). The objective of this study was to develop an integrated modeling system that could be used as a forecasting/hindcasting tool to evaluate and communicate the risk coastal communities face from these coastal storms. This modeling system utilizes the ADvanced CIRCulation (ADCIRC) model for storm surge predictions and the Simulating Waves Nearshore (SWAN) model for the wave environment. These models were coupled, passing information to each other and computing over the same unstructured domain, allowing for the most accurate representation of the physical storm processes. The coupled SWAN-ADCIRC model was validated and has been set up to perform real-time forecast simulations (as well as hindcasts). Modeled storm parameters were then passed to a coastal risk assessment tool. This tool, which is generic and universally applicable, generates spatial structural damage estimate maps on an individual-structure basis for an area of interest. The required inputs for the coastal risk model included detailed information about the individual structures, inundation levels, and wave heights for the selected region. Additionally, calculation of wind damage to structures was incorporated. The integrated coastal risk assessment system was then tested and applied to Charlestown, a small, vulnerable coastal town along the southern shore of Rhode Island. The modeling system was applied to Hurricane Sandy and a synthetic storm. In both storm cases, the effect of natural dunes on coastal risk was investigated.
The resulting damage maps for the area (Charlestown) clearly showed that the dune-eroded scenarios affected more structures and increased the estimated damage. The system was also tested in forecast mode for a large Nor'easter, Stella (March 2017); the results showed good performance of the coupled model in forecast mode when compared with observations. Finally, the nearshore model XBeach was nested within this regional grid (ADCIRC-SWAN) to simulate nearshore sediment transport processes and coastal erosion. Hurricane Irene (2011) was used to validate XBeach on the basis of a unique beach profile dataset for the region. XBeach showed relatively good performance, being able to estimate eroded volumes along the beach transects with a mean error of 16%. The validated model was then used to analyze the effectiveness of several erosion mitigation methods recommended in a recent study of coastal erosion in New England: beach nourishment, coastal banks (engineered core), submerged breakwaters, and artificial surfing reefs. It was shown that beach nourishment and coastal banks performed better in mitigating shoreline retreat and coastal erosion.
Keywords: ADCIRC, coastal flooding, storm surge, coastal risk assessment, living shorelines
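An illustrative sketch (not from the paper) of one common way to express the kind of mean error quoted for the XBeach eroded-volume validation: a mean absolute percentage error across paired transects. The transect volumes below are hypothetical.

```python
def mean_percentage_error(observed, modeled):
    """Mean absolute percentage error across paired transect volumes."""
    errors = [abs(m - o) / o for o, m in zip(observed, modeled)]
    return 100.0 * sum(errors) / len(errors)

# Hypothetical eroded volumes (m^3 per metre of beach) at four transects
observed = [12.0, 8.0, 15.0, 10.0]
modeled = [10.8, 8.8, 13.5, 11.0]

print(round(mean_percentage_error(observed, modeled), 1))  # 10.0
```

The abstract does not specify which error metric was used, so this is only one plausible definition; error metrics for eroded volume can also be volume-weighted or signed.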
Procedia PDF Downloads 116
483 Prevalence of Foodborne Pathogens in Pig and Cattle Carcass Samples Collected from Korean Slaughterhouses
Authors: Kichan Lee, Kwang-Ho Choi, Mi-Hye Hwang, Young Min Son, Bang-Hun Hyun, Byeong Yeal Jung
Abstract:
Recently, food safety authorities worldwide have been strengthening food hygiene in order to curb foodborne illness outbreaks. The hygiene status of Korean slaughterhouses has been monitored annually by the Animal and Plant Quarantine Agency and provincial governments through foodborne pathogen investigations using slaughtered pig and cattle meats. This study presents the prevalence of foodborne pathogens from 2014 to 2016 in Korean slaughterhouses. Sampling, microbiological examinations, and analysis of results were performed in accordance with the ‘Processing Standards and Ingredient Specifications for Livestock Products’. In total, swab samples from 337 pig carcasses (100 samples in 2014, 135 samples in 2015, 102 samples in 2016) and 319 cattle carcasses (100 samples in 2014, 119 samples in 2015, 100 samples in 2016) from twenty slaughterhouses were examined for Listeria monocytogenes, Campylobacter jejuni, Campylobacter coli, Salmonella spp., Staphylococcus aureus, Clostridium perfringens, Yersinia enterocolitica, Escherichia coli O157:H7, and non-O157 enterohemorrhagic E. coli (EHEC, serotypes O26, O45, O103, O104, O111, O121, O128 and O145) as foodborne pathogens. The samples were analyzed using cultural and PCR-based methods. Foodborne pathogens were isolated in 78 (23.1%) out of 337 pig samples. In 2014, S. aureus (n=17) was predominant, followed by Y. enterocolitica (n=7), C. perfringens (n=2), and L. monocytogenes (n=2). In 2015, C. coli (n=14) was the most prevalent, followed by L. monocytogenes (n=4), S. aureus (n=3), and C. perfringens (n=2). In 2016, S. aureus (n=16) was the most prevalent, followed by C. coli (n=13), L. monocytogenes (n=2), and C. perfringens (n=1). In the case of cattle carcasses, foodborne bacteria were detected in 41 (12.9%) out of 319 samples. In 2014, S. aureus (n=16) was the most prevalent, followed by Y. enterocolitica (n=3), C. perfringens (n=3), and L. monocytogenes (n=2). In 2015, L. monocytogenes was isolated from four samples, S. aureus from three, and C. perfringens, Y. enterocolitica, and Salmonella spp. from one each. In 2016, L. monocytogenes (n=6) was the most prevalent, followed by C. perfringens (n=3) and C. jejuni (n=1). It was found that 10 carcass samples (4 cattle and 6 pigs) were contaminated with two of the bacterial pathogens tested. Interestingly, foodborne pathogens were detected more frequently in pig carcasses than in cattle carcasses. Although S. aureus was predominantly detected in this study, other foodborne pathogens were also isolated from slaughtered meats. The results of this study highlight the risk of foodborne pathogen infection for humans from slaughtered meats; therefore, the authors maintain that it is important to enhance the hygiene level of slaughterhouses according to Hazard Analysis and Critical Control Point (HACCP) principles.
Keywords: carcass, cattle, foodborne, Korea, pathogen, pig
Procedia PDF Downloads 344
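A minimal sketch reproducing the prevalence percentages reported in the abstract above: positive detections over total carcass samples, per species. Only the arithmetic is shown; the counts are those given in the abstract.

```python
def prevalence_pct(positive, total):
    """Prevalence as a percentage, rounded to one decimal place."""
    return round(100.0 * positive / total, 1)

pig_prevalence = prevalence_pct(78, 337)     # pathogens found in 78 of 337 pig samples
cattle_prevalence = prevalence_pct(41, 319)  # pathogens found in 41 of 319 cattle samples

print(pig_prevalence, cattle_prevalence)  # 23.1 12.9
```

Both values match the 23.1% (pig) and 12.9% (cattle) figures stated in the abstract.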