Search results for: source and target
6375 Bacillus licheniformis sp. nov. PS-6, an Arsenic-Tolerant Bacterium with Biotransforming Potential Isolated from Sediments of the Pichavaram Mangroves of South India
Authors: Padmanabhan D, Kavitha S
Abstract:
The purpose of this study was to investigate the arsenic resistance of indigenous microflora and their ability to utilize arsenic species from arsenic-containing water sources. PS-6, a potential arsenic-tolerant bacterium, was screened from thirty isolates from the Pichavaram Mangroves of India; it tolerated up to 1000 mg/l of As(V) and 800 mg/l of As(III) and utilized 98% of As(V) and 97% of As(III) at initial concentrations of 3-5 mg/l within 48 hrs. The optimum pH and temperature were found to be ~7-7.4 and 37°C. Active growth of PS-6 in minimal salt medium (MSB) supports cost-effective biomass production. Dry weight analysis of PS-6 showed a significant difference in biomass when exposed to As(III) and As(V). Protein-level studies of PS-6 after exposure to As(V) and As(III) showed changes in total protein concentration and variation in the SDS-PAGE pattern. PS-6 was identified as Bacillus licheniformis based on partial 16S rRNA sequencing using NCBI BLAST. Further investigation will help establish this potential bacterium as a well-grounded resource for urgent remediation needs.
Keywords: arsenite, arsenate, Bacillus licheniformis, utilization
Procedia PDF Downloads 406
6374 Exo-III Assisted Amplification Strategy through Target Recycling for Hg²⁺ Detection in Water: A GNP-Based Label-Free Colorimetry Employing a T-Rich Hairpin-Loop Metallobase
Authors: Abdul Ghaffar Memon, Xiao Hong Zhou, Yunpeng Xing, Ruoyu Wang, Miao He
Abstract:
Due to the deleterious environmental and health effects of Hg²⁺ ions, various online detection methods, apart from traditional analytical tools, have been developed by researchers. Biosensors, especially labeled, label-free, colorimetric, and optical sensors, have advanced toward sensitive detection. However, a gap remains in ultrasensitive quantification, as noise interferes significantly, especially in AuNP-based label-free colorimetry. This study reports an amplification strategy using the Exo-III enzyme for target recycling of Hg²⁺ ions in a T-rich hairpin-loop metallobase label-free colorimetric nanosensor, with improved sensitivity using unmodified gold nanoparticles (uGNPs) as the indicator. The two T-rich metallobase hairpin-loop structures, 5'-CTT TCA TAC ATA GAA AAT GTA TGT TTG-3' (HgS1) and 5'-GGC TTT GAG CGC TAA GAA A TA GCG CTC TTT G-3' (HgS2), were tested in the study. The thermodynamic properties of HgS1 and HgS2 were calculated using online tools (http://biophysics.idtdna.com/cgi-bin/meltCalculator.cgi). Lab-scale synthesized uGNPs were utilized in the analysis. The DNA sequences have T-rich bases at both tail ends, which in the presence of Hg²⁺ form T-Hg²⁺-T mismatches, promoting the formation of dsDNA. Subsequent Exo-III incubation enables the enzyme to cleave mononucleotides stepwise from the 3' end until the structure becomes single-stranded. These ssDNA fragments then adsorb onto the surface of the AuNPs and protect them from salt-induced aggregation. The visible change in color between blue (aggregated state in the absence of Hg²⁺) and pink (dispersed state in the presence of Hg²⁺, with adsorption of ssDNA fragments) can be observed and analyzed by UV spectrometry. An ultrasensitive quantitative nanosensor employing Exo-III-assisted target recycling of mercury ions through label-free colorimetry, with nanomolar detection using uGNPs, has been achieved and is under further optimization to reach the picomolar range by avoiding interference from the environmental matrix. The proposed strategy will contribute toward uGNP-based ultrasensitive, rapid, on-site, label-free colorimetric detection.
Keywords: colorimetric, Exo-III, gold nanoparticles, Hg²⁺ detection, label-free, signal amplification
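For illustration, here is a minimal Python sketch of the kind of melting-temperature estimate behind the hairpin design, using the simple Wallace rule (Tm = 2(A+T) + 4(G+C)) as a rough stand-in for the nearest-neighbor IDT melt calculator cited above; only the two sequences are taken from the abstract.

# Wallace-rule Tm estimate (degrees C); valid only as a rough guide for
# short oligonucleotides, unlike the nearest-neighbor model the authors used.
def wallace_tm(seq: str) -> int:
    seq = seq.upper().replace(" ", "")
    at = seq.count("A") + seq.count("T")
    gc = seq.count("G") + seq.count("C")
    return 2 * at + 4 * gc

hgs1 = "CTT TCA TAC ATA GAA AAT GTA TGT TTG"
hgs2 = "GGC TTT GAG CGC TAA GAA A TA GCG CTC TTT G"
for name, seq in [("HgS1", hgs1), ("HgS2", hgs2)]:
    print(f"{name}: length={len(seq.replace(' ', ''))} nt, Tm ~ {wallace_tm(seq)} C")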
Procedia PDF Downloads 311
6373 Fabrication of Antimicrobial Dental Model Using Digital Light Processing (DLP) Integrated with 3D-Bioprinting Technology
Authors: Rana Mohamed, Ahmed E. Gomaa, Gehan Safwat, Ayman Diab
Abstract:
Background: Bio-fabrication is a multidisciplinary research field that combines principles, fabrication techniques, and protocols from several different fields. The open-source-software movement supports the use of open-source licenses for some or all software, as part of the broader notion of open collaboration. Additive manufacturing (AM) is the concept behind 3D printing, a manufacturing method that builds parts layer by layer from computer-aided designs (CAD). Several types of AM systems are used, and they can be categorized by the type of process involved. One of these AM technologies is digital light processing (DLP), a 3D printing technology used to rapidly cure a photopolymer resin into hard scaffolds. DLP uses a projected light source to cure (harden or crosslink) an entire layer at once. Current applications of DLP are focused on dental and medical uses. Further developments in this field have led to the revolutionary field of 3D bioprinting. The open-source movement was started to spread the concept of open-source software and to provide software or hardware that is cheaper, more reliable, and of better quality. Objective: To modify a desktop 3D printer into a 3D bioprinter and to integrate DLP technology with bio-fabrication to produce an antibacterial dental model. Method: A desktop 3D printer was modified into a 3D bioprinter. Gelatin and sodium alginate hydrogels were prepared at different concentrations. Rhizomes of Zingiber officinale, flower buds of Syzygium aromaticum, and bulbs of Allium sativum were extracted, and the extracts were prepared at different levels (powder, aqueous extracts, total oils, and essential oils) for antibacterial assays. The agar well diffusion method with E. coli was used to perform the sensitivity test for the antibacterial activity of the Zingiber officinale, Syzygium aromaticum, and Allium sativum extracts. Lastly, DLP printing was performed to produce several dental models combining the natural extracts with hydrogel to represent and simulate hard and soft tissues. Result: The desktop 3D printer was modified into a 3D bioprinter using the open-source software Marlin and custom-made 3D-printed parts. Sodium alginate and gelatin hydrogels were prepared at 5% (w/v), 10% (w/v), and 15% (w/v). The resin was integrated with the natural extracts of Zingiber officinale rhizome, Syzygium aromaticum flower buds, and Allium sativum bulbs at 1-3% for each extract. Finally, the antimicrobial dental model, exhibiting antimicrobial activity, was printed and merged with the sodium alginate hydrogel. Conclusion: The open-source movement succeeded in modifying and producing a low-cost desktop 3D bioprinter, showing the potential for further enhancement in this scope. Additionally, integrating DLP technology with bioprinting is a promising step toward exploiting the antimicrobial activity of natural products.
Keywords: 3D printing, 3D bio-printing, DLP, hydrogel, antibacterial activity, zingiber officinale, syzygium aromaticum, allium sativum, panax ginseng, dental applications
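As a side note, the %(w/v) figures above translate into masses as in this minimal Python sketch; the 50 mL batch volume is a hypothetical example, not a value from the study.

# An X% (w/v) solution contains X grams of solute per 100 mL of solvent.
def grams_for_wv(percent_wv: float, volume_ml: float) -> float:
    """Grams of powder needed for a given %(w/v) and batch volume."""
    return percent_wv * volume_ml / 100.0

for pct in (5, 10, 15):  # alginate/gelatin concentrations used in the study
    print(f"{pct}% (w/v) in 50 mL -> {grams_for_wv(pct, 50):.1f} g")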
Procedia PDF Downloads 94
6372 Secondary Radiation in Laser-Accelerated Proton Beamline (LAP)
Authors: Seyed Ali Mahdipour, Maryam Shafeei Sarvestani
Abstract:
Radiation pressure acceleration (RPA) and target normal sheath acceleration (TNSA) are the most important methods in laser-accelerated proton (LAP) beam planning systems. LAP has inspired novel applications that can benefit from proton bunch properties different from those of conventionally accelerated proton beams. The secondary neutrons and photons produced in collisions of protons with beamline components are an important concern in proton therapy. Various published Monte Carlo studies have evaluated beamline and shielding considerations for the TNSA method, but no studies directly address secondary neutron and photon production from the RPA method in LAP. The purpose of this study is to calculate the flux distribution of secondary neutron and photon radiation in the first area of the LAP and to determine the optimal thickness and radius of the energy selector in an RPA-based LAP planning system. We also present Monte Carlo calculations to determine an appropriate beam pipe for shielding a LAP planning system. The GEANT4 Monte Carlo toolkit has been used to simulate secondary radiation production in the LAP. A section of a new multifunctional LAP beamline, based on the pulsed-power solenoid scheme, has been modeled in the GEANT4 toolkit. The results show that the energy selector is the most important source of secondary neutrons and photons in the LAP beamline. According to the calculations, a pure tungsten energy selector is not the proper choice; using tungsten+polyethylene or tungsten+graphite composite selectors reduces the neutron and photon intensities by approximately ~10% and ~25%, respectively. The optimal radii of the energy selectors were found to be ~4 cm and ~6 cm for 3-degree and 5-degree proton deviation angles, respectively.
Keywords: neutron, photon, flux distribution, energy selector, GEANT4 toolkit
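To illustrate the kind of material comparison reported above, the following toy Python Monte Carlo sketch estimates slab transmission from exponential attenuation; the attenuation lengths and thickness are hypothetical placeholders, not GEANT4 physics or the authors' geometry.

# Toy 1-D Monte Carlo: a particle survives the slab if its sampled free path
# exceeds the slab thickness (equivalent to exp(-t/lambda) analytically).
import random

def transmitted_fraction(thickness_cm, atten_len_cm, n=100_000, seed=1):
    rng = random.Random(seed)
    survived = sum(1 for _ in range(n)
                   if rng.expovariate(1.0 / atten_len_cm) > thickness_cm)
    return survived / n

# Hypothetical attenuation lengths (cm) for a secondary-neutron component.
for label, lam in [("pure W", 9.0), ("W + polyethylene", 7.5), ("W + graphite", 8.2)]:
    print(label, transmitted_fraction(thickness_cm=10.0, atten_len_cm=lam))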
Procedia PDF Downloads 104
6371 The Importance of Jewish Influence on the Foundation of the Manichaean Philosophical and Religious System
Authors: Tatyana Suvorkina
Abstract:
It is indisputable that the problem of the origin of Manichaeism is very complex. Manichaeism is characterized as a syncretic religion influenced by many teachings, but it is difficult to single out one that can be called fundamental. The aim of this paper is to regard the Jewish apocalyptic tradition as one of the defining sources of the formation of the Manichaean system. To realize this aim, a comparison of the Manichaean texts and the Jewish apocryphal literature is made. Consideration is given first to the Coptic Manichaean treatise Kephalaia, the Cologne Mani Codex, and the books of Enoch. This article does not deny that Manichaeism was influenced by different doctrines and that, passing through the centuries, it could adapt and deepen these influences. But the fact that the Judeo-Christian environment in which Mani grew up, and in which the first sprouts of his teaching formed, had an impact on the future prophet seems obvious. Nevertheless, attempts to analyze the system of Mani within the Jewish tradition are quite rare, although such studies have been carried out for Gnosticism. Yet Manichaeism, despite the Gnostic features it contains, is not merely 'one of the Gnostic currents' to be placed under this term among the rest. Frequently, Gnostic currents are pointed out as the main sources of the formation of Mani's teachings. But it seems possible that Mani's interest in Gnosticism was motivated by the fact that he considered it close to the interpretation of Hebrew texts that he himself aspired to undertake. The question of understanding the Manichaean system is connected not only with Manichaeism but also with other dualistic teachings that contemporaries recognized as Manichaean. The polemics between Manichaeans and a Hellenized Christianity that had separated from Judaism, and continued to separate with every century, were polemics between adherents of two initially different worldviews that nevertheless had a common source. Therefore, an analysis of the controversy in the context of each disputing party's interpretation of this common source is seen as very important for further study.
Keywords: dualism, Jewish apocalypticism, Manichaeism, syncretism
Procedia PDF Downloads 186
6370 Sharing Tacit Knowledge: The Essence of Knowledge Management
Authors: Ayesha Khatun
Abstract:
In the 21st century, where markets are unstable, technologies proliferate rapidly, competitors multiply, products and services become obsolete almost overnight, and customers demand low-cost, high-value products, leveraging and harnessing knowledge is not just a potential source of competitive advantage but a necessity in technology-based and information-intensive industries. Knowledge management focuses on leveraging the available knowledge and sharing it among the individuals in the organization so that employees can make the best use of it toward achieving organizational goals. Knowledge is not a discrete object. It is embedded in people and is so difficult to transfer outside the immediate context that it becomes a major competitive advantage. Internal transfer of knowledge among employees is therefore essential to maximize the use of knowledge available in the organization in an unstructured manner. But as knowledge is the source of competitive advantage for the organization, it is also the source of competitive advantage for individuals. People think that knowledge is power and that sharing it may cost them their competitive position. Moreover, the very nature of tacit knowledge poses many difficulties for sharing it. Yet sharing tacit knowledge is the vital part of the knowledge management process, because it is tacit knowledge that is inimitable. Knowledge management has been made synonymous with the use of software and technology, leading to the management of explicit knowledge only, while ignoring personal interaction and the formation of informal networks, which are considered the most successful means of sharing tacit knowledge. Factors responsible for effective sharing of tacit knowledge can be grouped into individual, organizational, and technological factors, and different factors under each category have been identified. Creating a positive organizational culture, encouraging personal interaction, and practicing a reward system are some of the strategies that can help overcome many of the barriers to effective sharing of tacit knowledge. The methodology applied here is entirely secondary: an extensive review of the relevant literature has been undertaken for the purpose.
Keywords: knowledge, tacit knowledge, knowledge management, sustainable competitive advantage, organization, knowledge sharing
Procedia PDF Downloads 398
6369 Optimization of Ethanol Extract of Gotu Kola and Majapahit Composition as a Natural Antioxidant Source
Authors: Mustofa Ahda, Fiqri Rozi, Gina Noor Habibah, Mas Ulfah Lestari, Tomy Hardianto, Yuni Andriani
Abstract:
The development of natural antioxidants from Centella asiatica (gotu kola) and Majapahit (Crescentia cujete) holds great potential. This research optimized the composition of ethanol extracts of Centella asiatica and Majapahit leaves as an antioxidant source by measuring DPPH free-radical scavenging activity. The results showed that the ethanol extracts of both Centella asiatica and Majapahit leaves have a measurable total phenol content, shown by their ability to reduce the Folin-Ciocalteu reagent to its blue form. The extract composition Centella asiatica : Majapahit leaves = 30:70 showed the best DPPH free-radical scavenging activity compared with either ethanol extract alone. The IC50 value for this 30:70 composition is 0.103 mg/mL.
Keywords: antioxidant activity, Centella asiatica, Crescentia cujete, composition extract
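For context, the following minimal Python sketch shows how a DPPH IC50 such as the 0.103 mg/mL above is typically derived from absorbance readings; the absorbance values here are hypothetical illustration data, not the study's measurements.

# Percent inhibition is computed from control vs. sample absorbances, then the
# concentration giving 50% inhibition is interpolated from a linear fit.
import numpy as np

conc = np.array([0.025, 0.05, 0.1, 0.2])       # extract concentration, mg/mL
a_control = 0.80                                # DPPH absorbance without extract
a_sample = np.array([0.68, 0.58, 0.41, 0.18])   # hypothetical absorbances

inhibition = (a_control - a_sample) / a_control * 100.0
slope, intercept = np.polyfit(conc, inhibition, 1)
ic50 = (50.0 - intercept) / slope
print(f"IC50 ~ {ic50:.3f} mg/mL")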
Procedia PDF Downloads 329
6368 In Silico Analysis of Salivary miRNAs to Identify the Diagnostic Biomarkers for Oral Cancer
Authors: Andleeb Zahra, Itrat Rubab, Sumaira Malik, Amina Khan, Muhammad Jawad Khan, M. Qaiser Fatmi
Abstract:
Oral squamous cell carcinoma (OSCC) is one of the most common cancers worldwide. Recent studies have highlighted the role of miRNAs in disease pathology, indicating their potential use in early diagnostic tools. miRNAs are small, single-stranded, non-coding RNAs that regulate gene expression by degrading mRNAs or repressing their translation. miRNAs play important roles in modifying various cellular processes such as cell growth, differentiation, apoptosis, and immune response. Dysregulated expression of miRNAs is known to affect cell growth, and miRNAs may thereby function as tumor suppressors or oncogenes in various cancers. Objectives: The main objectives of this study were to characterize the extracellular miRNAs involved in oral cancer (OC) to assist early detection, and to propose a list of genes that can potentially be used as biomarkers of OC. We used gene expression microarray data already available in the literature. Materials and Methods: In the first step, a total of 318 miRNAs involved in oral carcinoma were shortlisted, followed by prediction of their target genes. Simultaneously, the differentially expressed genes (DEGs) of oral carcinoma from all experiments were identified. The genes common to the experimentally derived DEG lists of OC and the target genes of each miRNA were then identified; these common genes are the targets of the specific miRNAs involved in OC. Finally, a list of genes was generated that may be used as biomarkers of OC. Results and Conclusion: The results include several cancer pathways, showing the change in gene expression under the control of specific miRNAs. Ingenuity Pathway Analysis (IPA) provided a list of major biomarkers such as CDH2 and CDK7, and functional enrichment analysis identified the role of miRNAs in major pathways affected by cancer, such as the cell adhesion molecules pathway. We observed that at least 25 genes are regulated by the maximum number of miRNAs, and thereby they can be used as biomarkers of OC. Further experiments are required to better understand the role of miRNAs with respect to their target genes; our study provides a platform for understanding the miRNA-OC relationship at the genomics level.
Keywords: biomarkers, gene expression, miRNA, oral carcinoma
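The intersection step described above can be sketched in a few lines of Python; the gene and miRNA names below are hypothetical placeholders, except CDH2 and CDK7, which appear in the abstract.

# Targets predicted for each miRNA are intersected with the experimental DEG
# list; genes hit by many miRNAs are ranked as biomarker candidates.
from collections import Counter

degs = {"CDH2", "CDK7", "TP53", "EGFR", "MMP9"}          # DEGs from microarrays
mirna_targets = {
    "miR-21":  {"CDH2", "TP53", "PTEN"},
    "miR-31":  {"CDH2", "CDK7", "EGFR"},
    "miR-125": {"CDK7", "MMP9", "CDH2"},
}

hits = Counter()
for mirna, targets in mirna_targets.items():
    for gene in targets & degs:   # targets that are also differentially expressed
        hits[gene] += 1

# Genes regulated by the most miRNAs are proposed as biomarkers of OC.
print(hits.most_common())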
Procedia PDF Downloads 375
6367 Quantification and Identification of the Main Components of the Biomass of the Microalga Scenedesmus sp. – Prospection of Molecules of Commercial Interest
Authors: Carolina V. Viegas, Monique Gonçalves, Gisel Chenard Diaz, Yordanka Reyes Cruz, Donato Alexandre Gomes Aranda
Abstract:
To develop the massive cultivation of microalgae, it is necessary to isolate and characterize the species, improving genetic tools in search of specific characteristics. Therefore, the detection, identification, and quantification of the compounds composing Scenedesmus sp. were prerequisites to verify the potential of this microalga. The main objective of this work was to characterize Scenedesmus sp. in terms of ash, carbohydrate, protein, and lipid content, as well as to determine the composition of its lipid classes and main fatty acids. The biomass of Scenedesmus sp. showed 15.29 ± 0.23% ash, with CaO (36.17%) as the main component of this fraction. The total protein and carbohydrate contents of the biomass were 40.74 ± 1.01% and 23.37 ± 0.95%, respectively, proving it to be a potential source of proteins as well as of carbohydrates for ethanol production via fermentation. The lipid contents extracted via Bligh & Dyer and via in situ saponification were 8.18 ± 0.13% and 4.11 ± 0.11%, respectively. In the lipid extracts obtained via Bligh & Dyer, approximately 50% of the fraction consists of fatty compounds, while the other half is an unsaponifiable fraction composed mainly of chlorophylls, phytosterols, and carotenes. Despite the lower yield, in situ saponification achieved a selectivity of 92.14% for fatty components (fatty acids and fatty esters), confirmed by infrared spectroscopy. The presence of polyunsaturated acids (~45%) in the lipid extracts indicates the potential of this fraction as a source of nutraceuticals. The results indicate that the biomass of Scenedesmus sp. is a promising source of polyunsaturated fatty acids, carotenoids, and proteins, and allows the simultaneous recovery of different compounds of high commercial value.
Keywords: microalgae, Desmodesmus, lipid classes, fatty acid profile, proteins, carbohydrates
Procedia PDF Downloads 97
6366 Development of Alpha Spectroscopy Method with Solid State Nuclear Track Detector Using Aluminium Thin Films
Authors: Nidal Dwaikat
Abstract:
This work presents the development of an alpha spectroscopy method with solid-state nuclear track detectors using aluminum thin films. The resolution of this method is high, and it is able to discriminate between alpha particles at different incident energies. It can measure the exact number of alpha particles at a specific energy without needing a calibration of alpha track diameter versus alpha energy. The method was tested using a Cf-252 alpha standard source at energies of 5.11 MeV, 3.86 MeV, and 2.7 MeV, produced by varying the detector-source distance. On the front side, two detectors were covered with two aluminum thin films, and the third detector was kept uncovered. The thicknesses of the aluminum thin films were selected carefully (using SRIM 2013) such that one film blocks the two lower-energy alpha particles (3.86 MeV and 2.7 MeV) while alpha particles at the higher energy (5.11 MeV) penetrate it and reach the detector surface. The second thin film blocks only the alpha particles at the lowest energy, 2.7 MeV, and allows alpha particles at the two higher energies (5.11 MeV and 3.86 MeV) to penetrate and produce tracks. On the uncovered detector, alpha particles at all three energies produce tracks. For quality assurance and accuracy, the detectors were mounted on sufficiently thick copper substrates to block exposure from the backside. The tracks on the first detector are due to alpha particles at 5.11 MeV. The difference between the track counts on the second and first detectors is due to alpha particles at 3.86 MeV. Finally, by subtracting the track count on the second detector from that on the third (uncovered) detector, we find the tracks due to alpha particles at 2.7 MeV. Once the efficiency calibration factor is known, the activity of the standard source can be calculated exactly.
Keywords: aluminium thin film, alpha particles, copper substrate, CR-39 detector
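The subtraction scheme can be summarized in a short Python sketch; the track counts below are hypothetical illustration values, not measured data.

# D1's film blocks the two lower energies, D2's film blocks only 2.7 MeV,
# and D3 is uncovered, so simple differences recover the per-energy counts.
d1 = 1520          # tracks from 5.11 MeV alphas only
d2 = 2890          # tracks from 5.11 + 3.86 MeV alphas
d3 = 4130          # tracks from all three energies

n_511 = d1
n_386 = d2 - d1
n_270 = d3 - d2
print(f"5.11 MeV: {n_511}, 3.86 MeV: {n_386}, 2.70 MeV: {n_270}")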
Procedia PDF Downloads 365
6365 Loading and Unloading Scheduling Problem in a Multiple-Multiple Logistics Network: Modelling and Solving
Authors: Yasin Tadayonrad
Abstract:
Most supply chain networks have many nodes, from the suppliers' side to the customers' side, where each node sends raw materials/products to, and receives them from, other nodes. One of the major concerns in this kind of supply chain network is finding the best schedule for loading/unloading the shipments through the whole network, by which all the constraints in the source and destination nodes are met and all the shipments are delivered on time. One of the main constraints in this problem is the loading/unloading capacity of each source/destination node in each time slot (e.g., per week/day/hour). Because different products or groups of products have different characteristics, the capacity of each node may differ by product group. In most supply chain networks (especially in the fast-moving consumer goods industry), different planners/planning teams work separately at different nodes to determine the loading/unloading time slots in the source/destination nodes for sending/receiving the shipments. In this paper, a mathematical model is proposed to find the best time slots for loading/unloading the shipments, minimizing the overall delays subject to the loading/unloading capacity of each node, the required delivery date of each shipment (considering lead times), and the working days of each node. This model was implemented in Python and solved using Python-MIP on a sample data set. Finally, a heuristic algorithm is proposed to improve the solution method, helping apply the model to larger data sets in real business cases with more nodes and shipments.
Keywords: supply chain management, transportation, multiple-multiple network, timeslots management, mathematical modeling, mixed integer programming
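A minimal Python-MIP sketch of the slot-assignment core of such a model is shown below; the data sizes and numbers are hypothetical, and the full model additionally covers unloading, lead times, and working days.

# Each shipment is loaded in exactly one time slot, per-slot capacity is
# respected, and total delay beyond each shipment's due slot is minimized.
from mip import Model, xsum, minimize, BINARY

n_ship, n_slots = 4, 6
due = [2, 1, 4, 3]          # due slot of each shipment (hypothetical)
cap = [2, 1, 2, 2, 1, 2]    # loading capacity of the node per slot

m = Model()
x = [[m.add_var(var_type=BINARY) for t in range(n_slots)] for s in range(n_ship)]

for s in range(n_ship):                       # each shipment gets one slot
    m += xsum(x[s][t] for t in range(n_slots)) == 1
for t in range(n_slots):                      # slot capacity
    m += xsum(x[s][t] for s in range(n_ship)) <= cap[t]

m.objective = minimize(xsum(max(0, t - due[s]) * x[s][t]
                            for s in range(n_ship) for t in range(n_slots)))
m.optimize()
print([[t for t in range(n_slots) if x[s][t].x >= 0.99] for s in range(n_ship)])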
Procedia PDF Downloads 91
6364 Translating Discourse Organization Structures Used in Chinese and English Scientific and Engineering Writings
Authors: Ming Qian, Davis Qian
Abstract:
This study compares the different organization structures of Chinese and English writing discourses in the engineering and scientific fields and recommends approaches for translators to convert these organization structures properly. Based on the existing intercultural communication literature, English authors tend to deductively state their main points at the beginning, followed by detailed explanations or arguments, while Chinese authors tend to place their main points inductively toward the end. In this study, this hypothesis has been verified by the authors' Chinese-to-English translation experience in the fields of science and engineering (e.g., journal papers, conference papers, and monographs). The basic methodology is a comparison of writings by Chinese authors with writings on the same or similar topics by English authors, in terms of organization structures. Translators should be aware of this nuance, so that instead of limiting themselves to translating the contents of an article in its original structure, they can convert the structures to bridge the cross-cultural gap. This approach can be controversial, because if a translator changes the structural organization of a paragraph (e.g., from a 'because-therefore' inductive structure used by a Chinese author to a deductive structure in English), this change of sentence order could be questioned by the original authors. For this reason, translators need to properly inform the original authors about the intercultural differences between English and Chinese writing (e.g., inductive versus deductive structure) and work with them to maintain accuracy while converting from the structure used in the source language to a structure in the target language. The authors have incorporated these methodologies into their translation practice and work closely with authors on inter-cultural organization structure mapping. Translating discourse organization structure should become a standard practice in the translation process.
Keywords: discourse structure, information structure, intercultural communication, translation practice
Procedia PDF Downloads 441
6363 The Importance of Conserving Pre-Historical, Historical and Cultural Heritage and Its Tourist Exploitation
Authors: Diego Renan G. Tudela, Veruska C. Dutra, Mary Lucia Gomes Silveira de Senna, Afonso R. Aquino
Abstract:
Tourism is at present the largest industry in the world, an important global activity that has grown considerably in recent times. In this context, cultural tourism is growing and is seen as an important source of knowledge and information enjoyed by visitors. This article aims to discuss cultural tourism, archaeological records, and indigenous communities, and the importance of preserving these invaluable sources of information, focusing on the records of the first peoples inhabiting the South American and North American lands. The study was based on discussions, theoretical studies, and bibliographical research. Archaeological records are an important source of knowledge and information. Indigenous ethnic tourism represents a rescue of the authenticity of indigenous traditional cultures and their relation to the natural habitat. Cultural and indigenous tourism requires long-term planning to make it a sustainable activity.
Keywords: tourism, culture, preservation, discussions
Procedia PDF Downloads 262
6362 Audio-Visual Co-Data Processing Pipeline
Authors: Rita Chattopadhyay, Vivek Anand Thoutam
Abstract:
Speech is the most acceptable means of communication, through which we can quickly exchange feelings and thoughts. Quite often, people can communicate orally but cannot interact with or operate computers or devices. It is easier and quicker to give speech commands than to type commands to computers. In the same way, it is easier to listen to audio played from a device than to extract output from computers or devices. Especially with robotics being an emerging market, with applications in warehouses, the hospitality industry, consumer electronics, assistive technology, etc., speech-based human-machine interaction is emerging as a lucrative feature for robot manufacturers. Considering this factor, the objective of this paper is to design an 'audio-visual co-data processing pipeline.' This pipeline integrates automatic speech recognition, a natural language model for text understanding, object detection, and text-to-speech modules. There are many deep learning models for each of these module types, but OpenVINO Model Zoo models are used because the OpenVINO toolkit covers both computer vision and non-computer-vision workloads across Intel hardware, maximizes performance, and accelerates application development. A speech command is given as input that contains information about the target objects to be detected and the start and end times for extracting the required interval from the video. Speech is converted to text using the automatic speech recognition QuartzNet model. A summary is extracted from the text using the natural language model Generative Pre-trained Transformer-3 (GPT-3). Based on the summary, essential frames are extracted from the video, and the You Only Look Once (YOLO) object detection model detects objects in these extracted frames. Frame numbers containing target objects (the objects specified in the speech command) are saved as text. Finally, this text (the frame numbers) is converted to speech using a text-to-speech model and played from the device. This project is developed for the 80 YOLO labels, and the user can extract frames based on one or two target labels; the pipeline can easily be extended to more than two target labels by making appropriate changes in the object detection module. The project supports four different speech command formats, achieved by including sample examples in the prompt used by the GPT-3 model. Based on user preference, one can create a new speech command format by including some examples of that format in the GPT-3 prompt. This pipeline can be used in many projects, such as human-machine interfaces, human-robot interaction, and surveillance through speech commands. Any object detection project can be upgraded with this pipeline so that one can give speech commands and have the output played from the device.
Keywords: OpenVINO, automatic speech recognition, natural language processing, object detection, text to speech
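A schematic, runnable Python sketch of the pipeline's control flow follows; every helper is a trivial stub standing in for the corresponding model (QuartzNet ASR, GPT-3 summarization, YOLO detection, text-to-speech), and none of these function names are real OpenVINO APIs.

def asr_quartznet(audio):                 # stub: speech -> text
    return "find person and dog between second 3 and 7"

def gpt3_summarize(text):                 # stub: text -> targets + time window
    return {"targets": {"person", "dog"}, "start": 3, "end": 7}

def yolo_detect(frame):                   # stub: frame -> set of detected labels
    return frame  # pretend each "frame" is already its set of labels

def text_to_speech(report):               # stub: would synthesize audio
    print("speaking:", report)

def run_pipeline(audio, frames):
    spec = gpt3_summarize(asr_quartznet(audio))
    window = frames[spec["start"]:spec["end"]]
    hits = [i + spec["start"] for i, f in enumerate(window)
            if spec["targets"] & yolo_detect(f)]
    text_to_speech(f"targets found in frames {hits}")

# toy "video": one label-set per frame
run_pipeline(None, [set(), {"cat"}, {"person"}, {"person", "dog"},
                    {"dog"}, set(), {"person"}, {"car"}])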
Procedia PDF Downloads 80
6361 Artificial Neural Network Modeling of a Closed Loop Pulsating Heat Pipe
Authors: Vipul M. Patel, Hemantkumar B. Mehta
Abstract:
Technological innovations in the electronics world demand novel, compact, simple-in-design, inexpensive, and effective heat transfer devices. The closed loop pulsating heat pipe (CLPHP) is a passive phase-change heat transfer device with the potential to transfer heat quickly and efficiently from source to sink. The thermal performance of a CLPHP is governed by various parameters such as the number of U-turns, orientation, input heat, working fluid, and filling ratio. The present paper attempts to predict the thermal performance of a CLPHP using an artificial neural network (ANN). Filling ratio and heat input are taken as input parameters, while thermal resistance is set as the target parameter. The types of neural networks considered are radial basis, generalized regression, linear layer, cascade-forward backpropagation, feed-forward backpropagation, feed-forward distributed time delay, layer recurrent, and Elman backpropagation networks. Linear, logistic sigmoid, tangent sigmoid, and radial basis Gaussian functions are used as transfer functions. Prediction accuracy is measured against experimental data reported in the open literature, as a function of the mean absolute relative deviation (MARD). The prediction of a generalized regression ANN model with a spread constant of 4.8 agrees with the experimental data within a MARD of ±1.81%.
Keywords: ANN models, CLPHP, filling ratio, generalized regression, spread constant
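Since a generalized regression network is essentially a Gaussian-kernel-weighted average of training targets, it can be sketched directly; the (filling ratio, heat input) -> thermal resistance data below are hypothetical, and only the spread constant of 4.8 and the MARD metric come from the abstract.

# Minimal GRNN (Nadaraya-Watson kernel regression) and the MARD metric.
import numpy as np

def grnn_predict(X_train, y_train, X_query, spread):
    preds = []
    for q in np.atleast_2d(X_query):
        d2 = np.sum((X_train - q) ** 2, axis=1)   # squared distance to training points
        w = np.exp(-d2 / (2.0 * spread ** 2))     # Gaussian kernel weights
        preds.append(np.dot(w, y_train) / np.sum(w))
    return np.array(preds)

def mard(y_true, y_pred):
    """Mean absolute relative deviation, %."""
    return 100.0 * np.mean(np.abs((y_pred - y_true) / y_true))

X = np.array([[0.3, 20], [0.5, 20], [0.5, 40], [0.7, 40]])  # hypothetical inputs
y = np.array([1.8, 1.2, 0.9, 1.1])                          # K/W, hypothetical
y_hat = grnn_predict(X, y, X, spread=4.8)
print("MARD = %.2f %%" % mard(y, y_hat))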
Procedia PDF Downloads 292
6360 Hydrothermal Liquefaction for Astaxanthin Extraction from Wet Algae
Authors: Spandana Ramisetty, Mandan Chidambaram, Ramesh Bhujade
Abstract:
Algal biomass is a potential source not only of biocrude but also of high-value chemicals like carotenoids, fatty acids, proteins, polysaccharides, vitamins, etc. Astaxanthin is one such high-value, vital carotenoid, with extensive applications in the pharmaceutical, aquaculture, poultry, and cosmetic industries and expanding use as a dietary supplement for humans. The green microalga Haematococcus pluvialis is identified as the richest natural source of astaxanthin and is the key source of commercial astaxanthin. Several extraction processes from wet and dry Haematococcus pluvialis biomass have been explored by researchers. Extraction with supercritical CO₂ and various physical disruption techniques, such as mortar and pestle, homogenization, ultrasonication, and ball milling of dried algae, are widely used extraction methods. However, these processes require energy-intensive drying of the biomass, which escalates overall costs notably. From the process economics perspective, it is vital to use wet processing technology in order to eliminate drying costs. Hydrothermal liquefaction (HTL) is a thermochemical conversion process that converts wet biomass containing over 80% water into bio-products under high-temperature, high-pressure conditions. Astaxanthin is a lipid-soluble pigment and is usually extracted along with the lipid component. Mild HTL at 200°C and 60 bar has been demonstrated by researchers in a microfluidic platform, achieving near-complete extraction of astaxanthin from wet biomass. Very limited work has been done in this field. An integrated approach of sequential HTL offers a cost-effective option to extract astaxanthin/lipids from wet algal biomass without drying the algae, while also recovering water, minerals, and nutrients. This paper reviews past work and evaluates astaxanthin extraction processes with a focus on hydrothermal extraction.
Keywords: astaxanthin, extraction, high value chemicals, hydrothermal liquefaction
Procedia PDF Downloads 307
6359 Characteristics of Tremella fuciformis and Annulohypoxylon stygium for Optimal Cultivation Conditions
Authors: Eun-Ji Lee, Hye-Sung Park, Chan-Jung Lee, Won-Sik Kong
Abstract:
We analyzed the DNA sequence of the ITS (internal transcribed spacer) region of the 18S ribosomal gene and compared it with the gene sequences of T. fuciformis and Hypoxylon sp. in the BLAST database. The sequences of the collected T. fuciformis and Hypoxylon sp. have over 99% homology with the T. fuciformis and Hypoxylon sp. sequences in the BLAST database. In order to select the optimal medium for T. fuciformis, five media were used: potato dextrose agar (PDA), mushroom complete medium (MCM), malt extract agar (MEA), yeast extract medium (YM), and compost extract dextrose agar (CDA). T. fuciformis showed the best growth on PDA, and Hypoxylon sp. showed the best growth on MCM. To investigate the optimum pH and temperature, the pH range was set to pH 4-8 and the temperature range to 15-35°C (at 5°C intervals). The optimum culture conditions for T. fuciformis growth were pH 5 at 25°C, and for Hypoxylon sp., pH 6 at 25°C. To identify the most suitable carbon source, we tested fructose, galactose, saccharose, soluble starch, inositol, glycerol, xylose, dextrose, lactose, dextrin, Na-CMC, adonitol, mannitol, mannose, maltose, raffinose, cellobiose, ethanol, salicin, glucose, and arabinose. The optimum carbon source for T. fuciformis is xylose, and for Hypoxylon sp. it is arabinose. Using the column test, we identified sawdusts suitable for T. fuciformis, since the composition of the sawdust affects the growth of T. fuciformis fruiting bodies. The sawdusts used were oak, pine, poplar, and birch, plus cottonseed meal and cottonseed hulls. In artificial cultivation of T. fuciformis on sawdust medium, T. fuciformis and Hypoxylon sp. showed fast mycelial growth on a mixture of oak sawdust, cottonseed hulls, and wheat bran.
Keywords: cultivation, optimal condition, tremella fuciformis, nutritional source
Procedia PDF Downloads 210
6358 Accelerator Mass Spectrometry Analysis of Isotopes of Plutonium in PM₂.₅
Authors: C. G. Mendez-Garcia, E. T. Romero-Guzman, H. Hernandez-Mendoza, C. Solis, E. Chavez-Lomeli, E. Chamizo, R. Garcia-Tenorio
Abstract:
Plutonium is present at different concentrations in the environment and in biological samples, related to nuclear weapons testing, nuclear waste recycling, and accidental discharges from nuclear plants. This radioisotope is considered among the most radiotoxic substances, particularly when it enters the human body through inhalation of insoluble powders or aerosols. This is the main reason for determining the concentration of this radioisotope in the atmosphere. Besides that, the ²⁴⁰Pu/²³⁹Pu isotopic ratio provides information about the origin of the source. PM₂.₅ sampling was carried out in the Metropolitan Zone of the Valley of Mexico (MZVM) from February 18th to March 17th, 2015, on quartz filters. There have been significant recent developments in methods for sample preparation and accurate measurement that detect the ultra-trace levels at which plutonium is found in the environment. Accelerator mass spectrometry (AMS) is a technique that allows detection limits of around a femtogram (10⁻¹⁵ g). The AMS determinations include the chemical isolation of Pu, involving acidic digestion and radiochemical purification using an anion exchange resin. Finally, the source is prepared when the Pu is pressed into the corresponding cathodes. According to the authors' knowledge, these aerosols showed variations of the ²³⁵U/²³⁸U ratio from the natural value, suggesting that an anthropogenic source could be altering it. The determination of the concentrations of the Pu isotopes can be a useful tool to clarify this presence in the atmosphere. The first results showed a mean ²³⁹Pu activity concentration of 280 nBq m⁻³, and the ²⁴⁰Pu/²³⁹Pu ratio of 0.025 corresponds to a weapons-production source; these results corroborate that an anthropogenic influence is increasing the concentration of radioactive material in PM₂.₅. According to the authors' knowledge, activity concentrations of ²³⁹⁺²⁴⁰Pu of around a few tens of nBq m⁻³ and ²⁴⁰Pu/²³⁹Pu ratios of 0.17 have been reported in total suspended particles (TSP). The preliminary results in the MZVM show higher activity concentrations of Pu isotopes (40 and 700 nBq m⁻³) and a lower ²⁴⁰Pu/²³⁹Pu ratio than reported; these results are in the order of the Pu activity concentrations found in high-purity weapons-grade material.
Keywords: aerosols, fallout, mass spectrometry, radiochemistry, tracer, ²⁴⁰Pu/²³⁹Pu ratio
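As a numerical aside, the measured atom ratio can be related to an activity ratio through A = λN, with λ = ln(2)/T½; the half-lives below are standard reference values, and only the 0.025 ratio comes from the study.

import math

T12_PU239 = 24110.0   # years, standard half-life of 239Pu
T12_PU240 = 6561.0    # years, standard half-life of 240Pu

atom_ratio = 0.025    # 240Pu/239Pu, the weapons-production signature reported above
# A240/A239 = (N240 * lambda240) / (N239 * lambda239) = atom_ratio * T239/T240
activity_ratio = atom_ratio * (T12_PU239 / T12_PU240)
print(f"240Pu/239Pu activity ratio ~ {activity_ratio:.3f}")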
Procedia PDF Downloads 167
6357 Catchment Nutrient Balancing Approach to Improve River Water Quality: A Case Study at the River Petteril, Cumbria, United Kingdom
Authors: Nalika S. Rajapaksha, James Airton, Amina Aboobakar, Nick Chappell, Andy Dyer
Abstract:
Nutrient pollution and its impact on water quality is a key concern in England. Many water quality issues originate from multiple sources of pollution spread across a catchment. River water quality in England has improved since the 1990s, and wastewater effluent discharged into rivers now contains less phosphorus than in the past. However, excess phosphorus is still recognised as the prevailing issue for rivers failing Water Framework Directive (WFD) good ecological status. To achieve WFD phosphorus objectives, Wastewater Treatment Works (WwTW) permit limits are becoming increasingly stringent. Nevertheless, in some rural catchments, the apportionment of phosphorus pollution can be greater from agricultural runoff and other sources such as septic tanks. Therefore, the challenge of meeting the requirements of watercourses to deliver WFD objectives often goes beyond water company activities, providing significant opportunities to co-deliver activities in the wider catchment to reduce nutrient loads at source. The aim of this study was to apply United Utilities' Catchment Systems Thinking (CaST) strategy and pilot an innovative permitting approach, catchment nutrient balancing (CNB), in a rural catchment in Cumbria (the River Petteril), in collaboration with the regulator and others, to achieve WFD objectives and multiple benefits. The study area is mainly agricultural land, predominantly livestock farms. The local ecology is impacted by significant nutrient inputs, which require intervention to meet WFD obligations. There is a range of phosphorus inputs into the river, including discharges from wastewater assets but also significant agricultural contributions. Focusing solely on the WwTW discharges would not have resolved the problem, so in order to address the issue effectively, a CNB trial was initiated at a small WwTW, targeting the removal of a total of 150 kg of phosphorus load, of which 13 kg were to be reduced through catchment interventions. Various catchment interventions were implemented across selected farms in the upstream part of the catchment, and an innovative Polonite reactive filter medium was implemented at the WwTW as an alternative to traditional phosphorus treatment methods. During the three years of this trial, the impact of the interventions in the catchment and at the treatment works was monitored. In 2020 and 2022, the trial achieved 69% and 63% reductions, respectively, in the phosphorus level in the catchment, against an initial reduction target of 9%. Phosphorus treatment at the WwTW had a significant impact on the overall load reduction. The wider catchment impact, however, was seven times greater than the initial target once wider catchment interventions were also established. While it is unlikely that all of the phosphorus load reduction was delivered exclusively by the interventions implemented through this project, the trial evidenced the enhanced benefits that can be achieved with an integrated approach that engages all sources of pollution within the catchment, rather than focusing on a one-size-fits-all solution. Primarily, the CNB approach and the act of collaboratively engaging others, particularly the agriculture sector, is likely to yield improved farm and land management performance and better compliance, which can lead to improved river quality as well as wider benefits.
Keywords: agriculture, catchment nutrient balancing, phosphorus pollution, water quality, wastewater
Procedia PDF Downloads 66
6356 Face Distractors with High Motivational Salience Slowed Target Detection: Evidence from Behavioral Studies
Authors: Rashmi Gupta
Abstract:
Rewarding stimuli capture attention involuntarily as a result of an association process that develops quickly during value learning, referred to as reward- or value-driven attentional capture. It is essential to compare reward with punishment processing to get a full picture of value-based modulation in visual attention processing. Hence, the present study manipulated both valence/value (reward as well as punishment) and motivational salience (probability of an outcome: high vs. low) together. A series of experiments was conducted, each with two phases. In phase 1, participants were required to learn to associate specific face stimuli with a high or low probability of winning or losing points. In the second phase, these conditioned stimuli served as distractors or primes in a speeded letter-search task. Faces with high versus low outcome probability, regardless of valence, slowed the search for targets (specifically targets in the left visual field), suggesting that the costs to performance on non-emotional cognitive tasks were driven only by the motivational salience (high vs. low probability) associated with the stimuli, rather than by valence (gain vs. loss). It also suggests that the processing of motivationally salient stimuli is right-hemisphere biased. Together, the results of these studies strengthen the notion that our visual attention system is affected more by motivational salience than by valence, which we term here motivation-driven attentional capture.
Keywords: attention, distractors, motivational salience, valence
Procedia PDF Downloads 220
6355 Quantifying the Protein-Protein Interaction between the Ion-Channel-Forming Colicin A and the Tol Proteins by Potassium Efflux in E. coli Cells
Authors: Fadilah Aleanizy
Abstract:
Colicins are a family of bacterial toxins that kill Escherichia coli and other closely related species. The mode of action of colicins involves binding to an outer membrane receptor and translocation across the cell envelope, leading to cytotoxicity through specific targets. The mechanisms of colicin cytotoxicity include non-specific endonuclease activity or depolarization of the cytoplasmic membrane by pore-forming activity. For group A colicins, translocation requires an interaction between the N-terminal domain of the colicin and a series of membrane-bound and periplasmic proteins known as the Tol system (TolB, TolR, TolA, TolQ, and Pal), and the active domain must be translocated through the outer membrane. Protein-protein interactions are intrinsic to virtually every cellular process. The transient protein-protein interactions of a colicin include interactions with much more complicated assemblies during its translocation across the cellular membrane to its target. The potassium release assay detects variation in the K⁺ content of bacterial cells (K⁺in). This assay is used to measure the effect of pore-forming colicins such as ColA on an indicator organism by following, with a K⁺-selective electrode, the changes in the external K⁺ concentration (K⁺out) caused by cell killing. One of the goals of this work is to employ a quantifiable in vivo method to identify which Tol proteins are most implicated in the interaction with colicin A as it is translocated to its target.
Keywords: K+ efflux, Colicin A, Tol-proteins, E. coli
Procedia PDF Downloads 410
6354 Multiscale Hub: An Open-Source Framework for Practical Atomistic-To-Continuum Coupling
Authors: Masoud Safdari, Jacob Fish
Abstract:
Despite the vast amount of existing theoretical knowledge, the implementation of a universal multiscale modeling, analysis, and simulation software framework remains challenging. Existing multiscale software and solutions are often domain-specific and closed-source and demand a high level of experience and skill in both multiscale analysis and programming. Furthermore, tools currently available for atomistic-to-continuum (AtC) multiscaling are developed under assumptions such as users having access to high-performance computing facilities. These issues, plus many other challenges, have reduced the adoption of multiscale methods in academia and especially in industry. In the current work, we introduce Multiscale Hub (MsHub), an effort toward making AtC more accessible through cloud services. As a joint effort between academia and industry, MsHub provides a universal web-enabled framework for practical multiscaling. Developed on top of the universally acclaimed scientific programming language Python, the package currently provides an open-source, comprehensive, easy-to-use framework for AtC coupling. MsHub offers an easy-to-use interface to prominent molecular dynamics and multiphysics continuum mechanics packages such as LAMMPS and MFEM (a free, lightweight, scalable C++ library for finite element methods). In this work, we first report on the design philosophy of MsHub and the challenges and issues faced in its implementation. MsHub takes advantage of a comprehensive set of tools and algorithms developed for AtC that can be used for a variety of governing physics. We then briefly report the key AtC algorithms implemented in MsHub. Finally, we conclude with a few examples illustrating the capabilities of the package and its future directions.
Keywords: atomistic, continuum, coupling, multiscale
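To make the AtC idea concrete, here is a schematic, runnable Python sketch of the interface handshake loop such a framework orchestrates; the two solver functions are hypothetical stand-ins for calls into LAMMPS (atomistic) and MFEM (continuum) and are not MsHub's actual API.

def atomistic_step(boundary_displacement):
    # stub: an MD region driven by the continuum boundary; returns a traction
    return 0.8 * boundary_displacement

def continuum_step(interface_traction):
    # stub: an FE solve loaded by the atomistic traction; returns a displacement
    return 1.0 - 0.5 * interface_traction

u = 0.0                                   # interface displacement, initial guess
for it in range(50):                      # fixed-point iteration on the interface
    t = atomistic_step(u)
    u_new = continuum_step(t)
    if abs(u_new - u) < 1e-10:
        break
    u = u_new
print(f"converged interface displacement: {u:.6f} after {it + 1} iterations")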
Procedia PDF Downloads 177
6353 Understanding the Processwise Entropy Framework in a Heat-powered Cooling Cycle
Authors: P. R. Chauhan, S. K. Tyagi
Abstract:
Adsorption refrigeration technology offers a sustainable and energy-efficient cooling alternative to traditional refrigeration technologies for meeting fast-growing cooling demands. With its ability to utilize natural refrigerants, low-grade heat sources, and modular configurations, it has the potential to revolutionize the cooling industry. Despite these benefits, the commercial viability of this technology is hampered by several fundamental limiting constraints, including its large size, low uptake capacity, and poor performance as a result of deficient heat and mass transfer characteristics. The primary causes of the deficient heat and mass transfer characteristics and the magnitude of the exergy loss in the various real processes of an adsorption cooling system can be assessed by entropy generation rate analysis, i.e., the second law of thermodynamics. Therefore, this article presents a second-law-based investigation, in terms of the entropy generation rate (EGR), to identify the energy losses in the various processes of a heat-powered cooling cycle (HPCC)-based adsorption system, using MATLAB R2021b software. The adsorption-based cooling system consists of two beds made of silica gel and arranged in a single stage, while water is employed as the refrigerant, coolant, and hot fluid. The variation in process-wise EGR is examined over the cycle time, and a comparative analysis is presented. Moreover, the EGR is also evaluated in the external units, such as the heat source and heat sink units used for regeneration and heat rejection, respectively. The research findings revealed that the adsorber-desorber pair, which operates across heat reservoirs with a higher temperature gradient, accounts for more than half of the total EGR. Moreover, the EGR caused by the heat transfer process is found to be the highest, followed by that of the heat sink, the heat source, and mass transfer, respectively. Within the heat transfer process, the operation of the valves is responsible for more than half (54.9%) of the overall EGR during heat transfer. The combined contribution of the external units, the source (18.03%) and the sink (21.55%), to the total EGR is 35.59%. The analysis and findings of the present research are expected to pinpoint the sources of energy waste in HPCC-based adsorption cooling systems.
Keywords: adsorption cooling cycle, heat transfer, mass transfer, entropy generation, silica gel-water
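The bookkeeping behind such an analysis reduces, for steady heat transfer across a finite temperature difference, to S_gen = Q * (1/T_cold - 1/T_hot); the minimal Python sketch below uses hypothetical temperatures and heat duty, not the paper's data.

def egr_heat_transfer(q_watt, t_hot_k, t_cold_k):
    """Entropy generation rate (W/K) for heat Q flowing from T_hot to T_cold."""
    return q_watt * (1.0 / t_cold_k - 1.0 / t_hot_k)

# Desorber heated by the source vs. adsorber cooled by the sink (hypothetical):
print(egr_heat_transfer(q_watt=2000.0, t_hot_k=363.0, t_cold_k=343.0))  # desorption
print(egr_heat_transfer(q_watt=2000.0, t_hot_k=308.0, t_cold_k=298.0))  # adsorption

The larger the temperature gap a process bridges, the larger its share of the total EGR, which is consistent with the adsorber-desorber pair dominating the losses above.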
Procedia PDF Downloads 107
6352 Laser-TIG Welding-Brazing for Dissimilar Metals between Aluminum Alloy and Steel
Authors: Xiangfang Xu, Bintao Wu, Yugang Miao, Duanfeng Han
Abstract:
Experiments were conducted on 5A06 aluminum alloy and Q235 steel using the laser-TIG hybrid heat source welding-brazing method to realize a reliable connection between Al/Fe dissimilar metals, and the welding characteristics were analyzed. It was found that joints with a uniform seam and high tensile strength could be obtained using this method, although the welding process demands special welding parameters. Spectrum measurements showed that Al and Fe atoms diffused thoroughly at the brazing interface and formed a 3 μm-thick intermetallic compound layer at the brazed connection interface of the Al/Fe joints. Shearing tests indicated that the shear strength of the Al/Fe welded-brazed joint was 165 MPa. Fracture occurred near the melting zone of the aluminum alloy and belonged to a mixed mode, with ductile fracture as the base and brittle fracture as the supplement.
Keywords: Al/Fe dissimilar metals, laser-TIG hybrid heat source, shearing strength, welding-brazing method
Procedia PDF Downloads 404
6351 Humoral and Cellular Immune Responses to Major Human Cytomegalovirus Antigens in a Mouse Model
Authors: S. Essa, H. Safar, R. Raghupathy
Abstract:
Human cytomegalovirus (CMV) continues to be a source of severe complications in immunologically immature and immunocompromised hosts. An effective CMV vaccine that diminishes CMV disease in transplant patients and prevents congenital infection remains of high importance, as no approved vaccines exist. Though the exact correlates of defense are unidentified, virus-specific antibodies and Th1/Th2 cytokine responses have been implicated in controlling viral infections. The CMV envelope glycoprotein B (UL55/gB), the matrix proteins (UL83/pp65, UL99/pp28, UL32/pp150), and the assembly protein UL80a/pp38 are known targets of antiviral immune responses. In this study, mice were immunized with five HCMV antigens (UL55/gB, UL32/pp150, UL80a/pp38, UL99/pp28, and UL83/pp65), and serum samples were collected and evaluated for virus-specific antibody responses. Moreover, splenocytes were collected, stimulated, and assessed for cytokine responses. The results demonstrated a CMV-antigen-specific antibody response to pp38 and pp65 (E/C > 2.0). The highest titers were detected with pp38 (average E/C 16.275), followed by pp65 (average E/C 7.72). Compared to control cells, splenocytes from pp38-immunized mice produced significantly higher concentrations of GM-CSF, IFN-γ, IL-2, IL-4, IL-5, and IL-17A (P < 0.05). Splenocytes from pp65-immunized mice also produced significantly higher concentrations of GM-CSF, IFN-γ, IL-2, IL-4, IL-10, IL-12, IL-17A, and TNF-α. The designation of target CMV peptides, by identifying virus-specific antibody and cytokine responses, is vital for understanding the protective immune mechanisms during CMV infection and for identifying appropriate viral antigens for developing novel vaccines.
Keywords: hepatitis C virus, peripheral blood mononuclear cells, neutrophils, cytokines
Procedia PDF Downloads 139
6350 Sound Source Localisation and Augmented Reality for On-Site Inspection of Prefabricated Building Components
Authors: Jacques Cuenca, Claudio Colangeli, Agnieszka Mroz, Karl Janssens, Gunther Riexinger, Antonio D'Antuono, Giuseppe Pandarese, Milena Martarelli, Gian Marco Revel, Carlos Barcena Martin
Abstract:
This study presents an on-site acoustic inspection methodology for quality and performance evaluation of building components. The work focuses on global and detailed sound source localisation, achieved by successively performing acoustic beamforming and sound intensity measurements. A portable experimental setup was developed, consisting of an omnidirectional broadband acoustic source together with a microphone array and a sound intensity probe. Three main acoustic indicators are of interest, namely the sound pressure distribution on the surface of components such as walls, windows, and junctions; the three-dimensional sound intensity field in the vicinity of junctions; and the sound transmission loss of partitions. The measurement data is post-processed and converted into a three-dimensional numerical model of the acoustic indicators with the help of simultaneously acquired geolocation information. The three-dimensional acoustic indicators are then integrated into an augmented reality platform, superimposing them onto a real-time visualisation of the spatial environment. The methodology thus enables a measurement-supported inspection process for buildings and the correction of errors during construction and refurbishment. Two experimental validation cases are shown. The first consists of a laboratory measurement on a full-scale mockup of a room featuring a prefabricated panel. The panel was installed with controlled defects, such as missing insulation and joint sealing material. It is demonstrated that the combined acoustic and augmented reality tool is capable of identifying acoustic leakage caused by the building defects and assisting in correcting it. The second validation case was performed on a prefabricated room at a near-completion stage in the factory. With the help of the measurement and visualisation tools, the homogeneity of the partition installation was evaluated, and leakage from junctions and doors was identified. Furthermore, the integration of the acoustic indicators with thermal and geometrical indicators via the augmented reality platform is shown.
Keywords: acoustic inspection, prefabricated building components, augmented reality, sound source localization
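The beamforming step can be illustrated with a minimal narrowband delay-and-sum sketch in Python; the array geometry, source position, and frequency below are hypothetical illustration values, not the study's portable setup.

# Conventional (delay-and-sum) beamforming at a single frequency: for each
# candidate grid point, phase-align the microphone spectra by undoing the
# propagation delay and sum; the power map peaks near the true source.
import numpy as np

def delay_and_sum_power(signals, mic_xy, fs, freq, grid_xy, c=343.0):
    spectra = np.fft.rfft(signals, axis=1)
    freqs = np.fft.rfftfreq(signals.shape[1], 1.0 / fs)
    X = spectra[:, np.argmin(np.abs(freqs - freq))]  # one complex value per mic
    power = np.zeros(len(grid_xy))
    for g, pt in enumerate(grid_xy):
        d = np.linalg.norm(mic_xy - pt, axis=1)      # mic-to-point distances
        w = np.exp(2j * np.pi * freq * d / c)        # steering phases
        power[g] = np.abs(w @ X) ** 2 / len(X)
    return power

# toy demo: a 500 Hz source at (1.0, 2.0) observed by a 4-mic line array
fs, f0 = 8000, 500.0
mics = np.array([[0.0, 0.0], [0.3, 0.0], [0.6, 0.0], [0.9, 0.0]])
src = np.array([1.0, 2.0])
t = np.arange(2048) / fs
delays = np.linalg.norm(mics - src, axis=1) / 343.0
sig = np.array([np.sin(2 * np.pi * f0 * (t - d)) for d in delays])
xs = np.linspace(0.0, 2.0, 21)
p = delay_and_sum_power(sig, mics, fs, f0, [np.array([x, 2.0]) for x in xs])
print("power map peaks at x =", xs[np.argmax(p)])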
6349 Oxidation of Lignin for Production of Chemicals
Authors: Abayneh Getachew Demesa
Abstract:
Interest in renewable feedstocks for the chemical industry has increased considerably over the last decades, mainly due to environmental concerns and the foreseeable shortage of fossil raw materials. Lignocellulosic biomass is an abundant, readily available bio-based raw material that can serve as an alternative source for chemical production. Lignin accumulates in enormous amounts as a by-product of the pulping process in the pulp and paper industry; an estimated 70 million tons of lignin are processed annually worldwide by the pulp and paper industry alone. Despite its attractive chemical composition, lignin is still insufficiently exploited and mainly regarded as bio-waste. Therefore, an environmentally benign process that can completely and competitively convert lignin into different value-added chemicals is needed to launch its commercial success on an industrial scale. Partial wet oxidation by molecular oxygen has received increased attention as a potential process for producing chemicals from biomass wastes. In this paper, the production of chemicals by the oxidation of lignin is investigated. The factors influencing the types of products formed during the oxidation of lignin, as well as their yields and compositions, are discussed.
Keywords: biomass, lignin, waste, chemicals
Procedia PDF Downloads 239
6348 The Processing of Context-Dependent and Context-Independent Scalar Implicatures
Authors: Liu Jia’nan
Abstract:
The default accounts hold that there exists a kind of scalar implicature which can be processed without context and which enjoys a psychological privilege over scalar implicatures that depend on context. In contrast, Relevance Theorists regard context as indispensable, because all scalar implicatures have to meet the requirement of relevance in discourse. However, the experimental results in Katsos showed that although adults quantitatively rejected under-informative utterances with lexical scales (context-independent) and with ad hoc scales (context-dependent) at almost the same rate, they still regarded the violation in utterances with lexical scales as much more severe than in those with ad hoc scales. Neither the default account nor Relevance Theory can fully explain this result. Two questions therefore arise: (1) Is it possible that this strange discrepancy is due to factors other than the generation of scalar implicature? (2) Are the ad hoc scales truly formed under the possible influence of mental context? That is, do participants generate scalar implicatures with ad hoc scales, or do they merely compare semantic differences among target objects in the under-informative utterance? In our Experiment 1, question (1) will be answered by replicating Katsos's experiment. Test materials will be shown as pictures in PowerPoint, and each procedure will be conducted under the guidance of a tester in a quiet room. Our Experiment 2 is intended to answer question (2). The pictorial test materials will be converted into written words in DMDX, and the target sentence will be presented word by word to participants in the soundproof room of our lab. The reading times of the target parts, i.e., the words carrying the scalar implicature, will be recorded. We presume that in the lexical-scale group a standardized pragmatic mental context helps generate the scalar implicature as soon as the scalar word occurs, leading participants to expect the upcoming words to be informative; thus, if the input following the scalar word is under-informative, the extra semantic processing will cost more reading time. In the ad hoc scale group, by contrast, the scalar implicature can hardly be generated without the support of a fixed mental context for the scale; thus, whether the new input is informative or not should not matter, and the reading times of the target parts should be the same in informative and under-informative utterances. The human mind may be a dynamic system in which many factors co-occur. If Katsos's experimental result is reliable, it may shed light on the interplay of default accounts and contextual factors in scalar implicature processing. Based on our experiments, we might assume that no single dominant processing paradigm is plausible; rather, in the processing of scalar implicature, the semantic and pragmatic interpretations may be made in a dynamic interplay in the mind. For lexical scales, the pragmatic reading may prevail over the semantic reading because of its greater exposure in daily language use, which may also lead a possible default or standardized paradigm to override the role of context. The objects in an ad hoc scale, however, are not usually treated as members of a scale in the mental context, so the lexical-semantic associations of the objects may prevent the pragmatic reading from generating a scalar implicature. Only when sufficient contextual factors are highlighted can the pragmatic reading gain privilege and generate the scalar implicature.
Keywords: scalar implicature, ad hoc scale, dynamic interplay, default account, Mandarin Chinese processing
Procedia PDF Downloads 323
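The predicted reading-time pattern can be made concrete with a small analysis sketch. The code below is a hypothetical illustration rather than the authors' analysis script: it assumes per-participant mean reading times (in milliseconds) on the target region and compares the informative and under-informative conditions with paired t-tests; all numbers are invented to match the stated predictions (a slowdown for lexical scales, none for ad hoc scales).

```python
# Hypothetical sketch of the predicted self-paced reading pattern.
# Assumption: a reliable informative vs. under-informative slowdown on the
# target region for lexical scales, but no difference for ad hoc scales.
# All reading times (ms) are invented for illustration.

from scipy import stats

# Per-participant mean reading times on the target region
lexical_informative      = [412, 398, 455, 430, 401, 444, 419, 437]
lexical_underinformative = [489, 472, 530, 498, 476, 515, 487, 509]
adhoc_informative        = [425, 410, 462, 441, 409, 450, 428, 446]
adhoc_underinformative   = [431, 405, 470, 437, 415, 455, 422, 452]

for label, informative, underinf in [
    ("lexical scale", lexical_informative, lexical_underinformative),
    ("ad hoc scale", adhoc_informative, adhoc_underinformative),
]:
    t, p = stats.ttest_rel(underinf, informative)
    verdict = "slowdown" if p < 0.05 and t > 0 else "no reliable difference"
    print(f"{label}: t = {t:.2f}, p = {p:.3f} ({verdict})")
```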
6347 Biohydrogen Production from Starch Residues
Authors: Francielo Vendruscolo
Abstract:
This review summarizes the potential of starch agroindustrial residues as substrates for biohydrogen production. The types of potential starch agroindustrial residues, recent developments, and bioprocessing conditions for biohydrogen production are discussed. Biohydrogen is a clean energy source with great potential as an alternative fuel, because it releases energy explosively in heat engines or generates electricity in fuel cells, with water as the only by-product. Anaerobic hydrogen fermentation, or dark fermentation, appears particularly favorable, since hydrogen is yielded at high rates and various carbohydrate-rich organic wastes can serve as substrates, resulting in low hydrogen production costs. Abundant biomass from various industries could be a source for biohydrogen production, with the combination of waste treatment and energy production being an added advantage. Carbohydrate-rich, nitrogen-deficient solid wastes such as starch residues can be used for hydrogen production with suitable bioprocess technologies. Converting biomass into gaseous fuels such as biohydrogen is possibly the most efficient way to use these agroindustrial residues.
Keywords: biofuel, dark fermentation, starch residues, food waste
Procedia PDF Downloads 399
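The favourable yields mentioned above can be bounded by the textbook stoichiometry of dark fermentation: via the acetate pathway, one mole of glucose gives at most four moles of H2 (C6H12O6 + 2H2O → 2CH3COOH + 2CO2 + 4H2), while the butyrate pathway gives only two. The sketch below computes this theoretical ceiling for a starch feedstock; it is an illustrative upper-bound calculation, not data from the review, and assumes complete hydrolysis of starch to glucose.

```python
# Theoretical H2 ceiling for dark fermentation of starch. Assumes complete
# hydrolysis of starch to glucose; the acetate pathway yields at most
# 4 mol H2 per mol glucose, the butyrate pathway only 2. This is an
# illustrative upper bound, not experimental data from the review.

ANHYDROGLUCOSE_MASS = 162.14  # g/mol; starch monomer unit (glucose minus H2O)
MOLAR_VOLUME_STP = 22.414     # L/mol for an ideal gas at 0 degC and 1 atm

def max_h2_liters_per_kg_starch(h2_per_glucose):
    # Hydrolysis adds one water per monomer, so 1 kg starch yields
    # 1000 / 162.14 ~ 6.17 mol of glucose equivalents.
    mol_glucose = 1000.0 / ANHYDROGLUCOSE_MASS
    return mol_glucose * h2_per_glucose * MOLAR_VOLUME_STP

print(f"Acetate pathway : {max_h2_liters_per_kg_starch(4):.0f} L H2/kg starch")
print(f"Butyrate pathway: {max_h2_liters_per_kg_starch(2):.0f} L H2/kg starch")
```

Real dark fermentation yields fall well below this ceiling because part of the substrate goes to biomass and to less H2-productive pathways, which is why bioprocess conditions are a central topic of the review.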
6346 Using Authentic and Instructional Materials to Support Intercultural Communicative Competence in ELT
Authors: Jana Beresova
Abstract:
The paper presents a study carried out in 2015-2016 within the national research scheme VEGA 1/0106/15, based on theoretical research and empirical verification of the concept of intercultural communicative competence. It focuses on the current conception of target language teaching compatible with the Common European Framework of Reference for Languages: Learning, teaching, assessment. Our research revealed how the concept of intercultural communicative competence was perceived by secondary-school teachers of English in Slovakia before they received intensive training. The intensive workshops were based on the use of both authentic and instructional materials, with the goal of supporting interculturally oriented language teaching aimed at challenging thinking. The earlier conception, which supported the development of students' linguistic knowledge and the use of a target language to obtain information about the culture of the country whose language the learners were studying, was expanded by a meaning-making framework that views language as a principal means by which culture is mediated. The goal of the workshops was to help English teachers better understand the concept of intercultural communicative competence, combining theory and practice optimally. The results of the study will be presented and analysed, providing particular recommendations for language teachers and suggesting changes to the National Educational Programme from which English learners should benefit in their future studies or professional careers.
Keywords: authentic materials, English language teaching, instructional materials, intercultural communicative competence
Procedia PDF Downloads 270