Search results for: free speech
2845 Tibyan Automated Arabic Correction Using Machine-Learning in Detecting Syntactical Mistakes
Authors: Ashwag O. Maghraby, Nida N. Khan, Hosnia A. Ahmed, Ghufran N. Brohi, Hind F. Assouli, Jawaher S. Melibari
Abstract:
The Arabic language is one of the world's most important languages. Learning it matters to many people around the world because of its religious and economic importance, and the real challenge lies in practicing it without grammatical or syntactical mistakes. This research focused on detecting and correcting syntactic mistakes in Arabic according to their position in the sentence, concentrating on two of the main syntactical rules in Arabic: the dual and the plural. It analyzes each sentence in the text using the Stanford CoreNLP morphological analyzer and a machine-learning approach in order to detect syntactical mistakes and then correct them. A prototype of the proposed system was implemented and evaluated. It uses the support vector machine (SVM) algorithm to detect Arabic grammatical errors and corrects them using a rule-based approach. The prototype system achieves a fair accuracy of 81%. In general, it shows a set of useful grammatical suggestions that the user may overlook while writing, whether from lack of familiarity with grammar or from the speed of writing, such as alerting the user when a plural term is used to refer to one person.
Keywords: Arabic language acquisition and learning, natural language processing, morphological analyzer, part-of-speech
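The detection stage described above, classify each sentence as grammatical or not before applying rule-based corrections, can be sketched in miniature. The snippet below is illustrative only: it substitutes a simple perceptron-trained linear classifier over character-bigram features for the paper's SVM, and the toy English number-agreement phrases stand in for Arabic dual/plural data.

```python
from collections import Counter

def bigrams(sentence):
    """Character-bigram bag-of-features for a sentence."""
    return Counter(sentence[i:i + 2] for i in range(len(sentence) - 1))

def train_linear(samples, epochs=50):
    """Perceptron-style training of a linear error detector.
    samples: list of (text, label) with label +1 (error) / -1 (correct)."""
    w = Counter()
    for _ in range(epochs):
        for text, label in samples:
            feats = bigrams(text)
            score = sum(w[f] * v for f, v in feats.items())
            if label * score <= 0:            # misclassified: update weights
                for f, v in feats.items():
                    w[f] += label * v
    return w

def predict(w, text):
    """+1 flags a suspected agreement error, -1 accepts the text."""
    feats = bigrams(text)
    return 1 if sum(w[f] * v for f, v in feats.items()) > 0 else -1

# Toy training data: a numeral followed by a singular noun is an error.
DATA = [("two book", 1), ("two books", -1),
        ("three book", 1), ("three books", -1)]
WEIGHTS = train_linear(DATA)
```

A real system would, as the abstract notes, feed morphological-analyzer output rather than raw characters into the classifier, and hand flagged sentences to the rule-based corrector.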
Procedia PDF Downloads 152
2844 A High-Throughput Enzyme Screening Method Using Broadband Coherent Anti-Stokes Raman Spectroscopy
Authors: Ruolan Zhang, Ryo Imai, Naoko Senda, Tomoyuki Sakai
Abstract:
Enzymes have attracted increasing attention in industrial manufacturing for their applicability in catalyzing complex chemical reactions under mild conditions. Directed evolution has become a powerful approach to optimize enzymes and exploit their full potential when structure-function knowledge is insufficient. With the incorporation of cell-free synthetic biotechnology, rapid enzyme synthesis can be realized because no cloning procedure such as transfection is needed. Its open environment also enables direct enzyme measurement. These properties of cell-free biotechnology lead to excellent throughput of enzyme generation. However, current screening methods have limited capabilities. Fluorescence-based assays need an applicable fluorescent label, and the reliability of the acquired enzymatic activity is influenced by the label's binding affinity and photostability. To acquire the natural activity of an enzyme, another method is to combine a pre-screening step with high-performance liquid chromatography (HPLC) measurement, but its throughput is limited by the necessary time investment. Hundreds of variants are selected from libraries, and their enzymatic activities are then identified one by one by HPLC. The turn-around time of 30 minutes per sample limits the enzyme improvement acquirable within a reasonable time. To achieve truly high-throughput enzyme screening, i.e., to obtain reliable enzyme improvement within a reasonable time, a widely applicable high-throughput measurement of enzymatic reactions is in high demand. Here, a high-throughput screening method using broadband coherent anti-Stokes Raman spectroscopy (CARS) is proposed. CARS is one form of coherent Raman spectroscopy, which can specifically identify label-free chemical components from their inherent molecular vibrations. These characteristic vibrational signals are generated from different vibrational modes of chemical bonds.
With broadband CARS, the chemicals in one sample can be identified from their signals in a single broadband CARS spectrum. Moreover, it can magnify signal levels to several orders of magnitude greater than spontaneous Raman systems, and therefore has the potential to evaluate a chemical's concentration rapidly. As a demonstration of screening with CARS, alcohol dehydrogenase, which converts ethanol and nicotinamide adenine dinucleotide oxidized form (NAD+) to acetaldehyde and nicotinamide adenine dinucleotide reduced form (NADH), was used. The signal of NADH at 1660 cm⁻¹, which is generated from the nicotinamide in NADH, was utilized to measure its concentration. The evaluation time for the CARS signal of NADH was determined to be as short as 0.33 seconds, with a system sensitivity of 2.5 mM. The time course of the alcohol dehydrogenase reaction was successfully measured from the increasing signal intensity of NADH. This CARS measurement result was consistent with the result of a conventional method, UV-Vis. CARS is expected to find application in high-throughput enzyme screening and to realize more reliable enzyme improvement within a reasonable time.
Keywords: Coherent Anti-Stokes Raman Spectroscopy, CARS, directed evolution, enzyme screening, Raman spectroscopy
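Converting a CARS signal intensity into an NADH concentration is, in practice, a calibration-curve problem: measure known standards, fit a line, and invert it for unknowns. A minimal least-squares sketch follows; the standard concentrations and intensities below are made-up numbers for illustration, not values from the study.

```python
def fit_line(x, y):
    """Ordinary least-squares fit y = a*x + b (calibration curve)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return a, my - a * mx

def concentration(signal, a, b):
    """Invert the calibration: concentration from a measured intensity."""
    return (signal - b) / a

# Hypothetical NADH standards: concentration (mM) vs. CARS intensity (a.u.).
conc = [0.0, 5.0, 10.0, 20.0]
intensity = [1.0, 11.0, 21.0, 41.0]
a, b = fit_line(conc, intensity)
```

In a reaction time course, each 0.33-second spectrum would be reduced to the 1660 cm⁻¹ peak intensity and pushed through `concentration` to track NADH formation.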
Procedia PDF Downloads 141
2843 Investigation of Mutagenicity and DNA Binding Properties of Metal-Free and Metallophthalocyanines Containing α-Napththolbenzein Groups on the Peripheral Positions
Authors: Meltem Betül Sağlam, Halil İbrahim Güler, Aykut Sağlam
Abstract:
In this work, phthalocyanine compounds containing α-naphtholbenzein units have been synthesized. The mutagenicity and DNA binding properties of the compounds were investigated by the Salmonella/microsome assay and spectrophotometry. According to the results of the preliminary range-finding tests, the compounds showed no toxic effect on the tester strains S. typhimurium TA98 and TA100 at doses of 500, 1100, 350, 500 and 750 µg/plate in the presence and absence of S9, respectively. This study showed that all compounds exhibited efficient DNA-binding activity. In conclusion, these non-toxic compounds may be used as effective DNA dyes for molecular biology studies.
Keywords: dye, mutagenicity, phthalocyanine, toxicity
Procedia PDF Downloads 231
2842 Chatbots in Education: Case of Development Using a Chatbot Development Platform
Authors: Dulani Jayasuriya
Abstract:
This study outlines the developmental steps of a chatbot for administrative purposes in a large undergraduate course. The chatbot is able to handle student queries about administrative details, including assessment deadlines, course documentation, how to navigate the course, group formation, etc. The development screenshots are taken from a free account on the SnatchBot platform, so that the approach can be adopted by the wider public. While only one connection from possible keywords to an answer is shown here, multiple connections leading to different answers based on different keywords must be developed for the actual chatbot to function. The overall flow of the chatbot, showing the connections between different interactions, is depicted at the end.
Keywords: chatbots, education, technology, snatch bot, artificial intelligence
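The keyword-to-answer connections described above amount to a routing table: each answer is guarded by a set of trigger keywords, and a query is matched to the first answer whose keywords it hits. A minimal stand-alone sketch of that logic; the course-specific keywords and replies are invented for illustration, not taken from the study's chatbot.

```python
# Each route: a set of trigger keywords and the canned answer it leads to.
ROUTES = [
    ({"deadline", "due", "submit"}, "Assignment 1 is due Friday at 5 pm."),
    ({"group", "team"}, "Form groups of four via the course portal."),
    ({"syllabus", "outline"}, "The course outline is under Documents."),
]
FALLBACK = "Sorry, I didn't catch that. Try asking about deadlines or groups."

def reply(query):
    """Return the answer whose trigger keywords overlap the query's words."""
    words = {w.strip("?!.,") for w in query.lower().split()}
    for keywords, answer in ROUTES:
        if keywords & words:
            return answer
    return FALLBACK
```

Platforms such as SnatchBot express the same idea graphically, as the connections between interaction nodes that the abstract mentions.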
Procedia PDF Downloads 104
2841 The Effect of Affirmative Action in Private Schools on Education Expenditure in India: A Quasi-Experimental Approach
Authors: Athira Vinod
Abstract:
Under the Right to Education Act (2009), the Indian government introduced an affirmative action policy aimed at the reservation of seats in private schools at the entry level and free primary education for children from lower socio-economic backgrounds. Using exogenous variation in the status of being in a lower social category (disadvantaged groups) and in the year of starting school, this study investigates the effect of exposure to the policy on expenditure on private education. It employs a difference-in-differences strategy with the help of repeated cross-sectional household data from the National Sample Survey (NSS) of India. It also exploits regional variation in exposure by combining the household data with administrative data on schools from the District Information System for Education (DISE). The study compares the outcome across two age cohorts of disadvantaged groups starting school at different times, that is, before and after the policy. Regional variation in exposure is proxied with a measure of the enrolment rate under the policy, calculated at the district level. The study finds that exposure to the policy led to an average reduction in annual private school fees of ₹223. Similarly, a 5% increase in the rate of enrolment under the policy in a district was associated with a reduction in annual private school fees of ₹240. Furthermore, there was a larger effect of the policy among households with a higher demand for private education. However, the effect is not due to fees waived through direct enrolment under the policy but rather to an increase in the supply of low-fee private schools in India. The study finds that after the policy, 79,870 more private schools entered the market due to an increased demand for private education. The new schools, on average, charged a lower fee than existing schools and had a higher enrolment of children exposed to the policy.
Additionally, the district-level variation in enrolment under the policy was very strongly correlated with the entry of new schools, which not only charged a low fee but also had a higher enrolment under the policy. The results suggest that few disadvantaged children were admitted directly under the policy, but many were attending private schools, which were largely low-fee. This implies that disadvantaged households were willing to pay a lower fee to secure a place in a private school even if they did not receive a free place under the policy.
Keywords: affirmative action, disadvantaged groups, private schools, right to education act, school fees
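The difference-in-differences estimator behind these results compares the change in the outcome for the exposed cohort with the change for the unexposed cohort, netting out the common time trend. A minimal sketch; the mean fees below are invented for illustration and are not the study's data.

```python
def did(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """Difference-in-differences: the treated group's change minus the
    control group's change removes any trend shared by both groups."""
    return (treat_post - treat_pre) - (ctrl_post - ctrl_pre)

# Hypothetical mean annual private school fees (₹), illustration only:
# exposed cohort 2000 -> 2100, unexposed cohort 2000 -> 2323.
effect = did(treat_pre=2000, treat_post=2100, ctrl_pre=2000, ctrl_post=2323)
```

In the paper's setting, the regression version of this estimator additionally controls for household covariates and interacts exposure with the district-level enrolment rate.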
Procedia PDF Downloads 116
2840 Teaching How to Speak ‘Correct’ English in No Time: An Assessment of the ‘Success’ of Professor Higgins’ Motivation in George Bernard Shaw’s Pygmalion
Authors: Armel Mbon
Abstract:
This paper examines the ‘success’ of the motivation of George Bernard Shaw's main character, Professor Higgins, in teaching Eliza Doolittle, a young Cockney flower girl, how to speak 'correct' English in no time in Pygmalion. It should be noted that Shaw, in whose writings language issues feature prominently, does not believe there is such a thing as perfectly correct English, but believes in the varieties of spoken English as a source of its richness. Indeed, along with his fellow phonetician Colonel Pickering, Henry Higgins succeeds in teaching Eliza, whom he at first judges unfairly, the dialect of the upper classes and Received Pronunciation, to facilitate her social advancement. So, after six months of rigorous learning, Eliza's speech and manners are transformed, and she is able to pass herself off as a lady. Such is the success of Professor Higgins' motivation in linguistically transforming his learner in record time. On the other side, his motivation is unsuccessful since, by the end of the play, he cannot take Eliza, whom he believes he has shaped in his supposedly good image, for a wife. So, this paper aims to show, in support of the psychological approach, that in motivation, feelings, pride and prejudice cannot be combined, and that one must not prejudge someone's attitude based purely on how well they speak English.
Keywords: teaching, speak, in no time, success
Procedia PDF Downloads 69
2839 Optics Meets Microfluidics for Highly Sensitive Force Sensing
Authors: Iliya Dimitrov Stoev, Benjamin Seelbinder, Elena Erben, Nicola Maghelli, Moritz Kreysing
Abstract:
Despite the revolutionary impact of optical tweezers in materials science and cell biology to date, trapping has so far relied extensively on specific material properties of the probe, and local heating has limited applications related to investigating dynamic processes within living systems. To overcome these limitations while maintaining high sensitivity, here we present a new optofluidic approach that can be used to gently trap microscopic particles and measure femtonewton forces in a contact-free manner and with thermally limited precision.
Keywords: optofluidics, force measurements, microrheology, FLUCS, thermoviscous flows
Procedia PDF Downloads 170
2838 Resin Finishing of Cotton: Teaching and Learning Materials
Authors: C. W. Kan
Abstract:
Cotton is the most commonly used material for apparel because of its durability, good perspiration absorption, comfort during wear and dyeability. However, proneness to creasing and wrinkling gives cotton garments a poor rating during actual wear. Resin finishing is a process that imparts a crease- or wrinkle-free/resistant effect to cotton fabric. Thus, the aim of this study is to illustrate the proper application of resin finishing to cotton fabric, and the results could provide a guidance note for students learning this topic. Acknowledgment: The authors would like to thank the Hong Kong Polytechnic University for its financial support of this work.
Keywords: learning materials, resin, textiles, wrinkle
Procedia PDF Downloads 254
2837 An Interactive Methodology to Demonstrate the Level of Effectiveness of the Synthesis of Local-Area Networks
Abstract:
This study focuses on disconfirming that wide-area networks can be made mobile, highly-available, and wireless. This methodological test shows that IPv7 and context-free grammar are mismatched. In the cases of robots, a similar tendency is also revealed. Further, we also prove that public-private key pairs could be built embedded, adaptive, and wireless. Finally, we disconfirm that although hash tables can be made distributed, interposable, and autonomous, XML and DNS can interfere to realize this purpose. Our experiments soon proved that exokernelizing our replicated Knesis keyboards was more significant than interrupting them. Our experiments exhibited degraded average sampling rate.
Keywords: collaborative communication, DNS, local-area networks, XML
Procedia PDF Downloads 187
2836 Braille Lab: A New Design Approach for Social Entrepreneurship and Innovation in Assistive Tools for the Visually Impaired
Authors: Claudio Loconsole, Daniele Leonardis, Antonio Brunetti, Gianpaolo Francesco Trotta, Nicholas Caporusso, Vitoantonio Bevilacqua
Abstract:
Unfortunately, many people still do not have access to communication, with specific regard to reading and writing. Among them, people who are blind or visually impaired have particular difficulty in getting access to the world, compared to the sighted. Indeed, despite technological advances and cost reductions, assistive devices are still expensive nowadays, such as the Braille-based input/output systems that enable reading and writing texts (e.g., personal notes, documents). As a consequence, the affordability of assistive technology is fundamental in supporting the visually impaired in communication, learning, and social inclusion. This, in turn, has serious consequences in terms of equal access to opportunities, freedom of expression, and actual and independent participation in a society designed for the sighted. Moreover, the visually impaired experience difficulties in recognizing objects and interacting with devices in all activities of daily living. It is no accident that Braille indications are commonly found only on medicine boxes and elevator keypads. Several software applications for the automatic translation of written text into speech (e.g., Text-To-Speech, TTS) enable reading pieces of documents. However, beyond simple tasks, in many circumstances TTS software is not suitable for understanding very complicated pieces of text that require dwelling on specific portions (e.g., mathematical formulas or Greek text). In addition, the experience of reading/writing text is completely different, both in terms of engagement and from an educational perspective. Statistics on the employment rate of blind people show that learning to read and write provides the visually impaired with up to 80% more opportunities of finding a job.
Especially at higher educational levels, where the ability to digest very complex text is key, the accessibility and availability of Braille play a fundamental role in reducing the drop-out rate of the visually impaired, and thus affect the effectiveness of the constitutional right of access to education. In this context, the Braille Lab project aims at addressing these social needs by including affordability in the design and development of assistive tools for visually impaired people. In detail, our awarded project focuses on a technological innovation in the operating principle of existing assistive tools for the visually impaired, leaving the human-machine interface unchanged. This can result in a significant reduction of production costs and consequently of tool selling prices, thus representing an important opportunity for social entrepreneurship. The first two assistive tools designed within the Braille Lab project following the proposed approach aim to provide the possibility to personally print documents and handouts, and to read texts written in Braille using a refreshable Braille display, respectively. The former, named ‘Braille Cartridge’, represents an alternative solution for printing in Braille and consists in the realization of an electronically controlled printing dispenser (cartridge) which can be integrated into traditional ink-jet printers, in order to leverage the efficiency and cost of mechanical structures already in use. The latter, named ‘Braille Cursor’, is an innovative Braille display featuring a substantial technological innovation by means of a unique cursor virtualizing the Braille cells, thus limiting the number of active pins needed for Braille characters.
Keywords: human rights, social challenges and technology innovations, visually impaired, affordability, assistive tools
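A Braille cell encodes a character in six dots, and Unicode reserves a block in which each of dots 1-6 maps to one bit above U+2800, which is a convenient way to sketch the text-to-cell mapping that a printer cartridge or refreshable display must perform. The snippet covers only the letters a-j (the first Braille decade) for brevity and says nothing about either device's actual firmware.

```python
# Standard Braille dot numbers for the letters a-j (decade 1).
DOTS = {"a": (1,), "b": (1, 2), "c": (1, 4), "d": (1, 4, 5), "e": (1, 5),
        "f": (1, 2, 4), "g": (1, 2, 4, 5), "h": (1, 2, 5),
        "i": (2, 4), "j": (2, 4, 5)}

def to_braille(text):
    """Map each letter to a Unicode Braille pattern: dot n sets bit n-1
    above the block base U+2800."""
    return "".join(chr(0x2800 + sum(1 << (d - 1) for d in DOTS[c]))
                   for c in text.lower())
```

The same dot tuples could equally drive a pin actuator: each set bit corresponds to one raised pin in the cell.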
Procedia PDF Downloads 273
2835 Nanobiosensor System for Aptamer Based Pathogen Detection in Environmental Waters
Authors: Nimet Yildirim Tirgil, Ahmed Busnaina, April Z. Gu
Abstract:
Environmental waters are monitored worldwide to protect people from infectious diseases primarily caused by enteric pathogens. Escherichia coli (E. coli) has long served as a good indicator of potential enteric pathogens in water. Thus, a rapid and simple detection method for E. coli is very important for predicting pathogen contamination. In this study, to the best of our knowledge for the first time, we developed a rapid, direct and reusable SWCNT (single-walled carbon nanotube) based biosensor system for sensitive and selective E. coli detection in water samples. We use a novel, newly developed flexible biosensor device fabricated by a high-rate nanoscale offset printing process using directed assembly and transfer of SWCNTs. By simple directed assembly and non-covalent functionalization, an aptamer-based SWCNT biosensor system was designed, the aptamer being the biorecognition element that specifically distinguishes the E. coli O157:H7 strain from other pathogens, and was further evaluated for environmental applications with simple and cost-effective steps. The two gold electrode terminals and the SWCNT bridge between them allow continuous resistance-response monitoring for E. coli detection. The detection procedure is based on a competitive mode of detection. A known concentration of aptamer was mixed with E. coli cells, and the mixture was filtered after a certain time. The remaining free aptamers were then injected into the system. Through hybridization of the free aptamers with the probe DNA immobilized on the SWCNT surface (complementary DNA for the E. coli aptamer), we can monitor the resistance difference, which is proportional to the amount of E. coli. Thus, we can detect E. coli without injecting it directly onto the sensing surface, and we can protect the electrode surface from the aggregation of target bacteria or other pollutants that may come from real wastewater samples. After optimization experiments, the linear detection range was determined to be from 2 cfu/ml to 10⁵ cfu/ml, with an R² value higher than 0.98.
The system was regenerated successfully with a 5% SDS solution over 100 times without any significant deterioration of sensor performance. The developed system had high specificity towards E. coli (less than 20% signal with other pathogens), and it could be applied to real water samples with 86 to 101% recovery and 3 to 18% CV values (n=3).
Keywords: aptamer, E. coli, environmental detection, nanobiosensor, SWCNTs
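The recovery and coefficient-of-variation figures reported for the spiked water samples follow from two standard formulas: recovery is the measured value over the spiked (expected) value, and CV is the sample standard deviation over the mean of the replicates. A small sketch, with replicate readings invented for illustration:

```python
import math

def recovery_percent(measured, spiked):
    """Recovery: how much of the spiked amount the sensor reports back."""
    return 100.0 * measured / spiked

def cv_percent(replicates):
    """Coefficient of variation of replicate measurements
    (sample standard deviation divided by the mean)."""
    n = len(replicates)
    mean = sum(replicates) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in replicates) / (n - 1))
    return 100.0 * sd / mean
```

With n=3 replicates per sample, as in the abstract, both quantities are computed per spiking level and reported as a range across levels.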
Procedia PDF Downloads 197
2834 Real-Time Demonstration of Visible Light Communication Based on Frequency-Shift Keying Employing a Smartphone as the Receiver
Authors: Fumin Wang, Jiaqi Yin, Lajun Wang, Nan Chi
Abstract:
In this article, we demonstrate a visible light communication (VLC) system over an 8-meter free-space transmission link, based on a commercial LED and a receiver connected to the audio interface of a smartphone. The signal uses the frequency-shift keying (FSK) modulation format. The successful experimental demonstration validates the feasibility of the proposed system for future wireless communication networks.
Keywords: visible light communication, smartphone communication, frequency shift keying, wireless communication
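In FSK, each bit selects one of two carrier frequencies for the duration of the bit. A minimal baseband sketch of the modulator follows; the frequencies, sample rate and bit rate are illustrative audio-band choices, not the parameters of the demonstrated system.

```python
import math

def fsk_modulate(bits, f0=1200.0, f1=2200.0, fs=44100, bit_rate=100):
    """Map each bit to a sinusoidal burst: f0 carries a 0, f1 carries a 1.
    Returns one list of samples at sample rate fs."""
    spb = fs // bit_rate                      # samples per bit
    samples = []
    for i, b in enumerate(bits):
        f = f1 if b else f0
        start = i * spb                       # keep a continuous time axis
        samples.extend(math.sin(2 * math.pi * f * (start + n) / fs)
                       for n in range(spb))
    return samples
```

Driving an LED with such a waveform and demodulating at the receiver's audio input (e.g., by per-bit frequency discrimination) is the general scheme the abstract describes.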
Procedia PDF Downloads 391
2833 Nanoemulsion Formulation of Ethanolic Extracts of Propolis and Its Antioxidant Activity
Authors: Rachmat Mauludin, Dita Sasri Primaviri, Irda Fidrianny
Abstract:
Propolis contains several antioxidant compounds which can be used in topical applications to protect skin against free radicals and to prevent skin cancer and skin aging. A previous study showed that a 70% ethanolic extract of propolis (EEP) provided the greatest antioxidant activity. Since EEP has very low solubility in water, the extract was prepared as a nanoemulsion (NE). The nanoemulsion was chosen as the cosmetic dosage form on account of its properties, namely a lower risk of skin irritation, increased penetration, a prolonged residence time in the skin, and improved stability. Propolis was extracted using a reflux method and concentrated using a rotary evaporator. The EEP was characterized by several tests, such as phytochemical screening, density, and antioxidant activity using the DPPH method. Optimization of the total surfactant, co-surfactant, oil, and amount of EEP that could be included in the NE was required to obtain the best NE formulation. The evaluations included organoleptic observation, globule size, polydispersity index, morphology using TEM, viscosity, pH, centrifugation, stability, freeze-thaw testing, radical scavenging activity using the DPPH method, and a primary irritation test. The extract yield from raw propolis was 11.12%, and the extract contained steroids/triterpenoids, flavonoids, and saponins based on phytochemical screening. EEP had a DPPH scavenging activity of 61.14% and an IC50 of 0.41629 ppm. The best NE formulation consisted of 26.25% Kolliphor RH40, 8.75% glycerine, 5% rice bran oil, and 3% EEP. The NE was transparent, with a globule size of 21.9 nm, a polydispersity index of 0.338, and a pH of 5.67. Based on TEM morphology, the NE was almost spherical with a particle size below 50 nm. The NE propolis proved to be physically stable after a 63-day stability test at 25°C, centrifugation for 30 minutes at 13,000 rpm, and 6 cycles of freeze-thaw testing without separation. The NE propolis reduced the DPPH free radical by 58%, similar to the antioxidant activity of the original extract.
The antioxidant activity of NE propolis was relatively stable after storage for 6 weeks. NE propolis was proven to be safe by a primary irritation test, with a primary irritation index (OECD) of 0. The best formulation for NE propolis contained 26.25% Kolliphor RH40, 8.75% glycerine, 5% rice bran oil, and 3% EEP, with a globule size of 21.9 nm and a polydispersity index of 0.338. The NE propolis was stable and had antioxidant activity similar to that of EEP.
Keywords: propolis, antioxidant, nanoemulsion, irritation test
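DPPH scavenging activity is conventionally computed from the drop in absorbance of the radical solution after adding the sample: scavenging (%) = (A_control − A_sample) / A_control × 100. A one-function sketch; the absorbance readings are invented for illustration, not measured values from the study.

```python
def dpph_scavenging(a_control, a_sample):
    """Percentage of the DPPH radical quenched, from absorbance readings
    of the control (radical alone) and the radical plus sample."""
    return 100.0 * (a_control - a_sample) / a_control
```

Repeating the measurement across a dilution series and interpolating to 50% scavenging is how an IC50 such as the one reported above is obtained.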
Procedia PDF Downloads 305
2832 Cross Line of Causality in Childhood Stuttering between Psychology and Neurolinguistics: Systematic Literature Review and Meta-Analysis
Authors: Sadeq Al Yaari, Muhammad Alkhunayn, Ayman Al Yaari, Montaha Al Yaari, Aayah Al Yaari, Adham Al Yaari, Sajedah Al Yaari, Fatehi Eissa
Abstract:
Stuttering is a multidimensional disorder that is influenced by different factors. Because the genuine causes of stuttering are not fully understood, psychiatrists and Speech and Language Pathologists/Therapists (SLP/Ts) are often unfamiliar with the psychoneurolinguistic characteristics, the support needs, and the disability measurement that shape the rehabilitation requested by the stuttering population. PubMed, PsycInfo, Web of Science, Scopus, and Google Scholar searches, in addition to some unpublished literature, were conducted for this Systematic Literature Review and Meta-analysis (SLR and Meta-analysis) to identify whether stuttering has psychological or neurological causes. The study concluded that psychological, not neurolinguistic, factors were identified as most significant in the causality of childhood stuttering. Stutterers have intact language skills, but their ability to communicate with others is more impaired than their ability to form letters in the brain or to articulate them. The study recommends future research that sheds light on the adult stuttering population, which is often left out of the focus of diagnosis and is in need of further exploration vis-à-vis the issues they encounter, as well as the possible ways to deal with them psychoneurolinguistically.
Keywords: causality, childhood stuttering, psychology, neurolinguistics, systematic literature review, meta-analysis
Procedia PDF Downloads 49
2831 Piql Preservation Services - A Holistic Approach to Digital Long-Term Preservation
Authors: Alexander Rych
Abstract:
Piql Preservation Services (“Piql”) is a turnkey solution designed for the secure, migration-free long-term preservation of digital data. Piql sets an open standard for long-term preservation for the future. It consists of the equipment and processes needed for writing and retrieving digital data. Exponentially growing amounts of data demand logistically effective and cost-effective processes. Digital storage media (hard disks, magnetic tape) have limited lifetimes. Repetitive data migration to overcome the rapid obsolescence of hardware and software carries an accelerated risk of data loss, data corruption or even manipulation, and adds significant recurring costs for hardware and software investments. Piql stores any kind of data, in digital as well as analog form, securely for 500 years. The medium that provides this is a film reel. Using photosensitive film on a polyester base, a very stable material known for its immutability over hundreds of years, secure and cost-effective long-term preservation can be provided. The film reel itself is stored in packaging capable of protecting the optical storage medium. These components have undergone extensive testing to ensure longevity of up to 500 years. In addition to its durability, film is a true WORM (write once, read many) medium; it is therefore resistant to editing or manipulation. Being able to store any form of data on the film makes Piql a superior solution for long-term preservation. Paper documents, images, video or audio sequences: all of these file formats and documents can be preserved in their native file structure. In order to restore the encoded digital data, only a film scanner, a digital camera or any appropriate optical reading device will be needed in the future. Every film reel includes an index section describing the data saved on the film.
It also contains a content section carrying metadata, enabling users in the future to rebuild software in order to read and decode the digital information.
Keywords: digital data, long-term preservation, migration-free, photosensitive film
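Writing digital data to film amounts to rendering a bitstream as a grid of exposed and unexposed cells that an optical reader can later threshold and reassemble. The round trip below is a toy sketch of that idea only; the 16-cell row width is an arbitrary choice and not Piql's actual frame format, which would also carry error correction and the index/metadata sections described above.

```python
def encode_frame(data, width=16):
    """Lay bytes out as rows of binary cells (1 = exposed dot on film)."""
    bits = [(byte >> i) & 1 for byte in data for i in range(7, -1, -1)]
    bits += [0] * (-len(bits) % width)        # pad the final row
    return [bits[i:i + width] for i in range(0, len(bits), width)]

def decode_frame(frame, nbytes):
    """Reassemble the first nbytes from a scanned frame of binary cells."""
    bits = [b for row in frame for b in row][:nbytes * 8]
    return bytes(int("".join(map(str, bits[i:i + 8])), 2)
                 for i in range(0, len(bits), 8))
```

Because the cells are fixed once exposed, the representation is inherently WORM: decoding never modifies the frame.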
Procedia PDF Downloads 392
2830 On the Implementation of The Pulse Coupled Neural Network (PCNN) in the Vision of Cognitive Systems
Authors: Hala Zaghloul, Taymoor Nazmy
Abstract:
One of the great challenges of the 21st century is to build a robot that can perceive and act within its environment and communicate with people, while also exhibiting the cognitive capabilities that lead to performance like that of people. The Pulse Coupled Neural Network, PCNN, is a relatively new ANN model, derived from a mammalian neural model, with great potential in the area of image processing as well as target recognition, feature extraction, speech recognition, combinatorial optimization and compressed encoding. The PCNN has unique features among the types of neural networks, which make it a candidate to be an important approach for perception in cognitive systems. This work shows and emphasizes the potential of the PCNN to perform different tasks related to image processing. The main drawback, or the obstacle that prevents the direct implementation of such a technique, is the need to find a way to control the PCNN parameters so as to perform a specific task. This paper evaluates the performance of the standard PCNN model for processing images with different properties, selects the important parameters that give a significant result, and discusses approaches towards adapting the PCNN parameters to a specific task.
Keywords: cognitive system, image processing, segmentation, PCNN kernels
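The core PCNN iteration couples each pixel's stimulus to its neighbours' recent pulses through a linking term, and compares the result against a dynamic threshold that decays between pulses and jumps after one. The pure-Python sketch below is a simplified variant (the feeding input is taken directly as the stimulus, the linking kernel is a uniform 3x3 neighbourhood, and the parameter values are illustrative): on a toy image, bright pixels pulse earlier than dim ones, which is the segmentation cue the abstract refers to.

```python
def neighbor_sum(Y, i, j):
    """3x3 linking input: how many neighbours pulsed in the last step."""
    H, W = len(Y), len(Y[0])
    return sum(Y[a][b] for a in range(max(0, i - 1), min(H, i + 2))
                       for b in range(max(0, j - 1), min(W, j + 2))
               if (a, b) != (i, j))

def pcnn(S, steps=10, beta=0.2, aT=0.7, VT=20.0):
    """Minimal PCNN: U = S*(1 + beta*L) against a threshold that decays
    by factor aT each step and jumps by VT after a pulse.
    Returns the step at which each pixel first pulsed (0 = never)."""
    H, W = len(S), len(S[0])
    Y = [[0] * W for _ in range(H)]           # pulses of the previous step
    T = [[1.0] * W for _ in range(H)]         # dynamic thresholds
    fired = [[0] * W for _ in range(H)]
    for n in range(1, steps + 1):
        Yn = [[0] * W for _ in range(H)]
        for i in range(H):
            for j in range(W):
                L = neighbor_sum(Y, i, j)     # linking from past pulses
                U = S[i][j] * (1 + beta * L)  # internal activity
                Yn[i][j] = 1 if U > T[i][j] else 0
                if Yn[i][j] and not fired[i][j]:
                    fired[i][j] = n
        for i in range(H):
            for j in range(W):
                T[i][j] = aT * T[i][j] + VT * Yn[i][j]
        Y = Yn
    return fired

# Toy image: a bright 2x2 patch in a dim background.
IMG = [[0.9, 0.9, 0.1],
       [0.9, 0.9, 0.1],
       [0.1, 0.1, 0.1]]
FIRED = pcnn(IMG)
```

The map of first-pulse times is itself a crude segmentation: pixels of similar intensity (and linked neighbourhoods) pulse in the same epoch. Tuning beta, aT and VT to a given task is exactly the parameter-control problem the abstract identifies.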
Procedia PDF Downloads 280
2829 Ultra-High Molecular Weight Polyethylene (UHMWPE) for Radiation Dosimetry Applications
Authors: Malik Sajjad Mehmood, Aisha Ali, Hamna Khan, Tariq Yasin, Masroor Ikram
Abstract:
Ultra-high molecular weight polyethylene (UHMWPE) is a polymer belonging to the polyethylene (PE) family, with the monomer –CH2– and an average molecular weight of approximately 3-6 million g/mol. Due to its chemical, mechanical, physical and biocompatible properties, it has been extensively used in the fields of electrical insulation, medicine, orthopedics, microelectronics, engineering, chemistry, the food industry, etc. In order to alter or modify the properties of UHMWPE for a particular application of interest, various procedures are in practice, e.g., treating the material with high-energy irradiation such as gamma rays, e-beams, and ion bombardment. Radiation treatment of UHMWPE induces free radicals within its matrix, and these free radicals are the precursors of chain scission, chain accumulation, formation of double bonds, molecular emission, crosslinking, etc. All the aforementioned physical and chemical processes are mainly responsible for the modification of polymer properties for particular applications of interest, e.g., to fabricate LEDs, optical sensors, antireflective coatings and polymeric optical fibers, and, most importantly, for radiation dosimetry applications. Therefore, to check the feasibility of using UHMWPE for radiation dosimetry applications, compressed sheets of UHMWPE were irradiated at room temperature (~25°C) to total doses of 30 kGy and 100 kGy, respectively, while one sheet was kept unirradiated as a reference. Transmittance data (from 400 nm to 800 nm) of the e-beam irradiated UHMWPE and its hybrids were measured using a Mueller matrix spectro-polarimeter. As a result, significant changes occurred in the absorption behavior of the irradiated samples. To analyze these radiation-induced changes in the polymer matrix, the Urbach edge method and the modified Tauc equation were used. The results reveal that the optical activation energy decreases with irradiation.
The activation energy values are 2.85 meV, 2.48 meV, and 2.40 meV for the control, 30 kGy, and 100 kGy samples, respectively. The direct and indirect energy band gaps were also found to decrease with irradiation, due to the variation of C=C unsaturation in clusters. We believe that the reported results open new horizons for radiation dosimetry applications.
Keywords: electron beam, radiation dosimetry, Tauc’s equation, UHMWPE, Urbach method
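In the Urbach edge method, the absorption coefficient below the band edge follows α = α₀·exp(E/E_u), so ln α is linear in the photon energy E and the Urbach energy E_u is the inverse of the fitted slope. A least-squares sketch on synthetic data follows; the energies and absorption values are illustrative, not the measured spectra from this work.

```python
import math

def urbach_energy(E, alpha):
    """Fit ln(alpha) = ln(alpha0) + E/Eu and return the Urbach energy Eu,
    the inverse slope of ln(alpha) versus photon energy E."""
    y = [math.log(a) for a in alpha]
    n = len(E)
    mE, my = sum(E) / n, sum(y) / n
    slope = (sum((e - mE) * (v - my) for e, v in zip(E, y))
             / sum((e - mE) ** 2 for e in E))
    return 1.0 / slope

# Synthetic exponential edge with Eu = 0.05 eV (illustration only).
E_vals = [2.0, 2.1, 2.2, 2.3]
alpha_vals = [math.exp(e / 0.05) for e in E_vals]
```

In practice α is obtained from the measured transmittance, and the dose dependence of E_u (and of the Tauc band gaps) is what carries the dosimetric information.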
Procedia PDF Downloads 408
2828 Temporal and Spatial Adaptation Strategies in Aerodynamic Simulation of Bluff Bodies Using Vortex Particle Methods
Authors: Dario Milani, Guido Morgenthal
Abstract:
Fluid dynamic computation of wind-induced forces on bluff bodies, e.g., light flexible civil structures or airplane wings approaching the ground at high incidence, is one of the major criteria governing their design. Such structures may exhibit a significant dynamic response, requiring small-scale devices such as guide vanes in bridge design to control these effects. The focus of this paper is the numerical simulation of the bluff body problem involving multiscale phenomena induced by small-scale devices. One solution method for the CFD simulation that is relatively successful in this class of applications is the Vortex Particle Method (VPM). The method is based on a grid-free Lagrangian formulation of the Navier-Stokes equations, in which the velocity field is modeled by particles representing local vorticity. These vortices are convected with the free-stream velocity as well as diffused. This representation yields the main advantages of low numerical diffusion; compact discretization, as the vorticity is strongly localized; implicit handling of the free-space boundary conditions typical for this class of FSI problems; and a natural representation of the vortex creation process inherent in bluff body flows. When the particle resolution reaches the Kolmogorov dissipation length, the method becomes a Direct Numerical Simulation (DNS). However, any solution method must balance computational cost against achievable accuracy. In the classical VPM, if the fluid domain is discretized by Np particles, the computational cost is O(Np²). For the coupled FSI problem of interest, for example large structures such as long-span bridges, the aerodynamic behavior may be influenced or even dominated by small structural details such as barriers, handrails, or fairings. 
For such geometrically complex and dimensionally large structures, resolving the complete domain with the conventional VPM particle discretization may become prohibitively expensive even for moderate numbers of particles. This cost can be reduced either by reducing the number of particles or by controlling their local distribution. It is also possible to increase the accuracy of the solution, without substantially increasing the global computational cost, by computing a correction of the particle-particle interaction in regions of interest. In this paper, different strategies are presented to extend the conventional VPM so as to reduce the computational cost while still resolving the required details of the flow. The methods include temporal sub-stepping, to increase the accuracy of particle convection in certain regions, and dynamic re-discretization of the particle map, to control the local and global numbers of particles. Finally, these methods are applied to a test case, and the improvements in efficiency and accuracy of the proposed extensions are presented, along with their relevant applications.Keywords: adaptation, fluid dynamic, remeshing, substepping, vortex particle method
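The O(Np²) scaling mentioned in this abstract comes from the direct particle-particle evaluation of the induced velocity field. A minimal 2D sketch of that all-pairs sum follows (the names, the smoothing kernel, and the regularisation are generic assumptions; the paper's actual solver, sub-stepping, and remeshing logic are not shown):

```python
import numpy as np

def induced_velocity(positions, circulations, delta=0.05):
    """Direct evaluation of the regularised 2D Biot-Savart law: the velocity
    at each particle sums contributions from all Np particles, so the cost
    per time step scales as O(Np^2)."""
    vel = np.zeros_like(positions)
    for i in range(len(positions)):
        dx = positions[i] - positions            # separations to all particles
        r2 = (dx ** 2).sum(axis=1) + delta ** 2  # smoothed squared distance
        k = circulations / (2.0 * np.pi * r2)    # kernel weight per particle
        vel[i, 0] = -(k * dx[:, 1]).sum()        # u = Gamma/(2*pi*r^2) * (-dy, dx)
        vel[i, 1] = (k * dx[:, 0]).sum()
    return vel
```

Every particle sums contributions from all Np others, hence the quadratic cost; the adaptive strategies described in the paper attack exactly this by locally controlling the particle count and the convection time step.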
Procedia PDF Downloads 262
2827 Macroscopic Support Structure Design for the Tool-Free Support Removal of Laser Powder Bed Fusion-Manufactured Parts Made of AlSi10Mg
Authors: Tobias Schmithuesen, Johannes Henrich Schleifenbaum
Abstract:
The additive manufacturing process laser powder bed fusion (LPBF) offers many advantages over conventional manufacturing processes. For example, almost arbitrarily complex parts can be produced, such as topologically optimized lightweight parts, which would be inconceivable with conventional manufacturing processes. A major challenge posed by the LPBF process, however, is, in most cases, the need to use and remove support structures on critically inclined part surfaces (α < 45° relative to the substrate plate). These are mainly used for dimensionally accurate mapping of part contours and to reduce distortion by absorbing process-related internal stresses. Furthermore, they transfer the process heat to the substrate plate and are therefore indispensable for the LPBF process. A major obstacle to the economical use of the LPBF process in industrial process chains is still the high manual effort involved in removing support structures. According to the state of the art (SoA), parts are usually treated with simple hand tools (e.g., pliers, chisels) or by machining (e.g., milling, turning). New automatable approaches are the removal of support structures by means of wet-chemical ablation and thermal deburring. According to the SoA, support structures are essentially adapted to the LPBF process itself and not to potential post-processing steps. The aim of this study is to determine support structure designs that are adapted to the mentioned post-processing approaches. In the first step, the essential boundary conditions for complete removal by means of the respective approaches are identified. Afterward, a representative demonstrator part with various macroscopic support structure designs is LPBF-manufactured and tested with regard to complete powder and support removability. Finally, based on the results, potentially suitable support structure designs for the respective approaches are derived. 
The investigations are carried out on the example of the aluminum alloy AlSi10Mg.Keywords: additive manufacturing, laser powder bed fusion, laser beam melting, selective laser melting, post processing, tool-free, wet chemical ablation, thermal deburring, aluminum alloy, AlSi10Mg
Procedia PDF Downloads 91
2826 Development of Positron Emission Tomography (PET) Tracers for the in-Vivo Imaging of α-Synuclein Aggregates in α-Synucleinopathies
Authors: Bright Chukwunwike Uzuegbunam, Wojciech Paslawski, Hans Agren, Christer Halldin, Wolfgang Weber, Markus Luster, Thomas Arzberger, Behrooz Hooshyar Yousefi
Abstract:
There is a need to develop a PET tracer that enables diagnosing and tracking the progression of alpha-synucleinopathies (Parkinson’s disease [PD], dementia with Lewy bodies [DLB], multiple system atrophy [MSA]) in living subjects over time. Alpha-synuclein aggregates (a-syn), which are present at all stages of disease progression, for instance in PD, are a suitable target for in vivo PET imaging. For this reason, we have developed promising a-syn tracers based on a diarylbisthiazole (DABTA) scaffold. The precursors are synthesized via a modified Hantzsch thiazole synthesis and then radiolabeled via one- or two-step radiofluorination methods. The ligands were initially screened using a combination of molecular dynamics and quantum/molecular mechanics approaches to calculate their binding affinity to a-syn (in silico binding experiments). Experimental in vitro binding assays were also performed. The ligands were further screened in other experiments, such as log D, in vitro plasma protein binding and plasma stability, and biodistribution and brain metabolite analyses in healthy mice. Radiochemical yields ranged from 30% to 72%. Molecular docking revealed possible binding sites in a-syn as well as the free energy of binding to those sites (-28.9 to -66.9 kcal/mol), which correlated with the high binding affinity of the DABTAs to a-syn (Ki as low as 0.5 nM) and their selectivity (> 100-fold) over Aβ and tau, which usually co-exist with a-syn in some pathologies. The log D values range from 2.34 to 2.88, which correlated with a free-protein fraction of 0.28% to 0.5%. Biodistribution experiments revealed that the tracers are taken up in the brain (5.6 %ID/g to 7.3 %ID/g) at 5 min post-injection (p.i.) and cleared out (values as low as 0.39 %ID/g were obtained at 120 min p.i.). Analyses of the mice brains 20 min p.i. revealed almost no radiometabolites in the brain in most cases. 
It can be concluded that the in silico study presents a new avenue for the rational development of radioligands with suitable features. The results obtained so far are promising and encourage us to further validate the DABTAs in autoradiography, immunohistochemistry, and in vivo imaging in non-human primates and humans.Keywords: alpha-synuclein aggregates, alpha-synucleinopathies, PET imaging, tracer development
Procedia PDF Downloads 235
2825 Scanning Electron Microscopy for Analysis of the Effects of Surfactants on De-Wrinkling and Dispersion of Graphene
Authors: Kostandinos Katsamangas, Fawad Inam
Abstract:
Graphene was dispersed using a tip sonicator, and the effects of surfactants were analysed. Sodium dodecyl sulphate (SDS) and polyvinyl alcohol (PVA) were compared to observe, first, whether they had any effect on de-wrinkling and, second, whether they helped achieve better dispersions. There is a huge demand for wrinkle-free graphene, as this would greatly increase its usefulness in various engineering applications. A comprehensive literature review on de-wrinkling of graphene is provided. Low-magnification scanning electron microscopy (SEM) was conducted to assess the quality of graphene de-wrinkling. The utilization of PVA had a significant effect on de-wrinkling, whereas SDS had minimal effect on the de-wrinkling of graphene.Keywords: graphene, de-wrinkling, dispersion, surfactants, scanning electron microscopy
Procedia PDF Downloads 471
2824 Metaphor Investigation between President Xi Jinping of China and President Trump of the US: A Corpus-Based Approach
Authors: Jie Zheng, Ruifeng Luo
Abstract:
The United States is the world’s most developed economy with the strongest military power. China is the fastest-growing country, with growing comprehensive strength, and its economic strength is second only to the US. However, the conflict between them has grown serious in recent years. A president’s address is representative of a nation’s ideology. The paper has built a small-sized corpus of President Xi Jinping’s and President Trump’s speeches in Davos to investigate their respective uses and types of metaphor and to calculate the percentage of each type of metaphor. The result shows that President Xi Jinping employs more metaphors than Trump. Xi’s metaphors include the “building” metaphor, “plant” metaphor, “journey” metaphor, “ship” metaphor, “traffic” metaphor, “nation is a person” metaphor, and “show” metaphor, among others, while Trump’s comprise the “war” metaphor, “building” metaphor, “journey” metaphor, “traffic” metaphor, “tax” metaphor, “book” metaphor, and others. After examining these differences in metaphor use, the paper analyzes the underlying ideologies of the two nations. China is willing to strengthen ties with all countries of the world and has built a platform of development for them and itself on the way toward social well-being, while the US is largely concerned with itself, emphasizing its leading position, though it is also willing to help its allies develop. The paper’s comparison of the ideological differences between the two countries can help them reach a better understanding and reduce the conflict to some extent.Keywords: metaphor, corpus, ideology, conflict
Procedia PDF Downloads 147
2823 [Keynote Speech]: Experimental Study on the Effects of Water-in-Oil Emulsions on the Pressure Drop in Pipeline Flow
Authors: S. S. Dol, M. S. Chan, S. F. Wong, J. S. Lim
Abstract:
Emulsion formation is unavoidable and can be detrimental to oil field production. The presence of stable emulsions also reduces the quality of crude oil and causes further problems in downstream refinery operations, such as corrosion and pipeline pressure drop. Hence, it is important to know the effects of emulsions in the pipeline. Light crude oil was used as the continuous phase in the W/O emulsions, which pass through a flow loop to test the pressure drop across the pipeline. The results obtained show that the pressure drop increases as the water cut is increased until it peaks at the phase inversion of the W/O emulsion, between 30% and 40% water cut. Emulsions produced by gradual constrictions show lower stability than those produced by sudden constrictions, and this lower stability in gradual constriction has a greater influence on the pressure drop than the sudden sharp decrease in diameter in sudden constriction. Generally, a sudden constriction experiences a pressure drop 0.013% to 0.067% higher than a gradual constriction of the same ratio. Lower constriction ratios cause larger pressure drops, ranging from 0.061% to 0.241%. Considering the higher profitability of lower emulsion stability and the lower pressure drop in the developed flow region of the different constrictions, the optimum design is found to be a gradual constriction with a ratio of 0.5.Keywords: constriction, pressure drop, turbulence, water-in-oil emulsions
Procedia PDF Downloads 335
2822 Teacher-Scaffolding vs. Peer-Scaffolding in Task-Based ILP Instruction: Effects on EFL Learners’ Metapragmatic Awareness
Authors: Amir Zand-Moghadam, Mahnaz Alizadeh
Abstract:
The aim of the present study was to investigate the effect of teacher-scaffolding versus peer-scaffolding on EFL learners’ metapragmatic awareness within the paradigm of task-based language teaching (TBLT). To this end, a number of dialogic information-gap tasks requiring a two-way interactant relationship were designed for the five speech acts of request, refusal, apology, suggestion, and compliment, following Ellis’s (2003) model. Then, 48 intermediate EFL learners were randomly selected, homogenized, and assigned to two groups: 26 participants in the teacher-scaffolding group (Group One) and 22 in the peer-scaffolding group (Group Two). While going through the three phases of pre-task, while-task, and post-task, the participants in the first group completed the designed tasks with the teacher’s interaction, scaffolding, and feedback. The participants in the second group, on the other hand, were required to complete the tasks in expert-novice pairs through peer-scaffolding in all three phases of the task-based syllabus. The findings revealed that the participants in the teacher-scaffolding group developed their L2 metapragmatic awareness more than the peer-scaffolding group. Thus, it can be concluded that teacher-scaffolding is more effective than peer-scaffolding in developing metapragmatic awareness among EFL learners. It can also be claimed that the use of tasks is more influential when accompanied by teacher-scaffolding. The findings of the present study have implications for language teachers and researchers.Keywords: ILP, metapragmatic awareness, scaffolding, task-based instruction
Procedia PDF Downloads 584
2821 Examining the Changes in Complexity, Accuracy, and Fluency in Japanese L2 Writing Over an Academic Semester
Authors: Robert Long
Abstract:
This paper presents the results of a one-year study on the evolution of complexity, accuracy, and fluency (CAF) in the compositions of Japanese L2 university students over an academic semester. One goal was to determine whether writing abilities improved over this academic term; another was to examine methods of editing. Participants had 30 minutes to write each essay, with an additional 10 minutes allotted for editing. For the editing phase, participants were divided into two groups, one of which used an online grammar checker while the other half self-edited their initial drafts. There was a total of 159 students from the three different institutions. The research questions focused on determining whether CAF had evolved over the semester, identifying potential variations in editing techniques, and describing the connections among the CAF dimensions. According to the findings, there was some improvement in accuracy (fewer errors) on all three measures, whereas there was a marked decline in complexity and fluency. As for the second research aim, concerning the interaction among the three CAF dimensions and whether increases in fluency were offset by decreases in grammatical accuracy, the results showed a logically high correlation between clause and word counts, between mean length of T-unit (MLT) and coordinate phrases per T-unit (CP/T), and between MLT and clauses per T-unit (C/T); furthermore, word counts and the errors-per-100-words ratio correlated highly with error-free clause totals (EFCT). Syntactic complexity had a negative correlation with EFCT, indicating that greater syntactic complexity relates to decreased accuracy. 
Concerning differences in error correction between those who self-edited and those who used an online grammar-correction tool, results indicated that the variable of error-free clause ratios (EFCR) showed the greatest difference in accuracy, with fewer errors noted for writers using an online grammar checker. As for possible differences between the first and second (edited) drafts regarding CAF, results indicated positive changes in accuracy, with the most significant change seen in complexity (CP/T and MLT), while changes in fluency were relatively insignificant. Results also indicated significant differences among the three institutions, with Fujian University of Technology showing the most fluency and accuracy. These findings suggest that to raise students' awareness of their overall writing development, teachers should support them in developing more complex syntactic structures, improving their fluency, and making more effective use of online grammar checkers.Keywords: complexity, accuracy, fluency, writing
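The ratio measures used in this abstract (MLT, C/T, CP/T, and the share of error-free clauses) are simple quotients of clause- and word-level counts. A minimal sketch, assuming the counts have already been obtained from annotated essays (the function and key names are illustrative, not from the study):

```python
def caf_indices(words, t_units, clauses, coordinate_phrases, error_free_clauses):
    """Common syntactic-complexity and accuracy ratios used in CAF research.
    The raw counts are assumed to come from a prior annotation step."""
    return {
        "MLT": words / t_units,                     # mean length of T-unit
        "C/T": clauses / t_units,                   # clauses per T-unit
        "CP/T": coordinate_phrases / t_units,       # coordinate phrases per T-unit
        "EFC_ratio": error_free_clauses / clauses,  # accuracy: share of error-free clauses
    }

# e.g. a 300-word essay with 20 T-units, 40 clauses (30 error-free), 10 coordinate phrases
print(caf_indices(300, 20, 40, 10, 30))
# → {'MLT': 15.0, 'C/T': 2.0, 'CP/T': 0.5, 'EFC_ratio': 0.75}
```

Rising MLT, C/T, or CP/T indicates growing syntactic complexity, while a rising error-free clause ratio indicates growing accuracy, which is how the trade-off discussed above can be tracked numerically.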
Procedia PDF Downloads 39
2820 The Use of STIMULAN Resorbable Antibiotic Beads in Conjunction with Autologous Tissue Transfer to Treat Recalcitrant Infections and Osteomyelitis in Diabetic Foot Wounds
Authors: Hayden R Schott, John M Felder III
Abstract:
Introduction: Chronic lower extremity wounds in the diabetic and vasculopathic populations are associated with a high degree of morbidity. When wounds require more extensive treatment than wound care centers can offer, more aggressive solutions involve local tissue transfer and microsurgical free tissue transfer to achieve definitive soft tissue coverage. These procedures of autologous tissue transfer (ATT) offer resilient soft tissue coverage of limb-threatening wounds and confer promising limb salvage rates. However, chronic osteomyelitis and recalcitrant soft tissue infections are common in severe diabetic foot wounds and significantly complicate ATT procedures. Stimulan is a resorbable calcium sulfate antibiotic carrier. The use of Stimulan antibiotic beads to treat chronic osteomyelitis is well established in the orthopedic and plastic surgery literature. In these procedures, the beads are placed beneath the skin flap to deliver antibiotics directly to the site of infection. The purpose of this study was to quantify the success of Stimulan antibiotic beads in treating recalcitrant infections in patients with diabetic foot wounds receiving ATT. Methods: A retrospective review of clinical and demographic information was performed on patients who underwent ATT with placement of Stimulan antibiotic beads for attempted limb salvage from 2018-21. Patients were analyzed for preoperative wound characteristics, demographics, infection recurrence, and adverse outcomes resulting from product use. The primary endpoint was 90-day infection recurrence, with secondary endpoints including 90-day complications. Outcomes were compared using basic statistics and Fisher’s exact tests. Results: In this time span, 14 patients were identified. At the time of surgery, all patients exhibited clinical signs of active infection, including positive cultures and erythema. 
57% of patients (n=8) exhibited chronic osteomyelitis prior to surgery, and 71% (n=10) had exposed bone at the wound base. Stimulan beads were placed beneath a free tissue flap in 57% of patients (n=8) and beneath a pedicled tissue flap in 42% (n=6). In all patients, Stimulan beads were applied only once. Recurrent infections were observed in 28% of patients (n=4) at 90 days post-op, and flap nonadherence was observed in 7% (n=1); these were the only Stimulan-related complications observed. Ultimately, lower limb salvage was successful in 85% of patients (n=12). Notably, there was no significant association between the preoperative presence of osteomyelitis and recurrent infections. Conclusions: The use of Stimulan antibiotic beads to treat recalcitrant infections in patients receiving definitive skin coverage of diabetic foot wounds does not appear to carry unnecessary risk. Furthermore, the lack of a significant association between the preoperative presence of osteomyelitis and recurrent infections indicates the successful use of Stimulan to dampen infection in patients with osteomyelitis, consistent with the literature. Further research, with larger cohort and case-control studies, is needed to confirm Stimulan as a significant contributor to infection treatment. Nonetheless, the use of Stimulan antibiotic beads in patients with diabetic foot wounds demonstrates successful infection suppression and maintenance of definitive soft tissue coverage.Keywords: wound care, Stimulan antibiotic beads, free tissue transfer, plastic surgery, wound, infection
Procedia PDF Downloads 90
2819 A Survey and Theory of the Effects of Various Hamlet Videos on Viewers’ Brains
Authors: Mark Pizzato
Abstract:
How do ideas, images, and emotions in stage-plays and videos affect us? Do they evoke a greater awareness (or cognitive reappraisal of emotions) through possible shifts between left-cortical, right-cortical, and subcortical networks? To address these questions, this presentation summarizes the research of various neuroscientists, especially Bernard Baars and others involved in Global Workspace Theory, Matthew Lieberman in social neuroscience, Iain McGilchrist on left and right cortical functions, and Jaak Panksepp on the subcortical circuits of primal emotions. Through such research, this presentation offers an ‘inner theatre’ model of the brain, regarding major hubs of neural networks and our animal ancestry. It also considers recent experiments, by Mario Beauregard, on the cognitive reappraisal of sad, erotic, and aversive film clips. Finally, it applies the inner-theatre model and related research to survey results of theatre students who read and then watched the ‘To be or not to be’ speech in 8 different video versions (from stage and screen productions) of William Shakespeare’s Hamlet. Findings show that students become aware of left-cortical, right-cortical, and subcortical brain functions—and shifts between them—through staging and movie-making choices in each of the different videos.Keywords: cognitive reappraisal, Hamlet, neuroscience, Shakespeare, theatre
Procedia PDF Downloads 315
2818 Multi-Modal Feature Fusion Network for Speaker Recognition Task
Authors: Xiang Shijie, Zhou Dong, Tian Dan
Abstract:
Speaker recognition is a crucial task in the field of speech processing, aimed at identifying individuals based on their vocal characteristics. However, existing speaker recognition methods face numerous challenges. Traditional methods primarily rely on audio signals, which often suffer from limitations in noisy environments, variations in speaking style, and insufficient sample sizes. Additionally, relying solely on audio features can sometimes fail to capture the unique identity of the speaker comprehensively, impacting recognition accuracy. To address these issues, we propose a multi-modal network architecture that simultaneously processes both audio and text signals. By gradually integrating audio and text features, we leverage the strengths of both modalities to enhance the robustness and accuracy of speaker recognition. Our experiments demonstrate significant improvements with this multi-modal approach, particularly in complex environments, where recognition performance has been notably enhanced. Our research not only highlights the limitations of current speaker recognition methods but also showcases the effectiveness of multi-modal fusion techniques in overcoming these limitations, providing valuable insights for future research.Keywords: feature fusion, memory network, multimodal input, speaker recognition
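The abstract does not specify the fusion architecture, so the following is only a minimal sketch of one common scheme: late fusion by normalising, concatenating, and projecting audio and text embeddings into a joint speaker-embedding space. The dimensions and the weight matrix are hypothetical stand-ins for learned components, not the authors' network.

```python
import numpy as np

rng = np.random.default_rng(0)

def fuse(audio_emb, text_emb, w):
    """Late fusion: L2-normalise each modality so neither dominates,
    concatenate, then project into a joint embedding space with matrix w."""
    a = audio_emb / np.linalg.norm(audio_emb)
    t = text_emb / np.linalg.norm(text_emb)
    return w @ np.concatenate([a, t])

# hypothetical dimensions: 192-d audio, 128-d text, 64-d joint speaker embedding
w = rng.standard_normal((64, 192 + 128)) * 0.05
embedding = fuse(rng.standard_normal(192), rng.standard_normal(128), w)
```

In a trained system, `w` would be a learned projection and speakers would be compared by cosine similarity of the resulting joint embeddings; gradual (layer-wise) integration, as described above, would interleave such fusion steps throughout the network rather than applying one at the end.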
Procedia PDF Downloads 32
2817 Analysis on the Effectiveness of the "Three-Exemption" Policy Aimed at Promoting Unpaid Blood Donation in Zhejiang
Authors: Ni Tang, Jinping Zhang
Abstract:
An effective and sustainable volunteer team is needed to create a more reliable blood supply system. To promote the sustainable development of blood donation in Zhejiang Province, China, a “three-exemption” policy was introduced in 2014: blood donors who received the National Award for unpaid blood donation may enter government-invested and -funded parks, scenic spots, and other places for free, visit non-profit medical institutions free of outpatient fees, and be exempted from urban public transportation fares. As the policy has been in place for seven years, this study evaluated its effectiveness by comparing the rate of increase in blood donation in Hangzhou (the capital city of Zhejiang) before and after the policy using an interrupted time series analysis. Blood donation in Anhui, a province near Zhejiang, was also compared as a negative control. Blood donation data from 2012 to 2018 were obtained from the donation centers' official websites. The rate of increase in blood donation volume since 2012 in Hangzhou is 34.37 units/month, and after 2014 the rate increased by an additional 71.69 units/month (p=0.1442), a statistically non-significant change after the policy. As a negative control, in Anhui, the rate of increase in blood donation volume since 2012 is -163.3 units/month, and after 2014 the rate increased by an additional 167.2 units/month (p=5.63e-07). The results show that the three-exemption policy had some effect in encouraging volunteers to donate blood, but the effect was not substantial. One possible reason for the policy's limited effectiveness might be a lack of public awareness of it. On the other hand, this policy mainly waives non-essential living expenses, such as fares and scenic-spot entrance fees, and requires a certain number of blood donations, registration procedures, and blood donation certificates. 
Perhaps reducing essential living expenses, such as oil, water, and electricity, could better attract people to participate in blood donation. This study of the three-exemption policy provides a new direction for promoting blood donation: incentive policies may require greater publicity and stronger incentives. To better ensure the operation of the blood donation system, other policies, especially incentive policies, should be further explored.Keywords: blood donation, policy, Zhejiang, unpaid blood donation, three-exemption policy
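The before-and-after trend comparison described in this abstract (a pre-2014 monthly trend plus an additional post-2014 slope change, e.g. the reported 71.69 units/month) corresponds to the slope-change coefficient of a segmented interrupted-time-series regression. A generic sketch on synthetic data follows; the function name and the simulated series are illustrative, not the authors' model or dataset.

```python
import numpy as np

def its_slope_change(y, t0):
    """Segmented regression for an interrupted time series:
    y = b0 + b1*t + b2*post + b3*(t - t0)*post + e,
    where post = 1 from the intervention month t0 onward.
    Returns b3, the change in monthly trend after the intervention."""
    t = np.arange(len(y), dtype=float)
    post = (t >= t0).astype(float)
    X = np.column_stack([np.ones_like(t), t, post, (t - t0) * post])
    beta, *_ = np.linalg.lstsq(X, np.asarray(y, dtype=float), rcond=None)
    return beta[3]
```

Here `b1` is the pre-policy trend and `b3` the additional post-policy trend, which is the quantity whose significance (p=0.1442 in Hangzhou, p=5.63e-07 in Anhui) the study tests; a full analysis would also model autocorrelation and seasonality, which this sketch omits.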
Procedia PDF Downloads 209
2816 Political Discourse Used in the TV Talk Shows of Pakistani Media
Authors: Hafiz Sajjad Hussain, Asad Razzaq
Abstract:
The study aims to explore the relationship between the speech and discourse used by political workers and their leaders to maintain an authoritative approach and dialogic power, and the representation of these relationships between ideology and language in the analysis of discourse and spoken text, following Van Dijk's socio-cognitive model. Media and political leaders are two pillars of a state, and their roles are important for development and for their effects on society. Media has become an industry across the globe in recent years, and in Pakistan the private sector in particular has developed greatly in the last decade. Media is the easiest way of communicating with a large community in a short time, and it uses discourse independently. The prime time of the news channels in Pakistan presents political programs on the most prominent story or incident of the day. The program examined here, broadcast by the private channel ARY News on July 6, 2014, covered the top story of that day: Arslan Iftikhar, son of the ex-Chief Justice, moved an application to the Election Commission of Pakistan concerning the daughter of the most popular political leader and PTI chairman, Imran Khan. This move changed the whole scenario of the political parties, and the media had a hot issue for discussion. This study also shows that the ideology and meanings presented by TV channels are not always obvious to viewers.Keywords: electronic media, political discourse, ideology of media, power, authoritative approach
Procedia PDF Downloads 529