Search results for: point of view
1053 Current Applications of Artificial Intelligence (AI) in Chest Radiology
Authors: Angelis P. Barlampas
Abstract:
Learning Objectives: The purpose of this study is to briefly inform the reader about the applications of AI in chest radiology. Background: Currently, there are 190 FDA-approved radiology AI applications, with 42 (22%) pertaining specifically to thoracic radiology. Imaging Findings or Procedure Details: Aids of AI in chest radiology: Detects and segments pulmonary nodules. Subtracts bone to provide an unobstructed view of the underlying lung parenchyma and provides further information on nodule characteristics, such as nodule location, nodule two-dimensional size or three-dimensional (3D) volume, change in nodule size over time, attenuation data (i.e., mean, minimum, and/or maximum Hounsfield units [HU]), morphological assessments, or combinations of the above. Reclassifies indeterminate pulmonary nodules into low or high risk with higher accuracy than conventional risk models. Detects pleural effusion. Differentiates tension pneumothorax from nontension pneumothorax. Detects cardiomegaly, calcification, consolidation, mediastinal widening, atelectasis, fibrosis, and pneumoperitoneum. Automatically localizes vertebral segments, labels ribs, and detects rib fractures. Measures the distance from the tube tip to the carina and localizes both endotracheal tubes and central vascular lines. Detects consolidation and progression of parenchymal diseases such as pulmonary fibrosis or chronic obstructive pulmonary disease (COPD). Can evaluate lobar volumes. Identifies and labels pulmonary bronchi and vasculature and quantifies air-trapping. Offers emphysema evaluation. Provides functional respiratory imaging, whereby high-resolution CT images are post-processed to quantify airflow by lung region and may be used to quantify key biomarkers such as airway resistance, air-trapping, ventilation mapping, lung and lobar volume, and blood vessel and airway volume. Assesses the lung parenchyma by way of density evaluation.
Provides percentages of tissues within defined attenuation (HU) ranges, besides furnishing automated lung segmentation and lung volume information. Improves image quality for noisy images with a built-in denoising function. Detects emphysema, a common condition seen in patients with a history of smoking, as well as hyperdense or opacified regions, thereby aiding in the diagnosis of certain pathologies, such as COVID-19 pneumonia. It aids in cardiac segmentation and calcium detection, aorta segmentation and diameter measurements, and vertebral body segmentation and density measurements. Conclusion: The future is yet to come, but AI is already a helpful tool for daily practice in radiology. It is assumed that the continuing progression of computerized systems and improvements in software algorithms will render AI the radiologist's second hand.
Keywords: artificial intelligence, chest imaging, nodule detection, automated diagnoses
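Several of the quantitative outputs described above, such as nodule volume and attenuation statistics (mean, minimum, and maximum HU), reduce to simple array operations once a segmentation mask is available. Below is a minimal NumPy sketch; the function name and the toy data are illustrative, not taken from any of the FDA-approved products discussed:

```python
import numpy as np

def nodule_metrics(ct_hu, mask, spacing_mm):
    """Summarize a segmented nodule.

    ct_hu      : 3D array of attenuation values in Hounsfield units
    mask       : boolean 3D array, True inside the nodule
    spacing_mm : (z, y, x) voxel spacing in millimetres
    """
    voxels = ct_hu[mask]
    voxel_volume_mm3 = float(np.prod(spacing_mm))
    return {
        "volume_mm3": mask.sum() * voxel_volume_mm3,
        "mean_hu": float(voxels.mean()),
        "min_hu": float(voxels.min()),
        "max_hu": float(voxels.max()),
    }

# Toy example: a two-voxel "nodule" in an air-like lung background,
# with 1 x 2 x 2 mm voxels (so 4 mm^3 per voxel)
ct = np.full((4, 4, 4), -800.0)           # air-like background
ct[1, 1, 1], ct[1, 1, 2] = 40.0, 60.0     # soft-tissue-density voxels
mask = ct > -100                          # crude threshold "segmentation"
print(nodule_metrics(ct, mask, (1.0, 2.0, 2.0)))
```

Change over time (nodule growth) then follows by running the same metrics on two registered scans and differencing the volumes.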
Procedia PDF Downloads 72

1052 Glycerol-Based Bio-Solvents for Organic Synthesis
Authors: Dorith Tavor, Adi Wolfson
Abstract:
In the past two decades a variety of green solvents have been proposed, including water, ionic liquids, fluorous solvents, and supercritical fluids. However, their implementation in industrial processes is still limited due to their tedious and non-sustainable synthesis, lack of experimental data and familiarity, as well as operational restrictions and high cost. Several years ago we presented, for the first time, the use of glycerol-based solvents as alternative sustainable reaction media in both catalytic and non-catalytic organic synthesis. Glycerol is the main by-product from the conversion of oils and fats in oleochemical production. Moreover, in the past decade, its price has substantially decreased due to an increase in supply from the production and use of fatty acid derivatives in the food, cosmetics, and drug industries and in biofuel synthesis, i.e., biodiesel. The renewable origin, beneficial physicochemical properties, and reusability of glycerol-based solvents enabled improved product yield and selectivity as well as easy product separation and catalyst recycling. Furthermore, their high boiling point and polarity make them perfect candidates for non-conventional heating and mixing techniques such as ultrasound- and microwave-assisted reactions. Finally, in some reactions, such as catalytic transfer hydrogenation or transesterification, they can also be used simultaneously as both solvent and reactant. In our ongoing efforts to design a viable protocol that will facilitate the acceptance of glycerol and its derivatives as sustainable solvents, pure glycerol and glycerol triacetate (triacetin) as well as various glycerol-triacetin mixtures were tested as sustainable solvents in several representative organic reactions, such as nucleophilic substitution of benzyl chloride to benzyl acetate, Suzuki-Miyaura cross-coupling of iodobenzene and phenylboronic acid, baker's yeast reduction of ketones, and transfer hydrogenation of olefins.
It was found that reaction performance was affected by the glycerol-to-triacetin ratio, as the solubility of the substrates in the solvent determined product yield; employing the optimal glycerol-to-triacetin ratio therefore maximized product yield. In addition, using glycerol-based solvents enabled easy and successful separation of the products and recycling of the catalysts.
Keywords: glycerol, green chemistry, sustainability, catalysis
Procedia PDF Downloads 624

1051 The Processing of Context-Dependent and Context-Independent Scalar Implicatures
Authors: Liu Jia’nan
Abstract:
The default accounts hold the view that there exists a kind of scalar implicature which can be processed without context and owns a psychological privilege over other scalar implicatures which depend on context. In contrast, Relevance Theorists regard context as a must, because all scalar implicatures have to meet the need of relevance in discourse. However, in Katsos' study, the experimental results showed that although adults quantitatively rejected under-informative utterances with lexical scales (context-independent) and ad hoc scales (context-dependent) at almost the same rate, they still regarded the violation of utterances with lexical scales as much more severe than with ad hoc scales. Neither the default account nor Relevance Theory can fully explain this result. Thus, this result raises two questions: (1) Is it possible that the strange discrepancy is due to other factors instead of the generation of scalar implicature? (2) Are the ad hoc scales truly formed under the possible influence of mental context? Do the participants generate scalar implicatures with ad hoc scales, instead of just comparing semantic differences among target objects in the under-informative utterance? In our Experiment 1, question (1) will be answered by a replication of Katsos' Experiment 1. Test materials will be shown by PowerPoint in the form of pictures, and each procedure will be done under the guidance of a tester in a quiet room. Our Experiment 2 is intended to answer question (2). The pictorial test material will be transformed into literal words in DMDX, and the target sentence will be shown word-by-word to participants in the soundproof room in our lab. Reading time of target parts, i.e., words containing scalar implicatures, will be recorded.
We presume that in the group with lexical scales, a standardized pragmatic mental context would help generate the scalar implicature once the scalar word occurs, which will make the participants expect the upcoming words to be informative. Thus, if the new input after the scalar word is under-informative, more time will be needed for the extra semantic processing. However, in the group with ad hoc scales, the scalar implicature may hardly be generated without the support of a fixed mental context of scale. Thus, whether the new input is informative or not does not matter at all, and the reading time of target parts will be the same in informative and under-informative utterances. People's minds may be a dynamic system in which many factors co-occur. If Katsos' experimental result is reliable, will it shed light on the interplay of default accounts and context factors in scalar implicature processing? We might be able to assume, based on our experiments, that one single dominant processing paradigm may not be plausible. Furthermore, in the processing of scalar implicature, the semantic interpretation and the pragmatic interpretation may be made in a dynamic interplay in the mind. As to the lexical scale, the pragmatic reading may prevail over the semantic reading because of its greater exposure in daily language use, which may also lead the possible default or standardized paradigm to override the role of context. However, the objects in an ad hoc scale are not usually treated as scalar members in mental context, and thus the lexical-semantic association of the objects may prevent their pragmatic reading from generating a scalar implicature. Only when sufficient contextual factors are highlighted can the pragmatic reading gain privilege and generate the scalar implicature.
Keywords: scalar implicature, ad hoc scale, dynamic interplay, default account, Mandarin Chinese processing
Procedia PDF Downloads 322

1050 The Gezi Park Protests in the Columns
Authors: Süleyman Hakan Yilmaz, Yasemin Gülsen Yilmaz
Abstract:
The Gezi Park protests of 2013 significantly changed the Turkish agenda, and their effects have been felt historically. The protests, which rapidly spread throughout the country, were triggered by the proposal to recreate the Ottoman Army Barracks to function as a shopping mall in Gezi Park, located in Istanbul's Taksim neighbourhood, despite the opposition of several NGOs, and by the cutting of trees in the park for this purpose. Once the news that construction vehicles had entered the park on May 27 spread on social media, activists moved into the park to stop the demolition, against whom the police used disproportionate force. With this police intervention and the then prime minister Tayyip Erdoğan's insistent statements about the construction plans, the protests turned into anti-government demonstrations, which then spread to the rest of the country, mainly in big cities like Ankara and Izmir. According to the Ministry of Internal Affairs' June 23rd reports, 2.5 million people joined the demonstrations in 79 provinces, that is, all of them except the provinces of Bayburt and Bingöl, while even more people shared their opinions via social networks. As a result of these events, 8 civilians and 2 security personnel lost their lives, namely police chief Mustafa Sarı, police officer Ahmet Küçükdağ, citizens Mehmet Ayvalıtaş, Abdullah Cömert, Ethem Sarısülük, Ali İsmail Korkmaz, Ahmet Atakan, Berkin Elvan, Burak Can Karamanoğlu, Mehmet İstif, and Elif Çermik, and 8163 more were injured. Besides being a turning point in Turkish history, the Gezi Park protests also had broad repercussions in both Turkish and global media, which focused on Turkey throughout the events. Our study conducts a content analysis of columns in three Turkish newspapers with varying ideological standpoints, Hürriyet, Cumhuriyet, and Yeni Şafak, in order to reveal their basic approach, in their columns, to the Gezi Park protests.
Column content relating to the Gezi protests was collected and analysed for this purpose. The aim of this study is to understand the social effects of the Gezi Park protests through media samples with varying political attitudes towards news casting.
Keywords: Gezi Park, media, news casting, columns
Procedia PDF Downloads 433

1049 Farmers' Perception of Pesticide Usage in Curry Leaf (Murraya koenigii (L.))
Authors: Swarupa Shashi Senivarapu Vemuri
Abstract:
Curry leaf (Murraya koenigii (L.)) exported from India had insecticide residues above maximum residue limits, which are hazardous to consumer health and caused rejection of the commodity at the point of entry in Europe and the Middle East, resulting in a check on the export of curry leaf. Hence, to study current pesticide usage patterns in major curry leaf growing areas, a survey on pesticide use patterns was carried out in the curry leaf growing areas of Guntur district of Andhra Pradesh during 2014-15, by interviewing farmers growing curry leaf using a questionnaire to assess their knowledge and practices on crop cultivation and general awareness of pesticide recommendations and use. Education levels of the farmers were low: 13.96% had only high school education, and 13.96% were illiterate. 18.60% of the farmers cultivated curry leaf on less than 1 acre of land, 32.56% on 2-5 acres, 20.93% on 5-10 acres, and 27.91% on more than 10 acres. The majority of curry leaf farmers (93.03%) used pesticide mixtures rather than applying a single pesticide at a time, basically to save time, labour, and money and to combat two or more pests with a single spray. About 53.48% of farmers applied pesticides at 2-day intervals, followed by 34.89% at 4-day intervals, and about 11.63% sprayed at weekly intervals. Only 27.91% of farmers thought that the quantity of pesticides used on their farm was adequate; 90.69% had the perception that pesticides are helpful in getting good returns. 83.72% of farmers felt that crop change is the only way to control sucking pests, which damage the whole crop. About 4.65% of the curry leaf farmers opined that integrated pest management practices are an alternative to pesticides, and only 11.63% considered natural control an alternative to pesticides. About 65.12% of farmers had the perception that a high pesticide dose will give higher yields.
However, in general, curry leaf farmers preferred to contact pesticide dealers (100%) and were not interested in contacting either an agricultural officer or a scientist. Farmers were aware of the endosulfan ban (93.04%); in contrast, only 65.12% of farmers knew about the ban of monocrotophos on vegetables. Very few farmers knew about pesticide residues and decontamination by washing. Extension educational interventions are necessary to produce fresh curry leaf free from pesticide residues.
Keywords: curry leaf, decontamination, endosulfan, leaf roller, psyllids, tetranychid mite
Procedia PDF Downloads 335

1048 Dosimetric Comparison of Conventional Plans versus Three Dimensional Conformal Simultaneously Integrated Boost Plans
Authors: Shoukat Ali, Amjad Hussain, Latif-ur-Rehman, Sehrish Inam
Abstract:
Radiotherapy plays an important role in the management of cancer patients. Approximately 50% of cancer patients receive radiotherapy at one point or another during the course of treatment. The entire radiotherapy treatment of curative intent is divided into different phases, depending on the histology of the tumor. The established protocols are useful in deciding the total dose, fraction size, and number of phases. The objective of this study was to evaluate the dosimetric differences between conventional treatment protocols and three-dimensional conformal simultaneously integrated boost (SIB) plans for three different tumor sites (i.e., bladder, breast, and brain). A total of 30 patients with brain, breast, and bladder cancers were selected in this retrospective study. All the patients were CT simulated initially. The primary physician contoured PTV1 and PTV2 in the axial slices. The conventional prescribed doses are 60 Gy in 30 fractions for brain and breast, and 64.8 Gy in 36 fractions for bladder treatment. For the SIB plans, biologically effective doses (BED) were calculated for 25 fractions. The two conventional plans (Phase I and Phase II) and a single SIB plan for each patient were generated on the Eclipse™ treatment planning system. Treatment plans were compared and analyzed for coverage index, conformity index, homogeneity index, dose gradient, and organ-at-risk doses. In both plans, 95% of the PTV volume received a minimum of 95% of the prescribed dose. Dose deviation in the optic chiasm was found to be less than 0.5%. There was no significant difference in lung V20 and heart V30 in the breast plans. In the rectum plans, V75%, V50%, and V25% differed by less than 1.2%. Deviations in the tumor coverage, conformity, and homogeneity indices were found to be less than 1%. SIB plans with the three-dimensional conformal radiotherapy technique reduce the overall treatment time without compromising target coverage and without increasing dose to the organs at risk.
The higher dose per fraction may increase the late effects to some extent. Further studies are required to evaluate the late effects with the intention of standardizing the SIB technique for practical implementation.
Keywords: coverage index, conformity index, dose gradient, homogeneity index, simultaneously integrated boost
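The BED calculation behind the SIB prescriptions follows the linear-quadratic model, BED = n·d·(1 + d/(α/β)), where n is the number of fractions and d the dose per fraction. The sketch below computes the BED of the conventional 60 Gy/30-fraction prescription and solves for the dose per fraction that matches it over 25 fractions; the α/β of 10 Gy is an illustrative tumor value assumed here, not a figure stated in the study:

```python
import math

def bed(n_fractions, dose_per_fraction, alpha_beta):
    """Linear-quadratic biologically effective dose:
    BED = n * d * (1 + d / (alpha/beta))."""
    return n_fractions * dose_per_fraction * (1 + dose_per_fraction / alpha_beta)

def iso_bed_dose_per_fraction(target_bed, n_fractions, alpha_beta):
    """Solve n*d*(1 + d/ab) = target_bed for d, i.e. the positive root of
    (n/ab)*d^2 + n*d - target_bed = 0."""
    a = n_fractions / alpha_beta
    b = n_fractions
    c = -target_bed
    return (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)

# Conventional prescription: 60 Gy in 30 fractions; alpha/beta = 10 Gy (assumed)
target = bed(30, 2.0, 10.0)                        # 72.0 Gy_10
d25 = iso_bed_dose_per_fraction(target, 25, 10.0)  # about 2.33 Gy/fraction
print(f"BED = {target:.1f} Gy_10 -> {d25:.2f} Gy/fraction over 25 fractions "
      f"({25 * d25:.1f} Gy total)")
```

Repeating the calculation with a late-effect α/β (e.g., 3 Gy) shows why the higher dose per fraction may increase late effects, as noted above.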
Procedia PDF Downloads 476

1047 Update on Epithelial Ovarian Cancer (EOC), Types, Origin, Molecular Pathogenesis, and Biomarkers
Authors: Salina Yahya Saddick
Abstract:
Ovarian cancer remains the most lethal gynecological malignancy due to the lack of highly sensitive and specific screening tools for the detection of early-stage disease. The ovarian surface epithelium (OSE) provides the progenitor cells for 90% of human ovarian cancers. Recent morphologic, immunohistochemical, and molecular genetic studies have led to the development of a new paradigm for the pathogenesis and origin of epithelial ovarian cancer (EOC) based on a dualistic model of carcinogenesis that divides EOC into two broad categories, designated Types I and II, which are characterized by specific mutations, including KRAS, BRAF, ERBB2, CTNNB1, PTEN, PIK3CA, ARID1A, and PPP2R1A, which target specific cell signaling pathways. Type I tumors are relatively genetically stable and typically display a variety of somatic sequence mutations that include KRAS, BRAF, PTEN, PIK3CA, CTNNB1 (the gene encoding beta-catenin), ARID1A, and PPP2R1A, but very rarely TP53. The cancer stem cell (CSC) hypothesis postulates that the tumorigenic potential of CSCs is confined to a very small subset of tumor cells and is defined by their ability to self-renew and differentiate, leading to the formation of a tumor mass. miRNAs are promising biomarkers, as they are remarkably stable, allowing isolation and analysis from tissues and from blood, in which they can be found as free circulating nucleic acids and in mononuclear cells. Recently, genomic analyses have identified biomarkers and potential therapeutic targets for ovarian cancer, namely FGF18, which plays an active role in controlling migration, invasion, and tumorigenicity of ovarian cancer cells through NF-κB activation, which increases the production of oncogenic cytokines and chemokines.
This review summarizes updated information on epithelial ovarian cancers and points out the most recent ongoing research.
Keywords: epithelial ovarian cancers, somatic sequence mutations, cancer stem cell (CSC), potential protein biomarker, genomic analysis, FGF18 biomarker
Procedia PDF Downloads 380

1046 Electrophysiological Correlates of Statistical Learning in Children with and without Developmental Language Disorder
Authors: Ana Paula Soares, Alexandrina Lages, Helena Oliveira, Francisco-Javier Gutiérrez-Domínguez, Marisa Lousada
Abstract:
From an early age, exposure to a spoken language allows us to implicitly capture the structure underlying the succession of speech sounds in that language and to segment it into meaningful units (words). Statistical learning (SL), i.e., the ability to pick up patterns in the sensory environment even without the intention or consciousness of doing so, is thus assumed to play a central role in the acquisition of the rule-governed aspects of language, and possibly to lie behind the language difficulties exhibited by children with developmental language disorder (DLD). The research conducted so far has, however, led to inconsistent results, which might stem from the behavioral tasks used to test SL. In a classic SL experiment, participants are first exposed to a continuous stream (e.g., of syllables) in which, unbeknownst to the participants, stimuli are grouped into triplets that always appear together in the stream (e.g., 'tokibu', 'tipolu'), with no pauses between them (e.g., 'tokibutipolugopilatokibu') and without any information regarding the task or the stimuli. Following exposure, SL is assessed by asking participants to discriminate between triplets previously presented ('tokibu') and new sequences never presented together during exposure ('kipopi'), i.e., to perform a two-alternative forced-choice (2-AFC) task. Despite the widespread use of the 2-AFC task to test SL, it has come under increasing criticism, as it is an offline post-learning task that only assesses the result of the learning that occurred during the previous exposure phase and that might be affected by other factors beyond the computation of the regularities embedded in the input, typically the likelihood of two syllables occurring together, a statistic known as transitional probability (TP).
One solution to overcome these limitations is to assess SL as exposure to the stream unfolds, using online techniques such as event-related potentials (ERPs), which are highly sensitive to the time course of learning in the brain. Here we collected ERPs to examine the neurofunctional correlates of SL in preschool children with DLD and in chronological-age-matched typical language development (TLD) controls, who were exposed to an auditory stream containing eight three-syllable nonsense words, four presenting high TPs and four presenting low TPs, to further analyze whether the ability of DLD and TLD children to extract word-like units from the stream was modulated by the words' predictability. Moreover, to ascertain whether prior knowledge of the to-be-learned regularities affected the neural responses to high- and low-TP words, children performed the auditory SL task first under implicit and subsequently under explicit conditions. Although behavioral evidence of SL was not obtained in either group, the neural responses elicited during the exposure phases of the SL tasks differentiated children with DLD from children with TLD. Specifically, the results indicated that only children from the TLD group showed neural evidence of SL, particularly in the SL task performed under explicit conditions, first for the low-TP and subsequently for the high-TP 'words'. Taken together, these findings support the view that children with DLD show deficits in the extraction of the regularities embedded in the auditory input, which might underlie their language difficulties.
Keywords: developmental language disorder, statistical learning, transitional probabilities, word segmentation
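The TP statistic at the heart of these designs is simply the conditional probability of one syllable given the previous one, TP(A→B) = count(AB)/count(A). A minimal sketch over a toy syllable stream (the word order and syllables are illustrative, in the style of the stimuli above, not the actual experimental material):

```python
from collections import Counter

def transitional_probabilities(syllables):
    """TP(A -> B) = count(AB) / count(A), over adjacent syllable pairs."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])  # only syllables with a successor
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

# A stream built from two triplet "words" in a fixed pseudo-random order:
# within-word TPs are high (1.0), across-word TPs are lower.
words = [["to", "ki", "bu"], ["ti", "po", "lu"]]
order = [0, 0, 1, 0, 1, 1]
stream = [syl for i in order for syl in words[i]]
tps = transitional_probabilities(stream)
print(tps[("to", "ki")])  # within-word TP: 1.0
print(tps[("bu", "ti")])  # across-word TP: lower (2/3 here)
```

High- and low-TP 'words' in the experiment differ precisely in these within-word conditional probabilities.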
Procedia PDF Downloads 188

1045 Japanese and Europe Legal Frameworks on Data Protection and Cybersecurity: Asymmetries from a Comparative Perspective
Authors: S. Fantin
Abstract:
This study is the result of legal research on cybersecurity and data protection within the EUNITY (Cybersecurity and Privacy Dialogue between Europe and Japan) project, aimed at fostering the dialogue between the European Union and Japan. Based on the research undertaken therein, the author offers an outline of the main asymmetries in the laws governing these fields in the two regions. The research is a comparative analysis of the two legal frameworks, taking into account specific provisions, ratio legis, and policy initiatives. Recent doctrine was taken into account, too, as well as empirical interviews with EU and Japanese stakeholders and project partners. With respect to the protection of personal data, the European Union has recently reformed its legal framework with a package which includes a regulation (the General Data Protection Regulation) and a directive (Directive (EU) 2016/680 on personal data processing in the law enforcement domain). In turn, the Japanese law under scrutiny for this study has been the Act on the Protection of Personal Information. Based on a comparative analysis, some asymmetries arise. The main ones concern the definition of personal information and the scope of the two frameworks. Furthermore, the rights of data subjects are articulated differently in the two regions, while the nature of sanctions takes two opposite approaches. Regarding the cybersecurity framework, the situation looks similarly misaligned. Japan's main text of reference is the Basic Cybersecurity Act, while the European Union has a more fragmented legal structure (to name a few instruments, the Network and Information Security Directive, the Critical Infrastructure Directive, and the Directive on Attacks against Information Systems).
On a relevant note, unlike the more industry-oriented European approach, the concept of cyber hygiene seems to be neatly embedded in the Japanese legal framework, with a number of provisions that alleviate operators' liability by turning such a burden into a set of recommendations to be primarily observed by citizens. With respect to the reasons to fill such normative gaps, these are mostly grounded on three bases. Firstly, the cross-border nature of cybercrime calls for considering both the magnitude of the issue and its regulatory stance globally. Secondly, empirical findings from the EUNITY project showed how recent data breaches and cyber-attacks had shared implications between Europe and Japan. Thirdly, the geopolitical context is currently moving in the direction of bringing the two regions to significant agreements from a trade standpoint, but also from a data protection perspective (with the imminent signature by both parties of a so-called 'Adequacy Decision'). The research conducted in this study reveals two asymmetric legal frameworks on cybersecurity and data protection. In view of the future challenges presented by the strengthening of the collaboration between the two regions and the transnational fashion of cybercrime, it is urged that solutions be found to fill such gaps, in order to allow the European Union and Japan to wisely strengthen their partnership.
Keywords: cybersecurity, data protection, European Union, Japan
Procedia PDF Downloads 123

1044 Courtesy to Things and Sense of Unity with the Things: Psychological Evaluation Based on the Teaching of Buddha
Abstract:
This study aims to clarify the factors of courtesy to things and the effect of courtesy on a sense of unity with things, based on the teaching of Buddha. The teaching of Buddha explains that when dealing with things carefully and in a courteous manner, the border between the self and the external world disappears, and the two are united. This is an example of the Buddhist way of explaining the connections among all existences, and in the modern world, it is also a lesson that humans should not let things go to waste and should treat them politely. In order to reveal concrete ways to practice courtesy to things, we clarify the factors of courtesy (Study 1) and examine the effect of courtesy on the sense of unity with things (Study 2). In Study 1, 100 Japanese (mean age=54.39, SD=15.04, 50% female) described freely what constitutes courtesy to things they use daily. These descriptions were classified, and 25 items were constructed asking about the degree of courtesy to things. Then, a different sample of 678 Japanese (mean age=44.72, SD=13.14, 50% female) answered the 25 items on a 7-point scale about tools they use daily. An exploratory factor analysis revealed two factors. The first factor (α=.97) includes 'I deal with the thing carefully' and 'I clean up the thing after use'. This factor reflects how gently people care about things. The second factor (α=.96) includes 'A sense of self-control has come to me through using the thing' and 'I have got inner strength by taking care of the thing'. The second factor reflects how people learn by dealing with things carefully. In Study 2, 200 Japanese (mean age=49.39, SD=11.07, 50% female) answered questions about courtesy to things they use daily and the degree of their sense of unity with those things, using the Inclusion of Other in the Self scale, replacing 'Other' with 'Your thing'. An ANOVA was conducted to examine the effect of courtesy (high/low levels of the two factors) on the sense-of-unity score. The results showed a main effect of care level.
People with a high level of care have a stronger sense of unity with the thing. A tendency toward an interaction effect was also found: the condition with a high level of care and a high level of learning enhances the sense of unity more than the condition with a low level of care and a high level of learning. Study 1 found that courtesy is composed of care and learning. That is, courtesy is not only active care for things but also learning the meaning of things and growing personally with them. Study 2 revealed that people with a high level of care feel a stronger sense of unity, as do people with both a high level of care and a high level of learning. The findings support the idea of the teaching of Buddha. In the future, it is necessary to examine the combined effect of care and learning.
Keywords: courtesy, things, sense of unity, the teaching of Buddha
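For a balanced 2×2 design like the one described (care: high/low × learning: high/low), the ANOVA F statistics can be computed directly from the cell, marginal, and grand means. A self-contained sketch with made-up scores (not the study's data; two observations per cell for brevity):

```python
import numpy as np

def two_way_anova(cells):
    """Balanced 2x2 ANOVA. cells[i][j] holds the scores for level i of
    factor A (care) and level j of factor B (learning), equal n per cell.
    Returns F statistics for A, B, and the A x B interaction."""
    y = np.array(cells, dtype=float)          # shape (2, 2, n)
    n = y.shape[2]
    grand = y.mean()
    a_means = y.mean(axis=(1, 2))             # marginal means of factor A
    b_means = y.mean(axis=(0, 2))             # marginal means of factor B
    cell_means = y.mean(axis=2)

    ss_a = 2 * n * ((a_means - grand) ** 2).sum()
    ss_b = 2 * n * ((b_means - grand) ** 2).sum()
    inter = cell_means - a_means[:, None] - b_means[None, :] + grand
    ss_ab = n * (inter ** 2).sum()
    ss_within = ((y - cell_means[:, :, None]) ** 2).sum()
    ms_within = ss_within / (4 * (n - 1))     # within-cell df = 4 * (n - 1)
    return ss_a / ms_within, ss_b / ms_within, ss_ab / ms_within

# Hypothetical sense-of-unity scores (2 participants per cell):
cells = [[[5, 6], [4, 5]],   # care high: learning high, learning low
         [[2, 3], [3, 4]]]   # care low:  learning high, learning low
f_care, f_learning, f_interaction = two_way_anova(cells)
print(f_care, f_learning, f_interaction)  # 16.0 0.0 4.0
```

With these toy numbers, the care main effect dominates and a smaller interaction term appears, mirroring the pattern of results reported above.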
Procedia PDF Downloads 150

1043 Characterization of Extra Virgin Olive Oil from Olive Cultivars Grown in Pothwar, Pakistan
Authors: Abida Mariam, Anwaar Ahmed, Asif Ahmad, Muhammad Sheeraz Ahmad, Muhammad Akram Khan, Muhammad Mazahir
Abstract:
The plant olive (Olea europaea L.) is known for its commercial significance due to nutritional and health benefits. Pakistan is ranked 4th among countries that import olive oil, and 70% of its edible oil is imported to fulfil the needs of the country. There exists great potential for Olea europaea cultivation in Pakistan. The popularity and cultivation of the olive fruit have increased in the recent past due to its high socio-economic and health significance. Almost no data exist on the chemical composition of extra virgin olive oil extracted from cultivars grown in Pothwar, an area with an arid climate conducive to the growth of olive trees. Keeping in view these factors, a study was conducted to characterize the olive oil extracted from olive cultivars collected from the Pothwar regions of Pakistan for their nutritional potential and value addition. Ten olive cultivars (Gemlik, Coratina, Sevillano, Manzanilla, Leccino, Koroneiki, Frantoio, Arbiquina, Earlik, and Ottobratica) were collected from the Barani Agriculture Research Institute, Chakwal. Extra virgin olive oil (EVOO) was extracted by cold pressing and centrifuging of olive fruits. The highest oil yield was obtained from Coratina (23.9%), followed by Frantoio (23.7%), Koroneiki (22.8%), Sevillano (22%), Ottobratica (22%), Leccino (20.5%), Arbiquina (19.2%), Manzanilla (17.2%), Earlik (14.4%), and Gemlik (13.1%). The extracted virgin olive oil was studied for various physico-chemical properties and fatty acid profile.
The physical and chemical properties, i.e., characteristic odor and taste, light yellow color with no foreign matter, insoluble impurities (≤0.08), free fatty acid (0.1 to 0.8), acidity (0.5 to 1.6 mg/g acid), peroxide value (1.5 to 5.2 meqO2/kg), iodine value (82 to 90), saponification value (186 to 192 mg/g), unsaponifiable matter (4 to 8 g/kg), and ultraviolet spectrophotometric analysis (K232 and K270), showed values in the acceptable range established by PSQCA and IOOC for extra virgin olive oil. Olive oil was analyzed by near-infrared (NIR) spectrophotometry for fatty acids, which were found to be: palmitic, palmitoleic, stearic, oleic, linoleic, and alpha-linolenic. The major fatty acid was oleic acid, present in the highest percentage (55 to 66.1%), followed by linoleic (10.4 to 20.4%), palmitic (13.8 to 19.5%), stearic (3.9 to 4.4%), palmitoleic (0.3 to 1.7%), and alpha-linolenic (0.9 to 1.7%). The results showed significant differences in the parameters analyzed for all ten cultivars, confirming that genetic factors are important contributors to the physico-chemical characteristics of the oil. The olive oil showed superior physical and chemical properties and is recommended as one of the healthiest forms of edible oil. This study will help consumers be more aware of and make better choices among the healthy oils available locally, thus contributing towards their better health.
Keywords: characterization, extra virgin olive oil, oil yield, fatty acids
Procedia PDF Downloads 97
1042 Understanding the Lithiation/Delithiation Mechanism of Si₁₋ₓGeₓ Alloys
Authors: Laura C. Loaiza, Elodie Salager, Nicolas Louvain, Athmane Boulaoued, Antonella Iadecola, Patrik Johansson, Lorenzo Stievano, Vincent Seznec, Laure Monconduit
Abstract:
Lithium-ion batteries (LIBs) have an important place among energy storage devices due to their high capacity and good cyclability. However, advancements in portable and transportation applications have extended the research towards new horizons, and today development is hampered, e.g., by the capacity of the electrodes employed. Silicon and germanium are among the candidate anode materials considered, as they can undergo alloying reactions with lithium while delivering high capacities. It has been demonstrated that silicon in its highest lithiated state can deliver up to ten times more capacity than graphite (372 mAh/g): 4200 mAh/g for Li₂₂Si₅ and 3579 mAh/g for Li₁₅Si₄. On the other hand, germanium presents a capacity of 1384 mAh/g for Li₁₅Ge₄, and a better electronic conductivity and Li-ion diffusivity compared to Si. Nonetheless, the commercialization potential of Ge is limited by its cost. The synergetic effect of Si₁₋ₓGeₓ alloys has been proven: the capacity is increased compared to Ge-rich electrodes and the capacity retention is improved compared to Si-rich electrodes, but the exact performance of this type of electrode will depend on factors like specific capacity, C-rates, cost, etc. There are several reports on various formulations of Si₁₋ₓGeₓ alloys with promising LIB anode performance, with most work performed on complex nanostructures resulting from synthesis efforts implying high cost. In the present work, we studied the electrochemical mechanism of the Si₀.₅Ge₀.₅ alloy as a realistic micron-sized electrode formulation using carboxymethyl cellulose (CMC) as the binder. A combination of a large set of in situ and operando techniques was employed to investigate the structural evolution of Si₀.₅Ge₀.₅ during the lithiation and delithiation processes: powder X-ray diffraction (XRD), X-ray absorption spectroscopy (XAS), Raman spectroscopy, and ⁷Li solid-state nuclear magnetic resonance (NMR) spectroscopy.
The results present a complete picture of the structural modifications induced by the lithiation/delithiation processes. Amorphization of Si₀.₅Ge₀.₅ was observed at the beginning of discharge. Further lithiation induces the formation of a-Liₓ(Si/Ge) intermediates and the crystallization of Li₁₅(Si₀.₅Ge₀.₅)₄ at the end of the discharge. At very low voltages, a reversible process of overlithiation and formation of Li₁₅₊δ(Si₀.₅Ge₀.₅)₄ was identified and related to a structural evolution of Li₁₅(Si₀.₅Ge₀.₅)₄. Upon charge, the c-Li₁₅(Si₀.₅Ge₀.₅)₄ was transformed into a-Liₓ(Si/Ge) intermediates. At the end of the process, an amorphous phase assigned to a-SiₓGeᵧ was recovered. It was thereby demonstrated that Si and Ge are collectively active along the cycling process, upon discharge with the formation of a ternary Li₁₅(Si₀.₅Ge₀.₅)₄ phase (with a step of overlithiation) and upon charge with the rebuilding of the a-Si-Ge phase. This process is undoubtedly behind the enhanced performance of Si₀.₅Ge₀.₅ compared to a physical mixture of Si and Ge.
Keywords: lithium ion battery, silicon germanium anode, in situ characterization, X-ray diffraction
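As a sanity check on the quoted figures, the theoretical gravimetric capacities follow from the standard relation Q = xF/(3.6M), where x is the number of lithium atoms per host atom, F the Faraday constant and M the host molar mass; a minimal sketch:

```python
# Theoretical gravimetric capacity Q = x * F / (3.6 * M) in mAh/g:
# x = Li atoms per host atom, F = Faraday constant (C/mol),
# M = host molar mass (g/mol); the factor 3.6 converts C/g to mAh/g.

F = 96485.3  # Faraday constant, C/mol

def capacity_mAh_per_g(x_li_per_host, molar_mass):
    """Theoretical gravimetric capacity of an alloying/insertion host."""
    return x_li_per_host * F / (3.6 * molar_mass)

capacities = {
    "Li22Si5 (4.4 Li per Si)": capacity_mAh_per_g(22 / 5, 28.085),
    "Li15Si4 (3.75 Li per Si)": capacity_mAh_per_g(15 / 4, 28.085),
    "Li15Ge4 (3.75 Li per Ge)": capacity_mAh_per_g(15 / 4, 72.630),
    "LiC6 graphite (1/6 Li per C)": capacity_mAh_per_g(1 / 6, 12.011),
}
for phase, q in capacities.items():
    print(f"{phase}: {q:.0f} mAh/g")
```

The four values reproduce the 4200, 3579, 1384 and 372 mAh/g quoted in the abstract to within rounding.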
Procedia PDF Downloads 286
1041 Machine Learning in Gravity Models: An Application to International Recycling Trade Flow
Authors: Shan Zhang, Peter Suechting
Abstract:
Predicting trade patterns is critical to decision-making in public and private domains, especially in the current context of trade disputes among major economies. In the past, U.S. recycling has relied heavily on strong demand for recyclable materials overseas. However, starting in 2017, a series of new recycling policies (bans and higher inspection standards) was enacted by multiple countries that were the primary importers of recyclables from the U.S. prior to that point. As the global trade flow of recycling shifts, some new importers, mostly developing countries in South and Southeast Asia, have been overwhelmed by the sheer quantities of scrap materials they have received. As the leading exporter of recyclable materials, the U.S. now has a pressing need to build its recycling industry domestically. With respect to the global trade in scrap materials used for recycling, the interest in this paper is (1) predicting how the export of recyclable materials from the U.S. might vary over time, and (2) predicting how international trade flows for recyclables might change in the future. Focusing on three major recyclable materials with a history of trade, this study uses data-driven and machine learning (ML) algorithms, both supervised (shrinkage and tree methods) and unsupervised (neural network methods), to decipher the international trade pattern of recycling. Forecasting the potential trade values of recyclables in the future could help importing countries, to which those materials will shift next, to prepare related trade policies. Such policies can assist policymakers in minimizing negative environmental externalities and in finding the optimal amount of recyclables needed by each country. Such forecasts can also help exporting countries, like the U.S., understand the importance of a healthy domestic recycling industry.
The preliminary results suggest that gravity models, in addition to a particular selection of macroeconomic predictor variables, are appropriate predictors of the total export value of recyclables. With the inclusion of variables measuring aspects of political conditions (trade tariffs and bans), predictions show that recyclable materials are shifting from more policy-restricted countries to less policy-restricted countries in international recycling trade. Those countries also tend to have high manufacturing activity as a percentage of their GDP.
Keywords: environmental economics, machine learning, recycling, international trade
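A minimal illustration of the gravity-model idea, not the study's actual specification: a log-linear gravity equation fitted by ordinary least squares to synthetic data. All variable names and coefficient values here are invented for the sketch; the paper additionally uses policy dummies (tariffs, bans), shrinkage and tree methods.

```python
import numpy as np

# Log-linear gravity model fitted by OLS on synthetic data:
#   log(trade_ij) = b0 + b1*log(GDP_i) + b2*log(GDP_j) + b3*log(dist_ij)
# Coefficients and data below are invented purely for illustration.

rng = np.random.default_rng(0)
n = 500
gdp_i = rng.uniform(1e2, 1e5, n)   # exporter GDP (arbitrary units)
gdp_j = rng.uniform(1e2, 1e5, n)   # importer GDP
dist = rng.uniform(1e2, 2e4, n)    # bilateral distance

true_b = np.array([1.0, 0.8, 0.7, -1.1])  # b0, b1, b2, b3 (trade falls with distance)
X = np.column_stack([np.ones(n), np.log(gdp_i), np.log(gdp_j), np.log(dist)])
log_trade = X @ true_b + rng.normal(0.0, 0.1, n)  # noisy observations

# Estimate the coefficients by least squares.
b_hat, *_ = np.linalg.lstsq(X, log_trade, rcond=None)
print(np.round(b_hat, 2))
```

With enough observations the fitted coefficients recover the generating values, which is the property the gravity specification relies on before ML extensions are layered on top.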
Procedia PDF Downloads 168
1040 Estimation of Biomedical Waste Generated in a Tertiary Care Hospital in New Delhi
Authors: Priyanka Sharma, Manoj Jais, Poonam Gupta, Suraiya K. Ansari, Ravinder Kaur
Abstract:
Introduction: As much as health care is necessary for the population, so is the management of the biomedical waste produced. Biomedical waste is a wide terminology used for the waste material produced during the diagnosis, treatment or immunization of human beings and animals, in research, or in the production or testing of biological products. Biomedical waste management is a chain of processes from the point of generation of biomedical waste to its final disposal in the correct and proper way assigned for that particular type of waste. Any deviation from the said processes leads to improper disposal of biomedical waste, which itself is a major health hazard. Proper segregation of biomedical waste is the key to biomedical waste management. Improper disposal of BMW can cause sharp injuries, which may lead to HIV, hepatitis B virus and hepatitis C virus infections. Therefore, proper disposal of BMW is of utmost importance. Health care establishments segregate biomedical waste and dispose of it as per the biomedical waste management rules in India. Objectives: This study was done to observe the current trends of biomedical waste generated in a tertiary care hospital in Delhi. Methodology: Biomedical waste management rounds were conducted in the hospital wards. Relevant details were collected and analysed, and the sites with maximum biomedical waste generation were identified. All the data were cross-checked with the common collection site. Results: The total amount of waste generated in the hospital from January 2014 to December 2014 was 639,547 kg, of which 70.5% was general (non-hazardous) waste and the remaining 29.5% was BMW, which consisted of highly infectious waste (12.2%), disposable plastic waste (16.3%) and sharps (1%). The sites producing the maximum quantity of biomedical waste were the Obstetrics and Gynaecology wards, with a total biomedical waste production of 45.8%, followed by the Paediatrics, Surgery and Medicine wards with 21.2%, 4.6% and 4.3% respectively.
The maximum average biomedical waste generated was by the Obstetrics and Gynaecology ward with 0.7 kg/bed/day, followed by the Paediatrics, Surgery and Medicine wards with 0.29, 0.28 and 0.18 kg/bed/day respectively. Conclusions: Hospitals should pay attention to the sites which produce a large amount of BMW to avoid improper segregation of biomedical waste. Also, induction and refresher training programmes on biomedical waste management should be conducted to avoid improper management of biomedical waste. Healthcare workers should be made aware of the risks of poor biomedical waste management.
Keywords: biomedical waste, biomedical waste management, hospital-tertiary care, New Delhi
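The composition and intensity figures above can be cross-checked with a few lines of arithmetic; the ward bed count and annual ward output below are hypothetical, since the abstract reports only the derived kg/bed/day values.

```python
# Cross-check of the reported composition: the hazardous fractions
# (highly infectious 12.2%, disposable plastics 16.3%, sharps 1%)
# should sum to the stated 29.5% BMW share of the 639,547 kg annual
# total. The per-bed intensity follows from a ward's annual output
# and bed count (both hypothetical in this sketch).

total_kg = 639_547
bmw_fractions = {"highly_infectious": 0.122, "plastic": 0.163, "sharps": 0.010}

bmw_share = sum(bmw_fractions.values())   # should equal 0.295
bmw_kg = total_kg * bmw_share

# Hypothetical 100-bed ward producing 25,550 kg of BMW per year:
kg_per_bed_day = 25_550 / (100 * 365)

print(f"BMW share {bmw_share:.1%}, BMW mass {bmw_kg:,.0f} kg, "
      f"ward intensity {kg_per_bed_day:.1f} kg/bed/day")
```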
Procedia PDF Downloads 245
1039 Ensemble of Misplacement, Juxtaposing Feminine Identity in Time and Space: An Analysis of Works of Modern Iranian Female Photographers
Authors: Delaram Hosseinioun
Abstract:
In their collections, Shirin Neshat, Mitra Tabrizian, Gohar Dashti and Newsha Tavakolian adopt a hybrid form of narrative to confront the restrictions imposed on women in hegemonic public and private spaces. Focusing on motives such as social marginalisation, crisis of belonging, and lack of agency for women, the artists depict the regression of women’s rights in their respective generations. Based on the ideas of Mikhail Bakhtin, namely his concept of polyphony or the plurality of contradictory voices, the views of Judith Butler on giving an account of oneself, and Henri Lefebvre’s theories on social space, this study illustrates the artists’ concept of identity in crisis through time and space. The research explores how the artists took their art as a novel dimension to depict and confront the hardships imposed on Iranian women. Henri Lefebvre makes a distinction between complex social structures through which individuals situate, perceive and represent themselves. By adding Bakhtin’s polyphonic view to Lefebvre’s concepts of perceived and lived spaces, the study explores the sense of social fragmentation in the works of Dashti and Tavakolian. One argument is that, as representatives of the contemporary generation of female artists who spent their lives in Iran and faced a higher degree of restrictions, their hyperbolic and theatrical styles stand as a symbolic act of confrontation against the restrictive socio-cultural norms imposed on women. Further, the research explores the possibility of reclaiming one’s voice and sense of agency through art, corresponding with the Bakhtinian sense of polyphony and Butler’s concept of giving an account of oneself. The works of Neshat and Tabrizian, as representatives of the previous generation who faced exile and diaspora, encompass a higher degree of misplacement, violence and decay of women’s presence. In their works, the women’s body encompasses Lefebvre’s dismantled temporal and spatial setting.
Notably, the ongoing social conviction and gender-based dogma imposed on women frame some of the concurrent motives among the selected collections of the four artists. By applying an interdisciplinary lens and integrating the interviews conducted with the artists, the study illustrates how the artists seek a transcultural account for themselves and the women of their generations. Further, the selected collections manifest the urgency of an authentic and liberal voice and setting for women, resonating with the concurrent Women, Life, Freedom movement in Iran.
Keywords: Persian modern female photographers, transcultural studies, Shirin Neshat, Mitra Tabrizian, Gohar Dashti, Newsha Tavakolian, Butler, Bakhtin, Lefebvre
Procedia PDF Downloads 78
1038 Material Supply Mechanisms for Contemporary Assembly Systems
Authors: Rajiv Kumar Srivastava
Abstract:
Manufacturing of complex products such as automobiles and computers requires a very large number of parts and sub-assemblies. The design of mechanisms for the delivery of these materials to the point of assembly is an important manufacturing system and supply chain challenge. Different approaches to this problem have evolved for assembly lines designed to make large volumes of standardized products. However, contemporary assembly systems are required to concurrently produce a variety of products using approaches such as mixed-model production, and at times even mass customization. In this paper we examine the material supply approaches for variety production in moderate to large volumes. The conventional approach for material delivery to high-volume assembly lines is to supply and stock materials line-side. However, for certain materials, especially when the same or similar items are used along the line, it is more convenient to supply materials in kits. Kitting becomes more preferable when lines concurrently produce multiple products in mixed-model mode, since space requirements could increase as product/part variety increases. At times such kits may travel along with the product, while in some situations it may be better to have delivery- and station-specific kits rather than product-based kits. Further, in some mass customization situations it may even be better to have a single delivery and assembly station, to which an entire kit is delivered for fitment, rather than a normal assembly line. Finally, in low-to-moderate volume assembly, such as in engineered machinery, it may be logistically more economical to gather materials in an order-specific kit prior to launching final assembly. We have studied material supply mechanisms to support assembly systems as observed in case studies of firms with different combinations of volume and variety/customization.
It is found that the appropriate approach tends to be a hybrid between direct line supply and different kitting modes, with the best mix being a function of the manufacturing and supply chain environment, as well as space and handling considerations. In our continuing work we are studying these scenarios further, through the use of descriptive models, and progressing towards prescriptive models to help achieve the optimal approach, capturing the trade-offs between inventory, material handling, space, and efficient line supply.
Keywords: assembly systems, kitting, material supply, variety production
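The line-side vs. kitting trade-off described above can be illustrated with a toy cost model; all cost parameters below are hypothetical stand-ins for the inventory, handling and space terms named in the abstract, not the authors' descriptive models.

```python
# Toy sketch (hypothetical parameters) of the line-side vs. kitting
# trade-off: line-side stocking cost grows with part variety (space is
# needed at the station for every variant), while kitting adds a picking
# cost per kit delivered but keeps station-side space roughly constant.

def lineside_cost(n_variants, space_cost_per_variant=4.0, handling=10.0):
    """Cost per shift of stocking every part variant at the station."""
    return handling + space_cost_per_variant * n_variants

def kitting_cost(n_kits, pick_cost_per_kit=0.9, space=8.0):
    """Cost per shift of picking and delivering one kit per unit built."""
    return space + pick_cost_per_kit * n_kits

def preferred_mode(n_variants, n_kits):
    """Pick the cheaper supply mode for a station."""
    return "kitting" if kitting_cost(n_kits) < lineside_cost(n_variants) else "line-side"

print(preferred_mode(n_variants=2, n_kits=60))    # low variety -> line-side
print(preferred_mode(n_variants=40, n_kits=60))   # high variety -> kitting
```

Even this crude model reproduces the qualitative finding: the best choice flips from line-side supply to kitting as variety grows, so a hybrid across stations is natural.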
Procedia PDF Downloads 226
1037 Biotechnological Interventions for Crop Improvement in Nutricereal Pearl Millet
Authors: Supriya Ambawat, Subaran Singh, C. Tara Satyavathi, B. S. Rajpurohit, Ummed Singh, Balraj Singh
Abstract:
Pearl millet [Pennisetum glaucum (L.) R. Br.] is an important staple food of the arid and semiarid tropical regions of Asia, Africa, and Latin America. It is rightly termed a nutricereal as it has high nutritional value and is a good source of carbohydrate, protein, fat, ash, dietary fibre, potassium, magnesium, iron, zinc, etc. Pearl millet has a low prolamine fraction and is gluten-free, which is useful for people with a gluten allergy. It has several health benefits, like reduction in blood pressure, thyroid disorders, diabetes, cardiovascular and celiac diseases, but its direct consumption as food has significantly declined due to several reasons. Keeping this in view, it is important to reorient the efforts to generate demand through value addition and quality improvement and to create awareness of the nutritional merits of pearl millet. In India, through the Indian Council of Agricultural Research-All India Coordinated Research Project on Pearl Millet, multilocational coordinated trials of developed hybrids were conducted at various centres. The gene banks of pearl millet contain varieties with high levels of iron and zinc, which were used to produce new pearl millet varieties with elevated iron levels bred with the high-yielding varieties. Thus, using breeding approaches and biochemical analysis, a total of 167 hybrids and 61 varieties were identified and released for cultivation in different agro-ecological zones of the country, which also include some biofortified hybrids rich in Fe and Zn. Further, using several biotechnological interventions such as molecular markers, next-generation sequencing (NGS), association mapping, nested association mapping (NAM), MAGIC populations, genome editing, genotyping by sequencing (GBS) and genome-wide association studies (GWAS), advancement in millet improvement has become possible by identifying and tagging genes underlying a trait in the genome. Using DArT markers, very high density linkage maps were constructed for pearl millet.
HHB 67 Improved has been released using marker-assisted selection (MAS) strategies, and genomic tools were used to identify Fe-Zn quantitative trait loci (QTLs). The draft genome sequence of pearl millet has also opened various ways to explore the crop. Further, the genomic positions of simple sequence repeat (SSR) markers significantly associated with iron and zinc content in the consensus map are being identified, and research is in progress towards mapping QTLs for flour rancidity. The sequence information is being used to explore genes and enzymatic pathways responsible for the rancidity of flour. Thus, the development and application of several biotechnological approaches, along with biofortification, can accelerate the genetic gain targets for pearl millet improvement and help improve its quality.
Keywords: biotechnological approaches, genomic tools, malnutrition, MAS, nutricereal, pearl millet, sequencing
Procedia PDF Downloads 185
1036 Study on Runoff Allocation Responsibilities of Different Land Uses in a Single Catchment Area
Authors: Chuan-Ming Tung, Jin-Cheng Fu, Chia-En Feng
Abstract:
In recent years, the rapid development of urban land in Taiwan has led to a constant increase in impervious surface area, which has increased the risk of waterlogging during heavy rainfall. Therefore, promoting runoff allocation responsibilities has often been used as a means of reducing regional flooding. In this study, a single catchment area covering both urban and rural land is taken as the study area. Based on the Storm Water Management Model (SWMM), runoff allocation responsibilities for urban and rural land in a single catchment area were explored according to the respective control regulations on land use. The impacts of runoff increment and reduction in the sub-catchment areas were studied to understand the impact of highly developed urban land on the flood risk reduction of the rural land at the back end. The analysis used 1-hour design rainfalls with return periods of 2, 5, 10 and 25 years. The results showed that if the study area were fully developed, the peak discharge at the outlet would increase by 24.46%-22.97% without runoff allocation responsibilities. The front-end urban land would increase the runoff reaching the back-end rural land by 76.19%-46.51%. However, if runoff allocation responsibilities were carried out in the study area, the peak discharge could be reduced by 58.38%-63.08%, which would allow the front end to reduce the peak flow to the back end by 54.05%-23.81%. In addition, the researchers found that, seen from the perspective of runoff allocation responsibilities per unit area, residential areas on urban land benefit from the relevant laws and regulations of the urban system, which gives a better flood reduction effect than residential land on rural land. For rural land, the development scale of residential land is generally small, which makes its flood reduction effect better than that of industrial land.
Agricultural land requires a large area, resulting in the lowest runoff share per unit area. From the planners' point of view, this study suggests that responsibility for sharing the runoff should also be assigned to the rural land around the city. Setting up rainwater storage facilities in the same way as on urban land, and taking stock of agricultural land resources to raise the field ridges for flood storage, can improve regional disaster reduction capacity and resilience.
Keywords: runoff allocation responsibilities, land use, flood mitigation, SWMM
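For intuition only, and not the study's SWMM simulation: even the simple Rational Method (Q = CiA) shows how a higher impervious fraction raises the runoff coefficient and hence the peak discharge. The intensity, area and coefficient values below are hypothetical.

```python
# Illustrative sketch (not the study's SWMM model): Rational Method
# peak discharge Q [m^3/s] = C * i [mm/h] * A [ha] / 360, where C is
# the area-weighted runoff coefficient. Raising the impervious
# fraction raises C and hence Q. All parameter values are hypothetical.

def runoff_coefficient(imperv_fraction, c_perv=0.2, c_imperv=0.9):
    """Area-weighted C for a mixed pervious/impervious catchment."""
    return c_perv + (c_imperv - c_perv) * imperv_fraction

def peak_discharge_m3s(c_runoff, intensity_mm_hr, area_ha):
    """Rational Method peak discharge in m^3/s."""
    return c_runoff * intensity_mm_hr * area_ha / 360.0

i25 = 80.0    # hypothetical 25-year, 1-hour rainfall intensity, mm/h
area = 50.0   # hypothetical catchment area, ha

q_rural = peak_discharge_m3s(runoff_coefficient(0.2), i25, area)  # 20% impervious
q_urban = peak_discharge_m3s(runoff_coefficient(0.8), i25, area)  # 80% impervious
print(f"increase after development: {(q_urban / q_rural - 1):.0%}")
```

On-site storage and raised field ridges act by intercepting part of this increment before it reaches the outlet, which is the mechanism behind the peak reductions reported above.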
Procedia PDF Downloads 104
1035 Allylation of Active Methylene Compounds with Cyclic Baylis-Hillman Alcohols: Why Is It Direct and Not Conjugate?
Authors: Karim Hrratha, Khaled Essalahb, Christophe Morellc, Henry Chermettec, Salima Boughdiria
Abstract:
Among the methods of carbon-carbon bond formation, the allylation of active methylene compounds with cyclic Baylis-Hillman (BH) alcohols is a reliable and widely used one. This reaction is a very attractive tool in the organic synthesis of biological and biodiesel compounds. Thus, in view of an insistent and peremptory request for an efficient and straightforward method for synthesizing the desired product, a thorough analysis of the various aspects of the reaction processes is an important task. The product afforded by the reaction of active methylene compounds with BH alcohols depends largely on the experimental conditions, notably on the catalyst properties. All experiments report that catalysis is needed for this reaction type because of the poor ability of the alcohol hydroxyl group to act as a leaving group. Among the catalysts, several transition-metal-based ones have been used, such as palladium in the presence of acid or base, and have been considered reliable methods. Furthermore, acid catalysts such as BF3.OEt2, BiX3 (X = Cl, Br, I, (OTf)3), InCl3, Yb(OTf)3, FeCl3, p-TsOH and H-montmorillonite have been employed to activate the C-C bond formation through the alkylation of active methylene compounds. Interestingly, a report of a smooth process showing the ability of 4-dimethylaminopyridine (DMAP) to catalyze the allylation reaction of active methylene compounds with a cyclic Baylis-Hillman (BH) alcohol appeared recently. However, the reaction mechanism remains ambiguous, since the C-allylation process leads to an unexpected product (noted P1), corresponding to direct allylation instead of conjugate allylation, which would involve the most electrophilic center according to the effect of the electron-withdrawing C=O group. The main objective of the present theoretical study is to better understand the role of the DMAP catalytic activity as well as the process leading to the end-product (P1) for the catalytic reaction of a cyclic BH alcohol with active methylene compounds.
For that purpose, we have carried out computations on a set of active methylene compounds varying by R1 and R2 toward the same alcohol, and we have attempted to rationalize the mechanisms using the acid-base approach and conceptual DFT tools such as the chemical potential, hardness, Fukui functions, electrophilicity index and dual descriptor, as these approaches have shown good prediction of reaction products. The present work is organized as follows: in the first part, some computational details are given, introducing the reactivity indices used in the present work; Section 3 is then dedicated to the discussion of the prediction of the selectivity and regioselectivity. The paper ends with some concluding remarks. In this work, we have shown, through the DFT method at the B3LYP/6-311++G(d,p) level of theory, that the allylation of active methylene compounds with the cyclic BH alcohol is governed by orbital control. Hence the end-product, denoted P1, is generated by direct allylation.
Keywords: DFT calculation, gas phase pKa, theoretical mechanism, orbital control, charge control, Fukui function, transition state
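The conceptual-DFT descriptors named above have simple finite-difference working equations; a sketch under one common convention, with illustrative input values rather than the paper's B3LYP results:

```python
# Finite-difference working equations for the global conceptual-DFT
# descriptors (one common convention): chemical potential mu = -(I + A)/2,
# hardness eta = I - A, electrophilicity omega = mu**2 / (2 * eta),
# where I and A are the vertical ionisation potential and electron
# affinity in eV. Sample values are illustrative, not the paper's data.

def descriptors(ionisation_eV, affinity_eV):
    mu = -(ionisation_eV + affinity_eV) / 2.0
    eta = ionisation_eV - affinity_eV
    omega = mu ** 2 / (2.0 * eta)
    return mu, eta, omega

mu, eta, omega = descriptors(9.0, 1.0)  # hypothetical I = 9 eV, A = 1 eV
print(mu, eta, omega)                   # -5.0 8.0 1.5625

# Condensed Fukui function for nucleophilic attack, from the electron
# populations q of the N- and (N+1)-electron systems on a given atom:
def fukui_plus(q_n_plus_1, q_n):
    return q_n_plus_1 - q_n
```

Comparing condensed Fukui (or dual-descriptor) values on the two candidate allylic carbons is what lets such a study distinguish charge-controlled from orbital-controlled attack.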
Procedia PDF Downloads 306
1034 Harnessing Emerging Creative Technology for Knowledge Discovery of Multiwavelength Datasets
Authors: Basiru Amuneni
Abstract:
Astronomy is one domain with a rise in data. Traditional tools for data management have been employed in the quest for knowledge discovery. However, these traditional tools become limited in the face of big data. One means of maximizing knowledge discovery for big data is the use of scientific visualisation. The aim of this work is to explore the possibilities offered by the emerging creative technologies of Virtual Reality (VR) systems and game engines to visualise multiwavelength datasets. Game engines are primarily used for developing video games; however, their advanced graphics can be exploited for scientific visualisation, which provides a means to graphically illustrate scientific data to ease human comprehension. Modern astronomy is now in the era of multiwavelength data, where a single galaxy, for example, is captured by telescopes several times and at different electromagnetic wavelengths to give a more comprehensive picture of its physical characteristics. Visualising this in an immersive environment would be more intuitive and natural for an observer. This work presents a standalone VR application that accesses galaxy FITS files. The application was built using the Unity game engine for the graphics underpinning and the OpenXR API for the VR infrastructure. The work used a methodology known as Design Science Research (DSR), which entails the act of ‘using design as a research method or technique’. The key stages of the galaxy modelling pipeline are FITS data preparation, galaxy modelling, Unity 3D visualisation and VR display. The FITS data format cannot be read by the Unity game engine directly, so a DLL (CSHARPFITS) which provides native support for reading and writing FITS files was used. The galaxy modeller uses an approach that integrates cleaned FITS image pixels into the graphics pipeline of the Unity game engine.
The cleaned FITS images are then input to the galaxy modeller pipeline phase, which has a pre-processing script that extracts pixel values and galaxy world positions and colour-maps the FITS image pixels. The user can visualise galaxy images in different light bands, control the blend of the image with similar images from different sources, or fuse images for a holistic view. The framework will allow users to build tools to realise complex workflows for public outreach and possibly scientific work, with increased scalability, near-real-time interactivity and ease of access. The application is presented in an immersive environment and can use any commercially available headset built on the OpenXR API. The user can select galaxies in the scene, teleport to a galaxy, pan, zoom in/out, and change the colour gradients of the galaxy. The findings and design lessons learnt in the implementation of different use cases will contribute to the development and design of game-based visualisation tools in immersive environments by enabling informed decisions to be made.
Keywords: astronomy, visualisation, multiwavelength datasets, virtual reality
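A conceptual NumPy sketch of the colour-mapping step of the pre-processing script described above; the actual pipeline performs this inside Unity via the CSHARPFITS DLL, and the black-to-red-to-yellow ramp used here is an arbitrary stand-in for whatever gradient the application applies.

```python
import numpy as np

# Conceptual sketch of the pre-processing step: normalise a 2-D array
# of FITS pixel intensities to [0, 1], then map each pixel to an RGB
# colour (simple black -> red -> yellow ramp). Illustrative only; the
# real pipeline does this inside Unity via the CSHARPFITS DLL.

def normalise(pixels):
    """Scale raw intensities to [0, 1], guarding against flat images."""
    lo, hi = float(pixels.min()), float(pixels.max())
    if hi == lo:
        return np.zeros_like(pixels, dtype=float)
    return (pixels - lo) / (hi - lo)

def colour_map(pixels):
    """Map normalised intensity to an (H, W, 3) RGB array."""
    t = normalise(np.asarray(pixels, dtype=float))
    r = np.clip(2 * t, 0, 1)        # red channel saturates first
    g = np.clip(2 * t - 1, 0, 1)    # green joins in the bright half
    b = np.zeros_like(t)            # no blue in this ramp
    return np.stack([r, g, b], axis=-1)

rgb = colour_map(np.array([[0.0, 50.0], [100.0, 200.0]]))
print(rgb.shape)  # (2, 2, 3)
```

In the Unity pipeline the resulting per-pixel colours would then be written into a texture or per-vertex colours for the galaxy model.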
Procedia PDF Downloads 91
1033 Prediction of Cardiovascular Markers Associated With Aromatase Inhibitors Side Effects Among Breast Cancer Women in Africa
Authors: Jean Paul M. Milambo
Abstract:
Purpose: Aromatase inhibitors (AIs) are indicated in the treatment of hormone-receptive breast cancer in postmenopausal women in various settings. Studies have shown cardiovascular events in some developed countries. To date, data are sparse for evidence-based recommendations in African clinical settings due to the lack of cancer registries, capacity building and surveillance systems. Therefore, this study was conducted to assess the feasibility of HyBeacon® probe genotyping adjunctive to standard care for the timely prediction and diagnosis of AI-associated adverse events in breast cancer survivors in Africa. Methods: A cross-sectional study was conducted to assess the knowledge of POCT across six African countries, with participants surveyed online and contacted telephonically. The incremental cost-effectiveness ratio (ICER) was calculated using a diagnostic accuracy study, based on mathematical modelling. Results: One hundred twenty-six participants were considered for analysis (mean age = 61 years; SD = 7.11 years; 95% CI: 60-62 years). Comparison of genotyping from HyBeacon® probe technology to Sanger sequencing showed that sensitivity was reported at 99% (95% CI: 94.55% to 99.97%), specificity at 89.44% (95% CI: 87.25% to 91.38%), PPV at 51% (95% CI: 43.77% to 58.26%), and NPV at 99.88% (95% CI: 99.31% to 100.00%). Based on the mathematical model, the assumptions revealed that the ICER was R7 044.55. Conclusion: POCT using HyBeacon® probe genotyping for AI-associated adverse events may be cost-effective in many African clinical settings. Integration of preventive measures for early detection and prevention, guided by breast cancer subtype diagnosis with specific clinical, biomedical and genetic screenings, may improve cancer survivorship.
The feasibility of POCT was demonstrated, but implementation could be achieved by improving the integration of POCT within primary health care and referral cancer hospitals, with capacity building activities at different levels of the health system. This finding is pertinent for a future envisioned implementation and global scale-up of POCT-based initiatives as part of risk communication strategies with clear management pathways.
Keywords: breast cancer, diagnosis, point of care, South Africa, aromatase inhibitors
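The reported accuracy metrics all derive from a 2x2 table of POCT genotype calls against the Sanger-sequencing reference; a sketch with hypothetical counts chosen to land near the reported values (the abstract does not give the raw counts):

```python
# Sensitivity, specificity, PPV and NPV from a 2x2 confusion matrix of
# index-test calls against a reference standard. Counts are hypothetical,
# chosen only so the outputs land near the values reported above.

def diagnostic_metrics(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)   # true positives among all carriers
    specificity = tn / (tn + fp)   # true negatives among all non-carriers
    ppv = tp / (tp + fp)           # positive predictive value
    npv = tn / (tn + fn)           # negative predictive value
    return sensitivity, specificity, ppv, npv

sens, spec, ppv, npv = diagnostic_metrics(tp=66, fp=63, fn=1, tn=534)
print(f"sens={sens:.1%} spec={spec:.1%} ppv={ppv:.1%} npv={npv:.1%}")
```

Note how a test with high sensitivity and specificity can still show a modest PPV (~51%) when the genotype of interest is uncommon in the tested population, which is exactly the pattern in the reported results.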
Procedia PDF Downloads 78
1032 Molecular Diagnosis of a Virus Associated with Red Tip Disease and Its Detection by Non-Destructive Sensor in Pineapple (Ananas comosus)
Authors: A. K. Faizah, G. Vadamalai, S. K. Balasundram, W. L. Lim
Abstract:
Pineapple (Ananas comosus) is a common crop in tropical and subtropical areas of the world. Malaysia once ranked among the top three pineapple producers in the world in the 1960s and early 1970s, after Hawaii and Brazil. Moreover, the government recognized the pineapple crop as one of the priority commodities to be developed for the domestic and international markets in the National Agriculture Policy. However, the pineapple industry in Malaysia still faces numerous challenges, one of which is the management of diseases and pests. Red tip disease of pineapple was first recognized about 20 years ago in a commercial pineapple stand located in Simpang Renggam, Johor, Peninsular Malaysia. Since its discovery, its causal agent has not been confirmed. The epidemiology of red tip disease is still not fully understood. Nevertheless, the disease symptoms and the spread within the field seem to point toward a viral infection. A bioassay test on nucleic acid extracted from red tip-affected pineapple was done on Nicotiana tabacum cv. Coker by rubbing the extracted sap. Localised lesions were observed 3 weeks after inoculation. Negative staining of the freshly inoculated Nicotiana tabacum cv. Coker showed the presence of membrane-bound spherical particles with an average diameter of 94.25 nm under the transmission electron microscope. The shape and size of the particles were similar to those of a tospovirus. SDS-PAGE analysis of partially purified virions from inoculated N. tabacum produced a strong and a faint protein band with molecular masses of approximately 29 kDa and 55 kDa. Partially purified virions from symptomatic pineapple leaves from the field showed bands with molecular masses of approximately 29 kDa, 39 kDa and 55 kDa. These bands may indicate the nucleocapsid protein identity of a tospovirus.
Furthermore, a handheld sensor, the GreenSeeker, was used to detect red tip symptoms on pineapple non-destructively based on spectral reflectance, measured as the Normalized Difference Vegetation Index (NDVI). Red tip severity was estimated and correlated with NDVI. Linear regression models were calibrated and tested in order to estimate red tip disease severity based on NDVI. Results showed a strong positive relationship between red tip disease severity and NDVI (r = 0.84).
Keywords: pineapple, diagnosis, virus, NDVI
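A sketch of the two computational steps involved: NDVI from red and near-infrared reflectance, and a severity-NDVI correlation/regression. The reflectance and severity data below are synthetic stand-ins, not the field measurements.

```python
import numpy as np

# NDVI = (NIR - red) / (NIR + red), the quantity the GreenSeeker reports,
# followed by the correlation/regression step described above. The data
# here are synthetic and only mimic a strong positive association.

def ndvi(nir, red):
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red)

# Synthetic field data: severity made to track NDVI positively, as in
# the reported result (r = 0.84).
rng = np.random.default_rng(1)
ndvi_vals = rng.uniform(0.3, 0.9, 100)
severity = 1.0 + 9.0 * ndvi_vals + rng.normal(0.0, 0.8, 100)

r = np.corrcoef(ndvi_vals, severity)[0, 1]          # Pearson correlation
slope, intercept = np.polyfit(ndvi_vals, severity, 1)  # calibration line
print(round(r, 2))
```

The fitted line is the calibration model; applying it to a new NDVI reading gives the non-destructive severity estimate.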
Procedia PDF Downloads 791
1031 Distribution of Dynamical and Energy Parameters in Axisymmetric Air Plasma Jet
Authors: Vitas Valinčius, Rolandas Uscila, Viktorija Grigaitienė, Žydrūnas Kavaliauskas, Romualdas Kėželis
Abstract:
Determination of the integral dynamical and energy characteristics of high-temperature gas flows is an important task of gas dynamics for hazardous-substance destruction systems, and such characteristics are always necessary for the investigation of high-temperature turbulent flow dynamics and heat and mass transfer. It is well known that the distribution of dynamical and thermal characteristics of high-temperature flows and jets is strongly related to the variation of heat flux over the heated area. As numerous experiments and theoretical considerations show, the fundamental properties of an isothermal jet are well investigated. However, the establishment of regularities under high-temperature conditions meets certain specific behaviour compared with moderate-temperature jets and flows. Their structures have not been thoroughly studied yet, especially in a plasma ambient. It is well known that the distributions of local jet parameters in high-temperature and isothermal jets and flows may differ significantly. A high-temperature axisymmetric air jet generated by an atmospheric-pressure DC arc plasma torch was investigated employing an enthalpy probe 3.8·10⁻³ m in diameter. Distributions of velocity and temperature were established in different cross-sections of the plasma jet outflowing from a 42·10⁻³ m diameter pipe at a mean velocity of 700 m·s⁻¹ and an averaged temperature of 4000 K. It has been found that gas heating only fractionally influences the shape and values of the dimensionless velocity and temperature profiles in the main zone of the plasma jet but has a significant influence in its initial zone. The width of the initial zone of the plasma jet has been found to be smaller than in the case of isothermal flow. The relation between the dynamical thickness and the turbulent Prandtl number has been established along the jet axis. Experimental results were generalized in dimensionless form.
The presence of convective heating shows that heat transfer in a moving high-temperature jet also occurs through heat carried by the moving particles of the jet. In this case, the intensity of convective heat transfer is proportional to the instantaneous value of the flow velocity at a given point in space. Consequently, the configuration of the temperature field in moving jets and flows essentially depends on the configuration of the velocity field.
Keywords: plasma jet, plasma torch, heat transfer, enthalpy probe, turbulent Prandtl number
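As a rough numerical illustration of that proportionality (a sketch under simplifying assumptions, not the authors' model), the heat convected per unit area by the moving gas can be approximated as q = ρ·c_p·u·(T_jet − T_ambient), so the convected flux scales linearly with the local velocity:

```python
def convective_flux(rho, cp, u, t_jet, t_amb):
    """Heat carried per unit area by the moving gas, in W/m^2.
    Simplified model: flux is proportional to the local velocity u."""
    return rho * cp * u * (t_jet - t_amb)

# Illustrative values only; real air properties at ~4000 K vary strongly
# with temperature and these numbers are placeholders.
rho = 0.088    # kg/m^3, assumed low density of very hot air
cp = 1300.0    # J/(kg*K), assumed effective specific heat

q1 = convective_flux(rho, cp, 700.0, 4000.0, 300.0)
q2 = convective_flux(rho, cp, 1400.0, 4000.0, 300.0)
# Doubling the local velocity doubles the convected flux in this model,
# which is why the temperature field follows the velocity field.
```

This is only meant to make the stated velocity-temperature coupling concrete; the study itself obtains the actual fields from enthalpy-probe measurements.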
Procedia PDF Downloads 182
1030 Incidence and Predictors of Mortality Among HIV Positive Children on ART in Public Hospitals of Harer Town, Enrolled From 2011 to 2021
Authors: Getahun Nigusie
Abstract:
Background: Antiretroviral treatment reduces HIV-related morbidity and prolongs the survival of patients; however, there is a lack of up-to-date information concerning the long-term effect of treatment on the survival of HIV-positive children, especially in the study area. Objective: To assess the incidence and predictors of mortality among HIV-positive children on ART in public hospitals of Harer town who were enrolled from 2011 to 2021. Methodology: An institution-based retrospective cohort study was conducted among 429 HIV-positive children enrolled in the ART clinic from January 1st, 2011 to December 30th, 2021. Data were collected from medical cards using a data extraction form. Descriptive analyses were used to summarize the results, and a life table was used to estimate the survival probability at specific points in time after the introduction of ART. The Kaplan-Meier survival curve together with the log-rank test was used to compare survival between different categories of covariates, and a multivariable Cox proportional hazards regression model was used to estimate adjusted hazard ratios. Variables with p-values ≤ 0.25 in the bivariable analysis were candidates for the multivariable analysis. Finally, variables with p-values < 0.05 were considered significant. Results: The study participants were followed for a total of 2549.6 child-years (30,596 child-months), with an overall mortality rate of 1.5 (95% CI: 1.1, 2.04) per 100 child-years. The median survival time was 112 months (95% CI: 101-117). There were 38 children with unknown outcomes, 39 deaths, and 55 children transferred out to other facilities. The overall survival at 6, 12, 24, and 48 months was 98%, 96%, 95%, and 94%, respectively.
Being in WHO clinical stage four (AHR = 4.55, 95% CI: 1.36, 15.24), having anemia (AHR = 2.56, 95% CI: 1.11, 5.93), a low baseline absolute CD4 count (AHR = 2.95, 95% CI: 1.22, 7.12), stunting (AHR = 4.1, 95% CI: 1.11, 15.42), wasting (AHR = 4.93, 95% CI: 1.31, 18.76), poor adherence to treatment (AHR = 3.37, 95% CI: 1.25, 9.11), having TB infection at enrollment (AHR = 3.26, 95% CI: 1.25, 8.49), and no history of regimen change (AHR = 7.1, 95% CI: 2.74, 18.24) were independent predictors of death. Conclusion: More than half of the deaths occurred within 2 years. Prevalent tuberculosis, anemia, wasting and stunting nutritional status, socioeconomic factors, and baseline opportunistic infection were independent predictors of death. Increased early screening and management of these predictors are required.
Keywords: human immunodeficiency virus-positive children, anti-retroviral therapy, survival, Ethiopia
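The reported mortality rate follows directly from the abstract's own figures (39 deaths over 2549.6 child-years of follow-up). A quick check, with an approximate confidence interval for the rate (a textbook log-normal Poisson approximation, not necessarily the authors' exact method):

```python
import math

deaths = 39
child_years = 2549.6

# Incidence rate per 100 child-years of follow-up.
rate = deaths / child_years * 100  # ~1.53, reported as 1.5

# Approximate 95% CI assuming the death count is Poisson-distributed:
# rate * exp(+/- 1.96 / sqrt(deaths)).
se_log = 1 / math.sqrt(deaths)
ci_low = rate * math.exp(-1.96 * se_log)
ci_high = rate * math.exp(1.96 * se_log)
```

The resulting interval is close to the reported 95% CI of 1.1 to 2.04 per 100 child-years, which is consistent with the abstract's person-time figures.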
Procedia PDF Downloads 22
1029 Hearing Threshold Levels among Steel Industry Workers in Samut Prakan Province, Thailand
Authors: Petcharat Kerdonfag, Surasak Taneepanichskul, Winai Wadwongtham
Abstract:
Industrial noise is usually considered a major environmental health and safety concern because exposure to it can cause serious permanent hearing damage. Despite strict hearing-protection standards and extensive campaigns to raise public health awareness among industrial workers in Thailand, hazardous noise-induced hearing loss has remained a massive obstacle to workers' health. The aims of the study were to explore and specify the hearing threshold levels among steel industry workers assigned to higher-noise work zones and to examine the relationships between hearing loss and workers' age and length of employment in Samut Prakan province, Thailand. A cross-sectional study design was used. Ninety-three steel industry workers from two factories, selected by simple random sampling from the designated higher-noise zone (> 85 dBA), with more than 1 year of employment and available to participate, were assessed by audiometric screening at the regional Samut Prakan hospital. Screening data were collected from October to December 2016 by an occupational medicine physician and a qualified occupational nurse. All participants were examined by the same examiners for validity. Audiometric testing was performed at least 14 hours after the last noise exposure in the workplace. Workers' age and length of employment were gathered using a purpose-developed occupational record form. Results: Workers' ages ranged from 23 to 59 years (mean = 41.67, SD = 9.69), and length of employment ranged from 1 to 39 years (mean = 13.99, SD = 9.88). Fifty-three (60.0%) of the participants had been exposed to hazardous workplace noise for more than 10 years, twenty-three (24.7%) for 5 years or less, and seventeen (18.3%) for 5 to 10 years.
Using a cut point of less than or equal to 25 dB for normal hearing thresholds, the mean hearing thresholds for participants at 4, 6, and 8 kHz were 31.34, 29.62, and 25.64 dB for the right ear and 40.15, 32.20, and 25.48 dB for the left ear, respectively. The older the workers in the hazardous-noise work zone, the higher their hearing thresholds at 4, 6, and 8 kHz for the right ear (p = .012, p = .026, and p = .024, respectively) and at 4 kHz for the left ear (p = .009). Conclusion: Participants' age in the hazardous-noise work zone was significantly associated with hearing loss at different levels, while length of employment was not. Thus, hearing threshold levels among industrial workers should be assessed regularly, and hearing protection is needed from the very beginning of employment.
Keywords: hearing threshold levels, hazard of noise, hearing loss, audiometric testing
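Applying the abstract's ≤ 25 dB normal-hearing cut point to the reported group means is mechanical; the small sketch below flags which ear/frequency means fall outside the normal range (using only the means reported above):

```python
# Mean hearing thresholds (dB) reported in the abstract, keyed by
# (ear, frequency in Hz).
thresholds = {
    ("right", 4000): 31.34, ("right", 6000): 29.62, ("right", 8000): 25.64,
    ("left", 4000): 40.15, ("left", 6000): 32.20, ("left", 8000): 25.48,
}

NORMAL_CUTOFF_DB = 25.0  # thresholds <= 25 dB are considered normal

def exceeds_cutoff(db):
    """True when a mean threshold is worse than the normal-hearing cut point."""
    return db > NORMAL_CUTOFF_DB

# Every reported mean exceeds the cut point, i.e. average hearing in these
# bands is outside the normal range for this worker group, with the left
# ear at 4 kHz the most affected.
flagged = {key: exceeds_cutoff(db) for key, db in thresholds.items()}
worst = max(thresholds, key=thresholds.get)
```

In individual screening the same comparison would be made per worker per frequency rather than on group means, but the cut-point logic is identical.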
Procedia PDF Downloads 227
1028 Corporate Social Responsibility and Corporate Reputation: A Bibliometric Analysis
Authors: Songdi Li, Louise Spry, Tony Woodall
Abstract:
Nowadays, corporate social responsibility (CSR) has become a buzzword, and more and more academics are devoting effort to CSR studies. It is believed that CSR can influence corporate reputation (CR), and the prevailing favourable view is that CSR leads to a positive CR. Specifically, CSR-related activities in the reputational context have been regarded as routes to excellent financial performance, value creation, and so on. It is also argued that CSR and CR are two sides of the same coin; hence, to some extent, doing CSR is equivalent to establishing a good reputation. Still, there is no consensus on the CSR-CR relationship in the literature; thus, a systematic literature review is highly needed. This research conducts a systematic literature review with both bibliometric and content analysis. Data were selected from English-language academic journal articles only, and keyword combinations were applied to identify relevant sources. Data from Scopus and Web of Science (WoS) were gathered for bibliometric analysis. Scopus search results were saved in RIS and CSV formats, and WoS data were saved in TXT and CSV formats, in order to process the data in the Bibexcel software for further analysis, later visualised with the software VOSviewer. Content analysis was also applied to analyse the data clusters and the key articles. On the topic of CSR-CR, this literature review with bibliometric analysis has made four achievements. First, this paper develops a systematic study that quantitatively depicts the knowledge structure of CSR and CR by identifying terms closely related to CSR-CR (such as 'corporate governance') and clustering the subtopics that emerged in co-citation analysis. Second, content analysis is performed to gain insight into the findings of the bibliometric analysis in the discussion section.
It also highlights some insightful implications for the future research agenda: for example, a psychological link between CSR and CR is identified from the results, and emerging economies and qualitative research methods are new elements in the CSR-CR big picture. Third, a multidisciplinary perspective runs through the whole bibliometric analysis mapping and the co-word and co-citation analyses; hence, this work builds a structure of interdisciplinary perspective that could lead to an integrated conceptual framework in the future. Finally, Scopus and WoS are compared and contrasted in this paper; as a result, Scopus, which offers deeper and more comprehensive data, is suggested as a tool for future bibliometric analysis studies. Overall, this paper has fulfilled its initial purposes and contributed to the literature. To the authors' best knowledge, this is the first literature review of CSR-CR research to apply both bibliometric analysis and content analysis; therefore, it achieves methodological originality. This dual approach brings the advantage of a comprehensive and semantic exploration of the CSR-CR area in a scientific and realistic manner. Admittedly, the work may contain subjective bias in the selection of search terms and papers; triangulation could reduce this bias to some degree.
Keywords: corporate social responsibility, corporate reputation, bibliometric analysis, software program
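The co-citation analysis mentioned above rests on a simple count: two references are co-cited whenever they appear together in one article's reference list, and the resulting pair counts become the edges of the network that tools like Bibexcel and VOSviewer cluster. A minimal sketch of that counting step, with made-up reference labels:

```python
from collections import Counter
from itertools import combinations

# Hypothetical reference lists of four articles; the labels are
# placeholders, not the study's actual cited works.
reference_lists = [
    ["Carroll1979", "Fombrun1990", "Porter2006"],
    ["Carroll1979", "Fombrun1990"],
    ["Fombrun1990", "Porter2006", "Freeman1984"],
    ["Carroll1979", "Freeman1984"],
]

# Count every unordered pair of references that share a reference list.
co_citations = Counter()
for refs in reference_lists:
    for a, b in combinations(sorted(set(refs)), 2):
        co_citations[(a, b)] += 1

# The highest-count pairs are the strongest edges of the co-citation
# network; clustering those edges yields the subtopic map.
strongest = co_citations.most_common(2)
```

Co-word analysis works the same way, with keyword pairs counted instead of reference pairs.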
Procedia PDF Downloads 128
1027 Development of a Rice Fortification Technique Using Vacuum Assisted Rapid Diffusion for Low Cost Encapsulation of Fe and Zn
Authors: R. A. C. H. Seneviratne, M. Gunawardana, R. P. N. P. Rajapakse
Abstract:
To address micronutrient deficiencies in the Asian region, the World Food Program, in its current mandate, highlights the requirement for efficient fortification of rice with micronutrients under the program 'Scaling-up Rice Fortification in Asia'. Current industrial methods of rice fortification with micronutrients are not promising, due to poor permeation or retention of fortificants. This study was carried out to develop a method to improve the fortification of micronutrients in rice by removing the air barriers to the diffusion of micronutrients through the husk. For this purpose, the soaking stage of paddy was coupled with vacuum (-0.6 bar) for different time periods. For both long- and short-grain varieties of paddy (BG 352 and BG 358, respectively), water uptake during hot soaking (70 °C) under vacuum (28.5 and 26.15%, respectively) was significantly (P < 0.05) higher than under non-vacuum conditions (25.24 and 25.45%, respectively), demonstrating the effectiveness of water diffusion into the rice grains through the cleared pores under negative pressure. To fortify the selected micronutrients (iron and zinc), paddy was vacuum-soaked in Fe2+ or Zn2+ solutions (500 ppm) separately for one hour, and soaking was continued for another 3.5 h without vacuum. Significantly (P < 0.05) higher amounts of Fe2+ and Zn2+ were observed throughout the soaking period in both short- and long-grain varieties compared to rice treated without vacuum. To achieve the World Food Program recommended limits for fortified iron (40-48 mg/kg) and zinc (60-72 mg/kg) in rice, soaking was done with different concentrations of Fe2+ or Zn2+ for varying time periods.
For both iron and zinc fortification, hot soaking (70 °C) in 400 ppm solutions under vacuum (-0.6 bar) during the first hour, followed by 2.5 h at atmospheric pressure, exhibited the optimum fortification (Fe2+: 46.59±0.37 ppm and Zn2+: 67.24±1.36 ppm), significantly (P < 0.05) greater than the controls (Fe2+: 38.84±0.62 ppm and Zn2+: 52.55±0.55 ppm). This finding was further confirmed by XRF images, which clearly showed greater fixation of Fe2+ and Zn2+ in the rice grains under vacuum treatment. Moreover, there were no significant (P > 0.05) differences in either Fe2+ or Zn2+ content of the fortified rice even after polishing and washing, confirming their greater retention. A seven-point hedonic scale showed that the overall acceptability of both iron- and zinc-fortified rice was significantly (P < 0.05) higher than that of parboiled rice without fortificants. With all the drawbacks eliminated, the cost per kilogram will be less than US$ 1 for both iron- and zinc-fortified rice. The new rice fortification method developed in this research can be claimed as the best in comparison with other rice fortification methods currently deployed.
Keywords: fortification, vacuum assisted diffusion, micronutrients, parboiling
Procedia PDF Downloads 253
1026 Variability and Stability of Bread and Durum Wheat for Phytic Acid Content
Authors: Gordana Branković, Vesna Dragičević, Dejan Dodig, Desimir Knežević, Srbislav Denčić, Gordana Šurlan-Momirović
Abstract:
Phytic acid is a major pool in the flux of phosphorus through agroecosystems and represents a sum equivalent to more than 50% of all phosphorus fertilizer used annually. Nutrition rich in phytic acid can substantially decrease the absorption of micronutrients such as calcium, zinc, iron, manganese, and copper, because phytate salts are excreted by humans and non-ruminant animals such as poultry, swine, and fish, which have very scarce phytase activity and consequently little ability to digest and utilize phytic acid; thus, phytic acid-derived phosphorus in animal waste contributes to water pollution. The tested accessions consisted of 15 genotypes of bread wheat (Triticum aestivum L. ssp. vulgare) and 15 genotypes of durum wheat (Triticum durum Desf.). The trials were sown at three test sites in Serbia: Rimski Šančevi (RS) (45º19´51´´N; 19º50´59´´E), Zemun Polje (ZP) (44º52´N; 20º19´E), and Padinska Skela (PS) (44º57´N; 20º26´E) during the two growing seasons 2010-2011 and 2011-2012. The experimental design was a randomized complete block design with four replications. The elementary plot consisted of 3 internal rows with an area of 0.6 m² (3 × 0.2 m × 1 m). Grains were ground with a Laboratory Mill 120 Perten ('Perten', Sweden) (particle size < 500 μm), and the flour was used for the analysis. Phytic acid content of the grain was determined spectrophotometrically with a Shimadzu UV-1601 spectrophotometer (Shimadzu Corporation, Japan). The objectives of this study were to determine: i) the variability and stability of phytic acid content among the selected genotypes of bread and durum wheat, ii) the predominant source of variation, among genotype (G), environment (E), and genotype × environment interaction (GEI), in the multi-environment trial, and iii) the influence of climatic variables on the GEI for phytic acid content.
Based on the analysis of variance, the variation of phytic acid content was predominantly influenced by the environment in durum wheat, while the GEI prevailed for the variation of phytic acid content in bread wheat. Phytic acid content expressed on a dry mass basis was in the range 14.21-17.86 mg g⁻¹, with an average of 16.05 mg g⁻¹, for bread wheat and 14.63-16.78 mg g⁻¹, with an average of 15.91 mg g⁻¹, for durum wheat. The average-environment coordination view of the genotype plus genotype × environment (GGE) biplot was used to select the most desirable genotypes for breeding for low phytic acid content, in the sense of good stability combined with a lower level of phytic acid. The most desirable genotypes of bread and durum wheat were Apache and 37EDUYT /07 No. 7849, respectively. Models of climatic factors explained the GEI for phytic acid content to the highest degree (> 91%) and included relative humidity in June, sunshine hours in April, mean temperature in April, and winter moisture reserves for bread wheat genotypes, as well as precipitation in June and April, maximum temperature in April, and mean temperature in June for durum wheat genotypes.
Keywords: genotype × environment interaction, phytic acid, stability, variability
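The G/E/GEI partition behind the analysis of variance can be sketched from a table of genotype × environment means: the total sum of squares of the cell means splits into a genotype part, an environment part, and an interaction remainder. The sketch below uses hypothetical phytic acid means (the study used full replicated data and formal F-tests):

```python
# Hypothetical phytic acid means (mg/g) for 3 genotypes x 3 environments;
# values are placeholders shaped like the durum case (environment dominant).
means = [
    [15.2, 16.0, 16.9],   # genotype 1 across environments
    [14.8, 15.5, 16.1],   # genotype 2
    [15.9, 16.4, 17.6],   # genotype 3
]

g = len(means)           # number of genotypes
e = len(means[0])        # number of environments
grand = sum(sum(row) for row in means) / (g * e)
g_means = [sum(row) / e for row in means]
e_means = [sum(means[i][j] for i in range(g)) / g for j in range(e)]

# Standard two-way decomposition of the cell means:
# SS_total = SS_G + SS_E + SS_GEI.
ss_g = e * sum((m - grand) ** 2 for m in g_means)
ss_e = g * sum((m - grand) ** 2 for m in e_means)
ss_total = sum((means[i][j] - grand) ** 2
               for i in range(g) for j in range(e))
ss_gei = ss_total - ss_g - ss_e
```

Comparing the relative sizes of ss_g, ss_e, and ss_gei is what identifies the predominant source of variation; the G and GEI parts together are what a GGE biplot displays.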
Procedia PDF Downloads 394
1025 Implementation of International Standards in the Field of Higher Secondary Education in Kerala
Authors: Bernard Morais Joosa
Abstract:
Kerala, the southern state of India, is known for its accomplishments in universal education and enrollment. Through this mission, the Government proposes comprehensive educational reforms, including upgrading 1000 Government schools to international standards during the first phase. The idea is not only to improve infrastructural facilities but also to reform the teaching and learning process for present-day needs by introducing ICT-enabled learning and providing smart classrooms. There will be a focus on creating educational programmes that are useful for differently abled students. The mission is also meant to reinforce the teaching-learning process by providing ample opportunities for each student to construct their own knowledge using modern technology tools. It will redefine the existing classroom learning process, coordinate resource mobilization efforts, and develop the 'Janakeeya Vidyabhyasa Mathruka'. Special packages to support schools which have been in existence for over 100 years will also be attempted. The implementation will enlist the full involvement and partnership of the Parent Teacher Association. Kerala was the first state in the country to attain 100 percent literacy, more than two and a half decades ago. Since then, the State has not rested on its laurels; it has moved forward in leaps and bounds, conquering targets that no other State could achieve. Now the Government of Kerala is taking off towards the new goal of comprehensive educational reform, focusing on the betterment of educational surroundings, the use of technology in education, and the renewal of learning methods, with 1000 schools to be uplifted as Smart Schools. The plan is to upgrade 1000 schools to international standards, turn classrooms from standard 9 to 12 in high schools and higher secondary schools into high-tech classrooms, and provide a special unique package for the renovation of schools which have completed 50 and 100 years.
The Government intends to focus on developing standards one to eight in tune with the times by engaging teachers, parents, and alumni to recapture the relevance of public schools. English learning will be encouraged in schools. The idea is not only to improve infrastructure facilities but also to reform the curriculum for present-day needs. In keeping with the differently-abled-friendly approach of the Government, there will be a focus on creating educational programmes useful for differently abled students, and on addressing the infrastructural deficiencies faced by such schools. There will be special emphasis on ensuring internet connectivity to promote an IT-friendly environment. A task force and a full-time chief executive will be in charge of managing the day-to-day affairs of the mission. The Secretary of the Public Education Department will serve as the Mission Secretary and the Chairperson of the Task Force. As the Task Force will stress teacher training and the use of information technology, experts in the field, as well as the Directors of SCERT, IT School, SSA, and RMSA, will also be part of it.
Keywords: educational standards, methodology, pedagogy, technology
Procedia PDF Downloads 133
1024 35 MHz Coherent Plane Wave Compounding High Frequency Ultrasound Imaging
Authors: Chih-Chung Huang, Po-Hsun Peng
Abstract:
Ultrasound transient elastography has become a valuable tool for many clinical diagnoses, such as liver diseases and breast cancer. Pathological tissue can be distinguished by elastography because its stiffness differs from that of the surrounding normal tissue. An ultrafast frame rate of ultrasound imaging is needed for the transient elastography modality. However, elastography obtained with an ultrafast system suffers from low resolution, which affects the robustness of the transient elastography. To overcome these problems, a coherent plane wave compounding technique has been proposed for conventional ultrasound systems operating at around 3-15 MHz. The purpose of this study is to develop a novel beamforming technique for high frequency ultrasound coherent plane-wave compounding imaging; the simulated results will provide the standards for hardware development. Plane-wave compounding imaging produces a series of low-resolution images by firing all elements of an array transducer in one shot at different inclination angles, receiving the echoes with conventional beamforming, and compounding the images coherently. Simulations of plane-wave compounding images and focused-transmit images were performed using Field II. All images were produced from point spread functions (PSFs) and cyst phantoms with a 64-element linear array working at a 35 MHz center frequency, 55% bandwidth, and a pitch of 0.05 mm. The F-number was 1.55 in all simulations. PSFs and cyst phantoms were simulated using single-angle, 17-angle, and 43-angle plane wave transmission (successive plane waves separated by 0.75 degrees) as well as focused transmission. The resolution and contrast of the image improved with the number of plane wave firing angles. The lateral resolutions of the different methods were measured by the -10 dB lateral beam width.
Comparing the plane-wave compounding image and the focused-transmit image, both exhibited the same lateral resolution of 70 μm when 37 angles were compounded, and the lateral resolution reached 55 μm when 47 angles were compounded. All the results show the potential of using high-frequency plane-wave compound imaging for characterising the elastic properties of microstructural tissues, such as the eye, skin, and vessel walls, in the future.
Keywords: plane wave imaging, high frequency ultrasound, elastography, beamforming
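The coherent compounding step described above is a complex (pre-envelope) summation of the per-angle low-resolution images, and coherence is what buys the resolution and contrast gains. The toy sketch below contrasts coherent with incoherent (post-envelope) compounding for a single pixel's per-angle samples; the numbers are synthetic, not Field II output:

```python
import cmath
import math

def compound_coherent(samples):
    """Sum the complex (RF/IQ) per-angle samples first, then take the envelope."""
    return abs(sum(samples))

def compound_incoherent(samples):
    """Take each angle's envelope first, then sum the magnitudes."""
    return sum(abs(s) for s in samples)

N_ANGLES = 17  # one of the angle counts used in the simulations

# Pixel on a true scatterer: after beamforming delays, the echoes from all
# angles arrive in phase, so coherent summation preserves the full amplitude.
on_target = [cmath.rect(1.0, 0.0) for _ in range(N_ANGLES)]

# Pixel dominated by off-axis clutter: phases are spread around the circle,
# so coherent summation cancels the energy while incoherent summation does not.
clutter = [cmath.rect(1.0, 2 * math.pi * k / N_ANGLES) for k in range(N_ANGLES)]

coh_signal = compound_coherent(on_target)   # full amplitude retained
coh_clutter = compound_coherent(clutter)    # near zero: phases cancel
inc_clutter = compound_incoherent(clutter)  # no cancellation at all
```

This phase cancellation of incoherent clutter is why the simulated contrast improves with the number of compounded angles; a real implementation applies per-angle transmit and receive delays before this summation.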
Procedia PDF Downloads 539