Search results for: early order commitment
1069 Antibacterial Effects of Zinc Oxide Nanoparticles as Alternative Therapy on Drug-Resistant Group B Streptococcus Strains Isolated from Pregnant Women
Authors: Leila Fozouni, Anahita Mazandarani
Abstract:
Background: Maternal infections are the most common cause of infections in infants, and the level and severity of infection depend strongly on the degree of bacterial colonization in the mother; thus, the occurrence of aggressive disease is not unexpected in mothers with very high colonization. Group B Streptococcus is part of the normal flora of the gastrointestinal and genital tracts in women and is the leading cause of septicemia and meningitis in newborns. Today, zinc oxide nanoparticles are regarded as among the most commonly used and safest nanoparticles for combating Gram-positive and Gram-negative bacteria. This study aims to determine the antibacterial effects of zinc oxide on the growth of drug-resistant group B Streptococcus strains isolated from pregnant women. Materials and Methods: This cross-sectional study was conducted on 150 pregnant women at 28–37 weeks of gestation admitted to seven hospitals and maternity wards in Golestan province, northeast of Iran. For bacterial identification, rectovaginal swabs were first inoculated into Todd-Hewitt broth and then cultured on blood agar (containing 5% sheep blood). Microbiological and PCR methods were then performed to detect group B streptococci. Disk diffusion and broth microdilution tests were used to determine bacterial susceptibility to antibiotics according to CLSI M100 (2021) criteria. The antibacterial properties of zinc oxide nanoparticles were evaluated using the agar well-diffusion method. Results: The prevalence of group B Streptococcus in pregnant women was 18%. Of the twenty-seven positive cultures, 62.96% came from women older than thirty. Ninety percent and 45% of isolates were resistant to clindamycin and erythromycin, respectively, and susceptibility to cefazolin was 71%. In addition, susceptibility to ampicillin and penicillin was 74% and 55%, respectively. The results showed that 82% of erythromycin-resistant, 92% of clindamycin-resistant, and 78% of cefazolin-resistant isolates were eliminated by zinc oxide nanoparticles at a concentration of 100 mg/L. Furthermore, ZnONPs inhibited all drug-resistant isolates at a concentration of 200 mg/L (MIC90 ≥ 200). Conclusion: Since the drug resistance of group B streptococci to various antibiotics is increasing, determining and investigating the drug-resistance pattern of this bacterium is necessary in order to prevent indiscriminate consumption of antibiotics by pregnant women and, ultimately, to prevent infant mortality. Overall, ZnONPs showed a strong antimicrobial effect, and the bactericidal effect was found to increase with increasing nanoparticle concentration.
Keywords: group B beta-hemolytic streptococcus, pregnant women, zinc oxide nanoparticles, drug resistance
Procedia PDF Downloads 99
1068 Preparedness Level of Disaster Management Institutions in Context of Floods in Delhi
Authors: Aditi Madan, Jayant Kumar Routray
Abstract:
Purpose: Over the years, flood-related risks have compounded due to increasing vulnerability caused by rapid urbanisation and population growth. This increase indicates the need to enhance the preparedness of institutions to respond to floods. The study describes the disaster management structure and its linkages with the institutions involved in managing disasters. It addresses issues and challenges associated with the readiness of disaster management institutions to respond to floods. It suggests policy options for enhancing the current state of institutional readiness to respond by considering factors such as institutional capacity, manpower, finance, technical capability, leadership and networking, training and awareness programmes, and monitoring and evaluation. Methodology: The study is based on qualitative data, with statements and outputs from primary and secondary sources used to understand the institutional framework for disaster management in India. Primary data included field visits and interviews with officials from institutions managing disasters and with the affected community, to identify the challenges faced in engaging national, state, district and local level institutions in managing disasters. For focus group discussions, meetings were held with district project officers and coordinators, local officials, community-based organisations, civil defence volunteers and community heads. These discussions were held to identify the challenges associated with institutional preparedness to respond to floods. Findings: Results show that disasters are handled by the district authority and that the role of local institutions is limited to a reactive one during a disaster. The data also indicate that, although the existing institutional setup is well coordinated at the district level, it needs improvement at the local level. Wide variations exist in awareness and perception among the officials engaged in managing disasters. Additionally, their roles and responsibilities need to be clearly defined, with adequate budgets and dedicated permanent staff for managing disasters. Institutions need to utilise the existing manpower through proper delegation of work. Originality: The study suggests that disaster risk reduction needs to focus more on the inclusion of local urban bodies. In order to ensure community participation, it is important to address the community's social and economic problems, since such issues can overshadow attempts to reduce risks. Thus, this paper suggests developing direct linkages between institutions and the community to enhance preparedness to respond to floods.
Keywords: preparedness, response, disaster, flood, community, institution
Procedia PDF Downloads 234
1067 Natural Monopolies and Their Regulation in Georgia
Authors: Marina Chavleishvili
Abstract:
Introduction: Today, the study of monopolies, including natural monopolies, is topical. In real life, pure monopolies are natural monopolies. Natural monopolies are widespread and are regulated by the state; in particular, their prices and rates are regulated. The paper considers the problems associated with the operation of natural monopolies in Georgia, in particular their microeconomic analysis, pricing mechanisms, and the legal mechanisms of their operation. The analysis was carried out using the example of the power industry. The rates of natural monopolies in Georgia are controlled by the Georgian National Energy and Water Supply Regulation Commission. The paper analyzes the positive role and importance of the regulatory body and the issues of improving the legislative base that will support the efficient operation of the sector. Methodology: In order to highlight market tendencies among natural monopolies, the domestic and international markets are studied. An analysis of monopolies is carried out based on the endogenous and exogenous factors that determine the condition of companies, as well as the strategies chosen by firms to increase market share. Using a productivity-based competitiveness assessment scheme, the segmentation opportunities, business environment, resources, and geographical location of monopolist companies are revealed. Main Findings: As a result of the analysis, certain assessments and conclusions were made. Natural monopolies are quite a complex and multifaceted economic element, and it is important to specify and duly control their framework conditions. It is important to determine the pricing policy of natural monopolies. Rates should be transparent, should reflect the standard of living in the country, and should correspond to incomes. The analysis confirmed the significant role of the Antimonopoly Service in the efficient management of natural monopolies. The law should adapt to reality and should be applied only to regulate the market. The present-day differential electricity tariffs, which vary depending on the power consumed, need revision. The effects of electricity price discrimination are important, in particular segmentation across seasons. Consumers use more electricity in winter than in summer, which is associated with extra capacity and maintenance costs. If the price of electricity in winter is higher than in summer, electricity consumption will decrease in winter. Consumers will start to use electricity more economically, which will make it possible to reduce excess capacity. Conclusion: Thus, the practical realization of the views given in the paper will contribute to the efficient operation of natural monopolies. Consequently, their activity will be oriented not towards reducing but towards increasing the gains of consumers and producers. Overall, optimal management of these fields will help improve well-being throughout the country. In the article, conclusions are drawn and recommendations developed for delivering effective policies and regulations for natural monopolies in Georgia.
Keywords: monopolies, natural monopolies, regulation, antimonopoly service
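As a rough illustration of the seasonal price-discrimination argument above, the following sketch applies a constant-elasticity demand response to a hypothetical winter tariff increase; the elasticity, baseline demand figures, and tariff change are all assumed for illustration and do not come from the paper.

```python
# Hypothetical illustration of seasonal price discrimination: a higher winter
# tariff reduces winter (peak) demand and hence the excess capacity required.
# All numbers are assumptions for illustration, not data from the paper.
winter_demand = 1200.0   # baseline winter consumption, GWh (assumed)
summer_demand = 900.0    # baseline summer consumption, GWh (assumed)
elasticity = -0.3        # assumed price elasticity of electricity demand
price_increase = 0.20    # assumed 20% higher winter tariff

new_winter_demand = winter_demand * (1 + elasticity * price_increase)
capacity_saved = winter_demand - new_winter_demand

print(f"Winter demand falls from {winter_demand:.0f} to {new_winter_demand:.0f} GWh;")
print(f"roughly {capacity_saved:.0f} GWh of peak consumption is shifted or saved.")
```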
Procedia PDF Downloads 86
1066 Seismic Reinforcement of Existing Japanese Wooden Houses Using Folded Exterior Thin Steel Plates
Authors: Jiro Takagi
Abstract:
Approximately 90 percent of the casualties in the near-fault-type Kobe earthquake of 1995 resulted from the collapse of wooden houses, although only a limited number of collapses of this building type were reported in the more recent offshore-type Tohoku earthquake of 2011 (excluding direct damage by the tsunami). The Kumamoto earthquake of 2016 also revealed the vulnerability of old wooden houses in Japan. There are approximately 24.5 million wooden houses in Japan, and roughly 40 percent of them are considered to have inadequate seismic-resisting capacity. Therefore, seismic strengthening of these wooden houses is an urgent task. However, it has not proceeded quickly for various reasons, including cost and inconvenience during the reinforcing work. Residents typically spend their money on improvements that more directly affect their daily housing environment (such as interior renovation, equipment renewal, and installation of thermal insulation) rather than on strengthening against extremely rare events such as large earthquakes. Considering this tendency of residents, a new approach to developing a seismic strengthening method for wooden houses is needed. The seismic reinforcement method developed in this research uses folded galvanized thin steel plates as both shear walls and the new exterior architectural finish. The existing finish is not removed. Because galvanized steel plates are aesthetic and durable, they are commonly used on the roofs and walls of modern Japanese buildings. Residents can perceive a physical change through the reinforcement, as the existing exterior walls are covered with steel plates. Also, this exterior reinforcement can be installed with outdoor work only, reducing inconvenience for residents, since they are not required to move out temporarily during construction. The durability of the exterior is enhanced, and the reinforcing work can be done efficiently, since perfect water protection is not required for the new finish. In this method, the entire exterior surface functions as shear walls, and thus the pull-out force induced by seismic lateral load is significantly reduced compared with a typical reinforcement scheme of adding braces in selected frames. Consequently, the reinforcing details of the anchors to the foundations are less difficult. In order to attach the exterior galvanized thin steel plates to the houses, new wooden beams are placed next to the existing beams. In this research, steel connections between the existing and new beams are developed, which contain a gap for the existing finish between the two beams. The thin steel plates are screwed to the new beams and the connecting vertical members. The seismic-resisting performance of the shear walls with thin steel plates is experimentally verified for both the frames and the connections. It is confirmed that the performance is high enough for bracing general wooden houses.
Keywords: experiment, seismic reinforcement, thin steel plates, wooden houses
Procedia PDF Downloads 226
1065 Women’s Lived Experiences in Prison: A Study Conducted in Haramaya Correctional Facilities, Ethiopia, March 2023
Authors: Ramzi Bekri Umer
Abstract:
Aim: This study investigates the causes of and difficulties associated with women’s incarceration, as well as the threats to their reintegration after release from prison, with emphasis on the correctional facility of Haramaya city. Method and Methodology: Both quantitative and qualitative research methods were employed in this study; key informant interviews and participant observation were utilized to gather qualitative data, while cross-sectional and descriptive research designs were used to gather quantitative data. Findings: This study shows that the women's incarceration was caused by their family histories, gender-based violence, illiteracy, and socioeconomic issues. The principal charges against the female offenders were theft, vandalism, murder, and moral perversion. A poor quality of life in prison, concerns about family dissolution, emotional instability, financial difficulties, and a lack of spirituality were the main causes of unhappiness for the women behind bars, while social stigma, mistrust, and fear of retaliation were the main obstacles to the women's ability to reintegrate into their families and communities. Theoretical Importance: This study involves women incarcerated at the correctional center of Haramaya who committed various types of crimes. Local government sectors and non-governmental organizations will gain from the study in developing workable plans to reduce women's criminality and the growing number of female lawbreakers. Local communities and other governmental and non-governmental partners will be able to support gender equality initiatives that seek to eradicate the gender-based violence and discrimination that worsen women's criminality. Data Collection and Analysis Procedures: The quantitative and qualitative data were collected prospectively from a sample of 100 women prisoners. Quantitative data were analyzed using descriptive statistics, whereas thematic analysis was used for the qualitative data. Questions Answered: 1. What are the main causes of women’s imprisonment in the Haramaya city correctional facility? 2. What are the main obstacles to the women's ability to reintegrate into their families and communities after release from incarceration? Conclusion: The study concludes that incarcerated women experience a tremendous impact on their daily life. It highlights the importance of addressing factors such as family background, gender-based violence, illiteracy and socio-economic problems to decrease the number of women imprisoned. The detention environment, fear of family breakup, financial hardship and deprivation of spiritual life are the major sources of distress among the incarcerated women.
Keywords: Ethiopia, women prisoner, incarceration, reintegration
Procedia PDF Downloads 62
1064 Breast Cancer Metastasis Detection and Localization through Transfer-Learning Convolutional Neural Network Classification Based on Convolutional Denoising Autoencoder Stack
Authors: Varun Agarwal
Abstract:
Introduction: With the advent of personalized medicine, histopathological review of whole slide images (WSIs) for cancer diagnosis presents an exceedingly time-consuming, complex task. Specifically, detecting metastatic regions in WSIs of sentinel lymph node biopsies necessitates a fully scanned, holistic evaluation of the image. Thus, digital pathology, low-level image manipulation algorithms, and machine learning provide significant advancements in improving the efficiency and accuracy of WSI analysis. Using Camelyon16 data, this paper proposes a deep learning pipeline to automate and improve breast cancer metastasis localization and WSI classification. Methodology: The model broadly follows five stages: region-of-interest detection, WSI partitioning into image tiles, convolutional neural network (CNN) image-segment classification, probabilistic mapping of tumor localizations, and further processing for whole-WSI classification. Transfer learning is applied to the task, with the implementation of Inception-ResNetV2, an effective CNN classifier that uses residual connections to enhance feature representation, adding the convolved outputs of each inception unit to its input. Moreover, in order to augment the performance of the transfer-learning CNN, a stack of convolutional denoising autoencoders (CDAE) is applied to produce embeddings that enrich the image representation. Through a saliency-detection algorithm, visual training segments are generated, which are then processed through a denoising autoencoder (primarily consisting of convolutional, leaky rectified linear unit, and batch normalization layers) and subsequently a contrast-normalization function. A spatial pyramid pooling algorithm extracts the key features from the processed image, creating a viable feature map for the CNN that minimizes spatial resolution and noise. Results and Conclusion: The simplified and effective architecture of the fine-tuned transfer-learning Inception-ResNetV2 network enhanced with the CDAE stack yields state-of-the-art performance in WSI classification and tumor localization, achieving AUC scores of 0.947 and 0.753, respectively. The convolutional feature retention and compilation, with residual connections to the inception units, synergized with the input denoising algorithm, enable the pipeline to serve as an effective, efficient tool in the histopathological review of WSIs.
Keywords: breast cancer, convolutional neural networks, metastasis mapping, whole slide images
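The transfer-learning stage described above can be outlined in a few lines of Keras. This is a minimal sketch, assuming 299×299 RGB tiles with binary tumor/normal labels; the saliency detection, CDAE embedding, and spatial pyramid pooling stages of the pipeline are not reproduced here.

```python
# Minimal transfer-learning sketch: Inception-ResNetV2 backbone with a
# binary tile-level classification head. Data loading is omitted.
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.InceptionResNetV2(
    include_top=False, weights="imagenet", input_shape=(299, 299, 3))
base.trainable = False  # freeze pretrained weights for the first training phase

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.3),
    layers.Dense(1, activation="sigmoid"),  # tumor probability per tile
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auc")])
# model.fit(...) on tile batches would follow, then fine-tuning with
# base.trainable = True at a lower learning rate.
```

Tile-level tumor probabilities from such a model would then be stitched back into a heatmap over the WSI to produce the probabilistic tumor localization map described in the abstract.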
Procedia PDF Downloads 130
1063 Isolation of Clitorin and Manghaslin from Carica papaya L. Leaves by CPC and Its Quantitative Analysis by QNMR
Authors: Norazlan Mohmad Misnan, Maizatul Hasyima Omar, Mohd Isa Wasiman
Abstract:
Papaya (Carica papaya L., Caricaceae) is a tree mainly cultivated for its fruits in many tropical regions, including Australia, Brazil, China, Hawaii, and Malaysia. Besides the fruits, its leaves, seeds, and latex have also been used traditionally for treating diseases and are reported to possess anti-cancer and anti-malarial properties. Its leaves have been reported to contain various chemical compounds such as alkaloids, flavonoids and phenolics, with clitorin and manghaslin among the major flavonoids present. Thus, the aim of this study is to quantify the purity of these isolated compounds (clitorin and manghaslin) by quantitative Nuclear Magnetic Resonance (qNMR) analysis. Only fresh C. papaya leaves were used for the juice extraction procedure, and the juice was subsequently freeze-dried to obtain a dark green powdered form of the extract prior to Centrifugal Partition Chromatography (CPC) separation. The CPC experiments were performed using a two-phase solvent system comprising ethyl acetate/butanol/water (1:4:5, v/v/v). The upper organic phase was used as the stationary phase, and the lower aqueous phase was employed as the mobile phase. Ten fractions were obtained after a one-hour run. Fraction 6 and fraction 8 were identified as clitorin (m/z 739.21 [M-H]-) and manghaslin (m/z 755.21 [M-H]-), respectively, based on LCMS data and full NMR analysis (1H NMR, 13C NMR, HMBC, and HSQC). The 1H-qNMR measurements were carried out using a 400 MHz NMR spectrometer (JEOL ECS 400 MHz, Japan), with deuterated methanol as the solvent. Quantification was performed using the AQARI method (Accurate Quantitative NMR) with deuterated 1,4-bis(trimethylsilyl)benzene (BTMSB) as the internal reference substance. The AQARI protocol includes not only the NMR measurement but also a sample preparation procedure that provides higher precision and accuracy than other qNMR methods. The 90° pulse length and the T1 relaxation times of the compounds and BTMSB were determined prior to quantification to give the best signal-to-noise ratio. Regions containing the two downfield signals from the aromatic part (6.00–6.89 ppm) and the singlet signal (18H) arising from BTMSB (0.63–1.05 ppm) were selected for integration. The purities of clitorin and manghaslin were calculated to be 52.22% and 43.36%, respectively; further purification is needed in order to increase them. This finding demonstrates the use of qNMR for quality control and standardization of various plant extracts, which can be applied to NMR fingerprinting of other plant-based products with good reproducibility and in cases where commercial standards are not readily available.
Keywords: Carica papaya, clitorin, manghaslin, quantitative Nuclear Magnetic Resonance, Centrifugal Partition Chromatography
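The AQARI purity calculation referred to above generally takes the standard 1H-qNMR form below; the symbols and their roles are stated here as an assumption about common practice, since the abstract does not spell the equation out.

```latex
% Generic 1H-qNMR purity relation (assumed standard form, not quoted from the paper):
% I = integrated signal area, N = number of protons in the integrated signal,
% M = molar mass, m = weighed mass, P = purity; subscript x = analyte
% (clitorin or manghaslin), subscript std = internal standard (BTMSB).
P_x = \frac{I_x}{I_{\mathrm{std}}} \cdot \frac{N_{\mathrm{std}}}{N_x}
      \cdot \frac{M_x}{M_{\mathrm{std}}} \cdot \frac{m_{\mathrm{std}}}{m_x}
      \cdot P_{\mathrm{std}}
```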
Procedia PDF Downloads 497
1062 In vivo Wound Healing Activity and Phytochemical Screening of the Crude Extract and Various Fractions of Kalanchoe petitiana A. Rich (Crassulaceae) Leaves in Mice
Authors: Awol Mekonnen, Temesgen Sidamo, Epherm Engdawork, Kaleab Asresb
Abstract:
Ethnopharmacological Relevance: The leaves of Kalanchoe petitiana A. Rich (Crassulaceae) are used in Ethiopian folk medicine for the treatment of the evil eye, applied to fractured surfaces for bone setting, and used for several skin disorders, including the treatment of sores, boils, and malignant wounds. Aim of the Study: In order to scientifically validate the claimed use of the plant, the effects of the extract and its fractions were investigated using in vivo excision, incision and dead space wound models. Materials and Method: Mice were used for the wound healing study, while rats and rabbits were used for the skin irritation test. For the healing activity study, the 80% methanolic extract and the fractions were formulated at strengths of 5% and 10%, either as an ointment (hydroalcoholic extract, aqueous and methanol fractions) or as a gel (chloroform fraction). Oral administration of the crude extract was used for the dead space model. Negative controls were treated either with simple ointment or with sodium carboxymethyl cellulose xerogel, while positive controls were treated with nitrofurazone (0.2% w/v) skin ointment. Negative controls in the dead space model were treated with 1% carboxymethyl cellulose. Parameters including the rate of wound contraction, period of complete epithelialization, hydroxyproline content and skin breaking strength were evaluated. Results: Significant wound healing activity was observed with the ointment formulated from the crude extract at both 5% and 10% concentrations (p<0.01) compared to controls in both the excision and incision models. In the dead space model, 600 mg/kg (p<0.01), but not 300 mg/kg, significantly increased the hydroxyproline content. The fractions showed variable effects, with the chloroform fraction lacking any significant effect. Both the 5% and 10% formulations of the aqueous and methanolic fractions significantly increased wound contraction, decreased epithelialization time and increased hydroxyproline content in the excision wound model (p<0.05) compared to controls. These fractions also produced higher skin breaking strength in the incision wound model (p<0.01). Conclusions: The present study provides evidence that the leaves of Kalanchoe petitiana A. Rich possess remarkable wound healing activities, supporting the folkloric claims about the plant. Fractionation revealed that polar or semi-polar compounds may play a vital role, as both the aqueous and methanolic fractions were endowed with wound healing activity.
Keywords: wound healing, Kalanchoe petitiana, excision wound, incision wound, dead space model
Procedia PDF Downloads 309
1061 The Trumping of Science: Exploratory Study into Discrepancy between Politician and Scientist Sources in American Covid-19 News Coverage
Authors: Wafa Unus
Abstract:
Science journalism has been vanishing from America’s national newspapers for decades. Reportage on scientific topics is limited to only a handful of newspapers, and of those, few employ dedicated science journalists to cover stories that require this specialized expertise. News organizations' lack of readiness to convey complex scientific concepts to a mass populace becomes particularly problematic when events like the Covid-19 pandemic occur. The lack of coverage of Covid-19 prior to its onset in the United States suggests something more troubling: that the deprioritization of reporting on hard science as an educational tool, in favor of political frames of coverage, places dangerous blinders on the American public. This research looks at the disparity between the voices of health and science experts in news articles and the voices of political figures, in order to better understand the approach of American newspapers in conveying expert opinion on Covid-19. A content analysis of 300 articles on Covid-19 by major newspapers in the United States between January 1, 2020 and April 30, 2020 illuminates this investigation. The Boston Globe, the New York Times, and the Los Angeles Times are included in the content analysis. Initial findings reveal a significant disparity between the number of articles that mention Anthony Fauci, the director of the National Institute of Allergy and Infectious Diseases, and the number that make reference to political figures. Covid-related articles in the New York Times that focused on health topics (as opposed to economic or social issues) contained the voices of 54 different politicians, who were mentioned a total of 608 times; only five members of the scientific community were mentioned, a total of 24 times (out of 674 articles). In the Boston Globe, 36 different politicians were mentioned a total of 147 times, and only two members of the scientific community, one being Anthony Fauci, were mentioned a total of nine times (out of 423 articles). In the Los Angeles Times, 52 different politicians were mentioned a total of 600 times, and only six members of the scientific community were included, mentioned a total of 82 times, with Fauci mentioned 48 times (out of 851 articles). These results provide a better understanding of the frames in which American journalists in Covid hotspots conveyed expert analysis of Covid-19 during one of the most pressing news events of the century. Ultimately, the objective of this study is to use the exploratory data to evaluate the nature, extent and impact of Covid-19 reporting in the context of trustworthiness and scientific expertise. Secondarily, these data illuminate the degree to which Covid-19 reporting focused on politics over science.
Keywords: science reporting, science journalism, covid, misinformation, news
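The mention-counting step behind these figures is straightforward to reproduce. The sketch below assumes a hand-coded CSV with one row per source mention; the file name and column names are illustrative, not from the study.

```python
# Illustrative tally of source mentions per outlet from a hand-coded CSV with
# columns: outlet, article_id, source_name, source_type, where source_type is
# "politician" or "scientist". All names here are assumptions.
import csv
from collections import defaultdict

mentions = defaultdict(int)        # (outlet, source_type) -> total mentions
unique_sources = defaultdict(set)  # (outlet, source_type) -> distinct people

with open("covid_source_coding.csv", newline="") as f:
    for row in csv.DictReader(f):
        key = (row["outlet"], row["source_type"])
        mentions[key] += 1
        unique_sources[key].add(row["source_name"])

for (outlet, stype), total in sorted(mentions.items()):
    print(f"{outlet:20s} {stype:10s} "
          f"{len(unique_sources[(outlet, stype)]):3d} people, {total:4d} mentions")
```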
Procedia PDF Downloads 217
1060 The Commodification of Internet Culture: Online Memes and Differing Perceptions of Their Commercial Uses
Authors: V. Esteves
Abstract:
As products of participatory culture, internet memes represent a global form of interaction with online culture. These digital objects draw upon a rich historical engagement with remix practices that dates back decades: from the copy-and-paste practices of Dadaism and punk to the re-appropriation techniques of the Situationist International, memes echo a long-established form of cultural creativity that pivots on the art of the remix. Online culture has eagerly embraced the changes that Web 2.0 afforded in terms of making use of remixing as an accessible form of societal expression, bridging these remix practices of the past into a more widely available and accessible platform. Memes embody the idea of 'intercreativity', allowing global creative collaboration to take place through networked digital media; they reflect the core values of participation and interaction that are present throughout much internet discourse while also existing in a historical remix continuum. Memes hold the power of cultural symbolism manipulated by global audiences through which societies make meaning, as these remixed digital objects have an elasticity and low literacy threshold that allow for a democratic form of cultural engagement and meaning-making by and for users around the world. However, because memes are so elastic, their ability to be re-appropriated by other powers for purposes beyond their original intention has become evident. Recently, corporations have made use of internet memes for advertising purposes, engaging in the circulation and re-appropriation of internet memes in commercial spaces, which has, in turn, further complicated the relationship between online users and memes' democratic possibilities. By engaging in a widespread online ethnography supplemented by in-depth interviews with meme makers, this research was able not only to track different online meme uses in commercial contexts but also to engage in qualitative discussions with meme makers and users regarding their perception and experience of these varying commercial uses of memes. These can be broadly put into two categories: internet memes that are turned into physical merchandise, and the use of memes in advertising to sell other (non-meme-related) products. While there has been considerable acceptance of the former type of commercial meme use, the use of memes in adverts to sell unrelated products has been met with resistance. The changes in reception regarding commercial meme use depend on ideas of cultural ownership and perceptions of authorship, ultimately uncovering underlying socio-cultural ideologies that come to the fore within these overlapping contexts. Additionally, this adoption of memes by corporate powers echoes the recuperation process that the Situationist International endured, creating a further link with older remix cultures and their lifecycles.
Keywords: commodification, internet culture, memes, recuperation, remix
Procedia PDF Downloads 144
1059 In Vitro Assessment of the Genotoxicity of Composite Obtained by Mixture of Natural Rubber and Leather Residues for Textile Application
Authors: Dalita G. S. M. Cavalcante, Elton A. P. dos Reis, Andressa S. Gomes, Caroline S. Danna, Leandra Ernest Kerche-Silva, Eidi Yoshihara, Aldo E. Job
Abstract:
In order to minimize environmental impacts, a composite was developed from a mixture of leather shavings (LE) and natural rubber (NR), for which a patent has already been filed. The new material can be used in applications such as floors and heels for shoes. Beyond these applications, the aim is to use this new material to produce products for the textile industry, such as boots, gloves and bags. But the question of the biocompatibility of this new material arises, because the structure of the leather shavings contains chromium. Trivalent chromium is usually not toxic, but hexavalent chromium can be highly toxic and genotoxic for living beings, causing damage to the DNA molecule and contributing to the formation of cancer. Based on this, the objective of this study is to evaluate the possible genotoxic effects of the new composite, using two cell lines (MRC-5 and CHO-K1) as the test system and the comet assay. For this, the composite was produced in three proportions: for every 100 grams of NR, 40 (E40), 50 (E50) or 60 (E60) grams of LE were added. The latex was collected from the rubber tree (Hevea brasiliensis). For vulcanization of the NR, activators and accelerators were used. The two cell lines were exposed to the new composite in its three proportions using the elution method, that is, cells were exposed for 24 hours to liquid extracts obtained from the composite. To obtain the liquid extract, each sample of the composite was crushed into pieces and mixed with an extraction solution. The quantification of total chromium and hexavalent chromium in the extracts was performed by Inductively Coupled Plasma Optical Emission Spectrometry (ICP-OES). The levels of DNA damage in cells exposed to the extracts were monitored by the alkaline version of the comet assay. The results of the metal quantification by ICP-OES indicated the presence of total chromium in the different extracts, but the presence of hexavalent chromium was not detected in any extract. In the comet assay, no DNA damage was found in CHO-K1 cells exposed to any of the extracts. As for MRC-5, a significant increase in DNA damage was found in cells exposed to E50 and E60. Based on the above data, it can be stated that the extracts obtained from the composite were highly genotoxic for MRC-5 cells. These biological responses do not appear to be related to the chromium metal, since trivalent chromium predominated in the extracts, indicating that during the production process of the new composite there was no formation of hexavalent chromium. In conclusion, it can be inferred that leather shavings containing chromium can be reused, thereby reducing the environmental impact of this waste. As for the composite, its incorporation is indicated in applications that do not involve direct contact with human skin, and it is suggested that the composite production chain be studied in an attempt to make it biocompatible so that it may be safely used by the textile industry.
Keywords: cell line, chrome, genotoxicity, leather, natural rubber
Procedia PDF Downloads 196
1058 Ultrasonic Irradiation Synthesis of High-Performance Pd@Copper Nanowires/MultiWalled Carbon Nanotubes-Chitosan Electrocatalyst by Galvanic Replacement toward Ethanol Oxidation in Alkaline Media
Authors: Majid Farsadrouh Rashti, Amir Shafiee Kisomi, Parisa Jahani
Abstract:
Direct ethanol fuel cells (DEFCs) are considered a promising energy source because, in addition to being used in portable electronic devices, they can also be used in electric vehicles. The synthesis of bimetallic nanostructures is attracting extensive attention due to their novel optical, catalytic and electronic characteristics, which contrast sharply with those of their monometallic counterparts. Galvanic replacement (sometimes referred to as cementation or immersion plating) is an uncomplicated and effective technique for making nanostructures (such as core-shell structures) of different metals and semiconductors, and for their application in DEFCs. Unlike electrodeposition, galvanic replacement does not need any external power supply. In addition, it differs from electroless deposition in that no reducing agent is needed. In this paper, a fast method for the synthesis of palladium (Pd) nanowire structures with a large surface area through a galvanic replacement reaction, utilizing copper nanowires (CuNWs) as a template with the assistance of ultrasound at room temperature, is proposed. To evaluate the morphology and composition of Pd@copper nanowires/multiwalled carbon nanotubes-chitosan, emission scanning electron microscopy and energy-dispersive X-ray spectroscopy were applied. The phase structure of the electrocatalysts was measured by room-temperature X-ray powder diffraction (XRD) using an X-ray diffractometer. Various electrochemical techniques, including chronoamperometry and cyclic voltammetry, were utilized to assess the electrocatalytic activity for ethanol electrooxidation and the durability in alkaline solution. The Pd@copper nanowires/multiwalled carbon nanotubes-chitosan catalyst demonstrated substantially enhanced performance and long-term stability for ethanol electrooxidation in alkaline solution in comparison to commercial Pd/C, demonstrating its potential as an efficient catalyst for ethanol oxidation. Notably, the Pd@copper nanowires/multiwalled carbon nanotubes-chitosan presented excellent catalytic activity, with a peak current density of 320.73 mA cm⁻², roughly 9.4 times that of Pd/C (34.21 mA cm⁻²). Additionally, thermodynamic and kinetic evaluations revealed that the Pd@copper nanowires/multiwalled carbon nanotubes-chitosan catalyst has a lower activation energy than Pd/C, which leads to a lower energy barrier and an excellent charge-transfer rate towards ethanol oxidation.
Keywords: core-shell structure, electrocatalyst, ethanol oxidation, galvanic replacement reaction
Procedia PDF Downloads 147
1057 Classification of Coughing and Breathing Activities Using Wearable and a Light-Weight DL Model
Authors: Subham Ghosh, Arnab Nandi
Abstract:
Background: The proliferation of wireless body area network (WBAN) and Internet of Things (IoT) applications demonstrates the potential for continuous monitoring of physical changes in the body. These technologies are vital for health monitoring tasks, such as identifying coughing and breathing activities, which are necessary for disease diagnosis and management. Monitoring activities such as coughing and deep breathing can provide valuable insights into a variety of medical issues. Wearable radio-based antenna sensors, which are lightweight and easy to incorporate into clothing or portable goods, provide continuous monitoring. This mobility gives them a substantial advantage over stationary environmental sensors such as cameras and radar, which are constrained to certain places. Furthermore, using compressive techniques provides benefits such as reduced data transmission rates and memory requirements. These wearable sensors offer more advanced and diverse health monitoring capabilities. Methodology: This study analyzes the feasibility of using a semi-flexible antenna operating at 2.4 GHz (ISM band), positioned around the neck and near the mouth, to identify three activities: coughing, deep breathing, and idleness. A vector network analyzer (VNA) is used to collect time-varying complex reflection coefficient data from the perturbed antenna near-field. The reflection coefficient (S11) conveys nuanced information caused by simultaneous variations in the near-field radiation of the three activities over time. The signatures are sparsely represented with Gaussian-windowed Gabor spectrograms. The Gabor spectrogram is used as a sparse representation approach, which reassigns the ridges of the spectrogram images to improve their resolution and focus on essential components. The antenna is biocompatible in terms of specific absorption rate (SAR). The sparsely represented Gabor spectrogram images are fed into a lightweight deep learning (DL) model for feature extraction and classification. Two antenna locations are investigated in order to determine the most effective placement for the three different activities. Findings: Cross-validation techniques were used on data from both locations. Due to the complex form of the recorded S11, separate analyses and assessments were performed on the magnitude, the phase, and their combination. The combination of magnitude and phase fared better than the separate analyses. Various sliding window sizes, ranging from 1 to 5 seconds, were tested to find the best window for activity classification. It was discovered that a neck-mounted design was effective at detecting the three distinct activities.
Keywords: activity recognition, antenna, deep-learning, time-frequency
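One plausible reading of the Gaussian-windowed Gabor spectrogram step is a short-time Fourier transform with a Gaussian window, as sketched below; the sampling rate, window length, and window width are illustrative assumptions, and the ridge-reassignment stage is not reproduced.

```python
# Sketch of a Gaussian-windowed (Gabor-style) spectrogram of a time-varying
# |S11| trace. Parameter values are illustrative, not the paper's.
import numpy as np
from scipy import signal

fs = 100.0                                   # assumed S11 sampling rate, Hz
t = np.arange(0, 10, 1 / fs)
s11_mag = 0.1 * np.sin(2 * np.pi * 1.5 * t)  # stand-in for a measured |S11| trace

f, tau, Zxx = signal.stft(
    s11_mag, fs=fs,
    window=("gaussian", 16),                 # Gaussian window -> Gabor transform
    nperseg=128, noverlap=96)

spectrogram = np.abs(Zxx) ** 2               # power image fed to the lightweight CNN
print(spectrogram.shape)                     # (frequency bins, time frames)
```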
Procedia PDF Downloads 10
1056 Predictive Semi-Empirical NOx Model for Diesel Engine
Authors: Saurabh Sharma, Yong Sun, Bruce Vernham
Abstract:
Accurate prediction of NOx emission is a continuous challenge in the field of diesel engine-out emission modeling. Performing experiments for every condition and scenario costs a significant amount of money and man-hours; therefore, a model-based development strategy has been implemented to address this issue. NOx formation is highly dependent on the burned-gas temperature and the O2 concentration inside the cylinder. Current empirical models are developed by calibrating parameters representing the engine operating conditions against the measured NOx, which limits the predictions of purely empirical models to the region where they have been calibrated. An alternative solution is presented in this paper, which focuses on utilizing in-cylinder combustion parameters to form a predictive semi-empirical NOx model. The result of this work is a fast and predictive NOx model built from physical parameters and empirical correlations. The model is developed based on steady-state data collected over the entire operating region of the engine and on a predictive combustion model developed in Gamma Technologies (GT)-Power using the Direct-Injection (DI)-Pulse combustion object. In this approach, the temperatures of both the burned and unburned zones are considered during the combustion period, i.e., from intake valve closing (IVC) to exhaust valve opening (EVO). The oxygen concentration consumed in the burned zone and the trapped fuel mass are also considered in the reported model. Several statistical methods are used to construct the model, including individual machine learning methods and ensemble machine learning methods. A detailed validation of the model on multiple diesel engines is reported in this work. A substantial number of cases are tested for different engine configurations over a large span of speed and load points. Different sweeps of operating conditions, such as exhaust gas recirculation (EGR), injection timing and variable valve timing (VVT), are also considered for the validation. The model shows very good predictability and robustness at both sea-level and altitude conditions with different ambient conditions. The various advantages, such as high accuracy and robustness at different operating conditions, low computational time and the lower number of data points required for calibration, establish a platform where the model-based approach can be used for the engine calibration and development process. Moreover, this work aims at establishing a framework for future model development for various other targets, such as soot, combustion noise level (CNL), NO2/NOx ratio, etc.
Keywords: diesel engine, machine learning, NOₓ emission, semi-empirical
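The statistical step can be illustrated with a generic ensemble regressor fitted to in-cylinder parameters. The sketch below uses synthetic data and placeholder features standing in for the quantities named above (burned-zone temperature, O2 concentration, trapped fuel mass); it is not the calibrated model from the paper.

```python
# Generic ensemble-regression sketch for engine-out NOx prediction from
# in-cylinder combustion parameters. Features and data are synthetic stand-ins.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.uniform(1800, 2600, n),   # burned-zone temperature [K] (assumed range)
    rng.uniform(0.05, 0.21, n),   # burned-zone O2 mole fraction (assumed range)
    rng.uniform(10, 60, n),       # trapped fuel mass [mg] (assumed range)
    rng.uniform(0, 40, n),        # EGR rate [%] (assumed range)
])
# Synthetic target loosely mimicking the exponential temperature sensitivity of NOx
y = 1e-3 * np.exp(X[:, 0] / 400) * X[:, 1] + rng.normal(0, 0.05, n)

model = GradientBoostingRegressor(n_estimators=300, max_depth=3)
print(cross_val_score(model, X, y, cv=5, scoring="r2").mean())
```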
Procedia PDF Downloads 114
1055 A Dynamic Mechanical Thermal T-Peel Test Approach to Characterize Interfacial Behavior of Polymeric Textile Composites
Authors: J. R. Büttler, T. Pham
Abstract:
A basic understanding of interfacial mechanisms is important for the development of polymer composites. For this purpose, we need techniques to analyze the quality of interphases, their chemical and physical interactions, and their strength and fracture resistance. In order to investigate the interfacial phenomena in detail, advanced characterization techniques are favorable. Dynamic mechanical thermal analysis (DMTA) using a rheological system is a sensitive tool; T-peel tests were performed with this system to investigate the temperature-dependent peel behavior of woven textile composites. A model system was made of polyamide (PA) woven fabric laminated with films of polypropylene (PP) or of PP modified by grafting with maleic anhydride (PP-g-MAH). Firstly, control measurements were performed on the PP matrices alone. Polymer melt investigations, as well as the extensional stress, extensional viscosity and extensional relaxation modulus at -10 °C, 100 °C and 170 °C, demonstrate similar viscoelastic behavior for films made of PP-g-MAH and the non-modified PP control. Frequency sweeps showed that PP-g-MAH has a zero-shear viscosity of around 1600 Pa·s, while the PP control has a similar zero-shear viscosity of 1345 Pa·s. The gelation points are also similar, at 2.42×10⁴ Pa (118 rad/s) and 2.81×10⁴ Pa (161 rad/s) for the PP control and PP-g-MAH, respectively. Secondly, the textile composite was analyzed. The extensional stress of PA66 fabric laminated with either PP control or PP-g-MAH was investigated at -10 °C, 25 °C and 170 °C for strain rates of 0.001–1 s⁻¹. The laminates containing the modified PP require more stress for T-peeling. However, the strengthening effect of the modification decreases with increasing temperature, and at 170 °C, just above the melting temperature of the matrix, the difference disappears. Independent of the matrix used in the textile composite, the extensional stress decreases with increasing temperature. It appears that the more viscous the matrix, the weaker the laminar adhesion. Possibly, the measurement is influenced by the fact that the laminate becomes stiffer at lower temperatures. Adhesive lap-shear testing at room temperature supports the findings obtained with the T-peel test. Additional analysis of the textile composite at the microscopic level confirms that the fibers are well embedded in the matrix: atomic force microscopy (AFM) imaging of a cross-section of the composite shows no gaps between the fibers and the matrix. Measurements of the water contact angle show that the MAH-grafted PP is more polar than the virgin PP, which suggests a more favorable chemical interaction of PP-g-MAH with PA compared to the non-modified PP. In fact, this study indicates that T-peel testing by DMTA is a technique for gaining deeper insights into polymeric textile composites.
Keywords: dynamic mechanical thermal analysis, interphase, polyamide, polypropylene, textile composite
Procedia PDF Downloads 129
1054 Methods of Detoxification of Nuts With Aflatoxin B1 Contamination
Authors: Auteleyeva Laura, Maikanov Balgabai, Smagulova Ayana
Abstract:
In order to find and select detoxification methods, patent and literature research was conducted, as a result of which 68 patents for inventions were found: 14 from the near abroad (Russia), and, from farther abroad, 27 from China, 6 from the USA, 1 from South Korea, 2 from Germany, 4 from Mexico, 7 from Yugoslavia, and one patent document each from Austria, Taiwan, Belarus, Denmark, Italy, Japan and Canada. Aflatoxin B₁ in various nuts was determined by two methods: the "RIDASCREEN FAST Aflatoxin" enzyme immunoassay, with determination of optical density on a RIDA ABSORPTION 96 microplate spectrophotometer with RIDASOFT Win.NET software (Germany), and high-performance liquid chromatography (HPLC, Waters Corporation, USA) according to GOST 30711-2001. For experimental contamination of the nuts, the A. flavus KWIK-STIK strain was cultivated on Czapek medium (France), with subsequent infection of various nuts (peanuts, in-shell peanuts, badam (almonds), walnuts with and without shells, and pistachios). Based on our research, we selected two detoxification methods: method 1, a combined treatment (5% citric acid solution + microwave at 640 W for 3 min + UV for 20 min), and method 2, a chemical method using leaves of various plants, Artemisia terra-albae, Thymus vulgaris and Callogonum affilium, collected in the Akmola region (Artemisia terra-albae, Thymus vulgaris) and in Western Kazakhstan (Callogonum affilium). The first stage was the production of ethanol extracts of Artemisia terra-albae, Thymus vulgaris and Callogonum affilium. To obtain them, 100 g of plant raw material was taken and dissolved in 70% ethyl alcohol. Extraction was carried out for 2 hours at the boiling point of the solvent under a reflux condenser, using a "Sapphire" ultrasonic bath. The extracts obtained were evaporated on an IKA RV 10 rotary evaporator. In the second stage, the three samples obtained were tested for antimicrobial and antifungal activity. Extracts of Thymus vulgaris and Callogonum affilium showed high antimicrobial and antifungal activity, while the Artemisia terra-albae extract showed high antimicrobial activity but low antifungal activity. When testing method 1, it was found that in the first and third experimental groups the concentration of aflatoxin B1 in walnut samples decreased by 63% and 65%, respectively, but these values still exceeded the maximum permissible concentrations, while the nuts in the second and third experimental groups had a tart lemon flavor. When testing method 2, a decrease in the concentration of aflatoxin B1 by 91%, to a safe level (0.0038 mg/kg), was observed in the nuts of the 1st and 2nd experimental groups (Artemisia terra-albae, Thymus vulgaris), while in the samples of the 2nd and 3rd experimental groups a decrease in the amount of aflatoxin B1 to a safe level was observed.
Keywords: nuts, aflatoxin B1, mycotoxins
Procedia PDF Downloads 86
1053 Organic Matter Distribution in Bazhenov Source Rock: Insights from Sequential Extraction and Molecular Geochemistry
Authors: Margarita S. Tikhonova, Alireza Baniasad, Anton G. Kalmykov, Georgy A. Kalmykov, Ralf Littke
Abstract:
There is high complexity in the pore structure of organic-rich rocks, caused by the combination of inter-particle porosity from the inorganic mineral matter and ultrafine intra-particle porosity from both organic matter and clay minerals. Fluids are retained in that pore space, but there are major uncertainties in how and where the fluids are stored and to what extent they are accessible or trapped in 'closed' pores. A large degree of tortuosity may lead to fractionation of organic matter, so that lighter and more flexible compounds diffuse to the reservoir whereas more complex compounds may be locked in place. Additionally, part of the hydrocarbons may be bound to solid organic matter (kerogen) and the mineral matrix during expulsion and migration. Larger compounds can occupy thin channels, so that clogging or oil and gas entrapment will occur. Sequential extraction applying different solvents is a powerful tool for providing more information about the distribution of trapped organic matter. The Upper Jurassic to Lower Cretaceous Bazhenov shale is one of the most petroliferous source rocks, extending across West Siberia, Russia. Given its variable mineral composition, pore space distribution and thermal maturation, there are high uncertainties in the distribution and composition of organic matter in this formation. In order to address this issue, the geological and geochemical properties of 30 samples were considered, including mineral composition (XRD and XRF), structure and texture (thin-section microscopy), organic matter content, type and thermal maturity (Rock-Eval), as well as the molecular composition (GC-FID and GC-MS) of the different materials extracted during sequential extraction. Sequential extraction was performed with a Soxhlet apparatus using different solvents, i.e., n-hexane, chloroform and ethanol-benzene (1:1 v:v), first on core plugs and later on pulverized materials. The results indicate that the studied samples are mainly composed of type II kerogen, with TOC contents varying from 5 to 25%. The thermal maturity ranged from immature to late oil window. Whereas the clay content decreased with increasing maturity, the amount of silica increased in the studied samples. According to the molecular geochemistry, hydrocarbons stored in open and closed pore space reveal different geochemical fingerprints. The results improve our understanding of hydrocarbon expulsion and migration in the organic-rich Bazhenov shale and therefore allow a better estimation of the hydrocarbon potential of this formation.
Keywords: Bazhenov formation, bitumen, molecular geochemistry, sequential extraction
Procedia PDF Downloads 170
1052 Development of Intellectual Property Information Services in Zimbabwe’s University Libraries: Assessing the Current Status and Mapping the Future Direction
Authors: Jonathan Munyoro, Takawira Machimbidza, Stephen Mutula
Abstract:
The study investigates the current status of Intellectual Property (IP) information services in Zimbabwe's university libraries. Specifically, the study assesses the current IP information services offered in Zimbabwe’s university libraries, identifies challenges to the development of comprehensive IP information services in Zimbabwe’s university libraries, and suggests solutions for the development of IP information services in Zimbabwe’s university libraries. The study is born out of a realisation that research on IP information services in university libraries has received little attention, especially in developing country contexts, despite the fact that there are calls for heightened participation of university libraries in IP information services. In Zimbabwe, the launch of the National Intellectual Property Policy and Implementation Strategy 2018-2022 and the introduction of the Education 5.0 concept are set to significantly change the IP landscape in the country. Education 5.0 places more emphasis on innovation and industrialisation (in addition to teaching, community service, and research), and has the potential to shift the focus and level of IP output produced in higher and tertiary education institutions beyond copyrights and more towards commercially exploited patents, utility models, and industrial designs. The growing importance of IP commercialisation in universities creates a need for appropriate IP information services to assist students, academics, researchers, administrators, start-ups, entrepreneurs, and inventors. The critical challenge for university libraries is to reposition themselves and remain relevant in the new trajectory. Designing specialised information services to support increased IP generation and commercialisation appears to be an opportunity for university libraries to stay relevant in the knowledge economy. However, IP information services in Zimbabwe’s universities appear to be incomplete and focused mostly on assisting with research publications and copyright-related activities. Research on the existing status of IP services in university libraries in Zimbabwe is therefore necessary to help identify gaps and provide solutions in order to stimulate the growth of new forms of such services. The study employed a quantitative approach. An online questionnaire was administered to 57 academic librarians from 15 university libraries. Findings show that the current focus of the surveyed institutions is on providing scientific research support services (15); disseminating/sharing university research output (14); and copyright activities (12). More specialised IP information services such as IP education and training, patent information services, IP consulting services, IP online service platforms, and web-based IP information services are largely unavailable in Zimbabwean university libraries. Results reveal that the underlying challenge in the development of IP information services in Zimbabwe's university libraries is insufficient IP knowledge among academic librarians, which is exacerbated by inadequate IP management frameworks in university institutions. The study proposes a framework for the entrenchment of IP information services in Zimbabwe's university libraries.
Keywords: academic libraries, information services, intellectual property, IP knowledge, university libraries, Zimbabwe
Procedia PDF Downloads 156
1051 Investigation of a Single Feedstock Particle during Pyrolysis in Fluidized Bed Reactors via X-Ray Imaging Technique
Authors: Stefano Iannello, Massimiliano Materazzi
Abstract:
Fluidized bed reactor technologies are one of the most valuable pathways for the thermochemical conversion of biogenic fuels due to their good operating flexibility. Nevertheless, there are still issues related to the mixing and separation of heterogeneous phases during operation with highly volatile feedstocks, including biomass and waste. At high temperatures, the volatile content of the feedstock is released in the form of so-called endogenous bubbles, which generally exert a "lift" effect on the particle itself by dragging it up to the bed surface. This phenomenon leads to a high release of volatile matter into the freeboard and limited mass and heat transfer with the particles of the bed inventory. The aim of this work is to gain a better understanding of the behaviour of a single reacting particle in a hot fluidized bed reactor during the devolatilization stage. The analysis was undertaken at different fluidization regimes and temperatures to closely mirror the operating conditions of waste-to-energy processes. Beechwood and polypropylene particles were used to represent the biomass and plastic fractions present in waste materials, respectively. The non-invasive X-ray technique was coupled with particle tracking algorithms to characterize the motion of a single feedstock particle during devolatilization at high resolution. A high-energy X-ray beam passes through the vessel, where absorption occurs depending on the distribution and amount of solids and fluids along the beam path. A high-speed video camera is synchronised with the beam and provides frame-by-frame imaging of the flow patterns of fluids and solids within the fluidized bed at up to 72 fps (frames per second). A comprehensive mathematical model has been developed in order to validate the experimental results. Beechwood and polypropylene particles showed very different dynamic behaviour during the pyrolysis stage. When the feedstock is fed from the bottom, the plastic material tends to spend more time within the bed than the biomass. This behaviour can be attributed to the presence of the endogenous bubbles, whose drag effect is more pronounced during the devolatilization of biomass, resulting in a shorter residence time of the particle within the bed. At the typical operating temperatures of thermochemical conversion, the synthetic polymer softens and melts, and the bed particles attach to its outer surface, generating a wet plastic-sand agglomerate. Consequently, this additional layer of sand may hinder the rapid evolution of volatiles in the form of endogenous bubbles, so that only a weak drag effect acts on the feedstock itself. Information about the mixing and segregation of solid feedstock is of prime importance for the design and development of more efficient industrial-scale operations.
Keywords: fluidized bed, pyrolysis, waste feedstock, X-ray
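The abstract does not name its particle tracking algorithm; a generic locate-and-link approach of the kind it describes can be sketched with the open-source trackpy library, assuming the X-ray frames are available as a (frame, y, x) array in which the absorbing feedstock particle appears as a dark blob.

```python
# Generic locate-and-link particle tracking sketch (trackpy); the file name,
# blob diameter, and linking parameters are illustrative assumptions.
import numpy as np
import trackpy as tp

frames = np.load("xray_frames.npy")          # hypothetical stack of 72 fps frames
inverted = frames.max() - frames             # make the absorbing particle bright

features = tp.batch(inverted, diameter=21)   # detect blob centroids per frame
trajectories = tp.link(features, search_range=15, memory=3)

# Vertical position over time reveals rise and sinking during devolatilization
particle0 = trajectories[trajectories["particle"] == 0]
print(particle0[["frame", "y"]].head())
```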
Procedia PDF Downloads 172
1050 The Association of Southeast Asian Nations (ASEAN) and the Dynamics of Resistance to Sovereignty Violation: The Case of East Timor (1975-1999)
Authors: Laura Southgate
Abstract:
The Association of Southeast Asian Nations (ASEAN), as well as much of the scholarship on the organisation, celebrates its ability to uphold the principle of regional autonomy, understood as upholding the norm of non-intervention by external powers in regional affairs. Yet, in practice, this norm has been repeatedly violated. This dichotomy between rhetoric and practice suggests an interesting avenue for further study. The East Timor crisis (1975-1999) has been selected as a case study to test the dynamics of ASEAN state resistance to sovereignty violation in two distinct timeframes: Indonesia’s initial invasion of the territory in 1975, and the ensuing humanitarian crisis in 1999, which resulted in a UN-mandated, Australian-led peacekeeping intervention force. These time periods demonstrate variation in the dependent variable; it is necessary to observe covariation in order to derive observations in support of a causal theory. To establish covariation, my independent variable is therefore a continuous variable characterised by variation in convergence of interest. Change in this variable should change the value of the dependent variable, thus establishing causal direction. This paper investigates the history of ASEAN’s relationship to the norm of non-intervention. It offers an alternative understanding of ASEAN’s history, written in terms of the relationship between a key ASEAN state, which I call a ‘vanguard state’, and selected external powers. This paper considers when ASEAN resistance to sovereignty violation has succeeded and when it has failed. It contends that variation in the outcomes of vanguard state resistance to sovereignty violation can best be explained by the level of interest convergence between the ASEAN vanguard state and designated external actors. Evidence is provided to support the hypothesis that in 1999, ASEAN’s failure to resist violations of Indonesia’s sovereignty was a consequence of low interest convergence between Indonesia and the external powers. Conversely, in 1975, ASEAN’s ability to resist violations of Indonesia’s sovereignty was a consequence of high interest convergence between Indonesia and the external powers. As the vanguard state, Indonesia was able to apply pressure on the ASEAN states and obtain unanimous support for its East Timor policy in both 1975 and 1999. However, the key factor explaining the variance in outcomes in both time periods resides in the critical role played by external actors. This view represents a serious challenge to much of the existing scholarship that emphasises ASEAN’s ability to defend regional autonomy. As these cases attempt to show, ASEAN autonomy is much more contingent than portrayed in the existing literature.
Keywords: ASEAN, East Timor, intervention, sovereignty
Procedia PDF Downloads 358
1049 Modeling Geogenic Groundwater Contamination Risk with the Groundwater Assessment Platform (GAP)
Authors: Joel Podgorski, Manouchehr Amini, Annette Johnson, Michael Berg
Abstract:
One-third of the world’s population relies on groundwater for its drinking water. Natural geogenic arsenic and fluoride contaminate ~10% of wells. Prolonged exposure to high levels of arsenic can result in various internal cancers, while high levels of fluoride cause dental fluorosis and crippling skeletal fluorosis. In poor urban and rural settings, providing drinking water free of geogenic contamination can be a major challenge. In order to apply limited resources efficiently when testing wells, water resource managers need to know where geogenically contaminated groundwater is likely to occur. The Groundwater Assessment Platform (GAP) fulfills this need by providing state-of-the-art global arsenic and fluoride contamination hazard maps as well as enabling users to create their own groundwater quality models. The global risk models were produced by logistic regression of arsenic and fluoride measurements using predictor variables comprising various soil, geological, and climate parameters. The maps display the probability of encountering concentrations of arsenic or fluoride exceeding the World Health Organization’s (WHO) stipulated concentration limits of 10 µg/L or 1.5 mg/L, respectively. In addition to a reconsideration of the relevant geochemical settings, these second-generation maps represent a great improvement over the previous risk maps due to a significant increase in data quantity and resolution. For example, there is a 10-fold increase in the number of measured data points, and the resolution of predictor variables is generally 60 times greater. These same predictor variable datasets are available on the GAP platform for visualization as well as for use with a modeling tool. The latter requires that users upload their own concentration measurements and select the predictor variables they wish to incorporate in their models. Users can also upload additional predictor variable datasets, either as features or coverages. Such models can improve upon the global models already supplied, since (a) users may be able to use their own, more detailed datasets of measured concentrations, and (b) the various processes leading to arsenic and fluoride groundwater contamination can be isolated more effectively on a smaller scale, resulting in a more accurate model. All maps, including user-created risk models, can be downloaded as PDFs. Users can also share data securely and collaborate through the creation of communities. In summary, GAP provides users with the means to reliably and efficiently produce models specific to their region of interest by making available the latest datasets of predictor variables along with the necessary modeling infrastructure.
Keywords: arsenic, fluoride, groundwater contamination, logistic regression
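The modelling step described above lends itself to a brief illustration. The minimal sketch below fits a logistic regression to synthetic well data to estimate the probability of exceeding the WHO arsenic limit; the predictor variables, coefficients, and all data are hypothetical placeholders, not the GAP datasets.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
# Hypothetical environmental predictors for n wells
X = np.column_stack([
    rng.normal(7.0, 0.8, n),    # soil pH
    rng.normal(900, 250, n),    # mean annual precipitation (mm)
    rng.normal(30, 15, n),      # aquifer sediment age proxy
])
# Hypothetical ground truth: exceedance more likely at high pH / low rainfall
logit = 1.5 * (X[:, 0] - 7.0) - 0.002 * (X[:, 1] - 900) + rng.normal(0, 1, n)
y = (logit > 0).astype(int)     # 1 = concentration exceeds WHO limit

model = LogisticRegression().fit(X, y)
p_exceed = model.predict_proba(X[:5])[:, 1]
print("P(arsenic > 10 ug/L) for first 5 wells:", p_exceed.round(2))
```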
Procedia PDF Downloads 348
1048 Semiotics of the New Commercial Music Paradigm
Authors: Mladen Milicevic
Abstract:
This presentation will address how the statistical analysis of digitized popular music influences music creation and emotionally manipulates consumers. Furthermore, it will deal with the semiological aspect of the uniformization of musical taste in order to predict the potential revenues generated by popular music sales. In the USA, we live in an age where most popular music (i.e., music that generates substantial revenue) has been digitized. It is safe to say that almost everything produced in the last 10 years is already digitized (available on iTunes, Spotify, YouTube, or some other platform). Depending on marketing viability and its potential to generate additional revenue, most of the “older” music is still being digitized. Once music is turned into a digital audio file, it can be computer-analyzed in all kinds of respects, and the same goes for the lyrics, because they also exist as a digital text file to which any kind of NCapture-style analysis may be applied. So, by employing statistical examination of different popular music metrics such as tempo, form, pronouns, introduction length, song length, archetypes, subject matter, and repetition of title, the commercial result may be predicted. Polyphonic HMI (Human Media Interface) introduced the concept of the hit song science computer program in 2003. The company asserted that machine learning could create a music profile to predict hit songs from its audio features. Thus, it has been established that a successful pop song must: have 100 bpm or more; have an 8-second intro; use the pronoun 'you' within 20 seconds of the start of the song; hit the middle-eight bridge between 2 minutes and 2 minutes 30 seconds; average 7 repetitions of the title; and create an expectation and fulfill it in the title. A country song must: have 100 bpm or less for a male artist; have a 14-second intro; use the pronoun 'you' within the first 20 seconds of the intro; have a middle-eight bridge between 2 minutes and 2 minutes 30 seconds; have 7 repetitions of the title; and create an expectation and fulfill it within 60 seconds. This approach to commercial popular music minimizes human influence when it comes to which “artist” a record label is going to sign and market. Twenty years ago, music experts in the A&R (Artists and Repertoire) departments of the record labels made personal aesthetic judgments based on their extensive experience in the music industry. Now, computer music-analyzing programs are replacing them in an attempt to minimize the investment risk of panicking record labels, in an environment where nobody can predict the future of the recording industry. The impact on consumers’ taste, filtered through the narrow bottleneck of the above-mentioned music selection by the record labels, has created some very peculiar effects, not only on the taste of popular music consumers but also on the creative chops of music artists. The meaning of this semiological shift is the main focus of this research and paper presentation.
Keywords: music, semiology, commercial, taste
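For illustration only, the listed pop-song criteria can be expressed as a simple rule checker. In the sketch below the field names and the example song are hypothetical; the thresholds merely restate the abstract's description of the Hit Song Science claims.

```python
# Each rule maps a song's metadata dict to True/False (hypothetical fields)
POP_RULES = {
    "tempo": lambda s: s["bpm"] >= 100,
    "intro": lambda s: s["intro_sec"] <= 8,
    "early_you": lambda s: s["first_you_sec"] <= 20,
    "bridge": lambda s: 120 <= s["bridge_sec"] <= 150,  # 2:00-2:30
    "title_reps": lambda s: s["title_repetitions"] >= 7,
}

def score_song(song: dict) -> float:
    """Fraction of the listed commercial criteria the song satisfies."""
    hits = sum(rule(song) for rule in POP_RULES.values())
    return hits / len(POP_RULES)

example = {"bpm": 104, "intro_sec": 8, "first_you_sec": 12,
           "bridge_sec": 135, "title_repetitions": 7}
print(f"Criteria met: {score_song(example):.0%}")
```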
Procedia PDF Downloads 393
1047 Modeling Engagement with Multimodal Multisensor Data: The Continuous Performance Test as an Objective Tool to Track Flow
Authors: Mohammad H. Taheri, David J. Brown, Nasser Sherkat
Abstract:
Engagement is one of the most important factors in determining successful outcomes and deep learning in students. Existing approaches to detecting student engagement involve periodic human observations that are subject to inter-rater reliability. Our solution uses real-time multimodal multisensor data, labeled by objective performance outcomes, to infer the engagement of students. The study involved four students with a combined diagnosis of cerebral palsy and a learning disability who took part in a 3-month trial over 59 sessions. Multimodal multisensor data were collected while they participated in a continuous performance test. Eye gaze, electroencephalogram, body pose, and interaction data were used to create a model of student engagement through objective labeling from the continuous performance test outcomes. To achieve this, a new type of continuous performance test, the Seek-X type, is introduced. Nine features were extracted, including high-level handpicked compound features. Using leave-one-out cross-validation, a series of machine learning approaches were evaluated. Overall, the random forest classification approach achieved the best results: 93.3% classification accuracy for engagement and 42.9% accuracy for disengagement. We compared these results to outcomes from different models: AdaBoost, decision tree, k-nearest neighbor, naïve Bayes, neural network, and support vector machine. We showed that the multisensor approach achieved higher accuracy than features from any reduced set of sensors, and that high-level handpicked features can improve classification accuracy in every sensor mode. Our approach is robust to both sensor fallout and occlusions. The single most important sensor feature for classifying engagement and distraction was shown to be eye gaze. We have thus shown that we can accurately predict the level of engagement of students with learning disabilities in real time, without relying on inter-rater reliability, periodic human observation, or a single mode of sensor input. This will help teachers design interventions for a heterogeneous group of students, where teachers cannot possibly attend to each of their individual needs. Our approach can be used to identify those with the greatest learning challenges so that all students are supported to reach their full potential.
Keywords: affective computing in education, affect detection, continuous performance test, engagement, flow, HCI, interaction, learning disabilities, machine learning, multimodal, multisensor, physiological sensors, student engagement
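A minimal sketch of the evaluation pipeline described above follows: a random forest classifier scored with leave-one-out cross-validation. The features and labels are synthetic stand-ins, not the study's sensor data, and the hyperparameters are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(42)
n_samples, n_features = 59, 9        # 59 sessions, 9 extracted features
X = rng.normal(size=(n_samples, n_features))
y = rng.integers(0, 2, n_samples)    # 1 = engaged, 0 = disengaged (synthetic)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=LeaveOneOut())
print(f"Leave-one-out accuracy: {scores.mean():.1%}")

# Feature importances indicate which sensor mode dominates (eye gaze in the
# study); here they are meaningless because the data is random.
clf.fit(X, y)
print("Feature importances:", clf.feature_importances_.round(2))
```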
Procedia PDF Downloads 94
1046 Quantifying the Effects of Canopy Cover and Cover Crop Species on Water Use Partitioning in Micro-Sprinkler Irrigated Orchards in South Africa
Authors: Zanele Ntshidi, Sebinasi Dzikiti, Dominic Mazvimavi
Abstract:
South Africa is a dry country, yet it is ranked as the 8th largest exporter of fresh apples (Malus domestica) globally. The prime apple-producing regions are in the Eastern and Western Cape Provinces of the country, where all the fruit is grown under irrigation. Climate change models predict increasingly drier future conditions in these regions, and the frequency and severity of droughts are expected to increase. For the sustainability and growth of the fruit industry, it is important to minimize non-beneficial water losses from the orchard floor. The aims of this study were, firstly, to compare the water use of cover crop species used in South African orchards, for which there is currently no information, and secondly, to investigate how orchard water use (evapotranspiration) is partitioned into beneficial (tree transpiration) and non-beneficial (orchard floor evaporation) water uses in micro-sprinkler irrigated orchards with different canopy covers. This information is important in order to explore opportunities to minimize non-beneficial water losses. Six cover crop species (four exotic and two indigenous) were grown in 2 L pots in a greenhouse. Cover crop transpiration was measured using the gravimetric method on clear days. To establish how water use was partitioned in orchards, evapotranspiration (ET) was measured using an open-path eddy covariance system, while tree transpiration was measured hourly throughout the season (October to June) on six trees per orchard using the heat ratio sap flow method. On selected clear days, soil evaporation was measured hourly from sunrise to sunset using six micro-lysimeters situated at different wet/dry and sun/shade positions on the orchard floor. Transpiration of cover crops was measured using miniature (2 mm Ø) stem heat balance sap flow gauges. The greenhouse study showed that exotic cover crops had significantly higher (p < 0.01) average transpiration rates (~3.7 L/m²/d) than the indigenous species (~2.2 L/m²/d). In young non-bearing orchards, orchard floor evaporative fluxes accounted for more than 60% of orchard ET, while this ranged from 10 to 30% in mature orchards with a high canopy cover. While exotic cover crops are preferred by most farmers, this study shows that they use larger quantities of water than indigenous species, which in turn contributes to a larger orchard floor evaporation flux. In young orchards, non-beneficial losses can be minimized by adopting drip or short-range micro-sprinkler methods that reduce the wetted soil fraction, thereby conserving water.
Keywords: evapotranspiration, sap flow, soil evaporation, transpiration
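The partitioning described above reduces to simple water-balance arithmetic, sketched below with hypothetical daily values: orchard floor evaporation is taken as the residual of eddy-covariance ET minus sap-flow transpiration.

```python
# Hypothetical daily values (mm/day); the method, not the numbers, follows
# the abstract: E_floor = ET - T.
et_mm_day = 4.0     # eddy-covariance evapotranspiration
t_mm_day = 3.2      # sap-flow tree transpiration

e_floor = et_mm_day - t_mm_day
floor_fraction = e_floor / et_mm_day
print(f"Orchard floor evaporation: {e_floor:.1f} mm/day "
      f"({floor_fraction:.0%} of ET)")
# A mature, high-canopy orchard would sit near the 10-30% range reported
# above; a young non-bearing orchard could exceed 60%.
```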
Procedia PDF Downloads 388
1045 Poly(propylene fumarate) Copolymers with Phosphonic Acid-based Monomers Designed as Bone Tissue Engineering Scaffolds
Authors: Görkem Cemali, Avram Aruh, Gamze Torun Köse, Erde Can Şafak
Abstract:
In order to heal bone disorders, conventional methods involving the use of autologous and allogeneic bone grafts or permanent implants have certain disadvantages, such as limited supply, disease transmission, or adverse immune response. A biodegradable material that acts as structural support to the damaged bone area and serves as a scaffold that enhances bone regeneration and guides bone formation is one desirable solution. Poly(propylene fumarate) (PPF), an unsaturated polyester that can be copolymerized with appropriate vinyl monomers to give biodegradable network structures, is a promising candidate polymer for bone tissue engineering scaffolds. In this study, hydroxyl-terminated PPF was synthesized and thermally cured with vinyl phosphonic acid (VPA) and diethyl vinyl phosphonate (VPES) in the presence of the radical initiator benzoyl peroxide (BP), with varying co-monomer weight ratios (10-40 wt%). In addition, the synthesized PPF was cured with the VPES comonomer at body temperature (37 °C) in the presence of the BP initiator, an N,N-dimethyl-para-toluidine catalyst, and varying amounts of beta-tricalcium phosphate (0-20 wt% β-TCP) as filler via radical polymerization, to prepare composite materials that can be used in injectable forms. The thermomechanical properties, compressive properties, hydrophilicity, and biodegradability of the PPF/VPA and PPF/VPES copolymers were determined and analyzed with respect to copolymer composition. Biocompatibility of the resulting polymers and their composites was determined by the MTS assay, and osteoblast activity was explored with von Kossa, alkaline phosphatase, and osteocalcin activity analyses; the effects of VPA and VPES comonomer composition on these properties were investigated. Thermally cured PPF/VPA and PPF/VPES copolymers with different compositions exhibited compressive modulus and strength values in the wide ranges of 10-836 MPa and 14-119 MPa, respectively. MTS assay studies showed that the majority of the tested compositions were biocompatible, and the overall results indicated that PPF/VPA and PPF/VPES network polymers show significant potential for application as bone tissue engineering scaffolds, where varying the PPF and co-monomer ratio provides adjustable and controllable properties of the end product. The body-temperature-cured PPF/VPES/β-TCP composites exhibited significantly lower compressive modulus and strength values than the thermally cured PPF/VPES copolymers and were therefore found to be more useful as scaffolds for cartilage tissue engineering applications.
Keywords: biodegradable, bone tissue, copolymer, poly(propylene fumarate), scaffold
Procedia PDF Downloads 166
1044 Predicting Blockchain Technology Installation Cost in Supply Chain System through Supervised Learning
Authors: Hossein Havaeji, Tony Wong, Thien-My Dao
Abstract:
1. Research Problems and Research Objectives: A Blockchain Technology-enabled Supply Chain System (BT-enabled SCS) uses BT to drive SCS transparency, security, durability, and process integrity, as SCS data is not always visible, available, or trusted. The costs of operating BT in the SCS are a common problem in several organizations. These costs must be estimated, as they can impact existing cost control strategies. To account for system and deployment costs, the following hurdle must be overcome: the costs of developing and running a BT in an SCS are not yet clear in most cases. Many industries aiming to use BT pay special attention to BT installation cost, which has a direct impact on the total cost of the SCS. Predicting BT installation cost in the SCS may help managers decide whether BT offers an economic advantage. The first purpose of the research is to identify the main BT installation cost components in the SCS needed for deeper cost analysis, and to categorize the main groups of cost components in more detail for use in the prediction process. The second objective is to determine a suitable supervised learning technique for predicting the costs of developing and running BT in an SCS in a particular case study. The last aim is to investigate how the running BT cost contributes to the total cost of the SCS. 2. Work Performed: Applied successfully in various fields, supervised learning is a method of framing the data, preprocessing it, and training the chosen model; it is a learning model that predicts an outcome measurement for unseen input data based on a set of labeled training examples. The following steps are conducted to pursue the objectives of this study. The first step is a literature review to identify the different cost components of BT installation in an SCS. Based on the literature review, supervised learning methods suitable for BT installation cost prediction in an SCS are chosen. According to the literature, supervised learning algorithms that provide a powerful tool to classify BT installation components and predict BT installation cost include Support Vector Regression (SVR), the Back Propagation (BP) neural network, and the Artificial Neural Network (ANN). Choosing a case study to feed data into the models constitutes the third step. Finally, the best predictive performance for finding the minimum BT installation costs in the SCS will be proposed. 3. Expected Results and Conclusion: This study proposes a cost prediction of BT installation in an SCS with the help of supervised learning algorithms. A case study in the field of BT-enabled SCS will first be selected, and supervised learning algorithms will then be used to predict BT installation cost in the SCS. The study continues by identifying the best predictive performance for developing and running BT in an SCS. Finally, the paper will be presented at the conference.
Keywords: blockchain technology, blockchain technology-enabled supply chain system, installation cost, supervised learning
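As a hedged illustration of the SVR option named above, the sketch below regresses a synthetic total installation cost on made-up cost-component features. The features, cost figures, kernel, and scaling choice are all assumptions for illustration, not results or data from the study.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 200
# Hypothetical cost-component features per deployment
X = np.column_stack([
    rng.uniform(1, 50, n),     # number of network nodes
    rng.uniform(10, 500, n),   # integration effort (person-days)
    rng.uniform(0, 1, n),      # share of legacy systems to adapt
])
# Synthetic total installation cost (k$) with noise
cost = 20 + 1.5 * X[:, 0] + 0.8 * X[:, 1] + 40 * X[:, 2] + rng.normal(0, 5, n)

# Scale features before the RBF-kernel SVR, then fit and predict
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100))
model.fit(X, cost)
print("Predicted installation cost (k$):", model.predict(X[:3]).round(1))
```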
Procedia PDF Downloads 122
1043 Linguistic Competence Analysis and the Development of Speaking Instructional Material
Authors: Felipa M. Rico
Abstract:
Linguistic oral competence plays a vital role in attaining effective communication. Since English is considered a universally used language and a high-demand skill in the workplace, mastery is the expected output from learners. To achieve this, learners should be given integrated, differentiated tasks that help them develop and strengthen the expected skills. This study aimed to develop supplementary speaking instructional material to enhance the English linguistic competence of Grade 9 students in the areas of pronunciation, intonation and stress, voice projection, diction, and fluency. A descriptive analysis was utilized to analyze the students’ level of speaking performance in order to employ appropriate strategies. There were two sets of respondents: 178 Grade 9 students selected through stratified random sampling, and English teachers who evaluated the usefulness of the devised teaching materials. A teacher conducted a speaking test, and activities were employed to analyze the speaking needs of students. Observation and recordings were also used to evaluate the students’ performance. The findings revealed that the students’ English pronunciation was slightly unclear at times but generally fair. There were lapses in intonation and stress, rated as moderate overall, because of interference from other languages. In terms of voice projection, students had an erratic, high-volume pitch. For diction, the students’ ability to produce comprehensible language was limited, and as to fluency, the choice of vocabulary and use of structure were severely limited. Based on the analysis of the students’ speaking needs, the supplementary material was devised on the basis of Nunan’s IM model, incorporating contexts of daily life and global work settings, considering the principle that language is best learned in actual, meaningful situations. To widen skill mastery, a rich learning environment filled with a variety of instructional materials tends to foster faster acquisition of the requisite skills for sustained learning and development. The role of IM is to encourage information to stick in learners’ minds, as what is seen is understood more than what is heard. Teachers said they found the IM “very useful.” This implies that English teachers could adopt the materials to improve the speaking skills of students. Further, teachers should provide varied opportunities for students to get involved in real-life situations where they can take turns asking and answering questions and share information related to the activities. This would minimize anxiety among students in the use of the English language.
Keywords: diction, fluency, intonation, instructional materials, linguistic competence
Procedia PDF Downloads 241
1042 The Re-Emergence of Russia Foreign Policy (Case Study: Middle East)
Authors: Maryam Azish
Abstract:
Russia, as an emerging global player in recent years, has carved out a special place for itself in the Middle East. Despite all the challenges it has faced over the years, it has always framed its presence in various fields with a strategy that places its maneuvering power at the level of competition, and even confrontation, with the United States. Its current approach is therefore considered important, as it is an influential actor in the Middle East. After the collapse of the Soviet Union, when the Russians withdrew completely from the Middle East, the regional scene was left almost entirely uncontested to the Americans. With the start of the US-led wars in Iraq and Afghanistan and the subsequent developments that led to US military and political setbacks, a new chapter in regional security opened, in which ISIL and Taliban terrorism accompanied the Arab Spring in destabilizing the Middle East. Because of this, the Americans took every opportunity to strengthen their military presence. Iraq, Syria, and Afghanistan have been the three main arenas where terrorism took shape, and the countries of the region have each reacted to this evil phenomenon accordingly. The West dealt with the phenomenon on a case-by-case basis amid the fluid situation in the Arab countries and the wider region. Russian President Vladimir Putin accused the US of falling asleep in the face of ISIS and terrorism in Syria. In fact, this was an opportunity for the Russians to revive their presence in Syria. This article suggests that utilizing the politics of recognition alongside constructivist theory offers a better understanding of Russia’s endeavors to endorse its international position. Accordingly, Russia’s distinctiveness and its ambitions for great-power status have played a vital role in shaping national interests and, subsequently, foreign policy, particularly in the Putin era. The focal claim of the paper is that Russia’s foreign policy cannot be adequately scrutinized with realist methods. Consequently, with the aim of filling the prevailing vacuum, this study exploits the politics of recognition in the context of constructivism to examine Russia’s foreign policy in the Middle East. The results of this paper show that the key aim of Russian foreign policy discourse, accompanied by increasing power and wealth, is to secure recognition and reinstate Russia’s position as a great power in the global system. The Syrian crisis has created an opportunity for Russia to consolidate its position in the evolving global and regional order, after ages of dynamic and prevalent presence in the Middle East, as well as to contradict US unilateralism. In the meantime, the writer argues that the question of the West’s recognition of Russia’s position in the global system has played a foremost role in serving its national interests.
Keywords: constructivism, foreign policy, Middle East, Russia, regionalism
Procedia PDF Downloads 149
1041 Minding the Gap: Consumer Contracts in the Age of Online Information Flow
Authors: Samuel I. Becher, Tal Z. Zarsky
Abstract:
The digital world is now part of our DNA. The ways in which e-commerce, human behavior, and law interact and affect one another are rapidly and significantly changing. Among other things, the internet equips consumers with a variety of platforms to share information in a volume we could not have imagined before. As part of this development, online information flows allow consumers to learn about businesses and their contracts in an efficient and quick manner. Consumers can become informed by the impressions that other, experienced consumers share and spread. In other words, consumers may familiarize themselves with the contents of contracts through the experiences of other consumers. Online and offline, the relationships between consumers and businesses are most frequently governed by consumer standard form contracts. For decades, such contracts have been assumed to be one-sided and biased against consumers. Consumer law seeks to alleviate this bias and empower consumers. Legislatures, consumer organizations, scholars, and judges are constantly looking for clever ways to protect consumers from unscrupulous firms and unfair behaviors. While consumer-business relationships are theoretically administered by standardized contracts, firms do not always follow these contracts in practice. At times, there is a significant disparity between what the written contract stipulates and what consumers experience de facto. That is, there is a crucial gap (“the Gap”) between how firms draft their contracts, on the one hand, and how firms actually treat consumers, on the other. Interestingly, the Gap is frequently manifested by deviation from the written contract in favor of consumers. In other words, firms often exercise a lenient approach despite the stringent written contracts they draft. This essay examines whether, counter-intuitively, policymakers should add firms’ leniency to the growing list of suspicious firm behaviors. At first glance, firms should be allowed, if not encouraged, to exercise leniency. Many legal regimes are looking for ways to cope with unfair contract terms in consumer contracts. Naturally, therefore, consumer law should enable, if not encourage, firms’ lenient practices. Firms’ willingness to deviate from their strict contracts in order to benefit consumers seems like a sensible approach that, apparently, should not be second-guessed. However, at times, online tools, firms’ behaviors, and human psychology result in a toxic mix. Beneficial and helpful online information should be treated with due respect, as it may occasionally have surprising and harmful qualities. In this essay, we illustrate that technological changes turn the Gap into a key component in consumers’ understanding, or misunderstanding, of consumer contracts. In short, a Gap may distort consumers’ perceptions and undermine rational decision-making. Consequently, this essay explores whether, counter-intuitively, consumer law should sanction firms that create a Gap and use it. It examines when firms’ leniency should be considered manipulative or exercised in bad faith. It then investigates whether firms should be allowed to enforce the written contract even if they have deliberately and consistently deviated from it.
Keywords: consumer contracts, consumer protection, information flow, law and economics, law and technology, paper deal v firms' behavior
Procedia PDF Downloads 198
1040 Weapon-Being: Weaponized Design and Object-Oriented Ontology in Hypermodern Times
Authors: John Dimopoulos
Abstract:
This proposal attempts a refabrication of Heidegger’s classic thing-being and object-being analysis in order to provide better ontological tools for understanding contemporary culture, technology, and society. In his work, Heidegger sought to understand and comment on the problem of technology in an era of rampant innovation and increased perils for society and the planet. Today we seem to be at another such crossroads, following postmodernity, during which the dreams and dangers of modernity were augmented by the critical speculations of the post-war era. The new era we are now living in, referred to as hypermodernity by researchers in fields such as architecture and cultural theory, is defined by the horizontal implementation of digital technologies, cybernetic networks, and mixed reality. Technology today is rapidly approaching a turning point, namely the point of no return for humanity’s supervision over its creations. The techno-scientific civilization of the 21st century creates a series of problems that are progressively more difficult and complex to solve and impossible to ignore: climate change, data safety, cyber depression, and digital stress being some of the most prevalent. Humans often have no option other than to address technology-induced problems with even more technology, as in the case of neural networks, machine learning, and AI, thus widening the gap between creating technological artifacts and understanding their broad impact and possible future development. As all technical disciplines, and particularly design, become enmeshed in a matrix of digital hyper-objects, a conceptual toolbox that allows us to handle the new reality becomes more and more necessary. Weaponized design, prevalent in many fields such as social and traditional media, urban planning, industrial design, advertising, and the internet in general, hints toward an increase in conflicts. These conflicts between tech companies, stakeholders, and users, with implications for politics, work, education, and production, as apparent in the cases of Amazon workers’ strikes, Donald Trump’s 2016 campaign, the Facebook and Microsoft data scandals, and more, are often non-transparent to the wider public’s eye, thus consolidating new elites and technocratic classes and making the public scene less and less democratic. The new category proposed, weapon-being, is outlined with respect to the basic function of reducing complexity, subtracting materials, actants, and parameters, not strictly in favor of a humanistic reorientation, but within a more inclusive ontology of objects and subjects. Utilizing insights from Object-Oriented Ontology (OOO) and its schematization of technological objects, an outline for a radical ontology of technology is approached.
Keywords: design, hypermodernity, object-oriented ontology, weapon-being
Procedia PDF Downloads 152