Search results for: advanced persistent threat
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3636

936 Distributional and Developmental Analysis of PM2.5 in Beijing, China

Authors: Alexander K. Guo

Abstract:

PM2.5 poses a serious threat to people’s health and the environment and is an issue of major concern in Beijing, brought to the attention of the government by the media. In addition, both the United States Embassy in Beijing and the government of China have increased monitoring of PM2.5 in recent years, and have made real-time data available to the public. This report utilizes hourly historical data (2008-2016) from the U.S. Embassy in Beijing for the first time. The first objective was to fit probability distributions to the data to better predict the number of days exceeding the standard, and the second was to uncover any yearly, seasonal, monthly, daily, and hourly patterns and trends that may arise, to better inform air control policy. In these data, 66,650 hours and 2687 days provided valid data. Lognormal, gamma, and Weibull distributions were fit to the data through an estimation of parameters. The Chi-squared test was employed to compare the actual data with the fitted distributions. The data were used to uncover trends, patterns, and improvements in PM2.5 concentration over the period with valid data, as well as over specific periods that received large amounts of media attention, which were analyzed to gain a better understanding of the causes of air pollution. The data show a clear indication that Beijing’s air quality is unhealthy, with an average of 94.07 µg/m³ across all 66,650 hours with valid data. It was found that no distribution fit the entire dataset of all 2687 days well, but each of the three above distribution types was optimal in at least one of the yearly data sets, with the lognormal distribution found to fit recent years better. An improvement in air quality beginning in 2014 was discovered, with the first five months of 2016 reporting an average PM2.5 concentration 23.8% lower than the average of the same period in all years, perhaps the result of various new pollution-control policies.
It was also found that the winter and fall months contained more days in both the good and extremely polluted categories, leading to a higher average but a comparable median in these months. Additionally, the evening hours, especially in the winter, reported much higher PM2.5 concentrations than the afternoon hours, possibly due to the prohibition of trucks in the city in the daytime and the increased use of coal for heating in the colder months when residents are home in the evening. Lastly, through analysis of special intervals that attracted media attention for either unnaturally good or bad air quality, the government’s temporary pollution control measures, such as more intensive road-space rationing and factory closures, are shown to be effective. In summary, air quality in Beijing is improving steadily and does follow standard probability distributions to an extent, but still needs improvement. The analysis will be updated when new data become available.
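A minimal sketch of the fitting-and-testing step described above, using synthetic data in place of the embassy record (the sample size, lognormal parameters, and ten-bin chi-squared layout are illustrative assumptions, not the study's):

```python
import numpy as np
from scipy import stats

# Synthetic stand-in for the study's 2,687 daily PM2.5 means (µg/m³);
# the values below are illustrative, not the embassy data.
rng = np.random.default_rng(0)
daily_pm25 = rng.lognormal(mean=4.2, sigma=0.7, size=500)

candidates = {"lognormal": stats.lognorm,
              "gamma": stats.gamma,
              "Weibull": stats.weibull_min}

for name, dist in candidates.items():
    params = dist.fit(daily_pm25, floc=0)  # concentrations are non-negative
    # Chi-squared goodness of fit: compare observed vs expected bin counts
    # over ten equal-probability bins.
    edges = np.quantile(daily_pm25, np.linspace(0, 1, 11))
    observed, _ = np.histogram(daily_pm25, bins=edges)
    expected = len(daily_pm25) * np.diff(dist.cdf(edges, *params))
    chi2 = ((observed - expected) ** 2 / expected).sum()
    print(f"{name}: chi-squared = {chi2:.1f}")
```

In the study's workflow, the same comparison would be repeated per year, selecting the candidate with the smallest chi-squared statistic.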

Keywords: Beijing, distribution, patterns, pm2.5, trends

Procedia PDF Downloads 242
935 Dairy Wastewater Treatment by Electrochemical and Catalytic Method

Authors: Basanti Ekka, Talis Juhna

Abstract:

Dairy industrial effluents originating from typical processing activities are composed of various organic and inorganic constituents, including proteins, fats, inorganic salts, antibiotics, detergents, sanitizers, pathogenic viruses, bacteria, etc. These contaminants are harmful not only to human beings but also to aquatic flora and fauna. Because these effluents contain such a broad range of contaminants, the specific targeted removal methods available in the literature are not viable solutions on the industrial scale. Therefore, in this on-going research, a series of coagulation, electrochemical, and catalytic methods will be employed. The bulk coagulation and electrochemical methods can remove most of the contaminants, but some harmful chemicals may slip through; therefore, specifically designed and synthesized catalysts will be employed for the removal of targeted chemicals. In the context of Latvian dairy industries, work is currently in progress on the characterization of dairy effluents by total organic carbon (TOC), Inductively Coupled Plasma Mass Spectrometry (ICP-MS)/Inductively Coupled Plasma Optical Emission Spectrometry (ICP-OES), High-Performance Liquid Chromatography (HPLC), Gas Chromatography-Mass Spectrometry (GC-MS), and Mass Spectrometry. After careful evaluation of the dairy effluents, a cost-effective natural coagulant will be employed prior to advanced electrochemical technologies such as electrocoagulation and electro-oxidation as a secondary treatment process. Finally, graphene oxide (GO) based hybrid materials will be used for post-treatment of dairy wastewater, as graphene oxide has been widely applied in fields such as environmental remediation and energy production due to the presence of various oxygen-containing groups. Modified GO will be used as a catalyst for the removal of contaminants remaining after the electrochemical process.

Keywords: catalysis, dairy wastewater, electrochemical method, graphene oxide

Procedia PDF Downloads 140
934 Sociology Perspective on Emotional Maltreatment: Retrospective Case Study in a Japanese Elementary School

Authors: Nozomi Fujisaka

Abstract:

This sociological case study analyzes a sequence of student maltreatment in an elementary school in Japan, based on narratives from former students. Among the various forms of student maltreatment, emotional maltreatment has received less attention. One reason for this is that emotional maltreatment is often considered part of education and is difficult to capture in surveys. To discuss the challenge of recognizing emotional maltreatment, it is necessary to consider the social background in which student maltreatment occurs. Therefore, from the perspective of the sociology of education, this study aims to clarify the process through which emotional maltreatment came to be accepted by students within a Japanese classroom. The focus of this study is a series of educational interactions by a homeroom teacher with 11- or 12-year-old students at a small public elementary school approximately 10 years ago. The research employs retrospective narrative data collected through interviews and autoethnography. The semi-structured interviews, lasting one to three hours each, were conducted with 11 young people who were enrolled in the same class as the researcher during their time in elementary school. Autoethnography, as a critical research method, contributes to existing theories and studies by providing a critical representation of the researcher's own experiences, and it enables researchers to collect detailed data that are often difficult to verbalize in interviews. These research methods are well-suited for this study, which aims to shift the focus from teachers' educational intentions to students' perspectives and gain a deeper understanding of student maltreatment. The research results suggest a pattern of emotional maltreatment that is challenging to differentiate from education. In this study's case, the teacher displayed calm and kind behavior toward students after a threat and an explosion of anger.
Former students frequently mentioned this behavior of the teacher and perceived emotional maltreatment as part of education. It was not uncommon for former students to offer positive evaluations of the teacher despite experiencing emotional distress. These findings are analyzed and discussed in conjunction with the deschooling theory and the cycle of violence theory. The deschooling theory provides a sociological explanation for how emotional maltreatment can be overlooked in society. The cycle of violence theory, originally developed within the context of domestic violence, explains how violence between romantic partners can be tolerated due to prevailing social norms. Analyzing the case in association with these two theories highlights the characteristics of teachers' behaviors that rationalize maltreatment as education and hinder students from escaping emotional maltreatment. This study deepens our understanding of the causes of student maltreatment and provides a new perspective for future qualitative and quantitative research. Furthermore, since this research is based on the sociology of education, it has the potential to expand research in the fields of pedagogy and sociology, in addition to psychology and social welfare.

Keywords: emotional maltreatment, education, student maltreatment, Japan

Procedia PDF Downloads 74
933 Developing Digital Skills in Museum Professionals through Digital Education: International Good Practices and Effective Learning Experiences

Authors: Antonella Poce, Deborah Seid Howes, Maria Rosaria Re, Mara Valente

Abstract:

Creative industries education contexts, museum education in particular, generally place little emphasis on the use of new digital technologies and on the development of digital abilities and transversal skills. The spread of the Covid-19 pandemic has underlined the importance of these abilities and skills in cultural heritage education contexts: by gaining digital skills, museum professionals can improve their career opportunities through access to new distribution markets via internet access and e-commerce, new entrepreneurial tools, or new forms of digital expression added to their work. Moreover, the use of web, mobile, social, and analytical tools is becoming more and more essential in the heritage field, and in museums in particular, to face the challenges posed by the current worldwide health emergency. Recent studies highlight the need for stronger partnerships between the cultural and creative sectors, social partners, and education and training providers in order to equip these sectors with the combination of skills needed for creative entrepreneurship in a rapidly changing environment. Considering the above conditions, the paper presents different examples of digital learning experiences carried out in Italian and USA contexts with the aim of promoting digital skills in museum professionals. In particular, a quali-quantitative research study was conducted on two international postgraduate courses, “Advanced Studies in Museum Education” (2 years) and “Museum Education” (1 year), in order to identify the educational effectiveness of the online learning strategies used (e.g., OBL, digital storytelling, peer evaluation) for the development of digital skills and the acquisition of specific content. More than 50 museum professionals participating in the mentioned educational pathways took part in the learning activity, providing evaluation data useful for research purposes.

Keywords: digital skills, museum professionals, technology, education

Procedia PDF Downloads 171
932 Pulmonary Complication of Chronic Liver Disease and the Challenges Identifying and Managing Three Patients

Authors: Aidan Ryan, Nahima Miah, Sahaj Kaur, Imogen Sutherland, Mohamed Saleh

Abstract:

Pulmonary symptoms are a common presentation to the emergency department. Due to a lack of understanding of the underlying pathophysiology, chronic liver disease is not often considered a cause of dyspnoea. We present three patients who were admitted with significant respiratory distress secondary to hepatopulmonary syndrome, portopulmonary hypertension, and hepatic hydrothorax. The first is a 27-year-old male with a 6-month history of progressive dyspnoea. The patient developed severe type 1 respiratory failure with a PaO₂ of 6.3 kPa and was escalated to critical care, where he was managed with non-invasive ventilation to maintain oxygen saturation. An agitated saline contrast echocardiogram showed the presence of a possible shunt. A CT angiogram revealed significant liver cirrhosis, portal hypertension, and large paraesophageal varices. Ultrasound of the abdomen showed a coarse liver echo pattern and an enlarged spleen. Along with these imaging findings, his biochemistry demonstrated impaired synthetic liver function, with an elevated international normalized ratio (INR) of 1.4 and hypoalbuminaemia of 28 g/L. The patient was then transferred to a tertiary center for further management. Further investigations confirmed a shunt of 56%, and liver biopsy confirmed cirrhosis suggestive of alpha-1-antitrypsin deficiency. The findings were consistent with a diagnosis of hepatopulmonary syndrome, and the patient is awaiting a liver transplant. The second patient is a 56-year-old male with a 12-month history of worsening dyspnoea, jaundice, and confusion. His medical history included liver cirrhosis, portal hypertension, and grade 1 oesophageal varices secondary to significant alcohol excess. On admission, he developed type 1 respiratory failure with a PaO₂ of 6.8 kPa requiring 10 L of oxygen. CT pulmonary angiogram was negative for pulmonary embolism but showed evidence of chronic pulmonary hypertension, liver cirrhosis, and portal hypertension.
An echocardiogram revealed a grossly dilated right heart with reduced function, pulmonary and tricuspid regurgitation, and pulmonary artery pressures estimated at 78 mmHg. His biochemical markers showed impaired synthetic liver function, with an INR of 3.2 and albumin of 29 g/L, along with a raised bilirubin of 148 µmol/L. During his long admission, he was managed with diuretics with little improvement. After three weeks, he was diagnosed with portopulmonary hypertension and was commenced on terlipressin. He was successfully weaned off oxygen and discharged home. The third patient is a 61-year-old male who presented to the local ambulatory care unit for therapeutic paracentesis on a background of decompensated liver cirrhosis. On presentation, he complained of a 2-day history of worsening dyspnoea and a productive cough. Chest X-ray showed a large pleural effusion, increasing in size over the previous eight months, and his abdomen was visibly distended with ascitic fluid. Unfortunately, the patient deteriorated, developing a larger effusion along with an increase in oxygen demand, and passed away. In the absence of underlying cardiorespiratory disease, and in the presence of a persistent pleural effusion with underlying decompensated cirrhosis, he was diagnosed with hepatic hydrothorax. While each patient presented with dyspnoea, the cause and underlying pathophysiology differ significantly from case to case. By describing these complications, we hope to improve awareness and aid prompt and accurate diagnosis, which is vital for improving outcomes.

Keywords: dyspnoea, hepatic hydrothorax, hepatopulmonary syndrome, portopulmonary hypertension

Procedia PDF Downloads 118
931 Phytochemical and Antimicrobial Properties of Zinc Oxide Nanocomposites on Multidrug-Resistant E. coli Enzyme: In-vitro and in-silico Studies

Authors: Callistus I. Iheme, Kenneth E. Asika, Emmanuel I. Ugwor, Chukwuka U. Ogbonna, Ugonna H. Uzoka, Nneamaka A. Chiegboka, Chinwe S. Alisi, Obinna S. Nwabueze, Amanda U. Ezirim, Judeanthony N. Ogbulie

Abstract:

Antimicrobial resistance (AMR) is a major threat to the global health sector. Zinc oxide nanocomposites (ZnONCs), composed of zinc oxide nanoparticles and phytochemicals from Azadirachta indica aqueous leaf extract, were assessed for their physicochemical properties and for their in silico and in vitro antimicrobial activity against multidrug-resistant Escherichia coli enzymes. Gas chromatography coupled with mass spectrometry (GC-MS) analysis of the ZnONCs revealed the presence of twenty volatile phytochemical compounds, among which is scoparone. Characterization of the ZnONCs was done using ultraviolet-visible spectroscopy (UV-vis), energy dispersive X-ray spectroscopy (EDX), transmission electron microscopy (TEM), scanning electron microscopy (SEM), and X-ray diffraction (XRD). Dehydrogenase enzymes convert colorless 2,3,5-triphenyltetrazolium chloride to red triphenyl formazan (TPF), and the rate of formazan formation in the presence of ZnONCs is proportional to the enzyme activity. The color formed is extracted, its absorbance determined at 500 nm, and the percentage of enzyme activity calculated. To determine the bioactive components of the ZnONCs, characterize their binding to the enzymes, and evaluate the stability of the enzyme-ligand complexes, density functional theory (DFT) analysis, docking, and molecular dynamics simulations were employed, respectively. The results showed arrays of ZnONC nanorods with maximal absorption wavelengths of 320 nm and 350 nm, thermally stable over the temperature range of 423.77 to 889.69 °C. The in vitro study assessed the dehydrogenase-inhibitory properties of the ZnONCs, a conjugate of ZnONCs and ampicillin (ZnONCs-amp), the aqueous leaf extract of A. indica, and ampicillin (standard drug). The findings revealed that at a concentration of 500 μg/mL, 57.89% of the enzyme activity was inhibited by ZnONCs, compared to 33.33% and 21.05% by the standard drug (ampicillin) and the aqueous leaf extract of A. indica, respectively.
The inhibition of enzyme activity by the ZnONCs at 500 μg/mL was further enhanced to 89.74% by conjugating with ampicillin. The in silico study of the ZnONCs revealed scoparone as the most viable competitor of nicotinamide adenine dinucleotide (NAD⁺) for the coenzyme binding pocket on E. coli malate and histidinol dehydrogenase. From the findings, it can be concluded that the scoparone component of the nanocomposites, in synergy with the zinc oxide nanoparticles, inhibited E. coli malate and histidinol dehydrogenase by competitively binding to the NAD⁺ pocket, and that the conjugation of the ZnONCs with ampicillin further enhanced the antimicrobial efficiency of the nanocomposite against multidrug-resistant E. coli.
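The dehydrogenase assay described above reduces to a simple ratio of formazan absorbances. A hedged sketch with hypothetical absorbance values (none of the numbers below come from the study):

```python
# Hypothetical absorbance readings at 500 nm (TPF formazan); the values
# are illustrative, not the study's measurements.
control_abs = 0.76          # untreated E. coli, taken as 100% enzyme activity
treated_abs = {
    "ZnONCs": 0.32,
    "ZnONCs-amp": 0.08,
    "ampicillin": 0.51,
    "A. indica extract": 0.60,
}

def percent_inhibition(a_treated: float, a_control: float) -> float:
    """Formazan formation is proportional to dehydrogenase activity, so
    inhibition = (1 - A_treated / A_control) * 100."""
    return (1.0 - a_treated / a_control) * 100.0

for name, a in treated_abs.items():
    print(f"{name}: {percent_inhibition(a, control_abs):.1f} % inhibition")
```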

Keywords: antimicrobial resistance, dehydrogenase activities, E. coli, zinc oxide nanocomposites

Procedia PDF Downloads 37
930 Mapping and Mitigation Strategy for Flash Flood Hazards: A Case Study of Bishoftu City

Authors: Berhanu Keno Terfa

Abstract:

Flash floods are among the most dangerous natural disasters that pose a significant threat to human existence. They occur frequently and can cause extensive damage to homes, infrastructure, and ecosystems while also claiming lives. Although flash floods can happen anywhere in the world, their impact is particularly severe in developing countries due to limited financial resources, inadequate drainage systems, substandard housing options, lack of early warning systems, and insufficient preparedness. To address these challenges, a comprehensive study has been undertaken to analyze and map flood inundation using Geographic Information System (GIS) techniques by considering various factors that contribute to flash flood resilience and developing effective mitigation strategies. Key factors considered in the analysis include slope, drainage density, elevation, Curve Number, rainfall patterns, land-use/cover classes, and soil data. These variables were computed using ArcGIS software platforms, and data from the Sentinel-2 satellite image (with a 10-meter resolution) were utilized for land-use/cover classification. Additionally, slope, elevation, and drainage density data were generated from the 12.5-meter resolution of the ALOS Palsar DEM, while other relevant data were obtained from the Ethiopian Meteorological Institute. By integrating and regularizing the collected data through GIS and employing the analytic hierarchy process (AHP) technique, the study successfully delineated flash flood hazard zones (FFHs) and generated a suitable land map for urban agriculture. The FFH model identified four levels of risk in Bishoftu City: very high (2106.4 ha), high (10464.4 ha), moderate (1444.44 ha), and low (0.52 ha), accounting for 15.02%, 74.7%, 10.1%, and 0.004% of the total area, respectively. The results underscore the vulnerability of many residential areas in Bishoftu City, particularly the central areas that have been previously developed. 
Accurate spatial representation of flood-prone areas and potential agricultural zones is crucial for designing effective flood mitigation and agricultural production plans. The findings of this study emphasize the importance of flood risk mapping in raising public awareness, demonstrating vulnerability, strengthening financial resilience, protecting the environment, and informing policy decisions. Given the susceptibility of Bishoftu City to flash floods, it is recommended that the municipality prioritize urban agriculture adaptation, proper settlement planning, and drainage network design.
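The AHP weighting step described above can be sketched as follows; the pairwise-comparison matrix is hypothetical (the paper's actual judgments over its full set of factors are not reproduced here):

```python
import numpy as np

# Hypothetical Saaty-scale pairwise comparisons for four of the factors
# used in the paper: slope, drainage density, rainfall, land use/cover.
A = np.array([
    [1.0, 3.0, 2.0, 4.0],
    [1/3, 1.0, 1/2, 2.0],
    [1/2, 2.0, 1.0, 3.0],
    [1/4, 1/2, 1/3, 1.0],
])

# Principal-eigenvector method: factor weights from the dominant eigenvector.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

# Consistency ratio (random index RI = 0.90 for n = 4); CR < 0.1 is acceptable.
n = A.shape[0]
ci = (eigvals.real.max() - n) / (n - 1)
cr = ci / 0.90
print("weights:", weights.round(3), "CR:", round(cr, 3))
```

A consistency ratio below 0.1 indicates the judgments are coherent enough for the weights to be used in the GIS overlay that produces the hazard-zone map.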

Keywords: remote sensing, flash flood hazards, Bishoftu, GIS

Procedia PDF Downloads 26
929 Visualization Tool for EEG Signal Segmentation

Authors: Sweeti, Anoop Kant Godiyal, Neha Singh, Sneh Anand, B. K. Panigrahi, Jayasree Santhosh

Abstract:

This work is about developing a tool for visualization and segmentation of electroencephalograph (EEG) signals based on frequency-domain features. Changes in the frequency-domain characteristics are correlated with changes in the mental state of the subject under study. The proposed algorithm provides a way to represent the change in mental states using the different frequency band powers in the form of a segmented EEG signal. Many segmentation algorithms with applications in brain-computer interfaces, epilepsy, and cognition studies have been suggested in the literature and used for data classification. The proposed method, however, focuses mainly on better presentation of the signal, which makes it a useful visualization tool for clinicians. The algorithm performs basic filtering using band-pass and notch filters in the range of 0.1-45 Hz. Advanced filtering is then performed by a principal component analysis and wavelet-transform-based de-noising method. Frequency-domain features are used for segmentation, since the spectral power of the different frequency bands describes the mental state of the subject. Two sliding windows are further used for segmentation: one provides the time scale and the other assigns the segmentation rule. The segmented data are displayed second by second, successively, with different color codes, and the segment length can be selected as per the needs of the objective. The proposed algorithm has been tested on an EEG data set obtained from the University of California, San Diego's online data repository. The proposed tool gives a better visualization of the signal in the form of segmented epochs of desired length, representing the power spectrum variation in the data. The algorithm is designed in such a way that it takes the data points with respect to the sampling frequency for each time frame, so it can be extended to real-time visualization with the desired epoch length.
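A minimal single-channel sketch of the pipeline described above, using a synthetic signal and an assumed 256 Hz sampling rate; the simple dominant-band labeling stands in for the paper's two-window rule, and the band edges and filter order are assumptions:

```python
import numpy as np
from scipy import signal

fs = 256                      # assumed sampling rate (Hz)
rng = np.random.default_rng(1)
t = np.arange(0, 10, 1 / fs)
# Synthetic "EEG": broadband noise with an alpha (10 Hz) burst from 4 s to 7 s.
eeg = 0.5 * rng.standard_normal(t.size)
burst = (t >= 4) & (t < 7)
eeg[burst] += 2.0 * np.sin(2 * np.pi * 10 * t[burst])

# Basic filtering: 0.1-45 Hz band-pass, as in the paper.
sos = signal.butter(4, [0.1, 45], btype="bandpass", fs=fs, output="sos")
filtered = signal.sosfiltfilt(sos, eeg)

bands = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

# Second-by-second segmentation: label each 1 s window by its dominant band.
labels = []
for start in range(0, filtered.size - fs + 1, fs):
    win = filtered[start:start + fs]
    f, pxx = signal.welch(win, fs=fs, nperseg=fs)
    powers = {name: pxx[(f >= lo) & (f < hi)].sum()
              for name, (lo, hi) in bands.items()}
    labels.append(max(powers, key=powers.get))
print(labels)
```

In the tool itself, each one-second label would be rendered as a color-coded epoch, with the PCA/wavelet de-noising stage applied before segmentation.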

Keywords: de-noising, multi-channel data, PCA, power spectra, segmentation

Procedia PDF Downloads 390
928 Identifying the Risks on Philippines’ Pre- and Post-Disaster Media Communication on Natural Hazards

Authors: Neyzielle Ronnicque Cadiz

Abstract:

The Philippines is a hotbed of disasters and a locus of natural hazards. With an average of 20 typhoons entering the Philippine Area of Responsibility (PAR) each year, seven to eight of which make landfall, the country inevitably suffers from climate-related calamities. Given this vulnerability to natural hazards, the hazard-related issues that accompany the potential threat of a disaster often garner less media attention than the disaster itself once it has occurred. Post-disaster news and events flood the content of news networks, focusing primarily on, but not limited to, the efforts of the national government in resolving post-disaster displacement, and all the more on community leaders' incompetence in disaster mitigation, even though the University of the Philippines' NOAH Center works hand in hand with different stakeholders on disaster-mitigation communication efforts. Disaster risk communication is a perennial dilemma: there are many efforts to reach the grassroots level, but emergency and disaster preparedness messages inevitably fall short. The Philippines is very vulnerable to hazard risks and disasters, yet social media posts and communication efforts mostly go unnoticed, if not argued over. This study illustrates the outcomes of research focusing on the role of print, broadcast, and social media in disaster communication involving the natural catastrophic events that took place in the Philippines from 2009 to the present. Considering the country's state of development, this study looks at the rapid and reliable communication between the government and the relief/rescue workers in the affected regions, and at how the media portrays these efforts effectively. Learning from the disasters that have occurred in the Philippines over the past decade, effective communication can ensure that efforts to prepare for and respond to disasters make a significant difference.
It can potentially either cost or save lives. Recognizing that the role of communication lies not only in improving the coordination of vital services after a disaster, organizations have prioritized reexamining disaster-preparedness mechanisms through Communication with Communities (CwC) programs. This study, however, looks at the CwC efforts of the Philippine media platforms. CwC, if properly utilized by the media, is an essential tool in ensuring accountability and transparency, which require an effective exchange of information between disaster survivors and responders. The study shows that the perennial dilemma of the Philippine media is that the country's Disaster Risk Reduction and Management (DRRM) efforts are clouded by political aims. This habit multiplies the country's risk and insecurity. Efforts to urge the public to take action sometimes seem futile because the challenge lies in how to achieve social, economic, and political unity using the tri-media platform.

Keywords: Philippines at risk, pre/post disaster communication, tri-media platform, UP NOAH

Procedia PDF Downloads 171
927 The Effect of Taekwondo on Plantar Pressure Distribution and Arch Index

Authors: Maryam Kakavand, Samira Entezari, Sara Khoshjamalfekri, Raghad Mimar

Abstract:

The objectives of this study are 1) to compare elite female and beginner taekwondo players in terms of plantar pressure distribution, vertical ground reaction force, contact area, mean pressure, and right and left longitudinal arches, and 2) to compare the preferred and non-preferred limbs among elite players. To the best of the authors' knowledge, no information is yet available about plantar pressure distribution and the arch index among taekwondo players. Material and Methods: An analytical-comparative research method was applied; seven elite athletes and eight novice athletes were selected. The emed-C50 platform was used to assess plantar pressure distribution, vertical ground reaction force, contact area, mean pressure of different areas, and the plantar longitudinal arch in a second-step protocol. Independent and dependent t-tests were used at a significance level of 0.05 to compare the right and left feet of elites and beginners, and the preferred and non-preferred limbs of elite athletes, respectively. Results: In comparing the right and left limbs of the elite and beginner groups, the findings indicate a significant difference only in the mean pressure of the first metatarsal of the right foot. The findings also showed a significant difference in the contact area of the toes 3-5 region between elites' preferred and non-preferred limbs. However, no significant difference was found between the two groups' right and left limbs, or between elites' preferred and non-preferred limbs, in terms of pressure distribution, vertical ground reaction force, and arch index. Conclusion: It seems that taekwondo exercises have affected pressure distribution patterns among advanced players, causing some differences in their plantar pressure distribution pattern compared to that of beginners. Therefore, taekwondo exercises may be a factor contributing to performance asymmetry between the preferred and non-preferred limbs.
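The two statistical comparisons described above can be sketched as follows; the pressure values are fabricated for illustration and do not reproduce the study's measurements:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Hypothetical mean first-metatarsal pressures (kPa); illustrative only.
elite_right = rng.normal(310, 25, 7)     # n = 7 elite players
novice_right = rng.normal(270, 25, 8)    # n = 8 novices

# Independent t-test: elite vs. beginner groups (unpaired samples).
t_ind, p_ind = stats.ttest_ind(elite_right, novice_right)

# Dependent (paired) t-test: preferred vs. non-preferred limb, elites only.
preferred = rng.normal(300, 20, 7)
non_preferred = preferred + rng.normal(5, 10, 7)
t_rel, p_rel = stats.ttest_rel(preferred, non_preferred)

for label, p in [("elite vs novice", p_ind),
                 ("preferred vs non-preferred", p_rel)]:
    verdict = "significant" if p < 0.05 else "not significant"
    print(f"{label}: p = {p:.3f} ({verdict})")
```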

Keywords: plantar pressure, arch index, taekwondo, elite

Procedia PDF Downloads 150
926 The Role of Social Media in the Rise of Islamic State in India: An Analytical Overview

Authors: Yasmeen Cheema, Parvinder Singh

Abstract:

The Islamic State (acronym IS) evolved with the ultimate goal of restoring the caliphate. The IS threat to global security is a main concern of the international community, but it has also raised a real concern for India about the steady radicalization of Indian youth by IS ideology. The case of Arif Ejaz Majeed, an Indian who joined IS as a ‘jihadist’, set off strident alarms in law enforcement agencies. On 07.03.2017, many people were injured in an Improvised Explosive Device (IED) blast on board the Bhopal-Ujjain Express. One perpetrator of this incident was killed in an encounter with police. The biggest shock, however, is that the conspiracy was pre-planned and that the assailants who carried out the blast were influenced by the ideology perpetrated by the Islamic State. This was the first time the name of IS had cropped up in a terror attack in India. It is a red flag indicating a violent IS presence in India, one that is spreading through social media. IS has the capacity to influence the younger Muslim generation in India through its brutal and aggressive propaganda videos, social media apps, and hate speech. It is a well-known fact that India is on the radar of IS, as well as on its ‘Caliphate Map’. IS uses Twitter, Facebook, and other social media platforms constantly, posting enticing videos, graphics, and articles to try to convince people in India and globally that its jihad is worthy. According to IS perpetrators arrested in different cases in India, most radicalized Indian youths are victims of the daydreams fondly shown by IS: that the Muslim empire as it was before 1920 can return with all its power, and that the Caliph and his caliphate can be re-established. Some Indian Muslim youth are attracted to these euphemistic ideologies. The Islamic State has used social media for disseminating its poisonous ideology, for recruitment, for operational activities, and for the future direction of attacks.
Through social media, IS has inspired its recruits and lone wolves to rely on local networks to identify targets and access weaponry and explosives. Recently, a pro-IS media group on the Telegram platform showed the Taj Mahal as a target and suggested a vehicle-borne improvised explosive device (VBIED) as the mode of attack. If timely steps are not taken, the Islamic State has the potential to undermine Indian national security and peace. IS has undoubtedly used social media as a critical mechanism for the recruitment, planning, and execution of terror attacks. This paper will therefore examine the specific characteristics of social media that have made it such a successful weapon for the Islamic State. The rise of IS in India should be viewed as a national crisis and handled at the central level with efficient use of modern technology.

Keywords: ideology, India, Islamic State, national security, recruitment, social media, terror attack

Procedia PDF Downloads 223
925 Evaluation of Groundwater Quality and Contamination Sources Using Geostatistical Methods and GIS in Miryang City, Korea

Authors: H. E. Elzain, S. Y. Chung, V. Senapathi, Kye-Hun Park

Abstract:

Groundwater is a significant source of drinking and irrigation water in Miryang city, owing to the limited number of surface-water reservoirs and high seasonal variations in precipitation. Population growth, in addition to the expansion of agricultural land uses and industrial development, may affect the quality and management of groundwater. This research utilized multidisciplinary geostatistical approaches such as multivariate statistics, factor analysis, cluster analysis, and kriging to identify the hydrogeochemical processes and characterize the factors controlling the distribution of groundwater geochemistry, and to develop risk maps from data obtained by chemical investigation of groundwater samples in the study area. A total of 79 samples were collected and analyzed using an atomic absorption spectrometer (AAS) for major and trace elements. Two-dimensional chemical maps produced in a Geographic Information System (GIS) provided a powerful tool for detecting potential sites of groundwater contamination. The GIS-based maps showed that the highest rates of contamination occur in the central and southern areas, with a relatively smaller extent in the northern and southwestern parts. This could be attributed to the effects of irrigation, residual saline water, municipal sewage, and livestock wastes. At well elevations above 85 m, the scatter diagram indicates that the groundwater of the research area was mainly influenced by saline water and NO3. pH measurements revealed slightly acidic conditions due to dissolved atmospheric CO2 in the soil, while the saline water had a major impact on the higher values of TDS and EC.
Based on the cluster analysis results, the groundwater was categorized into three groups: the CaHCO3 type of fresh water, the NaHCO3 type slightly influenced by seawater, and the Ca-Cl and Na-Cl types heavily affected by saline water. CaHCO3 was the most predominant water type in the study area. Contamination sources and chemical characteristics were identified from the interrelationships in the factor analysis and from the cluster analysis. The chemical elements belonging to factor 1 were related to the effect of seawater, while the elements of factor 2 were associated with agricultural fertilizers. The degree, distribution, and location of groundwater contamination were mapped using kriging methods. Thus, the geostatistical model provided more accurate results for identifying the sources of contamination and evaluating groundwater quality. GIS was also an effective tool to visualize and analyze the issues affecting water quality in Miryang city.
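The kriging step described above can be illustrated with a minimal ordinary-kriging sketch in Python. All numbers here are invented for illustration (they are not the study's data), and a simple exponential semivariogram with assumed sill and range parameters stands in for a fitted model:

```python
import numpy as np

def gamma(h, sill=1.0, rng=50.0):
    # exponential semivariogram model (no nugget); sill and range are assumed
    return sill * (1.0 - np.exp(-h / rng))

def ordinary_kriging(xy, z, x0):
    """Predict the value at location x0 from samples (xy, z)."""
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = gamma(d)
    A[n, n] = 0.0                       # row/column for the Lagrange multiplier
    b = np.ones(n + 1)
    b[:n] = gamma(np.linalg.norm(xy - x0, axis=1))
    w = np.linalg.solve(A, b)           # kriging weights (plus multiplier)
    return float(w[:n] @ z)             # kriged estimate at x0

# hypothetical NO3 concentrations (mg/L) at four well locations (m)
xy = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
z = np.array([12.0, 35.0, 8.0, 40.0])
center = ordinary_kriging(xy, z, np.array([50.0, 50.0]))
at_well = ordinary_kriging(xy, z, xy[1])
```

Because ordinary kriging is an exact interpolator, predicting at a sampled well returns that well's value; at the centre of this symmetric layout, the weights reduce to the sample mean.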

Keywords: groundwater characteristics, GIS chemical maps, factor analysis, cluster analysis, Kriging techniques

Procedia PDF Downloads 165
924 A Sustainable Approach for Waste Management: Automotive Waste Transformation into High Value Titanium Nitride Ceramic

Authors: Mohannad Mayyas, Farshid Pahlevani, Veena Sahajwalla

Abstract:

Automotive shredder residue (ASR) is an industrial waste generated during the recycling of end-of-life vehicles. The large and increasing production volumes of ASR and its hazardous content have raised concerns worldwide, leading some countries to impose more restrictions on ASR disposal and encouraging researchers to find efficient solutions for ASR processing. Although a great deal of research work has been carried out, all proposed solutions, to our knowledge, remain commercially and technically unproven. While the volume of waste materials continues to increase, the production of materials from new sustainable sources has become of great importance. Advanced ceramic materials such as nitrides, carbides and borides are widely used in a variety of applications. Among these ceramics, a great deal of attention has recently been paid to titanium nitride (TiN) owing to its unique characteristics. In our study, we propose a new sustainable approach to ASR management in which TiN nanoparticles with an ideal particle size ranging from 200 to 315 nm can be synthesized as a by-product. In this approach, TiN is thermally synthesized by nitriding a pressed mixture of automotive shredder residue (ASR) incorporated with titanium oxide (TiO2). Results indicated that TiO2 influences and catalyses the degradation reactions of ASR, helping to achieve fast and full decomposition. In addition, the process yielded titanium nitride (TiN) ceramic with several unique structures (porous nanostructured, polycrystalline, micro-spherical and nano-sized structures) that were obtained simply by tuning the ratio of TiO2 to ASR, and a product with an appreciable TiN content of around 85% was achieved after only one hour of nitridation at 1550 °C.

Keywords: automotive shredder residue, nano-ceramics, waste treatment, titanium nitride, thermal conversion

Procedia PDF Downloads 293
923 Synthesis of Highly Stable Pseudocapacitors From Secondary Resources

Authors: Samane Maroufi, Rasoul Khayyam Nekouei, Sajjad Mofarah

Abstract:

Fabrication of state-of-the-art portable pseudocapacitors with the desired transparency, mechanical flexibility, capacitance, and durability is challenging. In most cases, the fabrication of such devices requires critical elements that are either under threat of depletion or whose extraction from virgin mineral ores has severe environmental impacts. This urges the use of secondary resources instead of virgin resources in the fabrication of advanced devices. In this research, ultrathin films of defect-rich Mn1−x−y(CexLay)O2−δ with controllable thicknesses in the range of 5 nm to 627 nm and transmittance (≈29–100%) were fabricated via an electrochemical chronoamperometric deposition technique, using an aqueous precursor derived during the selective purification of rare earth oxides (REOs) isolated from end-of-life nickel-metal hydride (Ni-MH) batteries. Intercalation/de-intercalation of anionic O2− through the atomic tunnels of the stratified Mn1−x−y(CexLay)O2−δ crystallites was found to be responsible for the outstanding areal capacitance of 3.4 mF cm−2 for films with 86% transmittance. The intervalence charge transfer among interstitial Ce/La cations and Mn oxidation states within the Mn1−x−y(CexLay)O2−δ structure resulted in excellent capacitance retention of ≈90% after 16,000 cycles. The synthesised transparent, flexible Mn1−x−y(CexLay)O2−δ full-cell pseudocapacitor device possessed energy and power densities of 0.088 μWh cm⁻² and 843 µW cm⁻², respectively. These values show insignificant changes under vigorous twisting and bending to 45–180°, confirming that these value-added materials are intriguing alternatives for size-sensitive energy storage devices. This research confirms the feasibility of utilising secondary waste resources for the fabrication of high-quality pseudocapacitors with engineered defects and the desired flexibility, transparency, and cycling stability suitable for size-sensitive portable electronic devices.

Keywords: pseudocapacitors, energy storage devices, flexible and transparent, sustainability

Procedia PDF Downloads 82
922 Investigation of Fluid-Structure-Seabed Interaction of Gravity Anchor Under Scour, and Anchor Transportation and Installation (T&I)

Authors: Vinay Kumar Vanjakula, Frank Adam

Abstract:

The generation of electricity from wind power is one of the leading renewable energy generation methods. Owing to the abundant higher wind speeds far from shore, the construction of offshore wind turbines began in recent decades. However, the installation of offshore foundation-based (monopile) wind turbines in deep waters is often associated with technical and financial challenges. To overcome such challenges, the concept of floating wind turbines, building on the experience of the oil and gas industry, has been expanded. For such a floating system, stabilization in harsh conditions is a challenging task, and a robust heavy-weight gravity anchor is needed. Transportation of such an anchor requires a heavy vessel, which increases the cost. To lower the cost, the gravity anchor is designed with ballast chambers that allow the anchor to float while being towed and that are filled with water when it is lowered to the planned seabed location. The presence of such a large structure may influence the flow field around it. The changes in the flow field include the formation of vortices, turbulence generation, breaking of wave or current flow, and pressure differentials around the seabed sediment. These changes influence the installation process. Also, after installation and under operating conditions, the flow around the anchor may carry off the local seabed sediment, resulting in scour (erosion). Both are threats to the structure's stability. In recent decades, research on and knowledge of scouring on fixed structures (bridges and monopiles) in rivers and oceans have developed rapidly, but very limited work has addressed scouring around a bluff-shaped gravity anchor. The objective of this study involves the application of different numerical models to simulate anchor towing under waves and in calm water. Anchor lowering involves the investigation of anchor movements at certain water depths under waves or currents.
The motions of anchor drift, heave, and pitch are of special focus. The further study involves anchor scour, where the anchor is installed in the seabed; the flow of the underwater current around the anchor induces vortices, mainly at the front and corners, that develop soil erosion. The study of scouring on a submerged gravity anchor is an interesting research question, since the flow not only passes around the anchor but also over the structure, forming different flow vortices. The achieved results and the numerical model will be a basis for the development of other designs and concepts for marine structures. The Computational Fluid Dynamics (CFD) numerical model will be built in OpenFOAM and other similar software.

Keywords: anchor lowering, anchor towing, gravity anchor, computational fluid dynamics, scour

Procedia PDF Downloads 163
921 Evaluation of Functional Properties of Protein Hydrolysate from the Fresh Water Mussel Lamellidens marginalis for Nutraceutical Therapy

Authors: Jana Chakrabarti, Madhushrita Das, Ankhi Haldar, Roshni Chatterjee, Tanmoy Dey, Pubali Dhar

Abstract:

High incidences of protein-energy malnutrition as a consequence of low protein intake are quite prevalent among children in developing countries. Thus, prevention of under-nutrition has emerged as a critical challenge for India's development planners in recent times. The increase in population over the last decade has put greater pressure on existing animal protein sources, but these resources are currently declining due to persistent drought, diseases, natural disasters, the high cost of feed, and the low productivity of local breeds; this decline in productivity is most evident in some developing countries. The need of the hour is therefore the efficient utilization of unconventional, low-cost animal protein resources. Molluscs, as a group, are regarded as an under-exploited source of health-benefit molecules. Bivalvia is the second largest class of the phylum Mollusca. Annual harvests of bivalves for human consumption represent about 5% by weight of the total world harvest of aquatic resources. The freshwater mussel Lamellidens marginalis is widely distributed in ponds and large perennial water bodies in the Indian subcontinent and is well accepted as food all over India. Moreover, ethno-medicinal use of the flesh of Lamellidens among rural people to treat hypertension has been documented. The present investigation thus attempts to evaluate the potential of Lamellidens marginalis as a functional food. Mussels were collected from freshwater ponds and brought to the laboratory two days before experimentation for acclimatization to laboratory conditions. Shells were removed and the flesh was preserved at −20 °C until analysis. Tissue homogenate was prepared for proximate studies. Fatty acid and amino acid compositions were analyzed, and vitamin, mineral and heavy metal contents were also studied. Mussel protein hydrolysate was prepared using Alcalase 2.4 L, and the degree of hydrolysis was evaluated to analyze its functional properties.
Ferric Reducing Antioxidant Power (FRAP) and DPPH antioxidant assays were performed. Anti-hypertensive property was evaluated by an Angiotensin Converting Enzyme (ACE) inhibition assay. Proximate analysis indicates that mussel meat contains moderate amounts of protein (8.30±0.67%), carbohydrate (8.01±0.38%) and reducing sugar (4.75±0.07%), but a small amount of fat (1.02±0.20%). Moisture content is quite high but ash content is very low. Phospholipid content is significantly high (19.43%). The lipid contains substantial amounts of eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA), which have proven prophylactic value. Trace elements are present in substantial amounts. A comparative study of proximate nutrients between Labeo rohita, Lamellidens and cow's milk indicates that mussel meat can be used as a complementary food source. Functionality analyses of the protein hydrolysate show increases in fat absorption, emulsification, foaming capacity and protein solubility. Progressive antioxidant and anti-hypertensive properties have also been documented. Lamellidens marginalis can thus be regarded as a functional food source, as it may combine effectively with other food components to provide essential elements to the body. Moreover, mussel protein hydrolysate offers opportunities for use in various food formulations and pharmaceuticals. The observations presented herein should be viewed as a prelude to what the future holds.
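A DPPH assay result of the kind mentioned above is conventionally reported as percentage radical scavenging computed from absorbances at 517 nm. A minimal sketch, with hypothetical absorbance readings (not the study's measurements):

```python
def dpph_inhibition(abs_control, abs_sample):
    """Percent DPPH radical scavenging relative to the blank control."""
    return (abs_control - abs_sample) / abs_control * 100.0

# hypothetical absorbances at 517 nm: control (no extract) vs. hydrolysate
pct = dpph_inhibition(0.80, 0.32)
```

A larger drop in absorbance relative to the control corresponds to stronger antioxidant activity.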

Keywords: functional food, functional properties, Lamellidens marginalis, protein hydrolysate

Procedia PDF Downloads 415
920 Migration, Labour Market, Capital Formation, and Social Security: A Study of Livelihoods of the Urban Poor in Two Different Cities of West Bengal in India

Authors: Arup Pramanik

Abstract:

Most cities in developing countries, like the Siliguri Municipal Corporation Area (SMCA) and Raiganj Municipality (RM) in West Bengal, India, are changing markedly in demographic, economic and social terms due to the rapid pace of urbanization. The mushrooming growth of slums in SMCA and RM is a direct consequence of urbanization and of migration driven by regional imbalance and an unbalanced growth process, which poses a serious threat to the sustainable development of the country. Almost every slum is a breeding ground for poverty, neglect, and disease. The unpredictable growth of slums and the need for poverty alleviation have become a serious challenge to global and national policy makers concerned with the development of slum dwellers. The ethical claim of the poor in cities like SMCA and RM rests on equal opportunities and inclusive, harmonious living without discrimination of any kind. However, the migrant slum dwellers in SMCA and RM do not possess the skills or education to enable them to find well-paid employment in the formal sector, and the surplus urban labour force is compelled to generate its own means of employment and survival in the informal sector. The household survey data have been analysed in terms of percentages and descriptive statistics, including the mean, standard deviation (SD), and ANOVA (mean differences), to examine the socio-economic variables of the households. The study shows that the migrant labour force living in the slums is deprived of social security measures in both the municipal areas of SMCA and RM. The urban poor in SMCA and RM rely heavily on social capital, among all the capital assets, to help them 'get by' and 'get ahead', while remaining vulnerable with respect to the other determinants of capital assets.
It is noteworthy that Indian anti-poverty programmes remained in place even under the neo-liberal regime, where the basic idea behind the massive shift from various welfare- and service-oriented strategies to a poverty reduction strategy was to benefit the urban poor through trickle-down effects. However, the overall impact of the trickle-down effect was unsatisfactory. The objective of the paper is to assess the magnitude of migration and absorption in the urban labour market, and to examine issues relating to capital formation, social security measures and the support of the welfare state in meeting the Sustainable Development Goals. The study also highlights the quality of life of poor urban migrants in terms of capital formation and livelihoods.
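The descriptive statistics named above (means, SD, ANOVA mean differences) can be illustrated with a one-way ANOVA F statistic computed directly. The household incomes below are invented solely for illustration and are not the survey's data:

```python
import numpy as np

def one_way_anova(groups):
    """F statistic for a one-way ANOVA across k groups."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = np.mean(np.concatenate(groups))
    # between-group and within-group sums of squares
    ss_between = sum(len(g) * (np.mean(g) - grand) ** 2 for g in groups)
    ss_within = sum(((g - np.mean(g)) ** 2).sum() for g in groups)
    df_b, df_w = k - 1, n - k
    return (ss_between / df_b) / (ss_within / df_w)

# hypothetical monthly household incomes (INR) from two slum samples
smca = np.array([6200.0, 5800, 7100, 6600, 6400])
rm = np.array([5400.0, 5600, 5100, 5900, 5300])
F = one_way_anova([smca, rm])
```

An F value well above the critical value for the relevant degrees of freedom would indicate a significant mean difference between the two municipal samples.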

Keywords: migration, slums, labour market, capital formation, social security

Procedia PDF Downloads 112
919 Evaluation of Pragmatic Information in an English Textbook: Focus on Requests

Authors: Israa A. Qari

Abstract:

Learning to request in a foreign language is a key ability within pragmatic language teaching. This paper examines how requests are taught in English Unlimited Book 3 (Cambridge University Press), an EFL textbook series employed by King Abdulaziz University in Jeddah, Saudi Arabia, to teach advanced foundation-year students English. The focus of analysis is the evaluation of the request linguistic strategies present in the textbook, the frequency of use of these strategies, and the contextual information provided on the use of these linguistic forms. The researcher collected all the linguistic forms that constituted the request speech act and divided them into levels using the CCSARP request coding manual. Findings demonstrated that simple and commonly employed request strategies are introduced. Looking closely at the exercises throughout the chapters, it was noticeable that the book exclusively employed the most direct form of requesting (the imperative) when giving learners instructions: e.g. listen, write, ask, answer, read, look, complete, choose, talk, think, etc. The book also made use of some other request strategies such as 'hedged performatives' and 'query preparatory'. However, it was also found that many strategies were not dealt with in the book, specifically strategies with combined functions (e.g. possibility, ability). On a sociopragmatic level, a strong focus was found on standard situations in which relations between the requester and requestee are clear. In general, contextual information was communicated only implicitly. The textbook did not seem to differentiate between formal and informal request contexts (register), which might consequently impel students to overgeneralize. The paper closes with some recommendations for textbook and curriculum designers. Findings are also contrasted with previous results from a similar body of research on EFL requests.

Keywords: EFL, requests, Saudi, speech acts, textbook evaluation

Procedia PDF Downloads 132
918 Alternating Expectation-Maximization Algorithm for a Bilinear Model in Isoform Quantification from RNA-Seq Data

Authors: Wenjiang Deng, Tian Mou, Yudi Pawitan, Trung Nghia Vu

Abstract:

Estimation of isoform-level gene expression from RNA-seq data depends on simplifying assumptions, such as a uniform read distribution, that are easily violated in real data. Such violations typically lead to biased estimates. Most existing methods provide bias-correction steps based on biological considerations, such as GC content, applied to each sample separately. The main problem is that not all biases are known. For example, new technologies such as single-cell RNA-seq (scRNA-seq) may introduce new sources of bias not seen in bulk-cell data. This study introduces a method called XAEM based on a more flexible and robust statistical model. Existing methods are essentially based on a linear model Xβ, where the design matrix X is known and derived from the simplifying assumptions. In contrast, XAEM considers Xβ as a bilinear model with both X and β unknown. Joint estimation of X and β is made possible by simultaneous analysis of multi-sample RNA-seq data. Compared to existing methods, XAEM automatically performs empirical correction of potentially unknown biases. XAEM implements an alternating expectation-maximization (AEM) algorithm, alternating between estimation of X and β. For speed, XAEM utilizes quasi-mapping for read alignment, leading to a fast algorithm. Overall, XAEM performs favorably compared to other recent advanced methods. For simulated datasets, XAEM obtains higher accuracy for multiple-isoform genes, particularly for paralogs. In a differential-expression analysis of a real scRNA-seq dataset, XAEM achieves substantially greater rediscovery rates in an independent validation set.
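The alternating estimation of X and β can be sketched, under a Gaussian-noise simplification, as alternating least squares on a toy bilinear model. This is only an analogue of the AEM idea on synthetic data, not the actual XAEM implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# toy "true" design (read-class x isoform) and abundances (isoform x sample)
X_true = np.abs(rng.normal(size=(20, 3)))
B_true = np.abs(rng.normal(size=(3, 8)))
Y = X_true @ B_true                    # noiseless multi-sample observations

# alternating least squares: fix X, solve for B; then fix B, solve for X
X = X_true + 0.3 * rng.normal(size=X_true.shape)   # misspecified starting design
for _ in range(50):
    B = np.linalg.lstsq(X, Y, rcond=None)[0]        # update abundances
    X = np.linalg.lstsq(B.T, Y.T, rcond=None)[0].T  # empirically correct the design
err = np.linalg.norm(Y - X @ B) / np.linalg.norm(Y)
```

Because the design itself is re-estimated from the multi-sample data, any shared (unknown) distortion of X is absorbed automatically, which mirrors the bias-correction property claimed for the bilinear model.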

Keywords: alternating EM algorithm, bias correction, bilinear model, gene expression, RNA-seq

Procedia PDF Downloads 137
917 Meet Automotive Software Safety and Security Standards Expectations More Quickly

Authors: Jean-François Pouilly

Abstract:

This study addresses the growing complexity of embedded systems and the critical need for secure, reliable software. Traditional cybersecurity testing methods, often conducted late in the development cycle, struggle to keep pace. This talk explores how formal methods, integrated with advanced analysis tools, empower C/C++ developers to 1) proactively address vulnerabilities and bugs: formal methods and abstract interpretation techniques identify potential weaknesses early in the development process, reducing the reliance on penetration and fuzz testing in later stages; 2) streamline development by focusing on the bugs that matter: with close to no false positives and flaws caught earlier, the need for rework and retesting is minimized, leading to faster development cycles, improved efficiency and cost savings; 3) enhance software dependability: combining static analysis by abstract interpretation, with full context sensitivity, with hardware memory awareness allows a more comprehensive understanding of potential vulnerabilities, leading to more dependable and secure software. This approach aligns with industry best practices (ISO 26262 or ISO 21434) and empowers C/C++ developers to deliver robust, secure embedded systems that meet the demands of today's and tomorrow's applications. We illustrate this approach with the TrustInSoft analyzer to show how it accelerates verification for complex cases, reduces user fatigue, and improves developer efficiency, cost-effectiveness, and software cybersecurity. In summary, integrating formal methods and sound analyzers enhances software reliability and cybersecurity, streamlining development in an increasingly complex environment.
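The abstract-interpretation idea described above can be illustrated in miniature with an interval domain: instead of running the program on concrete values, the analysis propagates ranges and soundly proves (or fails to prove) that an array access is in bounds for every possible input. This Python toy is a stand-in for what industrial analyzers do over C/C++ semantics:

```python
class Interval:
    """Tiny interval abstract domain: tracks the [lo, hi] bounds of a value."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        ps = [a * b for a in (self.lo, self.hi) for b in (other.lo, other.hi)]
        return Interval(min(ps), max(ps))

def index_always_safe(idx: Interval, length: int) -> bool:
    # sound check: True only if every concrete index is within bounds
    return 0 <= idx.lo and idx.hi < length

# e.g. i ranges over [0, 9]; the program accesses buf[2*i + 1]
i = Interval(0, 9)
expr = Interval(2, 2) * i + Interval(1, 1)   # abstractly: [1, 19]
safe = index_always_safe(expr, 32)           # 32-element buffer: provably safe
```

On a 16-element buffer the same check fails, which is how such an analysis flags a potential out-of-bounds access before any testing is run.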

Keywords: safety, cybersecurity, ISO26262, ISO21434, formal methods

Procedia PDF Downloads 11
916 Dynamic Thin Film Morphology near the Contact Line of a Condensing Droplet: Nanoscale Resolution

Authors: Abbasali Abouei Mehrizi, Hao Wang

Abstract:

The thin-film region is important in the heat transfer process due to its low thermal resistance. On the other hand, the dynamic contact angle is a crucial boundary condition in numerical simulations. While different models make different assumptions about the microscopic contact angle, none of them has experimental evidence for its assumption, and the mechanism of contact line movement still remains vague. Experimental investigation of complete wetting is more common than of partial wetting, especially at nanoscale resolution, where the thin-film profile varies sharply in partial wetting. In the present study, an experimental investigation of water film morphology near the triple-phase contact line during condensation is performed. State-of-the-art tapping-mode atomic force microscopy (TM-AFM) was used to obtain the high-resolution film profile down to 2 nm from the contact line. The droplet was placed in a saturated chamber. A pristine silicon wafer was used as a smooth substrate and was heated by a PI film heater, so that the chamber became oversaturated by droplet evaporation. On turning off the heater, water vapor gradually started condensing on the droplet and the droplet advanced. The advancing speed was less than 20 nm/s. The dominant results indicate that, in contrast to nonvolatile liquids, the film profile descends straight to the surface until 2 nm from the substrate, although small bending was occasionally observed below 20 nm. It can therefore be claimed that, for low condensation rates, the microscopic contact angle equals the optically detectable macroscopic contact angle. This result can be used to simplify heat transfer modeling in partial wetting, and the experimentally observed equality of the microscopic and macroscopic contact angles provides solid evidence for using this boundary condition in numerical simulation.

Keywords: advancing, condensation, microscopic contact angle, partial wetting

Procedia PDF Downloads 291
915 Boussinesq Model for Dam-Break Flow Analysis

Authors: Najibullah M, Soumendra Nath Kuiry

Abstract:

Dams and reservoirs are esteemed for their considerable contributions to irrigation, water supply, flood control, electricity generation, etc., which enhance the prosperity and wealth of society across the world. At the same time, a dam breach can cause a devastating flood that threatens human lives and property. Failures of large dams fortunately remain very rare events. Nevertheless, a number of occurrences have been recorded worldwide, corresponding on average to one to two failures every year, and some of those accidents have had catastrophic consequences. It is therefore crucial to predict dam-break flow for emergency planning and preparedness, as it poses a high risk to life and property. To mitigate the adverse impact of a dam break, modeling is necessary to gain a good understanding of the temporal and spatial evolution of dam-break floods. This study mainly deals with one-dimensional (1D) dam-break modeling. Less commonly used in the hydraulic research community, another possible option for modeling rapidly varied dam-break flows is the extended Boussinesq equations (BEs), which can describe the dynamics of short waves with reasonable accuracy. Unlike the Shallow Water Equations (SWEs), the BEs take into account wave dispersion and the non-hydrostatic pressure distribution. To capture the dam-break oscillations accurately, a numerical scheme of at least fourth-order accuracy is needed to discretize the third-order dispersion terms present in the extended BEs. The scope of this work is therefore to develop a 1D Boussinesq model, fourth-order accurate in both space and time, for dam-break flow analysis using a finite-volume / finite-difference scheme. The spatial discretization of the flux and dispersion terms is achieved through a combination of finite-volume and finite-difference approximations.
The flux term is solved using a finite-volume discretization, whereas the bed source and dispersion terms are discretized using a centered finite-difference scheme. Time integration is achieved in two stages, namely a third-order Adams–Bashforth predictor stage and a fourth-order Adams–Moulton corrector stage. Implementation of the 1D Boussinesq model was done in Python 2.7.5. The performance of the developed model is evaluated by comparison with the volume-of-fluid (VOF) based commercial model ANSYS-CFX. The developed model is used to analyze the risk of cascading dam failures similar to the Panshet dam failure that took place in Pune, India, in 1961. Moreover, this model can predict wave overtopping more accurately than shallow water models for designing coastal protection structures.
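The two-stage time integration named above can be illustrated on a scalar ODE. This sketch applies the third-order Adams–Bashforth predictor and fourth-order Adams–Moulton corrector to dy/dt = −y rather than to the Boussinesq system itself, and seeds the multistep history with exact values:

```python
import math

def abm4(f, y_hist, t0, dt, nsteps):
    """AB3 predictor / AM4 corrector march; y_hist holds y at t0, t0+dt, t0+2dt."""
    ys = list(y_hist)
    fs = [f(t0 + k * dt, y) for k, y in enumerate(ys)]
    t = t0 + 2 * dt
    for _ in range(nsteps):
        # third-order Adams-Bashforth predictor
        yp = ys[-1] + dt / 12.0 * (23 * fs[-1] - 16 * fs[-2] + 5 * fs[-3])
        # fourth-order Adams-Moulton corrector, evaluated at the predicted state
        yc = ys[-1] + dt / 24.0 * (9 * f(t + dt, yp)
                                   + 19 * fs[-1] - 5 * fs[-2] + fs[-3])
        ys.append(yc)
        fs.append(f(t + dt, yc))
        t += dt
    return ys[-1]

# test problem: dy/dt = -y, exact solution exp(-t)
dt = 0.01
f = lambda t, y: -y
y_end = abm4(f, [1.0, math.exp(-dt), math.exp(-2 * dt)], 0.0, dt, 98)
```

Marching 98 steps from the three seeded levels reaches t = 1.0, where the numerical solution agrees with exp(−1) to roughly fourth-order accuracy in dt.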

Keywords: Boussinesq equation, coastal protection, dam-break flow, one-dimensional model

Procedia PDF Downloads 227
914 Quantitative Evaluation of Supported Catalysts Key Properties from Electron Tomography Studies: Assessing Accuracy Using Material-Realistic 3D-Models

Authors: Ainouna Bouziane

Abstract:

The ability of electron tomography to recover the 3D structure of catalysts, with spatial resolution at the subnanometer scale, has been widely explored and reviewed in recent decades. A variety of experimental techniques, based either on Transmission Electron Microscopy (TEM) or Scanning Transmission Electron Microscopy (STEM), have been used to reveal different features of nanostructured catalysts in 3D, but High-Angle Annular Dark-Field imaging in STEM mode (HAADF-STEM) stands out as the most frequently used, given its chemical sensitivity and its avoidance of imaging artifacts related to diffraction phenomena when dealing with crystalline materials. In this regard, our group has developed a methodology that combines image denoising by undecimated wavelet transforms (UWT) with automated, advanced segmentation procedures and parameter selection methods using CS-TVM (compressed sensing - total variation minimization) algorithms to extract more reliable quantitative information from 3D characterization studies. However, evaluating the accuracy of the magnitudes estimated from the segmented volumes is also an important issue that has not yet been properly addressed, because a perfectly known reference is needed. The problem becomes particularly complicated in the case of multicomponent material systems. To tackle this key question, we have developed a methodology that incorporates volume reconstruction/segmentation methods. In particular, we have established an approach to evaluate, in quantitative terms, the accuracy of TVM reconstructions that considers the influence of relevant experimental parameters such as the range of tilt angles, the image noise level, and the object orientation. The approach is based on the analysis of material-realistic 3D phantoms, which include the most relevant features of the system under analysis.
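The phantom-based accuracy assessment can be illustrated in miniature: compare a perfectly known binary phantom with a deliberately degraded "reconstruction" and quantify the error. The sphere and the one-voxel erosion below are invented stand-ins for the material-realistic phantoms and reconstruction artifacts described:

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary volumes: 1.0 means perfect agreement."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

# ground-truth phantom: a sphere of known radius on a 64^3 voxel grid
n, r = 64, 20
zz, yy, xx = np.mgrid[:n, :n, :n] - (n - 1) / 2.0
phantom = xx**2 + yy**2 + zz**2 <= r**2

# mimic a reconstruction/segmentation bias: the particle shrinks by one voxel
recon = xx**2 + yy**2 + zz**2 <= (r - 1) ** 2

vol_err = abs(recon.sum() - phantom.sum()) / phantom.sum()  # relative volume error
d = dice(phantom, recon)
```

Because the phantom is known exactly, metrics such as the relative volume error and the Dice coefficient give an unambiguous measure of how much a given reconstruction/segmentation pipeline distorts the quantities of interest.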

Keywords: electron tomography, supported catalysts, nanometrology, error assessment

Procedia PDF Downloads 78
913 The Destruction of Memory: Ataturk Cultural Centre

Authors: Birge Yildirim Okta

Abstract:

This paper aims to narrate the story of the Atatürk Cultural Center (AKM) in Taksim Square, which was demolished in 2018, and to discuss its architecture as a social place of memory and its existence and demolition as a space of politics. Focusing on the timeline from the early republican period until today, the paper uses narrative discourse analysis to study the Atatürk Cultural Center as a place of memory and a space of politics during its existence. After the establishment of the Turkish Republic, one of the most important interventions in Taksim Square, reflecting the internationalist style, was the construction of the Opera Building in the Prost Plan. The first design of the opera building belonged to Auguste Perret but could not be implemented due to economic hardship during World War II. The project was later designed by the architects Feridun Kip and Rüknettin Güney in 1946 but could not be completed due to the 1960 military coup. The project was then transferred to another architect, Hayati Tabanlıoğlu, with a change in its function to a cultural center. Eventually, the construction of the building was completed in 1969 in a completely different design. AKM became a symbol of republican modernism, not only through its modern architectural style but also through its function as the first opera building of the republic, reflecting a western, modern cultural heritage embraced by professional groups, artists and the intelligentsia. In 2005, Istanbul's council for the protection of cultural heritage decided to list AKM as grade 1 cultural heritage, ending a period of controversy that had seen calls for the demolition of the center on the claim that it had ended its useful lifespan. In 2008 the building was announced to be closed for repairs and restoration. Over the following years, the building was quietly demolished piece by piece, while the Taksim mosque was built just in front of the Atatürk Cultural Center.
Belonging to the early republican period, AKM was a representation of the cultural production of a modern society for an emerging, westward-looking, secular public space in Turkey. Its erasure from the Taksim scene under the rule of the conservative government of the Justice and Development Party, and the construction of the Taksim mosque in front of AKM's parcel, are also representational. The question of governing the city through space has always been important for those holding political power, since cities are chaotic environments seen as a threat by governments, carrying the tensions of the proletariat and of contradictory groups. The story of AKM as a dispositive, or regulatory apparatus, demonstrates how space itself becomes a political medium used to transform the socio-political condition. The article aims to discuss the existence and demolition of the Atatürk Cultural Center by examining the constructed and demolished building as a place of memory and a space of politics.

Keywords: space of politics, place of memory, Atatürk Cultural Center, Taksim Square

Procedia PDF Downloads 76
912 The Status of Precision Agricultural Technology Adoption on Row Crop Farms vs. Specialty Crop Farms

Authors: Shirin Ghatrehsamani

Abstract:

Higher efficiency and lower environmental impact are consequences of using advanced technology in farming. Such technology also helps to decrease yield variability by diminishing the impact of weather variability, optimizing nutrient and pest management, and reducing competition from weeds. A better understanding of the pros and cons of applying technology, and identification of the main reasons preventing its utilization, can have a significant impact on promoting technology adoption among farmers and producers in the digital agriculture era. The results from two surveys, carried out in 2019 and 2021, were used to investigate whether crop type has an impact on the willingness to utilize technology on farms. The main focus of the questionnaire was the utilization of precision agriculture (PA) technologies among farmers in some parts of the United States. The collected data were analyzed to determine the practical application of various technologies. The survey results showed similarities between the two crop types in the main reasons for not using PA, but the present application of technology in specialty crops is generally five times larger than in row crops. GPS receiver use was reported to be similar for both types of crops. Lack of knowledge and the high cost of data handling were cited as the main problems. The most significant difference was in the use of variable-rate technology, which was 43% for specialty crops but 0% for row crops. Pest scouting and mapping were commonly used for specialty crops, while they were rarely applied to row crops. Survey respondents found yield maps, soil sampling maps, and irrigation scheduling more valuable for management decisions in specialty crops than in row crops. About 50% of the respondents would like to share their PA data for both types of crops.
Almost 50 % of respondents got their PA information from retailers in both categories, and as the second source, using extension agents were more common in specialty crops than row crops.

Keywords: precision agriculture, smart farming, digital agriculture, technology adoption

Procedia PDF Downloads 108
911 Development of a Large-Scale Cyclic Shear Testing Machine Under Constant Normal Stiffness

Authors: S. M. Mahdi Niktabar, K. Seshagiri Rao, Amit Kumar Shrivastava, Jiří Ščučka

Abstract:

The presence of discontinuities in the form of joints is one of the most significant factors causing instability in a rock mass. Moreover, dynamic loads, including earthquakes and blasting, induce cyclic shear loads along the joints in rock masses; failure of the rock mass is therefore exacerbated along the joints due to changing shear resistance. Joints exist under both constant normal load (CNL) and constant normal stiffness (CNS) conditions. Normal stiffness on the joints increases with depth, and it can affect shear resistance. For a correct assessment of joint shear resistance under varying normal stiffness and number of cycles, an advanced laboratory shear machine is essential. Conventional direct shear equipment has limitations such as restrictive boundary conditions, operation under monotonic movement only, or cyclic shear loads with constant frequency and amplitude. Hence, a large-scale servo-controlled direct shear testing machine was designed and fabricated to perform shear tests under both CNL and CNS conditions with varying normal stiffness at different frequencies and amplitudes of shear load. In this study, laboratory cyclic shear tests were conducted on non-planar joints under varying normal stiffness. In addition, the effects of different frequencies and amplitudes of shear load were investigated. The test results indicate that shear resistance increases with increasing normal stiffness in the first cycle, but the influence of normal stiffness decreases significantly as the number of shear cycles increases. The frequency of the shear load also influences shear resistance: shear resistance increases with increasing frequency. At low shear amplitude, the number of cycles does not affect the shear resistance of the joints, but at higher amplitude the shear resistance decreases with the number of cycles.
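The cyclic shear loading such a machine imposes can be sketched as a sinusoidal displacement history; the amplitude and frequency values below are illustrative assumptions, not the study's test parameters.

```python
import math

def cyclic_shear_displacement(t_s, amplitude_mm, frequency_hz):
    """Imposed shear displacement (mm) at time t_s (seconds) for a sinusoidal cycle."""
    return amplitude_mm * math.sin(2.0 * math.pi * frequency_hz * t_s)

# Example: a 5 mm amplitude cycle at 0.5 Hz has a 2 s period and
# reaches its peak displacement at a quarter period (t = 0.5 s).
peak = cyclic_shear_displacement(0.5, 5.0, 0.5)
```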

Keywords: cyclic shear load, frequency of load, amplitude of displacement, normal stiffness

Procedia PDF Downloads 145
910 Evaluating Structural Crack Propagation Induced by Soundless Chemical Demolition Agent Using an Energy Release Rate Approach

Authors: Shyaka Eugene

Abstract:

The efficient and safe demolition of structures is a critical challenge in civil engineering and construction. This study focuses on the development of optimal demolition strategies by investigating crack propagation behavior in beams induced by soundless cracking agents. Such agents are commonly used in controlled demolition and have gained prominence due to their non-explosive and environmentally friendly nature. This research employs a comprehensive experimental and computational approach to analyze crack initiation, propagation, and eventual failure in beams subjected to soundless cracking agents. Experimental testing involves the application of various cracking agents under controlled conditions to understand their effects on the structural integrity of beams. High-resolution imaging and strain measurements are used to capture the crack propagation process. In parallel, numerical simulations are conducted using advanced finite element analysis (FEA) techniques to model crack propagation in beams, considering parameters such as cracking agent composition, loading conditions, and beam properties. The FEA models are validated against experimental results, ensuring their accuracy in predicting crack propagation patterns. The findings of this study provide valuable insights for optimizing demolition strategies, allowing engineers and demolition experts to make informed decisions regarding the selection of cracking agents, their application techniques, and structural reinforcement methods. Ultimately, this research contributes to enhancing the safety, efficiency, and sustainability of demolition practices in the construction industry, reducing environmental impact and protecting adjacent structures and the surrounding environment.
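The abstract does not give a formula for the energy release rate named in the title; a commonly used plane-strain relation from linear elastic fracture mechanics is G = K_I²(1 − ν²)/E. The elastic constants below are illustrative values for a concrete-like material, not data from the study.

```python
def energy_release_rate(k_i, e_modulus, poisson_ratio):
    """Plane-strain energy release rate G = K_I^2 * (1 - nu^2) / E.

    With K_I in MPa*sqrt(m) and E in MPa, G comes out in MPa*m (= MJ/m^2).
    """
    return k_i ** 2 * (1.0 - poisson_ratio ** 2) / e_modulus

# Illustrative inputs: K_I = 1 MPa*sqrt(m), E = 30 GPa, nu = 0.2
g = energy_release_rate(1.0, 30000.0, 0.2)  # 3.2e-5 MPa*m, i.e. 32 J/m^2
```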

Keywords: expansion pressure, energy release rate, soundless chemical demolition agent, crack propagation

Procedia PDF Downloads 56
909 Natural Mexican Zeolite Modified with Iron to Remove Arsenic Ions from Water Sources

Authors: Maritza Estela Garay-Rodriguez, Mirella Gutierrez-Arzaluz, Miguel Torres-Rodriguez, Violeta Mugica-Alvarez

Abstract:

Arsenic is an element present in the earth's crust and is dispersed in the environment through natural processes and some anthropogenic activities. It is released naturally into the environment through the weathering and erosion of sulphide minerals, while activities such as mining and the use of pesticides or wood preservatives potentially increase the concentration of arsenic in air, water, and soil. The natural release of arsenic from geological material is a threat to the world's drinking water sources. In the aqueous phase, arsenic occurs in inorganic form, mainly as arsenate and arsenite; the contamination of groundwater by salts of this element gives rise to what is known as endemic regional hydroarsenicism. The International Agency for Research on Cancer (IARC) categorizes inorganic As in Group I, as a substance with proven carcinogenic action in humans. The presence of As in groundwater has been found in several countries, such as Argentina, Mexico, Bangladesh, Canada, and the United States. For the concentration of arsenic in drinking water, the World Health Organization (WHO) and the Environmental Protection Agency (EPA) establish a maximum of 10 μg L⁻¹. In Mexico, in some states such as Hidalgo, Morelos, and Michoacán, arsenic concentrations around 1000 μg L⁻¹ have been found in bodies of water, well above what is allowed by Mexican regulation NOM-127-SSA1-1994, which establishes a limit of 25 μg L⁻¹. Given this problem in Mexico, this research proposes the use of a natural Mexican zeolite (clinoptilolite type), native to the district of Etla in the central valley region of Oaxaca, as an adsorbent for the removal of arsenic. The zeolite was conditioned with iron oxide by the precipitation-impregnation method with a 0.5 M iron nitrate solution, in order to increase the natural adsorption capacity of this material.
The removal of arsenic was carried out in a column with a fixed bed of conditioned zeolite, since this combines the advantages of a conventional filter with those of a natural adsorbent medium, providing continuous treatment that is low cost and relatively easy to operate for implementation in marginalized areas. The zeolite was characterized by XRD, SEM/EDS, and FTIR before and after the arsenic adsorption tests. The results showed that the modification method used is adequate for preparing adsorbent materials, since it does not modify the zeolite structure, and that with a particle size of 1.18 mm, an initial As(V) concentration of 1 ppm, a pH of 7, and room temperature, a removal of 98.7% was obtained with an adsorption capacity of 260 μg As g⁻¹ zeolite. The results indicate that the conditioned zeolite is effective for the elimination of arsenate in water containing up to 1000 μg As L⁻¹ and could be suitable for removing arsenate from water wells.
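The removal percentage and adsorption capacity reported above follow from simple mass-balance relations; a minimal sketch, in which the treated volume and adsorbent mass are hypothetical values chosen only to illustrate the arithmetic:

```python
def removal_efficiency(c0_ug_per_l, ce_ug_per_l):
    """Percentage of arsenic removed, from initial and final concentrations (ug/L)."""
    return 100.0 * (c0_ug_per_l - ce_ug_per_l) / c0_ug_per_l

def adsorption_capacity(c0_ug_per_l, ce_ug_per_l, volume_l, mass_g):
    """Arsenic taken up per gram of zeolite (ug As / g), by mass balance."""
    return (c0_ug_per_l - ce_ug_per_l) * volume_l / mass_g

# Illustrative: a 1000 ug/L feed reduced to 13 ug/L is a 98.7% removal.
eff = removal_efficiency(1000.0, 13.0)
# Hypothetical batch: 1 L treated with 2 g of zeolite.
q = adsorption_capacity(1000.0, 13.0, 1.0, 2.0)
```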

Keywords: adsorption, arsenic, iron conditioning, natural zeolite

Procedia PDF Downloads 169
908 Influence of a Company’s Dynamic Capabilities on Its Innovation Capabilities

Authors: Lovorka Galetic, Zeljko Vukelic

Abstract:

The advanced concepts of strategic and innovation management in the sphere of company dynamic and innovation capabilities, and the achievement of their mutual alignment and a synergy effect, are important elements in business today. This paper analyses the theory and empirically investigates the influence of a company's dynamic capabilities on its innovation capabilities. A new multidimensional model of dynamic capabilities is presented, consisting of five factors appropriate to real-time requirements, while innovation capabilities are considered pursuant to the official OECD and Eurostat standards. After the examination of dynamic and innovation capabilities indicated their theoretical links, the empirical study testing the model and examining the influence of a company's dynamic capabilities on its innovation capabilities showed significant results. In the study, a research model was posed to relate company dynamic and innovation capabilities. One side of the model features the variables that are the determinants of dynamic capabilities, defined through their factors, while the other side features the determinants of innovation capabilities pursuant to the official standards. With regard to the research model, five hypotheses were set. The study was performed in late 2014 on a representative sample of large and very large Croatian enterprises with a minimum of 250 employees. The research instrument was a questionnaire administered to company top management. For both variables, the position of the company was tested in comparison to industry competitors, on a five-point scale. In order to test the hypotheses, correlation tests were performed to determine whether there is a correlation between each individual factor of company dynamic capabilities and the company's innovation capabilities, in line with the research model.
The results indicate a strong correlation between a company's possession of dynamic capabilities, in terms of the factors of the new multidimensional model presented in this paper, and its possession of innovation capabilities. Based on the results, all five hypotheses were accepted. Ultimately, it was concluded that there is a strong association between the dynamic and innovation capabilities of a company.

Keywords: dynamic capabilities, innovation capabilities, competitive advantage, business results

Procedia PDF Downloads 300
907 Uterine Cervical Cancer; Early Treatment Assessment with T2- And Diffusion-Weighted MRI

Authors: Susanne Fridsten, Kristina Hellman, Anders Sundin, Lennart Blomqvist

Abstract:

Background: Patients diagnosed with locally advanced cervical carcinoma are treated with definitive concomitant chemoradiotherapy. Treatment failure occurs in 30-50% of patients, with a very poor prognosis. The treatment is standardized, with a risk of both over- and undertreatment. Consequently, there is a great need for biomarkers able to predict therapy outcome and thereby allow individualized treatment. Aim: To explore the role of T2- and diffusion-weighted magnetic resonance imaging (MRI) for early prediction of therapy outcome and to identify the optimal time point for assessment. Methods: A pilot study including 15 patients with cervical carcinoma stage IIB-IIIB (FIGO 2009) undergoing definitive chemoradiotherapy. All patients underwent MRI four times: at baseline and at 3 weeks, 5 weeks, and 12 weeks after treatment started. Tumour size, size change (∆size), visibility on diffusion-weighted imaging (DWI), apparent diffusion coefficient (ADC), and change of ADC (∆ADC) at the different time points were recorded. Results: 7/15 patients relapsed during the study period, referred to as "poor prognosis" (PP); the remaining eight patients are referred to as "good prognosis" (GP). The tumour size was larger at all time points for PP than for GP. The ∆size between any of the four time points was the same for PP and GP patients. The sensitivity and specificity of remaining tumour on DWI for predicting the prognostic group were highest at 5 weeks: 83% (5/6) and 63% (5/8), respectively. The combination of tumour size at baseline and remaining tumour on DWI at 5 weeks reached an area under the curve (AUC) of 0.83 in ROC analysis. After 12 weeks, no remaining tumour was seen on DWI among patients with GP, as opposed to 2/7 PP patients. Adding ADC to the tumour size measurements did not improve the predictive value at any time point. Conclusion: A large tumour at baseline MRI combined with remaining tumour on DWI at 5 weeks predicted a poor prognosis.
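The quoted sensitivity and specificity follow directly from the confusion-matrix counts at 5 weeks; a minimal sketch:

```python
def sensitivity(true_pos, false_neg):
    """Fraction of relapsing (PP) patients correctly flagged by residual tumour on DWI."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    """Fraction of non-relapsing (GP) patients correctly cleared by DWI."""
    return true_neg / (true_neg + false_pos)

# At 5 weeks: 5 of 6 evaluable PP patients showed residual tumour on DWI,
# and 5 of 8 GP patients did not.
week5_sens = sensitivity(5, 1)  # ~0.83
week5_spec = specificity(5, 3)  # 0.625
```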

Keywords: chemoradiotherapy, diffusion-weighted imaging, magnetic resonance imaging, uterine cervical carcinoma

Procedia PDF Downloads 136