Search results for: digital forensics capacity and capability
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8074

604 Education as a Factor Which Reduces Poverty

Authors: E. V. Fakhrutdinova, Y. S. Kolesnikova, E. A. Karasik, V. M. Zagidullina

Abstract:

Poverty, as a social and economic phenomenon, exists in every society and represents a many-sided problem; in this sense it is universal and has served as a research object for scientists for centuries. Special attention to the problem of poverty in Russia is warranted, first of all, by the critical growth of inequality, the scale of the expansion of poverty, the considerable decline in the level and quality of life of the population, and the reduced availability of education during the period of reform. The spread of poverty to the working members of society and to the youth, who must ensure the reproduction of the population, is alarming. Because poverty weakens the national security of the country, degrades the population, lowers the quality of human capital, complicates the demographic situation and sharpens social contradictions, its reduction supports, among other things, an increase in production. Poverty characterizes the economic situation of an individual or social group that cannot satisfy the minimum requirements necessary for life, the preservation of working capacity and reproduction. Poverty has become one of the critical factors expelling people from the system of institutional interactions, reducing the social space in which their relations are built and breaking their social identity. The problem of poverty in modern society has been complicated by its penetration into many spheres of life. It is known that the negative consequences of poverty appear not only at the personal level of the poor person, but also at the level of interpersonal social interactions, in the decline in the quality and development of human capital, and in the social and economic system as a whole. We conducted research on the influence of education on changes in the poverty level of the population, considering education as a resource for increasing income and social mobility. A dependence of the income of the population on the level of education, and of the availability of education (its level and quality) on family income, is found. A differentiation in the quality and number of educational services for children depending on family income is revealed. The influence of poverty on the availability of education is also studied: we consider expenses on education as a limiter of access to education, and education as a factor that fixes and aggravates property inequality. In solving the problems of poverty, the defining condition is state regulation of social and economic development through the creation of an effective institutional environment. The state has to develop measures to increase the availability of various services, in particular health care and education, to all categories of citizens and especially to the poor. Regarding the availability of education services, special attention has to be paid to the creation of a system of social elevators.

Keywords: poverty, education, human capital, quality of life

Procedia PDF Downloads 323
603 Combined Effect of Zinc Supplementation and Ascaridia galli Infection on Oxidative Status in Broiler Chicks

Authors: Veselin Nanev, Margarita Gabrashanska, Neli Tsocheva-Gaytandzieva

Abstract:

Ascaridiasis in chicks is a major cause of reduced body weight, higher mortality, reduced egg production, poorer meat quality, pathological lesions, blood losses, and secondary infections, and it is responsible for economic losses to the poultry industry. Despite being an economically important parasite, little work has been carried out on the role of antioxidants in the pathogenesis of ascaridiasis. Zinc is a trace element with multiple functions, one of which is its antioxidant ability. The aim of this study was to investigate the combined effect of an organic zinc compound (2Gly·ZnCl2·2H2O) and Ascaridia galli infection on the antioxidant status of broiler chicks. The activities of the antioxidant enzymes superoxide dismutase (SOD) and glutathione peroxidase (GPx), the level of lipid peroxidation expressed as malondialdehyde (MDA), and plasma zinc were investigated in chicks experimentally infected with Ascaridia galli. Parasite burden was studied as well. The study was performed on 80 broiler chicks (Cobb 500 hybrids) divided into four groups: group 1, control (non-treated and non-infected); group 2, infected with embryonated eggs of A. galli and without treatment; group 3, treated only with the 2Gly·ZnCl2·2H2O compound; and group 4, infected and supplemented with the Zn compound. The chicks in groups 2 and 4 were infected orally with 450 embryonated A. galli eggs on day 14. The chicks from groups 3 and 4 received 40 mg of the Zn compound per kg of feed for 10 days starting after the first week of age. All chicks were fed and managed identically and were killed on day 60 post infection. Helminthological, biochemical and statistical methods were applied. Reduced plasma Zn content was observed in the infected chicks compared to the controls, and zinc supplementation did not restore it. Cu,Zn-SOD activity was significantly decreased in the infected chicks compared to the controls, whereas GPx activity was significantly higher in the infected chicks than in the controls. The increased GPx activity together with the decreased Cu,Zn-SOD activity revealed an unbalanced antioxidant defense capacity. The increased MDA level and the changes in enzyme activities showed the development of oxidative stress during infection with A. galli. Zn compound supplementation influenced the activity of both antioxidant enzymes (SOD, GPx) and reduced MDA in the infected chicks. Organic zinc supplementation improved the antioxidant defense and protected the hosts from oxidative damage, but had no effect on the parasite burden: the number of helminths was similar in both infected groups and was not changed by Zn supplementation. Administration of the oral 2Gly·ZnCl2·2H2O compound thus proved useful in chicks infected with A. galli by improving their antioxidant potential.

Keywords: Ascaridia galli, antioxidants, broiler chicks, zinc supplementation

Procedia PDF Downloads 138
602 XAI Implemented Prognostic Framework: Condition Monitoring and Alert System Based on RUL and Sensory Data

Authors: Faruk Ozdemir, Roy Kalawsky, Peter Hubbard

Abstract:

Accurate estimation of remaining useful life (RUL) provides a basis for effective predictive maintenance, reducing unexpected downtime for industrial equipment. However, while models such as the Random Forest have strong predictive capabilities, they are so-called 'black box' models whose limited interpretability hampers the critical diagnostic decisions required in industries such as aviation. The purpose of this work is to present a prognostic framework that embeds Explainable Artificial Intelligence (XAI) techniques in order to provide essential transparency into the decision-making mechanisms of machine learning methods operating on sensor data, with the objective of producing actionable insights for the aviation industry. Sensor readings are gathered from critical equipment such as turbofan jet engines and landing gear, and the RUL is predicted by a Random Forest model through steps of data gathering, feature engineering, model training, and evaluation. The datasets for these critical components are trained and evaluated independently. While the predictions are suitable and the performance metrics reasonably good, such complex models obscure the reasoning behind their predictions and may undermine the confidence of decision-makers and maintenance teams. In a second phase, global explanations using SHAP and local explanations using LIME are therefore applied to bridge this reliability gap in industrial contexts. These tools analyze model decisions, highlighting feature importance and explaining how each input variable affects the output; this dual approach offers both a general comprehension of overall model behavior and detailed insight into specific predictions. In its third component, the proposed framework incorporates causal analysis in the form of Granger causality tests in order to move beyond correlation toward causation. The model then not only predicts failures but also presents the reasons for them, linking key sensor features to possible failure mechanisms for the relevant personnel. Establishing the causality between sensor behaviors and equipment failures creates considerable value for maintenance teams through better root-cause identification and more effective preventive measures, and makes the system more explainable. In a further stage, several simple surrogate models, including Decision Trees and Linear Models, are used to approximate the complex Random Forest model. These simpler models act as backups, replicating important aspects of the original model's behavior. When the feature explanations obtained from a surrogate model are cross-validated against the primary model, the derived insights become more reliable and provide an intuitive sense of how the input variables affect the predictions. We then create an iterative explainable feedback loop, where the knowledge learned from the explainability methods feeds back into the training of the models, driving a cycle of continuous improvement in both model accuracy and interpretability over time. By systematically integrating new findings, the model is expected to adapt to changed conditions and further develop its prognostic capability. These components are finally presented to decision-makers through a fully transparent condition monitoring and alert system.
The system provides a holistic tool for maintenance operations by leveraging RUL predictions, feature importance scores, persistent sensor threshold values, and autonomous alert mechanisms. Since the system provides explanations for its predictions along with active alerts, maintenance personnel can make informed decisions about the correct interventions to extend the life of critical machinery.
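
As a rough illustration of the pipeline described above (not the authors' implementation), the sketch below pairs a scikit-learn Random Forest RUL regressor with SHAP global explanations; the sensor feature names, synthetic data, and RUL targets are all assumptions for demonstration.

```python
# Illustrative sketch only: a Random Forest RUL regressor explained with SHAP.
# Feature names and data are hypothetical stand-ins for C-MAPSS-like sensor channels.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = pd.DataFrame(rng.random((500, 4)),
                 columns=["T24_temp", "P30_pressure", "Nf_fan_speed", "vibration"])
y = rng.random(500) * 200            # stand-in RUL targets (cycles)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

# Global explanations: mean |SHAP| ranks which sensor features drive the RUL predictions.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
importance = np.abs(shap_values).mean(axis=0)
for name, score in sorted(zip(X.columns, importance), key=lambda t: -t[1]):
    print(f"{name}: mean |SHAP| = {score:.3f}")
```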

Keywords: predictive maintenance, explainable artificial intelligence, prognostic, RUL, machine learning, turbofan engines, C-MAPSS dataset

Procedia PDF Downloads 12
601 MigrationR: An R Package for Analyzing Bird Migration Data Based on Satellite Tracking

Authors: Xinhai Li, Huidong Tian, Yumin Guo

Abstract:

Bird migration is a fantastic natural phenomenon. In recent years, the use of GPS transmitters has generated a vast amount of data, and the Movebank platform has made these data publicly accessible; what researchers now need are data analysis tools. Although there are approximately 90 R packages dedicated to animal movement analysis, the capacity for comprehensive processing of bird migration data remains limited. Hence, we introduce a novel package called migrationR. This package enables the calculation of movement speed, direction, changes in direction, flight duration, and daily and annual movement distances. Furthermore, it can pinpoint the starting and ending dates of migration, estimate nest site locations and stopovers, and visualize movement trajectories at various time scales. migrationR distinguishes individuals through NMDS (non-metric multidimensional scaling) coordinates based on movement variables such as speed, flight duration, path tortuosity, and migration timing. A distinctive aspect of the package is the development of a hetero-occurrences species distribution model that takes into account the daily rhythm of individual birds across different landcover types. Habitat use for foraging and roosting differs significantly for many waterbirds. For example, White-naped Cranes at Poyang Lake in China typically forage in croplands and roost in shallow water areas; both of these occurrence types are of equal importance. Optimal habitats consist of a combination of croplands and shallow waters, whereas suboptimal habitats lack both, which forces birds to fly extensively. With migrationR, we conduct species distribution modeling for foraging and roosting separately and use the moving distance between croplands and shallow water areas as an index of overall habitat suitability. This approach offers a more nuanced understanding of the habitat requirements of migratory birds and enhances our ability to analyze and interpret their movement patterns effectively. The functions of migrationR are demonstrated using our own tracking data of 78 White-naped Crane individuals from 2014 to 2023, comprising over one million valid locations in total. migrationR can be installed from its GitHub repository by executing the following command: remotes::install_github("Xinhai-Li/migrationR").
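
migrationR itself is an R package; purely as a language-neutral illustration of the kind of derived variables it computes (step distance, travel speed, daily distance), here is a minimal Python sketch on a hypothetical GPS track. The column names and the haversine helper are assumptions, not the package's API.

```python
# Illustrative only: deriving step distance, speed and daily distance from GPS fixes.
# This is not the migrationR API; the track and column names are invented.
import numpy as np
import pandas as pd

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between consecutive fixes, in km."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = np.sin((lat2 - lat1) / 2) ** 2 + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2
    return 6371.0 * 2 * np.arcsin(np.sqrt(a))

# Hypothetical track: one bird, timestamped fixes every 6 hours.
track = pd.DataFrame({
    "timestamp": pd.date_range("2023-03-01", periods=4, freq="6h"),
    "lat": [28.1, 28.9, 29.8, 30.6],
    "lon": [116.3, 116.1, 115.8, 115.6],
})
track["step_km"] = haversine_km(track["lat"].shift(), track["lon"].shift(),
                                track["lat"], track["lon"])
track["hours"] = track["timestamp"].diff().dt.total_seconds() / 3600
track["speed_kmh"] = track["step_km"] / track["hours"]
daily_distance = track.groupby(track["timestamp"].dt.date)["step_km"].sum()
print(daily_distance)
```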

Keywords: bird migration, hetero-occurrences species distribution model, migrationR, R package, satellite telemetry

Procedia PDF Downloads 73
600 Spectroscopic Autoradiography of Alpha Particles on Geologic Samples at the Thin Section Scale Using a Parallel Ionization Multiplier Gaseous Detector

Authors: Hugo Lefeuvre, Jerôme Donnard, Michael Descostes, Sophie Billon, Samuel Duval, Tugdual Oger, Herve Toubon, Paul Sardini

Abstract:

Spectroscopic autoradiography is a method of interest for geological sample analysis, since researchers face issues such as radioelement identification and quantification in environmental studies. Imaging gaseous ionization detectors find their place in geosciences for specific measurements of radioactivity, improving the monitoring of natural processes with naturally occurring radioactive tracers, and also serve the nuclear industry linked to the mining sector. In geological samples, locating and identifying the radioactive-bearing minerals at the thin-section scale remains a major challenge, as the detection limit of the usual elementary microprobe techniques is far higher than the concentration of most natural radioactive decay products. In the case of uranium in a geomaterial, the spatial distribution of each decay product is of interest for relating radionuclide concentrations to mineralogy. The present study aims to provide a spectroscopic autoradiography analysis method for measuring the initial energy of alpha particles with a parallel ionization multiplier gaseous detector. The analysis method was developed with the help of Geant4 modelling of the detector. The track of alpha particles recorded in the gas detector allows the simultaneous measurement of the initial point of emission and the reconstruction of the initial particle energy through a selection based on the linear energy distribution. This spectroscopic autoradiography method successfully reproduced the alpha spectra from the 238U decay chain on a geological sample at the thin-section scale. The characteristics of this measurement are an energy spectrum resolution of 17.2% (FWHM) at 4647 keV and a spatial resolution of at least 50 µm. Even if the efficiency of energy spectrum reconstruction is low (4.4%) compared to the efficiency of a simple autoradiograph (50%), this novel measurement approach offers the opportunity to select areas on an autoradiograph and perform an energy spectrum analysis within them. This opens up possibilities for the detailed analysis of heterogeneous geological samples containing natural alpha emitters such as uranium-238 and radium-226. The measurement will allow the study of the spatial distribution of uranium and its daughters in geomaterials by coupling it with scanning electron microscope characterizations. The direct application of this dual (energy-position) analysis modality will be the subject of future developments; the measurement of the radioactive equilibrium state of heterogeneous geological structures and the quantitative mapping of 226Ra radioactivity are now being actively studied.
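
A resolution figure such as the quoted 17.2% (FWHM) at 4647 keV follows from fitting a Gaussian to the reconstructed peak and dividing its FWHM by the centroid. The sketch below illustrates only that arithmetic on synthetic counts; it does not reproduce the detector, the track reconstruction, or the Geant4 model.

```python
# Illustrative only: estimating FWHM energy resolution from a reconstructed alpha peak.
# The spectrum here is synthetic; the point is the FWHM / centroid arithmetic.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(E, amplitude, centroid, sigma):
    return amplitude * np.exp(-0.5 * ((E - centroid) / sigma) ** 2)

energy = np.linspace(3500, 5500, 400)          # keV
rng = np.random.default_rng(0)
counts = gaussian(energy, 1000, 4647, 340) + rng.poisson(20, energy.size)

popt, _ = curve_fit(gaussian, energy, counts, p0=[800, 4600, 300])
amplitude, centroid, sigma = popt
fwhm = 2.355 * abs(sigma)                      # FWHM = 2*sqrt(2*ln 2)*sigma
print(f"Resolution: {100 * fwhm / centroid:.1f}% FWHM at {centroid:.0f} keV")
```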

Keywords: alpha spectroscopy, digital autoradiography, mining activities, natural decay products

Procedia PDF Downloads 155
599 The Rite of Jihadification in ISIS Modified Video Games: Mass Deception and Dialectic of Religious Regression in Technological Progression

Authors: Venus Torabi

Abstract:

ISIS, the terrorist organization, modified two videogames, ARMA III and Grand Theft Auto 5 (2013), as means of online recruitment and ideological propaganda. The urge to study the mechanism at work, whether it has been successful or not, drives (Digital) Humanities experts to explore how codes of terror, Islamic ideology and recruitment strategies are incorporated into the ludic mechanics of videogames. Another aspect of the significance lies in the fact that this is a latent problem that, to the best of the researcher's knowledge, has not previously been fully addressed in an interdisciplinary framework. Due to the complexity of the subject, the present paper therefore combines game studies with philosophical and religious perspectives to form its methodology. As a contextualized epistemology of such exploitation of videogames, the core argument builds on the notion of the 'Culture Industry' proposed by Theodor W. Adorno and Max Horkheimer in Dialectic of Enlightenment (2002). This article posits that the ideological underpinnings of ISIS's cause, corroborated by the action-bound mechanics of the videogames, adhere to Islamic Eschatology as a furnishing ground and an excuse for exercising terrorism. It is an account of ISIS's modification of videogames, a tool of technological progression, to practice online radicalization. Dialectically, this practice is wrapped in rhetoric for recognizing a religious myth (the advent of a savior), as a hallmark of regression. The study puts forth that ISIS's wreaking havoc on the world, both in reality and within action videogames, negotiates a process of self-assertion in the players of such videogames (by assuming oneself a member of the terrorists) that leads to self-annihilation. It tries to unfold how ludic mod videogames are misused as tools of mass deception towards ethnic cleansing, in reality and in line with the distorted eschatological myth. To conclude, this study positions videogames as a new avenue of mass deception within the framework of the Culture Industry. Yet this emerges as a two-edged sword of mass deception in ISIS's modification of videogames: ISIS is not only trying to hijack minds through online/ludic recruitment, it potentially deceives Muslim communities, or those prone to radicalization, into believing that its terrorist practices are preparing the world for the advent of a religious savior based on Islamic Eschatology. The harsh actions of the videogames potentially seed minds with terrorist propaganda and numb them to violence, and the real world becomes an extension of that harsh virtual environment in a ludic/actual continuum, an extension that contributes, in a clandestine way, to the terrorists' mechanism of mass deception.

Keywords: culture industry, dialectic, ISIS, Islamic eschatology, mass deception, video games

Procedia PDF Downloads 140
598 The Role of Goal Orientation on the Structural-Psychological Empowerment Link in the Public Sector

Authors: Beatriz Garcia-Juan, Ana B. Escrig-Tena, Vicente Roca-Puig

Abstract:

The aim of this article is to conduct a theoretical and empirical study of how the goal orientation (GO) of public employees affects the relationship between the structural and psychological empowerment they experience at their workplaces. In doing so, we follow the structural empowerment (SE) and psychological empowerment (PE) conceptualizations and relate them to the public administration framework, and we review arguments from GO theories and previous related contributions. Empowerment has emerged as an important issue in public sector organizations in the wake of mainstream New Public Management (NPM), the new orientation in the public sector that aims to provide a better service for citizens. It is closely linked to the drive to improve organizational effectiveness through the wise use of human resources. Nevertheless, it is necessary to combine structural (managerial) and psychological (individual) approaches in an integrative study of empowerment. SE refers to a set of initiatives aimed at transferring power from managerial positions to the rest of the employees, while PE is defined as a psychological state of competence, self-determination, impact, and meaning that an employee feels at work. Linking these two perspectives leads to a broader understanding of the empowerment process. Specifically in the public sector, empirical contributions on this relationship are important, particularly as empowerment is a very useful tool with which to face the challenges of the new public context. There is also a need to examine the moderating variables involved in this relationship and to extend research on work motivation in public management. We propose studying the effect of individual orientations such as GO, a concept that refers to the individual disposition toward developing or confirming one's capacity in achievement situations. Employees' GO may be a key factor at work and in workforce selection processes, since it explains differences in personal work interests and in receptiveness to and interpretations of professional development activities. SE practices could affect PE feelings in different ways depending on employees' GO, since employees perceive and respond differently to such practices, which is likely to yield distinct PE results. The model is tested on a sample of 521 Spanish local authority employees. Hierarchical regression analysis was conducted to test the research hypotheses using SPSS 22 software. The results do not confirm the direct link between SE and PE, but show that learning goal orientation (LGO) has considerable moderating power in this relationship, and that its interaction with SE affects employees' PE levels. Therefore, the combination of SE practices and employees' high levels of LGO is an important factor for creating psychologically empowered staff in public organizations.
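
The moderation test described (an SE × LGO interaction predicting PE) corresponds to a regression with an interaction term. The following sketch shows that setup on simulated data; variable names and effect sizes are assumptions, and it does not reproduce the SPSS 22 analysis.

```python
# Illustrative only: moderated regression with an SE x LGO interaction term,
# analogous to (but not reproducing) the hierarchical regression described above.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 521                                    # sample size mentioned in the abstract
df = pd.DataFrame({
    "SE": rng.normal(size=n),              # structural empowerment (standardized)
    "LGO": rng.normal(size=n),             # learning goal orientation (standardized)
})
# Simulated outcome: PE driven mainly by the SE x LGO interaction.
df["PE"] = 0.1 * df["SE"] + 0.2 * df["LGO"] + 0.4 * df["SE"] * df["LGO"] + rng.normal(size=n)

model = smf.ols("PE ~ SE + LGO + SE:LGO", data=df).fit()
print(model.summary().tables[1])           # the SE:LGO row carries the moderation test
```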

Keywords: goal orientation, moderating effect, psychological empowerment, structural empowerment

Procedia PDF Downloads 286
597 Biosorption of Nickel by Penicillium simplicissimum SAU203 Isolated from Indian Metalliferous Mining Overburden

Authors: Suchhanda Ghosh, A. K. Paul

Abstract:

Nickel, an industrially important metal, is not mined in India due to the lack of primary mining resources. However, the chromite deposits occurring in the Sukinda and Baula-Nuasahi regions of Odisha, India, are reported to contain around 0.99% nickel entrapped in the goethite matrix of the lateritic, iron-rich ore. Weathering of the dumped chromite mining overburden often leads to contamination of the ground as well as the surface water with toxic nickel. Microbes inherent to this metal-contaminated environment are reported to be capable of removal as well as detoxification of various metals, including nickel. Nickel-resistant fungal isolates obtained in pure form from the metal-rich overburden were evaluated for their potential to biosorb nickel using their dried biomass. Penicillium simplicissimum SAU203 was the best nickel biosorbent among the 20 fungi tested and was capable of sorbing 16.85 mg Ni/g biomass from a solution containing 50 mg/l of Ni. The identity of the isolate was confirmed using 18S rRNA gene analysis. The sorption capacity of the isolate was further standardized following the Langmuir and Freundlich adsorption isotherm models, and the results reflected energy-efficient sorption. Comparative Fourier-transform infrared spectroscopy studies of the nickel-loaded and control biomass revealed the involvement of hydroxyl, amine and carboxylic groups in Ni binding. The sorption process was also optimized for several standard parameters such as initial metal ion concentration, initial sorbent concentration, incubation temperature and pH, presence of additional cations, and pre-treatment of the biomass with different chemicals. Optimization led to significant improvements in nickel biosorption onto the fungal biomass: P. simplicissimum SAU203 could sorb 54.73 mg Ni/g biomass at an initial Ni concentration of 200 mg/l in solution, and 21.8 mg Ni/g biomass at an initial biomass concentration of 1 g/l of solution. The optimum temperature and pH for biosorption were 30°C and pH 6.5, respectively. The presence of Zn and Fe ions improved the sorption of Ni(II), whereas cobalt had a negative impact. Pre-treatment of the biomass with various chemical and physical agents affected the efficiency of Ni sorption by P. simplicissimum SAU203 biomass: autoclaving as well as treatment with 0.5 M sulfuric acid or acetic acid reduced the sorption compared to the untreated biomass, whereas biomass treated with NaOH, Na₂CO₃ or Tween 80 (0.5 M) showed augmented metal sorption. Hence, on the basis of the present study, it can be concluded that P. simplicissimum SAU203 has the potential for the removal as well as detoxification of nickel from contaminated environments in general, and particularly from the chromite mining areas of Odisha, India.
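
Capacities such as the reported 16.85 or 54.73 mg Ni/g are typically obtained by fitting equilibrium data to the Langmuir and Freundlich isotherms. The sketch below shows such a fit on hypothetical equilibrium points; the data values are illustrative, not the study's measurements.

```python
# Illustrative only: fitting Langmuir and Freundlich isotherms to hypothetical
# equilibrium data (Ce in mg/L, qe in mg Ni per g biomass); not the study's raw data.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(Ce, qmax, KL):
    return qmax * KL * Ce / (1 + KL * Ce)

def freundlich(Ce, KF, n):
    return KF * Ce ** (1 / n)

Ce = np.array([5.0, 10.0, 25.0, 50.0, 100.0, 200.0])
qe = np.array([4.1, 7.6, 12.9, 16.9, 35.4, 54.7])

lang_p, _ = curve_fit(langmuir, Ce, qe, p0=[60.0, 0.01])
freu_p, _ = curve_fit(freundlich, Ce, qe, p0=[1.0, 1.0])
print(f"Langmuir:   qmax = {lang_p[0]:.1f} mg/g, KL = {lang_p[1]:.4f} L/mg")
print(f"Freundlich: KF = {freu_p[0]:.2f}, n = {freu_p[1]:.2f}")
```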

Keywords: nickel, fungal biosorption, Penicillium simplicissimum SAU203, Indian chromite mines, mining overburden

Procedia PDF Downloads 193
598 The Roman Fora in North Africa: Towards a Supportive Protocol for the Decision on Morphological Restitution

Authors: Dhouha Laribi Galalou, Najla Allani Bouhoula, Atef Hammouda

Abstract:

This research addresses the fundamental question of the morphological restitution of built archaeology, placing it in its paradigmatic context and seeking answers to it. Indeed, understanding the object of study, analyzing it, and devising a methodology for solving the morphological problem posed are manageable only by means of a thoughtful strategy that draws on well-defined epistemological scaffolding. The crisis of natural reasoning in archaeology has generated multiple changes in this field, ranging from the use of new tools to the integration of archaeological information systems in which several disciplines interact. The built archaeological object is also an architectural and morphological object: a set of articulated elementary data whose understanding can be approached from a logicist point of view. Morphological restitution is no exception to this rule, and the exchange between the different disciplines draws on the capacity of each to frame reflection on the incomplete elements of a given architecture, or on its different phases and multiple states of existence. The logicist sequence is furnished by the set of scattered or destroyed elements found, but also by what can be called a rule base, which contains the rules for the architectural construction of the object. A knowledge base built from the archaeological literature also provides a reference that enters into the search for forms and articulations. The choice of the Roman forum in North Africa is justified by the great urban and architectural characteristics of this entity: research on the forum draws on a fairly large knowledge base and also provides the researcher with material to study, from a morphological and architectural point of view, from the scale of the city down to the architectural detail. The experimentation of the knowledge deduced at the paradigmatic level, as well as the deduction of an analysis model, is then carried out on the basis of a well-defined context, which frames the experimentation from the elaboration of the morphological information container attached to the rule base and the knowledge base. The use of logicist analysis and artificial intelligence has allowed us first to question the aspects already known in order to measure the credibility of our system, which remains above all a decision support tool for the morphological restitution of Roman fora in North Africa. This paper presents a first experimentation of the model elaborated during this research, a model framed by a paradigmatic discussion that positions the research in relation to existing paradigmatic and experimental knowledge on the issue.
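
Purely as a schematic illustration of how a rule base can act as a decision support filter on candidate restitutions (and not as a description of the authors' system), the sketch below checks an invented forum-layout hypothesis against a handful of invented architectural rules.

```python
# Schematic illustration only: a tiny rule base filtering a candidate restitution
# hypothesis of a forum plan. Element names and rules are invented for illustration.
candidate = {
    "temple_on_main_axis": True,
    "portico_surrounds_plaza": True,
    "basilica_adjacent_to_plaza": False,
    "plaza_length_to_width": 2.6,
}

# Rule base: each entry is (predicate over the candidate, message when violated).
rules = [
    (lambda c: c["temple_on_main_axis"], "temple should sit on the main axis"),
    (lambda c: c["portico_surrounds_plaza"], "plaza should be framed by a portico"),
    (lambda c: c["basilica_adjacent_to_plaza"], "a basilica is expected adjacent to the plaza"),
    (lambda c: 1.5 <= c["plaza_length_to_width"] <= 3.0, "plaza proportions outside the expected range"),
]

violations = [msg for rule, msg in rules if not rule(candidate)]
print("Plausible restitution" if not violations else f"Review needed: {violations}")
```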

Keywords: classical reasoning, logicist reasoning, archaeology, architecture, roman forum, morphology, calculation

Procedia PDF Downloads 152
595 Consumer Cognitive Models of Vaccine Attitudes: Behavioral Informed Strategies Promoting Vaccination Policy in Greece

Authors: Halkiopoulos Constantinos, Koutsopoulou Ioanna, Gkintoni Evgenia, Antonopoulou Hera

Abstract:

Immunization is an essential part of health care in times of pandemics such as COVID-19 and aims to protect not only the health of the population but also the health and sustainability of the economies of the countries affected. More than 3.44 billion doses have reportedly been administered so far, which corresponds to 45 doses per 100 people. Vaccination programs have been promoted and accepted differently in different countries and therefore proceeded in different ways and at different speeds, with most countries directing them first towards people with vulnerable chronic or recent health statuses. Large-scale restriction measures or lockdowns, personal protection measures such as masks and gloves, and a decrease in leisure and sports activities were also implemented around the world as part of the health protection strategies against the COVID-19 pandemic. This research presents an analysis of variations in people's attitudes towards vaccination based on demographic, social and epidemiological characteristics and health status on the one hand, and on perception of health, health satisfaction, pain, and quality of life on the other. 1500 Greek e-consumers, recruited mainly through social media, voluntarily took part in an online survey. The questionnaires included the demographic, social and medical characteristics of the participants, as well as questions on people's willingness to be vaccinated and their opinion on whether there should be a vaccine against COVID-19. Other stressor factors were also recorded, such as the loss of someone close due to COVID-19 or staying at home in quarantine after being infected with COVID-19. The WHOQOL-BREF and the GLOBAL PSYCHOTRAUMA SCREEN (GPS) were used in this study with the kind permission of the WHO and the International Society for Traumatic Stress Studies. Attitudes towards vaccination varied significantly in relation to age, level of education, health status and consumer behavior. Health professionals' attitudes also varied in relation to age, level of education, profession, health status and consumer needs. Vaccines have so far been the most common technological aid of human civilization in the fight against viruses. The results of this study can be used by health managers and digital marketers of pharmaceutical companies, as well as by other staff involved in vaccination programs, for designing health policy immunization strategies during pandemics, in order to foster positive attitudes towards vaccination and to vaccinate larger populations in shorter periods of time after the outbreak of a pandemic. Health staff need to be trained, aided and supervised to carry out vaccination programs and to be protected through vaccination themselves. Feedback on each country's vaccination program, and its setbacks, deficiencies and delays, should be addressed and worked out.

Keywords: consumer behavior, cognitive models, vaccination policy, pandemic, COVID-19, Greece

Procedia PDF Downloads 189
594 Investigating Best Practice Energy Efficiency Policies and Programs, and Their Replication Potential for Residential Sector of Saudi Arabia

Authors: Habib Alshuwaikhat, Nahid Hossain

Abstract:

The residential sector consumes more than half of the electricity produced in Saudi Arabia, and fossil fuel is the main source of energy to meet the growing household electricity demand in the Kingdom. Several studies have forecasted, and expressed concern, that unless domestic energy demand growth is controlled, it will reduce Saudi Arabia's crude oil export capacity within a decade, and the Kingdom is likely to be incapable of exporting crude oil within the next three decades. Though the Saudi government has started to address the domestic energy demand growth issue, the demand-side energy management policies and programs are focused on the industrial and commercial sectors. There is therefore an urgent need to develop a comprehensive energy efficiency strategy addressing efficient energy use in the residential sector of the Kingdom. As Saudi Arabia is at a primary stage in addressing energy efficiency issues in its residential sector, the Kingdom has scope to learn from global energy efficiency practices and design its own energy efficiency policies and programs. To do that sustainably, however, it is essential to address the local context of energy efficiency and to find policies and programs that fit that context. Thus, the objective of this study was to identify globally best practice energy efficiency policies and programs in the residential sector that have replication potential in Saudi Arabia. In this regard, two multi-criteria decision analysis matrices were developed to evaluate energy efficiency policies and programs: the first matrix was used to evaluate global energy efficiency policies and programs, and the second to evaluate the replication potential of global best practice policies and programs for Saudi Arabia. The Wuppertal Institute's guidelines for energy efficiency policy evaluation were used to develop the matrices, and the attributes of the matrices were set through a literature review. The study reveals that the best practice energy efficiency policies and programs with good replication potential for Saudi Arabia are those which have multiple components addressing energy efficiency and which are diversified in their characteristics. The study also indicates that the more diversified components a policy or program includes, the more replication potential it has for the Kingdom. This finding is consistent with other studies, where it is observed that successful energy efficiency practice requires introducing multiple policy components as a cluster rather than concentrating on a single policy measure. The developed multi-criteria decision analysis matrices for energy efficiency policy and program evaluation could be utilized to assess the replication potential of other globally best practice energy efficiency policies and programs for the residential sector of the Kingdom. In addition, they have the potential to guide Saudi policy makers in adopting and formulating the Kingdom's own energy efficiency policies and programs.
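
At its simplest, a multi-criteria decision analysis matrix of the kind described reduces to weighted scoring of each policy against the evaluation attributes. The sketch below illustrates that mechanic with invented criteria, weights and scores; it does not reproduce the Wuppertal Institute guidelines or the study's matrices.

```python
# Illustrative only: a simple weighted-sum decision matrix for ranking energy
# efficiency policies by replication potential. Criteria, weights and scores are invented.
policies = {
    "Appliance labelling + rebates": {"local_fit": 4, "cost": 3, "multi_component": 5, "enforceability": 4},
    "Building code retrofit mandate": {"local_fit": 3, "cost": 2, "multi_component": 4, "enforceability": 5},
    "Awareness campaign only":        {"local_fit": 4, "cost": 5, "multi_component": 1, "enforceability": 2},
}
weights = {"local_fit": 0.35, "cost": 0.20, "multi_component": 0.30, "enforceability": 0.15}

# Weighted sum per policy, then rank from highest to lowest replication potential.
scores = {name: sum(weights[c] * v for c, v in attrs.items()) for name, attrs in policies.items()}
for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{score:.2f}  {name}")
```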

Keywords: Saudi Arabia, residential sector, energy efficiency, policy evaluation

Procedia PDF Downloads 498
593 Hybrid GNN-Based Machine Learning Forecasting Model for Industrial IoT Applications

Authors: Atish Bagchi, Siva Chandrasekaran

Abstract:

Background: According to World Bank national accounts data, the estimated global manufacturing value-added output in 2020 was 13.74 trillion USD. These manufacturing processes are monitored, modelled, and controlled by advanced, real-time, computer-based systems, e.g., Industrial IoT, PLC, SCADA, etc. These systems measure and manipulate a set of physical variables, e.g., temperature, pressure, etc. Despite the use of IoT, SCADA, etc., in manufacturing, studies suggest that unplanned downtime leads to economic losses of approximately 864 billion USD each year. Therefore, real-time, accurate detection, classification and prediction of machine behaviour are needed to minimise financial losses. Although vast literature exists on time-series data processing using machine learning, the challenges faced by industry that lead to unplanned downtimes are: the current algorithms do not efficiently handle the high-volume streaming data from industrial IoT sensors and were tested on static and simulated datasets; while the existing algorithms can detect significant 'point' outliers, most do not handle contextual outliers (e.g., values within the normal range but occurring at an unexpected time of day) or subtle changes in machine behaviour; and machines are revamped periodically as part of planned maintenance programmes, which changes the assumptions on which the original AI models were created and trained. Aim: This research study aims to deliver a Graph Neural Network (GNN) based hybrid forecasting model that interfaces with the real-time machine control system and can detect and predict machine behaviour and behavioural changes (anomalies) in real time. This research will help manufacturing industries and utilities, e.g., water, electricity, etc., reduce unplanned downtimes and the consequential financial losses. Method: The data stored within a process control system, e.g., Industrial IoT or a Data Historian, is generally sampled during data acquisition from the sensor (source) and again when persisting in the Data Historian, to optimise storage and query performance. The sampling may inadvertently discard values that contain subtle aspects of behavioural changes in machines. This research proposes a hybrid forecasting and classification model which combines the expressive and extrapolation capability of a GNN, enhanced with estimates of entropy and spectral changes in the sampled data and additional temporal contexts, to reconstruct the likely temporal trajectory of machine behavioural changes. The proposed real-time model belongs to the deep learning category of machine learning and interfaces with the sensors directly or through a Process Data Historian, SCADA, etc., to perform forecasting and classification tasks. Results: The model was interfaced with a Data Historian holding time-series data from 4 flow sensors within a water treatment plant for 45 days. The recorded sampling interval for a sensor varied from 10 seconds to 30 minutes. Approximately 65% of the available data was used for training the model, 20% for validation, and the rest for testing. The model identified the anomalies within the water treatment plant and predicted the plant's performance. These results were compared with the data reported by the plant's SCADA-Historian system and the official data reported by the plant authorities. The model's accuracy was considerably higher (by 20%) than that reported by the SCADA-Historian system and matched the validated results declared by the plant auditors.
Conclusions: The research demonstrates that a hybrid GNN-based approach enhanced with entropy calculation and spectral information can effectively detect and predict a machine's behavioural changes. The model can interface with a plant's process control system in real time to perform forecasting and classification tasks, helping asset management engineers operate their machines more efficiently and reduce unplanned downtimes. A series of trials of this model is planned in other manufacturing industries.
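
As a hedged illustration of the entropy and spectral descriptors the abstract says are combined with the GNN, the sketch below computes the Shannon entropy and dominant spectral frequency of one synthetic sensor window; the sampling rate, window length and signal are assumptions, not the plant's data.

```python
# Illustrative only: Shannon entropy and dominant spectral frequency of a sensor
# window, the kind of auxiliary features described as feeding the hybrid GNN model.
import numpy as np
from scipy.signal import periodogram

def window_features(x, fs=1.0, bins=32):
    """Return (entropy, dominant_frequency) for one window of samples."""
    hist, _ = np.histogram(x, bins=bins, density=True)
    p = hist[hist > 0]
    p = p / p.sum()
    entropy = -np.sum(p * np.log2(p))            # Shannon entropy of the value distribution
    freqs, power = periodogram(x, fs=fs)
    dominant = freqs[np.argmax(power[1:]) + 1]   # skip the DC component
    return entropy, dominant

rng = np.random.default_rng(0)
t = np.arange(0, 600)                            # e.g. 600 samples at 1 Hz
flow = 10 + np.sin(2 * np.pi * 0.05 * t) + 0.3 * rng.normal(size=t.size)
print(window_features(flow, fs=1.0))
```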

Keywords: GNN, entropy, anomaly detection, industrial time-series, AI, IoT, Industry 4.0, machine learning

Procedia PDF Downloads 152
592 Cognitive Performance and Everyday Functionality in Healthy Greek Seniors

Authors: George Pavlidis, Ana Vivas

Abstract:

The demographic shift towards an aging population has stimulated the examination of seniors' mental health and ability to live independently. The corresponding literature depicts the relation between cognitive decline and everyday functionality with aging, focusing largely on individuals who are approaching or have crossed the threshold of various forms of neuropathology and disability. In this context, a recent meta-analysis depicts a moderate relation between cognitive performance and everyday functionality in AD sufferers. However, there has been no analogous effort to examine this relation in the healthy spectrum of aging (i.e., in samples that are not challenged by a neurodegenerative disease). There is a consensus that the assessment tools designed to detect neuropathology are distinct from those that assess cognitive performance in healthy adults, so their universal use in both cognitively challenged and healthy adults is not always valid; the same holds for the assessment of everyday functionality. In addition, it is argued that everyday functionality should be examined with culturally adjusted assessment tools, since many vital everyday tasks are heterotypical among distinct cultures. Therefore, this study set out to examine the relation between cognitive performance and everyday functionality (a) in the healthy spectrum of aging and (b) by adjusting the everyday functionality tools EPT and OTDL-R to the Greek cultural context. In Greece, 107 cognitively healthy seniors (mean age = 62.24) completed a battery of neuropsychological tests and everyday functionality tests, both carefully chosen to be sensitive to fluctuations of performance within the healthy spectrum of cognitive performance and everyday functionality. The everyday functionality assessment tools were modified to reflect the local cultural context (i.e., EPT-G and OTDL-G). The results showed that performance on all everyday functionality measures declines with age (r from .197 to .509). Statistically significant correlations emerged between cognitive performance and everyday functionality assessments, ranging from r = .202 to r = .510. A series of independent regression analyses including the scores of the cognitive assessments yielded statistically significant models that explained between 20.9% and 32.4% of the variance (adjusted R²) in the everyday functionality indexes. All everyday functionality measures were independently predicted by the TMT B-A index, an indicator of executive function. Stepwise regression analyses showed that TMT B-A and age were statistically significant independent predictors of EPT-G and OTDL-G. It is concluded that everyday functionality declines with age and that cognitive performance and everyday functionality may be related in the healthy spectrum of aging. Age does not seem to be the sole contributing factor in everyday functionality decline; executive control contributes as well. Moreover, the EPT-G and OTDL-G are valuable tools for assessing everyday functionality in Greek seniors who are not cognitively challenged, especially for research purposes. Future research should examine the factors contributing to better cognitive vitality, especially in executive control, as vital for maintaining the capacity for independent living with aging.
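
The reported analyses correspond to multiple regression with adjusted R² as the variance-explained statistic. The sketch below shows that setup on simulated data, with age and TMT B-A predicting a hypothetical EPT-G score; the coefficients and distributions are assumptions, not the study's data.

```python
# Illustrative only: multiple regression of an everyday functionality score (e.g. EPT-G)
# on age and TMT B-A, reporting adjusted R-squared. Data are simulated, not the study's.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 107                                        # sample size reported in the abstract
df = pd.DataFrame({"age": rng.uniform(55, 80, n),
                   "tmt_b_minus_a": rng.normal(45, 15, n)})
df["ept_g"] = 30 - 0.1 * df["age"] - 0.08 * df["tmt_b_minus_a"] + rng.normal(0, 2, n)

fit = smf.ols("ept_g ~ age + tmt_b_minus_a", data=df).fit()
print(f"Adjusted R2 = {fit.rsquared_adj:.3f}")
print(fit.params)                              # independent contributions of age and TMT B-A
```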

Keywords: cognition, everyday functionality, aging, cognitive decline, healthy aging, Greece

Procedia PDF Downloads 528
591 The Influence of Microsilica on the Cluster Cracks' Geometry of Cement Paste

Authors: Maciej Szeląg

Abstract:

The changing nature of the environmental impacts under which cement composites operate causes a number of phenomena in the structure of the material which result in volume deformation of the composite. These strains can cause the composite to crack. Cracks merge by propagation or intersect to form a characteristic structure known as cluster cracks. This characteristic mesh of cracks is crucial to almost all building materials working under service loads. Particularly dangerous for a cement matrix is a sudden load of elevated temperature – a thermal shock. A large temperature gradient between the outer surface and the material's interior, arising in a relatively short period of time, can result in cracks forming on the surface and within the volume of the material. In this paper, image analysis tools were used to analyze the geometry of the cluster cracks of cement pastes. Four series of specimens made of two different Portland cements were tested; in addition, two of the series included microsilica as a substitute for 10% of the cement. Within each series, specimens were prepared at three w/b (water/binder) ratios: 0.4, 0.5 and 0.6. The cluster cracks were created by suddenly loading the samples with an elevated temperature of 250°C. Images of the cracked surfaces were obtained by scanning at 2400 DPI, and digital processing and measurements were performed using ImageJ v. 1.46r software. To describe the structure of the cluster cracks, three stereological parameters were proposed: the average cluster area (A̅), the average length of the cluster perimeter (L̅), and the average opening width of a crack between clusters (I̅). The aim of the study was to identify and evaluate the relationships between the measured stereological parameters and the compressive strength and bulk density of the modified cement pastes. The tests of the mechanical and physical features were carried out in accordance with EN standards. The curves describing the relationships were developed using the least squares method, and the quality of the curve fitting to the empirical data was evaluated using three diagnostic statistics: the coefficient of determination R², the standard error of estimation Se, and the coefficient of random variation W. The use of image analysis allowed a quantitative description of the cluster cracks' geometry. Based on the obtained results, a strong correlation was found between A̅ and L̅, reflecting the fractal nature of the cluster crack formation process. It was noted that the compressive strength and the bulk density of the cement pastes decrease with an increase in the values of the stereological parameters. It was also found that the main factors impacting the cluster cracks' geometry are the cement particle size and the overall binder content in the volume of the material. The microsilica reduced the A̅, L̅ and I̅ values compared with those obtained for the plain cement paste samples, which is attributed to the pozzolanic properties of the microsilica.
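
A hedged sketch of how stereological parameters such as A̅ and L̅ can be measured from a segmented binary image by connected-component labelling; this uses Python/scikit-image on a synthetic grid of cracks and is not the ImageJ v. 1.46r workflow applied in the study.

```python
# Illustrative only: measuring average cluster area and perimeter from a binary
# image of crack-bounded clusters, analogous to the stereological parameters A and L.
import numpy as np
from skimage.measure import label, regionprops

# Synthetic "cracked surface": 1 = intact cluster material, 0 = crack network.
img = np.ones((200, 200), dtype=np.uint8)
img[::50, :] = 0          # horizontal cracks every 50 px
img[:, ::50] = 0          # vertical cracks every 50 px

clusters = label(img, connectivity=1)
props = regionprops(clusters)
mean_area = np.mean([p.area for p in props])            # average cluster area (px^2)
mean_perimeter = np.mean([p.perimeter for p in props])  # average cluster perimeter (px)
print(f"clusters: {clusters.max()}, A = {mean_area:.0f} px^2, L = {mean_perimeter:.0f} px")
```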

Keywords: cement paste, cluster cracks, elevated temperature, image analysis, microsilica, stereological parameters

Procedia PDF Downloads 248
590 Fairer Public Benefit in Copyright Law

Authors: Amanda Levendowski

Abstract:

In 1966, a court considered expressly whether a secondary use of copyrighted works served a public benefit. While public benefit has become a subfactor of the fair use doctrine, it remains undefined, uncodified, and undertheorized. After the recent Supreme Court decision in Google v. Oracle, however, it is also unavoidable: the Court stated that “we must take into account the public benefits the copying will likely produce.” Previously, courts invoked public benefit with some predictability in pivotal cases involving novel technologies, from home video recorders to digital libraries to algorithms. A hand-coded dataset of nineteen U.S. technology-related public benefit cases from 1966-2023 reveals five values that emerge from those cases: expression, knowledge, entertainment, competition, and/or efficiency. Forthcoming judicial decisions about the latest novel technology, artificial intelligence (AI), will be shaped by this precedent. However, a series of mid-aughts decisions about algorithms exposed an FU long lurking in fair use: name aside, there is nothing particularly fair about it. Those cases excused invasive, coercive, and biased AI systems as efficient “public benefits” when finding them to be fair use. Many scholars have written about the unfairness of fair use, and this article contributes to those conversations by using a feminist cyberlaw lens to critique the practice of dubbing technologies public benefits without acknowledging, let alone assessing, countervailing public harms. A public benefit that ignores public harm is incomplete. Purported fair uses, particularly those underpinning AI systems, can amplify bias, dis/misinformation, and environmental destruction -harms that are predictable, preventable, and passed over by public benefit presently. This article responds by recalibrating public benefits to better account for these and other public harms. It defines a fairer public benefit and develops a framework for realizing it. The latter poses challenges. In courts, public harm has already happened when matters are litigated, placing a premium on compensation rather than prevention. Congress could codify public benefit, but it is unlikely that Congress could agree upon a satisfactory definition. To further complicate matters, neither judges nor legislators have duties of sociotechnical competency. But lawyers do. Client-centered counseling could facilitate a fairer public benefit if there were a framework for doing so. This article proposes one: FAIRR (pronounced “fairer”), a mnemonic for formalizing purposes, assessing benefits, identifying harms, reconsidering those benefits in light of those harms, and reporting to the client. Inspired by computer science’s threat modeling methodology, FAIRR represents a rigorous, repeatable method for analyzing how infringement liability, public perception, and social progress are affected by public benefits and public harms. By deconstructing the inequities embedded in public benefit as they exist now and developing a fairer alternative for the future, this article helps lawyers shape better technologies.

Keywords: intellectual property, copyright, fair use, public benefit, technology, artificial intelligence, bias, disinformation, environmental destruction

Procedia PDF Downloads 5
589 Ground Improvement Using Deep Vibro Techniques at Madhepura E-Loco Project

Authors: A. Sekhar, N. Ramakrishna Raju

Abstract:

This paper presents the results of ground improvement using deep vibro techniques with a combination of sand and stone columns, performed at Madhepura in Bihar state, in the northern part of India (earthquake zone V as per the IS code), on a site highly susceptible to liquefaction (70 to 80% sand strata, the balance silt) with low bearing capacities due to high settlements. Initially, bored cast in-situ/precast piles and stone/sand columns were envisaged. However, after a detailed analysis aimed at addressing liquefaction and improving bearing capacities simultaneously, it was concluded that deep vibro techniques with a combination of sand and stone columns are an excellent solution for the given site conditions, possibly applied for the first time in India. First, after a detailed soil investigation, a pre-treatment eCPT test was conducted to evaluate the potential depth of liquefaction and the need to densify the silty sandy soils to improve the factor of safety against liquefaction. Trial tests were then carried out at the site using the deep vibro compaction technique with combinations of sand and stone columns at different spacings in a triangular pattern, and with different timings during each lift of the vibro probe up to ground level. Different spacings and timings were tried in order to identify the most effective combination for achieving maximum and uniform densification of the saturated loose silty sandy soils over the complete treated area. Post-treatment eCPT tests and plate load tests were then conducted at all trial locations with different spacings and timings of the sand and stone columns, to evaluate which arrangement achieved the required factor of safety against liquefaction and the desired bearing capacities with reduced settlements for the construction of industrial structures. A review of these results showed that the ground layers were densified more than expected, with an improved factor of safety against liquefaction and good bearing capacities for the given settlements as per the IS code provisions. The cost-effectiveness of using the deep vibro technique with sand columns alone (avoiding stone) for lightly loaded single-storey structures was also worked out, and the results were found satisfactory for supporting lightly loaded foundations. The key achievement of this technique is mitigating liquefaction while simultaneously improving bearing capacities and reducing settlements to acceptable limits as per IS: 1904-1986, down to a depth of 19 m. To the best of our knowledge, this was executed for the first time in India.

Keywords: ground improvement, deep vibro techniques, liquefaction, bearing capacity, settlement

Procedia PDF Downloads 199
588 Analyzing Data Protection in the Era of Big Data under the Framework of Virtual Property Layer Theory

Authors: Xiaochen Mu

Abstract:

Data rights confirmation, a key legal issue in the development of the digital economy, is undergoing a transition from a traditional rights paradigm to a more complex private-economic paradigm. In this process, data rights confirmation has evolved from a simple claim of rights to a complex structure encompassing multiple dimensions of personality rights and property rights. Current data rights confirmation practices are primarily reflected in two models: holistic rights confirmation and process rights confirmation. The holistic rights confirmation model continues the traditional "one object, one right" theory, while the process rights confirmation model, through contractual relationships in the data processing process, recognizes rights that are more adaptable to the needs of data circulation and value release. The design of the data property rights system has a hierarchical character aimed at decoupling raw data from data applications through horizontal stratification and vertical staging. This design not only respects the ownership rights of data originators but also, based on the usufructuary rights of enterprises, constructs a corresponding rights system for the different stages of data processing activities. The subjects of data property rights include both data originators, such as users, and data producers, such as enterprises, who enjoy different rights at different stages of data processing. The intellectual property rights system, with its mission of incentivizing innovation and promoting the advancement of science, culture, and the arts, provides a complete set of mechanisms for protecting innovative results. However, unlike traditional private property rights, the granting of intellectual property rights is not an end in itself; the purpose of the intellectual property system is to balance the exclusive rights of the rights holders with the prosperity and long-term development of public learning and the entire field of science, culture, and the arts. The intellectual property granting mechanism therefore provides both protection and limitations for the rights holder, which aligns well with the dual attributes of data. In terms of achieving the protection of data property rights, the granting of intellectual property rights is an important institutional choice that can enhance the effectiveness of the data property exchange mechanism. Although it is not the only path, granting data property rights within the framework of the intellectual property rights system helps to establish fundamental legal relationships and rights confirmation mechanisms and is more compatible with the classification and grading system of data. The modernity of the intellectual property rights system allows it to adapt to the needs of big data technology development through special clauses or industry guidelines, thus promoting the comprehensive advancement of data intellectual property rights legislation. This paper analyzes data protection under the virtual property layer theory and a two-fold virtual property rights system and, based on the "bundle of rights" theory, establishes a specific three-level structure of data rights. The paper analyzes the cases Google v. Vidal-Hall, Halliday v Creation Consumer Finance, Douglas v Hello Limited, Campbell v MGN and Imerman v Tchenguiz.
The paper concludes that recognizing property rights over personal data and protecting data within the framework of intellectual property will help establish the tort of misuse of personal information.

Keywords: data protection, property rights, intellectual property, big data

Procedia PDF Downloads 44
587 A Conceptual Framework of the Individual and Organizational Antecedents to Knowledge Sharing

Authors: Muhammad Abdul Basit Memon

Abstract:

The importance of organizational knowledge sharing and knowledge management has been documented in numerous research studies in the available literature, since knowledge sharing has been recognized as a founding pillar for superior organizational performance and a source of competitive advantage. Building on this, most successful organizations perceive knowledge management and knowledge sharing as a concern of high strategic importance and spend huge amounts on the effective management and sharing of organizational knowledge. However, despite some very serious endeavors, many firms fail to capitalize on the benefits of knowledge sharing because they are unaware of the individual characteristics and the interpersonal, organizational and contextual factors that influence knowledge sharing, that is, the antecedents to knowledge sharing. The extant literature offers a range of antecedents mentioned in a number of research articles and studies. Some previous studies examined antecedents regarding inter-organizational knowledge transfer; others focused on inter- and intra-organizational knowledge sharing, and still others investigated organizational factors. Some of the organizational antecedents to KS relate to the characteristics and underlying aspects of the knowledge being shared, e.g., the specificity and complexity of the underlying knowledge to be transferred; others relate to specific organizational characteristics, e.g., the age and size of the organization, decentralization and the absorptive capacity of the firm, and still others relate to the social relations and networks of organizations, such as social ties, trusting relationships and value systems. In the same way, some researchers have focused on only one aspect, such as organizational commitment, transformational leadership, knowledge-centred culture, learning and performance orientation, or social network-based relationships in organizations. A bulk of the existing research articles on antecedents to knowledge sharing has mainly discussed organizational or environmental factors affecting knowledge sharing. However, the focus later shifted towards the analysis of individual or personal determinants as antecedents for the individual's engagement in knowledge sharing activities, such as personality traits, attitude and self-efficacy. For example, employees' goal orientation (i.e., learning orientation or performance orientation) is an important individual antecedent of knowledge sharing behaviour. Consistent with the existing literature, therefore, the antecedents to knowledge sharing can be classified as individual and organizational. This paper is an endeavor to discuss a conceptual framework of the individual and organizational antecedents to knowledge sharing in the light of the available literature and empirical evidence. This model not only helps in gaining familiarity with and comprehension of the subject matter by presenting a holistic view of the antecedents to knowledge sharing discussed in the literature, but can also help business managers, and especially human resource managers, gain insights into the salient features of organizational knowledge sharing.
Moreover, this paper can provide a basis for research students and academicians to conduct both qualitative and quantitative research and to design a survey instrument on the topic of individual and organizational antecedents to knowledge sharing.

Keywords: antecedents to knowledge sharing, knowledge management, individual and organizational, organizational knowledge sharing

Procedia PDF Downloads 327
586 Public Values in Service Innovation Management: Case Study in Elderly Care in Danish Municipality

Authors: Christian T. Lystbaek

Abstract:

Background: The importance of innovation management has traditionally been ascribed to private production companies; however, there is an increasing interest in public services innovation management. One of the major theoretical challenges arising from this situation is to understand the public values justifying public services innovation management. However, there is no single and stable definition of public value in the literature. The research question guiding this paper is: What is the supposed added value operating in the public sphere? Methodology: The study takes an action research strategy. This is a highly contextualized methodology, which is enacted within a particular set of social relations into which one expects to integrate the results. As such, this research strategy is particularly well suited for its potential to generate results that can be applied by managers. The aim of action research is to produce proposals with a creative dimension capable of compelling actors to act in a new and pertinent way in relation to the situations they encounter. The context of the study is a workshop on public services innovation within elderly care. The workshop brought together different actors, such as managers, personnel and two groups of users-citizens (elderly clients and their relatives). The process was designed as an extension of the co-construction methods inherent in action research. Scenario methods and focus groups were applied to generate dialogue. The main strength of these techniques is to gather and exploit as much data as possible by exposing the discourse of justification used by the actors to explain or justify their points of view when interacting with others on a given subject. The approach does not directly interrogate the actors on their values, but allows their values to emerge through debate and dialogue. Findings: The public values related to public services innovation management in elderly care were identified in two steps. In the first step, identification of values, values were identified in the discussions. Through continuous analysis of the data, a network of interrelated values was developed. In the second step, tracking group consensus, we ascertained the degree to which the meaning attributed to each value was common to the participants, classifying the degree of consensus as high, intermediate or low. High consensus corresponds to strong convergence in meaning, intermediate to generally shared meanings between participants, and low to divergences regarding the meaning between participants. Only values with a high or intermediate degree of consensus were retained in the analysis. Conclusion: The study shows that the fundamental criterion for justifying public services innovation management is the capacity for actors to enact public values in their work. In the workshop, we identified two categories of public values, intrinsic values and behavioural values, and a list of more specific values.

Keywords: public services innovation management, public value, co-creation, action research

Procedia PDF Downloads 281
585 Challenges & Barriers for Neuro Rehabilitation in Developing Countries

Authors: Muhammad Naveed Babur, Maria Liaqat

Abstract:

Background & Objective: People with disabilities, especially neurological disabilities, have many unmet health and rehabilitation needs, face barriers in accessing mainstream health-care services, and consequently have poor health. There are not sufficient epidemiological studies from Pakistan which assess barriers to neurorehabilitation and ways to counter them. Objectives: The objective of the study was to determine the challenges and to evaluate the barriers for neurorehabilitation services in developing countries. Methods: This is an exploratory sequential qualitative study based on the panel discussion forum at the International Rehabilitation Sciences Congress and the National Rehabilitation Conference 2017. A panel group discussion was conducted in February 2017 with a sample size of eight professionals, including a rehabilitation medicine physician, physical therapist, speech-language therapist, occupational therapist, clinical psychologist and rehabilitation nurse working in multidisciplinary/interdisciplinary teams. A comprehensive audio-video record was developed, recorded, transcribed and documented. Data were transcribed, and thematic analysis along with characteristics was drawn manually. Data verification was done with the help of two separate coders. Results: After extraction by the two separate coders, the following results emerged. General category themes are disease profile, demographic profile, training and education, research, barriers, governance, global funding, informal care, resources and cultural beliefs, and public awareness. Barriers identified at the patient level are high cost, stigma and a lengthy course of recovery. Hospital-related barriers are lack of social support and of individually tailored goal-setting processes. Organizational barriers identified are lack of basic diagnostic facilities, lack of funding and of human resources. Recommendations given by the panelists were investment in education, capacity building, infrastructure, governance support, and strategies to promote communication and realistic goals. Conclusion: It is concluded that neurorehabilitation in developing countries needs attention in the following categories: disease profile, demographic profile, training and education, research, barriers, governance, global funding, informal care, resources and cultural beliefs, and public awareness. This study also revealed barriers at the level of the patient, the hospital and the organization. Recommendations were also given by the panelists.

Keywords: disability, neurorehabilitation, telerehabilitation

Procedia PDF Downloads 196
584 The Quantum Theory of Music and Languages

Authors: Mballa Abanda Serge, Henda Gnakate Biba, Romaric Guemno Kuate, Akono Rufine Nicole, Petfiang Sidonie, Bella Sidonie

Abstract:

The main hypotheses proposed around the definition of the syllable and of music, and of the common origin of music and language, should lead the reader to reflect on the cross-cutting questions raised by the debate on the notion of universals in linguistics and musicology. These are objects of controversy, and therein lies the interest: the debate raises questions that are at the heart of theories on language. It is an inventive, original and innovative research thesis. It is a contribution to the theoretical, musicological, ethnomusicological and linguistic conceptualization of languages, giving rise to the practice of interlocution between the social and cognitive sciences, the activities of artistic creation and the question of modeling in the human sciences: mathematics, computer science, translation automation and artificial intelligence. When this theory is applied to any text of a folk song in a world tone language, one pieces together not only the exact melody, rhythm and harmonies of that song, as if it were known in advance, but also the exact speech of this language. The author believes that the issue of the disappearance of tonal languages and their preservation has been structurally resolved, as well as one of the greatest cultural equations related to the composition and creation of tonal, polytonal and random music. As experimentation confirming the theorization, a semi-digital, semi-analog application was designed which translates the tonal languages of Africa (about 2,100 languages) into blues, jazz, world music, polyphonic music, tonal and atonal music, and deterministic and random music. To test this application, the author uses music reading and writing software to collect the data extracted from his mother tongue, which is already modeled in the musical staves saved in the ethnographic (semiotic) dictionary for automatic translation (volume 2 of the book). Translation is done from writing to writing, from writing to speech and from writing to music. Mode of operation: you type a text on your computer, a structured song (chorus-verse), and you ask the machine for a melody of blues, jazz, world music or variety, etc. The software runs, giving you the option to choose harmonies, and then you select your melody.
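
Purely as an illustration of the tone-to-melody idea sketched above, the following minimal Python example maps tone-marked syllables onto pitches of a blues scale. The syllable/tone notation, the tone-to-degree table and the MIDI pitch pool are hypothetical choices made for demonstration only; they do not reproduce the authors' semi-digital, semi-analog application.

```python
# Illustrative sketch only: a toy mapping from tone-marked syllables to MIDI
# pitches, loosely echoing the idea of deriving a melody from a tonal text.
# The syllable format ("syllable/TONE") and the tone-to-scale-degree table
# are assumptions for demonstration, not the authors' actual application.

# A C blues/minor-pentatonic pitch pool, often used for blues-flavoured melodies.
BLUES_SCALE = [60, 63, 65, 66, 67, 70]  # MIDI note numbers

# Hypothetical mapping of lexical tones to scale degrees (high, mid, low, rising, falling).
TONE_TO_DEGREE = {"H": 4, "M": 2, "L": 0, "R": 3, "F": 1}

def text_to_melody(tonal_text: str) -> list[tuple[str, int]]:
    """Turn a tone-annotated text into (syllable, MIDI pitch) pairs."""
    melody = []
    for token in tonal_text.split():
        syllable, _, tone = token.partition("/")
        degree = TONE_TO_DEGREE.get(tone.upper(), 2)   # default to the mid tone
        melody.append((syllable, BLUES_SCALE[degree]))
    return melody

if __name__ == "__main__":
    # A made-up tone-annotated line of text.
    line = "ma/H ba/L nda/M ko/R"
    for syllable, pitch in text_to_melody(line):
        print(f"{syllable:>4} -> MIDI {pitch}")
```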

Keywords: music, entanglement, language, science

Procedia PDF Downloads 86
583 Biostimulant Activity of Chitooligomers: Effect of Different Degrees of Acetylation and Polymerization on Wheat Seedlings under Salt Stress

Authors: Xiaoqian Zhang, Ping Zou, Pengcheng Li

Abstract:

Salt stress is one of the most serious abiotic stresses, and it can lead to a reduction in agricultural productivity. A high salt concentration makes it more difficult for roots to absorb water and disturbs the homeostasis of cellular ions, resulting in osmotic stress, ion toxicity and the generation of reactive oxygen species (ROS). Compared with normal physiological conditions, salt stress can inhibit photosynthesis, break the metabolic balance and damage cellular structures, and it ultimately results in a reduction of crop yield. Therefore, it is vital to develop practical methods for improving the salt tolerance of plants. Chitooligomers (COS) are partially depolymerized products of chitosan, which consists of D-glucosamine and N-acetyl-D-glucosamine. In agriculture, COS has the ability to promote plant growth and induce plant innate immunity. The bioactivity of COS is closely related to its degree of polymerization (DP) and degree of acetylation (DA). However, most previous reports fail to mention the function of COS with different DPs and DAs in improving the capacity of plants against salt stress. Accordingly, in this study, COS with different DAs were used to test the response of wheat seedlings to salt stress. In addition, COS with determined DPs (DP 4-12) and a heterogeneous COS mixture were applied to explore the relationship between the DP of COS and its effect on the growth of wheat seedlings in response to salt stress. It was shown that COS, as an exogenous elicitor, could promote the growth of wheat seedlings, reduce the malondialdehyde (MDA) concentration, and increase the activities of antioxidant enzymes. The results of mRNA expression level tests for salt stress-responsive genes indicated that COS protects plants from salt stress via the regulation of MDA concentration and the increased antioxidant enzyme activities. Moreover, it was found that the activity of COS was closely related to its DA, and COS (DA: 50%) displayed the best salt resistance activity in wheat seedlings. The results also showed that COS with different DPs could promote the growth of wheat seedlings under salt stress. COS with a DP of 6-8 showed better activities than the other tested samples, implying that its activity has a close relationship with its DP. After treatment with chitohexaose, chitoheptaose and chitooctaose, the photosynthetic parameters were clearly improved. The soluble sugar and proline contents were improved by 26.7%-53.3% and 43.6%-70.2%, respectively, while the concentration of MDA was reduced by 36.8%-49.6%. In addition, the antioxidant enzyme activities were clearly enhanced. At the molecular level, the results revealed that these oligomers could markedly induce the expression of Na+/H+ antiporter genes. In general, these results are fundamental to the study of the action mechanism of COS in promoting plant growth under salt stress and to the preparation of plant growth regulators.

Keywords: chitooligomers (COS), degree of polymerization (DP), degree of acetylation (DA), salt stress

Procedia PDF Downloads 177
582 The Use of Image Analysis Techniques to Describe Cluster Cracks in Cement Paste with the Addition of Metakaolinite

Authors: Maciej Szeląg, Stanisław Fic

Abstract:

The impact of elevated temperatures on construction materials manifests in changes to their physical and mechanical characteristics. Stresses and thermal deformations that occur inside the volume of the material cause its progressive degradation as the temperature increases. Finally, the reactions and transformations of the multiphase structure of the cementitious composite cause its complete destruction. A particularly dangerous phenomenon is the impact of thermal shock – a sudden high temperature load. The thermal shock leads to a high value of the temperature gradient between the outer surface and the interior of the element in a relatively short time. The result of the process mentioned above is the formation of cracks and scratches on the material's surface and inside the material. The article describes the use of computer image analysis techniques to identify and assess the structure of the cluster cracks on the surfaces of modified cement pastes caused by thermal shock. Four series of specimens were tested. Two Portland cements were used (CEM I 42.5R and CEM I 52.5R). In addition, two of the series contained metakaolinite as a replacement for 10% of the cement content. Samples in each series were made with three w/b (water/binder) ratios of 0.4, 0.5 and 0.6, respectively. Surface cracks of the samples were created by a sudden temperature load at 200°C for 4 hours. Images of the cracked surfaces were obtained via scanning at 1200 DPI; digital processing and measurements were performed using ImageJ v. 1.46r software. In order to examine the cracked surface of the cement paste as a system of closed clusters, the theory of dispersed systems was used to describe the structure of the cement paste. Water is treated as the dispersing phase and the binder as the dispersed phase, which corresponds to the initial stage of cement paste structure creation. A cluster itself is considered to be the area on the specimen surface that is limited by cracks (created by the sudden temperature loading) or by the edge of the sample. To describe the structure of the cracks, two stereological parameters were proposed: Ā – the cluster average area, and L̄ – the cluster average perimeter. The goal of this study was to compare the investigated stereological parameters with the mechanical properties of the tested specimens. Compressive and tensile strength tests were carried out according to EN standards. The method used in the study allowed the quantitative determination of defects occurring on the examined modified cement paste surfaces. Based on the results, it was found that the nature of the cracks depends mainly on the physical parameters of the cement and the intermolecular interactions in the dispersion environment. Additionally, it was noted that the Ā/L̄ relation of the created clusters can be described by one function for all tested samples. This fact indicates a constant geometry of the thermal cracks regardless of the presence of metakaolinite, the type of cement and the w/b ratio.
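
The cluster measurements described above can be illustrated with a short sketch. It assumes a scikit-image workflow in place of the reported ImageJ v. 1.46r procedure; the file name, the Otsu threshold choice and the assumption that cracks scan darker than the surrounding paste are placeholders, not the study's actual settings.

```python
# A minimal sketch of the cluster measurements described above, using
# scikit-image instead of the ImageJ workflow reported by the authors.
# The file name and the threshold choice are assumptions for illustration.
import numpy as np
from skimage import io, filters, measure

def cluster_statistics(path: str):
    """Return the average cluster area and perimeter of a cracked-surface scan."""
    image = io.imread(path, as_gray=True)            # scanned specimen surface
    threshold = filters.threshold_otsu(image)        # separate cracks from clusters
    clusters = image > threshold                     # assume bright clusters bounded by dark cracks
    labels = measure.label(clusters, connectivity=2) # label each closed cluster
    props = measure.regionprops(labels)
    areas = np.array([p.area for p in props])        # pixel counts per cluster
    perims = np.array([p.perimeter for p in props])  # pixel-length perimeters
    return areas.mean(), perims.mean()

if __name__ == "__main__":
    mean_area, mean_perimeter = cluster_statistics("cement_paste_scan.png")
    print(f"Average cluster area A = {mean_area:.1f} px^2")
    print(f"Average cluster perimeter L = {mean_perimeter:.1f} px")
```

At the reported scanning resolution of 1200 DPI, one pixel corresponds to about 25.4/1200 ≈ 0.021 mm, which converts the pixel-based Ā and L̄ values into physical units.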

Keywords: cement paste, cluster cracks, elevated temperature, image analysis, metakaolinite, stereological parameters

Procedia PDF Downloads 392
581 A Paradigm Shift into the Primary Teacher Education Program in Bangladesh

Authors: Happy Kumar Das, Md. Shahriar Shafiq

Abstract:

This paper portrays an assumed change in the primary teacher education program in Bangladesh. An initiative has been taken with a vision to ensure an integrated approach to developing trainee teachers' knowledge and understanding about learning at a deeper level, and with that aim, the Diploma in Primary Education (DPEd) program replaces the Certificate-in-Education (C-in-Ed) program in the Bangladeshi context for primary teachers. The stated professional values of the existing program, such as a 'learner-centered', 'reflective' approach to pedagogy, tend to contradict the practice exemplified through the delivery mechanism. To address the challenges, through its two main components, (i) Training Institute-based learning and (ii) School-based learning, the new program tends to cover the knowledge and values that underpin the actual practice of teaching. These two components are given approximately equal weighting within the program in terms of time, content and assessment, as the integration seeks to combine theoretical knowledge with practical knowledge and vice versa. The curriculum emphasizes a balance between the taught modules and the components of the practicum. For example, the theories of formative and summative assessment techniques are elaborated through focused reflection on case studies as well as observation and teaching practice in the classroom. The key ideology reflected through this newly developed program is teachers' belief in 'holistic education' that can lead to creating opportunities for skills development in all three (cognitive, social and affective) domains simultaneously. The proposed teacher education program aims to address these areas of generic skill development alongside subject-specific learning outcomes. An exploratory study has been designed in this regard, in which 7 Primary Teachers' Training Institutes (PTIs) in the 7 divisions of Bangladesh were used for piloting the DPEd program. The analysis was based on document analysis, periodic monitoring reports and empirical data gathered from the experimental PTIs. The findings of the study revealed that the intervention brought positive change in teachers' professional beliefs, attitudes and skills, along with improvement of the school environment. Teachers in training schools work together for collective professional development, supporting each other through lesson study, action research, reflective journals, group sharing and so on. Although the DPEd program addresses the above-mentioned factors, one of the challenges of the proposed program is the issue of the existing capacity and capabilities of the PTIs towards its effective implementation.

Keywords: Bangladesh, effective implementation, primary teacher education, reflective approach

Procedia PDF Downloads 219
580 Survey of Prevalence of Noise Induced Hearing Loss in Hawkers and Shopkeepers in Noisy Areas of Mumbai City

Authors: Hitesh Kshayap, Shantanu Arya, Ajay Basod, Sachin Sakhuja

Abstract:

This study was undertaken to measure the overall noise levels in different locations/zones and to estimate the prevalence of noise-induced hearing loss in hawkers and shopkeepers in Mumbai, India. The Hearing Test developed by the American Academy of Otolaryngology, translated from English to Hindi and validated, was employed as a screening tool for hearing sensitivity. The tool has 14 items. Each item is scored on a scale of 0, 1, 2 and 3. A score of 6 or above indicated some difficulty or definite difficulty in hearing in daily activities, while a lower score indicated lesser difficulty or normal hearing. Subjects who scored 6 or above, or who had tinnitus, underwent hearing evaluation with a pure tone audiometer. Further, the environmental noise levels were measured from morning to evening at the roadside at different locations/hawking zones in Mumbai city using an SLM (Agronic 8928B & K type Digital Sound Level Meter) in dB(A). The maximum noise level of 100.0 dB(A) was recorded during evening hours from Chhatrapati Shivaji Terminus to Colaba, with an overall noise level of 79.0 dB(A). However, the minimum noise level in this area was 72.6 dB(A) at any given point of time. Further, 54.6 dB(A) was recorded as the minimum noise level during 8-9 am at Sion Circle. The commencement of flyovers with 2-tier traffic, skywalks, the increasing number of vehicles on the road, high-rise buildings and other commercial and urbanization activities in Mumbai city have most probably resulted in an increase in the overall environmental noise levels. Trees which acted as noise absorbers have been cut owing to rapid construction. The study involved 100 participants in the age range of 18 to 40 years, with a mean age of 29 years (S.D. = 6.49). The 46 participants who had tinnitus or obtained a score of 6 underwent pure tone audiometry, and it was found that the prevalence rate of hearing loss in hawkers and shopkeepers is 19% (10% hawkers and 9% shopkeepers). The results indicate that the 29 (42.6%) out of 64 hawkers and 17 (47.2%) out of 36 shopkeepers who underwent PTA showed no significant difference in the percentage of noise-induced hearing loss. The study results also reveal that 19 (41.3%) of the 46 participants who exhibited tinnitus had mild to moderate sensorineural hearing loss between 3000 Hz and 6000 Hz. The pure tone audiogram pattern revealed hearing loss at 4000 Hz and 6000 Hz, while hearing at adjacent frequencies was nearly normal. 7 hawkers and 8 shopkeepers had a mild notch, while 3 hawkers and 1 shopkeeper had a moderate degree of notch. It is thus inferred that tinnitus is a strong indicator of the presence of hearing loss and that a 4/6 kHz notch is a strong marker for road/traffic/environmental noise as an occupational hazard for hawkers and shopkeepers. Mass awareness about these occupational hazards, regular hearing check-ups and early intervention, along with sustainable development juxtaposed with social and urban forestry, can help in this regard.
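
Two calculations implied by the survey design can be sketched briefly: the energy-equivalent continuous level (Leq) of a series of dB(A) readings, and the 14-item screening rule that refers a respondent for pure tone audiometry at a total score of 6 or more or when tinnitus is reported. The sample readings and item scores below are invented for illustration; only the Leq formula and the score-6 threshold come from standard practice and the abstract itself.

```python
# Hedged sketch: energy-average noise level and the screening decision rule
# described above. Sample values are illustrative, not measured data.
import math

def leq(readings_dba: list[float]) -> float:
    """Energy-average (Leq) of sound-level readings in dB(A)."""
    energies = [10 ** (level / 10) for level in readings_dba]
    return 10 * math.log10(sum(energies) / len(energies))

def needs_audiometry(item_scores: list[int], has_tinnitus: bool = False) -> bool:
    """Refer for pure tone audiometry if the 14-item total reaches 6 or tinnitus is reported."""
    assert len(item_scores) == 14 and all(0 <= s <= 3 for s in item_scores)
    return sum(item_scores) >= 6 or has_tinnitus

if __name__ == "__main__":
    hourly_levels = [72.6, 75.0, 79.0, 83.5, 88.2, 100.0]   # illustrative dB(A) values
    print(f"Leq over the sampling period: {leq(hourly_levels):.1f} dB(A)")
    print("Refer:", needs_audiometry([1, 0, 2, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0]))
```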

Keywords: NIHL, noise, sound level meter, tinnitus

Procedia PDF Downloads 207
579 A Spatial Repetitive Controller Applied to an Aeroelastic Model for Wind Turbines

Authors: Riccardo Fratini, Riccardo Santini, Jacopo Serafini, Massimo Gennaretti, Stefano Panzieri

Abstract:

This paper presents a nonlinear differential model for a three-bladed horizontal axis wind turbine (HAWT) suited for control applications. It is based on an 8-dof, lumped-parameter structural dynamics model coupled with quasi-steady sectional aerodynamics. In particular, using the Euler-Lagrange equation (energetic variation approach), the authors derive, and subsequently validate, such a model. For the derivation of the aerodynamic model, Greenberg's theory, an extension of the theory proposed by Theodorsen to the case of thin airfoils undergoing pulsating flows, is used. Specifically, in this work, the authors restricted that theory under the hypothesis of low perturbation reduced frequency k, which causes the lift deficiency function C(k) to be real and equal to 1. Furthermore, the expressions of the aerodynamic loads are obtained using the quasi-steady strip theory (Hodges and Ormiston), as a function of the chordwise and normal components of the relative velocity between flow and airfoil, Ut and Up, their derivatives, and the section angular velocity ε̇. For the validation of the proposed model, the authors carried out open and closed-loop simulations of a 5 MW HAWT, characterized by a radius R = 61.5 m and a mean chord c = 3 m, with a nominal angular velocity Ωn = 1.266 rad/s. The first analysis performed is the steady state solution, where a uniform wind Vw = 11.4 m/s is considered and a collective pitch angle θ = 0.88° is imposed. During this step, the authors noticed that the proposed model is intrinsically periodic due to the effect of the wind and of the gravitational force. In order to reject this periodic trend in the model dynamics, the authors propose a collective repetitive control algorithm coupled with a PD controller. In particular, when the reference command to be tracked and/or the disturbance to be rejected are periodic signals with a fixed period, repetitive control strategies can be applied due to their high precision, simple implementation and little performance dependency on system parameters. The functional scheme of a repetitive controller is quite simple and, given a periodic reference command, is composed of a control block Crc(s) usually added to an existing feedback control system. The control block contains a free time-delay system e^(−τs) in a positive feedback loop and a low-pass filter q(s). It should be noticed that, while the time-delay term reduces the stability margin, the low-pass filter is added to ensure stability. It is worth noting that, in this work, the authors propose a phase shifting for the controller, and the delay system has been modified as e^(−(T−γk)s), where T is the period of the signal and γk is a phase shift of k samples of the same periodic signal. The phase shifting technique is particularly useful in non-minimum phase systems, such as flexible structures. In fact, using the phase shifting, the iterative algorithm can reach convergence also at high frequencies. Notice that, in this case study, the shift of k samples depends both on the rotor angular velocity Ω and on the rotor azimuth angle Ψ: the authors refer to this controller as a spatial repetitive controller. The collective repetitive controller has also been coupled with C(s) = PD(s) in order to dampen oscillations of the blades. The performance of the spatial repetitive controller is compared with an industrial PI controller.
In particular, starting from a wind speed Vw = 11.4 m/s, the controller is asked to maintain the nominal angular velocity Ωn = 1.266 rad/s after an instantaneous increase of the wind speed (Vw = 15 m/s). Then, a purely periodic external disturbance is introduced in order to stress the capabilities of the repetitive controller. The results of the simulations show that, contrary to a simple PI controller, the spatial repetitive-PD controller has the capability to reject both external disturbances and the periodic trend in the model dynamics. Finally, the nominal value of the angular velocity is reached, in accordance with results obtained with commercial software for a turbine of the same type.
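
A heavily simplified, discrete-time sketch of the spatial repetitive plus PD scheme is given below. It uses a one-state rotor model written in deviation form about the nominal speed, not the authors' 8-dof aeroelastic model, and the inertia, damping, gains, low-pass factor and disturbance amplitudes are illustrative assumptions. The aim is only to show the structure: a one-revolution memory indexed by azimuth, read back with a phase shift of a few samples through a low-pass factor in positive feedback, and summed with a PD term.

```python
# Minimal discrete-time sketch of the spatial repetitive + PD idea described
# above. The one-state rotor model (deviation from the nominal speed), the
# gains and the disturbances are illustrative assumptions, not the authors'
# aeroelastic model or tuning.
import math

omega_n = 1.266                 # nominal rotor speed [rad/s]
J, c, dt = 1.0, 0.3, 0.005      # normalized inertia, damping, time step
kp, kd = 2.0, 0.2               # PD gains (illustrative)
k_rc, q, gamma = 1.0, 0.99, 5   # repetitive gain, low-pass factor, phase shift [samples]

N = int(round(2 * math.pi / (omega_n * dt)))   # samples per rotor revolution
u_rep = [0.0] * N                              # one revolution of repetitive memory

x, psi, e_prev = -0.05, 0.0, 0.0               # speed deviation, azimuth, previous error
ripple = []

for k in range(40 * N):                        # simulate about 40 revolutions
    # Azimuth-locked periodic disturbance plus a wind-step torque after 10 revolutions.
    d = 0.2 * math.sin(psi) + (0.3 if k > 10 * N else 0.0)

    e = -x                                      # tracking error = omega_n - omega
    i = k % N                                   # current azimuth bin
    u_rep[i] = q * u_rep[(i + gamma) % N] + k_rc * e   # spatially shifted repetitive update
    u = kp * e + kd * (e - e_prev) / dt + u_rep[i]     # PD term plus repetitive term

    x += dt * (u + d - c * x) / J               # toy rotor dynamics in deviation form
    psi = (psi + (omega_n + x) * dt) % (2 * math.pi)
    e_prev = e
    if k >= 39 * N:
        ripple.append(abs(x))

print(f"Final rotor speed: {omega_n + x:.4f} rad/s (nominal {omega_n} rad/s)")
print(f"Peak speed deviation over the last revolution: {max(ripple):.4f} rad/s")
```

With these placeholder gains, the azimuth-locked component of the error is progressively learned away over successive revolutions, while the PD term handles the transient after the wind step; this mirrors, in a toy setting, the behaviour reported above for the spatial repetitive-PD controller.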

Keywords: wind turbines, aeroelasticity, repetitive control, periodic systems

Procedia PDF Downloads 256
578 Ammonia Bunkering Spill Scenarios: Modelling Plume’s Behaviour and Potential to Trigger Harmful Algal Blooms in the Singapore Straits

Authors: Bryan Low

Abstract:

In the coming decades, the global maritime industry will face a most formidable environmental challenge: achieving net zero carbon emissions by 2050. To meet this target, the Maritime and Port Authority of Singapore (MPA) has worked to establish green shipping and digital corridors with the ports of several other countries around the world, where ships will use low-carbon alternative fuels such as ammonia for power generation. While this paradigm shift to the bunkering of greener fuels is encouraging, fuels like ammonia will also introduce a new and unique type of environmental risk in the unlikely scenario of a spill. While numerous modelling studies have been conducted for oil spills and their associated environmental impact on coastal and marine ecosystems, ammonia spills are comparatively less well understood. For example, there is a knowledge gap regarding how the complex hydrodynamic conditions of the Singapore Straits may influence the dispersion of a hypothetical ammonia plume, which has different physical and chemical properties compared to an oil slick. Chemically, ammonia can be absorbed by phytoplankton, thus altering the balance of the marine nitrogen cycle. Biologically, ammonia generally serves the role of a nutrient in coastal ecosystems at lower concentrations. However, at higher concentrations, it has been found to be toxic to many local species. It may also have the potential to trigger eutrophication and harmful algal blooms (HABs) in coastal waters, depending on local hydrodynamic conditions. Thus, the key objective of this research paper is to support the development of a model-based forecasting system that can predict ammonia plume behaviour and its environmental impact in coastal waters, given prevailing hydrodynamic conditions. This will be essential as ammonia bunkering becomes more commonplace in Singapore's ports and around the world. Specifically, this system must be able to assess the HAB-triggering potential of an ammonia plume, as well as its lethal and sub-lethal toxic effects on local species. This will allow the relevant authorities to better plan risk mitigation measures or choose a time window with the ideal hydrodynamic conditions to conduct ammonia bunkering operations with minimal risk. In this paper, we present the first part of such a forecasting system: a jointly coupled hydrodynamic-water quality model that can capture how advection-diffusion processes driven by ocean currents influence plume behaviour and how the plume interacts with the marine nitrogen cycle. The model is then applied to various ammonia spill scenarios, and the results are discussed in the context of current ammonia toxicity guidelines, the impact on local ecosystems, and mitigation measures for future bunkering operations conducted in the Singapore Straits.
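
To make the transport mechanism concrete, a stripped-down one-dimensional advection-diffusion-reaction sketch is given below. The current speed, eddy diffusivity, first-order uptake rate, released mass and screening threshold are placeholder assumptions, and the coupled hydrodynamic-water quality model described above resolves far richer two- and three-dimensional dynamics; the sketch only illustrates how a plume advects, spreads and decays under a steady current.

```python
# A stripped-down 1D advection-diffusion-reaction sketch of plume transport.
# All parameter values (current, diffusivity, uptake rate, release, threshold)
# are placeholder assumptions for illustration only.
import numpy as np

L, nx = 20_000.0, 400           # domain length [m], grid cells
dx = L / nx
u = 0.5                          # depth-averaged current [m/s] (assumed)
K = 10.0                         # horizontal eddy diffusivity [m^2/s] (assumed)
k_uptake = 2e-5                  # first-order loss to nitrification/uptake [1/s] (assumed)
dt = 0.4 * min(dx / u, dx**2 / (2 * K))   # advection/diffusion-limited time step

c = np.zeros(nx)                 # total ammonia nitrogen [mg/L]
c[nx // 10] = 50.0               # idealized instantaneous release near the spill site

threshold = 0.1                  # placeholder screening threshold [mg/L]

t = 0.0
while t < 6 * 3600:              # simulate six hours of transport
    adv = -u * (c - np.roll(c, 1)) / dx                       # upwind advection
    diff = K * (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / dx**2   # central diffusion
    c = c + dt * (adv + diff - k_uptake * c)                  # first-order loss
    c[0] = c[-1] = 0.0           # crude open boundaries
    t += dt

print(f"Peak concentration after 6 h: {c.max():.3f} mg/L")
print("Exceeds placeholder threshold" if c.max() > threshold else "Below placeholder threshold")
```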

Keywords: ammonia bunkering, forecasting, harmful algal blooms, hydrodynamics, marine nitrogen cycle, oceanography, water quality modeling

Procedia PDF Downloads 89
577 Subjective Probability and the Intertemporal Dimension of Probability to Correct the Misrelation Between Risk and Return of a Financial Asset as Perceived by Investors. Extension of Prospect Theory to Better Describe Risk Aversion

Authors: Roberta Martino, Viviana Ventre

Abstract:

From a theoretical point of view, the relationship between the risk associated with an investment and its expected value is directly proportional, in the sense that the market allows a greater result to those who are willing to take a greater risk. However, empirical evidence shows that this relationship is distorted in the minds of investors and is perceived as exactly the opposite. To investigate the discrepancy between the actual actions of investors and the theoretical predictions, this paper analyzes the essential parameters used for the valuation of financial assets, with greater attention to two elements: probability and the passage of time. Although these may seem at first glance to be two distinct elements, they are closely related. In particular, the error in the theoretical description of the relationship between risk and return lies in the failure to consider the impatience that is generated in the decision-maker when events that have not yet happened enter the decision-making context. In this context, probability loses its objective meaning and, in relation to the psychological aspects of the investor, can only be understood as the degree of confidence that the investor has in the occurrence or non-occurrence of an event. Moreover, the concept of objective probability considers neither the intertemporality that characterizes financial activities nor the limited cognitive capacity of the decision maker. Cognitive psychology has made it possible to understand that the mind acts with a compromise between quality and effort when faced with very complex choices. To evaluate an event that has not yet happened, it is necessary to imagine it happening in one's head. This projection into the future requires a cognitive effort and is what differentiates choices under conditions of risk from choices under conditions of uncertainty. In fact, since the receipt of the outcome in choices under risk conditions is imminent, the mechanism of self-projection into the future is not necessary to imagine the consequence of the choice, and decision makers dwell on the objective analysis of possibilities. Financial activities, on the other hand, develop over time, and objective probability is too static to capture the anticipatory emotions that the self-projection mechanism generates in the investor. Assuming that uncertainty is inherent in valuations of events that have not yet occurred, the focus must shift from risk management to uncertainty management. Only in this way can the intertemporal dimension of the decision-making environment and the haste generated by the financial market be properly taken into account. The work considers an extension of prospect theory with a temporal component, with the aim of describing the attitude towards risk with respect to the passage of time.
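
A hedged numerical sketch of the kind of extension discussed above is given below: the standard prospect-theory value and probability-weighting functions are combined with a hyperbolic discount factor, so that the subjective appeal of a prospect decays with the delay before its outcome is received. The multiplicative way the discount enters, the use of a single weighting curve for gains and losses, and the asset payoffs are assumptions made only for illustration; the functional forms and parameter values follow the common Tversky-Kahneman (1992) estimates.

```python
# Illustrative sketch: prospect-theory evaluation with a hyperbolic discount
# factor added as a temporal component. Parameterization and the way the
# discount enters are assumptions, not the authors' formal model.
def value(x, alpha=0.88, lam=2.25):
    """S-shaped value function: concave for gains, convex and steeper for losses."""
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

def weight(p, gamma=0.61):
    """Inverse-S probability weighting (overweights small probabilities)."""
    return p ** gamma / ((p ** gamma + (1 - p) ** gamma) ** (1 / gamma))

def discount(t_years, k=0.4):
    """Hyperbolic discounting capturing impatience toward delayed outcomes."""
    return 1.0 / (1.0 + k * t_years)

def prospect_value(outcomes, t_years=0.0):
    """Discounted prospect value of a list of (probability, outcome) pairs."""
    return discount(t_years) * sum(weight(p) * value(x) for p, x in outcomes)

if __name__ == "__main__":
    risky_asset = [(0.5, 1500.0), (0.5, -1000.0)]   # illustrative risky return profile
    safe_asset = [(1.0, 200.0)]                      # illustrative safe return
    for t in (0.0, 1.0, 3.0):
        print(f"t = {t:>3} y | risky: {prospect_value(risky_asset, t):8.1f}"
              f" | safe: {prospect_value(safe_asset, t):8.1f}")
```

Already at t = 0, the loss-averse evaluation ranks the risky asset below the safe one despite its higher expected return, which echoes the distorted risk-return perception discussed above; the discount factor then shrinks both evaluations as the outcome moves further into the future.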

Keywords: impatience, risk aversion, subjective probability, uncertainty

Procedia PDF Downloads 111
576 Risks beyond Cyber in IoT Infrastructure and Services

Authors: Mattias Bergstrom

Abstract:

Significance of the Study: This research will provide new insights into the risks associated with digital embedded infrastructure. Through this research, we will analyze each risk and its potential negation strategies, especially for AI and autonomous automation. Moreover, the analysis presented in this paper will convey valuable information for future research that can create more stable, secure, and efficient autonomous systems. To learn and understand the risks, a large IoT system was envisioned, and risks related to hardware, tampering, and cyberattacks were collected, researched, and evaluated to create a comprehensive understanding of the potential risks. Potential solutions have then been evaluated on an open source IoT hardware setup. The following list shows the identified passive and active risks evaluated in the research. Passive risks: (1) Hardware failures – Critical systems relying on high-rate data and data quality are growing; SCADA systems for infrastructure are good examples of such systems. (2) Hardware delivers erroneous data – Sensors break, and when they do so, they don't always go silent; they can keep going, except that the data they deliver is garbage, and if that data is not filtered out, it becomes disruptive noise in the system. (3) Bad hardware injection – Erroneously generated sensor data can be pumped into a system by malicious actors with the intent to create disruptive noise in critical systems. (4) Data gravity – The weight of the data collected will affect data mobility. (5) Cost inhibitors – Running services that need huge centralized computing is cost inhibiting. Large, complex AI can be extremely expensive to run. Active risks: Denial of service – One of the simplest attacks, where an attacker just overloads the system with bogus requests so that valid requests disappear in the noise. Malware – Malware can be anything from simple viruses to complex botnets created with specific goals, where the creator is stealing computing power and bandwidth from you to attack someone else. Ransomware – A kind of malware, but so different in its implementation that it is worth its own mention. The goal of these pieces of software is to encrypt your system so that it can only be unlocked with a key that is held for ransom. DNS spoofing – By spoofing DNS calls, valid requests and data dumps can be sent to bad destinations, where the data can be extracted for extortion or corrupted and re-injected into a running system, creating a data echo noise loop. After testing multiple potential solutions, we found that the most prominent solution to these risks was to use a peer-to-peer consensus algorithm over a blockchain to validate the data and behavior of the devices (sensors, storage, and computing) in the system. By the devices autonomously policing themselves for deviant behavior, all the risks listed above can be negated. In conclusion, an Internet middleware that provides these features would be an easy and secure solution for any future autonomous IoT deployment, as it provides separation from the open Internet while remaining accessible via the blockchain keys.
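
The peer-policing idea can be illustrated with a toy sketch: before a sensor reading is appended to a shared, hash-chained ledger, neighbouring devices vote on whether it is plausible given their own observations, and only majority-approved readings are accepted. This is a simplified stand-in for a real blockchain consensus protocol; the tolerance band, peer values and ledger format are invented for demonstration.

```python
# Toy sketch of peer-validated data on a hash-chained ledger. A simplified
# stand-in for a blockchain consensus, with invented tolerances and values.
import hashlib
import json

TOLERANCE = 2.0   # assumed plausibility band around each peer's own reading

def peers_approve(reading: float, peer_readings: list[float]) -> bool:
    """Each peer votes; the reading passes with a strict majority."""
    votes = [abs(reading - p) <= TOLERANCE for p in peer_readings]
    return sum(votes) > len(votes) / 2

def append_block(ledger: list[dict], reading: float) -> None:
    """Append an approved reading, chained to the previous block by hash."""
    prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
    body = {"reading": reading, "prev": prev_hash}
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    ledger.append(body)

if __name__ == "__main__":
    ledger: list[dict] = []
    peer_readings = [21.2, 20.8, 21.5, 21.1]        # neighbouring temperature sensors
    for candidate in (21.0, 55.3):                   # second value mimics injected bad data
        if peers_approve(candidate, peer_readings):
            append_block(ledger, candidate)
            print(f"accepted {candidate}")
        else:
            print(f"rejected {candidate} (deviates from peer consensus)")
    print("ledger length:", len(ledger))
```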

Keywords: IoT, security, infrastructure, SCADA, blockchain, AI

Procedia PDF Downloads 108
575 Studies of Heavy Metal Ions Removal Efficiency in the Presence of Anionic Surfactant Using Ion Exchangers

Authors: Anna Wolowicz, Katarzyna Staszak, Zbigniew Hubicki

Abstract:

Nowadays, heavy metal ions as well as surfactants are widely used throughout the world due to their useful properties. The consequence of such widespread use is their significant production. On the other hand, the increasing demand for surfactants and heavy metal ions results in the production of large amounts of wastewaters, which are discharged to the environment from the mining, metal plating, pharmaceutical, cosmetic, fertilizer, paper, pesticide and electronic industries, pigment production, petroleum refining, and the autocatalyst, fibre, food and polymer industries, etc. Heavy metal ions are non-biodegradable in the environment, capable of accumulation in living organisms and organs, toxic and carcinogenic. On the other hand, not only heavy metal ions but also surfactants affect the purity of water and soils. Some surfactants are also toxic, harmful and dangerous because they are able to penetrate into surface waters, causing foaming and blocking the diffusion of oxygen from the atmosphere, and they act as emulsifiers of hydrophobic substances and increase the solubility of many dangerous pollutants. Among surfactants, the anionic ones dominate, and their share in the global production of surfactants is around 50-60%. Due to the negative impact of heavy metals and surfactants on aquatic ecosystems and living organisms, the removal and monitoring of their concentrations in the environment are extremely important. Surfactant and heavy metal ion removal can be achieved by different biological and physicochemical methods. Adsorption as well as ion-exchange methods play a significant role here. The aim of this study was heavy metal ion removal from aqueous solutions using different types of ion exchangers in the presence of anionic surfactants. Preliminary studies of copper(II), nickel(II), zinc(II) and cobalt(II) removal from acidic solutions using ion exchangers (Lewatit MonoPlus TP 220, Lewatit MonoPlus SR 7, Purolite A 400 TL, Purolite A 830, Purolite S 984, Dowex PSR 2, Dowex PSR3, Lewatit AF-5) allowed selection of the most effective ones for the above-mentioned sorbates and then checking of their removal efficiency in the presence of anionic surfactants. As was found, Lewatit MonoPlus TP 220, of the chelating type, shows the highest sorption capacities for copper(II) ions in comparison with the other ion exchangers under discussion, e.g. 9.98 mg/g (0.1 M HCl) and 9.12 mg/g (6 M HCl). Moreover, cobalt(II) removal efficiency was the highest in 0.1 M HCl using Lewatit MonoPlus TP 220 (6.9 mg/g), similar to zinc(II) (9.1 mg/g) and nickel(II) (6.2 mg/g). Sodium dodecyl sulphate (SDS) was used as the anionic surfactant, and surfactant parameters such as viscosity (η), density (ρ) and critical micelle concentration (CMC) were obtained: η = 1.13 ± 0.01 mPa·s; ρ = 999.76 mg/cm3; CMC = 2.26 g/cm3. The studies of copper(II) removal from acidic solutions in the presence of SDS of different concentrations showed negligible effects on the copper(II) removal efficiency. The sorption capacity of Cu(II) from a 0.1 M acidic solution of 500 mg/L initial concentration was equal to 46.8 mg/g, whereas in the presence of SDS it was 45.3 mg/g (0.1 mg SDS/L), 47.1 mg/g (0.5 mg SDS/L) and 46.6 mg/g (1 mg SDS/L).
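
The mg/g figures quoted above follow from the standard batch sorption-capacity relation q_e = (C0 - Ce)·V/m. In the sketch below, the solution volume, resin mass and equilibrium concentration are assumptions chosen only to show how a value in the reported range (about 46.8 mg/g) arises; they are not the experimental conditions of the study.

```python
# Sketch of the standard batch sorption-capacity calculation q_e = (C0 - Ce) * V / m.
# Volume, mass and equilibrium concentration below are illustrative assumptions.
def sorption_capacity(c0_mg_per_l: float, ce_mg_per_l: float,
                      volume_l: float, mass_g: float) -> float:
    """Equilibrium sorption capacity q_e in mg of metal per g of ion exchanger."""
    return (c0_mg_per_l - ce_mg_per_l) * volume_l / mass_g

if __name__ == "__main__":
    q_e = sorption_capacity(c0_mg_per_l=500.0,   # initial Cu(II) concentration (from the study)
                            ce_mg_per_l=32.0,    # assumed equilibrium concentration
                            volume_l=0.05,       # assumed solution volume
                            mass_g=0.5)          # assumed ion exchanger mass
    print(f"q_e = {q_e:.1f} mg/g")
```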

Keywords: anionic surfactant, heavy metal ions, ion exchanger, removal

Procedia PDF Downloads 145