Search results for: universal random phenomenon
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4724

224 Comparison of Sediment Rating Curve and Artificial Neural Network in Simulation of Suspended Sediment Load

Authors: Ahmad Saadiq, Neeraj Sahu

Abstract:

Sediment, which comprises solid particles of mineral and organic material, is transported by water. In river systems, the amount of sediment transported is controlled by both the transport capacity of the flow and the supply of sediment. The transport of sediment in rivers is important with respect to pollution, channel navigability, reservoir ageing, hydroelectric equipment longevity, fish habitat, river aesthetics and scientific interests. The sediment load transported in a river is a very complex hydrological phenomenon. Hence, sediment transport has attracted the attention of engineers from various aspects, and different methods have been used for its estimation, with several empirical equations proposed by experts. Although the results of these methods differ considerably from each other and from experimental observations, because sediment measurements have some limitations, these equations can still be used in estimating sediment load. In the present study, two black box models, namely an SRC (sediment rating curve) and an ANN (artificial neural network), are used in the simulation of the suspended sediment load. The study is carried out for the Seonath sub-basin. Seonath is the biggest tributary of the Mahanadi river, and it carries a vast amount of sediment. The data are collected for the Jondhra hydrological observation station from India-WRIS (Water Resources Information System) and IMD (Indian Meteorological Department). These data include the discharge, sediment concentration and rainfall for 10 years. In this study, sediment load is estimated from the input parameters (discharge, rainfall, and past sediment) in various combinations of simulations. The sediment rating curve uses the water discharge to estimate the sediment concentration, which is then converted to sediment load. Likewise, for the application of these data in the ANN, they are normalised first and then fed in various combinations to yield the sediment load. RMSE (root mean square error) and R² (coefficient of determination) between the observed load and the estimated load are used as evaluation criteria. For an ideal model, RMSE is zero and R² is 1. However, as the models used in this study are black box models, they do not carry an exact representation of the factors which cause sedimentation. Hence, the model which gives the lowest RMSE and highest R² is taken as the best model in this study. The lowest values of RMSE (based on normalised data) for the sediment rating curve, feed forward back propagation, cascade forward back propagation and neural network fitting are 0.043425, 0.00679781, 0.0050089 and 0.0043727, respectively. The corresponding values of R² are 0.8258, 0.9941, 0.9968 and 0.9976. This implies that the neural network fitting model is superior to the other models used in this study. However, a drawback of neural network fitting is that it produces a few negative estimates, which is not tolerable in sediment load estimation, and hence this model cannot be crowned as the best model, based on this study. Cascade forward back propagation produces results much closer to the neural network fitting model, and hence it is the best model based on the present study.
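As an illustration of the evaluation pipeline described above, the following is a minimal Python sketch (not the authors' code) that fits a power-law sediment rating curve C = aQ^b by log-log regression and computes the RMSE and R² criteria against observed loads; the discharge and concentration arrays are hypothetical placeholders.

```python
import numpy as np

# Hypothetical observations: discharge Q (m^3/s) and sediment concentration C (mg/L)
Q = np.array([120., 340., 560., 890., 1500., 2300.])
C = np.array([35., 110., 180., 320., 610., 980.])

# Sediment rating curve C = a * Q^b, fitted as a straight line in log-log space
b, log_a = np.polyfit(np.log(Q), np.log(C), 1)
a = np.exp(log_a)
C_est = a * Q**b

# Convert concentration to load (tonnes/day): load = 0.0864 * Q (m^3/s) * C (mg/L)
load_obs = 0.0864 * Q * C
load_est = 0.0864 * Q * C_est

# Evaluation criteria used in the study: RMSE (zero for an ideal model) and R^2 (one for an ideal model)
rmse = np.sqrt(np.mean((load_obs - load_est) ** 2))
ss_res = np.sum((load_obs - load_est) ** 2)
ss_tot = np.sum((load_obs - load_obs.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
print(f"a={a:.3f}, b={b:.3f}, RMSE={rmse:.1f} t/day, R2={r2:.3f}")
```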

Keywords: artificial neural network, root mean square error, sediment, sediment rating curve

Procedia PDF Downloads 301
223 Barriers for Sustainable Consumption of Antifouling Products in the Baltic Sea

Authors: Bianca Koroschetz, Emma Mäenpää

Abstract:

The purpose of this paper is to study consumer practices and meanings of different antifouling methods in order to identify the main barriers to sustainable consumption of antifouling products in the Baltic Sea. The Baltic Sea is considered to be an important tourism area. More than 3.5 million leisure boaters use the sea for recreational boating. Most leisure boat owners use toxic antifouling paint to keep barnacles from attaching to the hull. Attached barnacles limit maneuverability and add drag, which in turn increases fuel costs. Antifouling paint used to combat barnacles causes particular problems, as the use of these products continuously adds to the distribution of biocides in the coastal ecosystem and leads to the death of marine organisms. To keep the Baltic Sea an attractive tourism area, measures need to be undertaken to stop the pollution coming from toxic antifouling paints. The antifouling market contains a wide range of environment-friendly alternative products such as brush washes for boats, hand scrubbing devices, hull covers and boat lifts. Unfortunately, not many boat owners use these environment-friendly alternatives and instead prefer the traditional toxic copper paints. We ask: “Why is the unsustainable consumption of toxic paints still predominant when there is a wide range of environment-friendly alternatives available? What are the barriers to sustainable consumption?” Environmental psychology has concentrated on developing models of human behavior, including the main factors that influence pro-environmental behavior. The main focus of these models has been directed to the individual’s attitudes, principles, and beliefs. However, social practice theory emphasizes the importance of studying practices, as they have stronger explanatory power than attitude-behavior models in explaining unsustainable consumer behavior. Thus, the study focuses on describing the material, meaning and competence of the antifouling practice in order to understand the social and cultural embeddedness of the practice. Phenomenological interviews were conducted with boat owners using antifouling products such as paints and alternative methods. This data collection was supplemented with participant observations in marinas. Preliminary results indicate that different factors such as costs, traditions, advertising, frequency of use, marinas and application method impact the consumption of antifouling products. The findings have shown that marinas have a big influence on the consumption of antifouling goods. Some marinas are very active in supporting the sustainable consumption of antifouling products; for example, in the Stockholm area several marinas subsidize the costs of using environmentally friendly alternatives or even forbid toxic paints. Furthermore, the study has revealed that environmentally friendly methods are very effective and do not have to be more expensive than painting with toxic paints. This study contributes to a broader understanding of why the unsustainable consumption of toxic paints is still predominant when a wide range of environment-friendly alternatives exists. Answers to this phenomenon will be gained by studying practices instead of attitudes, offering a new perspective on environmental issues.

Keywords: antifouling paint, Baltic Sea, boat tourism, sustainable consumption

Procedia PDF Downloads 166
222 Locating the Role of Informal Urbanism in Building Sustainable Cities: Insights from Ghana

Authors: Gideon Abagna Azunre

Abstract:

Informal urbanism is perhaps the most ubiquitous urban phenomenon in sub-Saharan Africa (SSA) and Ghana specifically. Estimates suggest that about two-fifths of urban dwellers (37.9%) in Ghana live in informal settlements, while two-thirds of the working labour force are within the informal economy. This makes Ghana invariably an ‘informal country.’ Informal urbanism involves economic and housing activities that are – in law or in practice – not covered (or insufficiently covered) by formal regulations. Many urban folks rely on informal urbanism as a survival strategy due to limited formal waged employment opportunities or rising home prices in the open market. In an era of globalizing neoliberalism, this struggle to survive in cities resonates with several people globally. For years now, there have been intense debates on the utility of informal urbanism – both its economic and housing dimensions – in developing sustainable cities. While some scholars believe that informal urbanism is beneficial to the sustainable city development agenda, others argue that it generates unbearable negative consequences and symbolizes lawlessness and squalor. Consequently, the main aim of this research was to dig below the surface of the narratives to locate the role of informal urbanism in the quest for sustainable cities. The research geographically focused on Ghana and its burgeoning informal sector. Both primary and secondary data were utilized for the analysis; secondary data entailed a synthesis of the fragmented literature on informal urbanism in Ghana, while primary data entailed interviews with informal stakeholders (such as informal settlement dwellers), city authorities, and planners. These two data sets were woven together to discover the nexus between informal urbanism and the tripartite dimensions of sustainable cities – economic, social, and environmental. The results from the research showed a two-pronged relationship between informal urbanism and the three dimensions of sustainable city development. In other words, informal urbanism was identified to both positively and negatively affect the drive for sustainable cities. On the one hand, it provides employment (particularly to women), supplies households’ basic needs (shelter, health, water, and waste management), and enhances civic engagement. On the other hand, it perpetuates social and gender inequalities, insecurity, congestion, and pollution. The research revealed that a ‘black and white’ interpretation and policy approach is incapable of capturing the complexities of informal urbanism. Therefore, trying to eradicate or remove it from the urbanscape because it exhibits some negative consequences means cities will lose its positive contributions. The inverse also holds true. A careful balancing act is necessary to maximize the benefits and minimize the costs. Overall, the research presented a de-colonial theorization of informal urbanism and thus followed post-colonial scholars’ clarion call to African cities to embrace the paradox of informality and find ways to integrate it into the city-building process.

Keywords: informal urbanism, sustainable city development, economic sustainability, social sustainability, environmental sustainability, Ghana

Procedia PDF Downloads 78
221 An in silico Approach for Exploring the Intercellular Communication in Cancer Cells

Authors: M. Cardenas-Garcia, P. P. Gonzalez-Perez

Abstract:

Intercellular communication is a necessary condition for cellular functions, and it allows a group of cells to survive as a population. Throughout this interaction, the cells work in a coordinated and collaborative way, which facilitates their survival. In the case of cancerous cells, these take advantage of intercellular communication to preserve their malignancy, since through these physical unions they can send signals of malignancy. The Wnt/β-catenin signaling pathway plays an important role in the formation of intercellular communications, being also involved in a large number of cellular processes such as proliferation, differentiation, adhesion, cell survival, and cell death. The modeling and simulation of cellular signaling systems have found valuable support in a wide range of modeling approaches, covering a spectrum that ranges from mathematical models (e.g., ordinary differential equations, statistical methods, and numerical methods) to computational models (e.g., process algebras for modeling behavior and variation in molecular systems). Based on these models, different simulation tools have been developed, from mathematical ones to computational ones. Regarding cellular and molecular processes in cancer, their study has also found valuable support in different simulation tools that, covering a spectrum as mentioned above, have allowed the in silico experimentation of this phenomenon at the cellular and molecular level. In this work, we simulate and explore the complex interaction patterns of intercellular communication in cancer cells using the Cellulat bioinformatics tool, a computational simulation tool developed by us and motivated by two key elements: 1) a biochemically inspired model of self-organizing coordination in tuple spaces, and 2) Gillespie’s algorithm, a stochastic simulation algorithm typically used to mimic systems of chemical/biochemical reactions in an efficient and accurate way. The main idea behind the Cellulat simulation tool is to provide an in silico experimentation environment that complements and guides in vitro experimentation on intra- and intercellular signaling networks. Unlike most cell signaling simulation tools, such as E-Cell, BetaWB and Cell Illustrator, which provide abstractions to model only intracellular behavior, Cellulat is appropriate for modeling both intracellular signaling and intercellular communication, providing the abstractions required to model (and, as a result, simulate) the interaction mechanisms that involve two or more cells, which is essential in the scenario discussed in this work. During the development of this work, we demonstrated the application of our computational simulation tool (Cellulat) to the modeling and simulation of intercellular communication between normal and cancerous cells and, in this way, proposed key molecules that may prevent the arrival of malignant signals at the cells that surround the tumor cells. In this manner, we could identify the significant role that the Wnt/β-catenin signaling pathway plays in cellular communication and, therefore, in the dissemination of cancer cells. We verified, using in silico experiments, how the inhibition of this signaling pathway prevents the cells that surround a cancerous cell from being transformed.
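Since the abstract names Gillespie's algorithm as one of the two key elements behind Cellulat, the following is a minimal Python sketch of the direct-method stochastic simulation algorithm for a toy signaling system. It is not the Cellulat model; the reactions, species and rate constants are hypothetical stand-ins chosen only to show how propensities, waiting times and reaction selection work.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy intercellular signaling system (hypothetical, not the Cellulat model):
# R1: W (secreted signal) + R (receptor) -> C (complex), rate k1
# R2: C -> W + R (unbinding),                         rate k2
# R3: C -> C + B (downstream response molecule),      rate k3
k1, k2, k3 = 0.002, 0.1, 0.05
state = np.array([500, 300, 0, 0])          # molecule counts of [W, R, C, B]
stoich = np.array([[-1, -1, +1, 0],         # R1
                   [+1, +1, -1, 0],         # R2
                   [ 0,  0,  0, +1]])       # R3

def propensities(x):
    W, R, C, B = x
    return np.array([k1 * W * R, k2 * C, k3 * C])

t, t_end, trajectory = 0.0, 50.0, []
while t < t_end:
    a = propensities(state)
    a0 = a.sum()
    if a0 == 0:
        break                                # no reaction can fire anymore
    t += rng.exponential(1.0 / a0)           # time to the next reaction
    j = rng.choice(len(a), p=a / a0)         # which reaction fires
    state = state + stoich[j]
    trajectory.append((t, *state))

print("final state [W, R, C, B]:", state)
```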

Keywords: cancer cells, in silico approach, intercellular communication, key molecules, modeling and simulation

Procedia PDF Downloads 230
220 Influence of Temperature and Immersion on the Behavior of a Polymer Composite

Authors: Quentin C.P. Bourgogne, Vanessa Bouchart, Pierre Chevrier, Emmanuel Dattoli

Abstract:

This study presents experimental and theoretical work conducted on a polyphenylene sulfide reinforced with 40 wt% of short glass fibers (PPS GF40) and on its matrix. Thermoplastics are widely used in the automotive industry to lighten automotive parts. The replacement of metallic parts by thermoplastics is reaching under-the-hood parts, near the engine. In this area, the parts are subjected to high temperatures and are immersed in cooling liquid. This liquid is composed of water and glycol and can affect the mechanical properties of the composite. The aim of this work was thus to quantify the evolution of the mechanical properties of the thermoplastic composite as a function of temperature and liquid aging effects, in order to develop a reliable design of parts. An experimental campaign in the tensile mode was carried out at different temperatures and for various glycol proportions in the cooling liquid, for monotonic and cyclic loadings on a neat and a reinforced PPS. The results of these tests highlighted some of the main physical phenomena occurring during these loadings under harsh hydrothermal conditions. Indeed, the tests showed that temperature and cooling liquid aging can affect the mechanical behavior of the material in several ways. The more water the cooling liquid contains, the more the mechanical behavior is affected. It was observed that PPS shows a higher sensitivity to absorption than to the chemical aggressiveness of the cooling liquid, explaining this dominant sensitivity. Two kinds of behavior were noted: an elasto-plastic type below the glass transition temperature and a visco-pseudo-plastic one above it. It was also shown that viscosity is the leading phenomenon above the glass transition temperature for the PPS and can also be important below this temperature, mostly under cyclic conditions and when the stress rate is low. Finally, it was observed that loading this composite at high temperatures reduces the advantages brought by the presence of fibers. A new phenomenological model was then built to take these experimental observations into account. This new model allows the prediction of the evolution of mechanical properties as a function of the loading environment, with a reduced number of parameters compared to previous studies. It was also shown that the presented approach enables the description and prediction of the mechanical response with very good accuracy (2% average error at worst) over a wide range of hydrothermal conditions. A temperature-humidity equivalence principle was highlighted for the PPS, allowing aging effects to be considered within the proposed model. Finally, a limit on the accuracy achievable by any model using this data set was determined by applying an artificial-intelligence-based model, allowing a comparison between artificial-intelligence-based models and phenomenological ones.

Keywords: aging, analytical modeling, mechanical testing, polymer matrix composites, sequential model, thermomechanical

Procedia PDF Downloads 92
219 Removal of Heavy Metals by Ultrafiltration Assisted with Chitosan or Carboxy-Methyl Cellulose

Authors: Boukary Lam, Sebastien Deon, Patrick Fievet, Nadia Crini, Gregorio Crini

Abstract:

Treatment of heavy metal-contaminated industrial wastewater has become a major challenge over the last decades. Conventional processes for the treatment of metal-containing effluents do not always simultaneously satisfy both legislative and economic criteria. In this context, coupling of processes can be a promising alternative to the conventional approaches used by industry. The polymer-assisted ultrafiltration (PAUF) process is one of these coupling processes. Its principle is based on a sequence of steps: a reaction (e.g., complexation) between metal ions and a polymer, followed by the rejection of the formed species by means of a UF membrane. Unlike free ions, which can cross the UF membrane due to their small size, the polymer/ion species, whose size is larger than the pore size, are rejected. The PAUF process was investigated in depth herein for the removal of nickel ions by adding chitosan and carboxymethyl cellulose (CMC). Experiments were conducted with synthetic solutions containing 1 to 100 ppm of nickel ions with or without the presence of NaCl (0.05 to 0.2 M), and with an industrial discharge water (containing several metal ions) with and without polymer. Chitosan with a molecular weight of 1.8×10⁵ g mol⁻¹ and a degree of acetylation close to 15% was used. CMC with a degree of substitution of 0.7 and a molecular weight of 9×10⁵ g mol⁻¹ was employed. Filtration experiments were performed under cross-flow conditions with a filtration cell equipped with a polyamide thin film composite flat-sheet membrane (3.5 kDa). Without the polymer addition step, it was found that nickel rejection decreases from 80 to 0% with increasing metal ion concentration and salt concentration. This behavior agrees qualitatively with the Donnan exclusion principle: the increase in the electrolyte concentration screens the electrostatic interaction between the ions and the membrane fixed charge, which decreases their rejection. It was shown that the addition of a sufficient amount of polymer (greater than 10⁻² M of monomer units) can offset this decrease and allow good metal removal. However, the permeation flux was found to be somewhat reduced due to the increase in osmotic pressure and viscosity. It was also highlighted that an increase in pH (from 3 to 9) has a strong influence on removal performance: the higher the pH value, the better the removal performance. The two polymers showed similar performance enhancement at natural pH. However, chitosan proved more efficient in slightly basic conditions (above its pKa), whereas CMC demonstrated very weak rejection performance when the pH is below its pKa. In terms of metal rejection, chitosan is thus probably the better option for basic or strongly acidic (pH < 4) conditions. Nevertheless, CMC should probably be preferred to chitosan in natural conditions (5 < pH < 8) since its impact on the permeation flux is less significant. Finally, ultrafiltration of an industrial discharge water showed that the increase in metal ion rejection induced by the polymer addition is very low due to competition between the various ions present in the complex mixture.
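To make the rejection figures above concrete, here is a small Python sketch (illustrative only, with hypothetical concentration values, not the authors' data) computing the observed rejection coefficient R = 1 − Cp/Cf from feed and permeate nickel concentrations, with and without added polymer.

```python
import numpy as np

# Hypothetical feed and permeate Ni(II) concentrations (mg/L) at increasing NaCl content
feed      = np.array([10.0, 10.0, 10.0])   # nickel in the feed
perm_free = np.array([2.0, 5.5, 9.5])      # permeate without polymer (rejection collapses with salt)
perm_pauf = np.array([0.4, 0.6, 0.9])      # permeate with >1e-2 M monomer units of chitosan/CMC

def rejection(c_feed, c_perm):
    """Observed rejection coefficient R = 1 - Cp/Cf (1 = full rejection, 0 = none)."""
    return 1.0 - c_perm / c_feed

print("R without polymer:", np.round(rejection(feed, perm_free), 2))
print("R with polymer   :", np.round(rejection(feed, perm_pauf), 2))
```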

Keywords: carboxymethyl cellulose, chitosan, heavy metals, nickel ion, polymer-assisted ultrafiltration

Procedia PDF Downloads 135
218 Modeling Visual Memorability Assessment with Autoencoders Reveals Characteristics of Memorable Images

Authors: Elham Bagheri, Yalda Mohsenzadeh

Abstract:

Image memorability refers to the phenomenon where certain images are more likely to be remembered by humans than others. It is a quantifiable and intrinsic attribute of an image. Understanding how visual perception and memory interact is important in both cognitive science and artificial intelligence. It reveals the complex processes that support human cognition and helps to improve machine learning algorithms by mimicking the brain's efficient data processing and storage mechanisms. To explore the computational underpinnings of image memorability, this study examines the relationship between an image's reconstruction error, its distinctiveness in latent space, and its memorability score. A trained autoencoder is used to replicate human-like memorability assessment, inspired by the visual memory game employed in memorability estimation. This study leverages a VGG-based autoencoder that is pre-trained on the vast ImageNet dataset, enabling it to recognize patterns and features common to a wide and diverse range of images. An empirical analysis is conducted using the MemCat dataset, which includes 10,000 images from five broad categories: animals, sports, food, landscapes, and vehicles, along with their corresponding memorability scores. The memorability score assigned to each image represents the probability of that image being remembered by participants after a single exposure. The autoencoder is fine-tuned for one epoch with a batch size of one, to create a scenario similar to human memorability experiments, where memorability is quantified by the likelihood of an image being remembered after being seen only once. The reconstruction error, which is quantified as the difference between the original and reconstructed images, serves as a measure of how well the autoencoder has learned to represent the data. The reconstruction error of each image, the error reduction, and its distinctiveness in latent space are calculated and correlated with the memorability score. Distinctiveness is measured as the Euclidean distance between each image's latent representation and its nearest neighbor within the autoencoder's latent space. Different structural and perceptual loss functions are considered to quantify the reconstruction error. The results indicate that there is a strong correlation between the reconstruction error and the distinctiveness of images and their memorability scores. This suggests that images with more unique, distinct features that challenge the autoencoder's compressive capacities are inherently more memorable. There is also a negative correlation between memorability and the reduction in reconstruction error achieved relative to the autoencoder pre-trained on ImageNet, which suggests that highly memorable images are harder to reconstruct, probably because they have features that are more difficult for the autoencoder to learn. These insights suggest a new pathway for evaluating image memorability, which could potentially impact industries reliant on visual content and mark a step forward in merging the fields of artificial intelligence and cognitive science. The current research opens avenues for utilizing neural representations as instruments for understanding and predicting visual memory.
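The following is a minimal Python sketch of the two image-level measures described above, per-image reconstruction error and latent-space distinctiveness, and their correlation with memorability scores. It is not the authors' code: it assumes reconstructions and latent vectors have already been produced by an autoencoder, and the arrays below are random stand-ins.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-ins for autoencoder outputs: N images, their reconstructions, and latent codes
N, H, W, D = 100, 64, 64, 256
images    = rng.random((N, H, W))
recons    = images + 0.05 * rng.standard_normal((N, H, W))   # imperfect reconstructions
latents   = rng.standard_normal((N, D))                      # latent representations
mem_score = rng.random(N)                                    # memorability scores (probability of recall)

# Reconstruction error: mean squared difference between original and reconstructed image
recon_err = np.mean((images - recons) ** 2, axis=(1, 2))

# Distinctiveness: Euclidean distance to the nearest other image in latent space
dists = np.linalg.norm(latents[:, None, :] - latents[None, :, :], axis=-1)
np.fill_diagonal(dists, np.inf)
distinctiveness = dists.min(axis=1)

# Correlations with memorability (the study reports strong positive correlations)
r_err  = np.corrcoef(recon_err, mem_score)[0, 1]
r_dist = np.corrcoef(distinctiveness, mem_score)[0, 1]
print(f"corr(reconstruction error, memorability) = {r_err:.2f}")
print(f"corr(distinctiveness, memorability)      = {r_dist:.2f}")
```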

Keywords: autoencoder, computational vision, image memorability, image reconstruction, memory retention, reconstruction error, visual perception

Procedia PDF Downloads 47
217 Combating the Practice of Open Defecation through Appropriate Communication Strategies in Rural India

Authors: Santiagomani Alex Parimalam

Abstract:

Lack of awareness of the consequences of open defecation, together with myths and misconceptions related to the use of toilets, has led to the continued practice of open defecation in India. The Government of India initiated a multi-pronged intensive communication campaign against the practice of open defecation in the last few years. The primary vision of this communication campaign was to create increased demand for toilets and to ensure that all have access to safe sanitation. The campaign strategy included the use of mass media, group and folk media, and interpersonal communication to expedite achieving its objectives. The campaign included the use of various media such as posters, wall writings, slides in cinema theatres, kiosks, pamphlets, newsletters, flip charts and folk media to bring about behavioural changes in the communities. The author carried out concurrent monitoring and process documentation of the campaigns initiated by the state of Tamil Nadu, India between 2013 and 2016, commissioned by UNICEF India. The study was carried out to assess the effectiveness of the communication campaigns in combating the practice of open defecation and promoting the construction of toilets in the state of Tamil Nadu, India. Initial findings revealed gaps in understanding the audience and in the use of appropriate media. The first phase of the communication campaign, named Chi Chi Chollapa (a shaming concept), also revealed that the use of interpersonal communication and group and community media was the most effective strategy for reaching the rural masses. The failure of various other media used, especially the print media (posters, handbills, newsletters, kiosks), provides insights into where the government needs to invest its resources to bring about health-seeking behaviour in the community. The findings shared with the government enabled the campaign to be strengthened, resulting in an improved response. Taking cues from the study, the government understood the potency of women, school children, youth and community leaders as effective carriers of the message. The government narrowed its focus and invested in the voluntary workers (village poverty reduction committee workers, VPRCs) in the community. The effectiveness of interpersonal communication and peer education by credible community workers threw light on the need for localising the content and the communicator. From this study, we can derive that only community and group media are preferred by people in the rural community. Children, youth, women, and credible local leaders proved to be ambassadors of behaviour change communication. This study discloses the lacunae involved in the communication campaign and points out that the state should have carried out a proper communication needs analysis and piloting. The study used a survey method with random sampling. The study used both quantitative and qualitative tools such as interview schedules, in-depth interviews, and focus group discussions in rural areas of Tamil Nadu in phases. The findings of the study would provide directions for future campaigns concerning health and rural development.

Keywords: appropriate, communication, combating, open defecation

Procedia PDF Downloads 103
216 Case-Based Reasoning for Modelling Random Variables in the Reliability Assessment of Existing Structures

Authors: Francesca Marsili

Abstract:

The reliability assessment of existing structures with probabilistic methods is becoming an increasingly important and frequent engineering task. However, probabilistic reliability methods are based on exhaustive knowledge of the stochastic modeling of the variables involved in the assessment; at the moment, standards for the modeling of variables are absent, representing an obstacle to the dissemination of probabilistic methods. The framework according to which probability distribution functions (PDFs) are established is Bayesian statistics, which uses Bayes' theorem: a prior PDF for the considered parameter is established based on information derived from the design stage and on qualitative judgments based on the engineer's past experience; then, the prior model is updated with the results of investigations carried out on the considered structure, such as material testing and the determination of actions and structural properties. The application of Bayesian statistics raises two different kinds of problems: 1. the results of the updating depend on the engineer's previous experience; 2. the updating of the prior PDF can be performed only if the structure has been tested and quantitative data that can be statistically manipulated have been collected; performing tests is always an expensive and time-consuming operation; furthermore, if the considered structure is an ancient building, destructive tests could compromise its cultural value and should therefore be avoided. In order to solve these problems, an interesting research path is to investigate Artificial Intelligence (AI) techniques that can be useful for automating the modeling of variables and for updating material parameters without performing destructive tests. Among these, one that attracts particular attention in relation to the object of this study is Case-Based Reasoning (CBR). In this application, cases will be represented by existing buildings where material tests have already been carried out and updated PDFs for the material mechanical parameters have been computed through a Bayesian analysis. Each case will then be composed of a qualitative description of the material under assessment and the posterior PDFs that describe its material properties. The problem to be solved is the definition of PDFs for the material parameters involved in the reliability assessment of the considered structure. A CBR system represents a good candidate for automating the modelling of variables because: 1. engineers already draw an estimation of the material properties based on the experience collected during the assessment of similar structures, or based on similar cases collected in the literature or in databases; 2. material tests carried out on structures can easily be collected from laboratory databases or from the literature; 3. the system will provide the user with a reliable probabilistic description of the variables involved in the assessment, which will also serve as a tool in support of the engineer's qualitative judgments. Automated modeling of variables can help spread the probabilistic reliability assessment of existing buildings in common engineering practice and target the best interventions and further tests on the structure; CBR represents a technique which may help to achieve this.
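As a concrete illustration of the Bayesian updating step described above, here is a minimal Python sketch (illustrative, not the paper's procedure) of a conjugate normal update of the mean of a material property, e.g., a concrete compressive strength, where the prior comes from the design stage and engineering judgment and the likelihood from a few test results; all numbers are hypothetical.

```python
import numpy as np

# Prior for the mean compressive strength (MPa), from design documents / past experience
mu0, sigma0 = 30.0, 4.0       # prior mean and prior standard deviation of the mean
sigma = 3.0                   # assumed known scatter of individual test results

# A few (hypothetical) in-situ test results on the existing structure
tests = np.array([27.5, 31.2, 29.0, 28.4])
n, xbar = len(tests), tests.mean()

# Conjugate normal-normal update: posterior precision = prior precision + data precision
post_prec  = 1.0 / sigma0**2 + n / sigma**2
mu_post    = (mu0 / sigma0**2 + n * xbar / sigma**2) / post_prec
sigma_post = np.sqrt(1.0 / post_prec)

print(f"prior:     N({mu0:.1f}, {sigma0:.2f}^2)")
print(f"posterior: N({mu_post:.1f}, {sigma_post:.2f}^2) after {n} tests with mean {xbar:.1f} MPa")
```

In a CBR setting, a retrieved similar case would supply the prior parameters (mu0, sigma0) instead of the design documents, which is exactly what allows the updating to proceed without new destructive tests.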

Keywords: reliability assessment of existing buildings, Bayesian analysis, case-based reasoning, historical structures

Procedia PDF Downloads 317
215 Benefits of The ALIAmide Palmitoyl-Glucosamine Co-Micronized with Curcumin for Osteoarthritis Pain: A Preclinical Study

Authors: Enrico Gugliandolo, Salvatore Cuzzocrea, Rosalia Crupi

Abstract:

Osteoarthritis (OA) is one of the most common chronic pain conditions in dogs and cats. OA pain is currently viewed as a mixed phenomenon involving both inflammatory and neuropathic mechanisms at the peripheral (joint) and central (spinal and supraspinal) levels. Oxidative stress has been implicated in OA pain. Although nonsteroidal anti-inflammatory drugs are commonly prescribed for OA pain, they should be used with caution in pets because of adverse effects in the long term and controversial efficacy on neuropathic pain. An unmet need remains for safe and effective long-term treatments for OA pain. Palmitoyl-glucosamine (PGA) is an analogue of the ALIAmide palmitoylethanolamide, i.e., one of the body's own endocannabinoid-like compounds playing a sentinel role in nociception. PGA, especially in the micronized formulation, has been shown to be safe and effective in OA pain. The aim of this study was to investigate the effect of a co-micronized formulation of PGA with the natural antioxidant curcumin (PGA-cur) on OA pain. Ten Sprague-Dawley male rats were used for each treatment group. The University of Messina Review Board for the care and use of animals authorized the study. On day 0, rats were anesthetized (5.0% isoflurane in 100% O2) and received an intra-articular injection of MIA (3 mg in 25 μl saline) in the right knee joint, with the left knee injected with an equal volume of saline. Starting on the third day after MIA injection, treatments were administered orally three times per week for 21 days, at the following doses: PGA 20 mg/kg, curcumin 10 mg/kg, PGA-cur (2:1 ratio) 30 mg/kg. On day 0 and on days 3, 7, 14 and 21 post-injection, mechanical allodynia was measured using a dynamic plantar von Frey hair aesthesiometer and expressed as paw withdrawal threshold (PWT) and latency (PWL). Motor functional recovery of the rear limb was evaluated at the same time points by walking track analysis using the sciatic functional index. On day 21 post-MIA injection, the concentrations of the following inflammatory and nociceptive mediators were measured in serum using commercial ELISA kits: tumor necrosis factor alpha (TNF-α), interleukin-1 beta (IL-1β), nerve growth factor (NGF) and matrix metalloproteinases 1, 3 and 9 (MMP-1, MMP-3, MMP-9). The results were analyzed by ANOVA followed by the Bonferroni post-hoc test for multiple comparisons. Micronized PGA reduced neuropathic pain, as shown by the significantly higher PWT and PWL values compared to the vehicle group (p < 0.0001 for all the evaluated time points). The effect of PGA-cur was superior at all time points (p < 0.005). PGA-cur restored motor function already on day 14 (p < 0.005), while micronized PGA was effective a week later (day 21). The MIA-induced increase in the serum levels of all the investigated mediators was inhibited by PGA-cur (p < 0.01). PGA was also effective, except on IL-1β and MMP-3. Curcumin alone was inactive in all the experiments at any time point. These encouraging results suggest that PGA-cur may represent a valuable option in OA pain management and warrant further confirmation in well-powered clinical trials.

Keywords: ALIAmides, curcumin, osteoarthritis, palmitoyl-glucosamine

Procedia PDF Downloads 85
214 Oil-price Volatility and Economic Prosperity in Nigeria: Empirical Evidence

Authors: Yohanna Panshak

Abstract:

The impact of macroeconomic instability on economic growth and prosperity has been at the forefront of many discourses among researchers and policy makers and has generated a lot of controversy over the years. This has generated a series of research efforts towards understanding the remote causes of this phenomenon: its nature, its determinants and how it can be targeted and mitigated. While some have opined that the root cause of macroeconomic flux in Nigeria is oil-price volatility, others view the issue as resulting from a constellation of structural constraints both within and outside the shores of the country. Research works of scholars such as Akpan (2009), Aliyu (2009) and Olomola (2006) argue that oil volatility can determine economic growth or has the potential to do so. On the contrary, Darby (1982) and Cerralo (2005), among others, share the opinion that it can slow down growth. The former argument rests on the understanding that, for net oil-exporting economies, a price upturn directly increases real national income through higher export earnings, whereas the latter alludes to the case of net oil-importing countries (which experience price rises, increased input costs, reduced non-oil demand, low investment, a fall in tax revenues and ultimately an increase in the budget deficit, which further reduces welfare). Therefore, assessing the precise impact of oil price volatility on virtually any economy is a function of whether it is an oil-exporting or oil-importing nation. Research on oil price volatility and its effect on the growth of the Nigerian economy is evolving and is a march towards resolving Nigeria's macroeconomic instability, as long as oil revenue remains the mainstay and driver of socio-economic engineering. Recently, a major importer of Nigeria's oil, the United States, made a historic breakthrough towards a more efficient source of energy for its economy, with the capacity to serve a significant part of the world. This undoubtedly suggests a threat to the exchange earnings of the country. The need to understand fluctuations in its major export commodity is critical. This paper leans on the Renaissance growth theory, with greater focus on the theoretical work of Lee (1998), a leading proponent of this school who makes a clear-cut distinction between oil price changes and oil price volatility. Based on the above background, the research seeks to empirically examine the impact of oil-price volatility on government expenditure using quarterly time series data spanning 1986:1 to 2014:4. A Vector Autoregression (VAR) econometric approach will be used. The structural properties of the model will be tested using Augmented Dickey-Fuller and Phillips-Perron tests. Relevant diagnostic tests for heteroscedasticity, serial correlation and normality will also be carried out. Policy recommendations will be offered based on the empirical findings, which should assist policy makers not only in Nigeria but worldwide.
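The following is a minimal Python sketch of the econometric workflow named above (unit-root testing followed by VAR estimation) using statsmodels, with an impulse-response step added as a common follow-up. The CSV file and column names are hypothetical placeholders, not the paper's data, and the blanket first-differencing is a simplifying assumption.

```python
import pandas as pd
from statsmodels.tsa.api import VAR
from statsmodels.tsa.stattools import adfuller

# Hypothetical quarterly dataset with columns: oil_price_volatility, govt_expenditure
df = pd.read_csv("nigeria_quarterly.csv", index_col="quarter", parse_dates=True)

# Step 1: Augmented Dickey-Fuller unit-root test on each series
for col in df.columns:
    stat, pvalue, *_ = adfuller(df[col].dropna())
    print(f"ADF {col}: stat={stat:.2f}, p={pvalue:.3f}")
data = df.diff().dropna()   # assume first differences are stationary

# Step 2: estimate a VAR with the lag order chosen by AIC
model = VAR(data)
results = model.fit(maxlags=8, ic="aic")
print(results.summary())

# Step 3: impulse response of government expenditure to an oil-price-volatility shock
irf = results.irf(10)
irf.plot(impulse="oil_price_volatility", response="govt_expenditure")
```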

Keywords: oil-price, volatility, prosperity, budget, expenditure

Procedia PDF Downloads 249
213 Numerical Analyses of Dynamics of Deployment of PW-Sat2 Deorbit Sail Compared with Results of Experiment under Micro-Gravity and Low Pressure Conditions

Authors: P. Brunne, K. Ciechowska, K. Gajc, K. Gawin, M. Gawin, M. Kania, J. Kindracki, Z. Kusznierewicz, D. Pączkowska, F. Perczyński, K. Pilarski, D. Rafało, E. Ryszawa, M. Sobiecki, I. Uwarowa

Abstract:

The large amount of space debris nowadays constitutes a real threat to operating spacecraft; therefore, the main purpose of the PW-Sat2 team was to create a system that could help cleanse the Earth's orbit after each small satellite's mission. After 4 years of development, a motorless, low-energy-consumption and low-weight system has been created. During a series of tests, the system has shown highly reliable efficiency. PW-Sat2's deorbit system is a square-shaped sail which covers an area of 4 m². The sail surface is made of 6 μm aluminized Mylar film which is stretched across 4 diagonally placed arms, each consisting of two C-shaped flat springs and enveloped in Mylar sleeves. The sail is coiled using a special, custom-designed folding stand that provides automation and repeatability of the sail unwinding tests, and it is placed in a container with an inner diameter of 85 mm. In the final configuration, the deorbit system weighs ca. 600 g and occupies 0.6U (in accordance with the CubeSat standard). The sail's release system requires a minimal amount of power and is based on a thermal knife that burns through the Dyneema wire which holds the system before deployment. The sail is pushed out of the container to a safe distance (20 cm) from the satellite. The energy for the deployment is entirely provided by the coiled C-shaped flat springs, which, during the release, unfold the sail surface. To avoid dynamic effects on the satellite's structure, there is a rotational link between the sail and the satellite's main body. To obtain complete knowledge of the complex dynamics of the deployment, a number of experiments have been performed in varied environments. A numerical model of the dynamics of the sail's deployment has been built and is still under continuous development. Currently, the integration of the flight model and the deorbit sail is being performed. The launch is scheduled for February 2018. At the same time, in cooperation with the United Nations Office for Outer Space Affairs, sail models and the requested facilities are being prepared for the sail deployment experiment under micro-gravity and low-pressure conditions at the Bremen Drop Tower, Germany. The results of those tests will provide comprehensive knowledge about deployment in the space environment to which the system will be exposed during its mission. The outcomes of the numerical model and the tests will be compared afterwards and will help the team build a reliable and correct model of the very complex phenomenon of the deployment of four C-shaped flat springs with a surface attached. The verified model could be used, inter alia, to investigate whether PW-Sat2's sail is scalable and how far enlargement can go when creating systems for bigger satellites.

Keywords: cubesat, deorbitation, sail, space debris

Procedia PDF Downloads 266
212 Comparative Review of Models for Forecasting Permanent Deformation in Unbound Granular Materials

Authors: Shamsulhaq Amin

Abstract:

Unbound granular materials (UGMs) are pivotal in ensuring long-term quality, especially in the layers beneath the surface of flexible pavements and other constructions. This study seeks to better understand the behavior of UGMs by looking at popular models for predicting permanent deformation under various stress levels and load cycles. These models focus on variables such as the number of load cycles, stress levels, and material-specific features, and were evaluated on the basis of their ability to accurately predict outcomes. The study showed that these factors play a crucial role in how well the models work. Therefore, the research highlights the need to look at a wide range of stress situations to more accurately predict how much the UGMs deform. The research examined important factors, such as how permanent deformation relates to the number of times a load is applied, how quickly this phenomenon develops, and the shakedown effect, in two different types of UGMs: granite and limestone. A detailed study was carried out over 100,000 load cycles, which provided deep insights into how these materials behave. In this study, a number of factors, such as the applied stress level, the number of load cycles, the density of the material, and the moisture present, were seen as the main factors affecting permanent deformation. It is vital to fully understand these elements in order to design pavements that last longer and better handle wear and tear. A series of laboratory tests was performed to evaluate the mechanical properties of the materials and acquire model parameters. The testing included gradation tests, CBR tests, and repeated load triaxial tests. The repeated load triaxial tests were crucial for studying the significant components that affect deformation; they involved applying various stress levels to estimate model parameters. In addition, certain model parameters were established by regression analysis, and optimization was conducted to improve the outcomes, as shown in the sketch after this paragraph. Afterward, the acquired material parameters were used to construct graphs for each model. The graphs were subsequently compared to the outcomes obtained from the repeated load triaxial testing. Additionally, the models were evaluated to determine whether they demonstrated the two inherent deformation behaviors of materials subjected to repetitive load: the initial post-compaction phase and the second phase of volumetric changes. In this study, using log-log graphs was key to making the complex data easier to understand. This method made the analysis clearer and helped make the findings easier to interpret, adding both precision and depth to the research. This research provides important insight into picking the right models for predicting how these materials will act under expected stress and load conditions. Moreover, it offers crucial information regarding the effect of load cycles and permanent deformation, as well as the shakedown effect, on granite and limestone UGMs.
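As a sketch of the kind of parameter fitting and log-log plotting described above, the Python snippet below fits one classical permanent-deformation relation, eps_p = A·N^b (a common form in the literature, not necessarily one of the specific models compared in the paper), by linear regression in log-log space; the strain values are hypothetical.

```python
import numpy as np

# Hypothetical accumulated permanent strain (%) measured at selected load cycles N
N   = np.array([100, 1_000, 5_000, 20_000, 50_000, 100_000], dtype=float)
eps = np.array([0.12, 0.21, 0.28, 0.35, 0.40, 0.44])

# Classical power-law relation eps_p(N) = A * N^b, which is linear in log-log space:
# log(eps_p) = log(A) + b * log(N)
b, logA = np.polyfit(np.log(N), np.log(eps), 1)
A = np.exp(logA)
eps_fit = A * N**b

rmse = np.sqrt(np.mean((eps - eps_fit) ** 2))
print(f"A = {A:.4f}, b = {b:.3f}, RMSE = {rmse:.4f} %")
# A small exponent b (a flattening curve on the log-log plot) is consistent with
# shakedown-type behaviour, while an increasing accumulation rate would indicate
# incremental collapse.
```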

Keywords: permanent deformation, unbound granular materials, load cycles, stress level

Procedia PDF Downloads 14
211 Case Study of Migrants, Cultures and Environmental Crisis

Authors: Christina Y. P. Ting

Abstract:

Migration is a global phenomenon, with movements of migrants from developed and developing countries to host societies. Migrants have changed the host countries' demography: their population structure and also their ethnic cultural diversity. Acculturation of migrants, in terms of their adoption of the host culture, is seen as important to ensure that they ‘fit into’ their adopted country so as to participate in everyday public life. However, this research found that the increase in China-born migrants' post-migration consumption level had an impact on Australia's environment, reflecting not only their adoption of elements of the host culture but also their retention of aspects of Chinese culture, indicating that the influence of bi-culturalism was in operation. This research, which was based on face-to-face interviews with 61 China-born migrants in the suburb of Box Hill, Melbourne, investigated the pattern of change in the migrants' consumption upon their settlement in Australia. Using an ecological footprint calculator, their post-migration footprints were found to be larger than their pre-migration footprints. The uniquely derived CALD (Culturally and Linguistically Diverse) Index was used to measure individuals' strength of connectedness to ethnic culture. Multivariate analysis was carried out to understand which independent factors that influence consumption best explain the change in footprint (the difference between pre- and post-migration footprints, as the dependent factor). These independent factors ranged from socio-economic and demographic characteristics to the cultural context, that is, the CALD Index and indicators of acculturation. The major findings from the analysis were that Chinese culture (as measured by the CALD Index) and indicators of acculturation, such as length of residency and using English in communications, besides traditional factors such as age, income and education level, made significant contributions to the large increase in the China-born group's post-migration consumption level. This paper, as part of a larger study, found that younger migrants' large change in footprint was related to high income and a low level of education. This group of migrants also practiced bi-cultural consumption, retaining ethnic culture while adopting the host culture. These findings importantly highlight that, for a host society to tackle environmental crisis, governments need not only to understand the relationship between age and consumption behaviour, but also to understand and embrace the migrants' ethnic cultures, which may act as bridges and/or fences in relationships. In conclusion, for governments to deal with national issues such as environmental crisis within a culturally diverse population, an understanding of age and of the aspects of ethnic culture that may act as bridges and fences is necessary. This understanding can aid in putting in place policies that enable the co-existence of a hybrid of ethnic and host cultures in order to create and maintain a harmonious and secure living environment for population groups.

Keywords: bicultural consumer, CALD index, consumption, ethnic culture, migrants

Procedia PDF Downloads 216
210 Differences Between Mother and Father Perpetrators on Child Maltreatment Foster Care Outcomes: An Emphasis on Hispanic and Native American Families

Authors: Yadira Tejeda, Wynette Whitegoat, Dylan Jones, Brett Drake

Abstract:

Background and Purpose: Hispanic and American Indian/Alaska Native (AI/AN) families impacted by child protective services (CPS) continue to be a population about which little is known in the literature. Even less is known about the fathers of these children and the safety or risk factors attributed to child maltreatment and case outcomes. However, it is known that involving fathers in children's lives is needed for healthy development, academic achievement, and cognitive development. The few articles that have studied the impacts of engaging fathers in CPS have found that children in general experience shorter times in foster care, are more likely to reunify with their biological family, and overall have better case outcomes. The purpose of this study is to determine whether perpetrators identified as the mother, the father, or both impact foster care placement for Hispanic and AI/AN families in CPS. Methods: Using NCANDS Child File data, reports submitted in FY2017 with at least one substantiated allegation, i.e., those with perpetrator information, were selected. Reports were categorized into one of three categories: mom-perpetrator-only, father-perpetrator-only, and both. Reports that did not fall into any one of these three categorizations were omitted (<18%). Lastly, only reports where the mother and father self-identified as Hispanic or AI/AN were kept. Foster care placement was measured as whether any child in the report was placed within three months of the report date. Multilevel logistic regression models (with random intercepts at the state and county levels) were used to model the relationship between report-parent type and foster care placement. Controls included maltreatment types, number of children, any prior reports, and age of the youngest child. Results: For AI/AN reports, 64% were mom-perpetrator-only, 20% were father-perpetrator-only, and 16% both. Father-perpetrator-only reports had 60% lower odds of placement than mom-perpetrator-only reports, and both-perpetrator reports had 35% greater odds than mom-only reports. For Hispanics, 51% were mom-perpetrator-only, 30% father-perpetrator-only, and 19% both. Father-perpetrator-only reports had 74% lower odds than mom-perpetrator-only reports, and both-perpetrator reports had 55% greater odds than mom-perpetrator-only reports. Conclusion and Implications: Fatherhood research focused on prevention and intervention services should include Hispanic and AI/AN fathers to create culturally relevant and tailored services for both groups. By identifying differences in children's CPS trajectories conditional on fathers' involvement as a perpetrator, this analysis helps to inform where and how prevention efforts should be focused when considering variation in parental involvement for both populations. The findings indicate that the father's involvement predicts substantial differences in the probability of future placement, with the direction depending on the mother's joint involvement. Future research should investigate mediating pathways of these relationships while accounting for the unique experiences of AI/AN and Hispanic families. Each of these racial groups faces unique and differing challenges related to CPS, yet both groups have a shared understanding of the importance of fatherhood in the lives of children. Developing a better understanding of what is happening with Hispanic and AI/AN fathers as it relates to children's CPS experiences may result in new tools to reduce child maltreatment rates in these communities.
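Below is a simplified Python sketch of the kind of model behind the reported odds ratios. It fits a single-level logistic regression with statsmodels; the paper's random intercepts at the state and county levels are deliberately omitted for brevity, and the data frame, file name and column names are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical analytic file, one row per report:
# placed       : 1 if any child was placed in foster care within 3 months, else 0
# perp_type    : 'mom_only', 'father_only', or 'both'
# prior_report : 1 if there was any prior report
# n_children   : number of children on the report
# youngest_age : age of the youngest child
df = pd.read_csv("ncands_fy2017_subset.csv")

# Single-level logistic regression (no state/county random intercepts in this sketch)
model = smf.logit(
    "placed ~ C(perp_type, Treatment(reference='mom_only')) "
    "+ prior_report + n_children + youngest_age",
    data=df,
).fit()

# Exponentiated coefficients are odds ratios relative to mom-perpetrator-only reports;
# e.g. an OR of 0.40 for father_only corresponds to '60% lower odds of placement'.
print(np.exp(model.params))
```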

Keywords: child abuse, child maltreatment, NDACAN, Latino, Native American

Procedia PDF Downloads 11
209 Mapping Vulnerabilities: A Social and Political Study of Disasters in Eastern Himalayas, Region of Darjeeling

Authors: Shailendra M. Pradhan, Upendra M. Pradhan

Abstract:

Disasters are perennial features of human civilization. The recurring earthquakes, floods and cyclones, among others, that result in massive loss of lives and devastation are a grim reminder of the fact that, despite all our success stories of development and progress in science and technology, human society is perennially at risk from disasters. The apparent threat of climate change and global warming only exacerbates our disaster risks. The Darjeeling hills, situated in the Eastern Himalayan region of India and famous for their three Ts – tea, tourism and the toy train – are equally notorious for their disasters. The recurring landslides and earthquakes, cyclone Aila, and the Ambootia landslide, considered the largest landslide in Asia, are strong evidence of the vulnerability of the Darjeeling hills to natural disasters. Given its geographical location along the Hindu-Kush Himalayas, the region is marked by rugged topography, a geophysically unstable structure, high seismicity, and a fragile landscape, making it prone to disasters of different kinds and magnitudes. Most of the studies on disasters in the Darjeeling hills are, however, scientific and geographical in orientation, focusing on the underlying geological and physical processes to the neglect of social and political conditions. This has created a tendency among researchers and policy-makers to endorse and promote a particular type of discourse that does not consider the social and political aspects of disasters in the Darjeeling hills. Disaster, this paper argues, is a complex phenomenon and a result of diverse factors, both physical and human. The hazards caused by physical and geological agents, and the vulnerabilities produced by and rooted in the political, economic, social and cultural structures of a society, together result in disasters. In this sense, disasters are as much a result of political and economic conditions as of the physical environment. The human aspect of disasters, therefore, compels us to address intricate social and political challenges that ultimately determine our resilience and vulnerability to disasters. Set within the above milieu, the aims of the paper are twofold: a) to provide a political and sociological account of disasters in the Darjeeling hills; and b) to identify and address the root causes of their vulnerabilities to disasters. In situating disasters in the Darjeeling hills, the paper adopts the Pressure and Release (PAR) model, which provides a theoretical insight into the study of the social and political aspects of disasters and a framework for examining the myriad other related issues therein. The PAR model conceptualises risk as a complex combination of vulnerabilities, on the one hand, and hazards, on the other. Disasters, within the PAR framework, occur when hazards interact with vulnerabilities. The root causes of vulnerability, in turn, can be traced to social and political structures such as legal definitions of rights, gender relations, and other ideological structures and processes. In this way, the PAR model helps the present study identify and unpack the root causes of vulnerabilities and disasters in the Darjeeling hills that have largely remained neglected in dominant discourses, thereby providing a more nuanced and sociologically sensitive understanding of disasters.

Keywords: Darjeeling, disasters, PAR, vulnerabilities

Procedia PDF Downloads 251
208 Assessing the Plant Diversity's Quality, Threats and Opportunities for the Support of Sustainable City Development of the City Raipur, India

Authors: Katharina Lapin, Debashis Sanyal

Abstract:

Worldwide, urban areas are growing. Urbanization has a great impact on social and economic development and on ecosystem services. This global trend of urbanization also has a significant impact on habitat and biodiversity. The impact of urbanization on the biodiversity of cities in Europe and North America is well studied, while there is a lack of data from cities in currently fast-growing urban regions. Indian cities are expanding, and the scientific community and the governmental authorities are treating the ongoing urbanization process as an opportunity for the environment. This case study supports the evaluation of the urban biodiversity of the city of Raipur in central India. The aim of this study is to provide an overview of the environmental and ecological implications of urbanization. The collected data and analysis were used to discuss the challenges for sustainable city development. Vascular plants were chosen as an appropriate indicator for the assessment of local biodiversity changes. On the one hand, the vegetation cover is sensitive to anthropogenic influence and, on the other hand, the local species composition is comparable to changes at the regional and national scale, using the plant index of India. Further information on the abiotic situation can be gathered through the determination of indicator species. In order to calculate the influence of urbanization on native plant diversity, the Shannon diversity index H′ was chosen. Pielou's pooled quadrat method was used for estimating diversity when a random sample cannot be expected, and it was used to calculate Pielou's index of evenness. The estimated species coverage was used for calculating H′ and J. Pearson correlation was performed to test the relationship between urbanization pattern and plant diversity. Further, a SWOT analysis was used for analyzing internal and external factors impinging on the decision-making process. The city of Raipur (21.25°N 81.63°E) has a population of 1,010,087 inhabitants living in an urban area of 226 km² in the Indian state of Chhattisgarh. Within the last decade, the urban area of Raipur has increased. The results show that various novel ecosystems exist in the urban area of Raipur. The high share of native flora is mainly found on the shores of urban lakes and along the river Karun. These areas with a high biodiversity index are to be protected as urban biodiversity hotspots. The governmental authorities are well informed about the environmental challenges for the sustainable development of the city. Together with the scientific community of the Technical University of Raipur, many engineering solutions are being discussed for future implementation. The case study helped to point out the importance of environmental measures that support the ecosystem services of green infrastructure. The fast process of urbanization is difficult to control, and the uncontrolled creation of urban housing leads to unsustainable use of natural resources. This is the major threat to urban biodiversity.
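For reference, here is a small Python sketch of the two diversity measures named above, the Shannon index H′ and Pielou's evenness J = H′/ln(S), computed from estimated species cover values; the cover data are hypothetical and stand in for one survey plot.

```python
import numpy as np

# Hypothetical estimated cover (%) of the vascular plant species recorded in one survey plot
cover = np.array([35.0, 20.0, 15.0, 10.0, 8.0, 5.0, 4.0, 2.0, 1.0])

p = cover / cover.sum()            # relative abundance of each species
H = -np.sum(p * np.log(p))         # Shannon diversity index H'
S = len(cover)                     # species richness
J = H / np.log(S)                  # Pielou's evenness (1 = all species equally abundant)

print(f"S = {S}, H' = {H:.3f}, J = {J:.3f}")
```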

Keywords: India, novel ecosystems, plant diversity, urban ecology

Procedia PDF Downloads 253
207 Green Space and Their Possibilities of Enhancing Urban Life in Dhaka City, Bangladesh

Authors: Ummeh Saika, Toshio Kikuchi

Abstract:

Population growth and urbanization are global phenomena. With the rapid progress of technology, many cities in the international community are facing serious problems of urbanization. There is no doubt that urbanization will continue to have significant impacts on the ecology, economy and society at local, regional and global levels. The inhabitants of Dhaka city suffer from a lack of proper urban facilities. Green spaces are needed for the different functional and leisure activities of urban dwellers. Yet with growing densification, a number of green spaces in Dhaka city have been converted to other uses, so that the city's greenery decreases gradually. Moreover, the existing green space is frequently threatened by encroachment. The role of green space, both at community and city level, is important for improving the natural environment and social ties for future generations. Therefore, green space needs to become more effective for public interaction. The main objective of this study is to assess the effectiveness of the urban green space (urban parks) of Dhaka city. Two approaches are used: firstly, analyzing the long-term spatial changes of urban green space using GIS, and secondly, investigating the relationship of the urban park network with the physical and social environment. The case study site covers eight urban parks of the Dhaka metropolitan area of Bangladesh. Two aspects, physical and social, are considered. For the physical aspect, satellite images and aerial photos of different years are used to trace the changes of the urban parks. For the social aspect, the methods used are questionnaire surveys, interviews, observation, photographs, sketches and previous information on the parks, in order to analyze their social environment. After all data are processed with descriptive statistics, the results are presented as maps using GIS. According to physical size, the parks of Dhaka city are classified into four types: small, medium, large and extra-large parks. The observed results show that the physical and social environment of urban parks varies with their size. In small parks the physical environment is moderate, owing to new tree plantation and area expansion. In medium parks, however, the physical environment is poor, with, for example, decreasing tree cover and increasing exposed soil. On the other hand, the physical environment of large and extra-large parks is in good condition because of abundant vegetation and good management. Regarding the social environment, people in small parks come mainly from the surrounding area and use them chiefly as waiting places. In medium parks, people come from different places to attend various occasions. In large and extra-large parks, people come from every part of the city area for tourism purposes. Urban parks are an important source of green space and influence both the physical and social environment of urban areas. Nowadays the green space area gradually decreases as it is converted to other uses. The findings of this research reveal that changes in urban parks influence both the physical and social environment and also have an impact on urban life.
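The size-classification and descriptive-statistics step described above could look roughly like the following pandas sketch; the park names, areas, visitor counts and class boundaries are illustrative assumptions, not the study's data.

```python
import pandas as pd

# Hypothetical park areas and visitor counts; the names, values and the class
# boundaries below are illustrative assumptions, not figures from the study.
parks = pd.DataFrame({
    "park": ["Park A", "Park B", "Park C", "Park D",
             "Park E", "Park F", "Park G", "Park H"],
    "area_ha": [1.2, 2.8, 6.5, 9.0, 15.4, 22.1, 48.0, 75.3],
    "daily_visitors": [120, 200, 450, 600, 900, 1400, 3200, 5100],
})

# Assign the four size classes used in the abstract (boundaries assumed here)
parks["size_class"] = pd.cut(parks["area_ha"],
                             bins=[0, 3, 10, 30, float("inf")],
                             labels=["Small", "Medium", "Large", "Extra Large"])

# Descriptive statistics per size class, standing in for the statistical step
# whose results the study then maps with GIS
print(parks.groupby("size_class", observed=True)["daily_visitors"].describe())
```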

Keywords: physical environment, social environment, urban life, urban parks

Procedia PDF Downloads 402
206 Test Rig Development for Up-to-Date Experimental Study of Multi-Stage Flash Distillation Process

Authors: Marek Vondra, Petr Bobák

Abstract:

Vacuum evaporation is a reliable and well-proven technology with a wide application range which is frequently used in the food, chemical and pharmaceutical industries. Recently, numerous remarkable studies have been carried out to investigate the utilization of this technology in the area of wastewater treatment. One of the most successful applications of the vacuum evaporation principle is connected with seawater desalination. Since the 1950s, multi-stage flash distillation (MSF) has been the leading technology in this field, and it is still irreplaceable in many respects, despite a rapid increase in cheaper reverse-osmosis-based installations in recent decades. MSF plants are conveniently operated in countries with a fluctuating seawater quality and at locations where a sufficient amount of waste heat is available. Nowadays, most MSF research is connected with the utilization of alternative heat sources and with hybridization, i.e. the merging of different types of desalination technologies. Some studies are concerned with basic principles of the static flash phenomenon, but only a few scientists have lately focused on the fundamentals of continuous multi-stage evaporation. Limited measurement possibilities at operating plants and insufficiently equipped experimental facilities may be the reasons. The aim of the presented study was to design, construct and test an up-to-date test rig with an advanced measurement system which provides real-time monitoring of all the important operational parameters under various conditions. The whole system consists of a conventionally designed MSF unit with 8 evaporation chambers, a versatile heating circuit for different kinds of feed water (e.g. seawater, waste water), a sophisticated system for acquisition and real-time visualization of all the related quantities (temperature, pressure, flow rate, weight, conductivity, pH, water level, power input), access to a wide spectrum of operational media (salt, fresh and softened water, steam, natural gas, compressed air, electrical energy) and integrated transparent features which enable a direct visual check of selected physical mechanisms (water evaporation in the chambers, water level right before the brine and distillate pumps). Thanks to the adjustable process parameters, it is possible to operate the test unit at the desired operational conditions. This allows researchers to carry out statistical design and analysis of experiments. Valuable results obtained in this manner could be further employed in simulations and process modeling. First experimental tests confirm the correctness of the presented approach and promise interesting outputs in the future. The presented experimental apparatus enables flexible and efficient research of the whole MSF process.
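To make the flashing mechanism behind the rig concrete, the following rough Python sketch applies the idealised per-stage flash balance (the sensible heat released as hot brine flashes down supplies the latent heat of the vapour produced) to an eight-chamber unit; all flow rates and temperatures are assumed illustrative values, not the rig's actual operating parameters.

```python
# Idealised flash balance for a multi-stage flash (MSF) train: in each stage the
# brine flashes down by dT and the released sensible heat evaporates a small
# distillate stream. All numbers below are illustrative assumptions, not
# parameters of the test rig described in the abstract.

cp_brine = 4.0e3      # J/(kg*K), approximate specific heat of brine
h_fg = 2.33e6         # J/kg, approximate latent heat of vaporisation around 70 degC
m_brine = 0.5         # kg/s, assumed recirculating brine flow
t_top = 70.0          # degC, assumed top brine temperature
t_last = 38.0         # degC, assumed temperature in the last (8th) stage
n_stages = 8          # number of evaporation chambers, as in the rig

dT_stage = (t_top - t_last) / n_stages   # flash-down per stage

total_distillate = 0.0
t = t_top
for stage in range(1, n_stages + 1):
    # vapour flashed off per stage (neglecting the small loss of brine mass)
    d_stage = m_brine * cp_brine * dT_stage / h_fg
    total_distillate += d_stage
    t -= dT_stage
    print(f"stage {stage}: ~{d_stage*3600:.1f} kg/h distillate, brine leaves at {t:.1f} degC")

print(f"total: ~{total_distillate*3600:.1f} kg/h")
```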

Keywords: design of experiment, multi-stage flash distillation, test rig, vacuum evaporation

Procedia PDF Downloads 365
205 Integrations of the Instructional System Design for Students Learning Achievement Motives and Science Attitudes with Stem Educational Model on Stoichiometry Issue in Chemistry Classes with Different Genders

Authors: Tiptunya Duangsri, Panwilai Chomchid, Natchanok Jansawang

Abstract:

This research study investigated the educational decisions that must be made about which parts of chemistry should be passed on to future generations as obligatory content for all members of a chemistry class, and for students preparing themselves for specialized study. Descriptions of instructional design are provided and recent criticisms are discussed. The study offers an outline of an integrative framework for the description of information, in which the instructional design model gives structure to negotiating a semblance of conscious understanding. The aims of this study were to describe the instructional design model and to compare, between student genders, its effects on learning achievement motives, science attitudes and logical thinking abilities under the STEM educational model, with a sample of 18 students at the 11th grade level in Mahawichanukul School selected with the cluster random sampling technique. The chemistry learning environment was administered with the STEM education method. Five instructional lesson plans on the stoichiometry issue were developed as the teaching innovation, and the 30-item Logical Thinking Test (LTT) with 5 scales, namely Inference, Recognition of Assumptions, Deduction, Interpretation and Evaluation, was used. Students' perceptions were assessed with the Test of Chemistry-Related Attitude (TOCRA) to measure their science attitudes toward chemistry. Content validity was checked with the Index of Item-Objective Congruence (IOC) by five expert specialist educators in the two target chemistry classrooms in STEM education, and the E1/E2 efficiency of the instructional process was 84.05/81.42, higher than the 80/80 standard criterion. Students' learning achievement motives with the STEM educational model on the stoichiometry issue differed between genders at the .05 level of significance. Regarding the associations between students' learning achievement motives on their post-test outcomes and their logical thinking abilities, the predictive efficiency (R²) values indicate that 69% and 70% of the variance was accounted for in the male and female student groups, respectively. The predictive efficiency (R²) values likewise indicate that 73% and 74% of the variance in the male and female student groups was associated with their science attitudes toward chemistry. Students' perceptions of their chemistry classroom learning environment and their science attitudes toward chemistry, measured with the MCI and TOCRA, were statistically significantly associated; the predictive efficiency (R²) values indicate that 72% and 74% of the variance was accounted for in the male and female student groups, respectively. It is suggested that supporting chemistry and science teachers in addressing complex teaching and learning issues related to instructional design in science, technology, engineering and mathematics (STEM), and helping them to develop, teach and assess content, are important strategies with a focus on the STEM education instructional method.
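For orientation, the E1/E2 efficiency criterion and the Index of Item-Objective Congruence (IOC) referred to above are conventionally computed as in the short Python sketch below; the scores and expert ratings shown are hypothetical, not the study's raw data.

```python
# Conventional computation of the E1/E2 efficiency criterion and the Index of
# Item-Objective Congruence (IOC) as typically defined in instructional-design
# research; the scores below are hypothetical, not the study's data.

def efficiency(scores, max_score):
    """Mean score expressed as a percentage of the maximum (E1 or E2)."""
    return 100.0 * sum(scores) / (len(scores) * max_score)

formative = [41, 44, 38, 45, 40]   # hypothetical in-lesson (formative) scores out of 50
summative = [39, 43, 37, 44, 41]   # hypothetical post-test (summative) scores out of 50
E1 = efficiency(formative, 50)
E2 = efficiency(summative, 50)
print(f"E1/E2 = {E1:.2f}/{E2:.2f} (criterion: at least 80/80)")

def ioc(ratings):
    """IOC = sum of expert ratings (+1 congruent, 0 unsure, -1 incongruent) / N experts."""
    return sum(ratings) / len(ratings)

print("IOC =", ioc([+1, +1, 0, +1, +1]))   # an IOC of 0.5 or more is usually taken as acceptable
```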

Keywords: development, the instructional design model, students learning achievement motives, science attitudes with STEM educational model, stoichiometry issue, chemistry classes, genders

Procedia PDF Downloads 255
204 Open Science Philosophy, Research and Innovation

Authors: C. Ardil

Abstract:

Open Science translates the understanding and application of various theories and practices in open science philosophy, systems, paradigms and epistemology. Open Science originates with the premise that universal scientific knowledge is a product of collective scholarly and social collaboration involving all stakeholders, and that knowledge belongs to the global society. Scientific outputs generated by public research are a public good that should be available to all at no cost and without barriers or restrictions. Open Science has the potential to increase the quality, impact and benefits of science and to accelerate the advancement of knowledge by making it more reliable, more efficient and accurate, better understood by society and responsive to societal challenges; it also has the potential to enable growth and innovation through the reuse of scientific results by all stakeholders at all levels of society, and ultimately to contribute to the growth and competitiveness of global society. Open Science is a global movement to improve accessibility to and reusability of research practices and outputs. In its broadest definition, it encompasses open access to publications, open research data and methods, open source, open educational resources, open evaluation, and citizen science. The implementation of open science provides an excellent opportunity to renegotiate the social roles and responsibilities of publicly funded research and to rethink the science system as a whole. Open Science is the practice of science in such a way that others can collaborate and contribute, where research data, lab notes and other research processes are freely available, under terms that enable reuse, redistribution and reproduction of the research and its underlying data and methods. Open Science represents a novel systematic approach to the scientific process, shifting from the standard practice of publishing research results in scientific publications towards sharing and using all available knowledge at an earlier stage in the research process, based on cooperative work and the diffusion of scholarly knowledge with no barriers or restrictions. Open Science refers to efforts to make the primary outputs of publicly funded research (publications and research data) publicly accessible in digital format with no limitations. Open Science is about extending the principles of openness to the whole research cycle, fostering sharing and collaboration as early as possible, thus entailing a systemic change to the way science and research are done. Open Science is the ongoing transition in how research is carried out, disseminated, deployed, and transformed to make scholarly research more open, global, collaborative, creative and closer to society. Open Science involves various movements aiming to remove the barriers to sharing any kind of output, resource, method or tool, at any stage of the research process. Open Science embraces open access to publications, research data, source software, collaboration, peer review, notebooks, educational resources, monographs, citizen science, and research crowdfunding. The recognition and adoption of open science practices, including open science policies that increase open access to scientific literature and encourage data and code sharing, is increasing within the open science philosophy.
Revolutionary open science policies are motivated by ethical, moral or utilitarian arguments, such as the right of access to the digital research literature, open source research and the accumulation of scientific data, research indicators, transparency in academic practice, and reproducibility. Open science philosophy is adopted primarily to demonstrate the benefits of open science practices. Researchers also use open science applications to their own advantage: to receive more offers, increase citations, and attract media attention, potential collaborators, career opportunities, donations and funding. Within the open science philosophy, open data findings are evidence that open science practices provide significant benefits to researchers in research creation, collaboration, communication and evaluation compared with more traditional closed science practices. Open science also raises concerns, such as the rigor of peer review, practical matters such as financing and career development, and the sacrifice of author rights. Researchers are therefore recommended to implement open science research within the framework of existing academic evaluation and incentives. Accordingly, open science research issues are addressed in the areas of publishing, financing, collaboration, resource management and sharing, career development, and the discussion of open science questions and conclusions.

Keywords: Open Science, Open Science Philosophy, Open Science Research, Open Science Data

Procedia PDF Downloads 107
203 The South African Polycentric Water Resource Governance-Management Nexus: Parlaying an Institutional Agent and Structured Social Engagement

Authors: J. H. Boonzaaier, A. C. Brent

Abstract:

South Africa, a water-scarce country, experiences the phenomenon that its life-supporting natural water resources are seriously threatened by the very users that are totally dependent on them. South Africa is globally applauded for having some of the best and most progressive water laws and policies. There are, however, growing concerns regarding the deterioration of natural water resource quality and a critical void in the management of natural resources and compliance with policies, due to increasing institutional uncertainties and failures. These are in accordance with the concerns of many South African researchers and practitioners who call for a change in paradigm from talk to practice and a more constructive, practical approach to governance challenges in the management of water resources. A qualitative theory-building case study, through longitudinal action research, was conducted from 2014 to 2017. The research assessed whether a strategically positioned institutional agent can be parlayed to facilitate and execute water resource management (WRM) at catchment level by engaging multiple stakeholders in a polycentric setting. Through a critical realist approach, a distinction was made between ex ante self-deterministic human behaviour in the realist realm and ex post governance-management in the constructivist realm. A congruence analysis, including Toulmin's method of argumentation analysis, was utilised. The study evaluated the unique case of a self-steering local water management institution, the Impala Water Users Association (WUA) in the Pongola River catchment in the northern part of the KwaZulu-Natal Province of South Africa. Exploiting prevailing water resource threats, it expanded its ancillary functions from 20,000 to 300,000 ha. Embarking on WRM activities, it addressed natural water system quality assessments, social awareness, knowledge support, and threats such as soil erosion, waste and effluent discharge into water systems, coal mining, and water security dimensions, through structured engagement with 21 different catchment stakeholders. By implementing a proposed polycentric governance-management model on a catchment scale, the WUA managed to fill the void. It developed a foundation and capacity to protect the resilience of the natural environment that is critical for freshwater resources, in order to ensure the long-term water security of the Pongola River basin. Further work is recommended on appropriate statutory delegations, mechanisms of sustainable funding, sufficient penetration of knowledge to local levels to catalyse behaviour change, incentivised support from professionals, back-to-back expansion of WUAs to alleviate scale and cost burdens, and the creation of catchment data monitoring and compilation centres.

Keywords: institutional agent, water governance, polycentric water resource management, water resource management

Procedia PDF Downloads 115
202 Evaluation of Cryoablation Procedures in Treatment of Atrial Fibrillation from 3 Years' Experiences in a Single Heart Center

Authors: J. Yan, B. Pieper, B. Bucsky, B. Nasseri, S. Klotz, H. H. Sievers, S. Mohamed

Abstract:

Cryoablation is increasingly applied for the interventional treatment of paroxysmal (PAAF) or persistent atrial fibrillation (PEAF). In cardiac surgery, this procedure is often combined with coronary artery bypass grafting (CABG) and valve operations. Three different methods are feasible in this context, differing in extent and mechanism: lone left atrial cryoablation, Cox-Maze IV and Cox-Maze III, all performed in our heart center. 415 patients (68 ± 0.8 years, 68.2% male) with pre-existing atrial fibrillation who primarily required either coronary or valve operations were enrolled and divided into 3 matched groups according to the deployed procedure: CryoLA group (cryoablation of the lone left atrium, n=94), Cox-Maze IV group (n=93) and Cox-Maze III group (n=8). All patients additionally received closure of the left atrial appendage (LAA) and regularly underwent ambulatory follow-up assessments over three years (3, 6, 9, 12, 18, 24, 30 and 36 months). The burden of atrial fibrillation was assessed directly by means of a cardiac monitor (Reveal XT, Medtronic) or by 3-day Holter electrocardiogram. The frequency of AF attacks and their circadian patterns were systematically analyzed. Furthermore, anticoagulants and regular rate-/rhythm-controlling medications were evaluated and listed in terms of rate-control and rhythm-control regimens. Concerning PAAF treatment, the Cox-Maze IV procedure provided a therapeutically acceptable effect comparable to lone left atrium (LA) cryoablation (5.25 ± 5.25% vs. 10.39 ± 9.96% AF burden, p > 0.05). Interestingly, the Cox-Maze III method presented a better short-term effect in PEAF therapy in comparison to lone cryoablation of the LA and Cox-Maze IV (0.25 ± 0.23% vs. 15.31 ± 5.99% and 9.10 ± 3.73% AF burden within the first year, p < 0.05). However, this therapeutic advantage was lost during the ongoing follow-up (26.65 ± 24.50% vs. 8.33 ± 8.06% and 15.73 ± 5.88% in the third follow-up year). In this way, lone LA cryoablation established its antiarrhythmic efficacy, and 69.5% of patients were released from vitamin K antagonists, while Cox-Maze IV liberated 67.2% of patients from continuous anticoagulant medication. For all 3 procedures, AF recurrences mostly presented as attacks of less than 60 min duration (p > 0.05). With respect to the circadian distribution of recurrence attacks, weighted by the ongoing follow-ups, lone LA cryoablation achieved and stabilized its antiarrhythmic effect over time, which was especially observed in the treatment of PEAF, while the antiarrhythmic effects of Cox-Maze IV and III weakened progressively. This phenomenon was likewise evident for the circadian rhythm of recurring AF attacks. Furthermore, the strategy of rate control was applied much more often than rhythm control to support and maintain the therapeutic successes obtained. Based on the experience in our heart center, lone LA cryoablation presented effects equivalent to the Cox-Maze IV and III procedures in the treatment of AF. These therapeutic successes were especially evident in patients suffering from persistent AF (PEAF). Additional supportive strategies such as a rate-control regimen should be initiated and implemented according to appropriate criteria to improve the therapeutic effects of cryoablation.
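AF burden, as reported by insertable cardiac monitors such as the Reveal XT, is conventionally the percentage of the monitored period spent in atrial fibrillation; the minimal Python sketch below illustrates that calculation with hypothetical episode durations, not patient data from this study.

```python
from datetime import timedelta

# AF burden as conventionally reported by insertable cardiac monitors:
# percentage of the monitored period spent in atrial fibrillation.
# The episode durations below are hypothetical, not patient data from the study.

monitoring_period = timedelta(days=90)
episodes = [timedelta(minutes=45), timedelta(minutes=20),
            timedelta(hours=3), timedelta(minutes=55)]

time_in_af = sum(episodes, timedelta())
burden_pct = 100.0 * time_in_af / monitoring_period
short_attacks = sum(1 for e in episodes if e < timedelta(minutes=60))

print(f"AF burden: {burden_pct:.2f}% of monitored time")
print(f"{short_attacks}/{len(episodes)} recurrences shorter than 60 min")
```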

Keywords: AF-burden, atrial fibrillation, cardiac monitor, COX MAZE, cryoablation, Holter, LAA

Procedia PDF Downloads 174
201 Regulating the Ottomans on Turkish Television and the Making of Good Citizens

Authors: Chien Yang Erdem

Abstract:

This paper takes up the proliferating historical dramas and children’s programs featuring the Ottoman-Islamic legacy on Turkish television as a locus where processes of subjectification take place. A critical analysis of this emergent cultural phenomenon reveals an alliance of neoliberal and neoconservative political rationalities on the basis of which the Turkish media is restructured to transform society. The existing debates have focused on how the Ottoman historical dramas manifest the Justice and Development Party’s (Adalet ve Kalkınma Partisi) neo-Ottomanist ideology and foreign policy. However, this approach tends to overlook the more complex relationship between the media, government, and society. Employing Michel Foucault’s notion of 'technologies of the self,' this paper aims to examine the governing practices that are deployed to regulate the media and to transform individual citizens into governable subjects in contemporary Turkey. First, through a brief discussion of the recent development of the Turkish media towards an authoritarian model, the paper suggests that the relation between the Ottoman television drama and the political subject in question cannot be adequately examined without taking into account the force of the market. Second, by focusing on the managerial restructuring of the Turkish Television and Radio Corporation (Türkiye Radyo ve Televizyon Kurumu), the paper aims to illustrate the rationale and process through which the Turkish media sector is transformed into an integral part of the free market where the government becomes a key actor. The paper contends that this new sphere of the free market is organized in a way that enables direct interference by the government and divides media practitioners and consumers into opposing categories through their own participation in the media market. On the one hand, a 'free subject' is constituted based on the premise that the market is a sphere where individuals are obliged to exercise their right to freedom (of choice, lifestyle, and expression). On the other hand, this 'free subject' is increasingly subjugated to such disciplinary practices as censorship for being on the wrong side of the government. Finally, the paper examines the relation between the restructured Turkish media market and the proliferation of Ottoman television drama in the 2010s. The study maintains that the reorganization of the media market has produced a condition where the private sector is encouraged to take an active role in reviving Turkey’s Ottoman-Islamic cultural heritage and promulgating moral-religious values. Paying specific attention to the controversial case of Magnificent Century (Muhteşem Yüzyıl) in contrast with TRT’s Ottoman historical drama and children’s programs, the paper aims to identify the ways in which individual citizens are directed to conduct themselves as a virtuous citizenry. It is through the double movement between the governing practices associated with the media market and those concerning the making of a 'conservative generation' that a subject-citizenry of the new Turkey is constituted.

Keywords: neoconservatism, neoliberalism, ottoman historical drama, technologies of the self, Turkish television

Procedia PDF Downloads 113
200 A Socio-Spatial Analysis of Financialization and the Formation of Oligopolies in Brazilian Basic Education

Authors: Gleyce Assis Da Silva Barbosa

Abstract:

In recent years, we have witnessed a vertiginous growth of large education companies. Daughters of national and international capital, these companies expand both through consolidated physical networks, in the form of branches spread across the territory, and through institutional networks, such as business networks formed through mergers, acquisitions, the creation of new companies, and influence. They do this by incorporating small, medium and large schools and universities, teaching systems and other products and services. They are also able to weave their webs, directly or indirectly, in philanthropic circles, limited partnerships, family businesses and even in public education, through various mechanisms of outsourcing, privatization and commercialization of products for the sector. Although the growth of these groups in basic education seems to be a recent phenomenon in peripheral countries such as Brazil, their diffusion is closely linked to higher education conglomerates and to other sectors of the economy forming oligopolies, which began to expand in the 1990s with strong state support and through political reforms that redefined the state's role, transforming it into a fundamental agent in the formation of guidelines to boost the incorporation of neoliberal logic. This expansion occurred through the objectification of education, commodifying it and transforming students into consumer clients. Financial power combined with the neoliberalization of state public policies allowed the profusion of social exclusion, the increase in the number of individuals without access to basic services, deindustrialization, automation, capital volatility and the indetermination of the economy; in addition, this process causes capital to be valued and devalued at rates never seen before, which together generates various impacts, such as the precarization of work. Understanding the connection between these processes, which drive the economy, allows us to see their consequences for labor relations and for the territory. In this sense, it is necessary to analyze the geographic-economic context and the role of the agents facilitating this process, which can give us clues about the ongoing transformations and the directions of education in the national and even international scenario, since this process is linked to the multiple scales of financial globalization. Therefore, the present research has the general objective of analyzing the socio-spatial impacts of financialization and the formation of oligopolies in Brazilian basic education. The methodology consisted of a survey of laws, data and public policies on the subject, together with information about global and national companies operating in Brazilian basic education available on their investor relations websites, in addition to mapping the expansion of educational oligopolies using public data on the location of schools. With this, the research intends to provide information about the ongoing commodification process in the country and to discuss the consequences of the oligopolization of education, considering the impacts that financialization can bring to teaching work.

Keywords: financialization, oligopolies, education, Brazil

Procedia PDF Downloads 39
199 Studying Language of Immediacy and Language of Distance from a Corpus Linguistic Perspective: A Pilot Study of Evaluation Markers in French Television Weather Reports

Authors: Vince Liégeois

Abstract:

Language of immediacy and distance: Within their discourse theory, Koch & Oesterreicher establish a distinction between a language of immediacy and a language of distance. The former refers to those discourses which are oriented more towards a spoken norm, whereas the latter entails discourses oriented towards a written norm, regardless of whether they are realised phonically or graphically. This means that an utterance can be realised phonically but oriented more towards the written language norm (e.g., a scientific presentation or eulogy) or realised graphically but oriented towards a spoken norm (e.g., a scribble or chat messages). Research desiderata: The methodological approach of Koch & Oesterreicher has often been criticised for not providing a corpus-linguistic methodology, which makes it difficult to work with quantitative data or address large text collections within this research paradigm. Consequently, the Koch & Oesterreicher approach has difficulties gaining ground in those research areas which rely more on corpus-linguistic research models, like text linguistics and LSP research. A combinatory approach: Accordingly, we want to establish a combinatory approach with corpus-based linguistic methodology. To this end, we propose to (i) include data about the context of an utterance (e.g., monologicity/dialogicity, familiarity with the speaker) – which were called “conditions of communication” in the original work of Koch & Oesterreicher – and (ii) correlate the linguistic phenomenon at the centre of the inquiry (e.g., evaluation markers) with a group of linguistic phenomena deemed typical for either distance- or immediacy-language. Based on these two parameters, linguistic phenomena and texts can then be mapped onto an immediacy-distance continuum. Pilot study: To illustrate the benefits of this approach, we conduct a pilot study on evaluation phenomena in French television weather reports, a form of domain-sensitive discourse which has often been cited as an example of a “text genre”. Within this text genre, we look at so-called “evaluation markers”, e.g., fixed strings like bad weather, stifling hot, and “no luck today!”. These evaluation markers help to communicate the coming weather situation to the lay audience but have not yet been studied within the Koch & Oesterreicher research paradigm. Accordingly, we want to determine whether these evaluation markers are more typical of those weather reports which tend more towards immediacy or of those which tend more towards distance. To this aim, we collected a corpus with different kinds of television weather reports, e.g., as part of the news broadcast, including dialogue. The evaluation markers themselves are studied according to the methodology explained above, by correlating them with (i) metadata about the context and (ii) linguistic phenomena characterising immediacy-language: repetition, deixis (personal, spatial, and temporal), a freer choice of tense and right-/left-dislocation. Results: Our results indicate that evaluation markers are more dominantly present in those weather reports tending towards immediacy-language. Based on the methodology established above, we have gained more insight into the working of evaluation markers in the domain-sensitive text genre of (television) weather reports. For future research, it will be interesting to determine whether these evaluation markers are also typical of immediacy-oriented language in other domain-sensitive discourses.
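The correlation step proposed above could be prototyped roughly as in the following Python sketch, which derives a crude per-report immediacy score from counts of the named features and correlates it with evaluation-marker frequency; the counts are invented placeholders, not corpus data from the study.

```python
from statistics import correlation  # Pearson correlation, available in Python 3.10+

# Hypothetical per-report counts, normalised per 100 tokens; the feature names
# follow the abstract (repetition, deixis, dislocation), but the numbers are
# invented placeholders, not corpus data from the study.
reports = [
    {"repetition": 4, "deixis": 9, "dislocation": 2, "eval_markers": 6},
    {"repetition": 1, "deixis": 3, "dislocation": 0, "eval_markers": 2},
    {"repetition": 5, "deixis": 11, "dislocation": 3, "eval_markers": 7},
    {"repetition": 2, "deixis": 5, "dislocation": 1, "eval_markers": 3},
]

# A crude immediacy score: sum of the immediacy-language features per report
immediacy = [r["repetition"] + r["deixis"] + r["dislocation"] for r in reports]
markers = [r["eval_markers"] for r in reports]

# Correlation between the immediacy score and evaluation-marker frequency
print(f"r = {correlation(immediacy, markers):.2f}")
```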

Keywords: corpus-based linguistics, evaluation markers, language of immediacy and distance, weather reports

Procedia PDF Downloads 187
198 Assessment of Psychological Needs and Characteristics of Elderly Population for Developing Information and Communication Technology Services

Authors: Seung Ah Lee, Sunghyun Cho, Kyong Mee Chung

Abstract:

Rapid population aging has become a worldwide demographic phenomenon due to rising life expectancy and declining fertility rates. Considering the current rate of population aging, it is projected that Korean society will become a 'super-aged' society within 10 years, in which people aged 65 years or older account for more than 20% of the entire population. In line with this trend, ICT services aimed at helping elderly people improve their quality of life have been suggested. However, existing ICT services mainly focus on supporting health or nursing care and are somewhat limited in meeting the variety of specialized needs and challenges of this population. It has been pointed out that the majority of services have been driven by technology-push policies. Given that the usage of ICT services varies greatly with individuals' socio-economic status (SES) and physical and psychosocial needs, this study systematically categorized the elderly population into sub-groups and identified their needs and characteristics related to ICT usage in detail. First, three assessment criteria (demographic variables including SES, cognitive functioning level, and emotional functioning level) were identified based on previous literature, experts' opinions, and a focus group interview. Second, survey questions for the needs assessment were developed based on the criteria and administered to 600 respondents from a national probability sample. The questionnaire consisted of 67 items concerning demographic information, experience with ICT services and information technology (IT) devices, quality of life, cognitive functioning, etc. As a result of the survey, age (60s, 70s, 80s), education level (college graduates or more, middle and high school, less than primary school) and cognitive functioning level (above the cut-off, below the cut-off) were considered the most relevant factors for categorization, and 18 sub-groups were identified. Finally, the 18 sub-groups were clustered into 3 groups according to the following similarities: computer usage rate, difficulties in using ICT, and familiarity with current or previous job. Group 1 ('active users') included those with high cognitive function and education level in their 60s and 70s. They showed favorable and familiar attitudes toward ICT services and used the services for 'joyful life', 'intelligent living' and 'relationship management'. Group 2 ('potential users'), ranging in age from the 60s to the 80s, with a high level of cognitive function and mostly middle or high school graduates, reported some difficulties in using ICT, and their expectations were lower than in group 1 although their areas of need were similar. Group 3 ('limited users') consisted of people with the lowest education level or cognitive function, and 90% of this group reported difficulties in using ICT. However, group 3 did not differ from group 2 regarding the level of expectation for ICT services, and their main purpose in using ICT was 'safe living'. This study developed a systematic needs assessment tool and identified three sub-groups of elderly ICT users based on multiple criteria. The findings imply that current cognitive function plays an important role in ICT use and in determining needs among the elderly population. Implications and limitations are further discussed.
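The final clustering of the 18 sub-groups into 3 user groups could be sketched as below with k-means on the three similarity dimensions named in the abstract; the profile values are hypothetical placeholders, not the survey results.

```python
import numpy as np
from sklearn.cluster import KMeans

# Each row is one of the 18 sub-groups, described by the three similarity
# dimensions named in the abstract: computer usage rate, reported difficulty
# in using ICT, and familiarity with current/previous job (all on a 0-1 scale).
# The values are hypothetical placeholders, not the survey results.
rng = np.random.default_rng(0)
profiles = np.vstack([
    rng.normal([0.8, 0.2, 0.8], 0.05, size=(6, 3)),   # active-user-like profiles
    rng.normal([0.5, 0.5, 0.6], 0.05, size=(6, 3)),   # potential-user-like profiles
    rng.normal([0.1, 0.9, 0.3], 0.05, size=(6, 3)),   # limited-user-like profiles
])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(profiles)
print(kmeans.labels_)           # cluster assignment of each of the 18 sub-groups
print(kmeans.cluster_centers_)  # average profile of each of the 3 user groups
```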

Keywords: elderly population, ICT, needs assessment, population aging

Procedia PDF Downloads 122
197 Ruta graveolens Fingerprints Obtained with Reversed-Phase Gradient Thin-Layer Chromatography with Controlled Solvent Velocity

Authors: Adrian Szczyrba, Aneta Halka-Grysinska, Tomasz Baj, Tadeusz H. Dzido

Abstract:

Since prehistory, plants have constituted an essential source of biologically active substances in folk medicine. One example of a medicinal plant is Ruta graveolens L. For a long time, Ruta g. herb has been famous for its spasmolytic, diuretic and anti-inflammatory therapeutic effects. The wide spectrum of secondary metabolites produced by Ruta g. includes flavonoids (e.g. rutin, quercetin), coumarins (e.g. bergapten, umbelliferone), phenolic acids (e.g. rosmarinic acid, chlorogenic acid), and limonoids. Unfortunately, the content of these substances is highly dependent on environmental factors like temperature, humidity, or soil acidity; therefore standardization is necessary. There have been many attempts at characterization of various phytochemical groups (e.g. coumarins) of Ruta graveolens using normal-phase thin-layer chromatography (TLC). However, due to the so-called general elution problem, some components usually remained unseparated near the start or the solvent front. Ruta graveolens is therefore a very good model plant. Methanol and petroleum ether extracts from its aerial parts were used to demonstrate the capabilities of a new device for gradient thin-layer chromatogram development. The development of gradient thin-layer chromatograms in the reversed-phase system in conventional horizontal chambers can be disrupted by problems associated with an excessive flux of the mobile phase onto the surface of the adsorbent layer. This phenomenon is most likely caused by significant differences between the surface tensions of the subsequent fractions of the mobile phase. An excessive flux of the mobile phase onto the surface of the adsorbent layer distorts the flow of the mobile phase. The described effect produces unreliable and unrepeatable results, causing blurring and deformation of the substance zones. In the prototype device, the mobile phase solution is delivered onto the surface of the adsorbent layer at a controlled velocity (by a moving pipette driven by a 3D positioning machine). The delivery of the solvent to the adsorbent layer is equal to or lower than that of conventional development. Therefore, chromatograms can be developed at the optimal linear mobile phase velocity. Furthermore, under such conditions there is no excess of eluent solution on the surface of the adsorbent layer, so a higher performance of the chromatographic system can be obtained. Directly feeding the adsorbent layer with eluent also enables convenient continuous gradient elution to be performed, practically without the so-called gradient delay. In the study, unique fingerprints of methanol and petroleum ether extracts of Ruta graveolens aerial parts were obtained with stepwise-gradient reversed-phase thin-layer chromatography. The fingerprints obtained under different chromatographic conditions will be compared, and the advantages and disadvantages of the proposed approach to chromatogram development with controlled solvent velocity will be discussed.
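A stepwise gradient program with controlled delivery, of the kind the prototype performs, might be expressed schematically as in the following Python sketch; the eluent fractions, volumes and delivery rate are assumptions for illustration, not the device's actual control parameters.

```python
# A schematic stepwise gradient program for reversed-phase development,
# expressed as (methanol fraction in the aqueous eluent, volume delivered in mL).
# The fractions, volumes and the target delivery rate are illustrative
# assumptions, not the actual control parameters of the prototype device.

gradient_steps = [(0.40, 0.5), (0.55, 0.5), (0.70, 0.5), (0.85, 0.5), (1.00, 1.0)]

delivery_rate_ml_min = 0.25   # assumed feed rate, chosen so that delivery does not
                              # exceed the uptake capacity of the adsorbent layer

elapsed = 0.0
for methanol_fraction, volume_ml in gradient_steps:
    step_time = volume_ml / delivery_rate_ml_min
    elapsed += step_time
    print(f"{int(methanol_fraction * 100):3d}% MeOH: deliver {volume_ml} mL "
          f"over {step_time:.1f} min (t = {elapsed:.1f} min)")
```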

Keywords: fingerprints, gradient thin-layer chromatography, reversed-phase TLC, Ruta graveolens

Procedia PDF Downloads 264
196 Networked Media, Citizen Journalism and Political Participation in Post-Revolutionary Tunisia: Insight from a European Research Project

Authors: Andrea Miconi

Abstract:

The research will focus on the results of the Tempus European Project eMEDia, dedicated to cross-media journalism. The project is funded by the European Commission and involves four European partners - IULM University, Tampere University, the University of Barcelona, and the Mediterranean network Unimed - and three Tunisian universities – IPSI La Manouba, Sfax and Sousse – along with the Tunisian Ministry for Higher Education and the National Syndicate of Journalists. The focus on the Tunisian condition is basically due to the role played by digital activists in the country's recent history. The research is dedicated to the relationship between political participation, news-making practices and the spread of social media, as it is affecting Tunisian society. As we know, Tunisia during the Arab Spring was widely considered a laboratory for analyzing the use of new technologies for political participation. Nonetheless, the literature about the Arab Spring actually fell short of explaining the genesis of the phenomenon, on the one hand by isolating technologies as a causal factor in the spread of demonstrations, and on the other by analyzing the North African condition through a biased perspective. Nowadays, it is interesting to focus on the consolidation of the information environment three years after the uprisings. What is relevant is that only a close, in-depth analysis of Tunisian society is able to provide an explanation of its history, and namely of the part played by digital media in the overall evolution of the political system. That is why the research is based on different methodologies: a desk stage, interviews, and an in-depth analysis of communication practices. Networked journalism is the condition determined by technological innovation in news-making activities: a condition under which the professional journalist can no longer be considered the only player in the information arena, and new skills must be developed. Along with democratization, nonetheless, so-called citizen journalism is also likely to produce some ambiguous effects, such as the lack of professional standards and the spread of information cascades, which may prove to be particularly dangerous in an evolving media market such as the Tunisian one. This is why, according to the project, a new professional profile must be defined, one which is able to manage this new condition and which can hardly be reduced to the parameters of traditional journalistic work. Rather than simply using new devices for news visualization, communication professionals must also be able to dialogue with all the new players and to accept the decentralized nature of digital environments. This networked nature of news-making seemed to emerge during the Tunisian revolution, when bloggers, journalists, and activists used to retweet each other. Nonetheless, this intensification of communication exchange was inspired by the political climax of the uprising, while all media, by definition, are also supposed to have some effect on people's state of mind, culture and daily life routines. That is why it is worth analyzing the consolidation of these practices in a normal, post-revolutionary situation.

Keywords: cross-media, education, Mediterranean, networked journalism, social media, Tunisia

Procedia PDF Downloads 174
195 Seismic Perimeter Surveillance System (Virtual Fence) for Threat Detection and Characterization Using Multiple ML Based Trained Models in Weighted Ensemble Voting

Authors: Vivek Mahadev, Manoj Kumar, Neelu Mathur, Brahm Dutt Pandey

Abstract:

Perimeter guarding and protection of critical installations require prompt intrusion detection and assessment so that effective countermeasures can be taken. Currently, visual and electronic surveillance are the primary methods used for perimeter guarding. These methods can be costly and complicated, requiring careful planning according to the location and terrain. Moreover, they often struggle to detect stealthy and camouflaged insurgents. The objective of the present work is to devise a surveillance technique using seismic sensors that overcomes the limitations of existing systems. The aim is to improve intrusion detection, assessment, and characterization by utilizing seismic sensors. Most similar systems have only two types of intrusion detection capability, viz. human or vehicle. In our work we could categorize further and identify types of intrusion activity such as walking, running, group walking, fence jumping, tunnel digging and vehicular movements. A virtual fence of 60 meters at GCNEP, Bahadurgarh, Haryana, India, was created by installing four underground geophones at a spacing of 15 meters. The signals received from these geophones are then processed to extract unique seismic signatures called features. Various feature optimization and selection methodologies, such as LightGBM, Boruta, Random Forest, logistic regression, Recursive Feature Elimination, Chi-squared and Pearson ratio, were used to identify the best features for training the machine learning models. The trained models were developed using algorithms such as the supervised support vector machine (SVM) classifier, kNN, Decision Tree, Logistic Regression, Naïve Bayes, and Artificial Neural Networks. These models were then used to predict the category of events, employing weighted ensemble voting to analyze and combine their results. The models were trained with 1940 training events, and the results were evaluated with 831 test events. It was observed that using weighted ensemble voting increased the efficiency of the predictions. In this study we successfully developed and deployed a virtual fence using geophones. Since these sensors are passive, do not radiate any energy and are installed underground, it is impossible for intruders to locate and nullify them. Their flexibility, quick and easy installation, low cost, hidden deployment and unattended surveillance make such systems especially suitable for critical installations and remote facilities with difficult terrain. This work demonstrates the potential of utilizing seismic sensors to create better perimeter guarding and protection systems using multiple machine learning models in weighted ensemble voting. In this study the virtual fence achieved an intruder detection efficiency of over 97%.
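Weighted ensemble voting of the kind described can be sketched with scikit-learn's VotingClassifier, as below; the synthetic data, the train/test sizes (chosen only to mirror the 1940/831 split) and the per-model weights are illustrative assumptions, not the GCNEP dataset or the study's tuned weights.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the geophone feature vectors: 6 event classes
# (walking, running, group walking, fence jumping, digging, vehicle).
# The data, the train/test sizes and the voting weights are illustrative
# assumptions, not the actual GCNEP dataset or tuned weights.
X, y = make_classification(n_samples=2771, n_features=20, n_informative=12,
                           n_classes=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=1940,
                                                    random_state=0)

ensemble = VotingClassifier(
    estimators=[("svm", SVC(probability=True)),
                ("knn", KNeighborsClassifier()),
                ("tree", DecisionTreeClassifier(random_state=0)),
                ("logreg", LogisticRegression(max_iter=1000)),
                ("nb", GaussianNB())],
    voting="soft",                # combine the models' predicted class probabilities
    weights=[3, 2, 1, 2, 1],      # assumed per-model weights
)
ensemble.fit(X_train, y_train)
print(f"test accuracy: {ensemble.score(X_test, y_test):.3f}")
```

Soft voting with per-model weights lets the stronger classifiers dominate the combined probability estimate while weaker ones still contribute, which is one common way to realise the weighted ensemble voting the abstract describes.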

Keywords: geophone, seismic perimeter surveillance, machine learning, weighted ensemble method

Procedia PDF Downloads 47