Search results for: adult social work
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 21940


850 Inventory and Pollinating Role of Bees (Hymenoptera: Apoidea) on Turnip (Brassica rapa L.) and Radish (Raphanus sativus L.) (Brassicaceae) in the Constantine Area (Algeria)

Authors: Benachour Karima

Abstract:

Pollination is a key factor in crop production, and the presence of insect pollinators, mainly wild bees, is essential for improving yields. In this work, the apoid visitors of two vegetable crops, turnip (Brassica rapa L.) and radish (Raphanus sativus L.) (Brassicaceae), were recorded during the flowering periods of 2003 and 2004 in the Constantine area (36°22’N 06°37’E, 660 m). Observations were conducted in a plot of approximately 308 m² at the Institute of Nutrition, Food and Food Technology (University of Mentouri Brothers). To estimate bee density (per 100 flowers or per m²), seven 1 m² quadrats were defined from the edge of the crop and in the first two rows. From the start of flowering, and every two days thereafter, foraging insects were recorded from 9 am until 5 pm (GMT+1). The purpose of each visit (collecting nectar, pollen, or both) and pollinating efficiency (estimated by the number of flowers visited per minute and the number of positive visits) were noted for the bees most abundant on the flowers. The contribution of pollinating insects was measured by comparing the seed yields of seven plots covered with tulle with those of seven others accessible to pollinators. Four families of Apoidea (Apidae, Halictidae, Andrenidae, and Megachilidae) were observed on the two plants. On turnip, the honeybee was the most common visitor (on average 214 visits/m²), followed by the halictid Lasioglossum mediterraneum, whose visits were less intense (20 individuals/m²). Visits by Andrenidae, represented by several species such as Andrena lagopus, A. flavipes, A. agilissima, and A. rhypara, were episodic. The honeybee collected mainly nectar; its visits were all potentially fertilizing (contact with the stigma) and more frequent (on average 14 flowers/min). L. mediterraneum visited only 5 flowers/min; it mostly collected both products together, and all its visits were also positive.
On radish, the wild bee Ceratina cucurbitina recorded the highest number of visits (on average 6 individuals/100 flowers); the Halictidae, represented mainly by L. mediterraneum, L. malachurum, and L. pauxillum, were less abundant. C. cucurbitina visited on average 10 flowers/min, and all its visits were positive. Visits by Halictidae were less frequent (5-6 flowers/min) and not all fertilizing. The seed yield of Brassica rapa (average number of pods/plant, seeds/pod, and average weight of 1000 seeds) was significantly higher in the presence of pollinators. Similarly, the pods of caged plants gave a percentage of aborted seeds (10.3%) significantly higher than that obtained from free plants (4.12%), and a percentage of malformed seeds (1.9%) significantly higher than that of free plants (0.9%). For radish, seed yields in the presence and absence of insects were almost identical; only the percentage of malformed seeds (3.8%) obtained from the pods of caged plants was significantly higher than that of free plants (1.9%). These results make clear that pollinators, especially bees, are essential for the production and improvement of crop yields, and that it is therefore necessary to protect this increasingly threatened fauna.
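The caged-versus-open yield comparison described above amounts to a two-sample test on plot yields. A minimal sketch in Python, using made-up pods-per-plant counts (the study's actual figures are not reproduced here):

```python
from statistics import mean, stdev

# Hypothetical pods-per-plant counts for the 7 caged and 7 open plots
# (placeholder values, not the study's measurements)
caged = [18, 21, 16, 19, 17, 20, 15]
open_plots = [34, 29, 31, 36, 28, 33, 30]

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    va, vb = stdev(a) ** 2 / len(a), stdev(b) ** 2 / len(b)
    return (mean(a) - mean(b)) / (va + vb) ** 0.5

t = welch_t(open_plots, caged)
print(f"mean caged={mean(caged):.1f}, mean open={mean(open_plots):.1f}, t={t:.2f}")
```

A large positive t here would correspond to the significantly higher yield reported for pollinator-accessible plots.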

Keywords: foraging behavior, honey bee, radish, seed yield, turnip, wild bee

849 Localized Recharge Modeling of a Coastal Aquifer from a Dam Reservoir (Korba, Tunisia)

Authors: Nejmeddine Ouhichi, Fethi Lachaal, Radhouane Hamdi, Olivier Grunberger

Abstract:

Located on the Cap Bon peninsula (Tunisia), the Lebna dam was built in 1987 to counter saltwater intrusion into the coastal aquifer of Korba. The initial intention was to reduce coastal groundwater over-pumping by supplying surface water to a large irrigation system. An unpredicted beneficial effect was recorded: a direct, localized recharge of the coastal aquifer by leakage through the geological material of the southern bank of the lake. The hydrological balance of the reservoir gave an estimate of the annual leakage volume, but the dynamic processes and a sound quantification of recharge inputs are still required to understand the localized effect of the recharge in terms of piezometry and quality. The present work focused on simulating the recharge process to confirm this hypothesis, establish a sound quantification of the water supply to the coastal aquifer, and extend it to multi-annual effects. A spatial frame of 30 km² was used for modeling. Intensive outcrop and geophysical surveys based on 68 electrical resistivity soundings were used to characterize the 3D geometry of the aquifer and the limit of the Plio-Quaternary geological material concerned by the underground flow paths. Permeabilities were determined using 17 pumping tests on wells and piezometers. Six seasonal piezometric surveys of 71 wells around the southern reservoir banks were performed during the 2019-2021 period. Eight monitoring boreholes with high-frequency (15 min) piezometric data were used to examine dynamic aspects. Model boundary conditions were specified using the geophysical interpretations coupled with the piezometric maps. The dam-groundwater flow model was built with Visual MODFLOW software. First, a steady-state calibration based on the first piezometric map of February 2019 was established to estimate the permanent flow related to the different reservoir levels.
Second, piezometric data for the 2019-2021 period were used for transient-state calibration and to confirm the robustness of the model. Preliminary results confirmed the temporal link between the reservoir level and the localized recharge flow, with a strong threshold effect for levels below 16 m a.s.l. The good agreement between the computed flow through recharge cells on the southern banks and the hydrological budget of the reservoir opens the path to future simulation scenarios of the dilution plume driven by the localized recharge. The simulation results also suggest a storage potential of up to 17 mm/year in existing wells, under gravity-feed conditions, during reservoir level rises over the three years of operation. The Lebna dam groundwater flow model thus characterized a spatiotemporal relation between groundwater and surface water.
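The threshold behavior of the localized recharge can be illustrated with a toy Darcy-type leakage calculation. This is a sketch only; all parameter values below are hypothetical placeholders, not the calibrated MODFLOW inputs:

```python
def leakage_flux(reservoir_level, k=1e-5, area=5e4, thickness=20.0,
                 aquifer_head=5.0, threshold=16.0):
    """Darcy-type leakage from reservoir to aquifer, in m^3/s.

    Returns 0 below the observed ~16 m a.s.l. threshold; k (hydraulic
    conductivity, m/s), area (m^2), thickness (m), and aquifer_head (m)
    are illustrative values, not model parameters from the study.
    """
    if reservoir_level < threshold:
        return 0.0
    gradient = (reservoir_level - aquifer_head) / thickness
    return k * area * gradient

print(leakage_flux(15.0))  # below threshold, no recharge
print(leakage_flux(18.0))  # above threshold, flux grows with level
```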

Keywords: leakage, MODFLOW, saltwater intrusion, surface water-groundwater interaction

848 Examining the Impact of De-Escalation Training among Emergency Department Nurses

Authors: Jonathan D. Recchi

Abstract:

Introduction: Workplace violence is a major concern for nurses throughout the United States and a rising occupational health hazard that has been exacerbated by both the COVID-19 pandemic and increasing patient and family member incivility. De-escalation training has been found to be an evidence-based tool that helps emergency department nurses avoid or mitigate high-risk situations that could lead to workplace violence. Many healthcare organizations either do not provide de-escalation training to their staff or provide it only sparingly, such as during new employee orientation, and there is limited research in the literature on the psychological benefits of de-escalation training. Purpose: The purpose of this study is to determine whether there are psychological and organizational advantages to providing emergency department nurses with de-escalation training. Equipping emergency department nurses with the skills essential to de-escalate violent or potentially violent patients may help prevent physical, mental, and/or psychological harm to the nurse resulting from violence and/or threatening acts. The hypothesis is that providing de-escalation training to emergency department nurses will lead to increased nurse confidence in dealing with aggressive patients, increased resiliency, increased professional quality of life, and increased intention to stay with their current organization. This study aims to show that organizations would benefit from providing de-escalation training, on a regular basis, to all nurses operating in high-risk areas. Significance: Demonstrating the psychological benefits of evidence-based de-escalation training can help healthcare organizations retain a more resilient and prepared workforce.
Method: This study uses a pre-experimental, cross-sectional pre-/post-test design with a convenience sample of emergency department registered nurses employed across Jefferson Health Northeast (Jefferson Torresdale, Jefferson Bucks, and Jefferson Frankford). Inclusion criteria are registered nurses who work full- or part-time with 51% or more of their clinical time spent in direct patient care. Excluded from participation are registered nurses in orientation, per-diem nurses, temporary and/or travel nurses, nurses who spend less than 51% of their time in direct patient care, and nurses who have received de-escalation training within the past two years. The study uses the Connor-Davidson Resilience Scale 10 (CD-RISC-10), the Clinician Confidence in Coping with Patient Aggression Scale, the Press Ganey Intention to Stay question, and the Professional Quality of Life Scale. Results: A paired t-test will be used to compare the mean scores of the three scales and one question before and after the intervention, to determine whether there is a statistically significant difference in RN resiliency, confidence in coping with patient aggression, intention to stay, and professional quality of life. Discussion and Conclusions: Upon completion, the outcomes of this intervention will show the importance of providing evidence-based de-escalation training to all nurses operating within the emergency department.
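The planned pre/post comparison is a paired t-test on per-nurse score differences, which can be sketched directly. The scores below are invented placeholders, not study data:

```python
from statistics import mean, stdev

# Hypothetical pre/post CD-RISC-10 resilience scores for the same eight
# nurses (placeholder values only)
pre  = [28, 30, 25, 27, 31, 26, 29, 24]
post = [32, 33, 27, 30, 34, 28, 33, 27]

def paired_t(before, after):
    """Paired t statistic: mean of the per-subject differences divided
    by the standard error of those differences."""
    diffs = [a - b for a, b in zip(after, before)]
    return mean(diffs) / (stdev(diffs) / len(diffs) ** 0.5)

print(f"t = {paired_t(pre, post):.2f}")
```

With n = 8 pairs, the statistic would be compared against a t distribution with 7 degrees of freedom to obtain the p-value.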

Keywords: de-escalation, nursing, emergency department, workplace violence

847 Evotrader: Bitcoin Trading Using Evolutionary Algorithms on Technical Analysis and Social Sentiment Data

Authors: Martin Pellon Consunji

Abstract:

Due to the rise in popularity of Bitcoin and other crypto assets as a store of wealth and speculative investment, there is ever-growing demand for automated trading tools, such as bots, that can gain an advantage over the market. Traditionally, trading in the stock market was done by professionals with years of training who understood patterns and exploited market opportunities for profit. Nowadays, however, a larger portion of market participants are at minimum aided by market-data-processing bots, which can generally generate more stable signals than the average human trader. The rise in trading bot usage can be credited to the inherent advantages bots have over humans: processing large amounts of data, freedom from emotions such as fear or greed, and predicting market prices using past data and artificial intelligence. A growing number of approaches have therefore been brought forward to tackle this task. Their general limitation, however, comes down to the fact that limited historical data do not always determine the future, and that many market participants are still emotion-driven human traders. Moreover, developing markets such as the cryptocurrency space have even less historical data to interpret than most well-established markets. For this reason, some human traders have gone back to tried-and-tested traditional technical analysis tools for exploiting market patterns and simplifying the broad spectrum of data involved in making market predictions. This paper proposes a method that applies neuro-evolution techniques to both sentiment data and the more traditionally human-consumed technical analysis data, in order to obtain a more accurate forecast of future market behavior and to account for the way both automated bots and human traders affect the market prices of Bitcoin and other cryptocurrencies.
The approach uses evolutionary algorithms to automatically develop increasingly improved populations of bots which, using the latest inflows of market analysis and sentiment data, evolve to efficiently predict future market price movements. The effectiveness of the approach is validated by testing the system in a simulated historical trading scenario and a live Bitcoin market trading scenario, and by testing its robustness in other cryptocurrency and stock market scenarios. Experimental results over a 30-day period show that this method outperformed the buy-and-hold strategy by over 260% in terms of net profit, even after accounting for standard trading fees.
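The evolutionary loop described above, populations of bots scored on a combined technical-plus-sentiment signal with selection and mutation, can be sketched on synthetic data. Nothing here reflects the paper's actual implementation, signal definitions, or market data; it only illustrates the evolve-select-mutate cycle:

```python
import random

random.seed(0)

# Toy "market": each observation is (technical_signal, sentiment_signal,
# next_return); the linear relation and noise level are invented.
data = [(random.uniform(-1, 1), random.uniform(-1, 1), 0.0) for _ in range(200)]
data = [(t, s, 0.6 * t + 0.4 * s + random.gauss(0, 0.1)) for t, s, _ in data]

def fitness(weights):
    """Profit of a bot that goes long/short by the sign of its combined signal."""
    wt, ws = weights
    return sum(r if wt * t + ws * s > 0 else -r for t, s, r in data)

def evolve(pop_size=30, generations=40):
    pop = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 3]                      # selection
        pop = elite + [(w[0] + random.gauss(0, 0.2),      # mutation
                        w[1] + random.gauss(0, 0.2))
                       for w in random.choices(elite, k=pop_size - len(elite))]
    return max(pop, key=fitness)

best = evolve()
print("best weights:", best, "profit:", round(fitness(best), 2))
```

A real system would replace the two toy signals with the technical indicators and sentiment scores the paper mentions, and would also evolve network weights (neuro-evolution) rather than two scalars.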

Keywords: neuro-evolution, Bitcoin, trading bots, artificial neural networks, technical analysis, evolutionary algorithms

846 Investigation of Yard Seam Workings for the Proposed Newcastle Light Rail Project

Authors: David L. Knott, Robert Kingsland, Alistair Hitchon

Abstract:

The proposed Newcastle Light Rail is a key part of the revitalisation of Newcastle, NSW, and will provide a frequent and reliable travel option through the city centre, running from Newcastle Interchange at Wickham to Pacific Park in Newcastle East, a total of 2.7 kilometres in length. Approximately one-third of the route, along Hunter and Scott Streets, is subject to potential shallow underground mine workings. The extent of mining and the seams mined are unclear. Convicts mined the Yard Seam and the overlying Dudley (Dirty) Seam in Newcastle sometime between 1800 and 1830, and the Australian Agricultural Company mined the Yard Seam in the alignment area from about 1831 to the 1860s. The seam was about 3 feet (0.9 m) thick and was therefore known as the Yard Seam. Mine maps do not exist for the workings in the area of interest, and it was unclear whether both seams or just one had been mined. Information from 1830s geological mapping and other data showing shaft locations was used along Scott Street, and information from the 1908 Royal Commission was used along Hunter Street, to develop an investigation program. In addition, mining had been encountered at several sites south of the alignment at depths of about 7 m to 25 m. Based on the anticipated depths of mining, it was considered prudent to assess the potential for sinkhole development along the proposed alignment and realigned underground utilities, and to obtain approval for the work from Subsidence Advisory NSW (SA NSW). The assessment consisted of a desktop study followed by a subsurface investigation. Four boreholes were drilled along Scott Street and three along Hunter Street, using HQ coring techniques in the rock. The placement of boreholes was complicated by the presence of utilities in the roadway and by traffic constraints. All boreholes encountered the Yard Seam, with conditions varying from unmined coal to an open void, indicating the presence of mining.
The geotechnical information obtained from the boreholes was expanded using various downhole techniques, including a borehole camera, borehole sonar, and downhole geophysical logging. The camera provided views of the rock and helped explain zones of no recovery; timber props were also observed within the void. Borehole sonar was performed in the void and gave an indication of room size as well as the presence of timber props within the room. Downhole geophysical logging was performed in the boreholes to measure density, natural gamma, and borehole deviation. These data helped confirm that all the mining was in the Yard Seam and that the overlying Dudley Seam had been eroded in the past over much of the alignment. In summary, the assessment allowed the potential for sinkhole subsidence to be evaluated and a mitigation approach to be developed, leading to conditional approval by SA NSW. It also confirmed the presence of mining in the Yard Seam, the depth to the seam and the mining conditions, and indicated that subsidence did not appear to have occurred in the past.

Keywords: downhole investigation techniques, drilling, mine subsidence, yard seam

845 Mathematical Modeling of Avascular Tumor Growth and Invasion

Authors: Meitham Amereh, Mohsen Akbari, Ben Nadler

Abstract:

Cancer has been recognized as one of the most challenging problems in biology and medicine. Aggressive tumors are a lethal type of cancer characterized by high genomic instability, rapid progression, invasiveness, and therapeutic resistance, and their behavior involves complicated molecular biology and consequential dynamics. Although tremendous effort has been devoted to developing therapeutic approaches, there is still a huge need for new insights into the dark aspects of tumors. As one of the key requirements for better understanding the complex behavior of tumors, mathematical modeling, and continuum physics in particular, plays a pivotal role. Mathematical modeling can provide quantitative predictions of biological processes and help interpret the complicated physiological interactions in the tumor microenvironment. The pathophysiology of aggressive tumors is strongly affected by extracellular cues, such as the stresses produced by mechanical forces between the tumor and the host tissue. During tumor progression, the growing mass displaces the surrounding extracellular matrix (ECM), and depending on the tissue stiffness, stress accumulates inside the tumor. The produced stress can influence the tumor by breaking adherent junctions. During this process, the tumor stops rapid proliferation and begins to remodel its shape to preserve the homeostatic equilibrium state. To achieve this, the tumor in turn upregulates epithelial-to-mesenchymal transition-inducing transcription factors (EMT-TFs). These EMT-TFs are involved in various signaling cascades, which are often associated with tumor invasiveness and malignancy. In this work, we modeled the tumor as a growing hyperelastic mass and investigated the effects of mechanical stress from the surrounding ECM on tumor invasion. The invasion is modeled as a volume-preserving inelastic evolution. In this framework, principal balance laws are considered for tumor mass, linear momentum, and diffusion of nutrients.
The mechanical interaction between the tumor and the ECM is modeled using a Ciarlet constitutive strain energy function, and the dissipation inequality is utilized to model the volumetric growth rate. System parameters, such as the rates of nutrient uptake and cell proliferation, were obtained experimentally. To validate the model, human glioblastoma multiforme (hGBM) tumor spheroids were embedded in a Matrigel/alginate composite hydrogel and injected into a microfluidic chip to mimic the tumor's natural microenvironment. The invasion structure was analyzed by imaging the spheroids over time, and the expression of transcription factors involved in invasion was measured by immunostaining the tumor. The volumetric growth, stress distribution, and inelastic evolution of the tumors were predicted by the model. Results showed that the level of invasion correlates directly with the level of predicted stress within the tumor. Moreover, the invasion length measured by fluorescence imaging was shown to be related to the inelastic evolution of the tumors obtained from the model.
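As a toy counterpart to the growth-versus-stress behavior described above, a one-variable volume equation with a stress-dependent inhibition term can be integrated numerically. This is an illustrative sketch, not the paper's continuum balance-law model, and every parameter value is hypothetical:

```python
def grow(v0=0.01, rate=0.5, v_max=1.0, stress_factor=0.3, dt=0.01, steps=2000):
    """Euler-integrate dV/dt = rate*V*(1 - V/v_max) - stress_factor*V^2.

    Logistic growth capped by nutrient availability (v_max) and further
    suppressed by a term standing in for accumulated mechanical stress.
    """
    v = v0
    for _ in range(steps):
        v += dt * (rate * v * (1 - v / v_max) - stress_factor * v * v)
    return v

print(f"final relative volume: {grow():.3f}")
```

With these numbers the volume settles below the nutrient-limited cap v_max, mimicking the growth suppression that stress from a stiff surrounding matrix would impose.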

Keywords: cancer, invasion, mathematical modeling, microfluidic chip, tumor spheroids

844 Mapping the Early History of Common Law Education in England, 1292-1500

Authors: Malcolm Richardson, Gabriele Richardson

Abstract:

This paper illustrates how historical problems can be studied successfully using GIS, even in cases where data, in the modern sense, are fragmentary. The overall problem under investigation is how the early (1300-1500) English schools of Common Law moved from apprenticeship training in random individual London inns, run in part by clerks of the royal chancery, to what is widely called 'the Third University of England': a recognized system of independent but connected legal inns. This paper focuses on the preparatory legal inns, called the Inns of Chancery, rather than the senior (and still existing) Inns of Court. The immediate problem studied here is how the junior legal inns were organized, staffed, and located from 1292 to about 1500, and what maps tell us about the role of the chancery clerks as managers of legal inns. The authors first uncovered the names of all chancery clerks of the period, most of them unrecorded in histories, from archival sources in the National Archives, Kew. They then matched the names with London property leases. Using ArcGIS, the legal inns and their owners were plotted on a series of maps covering the period 1292 to 1500. The results show a distinct pattern of ownership of the legal inns and suggest a narrative that helps explain why the Inns of Chancery became serious centers of learning during the fifteenth century. In brief, lower-ranking chancery clerks, always looking for additional income, discovered by 1370 that legal inns could be profitable. Since chancery clerks were intimately involved with writs and other legal forms, and since the chancery itself had a long-standing training system, these clerks opened their own legal inns to train fledgling lawyers, estate managers, and scriveners. The maps clearly show growth patterns of ownership by the chancery clerks, for both legal inns and other London properties, in the areas of Holborn and The Strand between 1370 and 1417.
However, the maps also show that a royal ordinance of 1417 forbidding chancery clerks to live with lawyers, law students, and other non-chancery personnel had an immediate effect: leases of properties in that area of London by chancery clerks simply stop after 1417. The long-term importance of the patterns shown in the maps is that while the presence of chancery clerks in the legal inns likely created a more coherent education system, their removal forced the legal profession, suddenly without a hostelry managerial class, to professionalize the inns and legal education themselves. Given the number and social status of members of the legal inns, the effect on English education was to free legal education from the limits of chancery clerk education (the clerks were not practicing common lawyers) and to enable it to become broader in theory and practice: in fact, a kind of 'finishing school' for the governing (if not noble) class.

Keywords: GIS, law, London, education

843 Morphological Transformation of Traditional Cities: The Case Study of the Historic Center of the City of Najaf

Authors: Sabeeh Lafta Farhan, Ihsan Abbass Jasim, Sohaib Kareem Al-Mamoori

Abstract:

This study addresses the transformation of urban structures and how this transformation affects the character of traditional cities, which constitutes the research problem. The research therefore aims to study the characteristics of the urban structure and the features of morphological transformation in the centers of traditional cities, and to look for means and methods to preserve the character of those cities. Cities are not merely locations inhabited by large numbers of people; they are political and legal entities, in addition to hosting the economic activities that distinguish them, and thus they form a complex set of institutions, so that the transformation of an urban environment cannot be understood without understanding these relationships. The research presumes an impact of urbanization on the properties of the traditional structure of the Holy City of Najaf. It defines urbanization as the restructuring and re-planning of urban areas that have lost their functions, bringing them back into the social and cultural life of the city so that they can serve the economy and better respond to the needs of users. Sacred cities provide an organic connection between acts of worship and daily dealings, and they reveal the mechanisms behind the regulatory nature of the sacred shrine and its role in achieving the organizational assimilation of urban morphology. The research arrives at a theoretical framework for the particulars of urbanization, which is applied to the historic center of the old city of Najaf. The most important finding is that the visual and structural dominance of the holy shrine of Imam Ali (peace be upon him) continues to assert the city's visual particularity and its main role: the city hosts one of the most important Muslim shrines in the world, with its golden dome rising visibly above the skyline, and the Imam Ali Mosque remains the hub and center of religious activities.
Thus, as a place of primary importance and a symbol of religious and Islamic culture, it is essential that the shrine of Imam Ali (AS) prevail over all zones of redevelopment in the old city. The research consequently underlines that the distinctive and unique character of the city of Najaf did not arise from nothing, but was achieved through the unrivaled characteristics and features possessed by Najaf alone, which enabled it to occupy this status among Arab and Muslim cities. Development activities must therefore enhance the historical role of the city, providing clear support, strength, and further addition to the city's assets and cultural heritage, rather than crushing its traditional urban fabric, cultural heritage, and historical specificity.

Keywords: Iraq, the city of Najaf, heritage, traditional cities, morphological transformation

842 Nonequilibrium Effects in Photoinduced Ultrafast Charge Transfer Reactions

Authors: Valentina A. Mikhailova, Serguei V. Feskov, Anatoly I. Ivanov

Abstract:

In the last decade, nonequilibrium charge transfer has attracted considerable interest from the scientific community. Examples of such processes are charge recombination in excited donor-acceptor complexes and intramolecular electron transfer from the second excited electronic state. In these reactions, charge transfer proceeds predominantly in the nonequilibrium mode. In excited donor-acceptor complexes, the nuclear nonequilibrium is created by the pump pulse; in intramolecular electron transfer from the second excited electronic state, it is created by the forward electron transfer. The kinetics of these nonequilibrium reactions demonstrate a number of peculiar properties, the most important being: (i) the absence of the Marcus normal region in the free energy gap law for charge recombination in excited donor-acceptor complexes, (ii) the extremely low quantum yield of the thermalized charge-separated state in ultrafast charge transfer from the second excited state, (iii) the nonexponential charge recombination dynamics in excited donor-acceptor complexes, and (iv) the dependence of the charge transfer rate constant on the excitation pulse frequency. This report shows that most of these kinetic features can be well reproduced in the framework of a stochastic multichannel point-transition model. The model involves an explicit description of the formation of the nonequilibrium excited state by the pump pulse and accounts for the reorganization of intramolecular high-frequency vibrational modes, for their relaxation, and for solvent relaxation. The model is able to quantitatively reproduce the complex nonequilibrium charge transfer kinetics observed in modern experiments.
Interpreting the nonequilibrium effects from a unified point of view, in terms of the stochastic multichannel point-transition model, makes it possible to see the similarities and differences of the electron transfer mechanism in various molecular donor-acceptor systems and to formulate general regularities inherent in these phenomena. The nonequilibrium effects in photoinduced ultrafast charge transfer studied over the last 10 years are analyzed, and methods of suppressing ultrafast charge recombination are discussed. The extremely low quantum yield of the thermalized charge-separated state observed in ultrafast charge transfer from the second excited state in the complex of 1,2,4-trimethoxybenzene and tetracyanoethylene in acetonitrile solution directly demonstrates that the effectiveness of nonequilibrium recombination can be close to unity. This experimental finding supports the idea that nonequilibrium charge recombination in excited donor-acceptor complexes can be so effective that the fraction of thermalized complexes is negligible. The regularities inherent to equilibrium and nonequilibrium reactions are discussed, and their fundamental differences are analyzed: notably, the opposite dependences of the charge transfer rates on the dynamical properties of the solvent. An increase in solvent viscosity decreases the thermal rate but increases the nonequilibrium rate. The dependences of the rates on the solvent reorganization energy and the free energy gap can also differ considerably. This work was supported by the Russian Science Foundation (Grant No. 16-13-10122).
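For contrast with the nonequilibrium behavior discussed above, the thermal (equilibrium) Marcus free energy gap law, whose normal region is reported absent in the nonequilibrium kinetics, can be sketched numerically. Units and the prefactor are illustrative, not taken from the study:

```python
import math

def marcus_rate(dG, lam=1.0, kT=0.025, prefactor=1.0):
    """Classical Marcus rate law: k ∝ exp(-(ΔG + λ)² / (4 λ kT)).

    dG and lam in eV (illustrative); kT ≈ 0.025 eV at room temperature.
    """
    return prefactor * math.exp(-(dG + lam) ** 2 / (4 * lam * kT))

# Rate peaks at -ΔG = λ and falls on both sides: the "normal" region
# (-ΔG < λ) and the "inverted" region (-ΔG > λ).
for dG in (-0.5, -1.0, -1.5):
    print(f"dG = {dG:+.1f} eV  k = {marcus_rate(dG):.3e}")
```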

Keywords: charge recombination, higher excited states, free energy gap law, nonequilibrium

841 Validating the Cerebral Palsy Quality of Life for Children (CPQOL-Child) Questionnaire for Use in Sri Lanka

Authors: Shyamani Hettiarachchi, Gopi Kitnasamy

Abstract:

Background: The potentially high level of physical need and dependency experienced by children with cerebral palsy can affect the quality of life (QOL) of the child, the caregiver, and the family. Poor QOL in children with cerebral palsy is associated with the parent-child relationship, limited opportunities for social participation, limited access to healthcare services, psychological well-being, and the child's physical functioning. Given that children with disabilities have little access to remedial support, with inequitable services across districts in Sri Lanka, and given the impact of culture and societal stigma, respondents may hold differing viewpoints. Objectives: The aim of this study was to evaluate the psychometric properties of the Tamil version of the Cerebral Palsy Quality of Life for Children (CPQOL-Child) questionnaire. Design: An instrument development and validation study. Methods: Forward and backward translations of the CPQOL-Child, covering both the primary caregiver form and the child self-report form, were undertaken by a team comprising a physiotherapist, a speech and language therapist, and two linguists. In a pilot phase, the Tamil version of the CPQOL was completed by 45 primary caregivers of children with cerebral palsy and by 15 children with cerebral palsy (GMFCS levels 3-4). The primary caregivers also commented on the process of filling in the questionnaire. The psychometric properties of test-retest reliability, internal consistency, and construct validity were assessed. Results: Test-retest reliability and internal consistency were high. A significant association (p < 0.001) was found between limited motor skills and poor QOL. Cronbach's alpha for the whole questionnaire was 0.95. Similarities and divergences were found between the two groups of respondents. The child respondents identified limited motor skills as associated with physical well-being and autonomy.
Similarly, the primary caregivers associated the severity of motor impairment with limitations in physical well-being and autonomy. For the child respondents, however, QOL was related not to the level of impairment but to environmental factors. In addition, the primary caregivers' main concerns, about the child's future and the child's lack of independence, were not fully captured by the QOL questionnaire employed. Conclusions: Although the initial results show high test-retest reliability and internal consistency of the CPQOL instrument, it does not fully reflect the socio-cultural realities and primary concerns of the caregivers. The current findings highlight the need to take child and caregiver perceptions of QOL into account in clinical practice and research, and they strongly indicate the need for culture-specific measures of QOL.
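The internal-consistency statistic reported in the abstract, Cronbach's alpha, can be computed from an item-by-respondent score matrix. The scores in this sketch are hypothetical, not CPQOL data:

```python
# Cronbach's alpha from a small items x respondents matrix of
# hypothetical Likert-style scores (not CPQOL data).

def variance(xs):
    """Sample variance (n - 1 denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(item_scores):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = len(item_scores)
    totals = [sum(col) for col in zip(*item_scores)]  # per-respondent totals
    return k / (k - 1) * (1 - sum(variance(i) for i in item_scores)
                          / variance(totals))

items = [
    [3, 4, 2, 5, 4, 3],   # item 1 scores across six respondents
    [3, 5, 2, 4, 4, 3],   # item 2
    [2, 4, 3, 5, 5, 3],   # item 3
]
print(f"alpha = {cronbach_alpha(items):.2f}")
```

Values approaching 1 indicate that the items covary strongly, as with the 0.95 reported for the full questionnaire.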

Keywords: cerebral palsy, CPQOL, culture, quality of life

Procedia PDF Downloads 337
840 Development of a Home-Hotel-Hospital-School Community-Based Palliative Care Model for Patients with Cancer in Suratthani, Thailand

Authors: Patcharaporn Sakulpong, Wiriya Phokhwang

Abstract:

Background: Banpunrug (Love Sharing House), established in 2013, provides community-based palliative care for patients with cancer from 7 provinces in southern Thailand. These patients come to receive outpatient chemotherapy and radiotherapy at Suratthani Cancer Hospital. They are poor and often uneducated, and they need accommodation during their 30-45 day course of therapy. Methods: Community participatory action research (PAR) was employed to establish a model of palliative care for patients with cancer. The participants included health care providers, the community, and patients and their families. The PAR process included problem identification and needs assessment, community and team establishment, field survey, organization founding, model-of-care planning, action and inquiry (PDCA), outcome evaluation, and model distribution. Results: The model of care at Banpunrug rests on the HHHS concept: Banpunrug is a Home for patients; patients live in a house as comfortable as a Hotel; the patients are given care and living facilities similar to those in a Hospital; and the house is a School where patients learn how to take care of themselves, how to live well with cancer, and most importantly how to prepare themselves for a good death. The house is also a school of humanized care for health care providers. Banpunrug's philosophy of care is based on friendship therapy, social and spiritual support, community partnership, patient-family centeredness, a Live & Love sharing house, and holistic and humanized care. With this philosophy, the house is managed as a home of the patients and everyone involved; everything is free of charge for all eligible patients and their family members; all facilities and living expenses are donated by benevolent people, friends, and the community. Everyone, including patients and family, has a sense of belonging to the house, and there is no hierarchy between health care providers and the patients in the house.
The house is situated in a temple and a community and is supported by many local nonprofit organizations and healthcare facilities, such as a sub-district health promotion hospital and Suratthani Cancer Hospital. Village health volunteers and multi-professional health care volunteers have contributed not only appropriate care but also the knowledge and experience to develop a distinctive HHHS community-based palliative care model for patients with cancer. Since its opening, the house has been a home for more than 400 patients and 300 family members. It is also a model for many national and international healthcare organizations and providers, who come to visit and learn about palliative care in and by the community. Conclusions: The success of this palliative care model comes from community involvement, the contributions of multi-professional volunteers, and the HHHS concept. Banpunrug promotes consistent care across the cancer trajectory, independent of prognosis, in order to strengthen the full integration of palliative care.

Keywords: community-based palliative care, model, participatory action research, patients with cancer

Procedia PDF Downloads 260
839 E-Governance: A Key for Improved Public Service Delivery

Authors: Ayesha Akbar

Abstract:

Public service delivery has witnessed significant improvement with the integration of information and communication technology (ICT). ICT not only improves management structures with advanced technology for the surveillance of service delivery but also provides evidence for informed decisions and policy. Pakistan's public sector organizations have, on the whole, not been able to produce good results in ensuring service delivery. Notwithstanding, some public sector organizations in Pakistan have adopted modern technology and proved their credence by providing better service delivery standards. These good indicators provide a sound basis for integrating technology in public sector organizations and for a shift of policy towards evidence-based policy making. Rescue-1122 is a public sector organization that provides emergency services and has proved to be a successful model for service delivery that saves human lives and supports human development in Pakistan. Information about the organization was gathered using a qualitative research methodology, drawing broadly on primary and secondary sources: the Rescue-1122 website; official reports of organizations including the UNDP (United Nations Development Programme) and the WHO (World Health Organization); and 10 in-depth interviews with senior administrative staff working in the Lahore offices. The information received has been incorporated into the study for a better understanding of the organization and its management procedures. Rescue-1122 represents a successful model of delivering services efficiently in disaster management. The management of Rescue-1122 has strategized its policies and procedures to develop a comprehensive model with the integration of technology. This model provides efficient service delivery as well as maintaining the standards of the organization.
The service delivery model of Rescue-1122 works on two fronts: the front-office interface and the back-office interface. The back office defines the procedures of operations and assures the compliance of the staff, whereas the front office, equipped with the latest technology and good infrastructure, handles emergency calls. Both ends are integrated with satellite-based vehicle tracking, a wireless system, a fleet monitoring system and IP cameras, which monitor every move of the staff to provide better services and to pinpoint distortions in the services. The standard time for reaching the emergency spot is 7 minutes, and while a case is being handled, the driver's behavior, traffic volume and the technical assistance being provided to the emergency case are monitored by the front office. The whole body of information is then uploaded from the provincial offices to the main dashboard at the Lahore headquarters. The latest technology is being used by Rescue-1122 to deliver efficient services, to investigate flaws if found, and to develop data for informed decision making. Other public sector organizations in Pakistan can also develop such models to integrate technology for improving service delivery and to develop evidence for informed decisions and policy making.

Keywords: data, e-governance, evidence, policy

Procedia PDF Downloads 231
838 Development of Biosensor Chip for Detection of Specific Antibodies to HSV-1

Authors: Zatovska T. V., Nesterova N. V., Baranova G. V., Zagorodnya S. D.

Abstract:

In recent years, biosensor technologies based on the phenomenon of surface plasmon resonance (SPR) have become increasingly used in biology and medicine. They make it possible to follow in real time the binding of biomolecules and to identify agents that specifically interact with biologically active substances immobilized on the biosensor surface (biochips). Special attention is paid to the use of biosensor analysis of antibody-antigen interactions in the diagnosis of diseases caused by viruses and bacteria. According to the WHO, diseases caused by the herpes simplex virus (HSV) take second place (15.8%) after influenza as a cause of death from viral infections. Current diagnostics of HSV infection include PCR and ELISA assays. The latter allows determination of the degree of immune response to viral infection and the respective stages of its progress. In this regard, the search for new and accessible diagnostic methods is very important. This work aimed to develop a biosensor chip for the detection of specific antibodies to HSV-1 in human blood serum. The proteins of HSV-1 (strain US) were used as antigens. The viral particles were accumulated in the MDBK cell culture and purified by differential centrifugation in a cesium chloride density gradient. Analysis of the HSV-1 proteins was performed by polyacrylamide gel electrophoresis and ELISA. The protein concentration was measured using a DeNovix DS-11 spectrophotometer. The device for the detection of antigen-antibody interactions was an optoelectronic two-channel spectrometer, 'Plasmon-6', using the SPR phenomenon in the Kretschmann optical configuration; it was developed at the Lashkarev Institute of Semiconductor Physics of the NASU. The carrier used was a glass plate covered with a 45 nm gold film. Screening of human blood sera was performed using the test system 'HSV-1 IgG ELISA' (GenWay, USA).
Development of the biosensor chip included optimization of the conditions of viral antigen sorption and of the analysis steps. For immobilization of viral proteins, a 0.2% solution of Dextran 17,200 (Sigma, USA) was used. Sorption of antigen took place at 4-8°C within 18-24 hours. After washing the chip three times with citrate buffer (pH 5.0), a 1% solution of BSA was applied to block the sites not occupied by viral antigen. A direct dependence was found between the amount of immobilized HSV-1 antigen and the SPR response. Using the obtained biochips, panels of 25 human sera positive and 10 negative for antibodies to HSV-1 were analyzed. The average SPR response was 185 a.s. for negative sera and from 312 to 1264 a.s. for positive sera. The SPR data agreed with the ELISA results for 96% of samples, proving the great potential of SPR in such research. The possibility of biochip regeneration was investigated, and it was shown that application of a 10 mM NaOH solution leads to rupture of the intermolecular bonds; this allows the chip to be reused several times. Thus, in this study a biosensor chip for the detection of specific antibodies to HSV-1 was successfully developed, expanding the range of diagnostic methods for this pathogen.
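Given the reported separation between negative (around 185 a.s.) and positive (312 to 1264 a.s.) sera, deciding serostatus amounts to thresholding the SPR response and checking agreement with ELISA. A minimal sketch with invented responses and an assumed cut-off of 250 a.s. (the study does not state its threshold):

```python
def classify_spr(responses, threshold):
    """Call a serum HSV-1 antibody positive when its SPR response
    exceeds the threshold (same angular units as the instrument)."""
    return [r > threshold for r in responses]

def agreement(predicted, reference):
    """Fraction of samples where the SPR verdict matches ELISA."""
    return sum(p == r for p, r in zip(predicted, reference)) / len(predicted)

# Illustrative panel: two negatives near 185 a.s., three positives above 312 a.s.
spr_responses = [170, 190, 320, 900, 1260]
elisa_results = [False, False, True, True, True]
predicted = classify_spr(spr_responses, threshold=250)
print(agreement(predicted, elisa_results))  # → 1.0
```

On a real panel the threshold would be tuned against the ELISA reference; the abstract's 96% agreement corresponds to one mismatch in 25 samples at whatever cut-off was used.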

Keywords: biochip, herpes virus, SPR

Procedia PDF Downloads 412
837 A Comparative Human Rights Analysis of the Securitization of Migration in the Fight against Terrorism in Europe: An Evaluation of Belgium

Authors: Louise Reyntjens

Abstract:

The last quarter of the twentieth century was characterized by the emergence of a new kind of terrorism: religiously-inspired terrorism. Islam finds itself at the heart of this new wave, considering the number of international attacks committed by Islamic-inspired perpetrators. With religiously inspired terrorism as an operating framework, governments increasingly rely on immigration law to counter such terrorism. Immigration law seems particularly useful because its core task consists of keeping 'unwanted' people out. Islamic terrorists more often than not have an immigrant background and will be subject to immigration law. As a result, immigration law becomes more and more 'securitized'. The European migration crisis has reinforced this trend. The research explores the human rights consequences of immigration law's securitization in Europe. For this, the author selected four European countries for a comparative study: Belgium, France, the United Kingdom and Sweden. All these countries face similar social and security issues but respond very differently to them. The United Kingdom positions itself on the repressive side of the spectrum. Sweden, on the other hand, has also introduced restrictions to its immigration policy but remains on the tolerant side of the spectrum. Belgium and France are situated in between. This contribution evaluates the situation in Belgium. Through a series of legislative changes, the Belgian parliament (i) greatly expanded the possibilities of expelling foreign nationals for (vaguely defined) reasons of 'national security'; (ii) abolished almost all procedural protection associated with this decision; and (iii) broadened, as an extra security measure, the possibility of depriving individuals convicted of terrorism of their Belgian nationality.
Measures such as these are obviously problematic from a human rights perspective; they jeopardize the principle of legality, the presumption of innocence, the right to protection of private and family life and the prohibition of torture. Moreover, this contribution raises questions about immigration law's suitability as a counterterrorism instrument. Is it a legitimate step, considering the type of terrorism we face today? Or is it merely a strategic move, considering the broader maneuvering space immigration law offers and the lack of political resistance governments face when infringing the rights of foreigners? Even more so, figures demonstrate that today's terrorist threat does not necessarily stem from outside our borders. Does immigration law then still absorb the threat, if it has ever done so completely? The study's goal is to critically assess, from a human rights perspective, the counterterrorism strategies European governments have adopted. As most governments adopt variations of the same core concepts, the study's findings will hold true even beyond the four countries addressed.

Keywords: Belgium, counterterrorism strategies, human rights, immigration law

Procedia PDF Downloads 100
836 Multi-Criteria Decision Making Network Optimization for Green Supply Chains

Authors: Bandar A. Alkhayyal

Abstract:

Modern supply chains are typically linear, transforming virgin raw materials into products for end consumers, who then discard them after use to landfills or incinerators. Major efforts are now underway to create a circular economy that reduces non-renewable resource use and waste. One important aspect of these efforts is the development of Green Supply Chain (GSC) systems, which enable a reverse flow of used products from consumers back to manufacturers, where they can be refurbished or remanufactured, to both economic and environmental benefit. This paper develops novel multi-objective optimization models to inform GSC system design at multiple levels: (1) strategic planning of facility location and transportation logistics; (2) tactical planning of optimal pricing; and (3) policy planning to account for potential valuation of GSC emissions. First, physical linear programming was applied to evaluate GSC facility placement by determining the quantities of end-of-life products to transport from candidate collection centers to remanufacturing facilities while satisfying cost and capacity criteria. Second, disassembly and remanufacturing processes have received little attention in the industrial engineering and process cost modeling literature, yet the increasing scale of remanufacturing operations, worth nearly $50 billion annually in the United States alone, has made GSC pricing an important subject of research. A non-linear physical programming model for optimizing the pricing policy for remanufactured products, maximizing total profit while minimizing product recovery costs, was examined and solved. Finally, a deterministic equilibrium model was used to determine the effects of internalizing a cost of GSC greenhouse gas (GHG) emissions into the optimization models.
Changes in optimal facility use, transportation logistics, and pricing/profit margins were all investigated against a variable cost of carbon, using a case study system built from actual data from sites in the Boston area. As carbon costs increase, the optimal GSC system undergoes several distinct shifts in topology as it seeks new cost-minimal configurations. A comprehensive quantitative evaluation of the model's performance was done using orthogonal arrays. Results were compared to top-down estimates from economic input-output life cycle assessment (EIO-LCA) models to contrast remanufacturing GHG emission quantities with those from original equipment manufacturing operations. Introducing a carbon cost of $40/t CO2e increases modeled remanufacturing costs by 2.7% but also increases original equipment costs by 2.3%. The assembled work advances the theoretical modeling of optimal GSC systems and presents a rare case study of remanufactured appliances.
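The strategic-level placement problem described above, shipping end-of-life products from candidate collection centers to remanufacturing facilities under cost and capacity criteria, has the shape of a classic transportation linear program. A plain-LP toy sketch (not the paper's physical programming formulation) with invented supplies, capacities and unit costs rather than the study's Boston-area data:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: 2 collection centers, 2 remanufacturing facilities.
supply = [120, 80]            # end-of-life units collected at each center
capacity = [100, 150]         # units each facility can absorb
cost = np.array([[4.0, 6.0],  # cost[i][j]: shipping center i -> facility j
                 [5.0, 3.0]])

# Decision variables x[i][j] >= 0, flattened row-major: [x00, x01, x10, x11].
c = cost.flatten()
# Everything collected must be shipped: sum_j x[i][j] == supply[i]
A_eq = [[1, 1, 0, 0], [0, 0, 1, 1]]
# Facility capacity: sum_i x[i][j] <= capacity[j]
A_ub = [[1, 0, 1, 0], [0, 1, 0, 1]]

res = linprog(c, A_ub=A_ub, b_ub=capacity, A_eq=A_eq, b_eq=supply)
print(res.x.reshape(2, 2))  # optimal shipment plan
print(res.fun)              # minimal total transport cost
```

Sweeping a per-unit carbon cost into the `cost` matrix and re-solving would reproduce, in miniature, the topology shifts against a variable carbon price that the abstract describes.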

Keywords: circular economy, extended producer responsibility, greenhouse gas emissions, industrial ecology, low carbon logistics, green supply chains

Procedia PDF Downloads 154
835 Optimal-Based Structural Vibration Attenuation Using Nonlinear Tuned Vibration Absorbers

Authors: Pawel Martynowicz

Abstract:

Vibration is a crucial problem for slender structures such as towers, masts, chimneys, wind turbines, bridges and high buildings, which is why most of them are equipped with vibration attenuation or fatigue reduction solutions. In this work, a slender structure (a wind turbine tower-nacelle model) equipped with nonlinear, semiactive tuned vibration absorber(s) is analyzed. For the purposes of this study, magnetorheological (MR) dampers are used as semiactive actuators. Several optimal-based approaches to structural vibration attenuation are investigated against the standard 'ground-hook' law and passive tuned vibration absorber implementations. The common approach to optimal control of nonlinear systems is offline computation of the optimal solution; however, the open-loop control so determined suffers from a lack of robustness to uncertainties (e.g., unmodelled dynamics, perturbations of external forces or initial conditions), and thus perturbation control techniques are often used. However, proper linearization may be an issue for highly nonlinear systems with implicit relations between state, co-state, and control. The main contribution of the author is the development, as well as the numerical and experimental verification, of Pontryagin maximum-principle-based vibration control concepts that produce the actuator control input directly (not the demanded force); the force tracking algorithm, a source of control inaccuracy, is thereby entirely omitted. These concepts, including one-step optimal control, quasi-optimal control, and an optimal-based modified 'ground-hook' law, can be directly implemented in online, real-time feedback control for periodic (or semi-periodic) disturbances with invariant or time-varying parameters, as well as for non-periodic, transient or random disturbances, which is a limitation of some other known solutions.
No offline calculation, excitation/disturbance assumption or vibration frequency determination is necessary; moreover, all of the nonlinear actuator (MR damper) force constraints, i.e., no active forces, lower and upper saturation limits, hysteresis-type dynamics, etc., are embedded in the control technique, so the solution is optimal or suboptimal for the assumed actuator, respecting its limitations. Depending on the selected method variant, a moderate or decisive reduction in the computational load is possible compared to other methods of nonlinear optimal control, while assuring the quality and robustness of the vibration reduction system and addressing multiple operational aspects, such as minimization of the amplitude of the deflection and acceleration of the vibrating structure, its potential and/or kinetic energy, the required actuator force, the control input (e.g., electric current in the MR damper coil) and/or the stroke amplitude. The developed solutions are characterized by high vibration reduction efficiency: the obtained maximum values of the dynamic amplification factor are close to 2.0, while for the best of the passive systems, these values exceed 3.5.
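The baseline 'ground-hook' law mentioned above maps structure velocity to a damper command; one common semiactive variant commands high MR damper current only when the demanded force is dissipative. A minimal sketch of that baseline (the gain, current limits and switching logic are illustrative, not the author's optimal-based laws):

```python
def ground_hook_command(abs_vel, rel_vel, b_gh, i_min, i_max, gain):
    """Clipped 'ground-hook' law for a semiactive MR damper.

    abs_vel: absolute velocity of the primary structure
    rel_vel: relative velocity across the damper
    b_gh:    ground-hook damping gain (demanded force = b_gh * abs_vel)
    The damper can only dissipate energy, so high coil current is
    commanded only when the demanded force opposes the relative motion;
    otherwise the current drops to its minimum. Current saturates at i_max.
    """
    demanded = b_gh * abs_vel
    if demanded * rel_vel > 0:  # dissipative quadrant: damper can help
        return min(i_max, max(i_min, gain * abs(demanded)))
    return i_min                # cannot generate active force: back off

# Dissipative case: current scales with demanded force, clipped to limits
print(ground_hook_command(1.0, 1.0, b_gh=2.0, i_min=0.0, i_max=1.0, gain=0.3))
# Non-dissipative case: current falls to the minimum
print(ground_hook_command(1.0, -1.0, b_gh=2.0, i_min=0.0, i_max=1.0, gain=0.3))
```

The abstract's contribution is precisely to replace this force-demand-plus-tracking structure with laws that output the coil current directly from the optimality conditions.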

Keywords: magnetorheological damper, nonlinear tuned vibration absorber, optimal control, real-time structural vibration attenuation, wind turbines

Procedia PDF Downloads 114
834 Hydrogeological Appraisal of Karacahisar Coal Field (Western Turkey): Impacts of Mining on Groundwater Resources Utilized for Water Supply

Authors: Sukran Acikel, Mehmet Ekmekci, Otgonbayar Namkhai

Abstract:

Lignite coal fields in western Turkey generally occur in tensional Neogene basins bordered by major faults. The Karacahisar coal field in the Mugla province of western Turkey is a large Neogene basin filled with an alternation of silicic and calcareous layers. The basement of the basin is composed mainly of karstified carbonate rocks of Mesozoic age and schists of Paleozoic age. The basement rocks are exposed in the highlands surrounding the basin. The basin-fill deposits form shallow, low-yield, local aquifers, whereas the karstic carbonate rock masses form the major aquifer in the region. The karstic aquifer discharges through a spring zone issuing at the intersection of two major faults. The municipal water demand of Bodrum, a major tourist city, is almost entirely supplied by boreholes tapping the karstic aquifer. A well field has been constructed on the eastern edge of the coal basin, which forms a ridge separating two Neogene basins. A major concern was raised about the plausible impact of mining activities on the groundwater system in general and on the water supply well field in particular. The hydrogeological studies carried out in the area revealed that the coal seam is located below the groundwater level. Mining operations will therefore be affected by groundwater inflow to the pits, which will require dewatering measures. Dewatering activities at mine sites have two-sided effects: a) lowering the groundwater level at and around the pit allows a safe and effective mining operation; b) continuous dewatering causes the cone of depression to expand until it reaches a spring, stream and/or well utilized by local people, capturing their water. The plausible effect of mining operations on the flow of the spring zone was another issue of concern. Therefore, a detailed, representative hydrogeological conceptual model of the site was developed on the basis of available data and field work.
According to the hydrogeological conceptual model, dewatering of the Neogene layers will not hydraulically affect the water supply wells; however, the ultimate perimeter of the open pit will expand to intersect the well field. According to the conceptual model, the coal seam is separated from the bottom by a thick impervious clay layer sitting on the carbonate basement. Therefore, the hydrostratigraphy does not allow a hydraulic interaction between the mine pit and the karstic carbonate rock aquifer. However, the structural setting in the basin suggests that deep faults intersecting the basement and the Neogene sequence will most probably carry the deep groundwater up to a level above the bottom of the pit. This will require measures to lower the piezometric level of the carbonate rock aquifer along the faults. Dewatering the carbonate rock aquifer will reduce the flow to the spring zone. All findings were put together to recommend a strategy for a safe and effective mining operation.

Keywords: conceptual model, dewatering, groundwater, mining operation

Procedia PDF Downloads 393
833 Design and Development of Graphene Oxide Nanosheets Modified by Chitosan, Showing a pH-Sensitive Surface, as a Smart Drug Delivery System for Controlled Release of Doxorubicin

Authors: Parisa Shirzadeh

Abstract:

Traditional drug delivery systems, in which drugs are administered by patients in multiple doses at specified intervals, do not meet the needs of up-to-date drug delivery. In today's world, we are dealing with a huge number of recombinant peptide and protein drugs and analogues of the body's hormones, most of which are made with genetic engineering techniques. Many of these drugs are used to treat critical diseases such as cancer. Given the limitations of the traditional method, researchers have sought ways to overcome its problems. Following these efforts, controlled drug release systems were introduced, which have many advantages: with controlled release, the concentration of the drug in the body is kept at a defined level over the treatment period. Graphene is a biodegradable, non-toxic material; compared to carbon nanotubes, its price is lower, which makes it cost-effective for industrialization. Moreover, the highly reactive, wide surfaces of graphene plates make graphene easier to modify than carbon nanotubes. Graphene oxide is often synthesized using concentrated oxidizers such as sulfuric acid, nitric acid, and potassium permanganate, based on the Hummers method. In comparison with the initial graphene, the resulting graphene oxide is heavier and carries carboxyl, hydroxyl, and epoxy groups. Therefore, graphene oxide is very hydrophilic, dissolves easily in water and forms a stable solution. Because the hydroxyl, carboxyl, and epoxy groups created on the surface are highly reactive, they can connect with other functional groups such as amines, esters and polymers and bring new features to the surface of graphene.
In fact, the creation of hydroxyl, carboxyl, and epoxy groups, i.e., graphene oxidation, is the first step in creating other functional groups on the surface of graphene. Chitosan is a natural polymer and does not cause toxicity in the body. Due to its chemical structure and its OH and NH groups, it is suitable for binding to graphene oxide and increases its solubility in aqueous solutions. Here, graphene oxide (GO) covalently modified by chitosan (CS) was developed for the controlled release of doxorubicin (DOX). In this study, GO was produced by the Hummers method under acidic conditions. It was then chlorinated by oxalyl chloride to increase its reactivity towards amines. After that, the reaction with chitosan's amino groups formed amide linkages, and doxorubicin was attached to the carrier surface by π-π interaction in phosphate buffer. GO, GO-CS, and GO-CS-DOX were characterized by FT-IR, Raman spectroscopy, TGA, and SEM. Loading and release capability was determined by UV-visible spectroscopy. The loading results showed a high capacity of DOX absorption (99%), and pH dependence was identified in the release of DOX from the GO-CS nanosheets at pH 5.3 and 7.4, with a faster release rate under acidic conditions.
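The loading figure quoted above (99% DOX absorption) comes from UV-visible spectroscopy; assuming Beer-Lambert linearity (absorbance proportional to concentration), the loading efficiency reduces to a ratio of absorbances. A minimal sketch with illustrative absorbance values, not the study's measurements:

```python
def loading_efficiency(abs_initial, abs_supernatant):
    """Drug loading efficiency from UV-Vis absorbance readings,
    assuming Beer-Lambert linearity: the fraction of DOX removed
    from solution by the carrier is the relative drop in absorbance
    of the supernatant at the drug's absorption maximum."""
    return (abs_initial - abs_supernatant) / abs_initial

# Illustrative values consistent with near-complete adsorption (~99%)
print(f"{loading_efficiency(1.00, 0.01):.0%}")  # → 99%
```

The release study works the same way in reverse: absorbance of the release medium at pH 5.3 and 7.4 is converted to cumulative DOX released over time.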

Keywords: graphene oxide, chitosan, nanosheet, controlled drug release, doxorubicin

Procedia PDF Downloads 114
832 Improved Operating Strategies for the Optimization of Proton Exchange Membrane Fuel Cell System Performance

Authors: Guillaume Soubeyran, Fabrice Micoud, Benoit Morin, Jean-Philippe Poirot-Crouvezier, Magali Reytier

Abstract:

Proton Exchange Membrane Fuel Cell (PEMFC) technology is considered a solution for the reduction of CO2 emissions. However, this technology still faces several challenges on the way to high-scale industrialization. In this context, increased durability remains a critical aspect of the competitiveness of this technology. Fortunately, performance degradation in nominal operating conditions is partially reversible, meaning that if specific conditions are applied, a partial recovery of fuel cell performance can be achieved, while irreversible degradations can only be mitigated. Thus, it is worth studying the optimal conditions to recover these reversible degradations and assessing the long-term impact of such procedures on the performance of the cell. Reversible degradations consist mainly of poisoning of Pt active sites by carbon monoxide at the anode, heterogeneities in water management during use, and oxidation/deactivation of Pt active sites at the cathode. The latter is identified as a major source of reversible performance loss, caused by the presence of oxygen, high temperature and high cathode potential, which favor platinum oxidation, especially at high-efficiency operating points. Hence, we studied here a recovery procedure aiming at reducing the platinum oxides by decreasing the cathode potential during operation. Indeed, the application of a short air starvation phase leads to a drop in cathode potential, and cell performance is temporarily increased afterwards. Nevertheless, local temperature and current heterogeneities within the cells are favored and must be minimized. The fuel consumed during the recovery phase must also be considered in evaluating the global efficiency. Consequently, the purpose of this work is to find an optimal compromise between the recovery of reversible degradations by air starvation, the increase of global cell efficiency and the mitigation of irreversible degradation effects.
Different operating parameters were first studied, such as cell voltage, temperature and humidity, in a single-cell set-up. Considering the global PEMFC system efficiency, the tests showed that reducing the duration of the recovery phase and reducing the cell voltage were key to an efficient recovery. The frequency of the recovery phases was a major factor as well, and a specific method was established to find the optimal frequency depending on the duration and voltage of the recovery phase. Long-term degradation was then studied by applying FC-DLC cycles based on NEDC cycles to a 4-cell short stack, alternating test sequences with and without recovery phases. Depending on recovery phase timing, cell efficiency during the cycle was increased by up to 2% thanks to a mean voltage increase of 10 mV during the test sequences with recovery phases. However, cyclic voltammetry results suggest that the implementation of recovery phases accelerates the decrease of the platinum active area, which could be due to the high potential variations applied to the cathode electrode during operation.
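The compromise described above, performance gained between recoveries versus fuel spent during each air starvation phase, can be illustrated with crude energy bookkeeping. All numbers below are illustrative assumptions, not the study's measurements, and the model ignores the gradual decay of the recovered voltage between phases:

```python
def net_gain_fraction(v_base, dv, t_run, t_rec):
    """Relative change in delivered energy per unit fuel when periodic
    recovery phases are applied, versus never recovering.

    v_base: baseline cell voltage (V)
    dv:     mean voltage gain after a recovery phase (V)
    t_run:  operating time between recoveries (s)
    t_rec:  duration of each recovery (air starvation) phase (s),
            during which output is assumed lost but fuel still flows.
    Fuel is taken proportional to elapsed time at fixed current.
    """
    energy_per_fuel = (v_base + dv) * t_run / (t_run + t_rec)
    return energy_per_fuel / v_base - 1

# Short, infrequent starvation phases pay off...
print(net_gain_fraction(0.65, 0.010, t_run=600, t_rec=2))   # small net gain
# ...but long ones consume more fuel than the voltage gain returns.
print(net_gain_fraction(0.65, 0.010, t_run=600, t_rec=60))  # net loss
```

Maximizing such a net-gain expression over the recovery frequency is one way to picture the optimal-frequency method the abstract refers to.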

Keywords: durability, PEMFC, recovery procedure, reversible degradation

Procedia PDF Downloads 119
831 Development of a Systematic Design for Evaluating Force-on-Force Security Exercises at Nuclear Power Plants

Authors: Seungsik Yu, Minho Kang

Abstract:

As the threat of terrorism against nuclear facilities has increased globally after the attacks of September 11, efforts are being made to strengthen physical protection systems and emergency response systems. Since 2015, Korea has implemented physical protection security exercises for nuclear facilities. The exercises should be carried out with full cooperation between the operator and response forces. Performance testing of the physical protection system should include appropriate exercises, for example, force-on-force exercises, to determine whether the response forces can provide an effective and timely response to prevent sabotage. Significant deficiencies and actions taken should be reported as stipulated by the competent authority. The IAEA (International Atomic Energy Agency) is also preparing force-on-force exercise program documents to support the exercises of member states. Currently, the ROK (Republic of Korea) conducts exercises using a force-on-force exercise evaluation system it developed itself for nuclear power plants, and it is necessary to establish exercise procedures that take the use of this evaluation system into account. The purpose of this study is to establish the work procedures of the three major organizations involved in the force-on-force exercises of nuclear power plants in the ROK that are conducted using the force-on-force exercise evaluation system. The three major organizations are the licensee, KINAC (Korea Institute of Nuclear Nonproliferation and Control), and the NSSC (Nuclear Safety and Security Commission). The major activities are as follows. First, the licensee establishes and conducts an exercise plan, and when recommendations are derived from the results of the exercise, it prepares and carries out a force-on-force result report including a plan for implementing the recommendations.
Other detailed tasks include consultation with surrounding units for the adversary force, interviews with exercise participants, support for document evaluation, and self-training to improve familiarity with MILES (Multiple Integrated Laser Engagement System). Second, KINAC prepares a review report on the force-on-force exercise plan established by the licensee, evaluates the force-on-force exercise using the exercise evaluation system, and prepares an exercise evaluation report. Other detailed tasks include MILES training, adversary consultation, management of the exercise evaluation system, and analysis of exercise evaluation results. Finally, the NSSC decides whether or not to approve the force-on-force exercise and issues corrective requests to the nuclear facility based on the exercise results. The most important part of the ROK's force-on-force exercise system is the analysis through the exercise evaluation system carried out by KINAC after the exercise. The analytical method proceeds in the order of collecting data from the exercise evaluation system and then analyzing the collected data. The exercise application process of the exercise evaluation system, introduced in the ROK in 2016, will be concretely set up, and a system will be established to provide objective and consistent conclusions between exercise sessions. Based on the conclusions drawn, the ultimate goal is to complement the physical protection system of the licensee so that the licensee can respond effectively and in a timely manner against sabotage or the unauthorized removal of nuclear materials.

Keywords: Force-on-Force exercise, nuclear power plant, physical protection, sabotage, unauthorized removal

Procedia PDF Downloads 134
830 Empirical Study of Innovative Development of Shenzhen Creative Industries Based on Triple Helix Theory

Authors: Yi Wang, Greg Hearn, Terry Flew

Abstract:

In order to understand how cultural innovation occurs, this paper uses the Triple Helix framework to explore the interaction between universities, creative industries, and government in the creative economy of Shenzhen, China. Over the past two decades, the Triple Helix has been recognized as a new theory of innovation to inform and guide policy-making in national and regional development. Universities and governments around the world, especially in developing countries, have taken action to strengthen connections with creative industries in order to develop regional economies. To date, research based on the Triple Helix model has focused primarily on science and technology collaborations, largely ignoring other fields. Hence, there is an opportunity to better understand how the Triple Helix framework might apply in the field of creative industries and what knowledge might be gleaned from such an undertaking. Since the late 1990s, the concept of ‘creative industries’ has been introduced into policy and academic discourse. The development of creative industries policy by city agencies has improved city wealth creation and economic capital. It claims to generate a ‘new economy’ of enterprise dynamics and activities for urban renewal through the arts and digital media, via knowledge transfer in knowledge-based economies. Creative industries also channel commercial inputs into the creative economy, dynamically reshaping the city into an innovative culture. In particular, this paper concentrates on creative spaces (incubators, digital tech parks, maker spaces, art hubs) where academia, industry and government interact. China has sought to enhance the brand of its manufacturing industry through cultural policy, aiming to shift the image of ‘Made in China’ to ‘Created in China’ and to give Chinese brands greater competitiveness in the global economy.
Shenzhen is a notable example in China of an international knowledge-based city following this path. In 2009, the Shenzhen Municipal Government proposed the city slogan ‘Build a Leading Cultural City’ to signal the government’s strong will to develop Shenzhen’s cultural capacity and creativity. The vision for Shenzhen is to become a cultural innovation center, a regional cultural center and an international cultural city. However, there has been a lack of attention to triple helix interactions in the creative industries in China. In particular, there is limited knowledge about how interactions in co-located creative spaces within triple helix networks influence city-based innovation; that is, the roles of the participating institutions need to be better understood. This paper therefore discusses the interplay between universities, creative industries and government in Shenzhen. Secondary analysis and documentary analysis are used as methods in an effort to practically ground and illustrate the theoretical framework. Furthermore, the paper explores how creative spaces are being used to implement the Triple Helix in the creative industries, in particular the new combinations of resources generated through the consolidation of, and interactions among, the institutions. This study thus provides an innovative lens for understanding the components, relationships and functions that exist within creative spaces by applying the Triple Helix framework to the creative industries.

Keywords: cultural policy, creative industries, creative city, triple Helix

Procedia PDF Downloads 184
829 Household Climate-Resilience Index Development for the Health Sector in Tanzania: Use of Demographic and Health Surveys Data Linked with Remote Sensing

Authors: Heribert R. Kaijage, Samuel N. A. Codjoe, Simon H. D. Mamuya, Mangi J. Ezekiel

Abstract:

There is strong evidence that the climate has changed significantly, affecting various sectors including public health. The recommended feasible solution is to adopt development trajectories that combine both mitigation and adaptation measures to improve resilience pathways. This approach demands consideration of the complex interactions between climate and social-ecological systems. While other sectors such as agriculture and water have developed climate resilience indices, the public health sector in Tanzania still lags behind. The aim of this study was to find out how Demographic and Health Surveys (DHS) linked with remote sensing (RS) technology and meteorological information can be used as tools to inform climate-resilient development and evaluation for the health sector. A methodological review was conducted in which a number of studies were content-analyzed to find appropriate indicators and indices for household climate resilience and an approach for integrating them. These indicators were critically reviewed, listed, filtered and their sources determined. Preliminary identification and ranking of indicators were conducted using a participatory approach of pairwise weighting by national stakeholders selected from meetings and conferences on human health and climate change science in Tanzania. DHS datasets were retrieved from the MEASURE Evaluation project, processed and critically analyzed for possible climate change indicators. Other sources of indicators of climate change exposure were also identified. For the purpose of preliminary reporting, the operationalization of selected indicators was discussed to produce a methodological approach to be used in a comparative resilience analysis study. It was found that a household climate resilience index depends on the combination of three sub-indices, namely Household Adaptive and Mitigation Capacity (HC), Household Health Sensitivity (HHS) and Household Exposure Status (HES).
It was also found that DHS data alone cannot support resilience evaluation unless integrated with other data sources, notably flooding data as a measure of vulnerability, remotely sensed Normalized Difference Vegetation Index (NDVI) imagery, and meteorological data (deviation from rainfall patterns). It can be concluded that if these sub-indices derived from DHS datasets are computed and scientifically integrated, they can produce a single climate resilience index, and resilience maps could be generated at different spatial and temporal scales to enhance targeted interventions for climate-resilient development and evaluation. However, further studies are needed to test the sensitivity of the index in comparative resilience analyses among selected regions.
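As a minimal numerical sketch of how the three sub-indices might be combined, the snippet below assumes each sub-index has already been normalized to [0, 1] and that resilience rises with adaptive capacity and falls with sensitivity and exposure. The weighting and the household values are hypothetical illustrations; the abstract does not specify the actual integration formula.

```python
def resilience_index(hc, hhs, hes):
    """Composite household climate-resilience index (illustrative only).

    hc  : Household Adaptive and Mitigation Capacity, normalized to [0, 1]
    hhs : Household Health Sensitivity, normalized to [0, 1]
    hes : Household Exposure Status, normalized to [0, 1]

    Resilience rises with capacity and falls with sensitivity and
    exposure; the result is rescaled back to [0, 1].
    """
    raw = hc - 0.5 * (hhs + hes)   # ranges over [-1, 1]
    return (raw + 1.0) / 2.0       # rescale to [0, 1]

# Three hypothetical households
households = [
    {"hc": 0.9, "hhs": 0.2, "hes": 0.1},  # high capacity, low exposure
    {"hc": 0.4, "hhs": 0.6, "hes": 0.7},  # moderate capacity, high exposure
    {"hc": 0.1, "hhs": 0.9, "hes": 0.8},  # low capacity, high sensitivity
]
scores = [resilience_index(h["hc"], h["hhs"], h["hes"]) for h in households]
```

Scores computed this way are the kind of single household-level value that could be joined to geocoded DHS clusters and mapped at different spatial scales.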

Keywords: climate change, resilience, remote sensing, demographic and health surveys

Procedia PDF Downloads 153
828 Effects of Abiotic Stress on the Phytochemical Content and Bioactivity of Pistacia lentiscus L.

Authors: S. Mamoucha, N. Tsafantakis, Α. Ioannidis, S. Chatzipanagiotou, C. Nikolaou, L. Skaltsounis, N. Fokialakis, N. Christodoulakis

Abstract:

Introduction: Plant secondary metabolites (SM) can be grouped into three chemically distinct classes: terpenes, phenolics, and nitrogen-containing compounds. For many years the adaptive significance of SM was unknown; they were thought to be functionless end-products. It is now accepted that many secondary metabolites (also known as natural products) have important ecological roles in plants. For instance, they serve as attractants (odor, color, taste) for pollinators and seed-dispersing animals. Moreover, they protect plants from herbivores, microbial pathogens and environmental stress (high and low temperatures, drought, alkalinity, salinity, radiation, etc.). It is well known that both biotic and abiotic stress often increase the accumulation of SM, and that local climatic conditions, seasonal changes, and external factors such as light, temperature and humidity affect their biosynthesis and composition. A well-known dioecious evergreen plant, Pistacia lentiscus L. (mastic tree), was selected in order to study the metabolic variations that occur in response to different climatic conditions across the seasons and their effect on the biosynthesis of bioactive compounds. Materials and methods: Young and mature leaves were collected in January and July 2014, dried and extracted by accelerated solvent extraction (Dionex ASE™ 350) using solvents of increasing polarity (DCM, MeOH, and H2O). GC-MS and UHPLC-HRMS analyses were carried out in order to determine the nature and relative abundance of SM. Antibacterial activity was evaluated using the agar disc diffusion assay against ATCC and clinical isolate strains: Escherichia coli, Staphylococcus aureus, Pseudomonas aeruginosa, Candida albicans, Streptococcus mutans and Klebsiella pneumoniae. All tests were carried out in duplicate and the average radii of the inhibition zones were calculated for each extract.
Results: According to the phytochemical profile obtained for each extract, the biosynthesis of SM varied both qualitatively and quantitatively under the two different types of seasonal stress. With the exception of the biologically inactive nonpolar DCM extract from July, all extracts inhibited the growth of most of the investigated microorganisms. A clear positive correlation was observed between the relative abundance of SM and the bioactivity of the DCM extracts from January and July; the changes observed during phytochemical analysis mainly concerned the triterpenoid content. On the other hand, the bioactivity of the polar extracts (MeOH and H2O) from January and July remained practically unchanged against most of the microorganisms, despite the significant seasonal variation in their SM content. Conclusion: Our results clearly confirmed the hypothesis that abiotic stress is an important regulating factor that significantly affects the biosynthesis of secondary metabolites and thus the presence of bioactive compounds. Acknowledgment: This work was supported by IKY - State Scholarship Foundation, Athens, Greece.

Keywords: antibacterial screening, phytochemical profile, Pistacia lentiscus, abiotic stress

Procedia PDF Downloads 238
827 Lessons Learned from a Chronic Care Behavior Change Program: Outcome to Make Physical Activity a Habit

Authors: Doaa Alhaboby

Abstract:

Behavior change is a complex process that often requires ongoing support and guidance. Telecoaching programs have emerged as effective tools for facilitating behavior change by providing personalized support remotely. This abstract explores the lessons learned from a randomized controlled trial (RCT) evaluation of a telecoaching program focused on behavior change for people with diabetes and discusses strategies for implementing these lessons to overcome the challenge of making physical activity a habit. The telecoaching program involved participants engaging in regular coaching sessions delivered via phone calls. These sessions aimed to address various aspects of behavior change, including goal setting, self-monitoring, problem-solving, and social support. Over the course of the program, participants received personalized guidance tailored to their unique needs and preferences. One of the key lessons learned from the RCT was the importance of engagement, readiness to change, and the use of technology. Participants who set specific, measurable, attainable, relevant, and time-bound (SMART) goals were more likely to make sustained progress toward behavior change. Additionally, regular self-monitoring of behavior and progress was found to be instrumental in promoting accountability and motivation. Moving forward, implementing the lessons learned from the RCT can help individuals overcome the hardest part of behavior change: making physical activity a habit. One strategy is to prioritize consistency and establish a regular routine for physical activity. This may involve scheduling workouts at the same time each day or week and treating them as non-negotiable appointments. Integrating physical activity into daily life routines, while anticipating the main challenges that can derail that integration, can also help make it more habitual.
Furthermore, leveraging technology and digital tools can enhance adherence to physical activity goals. Mobile apps, wearable activity trackers, and online fitness communities can provide ongoing support, motivation, and accountability. These tools can also facilitate self-monitoring of behavior and progress, allowing individuals to track their activity levels and adjust their goals as needed. In conclusion, telecoaching programs offer valuable insights into behavior change and provide strategies for overcoming challenges, such as making physical activity a habit. By applying the lessons learned from these programs and incorporating them into daily life, individuals can cultivate sustainable habits that support their long-term health and well-being.

Keywords: lifestyle, behavior change, physical activity, chronic conditions

Procedia PDF Downloads 46
826 On the Question of Ideology: Criticism of the Enlightenment Approach and Theory of Ideology as Objective Force in Gramsci and Althusser

Authors: Edoardo Schinco

Abstract:

Studying the Marxist intellectual tradition, it is possible to identify numerous cases of philosophical regression, in which the important achievements of detailed studies have been replaced by naïve ideas and earlier misunderstandings: one of the most important examples of this tendency concerns the question of ideology. According to a common Enlightenment approach, ideology is essentially not a reality, i.e., not a factor capable of having an effect on reality itself; in other words, ideology is a mere error without specific historical meaning, due only to the ignorance or inability of subjects to understand the truth. From this point of view, the consequent and immediate practice against every form of ideology is rational dialogue, reasoning based on common sense, in order to dispel the obscurity of ignorance through the light of pure reason. The limits of this philosophical orientation are, however, both theoretical and practical: on the one hand, the Enlightenment criticism of ideology is not a historicist thought, since it cannot grasp the inner connection that ties a historical context and its peculiar ideology together; on the other hand, when the Enlightenment approach fails to release people from their illusions (e.g., when the ideology persists despite the explanation of its illusoriness), it usually becomes a racist or elitist thought. Unlike this first conception of ideology, Gramsci attempts to recover Marx’s original thought and to valorize its dialectical methodology with respect to the reality of ideology. As Marx suggests, ideology – in the negative sense – is surely an error, a misleading form of knowledge, which aims to defend the current state of things and to conceal social, political or moral contradictions; but that is precisely why the ideological error is not accidental: every ideology is mediately rooted in a particular material context, from which it takes its reason for being.
Gramsci, however, avoids any mechanistic interpretation of Marx and, for this reason, underlines the dialectical relation that exists between the material base and the ideological superstructure; in this way, a specific ideology is not only a passive product of the base but also an active factor that reacts upon the base itself and modifies it. There is therefore a considerable revaluation of ideology’s role in the maintenance of the status quo, and the consequent thematization both of ideology as an objective force, active in history, and of ideology as the cultural hegemony of the ruling class over subordinate groups. Among the Marxists, the French philosopher Louis Althusser also contributed to this crucial question: developing Gramsci’s thought, he elaborates the idea of ideology as an objective force through the notions of the Repressive State Apparatus (RSA) and the Ideological State Apparatuses (ISA). In addition, his philosophy is characterized by the presence of structuralist elements, which must be studied, since they deeply change the theoretical foundation of his Marxist thought.

Keywords: Althusser, enlightenment, Gramsci, ideology

Procedia PDF Downloads 183
825 The Process of Irony Comprehension in Young Children: Evidence from Monolingual and Bilingual Preschoolers

Authors: Natalia Banasik

Abstract:

Comprehension of verbal irony is an example of pragmatic competence in understanding figurative language. Knowing how it develops may shed new light on the social and communicative competence that is crucial for effective functioning in society. Researchers agree that it is a competence that develops late in a child’s development. One ability that seems crucial for irony comprehension is theory of mind (ToM), that is, the ability to understand that others may have beliefs, desires and intentions different from one’s own. Although both theory of mind and irony comprehension require the ability to understand the figurative use of a false description of reality, the exact relationship between them is still unknown. Also, even though irony comprehension in children has been studied for over thirty years, the results of these studies are inconsistent as to the age at which this competence is acquired. The presented study aimed to answer questions about the developmental trajectories of irony comprehension and of ascribing functions to ironic utterances by preschool children. Specifically, we were interested in how irony comprehension is related to the development of ToM and how comprehension of the function of irony changes with age. Data were collected from over 150 monolingual Polish-speaking children and (so far) thirty bilingual children speaking Polish and English who live in the US. Four-, five- and six-year-olds were presented with a story comprehension task in the form of audio and visual stimuli programmed in the E-Prime software (pre-recorded narrated stories, some of which included ironic utterances, and accompanying pictures displayed on a touch screen). Following the presentation, the children were asked to answer a series of questions. The questions tested the children’s understanding of the intended meaning of the utterance, their evaluation of how funny it was, and their evaluation of how nice the speaker was.
The children responded by touching the screen, which made it possible to measure reaction times. Additionally, the children were asked to explain why the speaker had uttered the ironic statement. Both quantitative and qualitative analyses were applied. The results of our study indicate a significant difference in irony recognition among the three age groups, but what is new is that children as young as four do understand the real meaning behind an ironic statement as long as the utterance is not grammatically or lexically complex; there is also a clear correlation between ToM and irony comprehension. Although four-year-olds and six-year-olds both understand the real meaning of an ironic utterance, it is not until the age of six that children start to explain the reason for using this marked form of expression. They talk about the speaker's intention to tell a joke, to be funny, or to protect the listener's emotions. There are also some metalinguistic references, such as "mommy sometimes says things that don't make sense and this is called a metaphor".

Keywords: child's pragmatics, figurative speech, irony comprehension in children, theory of mind and irony

Procedia PDF Downloads 300
824 Strategic Entrepreneurship: Model Proposal for Post-Troika Sustainable Cultural Organizations

Authors: Maria Inês Pinho

Abstract:

Recent literature on Cultural Management (also called strategic management for cultural organizations) systematically seeks models that allow cultural facilities to adapt to the constant change occurring in contemporary societies. In the last decade, the world, and Europe in particular, has experienced a serious financial crisis that triggered defensive mechanisms, both in the direction of balancing public accounts and in the sense of an anonymous loss of the democratic and cultural values of each nation. In the first case, the Troika emerged and imposed deep cuts in funding for Culture, strongly affecting cultural organizations; in the second, ordinary citizens can be seen fighting against the closure of cultural facilities. Despite this, cultural managers argue that there is no single formula capable of addressing the need to adapt to change. Rather, it is up to each manager to know the existing scientific models and to adapt them as well as possible to the reality of the institution he or she coordinates. These actions, as a rule, are concerned with performance vis-à-vis external audiences or with the financial sustainability of cultural organizations. They forget, therefore, that none of this machinery can function without the internal public, without Human Resources. The employees of a cultural organization must therefore have an entrepreneurial posture - they must be intrapreneurial. This paper intends to break with this form of action and lead the cultural manager to understand that his or her role should be to create value for society through good organizational performance. This is only possible with a posture of strategic entrepreneurship; in other words, with a link between Cultural Management, Cultural Entrepreneurship and Cultural Intrapreneurship.
In order to test this assumption, a case study methodology was used, taking a symbol of the European Capital of Culture (Casa da Música) as the case, together with qualitative and quantitative techniques. The qualitative techniques included in-depth interviews with managers, founders and patrons, and focus groups with members of the public with and without experience in managing cultural facilities. The quantitative techniques involved a questionnaire administered to middle management and employees of Casa da Música. After triangulation of the data, it was shown that the contemporary management of cultural organizations must implement, among its practices, the concept of Strategic Entrepreneurship and its variables. The topics characterizing the notion of Cultural Intrapreneurship (job satisfaction, quality of organizational performance, leadership, and employee engagement and autonomy) also emerged. The findings show that, to be sustainable, a cultural organization should meet the concerns of both its external and internal publics. In other words, it should display an attitude of citizenship toward its communities, visible in social responsibility and participatory management, which is only possible with the implementation of the concept of Strategic Entrepreneurship and its variable of Cultural Intrapreneurship.

Keywords: cultural entrepreneurship, cultural intrapreneurship, cultural organizations, strategic management

Procedia PDF Downloads 169
823 The French Ekang Ethnographic Dictionary. The Quantum Approach

Authors: Henda Gnakate Biba, Ndassa Mouafon Issa

Abstract:

Dictionaries modeled on the Western pattern of tonic-accent languages are not suitable for tonal languages, because they do not account for them phonologically; this is why this ethnographic (prosodic and phonological) dictionary was designed. It is a glossary that expresses the tones and the rhythm of words. It reproduces exactly the speaking or singing of a tonal language, allowing a non-speaker of the language to pronounce words as if they were a native. It is, in short, a dictionary adapted to tonal languages. It was built from ethnomusicological theorems and phonological processes, following Jean-Jacques Rousseau's 1776 hypothesis that to say and to sing were once the same thing. Each word in the French dictionary is matched with its corresponding word in Ekang, and each Ekang word is written on a musical staff. This ethnographic dictionary is also an inventive, original and innovative research thesis: a contribution to the theoretical, musicological, ethnomusicological and linguistic conceptualization of languages, giving rise to interlocution between the social and cognitive sciences, the activities of artistic creation, and the question of modeling in the human sciences: mathematics, computer science, automated translation and artificial intelligence. When this theory is applied to any text of a folk song in a tonal language, one not only pieces together the exact melody, rhythm and harmonies of that song, as if one knew it in advance, but also the exact speech of that language. The author believes that the problem of the disappearance of tonal languages and their preservation has thereby been structurally resolved, as well as one of the greatest cultural equations related to the composition and creation of tonal, polytonal and random music.
The experimentation confirming this theorization led to the design of a semi-digital, semi-analog application that translates the tonal languages of Africa (about 2,100 languages) into blues, jazz, world music, polyphonic music, tonal and atonal music, and deterministic and random music. To test the application, the author uses music reading and writing software to collect data extracted from his mother tongue, already modeled on the musical staves saved in the ethnographic (semiotic) dictionary for automatic translation (volume 2 of the book). Translation is done from writing to writing, from writing to speech, and from writing to music. Mode of operation: the user types a structured song text (chorus-verse) on the computer and asks the machine for a melody in blues, jazz, world music, variety, etc. The software runs, offering a choice of harmonies, and the user then selects a melody.

Keywords: music, language, entanglement, science, research

Procedia PDF Downloads 55
822 Assessment of Urban Environmental Noise in Urban Habitat: A Spatial Temporal Study

Authors: Neha Pranav Kolhe, Harithapriya Vijaye, Arushi Kamle

Abstract:

Urban regions are the engines of economic growth. As the economy expands, so does the need for peace and quiet, and noise pollution has become one of the important social and environmental issues. Environmental noise pollution puts health and wellbeing at risk. Because of urbanisation, population growth, and the consequent rise in the use of increasingly powerful, diverse, and highly mobile sources of noise, it is now more severe and pervasive than ever before, and it will only worsen as air, rail, and road traffic, the main contributors to noise pollution, continue to increase. The current study was conducted in two zones of a class I city of central India (population range: 1 million to 4 million). A total of 56 measuring points were chosen to assess noise pollution. The first objective evaluates noise pollution in different urban habitats, classified as formal and informal settlements, and compares noise pollution between the two settlement types using a t-test. The second objective assesses noise pollution in silence zones (as defined by the Central Pollution Control Board) in a hierarchical way; it also assesses noise pollution in the settlements and compares it with the prescribed permissible limits, using class I sound level equipment. As appropriate indices, the A-weighted equivalent noise level, the minimum sound pressure level and the maximum sound pressure level were computed. The survey was conducted over a period of one week. ArcGIS was used to plot and map the temporal and spatial variability of noise in the urban settings. It was found that noise levels at most stations, particularly at heavily trafficked crossroads, squares and subway stations, differed significantly from, and exceeded, the acceptable limits. The study highlights the vulnerable areas that should be considered in city planning and argues for area-level planning when preparing a development plan.
It also demands attention to noise pollution from the perspective of residential and silence zones. City planning in urban areas neglects noise pollution assessment at the city level, with the result that, irrespective of noise pollution guidelines, the ground reality is far removed from their application. The outcome is land use that is incompatible, at the neighbourhood scale, with noise pollution norms. The study's final results will be useful to policymakers, architects and administrators in developing countries, supporting the governance of noise pollution in urban habitats through efficient decision-making and policy formulation.
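The equivalent noise level used above is an energy average, not an arithmetic average of the decibel readings, since decibels are logarithmic. A minimal sketch of how the three indices can be computed from a series of readings follows; the readings and the measurement context are hypothetical, not values from the study.

```python
import math

def leq(levels_db):
    """Equivalent continuous sound level: the energy average of a set
    of decibel readings, Leq = 10 * log10(mean(10 ** (Li / 10)))."""
    energies = [10 ** (l / 10.0) for l in levels_db]
    return 10.0 * math.log10(sum(energies) / len(energies))

# Hypothetical A-weighted readings (dBA) at a busy crossroads
readings = [72.4, 75.1, 69.8, 81.3, 74.0, 70.6]

l_eq  = leq(readings)       # equivalent noise level
l_min = min(readings)       # minimum sound pressure level
l_max = max(readings)       # maximum sound pressure level
```

Because of the energy average, the result is pulled toward the loudest events: a single 81 dBA spike outweighs several quieter readings, which is why Leq always lies above the plain arithmetic mean of a varying series.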

Keywords: noise pollution, formal settlements, informal settlements, built environment, silent zone, residential area

Procedia PDF Downloads 111
821 The Effect of Group Counseling on the Victimhood Perceptions of Adolescent Who Are the Subject of Peer Victimization and on Their Coping Strategies

Authors: İsmail Seçer, Taştan Seçer

Abstract:

In this study, the effect of group counseling on the victimhood perceptions of primary school 7th and 8th grade students who were found to be the subject of peer victimization, and on their strategies for coping with it, was analyzed. The research design is the Solomon four-group experimental design. In this design there are four groups determined by random sampling: two serve as experimental groups and the other two as control groups. The Solomon design is a true experimental design; in true experimental designs there are multiple groups of subjects with similar characteristics, and the subjects are selected by random sampling. For this purpose, 230 students from Kültür Kurumu Primary School in Erzurum were asked to complete the Adolescent Peer Victim Form. The 100 students with the highest victimization scores, who were determined to be subject to bullying, were interviewed face to face, informed about the study, and asked whether they were willing to participate. As a result of these interviews, 60 students were selected for the experimental study, and four groups of 15 were created using simple random sampling. After the groups had been formed, the experimental and control groups were determined by drawing lots. An 11-session group counseling program, prepared by the researcher on the basis of the literature, was then applied to the experimental groups, with the aim of changing the participants' ineffective ways of coping with bullying and their victimhood perceptions. Each session was planned to last 75 minutes and was applied as planned. In the control groups, the counseling activities in the primary school counseling curriculum were applied for 11 weeks.
As a result of the study, the physical, emotional and verbal victimhood perceptions of the participants in the experimental groups decreased significantly compared both to their pre-experimental levels and to those of the control groups. It was also determined that this change in the victimhood perceptions of the experimental groups occurred independently of variables such as gender, age and academic success. The first finding related to coping strategies is that the experimental participants' scores for ineffective strategies such as despair and avoidance decreased significantly compared to the pre-experimental situation and to the control groups. The second finding is that their scores for effective strategies such as seeking help, seeking social support, resistance and optimism increased significantly compared to the pre-experimental situation and to the control groups. Based on the evidence obtained in the study, it can be said that group counseling is an effective approach for changing both the victimhood perceptions of individuals who are subjected to bullying and their strategies for coping with it.
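The ANCOVA named in the keywords can be illustrated with a toy model: regress the posttest victimization score on a treatment dummy while controlling for the pretest score, so that the group coefficient is the counseling effect adjusted for baseline differences. Everything below is synthetic and purely illustrative, including the assumed 12-point reduction; these are not the study's data or its actual effect size.

```python
import random

random.seed(7)

# Synthetic pretest victimization scores for two groups of 15 students
# (the group size used in the study); all numbers here are invented.
pre_exp  = [random.gauss(60, 8) for _ in range(15)]
pre_ctrl = [random.gauss(60, 8) for _ in range(15)]

# Posttest = pretest carried forward plus noise, minus an assumed
# 12-point reduction for the counseled (experimental) group.
TRUE_EFFECT = -12.0
post_exp  = [p + TRUE_EFFECT + random.gauss(0, 2) for p in pre_exp]
post_ctrl = [p + random.gauss(0, 2) for p in pre_ctrl]

def ancova_group_effect(pre_a, post_a, pre_b, post_b):
    """Fit post = b0 + b1*group + b2*pre by least squares and return
    (b1, b2): the baseline-adjusted group effect and covariate slope."""
    rows = [(1.0, 1.0, p) for p in pre_a] + [(1.0, 0.0, p) for p in pre_b]
    y = list(post_a) + list(post_b)
    # Normal equations (X'X) b = X'y, solved by Cramer's rule (3x3).
    A = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    c = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(3)]
    def det3(M):
        return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
                - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
                + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))
    d = det3(A)
    b = []
    for i in range(3):
        Ai = [row[:] for row in A]
        for r in range(3):
            Ai[r][i] = c[r]
        b.append(det3(Ai) / d)
    return b[1], b[2]

group_effect, pre_slope = ancova_group_effect(pre_exp, post_exp,
                                              pre_ctrl, post_ctrl)
```

Adjusting for the pretest covariate is what lets the Solomon design separate the counseling effect from baseline differences between groups; in the full design, the two groups measured without a pretest additionally test whether the pretest itself sensitized the participants.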

Keywords: bullying, perception of victimization, coping strategies, ancova analysis

Procedia PDF Downloads 378