Search results for: ICT application for SMEs
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8470

220 Healing (in) Relationship: The Theory and Practice of Inner-Outer Peacebuilding in North-Western India

Authors: Josie Gardner

Abstract:

The overall intention of this research is to reimagine peacebuilding in both theory and practical application in light of the shortcomings and unsustainability of the current peacebuilding paradigm. These limitations are identified here as an overly rational-material approach to peacebuilding that neglects the inner dimension of peace in favour of a fragmented rather than holistic model, and that espouses a conflict- and violence-centric approach to peacebuilding. In response, this presentation investigates the dynamics of inner and outer peace as a holistic, complex system, working towards ‘inner-outer’ peacebuilding. This paper draws from primary research in the protracted conflict context of north-western India (Jammu, Kashmir & Ladakh) as a case study. The presentation has two central aims: first, to introduce the process of inner (psycho-spiritual) peacebuilding, which has thus far been neglected by mainstream and orthodox literature; second, to examine why inner peacebuilding is essential for realising sustainable peace on a broader scale as outer (socio-political) peace, and to better understand how the inner and outer dynamics of peace relate to and affect one another. To these ends, Josephine (the researcher/author/presenter) partnered with the Yakjah Reconciliation and Development Network to implement a series of action-oriented workshops and retreats centred around healing, reconciliation, leadership, and personal development, for the dual purpose of collaboratively generating data, theory, and insights, as well as providing the youth leaders with an experiential, transformative experience. The research team created and used a novel methodological approach called Mapping Ritual Ecologies, which draws from Participatory Action Research and Digital Ethnography to form a collaborative research model with a group of 20 youth co-researchers who are emerging youth peace leaders in Kashmir, Jammu, and Ladakh. 
This research found significant intra- and inter-personal shifts towards an experience of inner peace through inner peacebuilding activities. Moreover, this process of inner peacebuilding affected the participants' families and communities through interpersonal healing and peace leadership in an inside-out process of change. These findings have generated rich insights and have supported emerging theories about the dynamics between inner and outer peace, power, justice, and collective healing. This presentation argues that the largely neglected dimension of inner (psycho-spiritual) peacebuilding is imperative for broader socio-political (outer) change. Changing structures of oppression, injustice, and violence, i.e. structures of separation, requires individual, interpersonal, and collective healing. While this presentation primarily examines and advocates for inside-out peacebuilding and social justice, it will also touch upon the effect of systems of separation on the inner condition and human experience. This research reimagines peacebuilding as a holistic inner-outer approach, offering an alternative path forward that weaves together self-actualisation and social justice. While contextualised within north-western India with a small case study population, the findings also speak to other conflict contexts as well as our global peacebuilding and social justice milieu.

Keywords: holistic, inner peacebuilding, psycho-spiritual, systems, youth

Procedia PDF Downloads 120
219 Application of Harris Hawks Optimization Metaheuristic Algorithm and Random Forest Machine Learning Method for Long-Term Production Scheduling Problem under Uncertainty in Open-Pit Mines

Authors: Kamyar Tolouei, Ehsan Moosavi

Abstract:

In open-pit mines, the long-term production scheduling optimization problem (LTPSOP) is a complicated problem involving constraints, large datasets, and uncertainties. Uncertainty in the output is caused by several geological, economic, or technical factors. Due to its dimensions and NP-hard nature, it is usually difficult to find an ideal solution to the LTPSOP. The optimal schedule generally constrains the ore, metal, and waste tonnages, average grades, and cash flows of each period. Past decades have witnessed important developments in long-term production scheduling and optimization algorithms, as researchers have become highly cognizant of the issue. Nevertheless, the LTPSOP cannot yet be considered a well-solved problem. Traditional production scheduling methods in open-pit mines apply a single estimated orebody model to produce optimal schedules. The smoothing effect of some geostatistical estimation procedures causes most mine schedules and production predictions to be unrealistic and imperfect. With the expansion of simulation procedures, the risks from grade uncertainty in ore reserves can be evaluated and organized through a set of equally probable orebody realizations. In this paper, to incorporate grade uncertainty into the strategic mine schedule, a stochastic integer programming framework is presented for the LTPSOP. The objective function of the model is to maximize the net present value and simultaneously minimize the risk of deviation from the production targets under grade uncertainty, while satisfying all technical constraints and operational requirements. Instead of applying one estimated orebody model as input to optimize the production schedule, a set of equally probable orebody realizations is applied to incorporate grade uncertainty into the strategic mine schedule and to produce a more profitable and risk-aware production schedule. 
A mixture of metaheuristic procedures and mathematical methods paves the way to an appropriate solution. This paper introduces a hybrid model combining the augmented Lagrangian relaxation (ALR) method with a metaheuristic algorithm, Harris Hawks optimization (HHO), to solve the LTPSOP under grade uncertainty. In this study, the HHO is employed to update the Lagrange multipliers. In addition, a machine learning method, Random Forest, is applied to estimate the gold grade in a mineral deposit. The Monte Carlo method is used as the simulation method, with 20 realizations. The results indicate that the proposed versions considerably outperform the traditional methods. The outcomes were also compared with the ALR-genetic algorithm and ALR-subgradient methods. To demonstrate the applicability of the model, a case study on an open-pit gold mining operation is presented. The framework displays the capability to minimize risk and to improve the expected net present value and financial profitability for the LTPSOP. By considering grade uncertainty within the hybrid model framework, geological risk is controlled more effectively than with the traditional procedure.
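A minimal sketch of the metaheuristic component may help fix ideas. The following is a deliberately simplified Harris Hawks optimization loop (random-perch exploration plus a basic besiege step, omitting the Levy-flight rapid dives of the full algorithm), shown on a toy objective; the population size, iteration count, and bounds are illustrative assumptions, not the authors' settings:

```python
import numpy as np

def hho_minimize(f, dim, n_hawks=20, iters=100, lb=-5.0, ub=5.0, seed=0):
    """Simplified Harris Hawks Optimization (exploration + soft besiege only)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n_hawks, dim))        # initial hawk positions
    fit = np.apply_along_axis(f, 1, X)
    best = X[fit.argmin()].copy()                  # "rabbit" = best solution
    best_f = float(fit.min())
    for t in range(iters):
        E1 = 2.0 * (1.0 - t / iters)               # decaying escape-energy envelope
        for i in range(n_hawks):
            E = E1 * (2.0 * rng.random() - 1.0)    # prey escaping energy
            if abs(E) >= 1.0:                      # exploration: perch near a random hawk
                Xr = X[rng.integers(n_hawks)]
                X[i] = Xr - rng.random() * np.abs(Xr - 2.0 * rng.random() * X[i])
            else:                                  # exploitation: besiege the prey
                J = 2.0 * (1.0 - rng.random())     # random jump strength
                X[i] = best - E * np.abs(J * best - X[i])
            X[i] = np.clip(X[i], lb, ub)
            fi = float(f(X[i]))
            if fi < best_f:
                best_f, best = fi, X[i].copy()
    return best, best_f

# toy usage: minimize the sphere function in 3 dimensions
x, fx = hho_minimize(lambda v: float(np.sum(v**2)), dim=3)
```

In the paper's hybrid scheme, the objective evaluated here would correspond to the augmented Lagrangian dual function, with the hawks' positions playing the role of the Lagrange multipliers being updated.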

Keywords: grade uncertainty, metaheuristic algorithms, open-pit mine, production scheduling optimization

Procedia PDF Downloads 105
218 Teaching Linguistic Humour Research Theories: Egyptian Higher Education EFL Literature Classes

Authors: O. F. Elkommos

Abstract:

“Humour studies” is a relatively recent interdisciplinary research area. It interests researchers from the disciplines of psychology, sociology, medicine, nursing, the workplace, and gender studies, among others, and certainly teaching, language learning, linguistics, and literature. Linguistic theories of humour research are numerous, some of which are of interest to the present study. In spite of the fact that humour courses are now taught in universities around the world, in the Egyptian context they are not included. The purpose of the present study is two-fold: to review the state of the art and to show how linguistic theories of humour can be used as an art and craft of teaching and of learning in EFL literature classes. In the present study, linguistic theories of humour were applied to selected literary texts to interpret humour as an intrinsic artistic communicative competence challenge. Humour in the area of linguistics was seen as a fifth component of the communicative competence of the second language learner. In literature it was studied as satire, irony, wit, or comedy. Linguistic theories of humour now describe its linguistic structure, mechanism, function, and linguistic deviance. The Semantic Script Theory of Verbal Humor (SSTH), the General Theory of Verbal Humor (GTVH), the Audience Based Theory of Humor (ABTH), their extensions and subcategories, as well as the pragmatic perspective, were employed in the analyses. This research analysed the linguistic semantic structure of humour, its mechanism, and how the audience reader (teacher or learner) becomes an interactive interpreter of the humour. This promotes humour competence together with linguistic, social, cultural, and discourse communicative competence. Studying humour as part of the literary texts and the perception of its function in the work also brings its positive association in class for educational purposes. Humour is by default a provoking, laughter-generating device. 
The recognition and perception of incongruity, and its resolution, is a cognitive mastery. This cognitive process involves a humour experience that lightens up the classroom and the mind. It establishes connections necessary for the learning process. In this context, the study examined selected narratives to exemplify the application of the theories. It is, therefore, recommended that the theories be taught and applied to literary texts for a better understanding of the language. Students will then develop their language competence. Teachers in EFL/ESL classes will teach the theories, assist students in applying them to interpret texts, and in the process will also use humour themselves. This will ease students' acquisition of the second language, making the classroom an enjoyable, cheerful, self-assuring, and self-illuminating experience for both teachers and students. It is further recommended that courses in humour research studies become an integral part of higher education curricula in Egypt.

Keywords: ABTH, deviance, disjuncture, episodic, GTVH, humour competence, humour comprehension, humour in the classroom, humour in the literary texts, humour research linguistic theories, incongruity-resolution, isotopy-disjunction, jab line, longer text joke, narrative story line (macro-micro), punch line, six knowledge resources, SSTH, stacks, strands, teaching linguistics, teaching literature, TEFL, TESL

Procedia PDF Downloads 302
217 Luminescent Properties of Plastic Scintillator with Large Area Photonic Crystal Prepared by a Combination of Nanoimprint Lithography and Atomic Layer Deposition

Authors: Jinlu Ruan, Liang Chen, Bo Liu, Xiaoping Ouyang, Zhichao Zhu, Zhongbing Zhang, Shiyi He, Mengxuan Xu

Abstract:

Plastic scintillators play an important role in the measurement of mixed neutron/gamma pulsed radiation, in neutron radiography, and in pulse shape discrimination technology. For such applications, it is desirable that the photons produced by interactions between a plastic scintillator and radiation be detected as completely as possible by the photoelectric detectors, and that more photons be emitted from the scintillator along the specific direction where the detectors are located. Unfortunately, a majority of the photons produced are trapped in the plastic scintillator due to total internal reflection (TIR), because a significant light-trapping effect occurs when the incident angle of the internal scintillation light is larger than the critical angle. Some of the photons trapped in the scintillator may be absorbed by the scintillator itself, and the others are emitted from its edges. This makes the light extraction efficiency of plastic scintillators very low. Moreover, only a small portion of the photons emitted from the scintillator can be detected effectively, because the distribution of their emission directions exhibits an approximately Lambertian angular profile following a cosine emission law. Therefore, enhancing the light extraction efficiency and adjusting the emission angular profile are the keys to increasing the number of photons detected by the detectors. In recent years, photonic crystal structures have been applied to inorganic scintillators to successfully enhance the light extraction efficiency and adjust the angular profile of the scintillation light. 
However, because preparation methods for photonic crystals can degrade the performance of plastic scintillators or even destroy them, investigations of preparation methods of photonic crystals for plastic scintillators, and of the luminescent properties of plastic scintillators with photonic crystal structures, remain inadequate. Although we have successfully fabricated photonic crystal structures on the surface of plastic scintillators by a modified self-assembly technique and achieved a great enhancement of the light extraction efficiency without evident angular dependence in the angular profile of the scintillation light, the preparation of large-area photonic crystals (diameter larger than 6 cm) with a perfect periodic structure is still difficult. In this paper, large-area photonic crystals were first prepared on the surface of scintillators by nanoimprint lithography, and a conformal layer of a high-refractive-index material was then deposited on the photonic crystal by atomic layer deposition, in order to enhance the stability of the photonic crystal structures and increase the number of leaky modes for improving the light extraction efficiency. The luminescent properties of the plastic scintillator with photonic crystals prepared by this method are compared with those of a plastic scintillator without a photonic crystal. The results indicate that the number of photons detected by the detectors is increased by the enhanced light extraction efficiency, and that the angular profile of the scintillation light exhibits evident angular dependence for the scintillator with photonic crystals. This preparation of photonic crystals is beneficial to scintillation detection applications and lays an important technical foundation for plastic scintillators to meet special requirements under different application backgrounds.
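The scale of the light-trapping problem can be illustrated with a short calculation. For a smooth flat surface, only photons emitted inside the escape cone defined by the critical angle leave through that face. The sketch below is a single-pass estimate that ignores Fresnel losses, and the refractive index of 1.58 is a typical assumed value for plastic scintillators, not a figure from this work:

```python
import math

def escape_cone_fraction(n_scint: float, n_out: float = 1.0) -> float:
    """Fraction of isotropically emitted photons lying inside the escape
    cone of one flat surface (single pass, Fresnel losses ignored)."""
    theta_c = math.asin(n_out / n_scint)      # critical angle for TIR
    return (1.0 - math.cos(theta_c)) / 2.0    # solid-angle fraction of the cone

# assumed typical plastic-scintillator refractive index ~1.58 (scintillator to air)
frac = escape_cone_fraction(1.58)
```

For n of about 1.58 this gives roughly 11% per face, which is why structures that add leaky modes, such as the photonic crystals discussed here, can raise the extraction efficiency substantially.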

Keywords: angular profile, atomic layer deposition, light extraction efficiency, plastic scintillator, photonic crystal

Procedia PDF Downloads 200
216 Mixed-Methods Analyses of Subjective Strategies of Most Unlikely but Successful Transitions from Social Benefits to Work

Authors: Hirseland Andreas, Kerschbaumer Lukas

Abstract:

In the case of Germany, there are about one million long-term unemployed, a figure that has not varied much over the past years. These long-term unemployed did not benefit from the prospering labor market, while most short-term unemployed did. Instead, they are continuously dependent on welfare and sometimes precarious short-term employment, experiencing in-work poverty. Long-term unemployment thus becomes a main obstacle to becoming employed again, especially if it is accompanied by other impediments such as low-level education (school/vocational), poor health (especially chronic illness), advanced age (older than fifty), immigrant status, motherhood, or engagement in care for other relatives. As this research project shows, in these cases the chance of regaining employment decreases to near nil. Almost two-thirds of all welfare recipients have multiple impediments that hinder a successful transition from welfare back to sustainable and sufficient employment. Prospective employers are unlikely to hire long-term unemployed people with additional impediments because they evaluate potential employees on negative signals (e.g. low-level education) and the implicit assumption of unproductiveness (e.g. poor health, age). Some findings of the panel survey “Labor market and social security” (PASS), carried out by the Institute for Employment Research (the research institute of the German Federal Labor Agency), offer a ray of hope, showing that unlikely does not necessarily mean impossible. The presentation reports on current research into these very scarce “success stories” of unlikely transitions from long-term unemployment to work, and how these cases were able to perform this switch against all odds. The study is based on a mixed-methods design. Within the panel survey (~15,000 respondents in ~10,000 households), only 66 cases of such unlikely transitions were observed. 
These cases were explored by qualitative inquiry: in-depth interviews and qualitative network techniques. There is strong evidence that sustainable transitions are influenced by certain biographical resources, such as habits of network use, a set of informal skills, and particularly a resilient way of dealing with obstacles, combined with contextual factors, rather than by the job-placement procedures promoted by job centers according to activation rules or by following formal paths of application. On the employers' side, small and medium-sized enterprises are often found to give job opportunities to a wider variety of applicants, often on the basis of a slowly but steadily growing relationship leading to employment. According to these results, it is possible to show and discuss some limitations of (German) activation policies targeting the labor market and their impact on welfare dependency and long-term unemployment. Based on these findings, more supportive small-scale measures in the field of labor-market policy are suggested to help long-term unemployed people with multiple impediments overcome their situation (e.g. organizing small-scale structures and low-threshold services to encounter possible employers on a more informal basis, such as “meet and greet” events).

Keywords: against-all-odds, mixed-methods, welfare state, long-term unemployment

Procedia PDF Downloads 361
215 A Critical Analysis of How the Role of the Imam Can Best Meet the Changing Social, Cultural, and Faith-Based Needs of Muslim Families in 21st Century Britain

Authors: Christine Hough, Eddie Abbott-Halpin, Tariq Mahmood, Jessica Giles

Abstract:

This paper draws together the findings from two research studies, each undertaken with cohorts of South Asian Muslim respondents located in the North of England between 2017 and 2019. The first study, entitled Faith Family and Crime (FFC), investigated the extent to which a Muslim family’s social and health well-being is affected by a family member’s involvement in the Criminal Justice System (CJS). This study captured a range of data through a detailed questionnaire and structured interviews. The data from the interview transcripts were analysed using open coding and an application of aspects of the grounded theory approach. The findings provide clear evidence that the respondents were neither well informed nor supported throughout the processes of the CJS, from arrest to post-sentencing. These experiences gave rise to mental and physical stress, potentially unfair sentencing, and a significant breakdown in communication within the respondents’ families. They serve to highlight a particular aspect of complexity in the current needs of those South Asian Muslim families who find themselves involved in the CJS, one that is closely connected to family structure, culture, and faith. The second study, referred to throughout this paper as #ImamsBritain (and providing the majority of the content for this paper), explores how Imams, in their role as community faith leaders, can best address the complex and changing needs of South Asian Muslim families, such as those that emerged in the findings from FFC. The changing socio-economic and political climates of the last thirty or so years have brought about significant changes to the lives of Muslim families, and these have created more complex levels of social, cultural, and faith-based needs for families and individuals. As a consequence, Imams now have much greater demands made of them, and their role has undergone far-reaching changes in response. 
The #ImamsBritain respondents identified a pressing need to develop a wider range of pastoral and counseling skills, which they saw as extending far beyond the traditional role of the Imam as a religious teacher and spiritual guide. The #ImamsBritain project was conducted with a cohort of British Imams in the North of England. Data were collected first through a questionnaire relating to the respondents’ training and development needs, and then analysed using the Delphi approach, through which the data were scrutinized in depth using interpretative content analysis. The findings from this project reflect the respondents’ individual perceptions of the kind of training and development they need to fulfill their role in 21st-century Britain. They also provide a unique framework for constructing a professional guide for Imams in Great Britain. The discussions and critical analyses in this paper draw on the discourses of professionalization and pastoral care, as well as relevant reports and reviews on Imam training in Europe and Canada.

Keywords: criminal justice system, faith and culture, Imams, Muslim community leadership, professionalization, South Asian family structure

Procedia PDF Downloads 138
214 Ethanolamine Detection with Composite Films

Authors: S. A. Krutovertsev, A. E. Tarasova, L. S. Krutovertseva, O. M. Ivanova

Abstract:

The aim of this work was to obtain stable sensitive films with good sensitivity to ethanolamine (C2H7NO) in air. Ethanolamine is used as an adsorbent in various gas purification and separation processes, and it also has wide industrial application. Chemical sensors of the sorption type are widely used for gas analysis. Their behavior is determined by the sensing characteristics of the sensitive sorption layer. The forming conditions and characteristics of chemical gas sensors based on nanostructured modified silica films activated by different admixtures have been studied. Molybdenum-containing polyoxometalates of the eighteen series were incorporated into the silica films as additives. The method of hydrolytic polycondensation from tetraethyl orthosilicate solutions was used to form such films in this work. The method’s advantage is the possibility of introducing active additives directly into the initial solution. This method makes it possible to obtain sensitive thin films with a high specific surface area at room temperature. Their particular properties make polyoxometalates attractive as active additives for the formation of gas-sensitive films. As catalysts of different redox processes, they can either accelerate the reaction of the matrix with the analyzed gas or interact with it directly, which results in changes in the matrix’s electrical properties. Polyoxometalate-based films were deposited on test structures manufactured by planar microelectronic technology with interdigitated electrodes. Modified silica films were deposited by a casting method from solutions based on tetraethyl orthosilicate and polyoxometalates, with the polyoxometalates incorporated directly into the initial solutions. Composite nanostructured films were deposited by the drop-casting method on test structures with a pair of interdigitated metal electrodes formed on their surface. The sensor’s active area was 4.0 x 4.0 mm, and the electrode gap was 0.08 mm. 
The morphology of the layer surfaces was studied with a Solver-P47 scanning probe microscope (NT-MDT, Russia), and the infrared spectra were investigated with a Bruker EQUINOX 55 (Germany). The film-formation conditions were varied during the tests. Electrical parameters of the sensors were measured electronically in real-time mode. The films had a highly developed surface, with a specific surface area of 450 m²/g, and nanoscale pores; their thickness was 0.2-0.3 µm. The study shows that environmental conditions markedly affect the sensor characteristics, which can be improved by the right choice of forming and processing procedure. The addition of polyoxometalate to the silica film resulted in stabilization of the film mass and markedly changed its electrophysical characteristics. The presence of Mn3P2Mo18O62 in the silica film resulted in good sensitivity and selectivity to ethanolamine. The sensitivity maximum was observed at a doping-additive weight content in the range of 30-50% in the matrix. As the ethanolamine concentration changed from 0 to 100 ppm, the films’ conductivity increased 10-12 times. The increase in the sensor’s sensitivity is attributed to a complexing reaction of the tested substance with the cationic part of the polyoxometalate. This results in an intramolecular redox reaction that sharply changes the electrophysical properties of the polyoxometalate. This process is reversible and takes place at room temperature.
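The reported conductivity change can be expressed as a simple conductometric response ratio. The sketch below is a minimal illustration of that figure of merit; the conductance values are hypothetical numbers chosen only to fall in the reported 10-12x range, not measurements from this work:

```python
def sensor_response(g_air: float, g_gas: float) -> float:
    """Response of a conductometric gas sensor, expressed as the
    conductance ratio G_gas / G_air relative to the clean-air baseline."""
    if g_air <= 0:
        raise ValueError("baseline conductance must be positive")
    return g_gas / g_air

# illustrative (hypothetical) conductances in siemens at 0 ppm and 100 ppm
r = sensor_response(g_air=2.0e-9, g_gas=2.2e-8)
```

Here r evaluates to 11, i.e. an 11-fold conductivity increase, consistent with the 10-12x figure quoted above.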

Keywords: ethanolamine, gas analysis, polyoxometalate, silica film

Procedia PDF Downloads 210
213 Transparency of Algorithmic Decision-Making: Limits Posed by Intellectual Property Rights

Authors: Olga Kokoulina

Abstract:

Today, algorithms are assuming a leading role in various areas of decision-making. Prompted by a promise of increased economic efficiency and of fueling solutions to pressing societal challenges, algorithmic decision-making is often celebrated as an impartial and constructive substitute for human adjudication. But in the face of this implied objectivity and efficiency, the application of algorithms is also marred by mounting concerns about embedded biases, discrimination, and exclusion. In Europe, vigorous debates on the risks and adverse implications of algorithmic decision-making largely revolve around the potential of data protection laws to tackle some of the related issues. For example, one of the often-cited avenues for mitigating the impact of potentially unfair decision-making practices is the so-called 'right to explanation'. In essence, this right is derived from the provisions of the General Data Protection Regulation (‘GDPR’) ensuring the data subjects' right of access and mandating the obligation of data controllers to provide the relevant information about the existence of automated decision-making and meaningful information about the logic involved. Taking the corresponding rights and obligations in the context of the GDPR's specific provision on automated decision-making, the debates mainly focus on the efficacy and exact scope of the 'right to explanation'. The underlying logic of the argued remedy lies in a transparency imperative: allowing data subjects to acquire as much knowledge as possible about the decision-making process means empowering individuals to take control of their data and take action. In other words, forewarned is forearmed. The related discussions and debates are ongoing, comprehensive, and often heated. However, they are also frequently misguided and isolated: embracing data protection law as the ultimate and sole lens is often not sufficient. 
Mandating the disclosure of the technical specifications of employed algorithms in the name of transparency for, and empowerment of, data subjects potentially encroaches on the interests and rights of IPR holders, i.e., the business entities behind the algorithms. The study aims to push the boundaries of the transparency debate beyond the data protection regime. By systematically analysing legal requirements and current judicial practice, it assesses the limits posed to the transparency requirement and right of access by intellectual property law, namely by copyright and trade secrets. It is asserted that trade secrets, in particular, present an often insurmountable obstacle to realising the potential of the transparency requirement. In reaching that conclusion, the study explores the limits of the protection afforded by the European Trade Secrets Directive and contrasts them with the scope of the respective rights and obligations related to data access and portability enshrined in the GDPR. As shown, the far-reaching scope of protection under trade secrecy is evidenced both through the assessment of its subject matter and through the exceptions from such protection. As a way forward, the study scrutinises several possible legislative solutions, such as a flexible interpretation of the public interest exception in trade secrets law and the introduction of a strict liability regime in cases of non-transparent decision-making.

Keywords: algorithms, public interest, trade secrets, transparency

Procedia PDF Downloads 124
212 Hydrodynamics in Wetlands of Brazilian Savanna: Electrical Tomography and Geoprocessing

Authors: Lucas M. Furlan, Cesar A. Moreira, Jepherson F. Sales, Guilherme T. Bueno, Manuel E. Ferreira, Carla V. S. Coelho, Vania Rosolen

Abstract:

Located in the western part of the State of Minas Gerais, Brazil, the study area consists of a savanna environment, represented by a sedimentary plateau with a soil cover composed of lateritic and hydromorphic soils; in the latter, deferruginization occurs with a concentration of high-alumina clays, exploited as refractory material. In the hydromorphic topographic depressions (wetlands), the hydropedological relationships are little known, but it is observed that in times of rainfall the depressed region behaves like a natural seasonal reservoir, which suggests that the wetlands on the surface of the plateau are places of aquifer recharge. Aquifer recharge areas are extremely important for the sustainable social, economic, and environmental development of societies. The hydrodynamics of the system of ferruginous lateritic and hydromorphic soils in the savanna environment is a subject rarely explored in the literature, especially through the joint application of geoprocessing by UAV (unmanned aerial vehicle) and electrical tomography. The objective of this work is to understand the hydrogeological dynamics in a wetland (with an area of 426.064 m²) in the Brazilian savanna, as well as the subsurface architecture of the hydromorphic depressions in relation to aquifer recharge. The wetland was compartmentalized into three different regions according to the geoprocessing, and hydraulic conductivity studies were performed in each of these three portions. Electrical tomography was performed along nine lines of 80 meters in length spaced 10 meters apart (direction N45), plus one 80-meter line perpendicular to all the others. With these data, it was possible to generate a 3D cube. 
The integrated analysis showed that the area behaves like a natural seasonal reservoir in the months of greatest precipitation (December, 289 mm; January, 277.9 mm; February, 213.2 mm), because the hydraulic conductivity is very low in all areas. For the aerial images, a geotag correction was performed, i.e., the image coordinates were corrected using coordinates from the Precise Point Positioning service of the Brazilian Institute of Geography and Statistics (IBGE-PPP). The orthomosaic and the digital surface model (DSM) were then generated, from which specific geoprocessing yielded the volume of water that the wetland can contain: 780,922 m³ in total, 265,205 m³ in the region with intermediate flooding, and 49,140 m³ in the central region, where the greatest accumulation of water was observed. Through the electrical tomography it was possible to identify that, down to a depth of 6 meters, water infiltrates vertically in the central region. From 8 meters depth, the water encounters a more resistive layer and infiltration begins to occur horizontally, tending to concentrate the aquifer recharge to the northeast and southwest of the wetland. The hydrodynamics of the area is complex and poses many challenges to its understanding. The next step is to relate the hydrodynamics to the evolution of the landscape, with the enrichment of high-alumina clays, and to propose a management model for the seasonal reservoir.
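The DSM-based storage estimate can be sketched as a simple grid calculation: for each cell, the ponded depth is the water level minus the ground elevation, clipped at zero, and the depths are summed over the grid. The code below is a minimal illustration with a made-up 3x3 elevation grid and water level; it is not the actual workflow or data used in this study:

```python
import numpy as np

def ponded_volume(dsm: np.ndarray, water_level: float, cell_area: float) -> float:
    """Water volume (m^3) stored over a DSM grid when filled to water_level (m).
    Depth per cell is water_level minus ground elevation, clipped at zero."""
    depth = np.clip(water_level - dsm, 0.0, None)
    return float(depth.sum() * cell_area)

# toy 3x3 DSM in meters with a central depression, 1 m^2 cells (hypothetical)
dsm = np.array([[2.0, 2.0, 2.0],
                [2.0, 1.0, 2.0],
                [2.0, 2.0, 2.0]])
vol = ponded_volume(dsm, water_level=1.5, cell_area=1.0)
```

Only the central cell lies below the 1.5 m water level, so the stored volume here is 0.5 m³; on a real orthomosaic-derived DSM the same summation runs over millions of cells.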

Keywords: electrical tomography, hydropedology, unmanned aerial vehicle, water resources management

Procedia PDF Downloads 146
211 Magnetic Carriers of Organic Selenium (IV) Compounds: Physicochemical Properties and Possible Applications in Anticancer Therapy

Authors: E. Mosiniewicz-Szablewska, P. Suchocki, P. C. Morais

Abstract:

Despite the significant progress in cancer treatment, there is a need to search for new therapeutic methods in order to minimize side effects. Chemotherapy, the main current method of treating cancer, is non-selective and has a number of limitations. Toxicity to healthy cells is undoubtedly the biggest problem limiting the use of many anticancer drugs. The problem of how to kill cancer without harming the patient can be addressed by using organic selenium (IV) compounds. Organic selenium (IV) compounds are a new class of materials showing strong anticancer activity. They are the first organic compounds containing selenium at the +4 oxidation state, and they therefore overcome multidrug resistance in all tumor cell lines tested so far. These materials are capable of selectively killing cancer cells without damaging healthy ones. They are obtained by the incorporation of selenous acid (H2SeO3) into the fatty acid molecules of sunflower oil and are therefore inexpensive to manufacture. Attaching these compounds to magnetic carriers enables their precise delivery directly to the tumor area and the simultaneous application of magnetic hyperthermia, thus creating a huge opportunity to eliminate the tumor without side effects. Poly(lactic-co-glycolic acid) (PLGA) nanocapsules loaded with maghemite (γ-Fe2O3) nanoparticles and organic selenium (IV) compounds were successfully prepared by the nanoprecipitation method. The in vitro antitumor activity of the nanocapsules was demonstrated using murine melanoma (B16-F10), oral squamous cell carcinoma (OSCC), and murine (4T1) and human (MCF-7) breast cancer cell lines. Further exposure of these cells to an alternating magnetic field increased the antitumor effect of the nanocapsules. Moreover, the nanocapsules showed an antitumor effect while not affecting normal cells. The magnetic properties of the nanocapsules were investigated by means of dc magnetization, ac susceptibility and electron spin resonance (ESR) measurements.
The nanocapsules presented typical superparamagnetic behavior around room temperature, manifested by the splitting between the zero-field-cooled/field-cooled (ZFC/FC) magnetization curves and the absence of hysteresis in the field-dependent magnetization curve above the blocking temperature. Moreover, the blocking temperature decreased with increasing applied magnetic field. The superparamagnetic character of the nanocapsules was also confirmed by the occurrence of a maximum in the temperature dependences of both the real χ′(T) and imaginary χ′′(T) components of the ac magnetic susceptibility, which shifted towards higher temperatures with increasing frequency. Additionally, upon decreasing the temperature, the ESR signal shifted to lower fields and gradually broadened, closely following the predictions for the ESR of superparamagnetic nanoparticles. The observed superparamagnetic properties of the nanocapsules enable their simple manipulation by means of a magnetic field gradient after introduction into the bloodstream, which is a necessary condition for their use as magnetic drug carriers. The observed anticancer and superparamagnetic properties show that magnetic nanocapsules loaded with organic selenium (IV) compounds should be considered an effective material system for magnetic drug delivery and as a magnetohyperthermia inductor in antitumor therapy.
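The frequency shift of the susceptibility maxima is the behavior expected from Néel relaxation. As a hedged illustration (all particle parameters below are hypothetical, not measured values from this work), the blocking temperature can be estimated as the temperature at which the Néel relaxation time τ = τ₀·exp(KV/k_B·T) equals the ac measurement time 1/(2πf):

```python
import math

# Neel-Arrhenius estimate of the blocking temperature T_B at which the
# ac-susceptibility loss peak chi''(T) appears. Illustrative, hypothetical
# maghemite nanoparticle parameters only.
K = 4.7e3            # effective anisotropy constant, J/m^3
d = 10e-9            # particle diameter, m
V = math.pi * d**3 / 6   # particle volume, m^3
tau0 = 1e-9          # attempt time, s
kB = 1.380649e-23    # Boltzmann constant, J/K

def blocking_temperature(freq_hz):
    """T_B where the Neel relaxation time equals the measurement time 1/(2*pi*f)."""
    t_meas = 1.0 / (2 * math.pi * freq_hz)
    return K * V / (kB * math.log(t_meas / tau0))

for f in (10, 100, 1000):   # ac drive frequencies, Hz
    print(f"{f:5d} Hz -> T_B = {blocking_temperature(f):.1f} K")
```

A higher drive frequency means a shorter measurement time, so the peak appears at a higher temperature, exactly the trend reported in the abstract.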

Keywords: cancer treatment, magnetic drug delivery system, nanomaterials, nanotechnology

Procedia PDF Downloads 204
210 Establishing Feedback Partnerships in Higher Education: A Discussion of Conceptual Framework and Implementation Strategies

Authors: Jessica To

Abstract:

Feedback is one of the most powerful levers for enhancing students’ performance. However, some students under-engage with feedback because they lack responsibility for feedback uptake. To resolve this conundrum, recent literature proposes feedback partnerships, in which students and teachers share the power and responsibility to co-construct feedback. During feedback co-construction, students express their feedback needs to teachers, and teachers respond to individuals’ needs in return. Though this approach can increase students’ feedback ownership, its application is lagging because the field lacks conceptual clarity and an implementation guide. This presentation aims to discuss a conceptual framework of feedback partnerships and feedback co-construction strategies. It identifies the components of feedback partnerships and the strategies that could facilitate feedback co-construction. A systematic literature review was conducted to answer these questions. The literature search was performed using ERIC, PsycINFO, and Google Scholar with the keywords “assessment partnership”, “student as partner,” and “feedback engagement”. No time limit was set for the search. The inclusion criteria encompassed (i) student-teacher partnerships in feedback, (ii) feedback engagement in higher education, (iii) peer-reviewed publications, and (iv) English as the language of publication. Publications that did not address conceptual understanding and implementation strategies were excluded. Finally, 65 publications were identified and analysed using thematic analysis. For the procedure, the texts relating to the questions were first extracted. Then, codes were assigned to summarise the ideas of the texts. Upon subsuming similar codes into themes, four themes emerged: students’ responsibilities, teachers’ responsibilities, conditions for partnership development, and strategies. Their interrelationships were examined iteratively for framework development.
Establishing feedback partnerships required different responsibilities of students and teachers during feedback co-construction. Students needed to self-evaluate their performance against task criteria, identify inadequacies and communicate their needs to teachers. During feedback exchanges, they interpreted teachers’ comments, generated self-feedback through reflection, and co-developed improvement plans with teachers. Teachers had to increase students’ understanding of the criteria and their evaluation skills and create opportunities for students to express feedback needs. In feedback dialogue, teachers responded to students’ needs and advised on the improvement plans. Feedback partnerships would be best grounded in an environment of trust and psychological safety. Four strategies could facilitate feedback co-construction. First, students’ understanding of task criteria could be increased through rubric explanation and exemplar analysis. Second, students could sharpen their evaluation skills by participating in peer review and receiving teacher feedback on the quality of their peer feedback. Third, providing self-evaluation checklists and prompts, together with teacher modeling of the self-assessment process, could aid students in articulating feedback needs. Fourth, trust could be fostered when teachers explained the benefits of feedback co-construction, showed empathy, and provided personalised comments in dialogue. Some strategies were applied in interactive cover sheets, in which students performed self-evaluation and made feedback requests on a cover sheet during assignment submission, followed by the teacher’s response to each individual’s requests. The significance of this presentation lies in unpacking the conceptual framework of feedback partnerships and outlining feedback co-construction strategies. With a solid foundation in theory and practice, researchers and teachers could better enhance students’ engagement with feedback.

Keywords: conceptual framework, feedback co-construction, feedback partnerships, implementation strategies

Procedia PDF Downloads 90
209 Reduced General Dispersion Model in Cylindrical Coordinates and Isotope Transient Kinetic Analysis in Laminar Flow

Authors: Masood Otarod, Ronald M. Supkowski

Abstract:

This abstract discusses a method that reduces the general dispersion model in cylindrical coordinates to a second-order linear ordinary differential equation with constant coefficients, so that it can be used to conduct kinetic studies in packed-bed tubular catalytic reactors over a broad range of Reynolds numbers. The model was tested by 13CO isotope transient tracing of CO adsorption in the Boudouard reaction in a differential reactor at an average Reynolds number of 0.2 over a Pd-Al2O3 catalyst. Detailed experimental results provide evidence for the validity of the theoretical framing of the model, and the estimated parameters are consistent with the literature. The solution of the general dispersion model requires knowledge of the radial distribution of the axial velocity, which is not always known. Hence, up until now, the implementation of the dispersion model has been largely restricted to the plug-flow regime. But ideal plug flow is impossible to achieve, and flow regimes approximating plug flow leave much room for debate as to the validity of the results. The reduction of the general dispersion model results from the application of a factorization theorem. The factorization theorem is derived from the observation that a cross section of a catalytic bed consists of a solid phase, across which the reaction takes place, and a void or porous phase, across which no significant measure of reaction occurs. The disparity in flow and the heterogeneity of the catalytic bed cause the concentration of reacting compounds to fluctuate radially. These variabilities imply the existence of radial positions at which the radial gradient of concentration is zero. Succinctly, the factorization theorem states that a concentration function of the axial and radial coordinates in a catalytic bed is factorable as the product of the mean radial cup-mixing function and a contingent dimensionless function.
The concentrations of adsorbed compounds are also factorable, since they are piecewise continuous functions and exhibit the same variability, but in the reverse order of the concentrations of mobile-phase compounds. Factorability is a property of packed beds which transforms the general dispersion model into an equation in terms of the measurable mean radial cup-mixing concentration of the mobile-phase compounds and the mean cross-sectional concentration of adsorbed species. The reduced model does not require knowledge of the radial distribution of the axial velocity. Instead, it is characterized by new transport parameters, denoted Ωc, Ωa, and Ωr, which are respectively denominated the convection coefficient cofactor, the axial dispersion coefficient cofactor, and the radial dispersion coefficient cofactor. These cofactors adjust the dispersion equation in compensation for the unavailability of the radial distribution of the axial velocity. Together with the rest of the kinetic parameters, they can be determined from experimental data via an optimization procedure. Our data showed that the estimated parameters Ωc, Ωa, and Ωr are monotonically correlated with the Reynolds number, as expected from the theoretical construct of the model. Computer-generated simulations of the methanation reaction on nickel provide additional support for the utility of the newly conceptualized dispersion model.
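The abstract does not reproduce the reduced equation itself. Purely as an illustration of the class of equation obtained, a second-order linear ODE with constant coefficients, the sketch below solves a generic steady-state dispersion-convection-reaction balance D·c″ − u·c′ − k·c = 0 on a semi-infinite bed, with hypothetical parameter values:

```python
import math

# Generic second-order linear ODE with constant coefficients of the kind
# the reduced dispersion model yields: D*c'' - u*c' - k*c = 0 (steady state).
# All parameter values are hypothetical, for illustration only.
D = 1e-4   # axial dispersion coefficient, m^2/s
u = 1e-2   # superficial velocity, m/s
k = 0.5    # first-order rate constant, 1/s
c0 = 1.0   # inlet concentration, mol/m^3

# Characteristic equation D*L^2 - u*L - k = 0; keep the negative root so
# the solution stays bounded on a semi-infinite bed.
disc = math.sqrt(u**2 + 4 * D * k)
lam = (u - disc) / (2 * D)      # negative root

def c(z):
    """Concentration profile c(z) = c0 * exp(lam * z)."""
    return c0 * math.exp(lam * z)

for z in (0.0, 0.05, 0.10):     # axial position, m
    print(f"z = {z:.2f} m -> c = {c(z):.4f}")
```

Constant coefficients are exactly what make the reduced model tractable: the solution follows from the characteristic roots, with no need for the radial velocity profile.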

Keywords: factorization, general dispersion model, isotope transient kinetic, partial differential equations

Procedia PDF Downloads 269
208 Experimental-Numerical Inverse Approaches in the Characterization and Damage Detection of Soft Viscoelastic Layers from Vibration Test Data

Authors: Alaa Fezai, Anuj Sharma, Wolfgang Mueller-Hirsch, André Zimmermann

Abstract:

Viscoelastic materials have been widely used in the automotive industry over the last few decades with different functionalities. Besides their main application as a simple and efficient surface damping treatment, they may ensure optimal operating conditions for on-board electronics as thermal interface or sealing layers. The dynamic behavior of viscoelastic materials generally depends on many environmental factors, the most important being temperature and strain rate or frequency. Prior to the reliability analysis of systems including viscoelastic layers, it is therefore crucial to accurately predict the dynamic and lifetime behavior of these materials. This includes the identification of the dynamic material parameters under critical temperature and frequency conditions, along with a precise damage localization and identification methodology. The goal of this work is twofold. The first part aims at applying an inverse viscoelastic material-characterization approach over a wide frequency range and under different temperature conditions. To this end, dynamic measurements are carried out on a single lap joint specimen using an electrodynamic shaker and an environmental chamber. The specimen consists of aluminum beams assembled to adapter plates through a viscoelastic adhesive layer. The experimental setup is reproduced in finite element (FE) simulations, and frequency response functions (FRFs) are calculated. The parameters of both the generalized Maxwell model and the fractional derivatives model are identified through an optimization algorithm minimizing the difference between the simulated and the measured FRFs. The second goal of the current work is to guarantee on-line detection of damage, i.e., delamination in the viscoelastic bonding of the described specimen, during frequency-monitored end-of-life testing.
For this purpose, an inverse technique is presented which determines the damage location and size based on the modal frequency shift and on the change of the mode shapes. This includes a preliminary FE model-based study correlating the delamination location and size to the change in the modal parameters, and a subsequent experimental validation achieved through dynamic measurements of specimens with different pre-generated crack scenarios, compared to the virgin specimen. The main advantage of the inverse characterization approach presented in the first part resides in its ability to adequately identify the damping and stiffness behavior of soft viscoelastic materials over a wide frequency range and under critical temperature conditions. Classic forward characterization techniques such as dynamic mechanical analysis usually face limitations under critical temperature and frequency conditions due to the material behavior of soft viscoelastic materials. Furthermore, the inverse damage detection described in the second part guarantees an accurate prediction of not only the damage size but also its location using a simple test setup, and therefore outlines the significance of inverse numerical-experimental approaches in predicting the dynamic behavior of soft bonding layers applied in automotive electronics.
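The inverse identification step can be sketched as follows. This is a minimal, hypothetical illustration (one Maxwell branch, synthetic "measured" data, and a coarse grid search standing in for the actual optimization algorithm), not the authors' implementation:

```python
# Inverse identification sketch: fit one branch of a generalized Maxwell
# model, E*(w) = E_inf + E1*(1j*w*tau)/(1 + 1j*w*tau), to "measured"
# frequency-response data by minimizing a least-squares misfit.
# All parameter values are hypothetical.
E_inf = 2.0e6  # long-term stiffness, Pa (assumed known here)

def model(w, E1, tau):
    """Complex modulus of a single-branch generalized Maxwell model."""
    return E_inf + E1 * (1j * w * tau) / (1 + 1j * w * tau)

freqs = [10.0 * 2**i for i in range(8)]          # rad/s
true_E1, true_tau = 5.0e6, 1.0e-2                # "unknown" ground truth
measured = [model(w, true_E1, true_tau) for w in freqs]

def misfit(E1, tau):
    """Sum of squared deviations between model and measured responses."""
    return sum(abs(model(w, E1, tau) - m) ** 2 for w, m in zip(freqs, measured))

# Coarse grid search standing in for a proper optimizer.
best = min(
    ((E1, tau) for E1 in [1e6 * i for i in range(1, 11)]
               for tau in [1e-3 * i for i in range(1, 31)]),
    key=lambda p: misfit(*p),
)
print("identified E1 = %.2e Pa, tau = %.2e s" % best)
```

In practice the misfit would compare measured and FE-simulated FRFs over temperature as well, and a gradient-based or evolutionary optimizer would replace the grid search.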

Keywords: damage detection, dynamic characterization, inverse approaches, vibration testing, viscoelastic layers

Procedia PDF Downloads 205
207 Artificial Intelligence for Traffic Signal Control and Data Collection

Authors: Reggie Chandra

Abstract:

Traffic accidents and traffic signal optimization are correlated. However, 70-90% of the traffic signals across the USA are not synchronized, mainly because of insufficient resources to create and implement timing plans. In this work, we will discuss the use of a breakthrough Artificial Intelligence (AI) technology to optimize traffic flow and to collect accurate traffic data 24/7/365 using a vehicle detection system. We will discuss recent advances in Artificial Intelligence technology, how AI works in vehicle, pedestrian, and bike data collection and in creating timing plans, and what the best workflow is. This paper will also showcase how Artificial Intelligence makes signal timing affordable. We will introduce a technology that uses Convolutional Neural Networks (CNNs) and deep learning algorithms to detect, collect data, develop timing plans, and deploy them in the field. Convolutional Neural Networks are a class of deep learning networks inspired by the biological processes in the visual cortex. A neural net is modeled after the human brain: it consists of millions of densely connected processing nodes. It is a form of machine learning in which the neural net learns to recognize vehicles through training, which is called deep learning. The well-trained algorithm overcomes most of the issues faced by other detection methods and provides nearly 100% traffic data accuracy. Through this continuous learning-based method, we can constantly update traffic patterns, generate an unlimited number of timing plans, and thus improve vehicle flow. Convolutional Neural Networks not only outperform other detection algorithms but, in cases such as classifying objects into fine-grained categories, also outperform humans. Safety is of primary importance to traffic professionals, but they often lack the studies and data to support their decisions. Currently, one-third of transportation agencies do not collect pedestrian and bike data.
We will discuss how the use of Artificial Intelligence for data collection can help reduce pedestrian fatalities and enhance the safety of all vulnerable road users. Moreover, it provides traffic engineers with tools that allow them to unleash their potential, instead of dealing with constant complaints, snapshots of limited handpicked data, and multiple systems requiring additional adaptation work. The methodologies used and proposed in this research include a camera model identification method based on deep Convolutional Neural Networks. The proposed application was evaluated on our data sets, acquired under a variety of daily real-world road conditions, and compared with the performance of commonly used methods, which require collecting data by counting, evaluating and adapting it, running it through well-established algorithms, and then deploying the result to the field. This work explores themes such as how technologies powered by Artificial Intelligence can benefit your community and how to translate their complex and often overwhelming benefits into language accessible to elected officials, community leaders, and the public. Exploring such topics empowers citizens with insider knowledge about the potential of better traffic technology to save lives and improve communities. The synergies that Artificial Intelligence brings to traffic signal control and data collection are unsurpassed.
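The paper does not specify its timing-plan algorithm. As a minimal, hypothetical sketch of the final step, turning AI-collected counts into a plan, green time can be split across phases in proportion to detected demand, subject to a minimum green per phase:

```python
# Allocate green time across signal phases in proportion to AI-detected
# vehicle counts, respecting a minimum green per phase.
# Phase names, counts and timing values are hypothetical.
counts = {"north-south": 620, "east-west": 310, "left-turns": 70}
cycle = 90      # cycle length, s
min_green = 10  # minimum green per phase, s

spare = cycle - min_green * len(counts)   # seconds left after minimum greens
total = sum(counts.values())
plan = {
    phase: min_green + spare * n / total  # demand-proportional share of spare
    for phase, n in counts.items()
}
for phase, g in plan.items():
    print(f"{phase:12s}: {g:.1f} s green")
```

Real deployments would add lost time, pedestrian clearance intervals and coordination offsets, but the demand-proportional split is the core idea of data-driven retiming.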

Keywords: artificial intelligence, convolutional neural networks, data collection, signal control, traffic signal

Procedia PDF Downloads 169
206 Structural, Spectral and Optical Properties of Boron-Aluminosilicate Glasses with High Dy₂O₃ and Er₂O₃ Content for Faraday Rotator Operating at 2µm

Authors: Viktor D. Dubrovin, Masoud Mollaee, Jie Zong, Xiushan Zhu, Nasser Peyghambarian

Abstract:

Glasses doped with high concentrations of rare-earth (RE) elements have attracted considerable attention since the middle of the 20th century due to their particular magneto-optical properties. Such glasses exhibit the Faraday effect, in which the polarization plane of a linearly polarized light beam is rotated by the interaction between the incident light and the magneto-optical material. That effect has found application in optical isolators, which are useful for laser systems: they can prevent back reflection of light into lasers or optical amplifiers and reduce signal instability and noise. Glasses are of particular interest since they are cost-effective and can be formed into fibers, thus breaking the limits of traditional bulk optics, which requires optical coupling for use with fiber-optic systems. The advent of high-power fiber lasers operating near 2 µm revealed the necessity of developing all-fiber isolators for this region. Among the RE ions, Ce³⁺, Pr³⁺, Dy³⁺, and Tb³⁺ provide the biggest contribution to the Verdet constant of optical materials. It is known that Pr³⁺ and Tb³⁺ ions have strong absorption bands near 2 µm, making Dy³⁺ and Ce³⁺ the only prospective candidates for a fiber isolator operating in that region. Due to the high tendency of Ce³⁺ ions to convert to Ce⁴⁺ during synthesis, glasses with high cerium content usually suffer from Ce⁴⁺ absorption extending from the visible to the IR. Additionally, Dy³⁺ (⁶H₁₅/₂) ions, like Ho³⁺ (⁵I₈), have the largest effective magnetic moment (µeff = 10.6 µB) among the RE ions, which starts to play the key role when the operating region is far from the 4fⁿ → 4fⁿ⁻¹5d¹ electric-dipole transition relevant to the Faraday effect. Considering the high effective magnetic moment of Er³⁺ ions (µeff = 9.6 µB), third after Dy³⁺/Ho³⁺ and Tb³⁺, it can be assumed that Er³⁺-doped glasses should exhibit a Verdet constant near 2 µm comparable to that of Dy-doped glasses.
Thus, a partial replacement of Dy³⁺ by Er³⁺ ions was performed, keeping the overall RE₂O₃ concentration equal to 70 wt.% (30.6 mol.%). Al₂O₃-B₂O₃-SiO₂-30.6RE₂O₃ (RE = Er, Dy) glasses were synthesized, and their thermal, spectral, optical, structural, and magneto-optical properties were studied. Glass synthesis was conducted in Pt crucibles for 3 h at 1500 °C. The obtained melt was poured into a mold preheated to 400 °C and annealed from 800 °C to room temperature over 12 h with a 1 h dwell. The mass of the obtained glass samples was about 200 g. It was shown that the difference between the crystallization and glass transition temperatures is about 150 °C, even though the high RE₂O₃ content leads to depolymerization of the glass network. The Verdet constant of the Al₂O₃-B₂O₃-SiO₂-30.6RE₂O₃ glasses at a wavelength of 1950 nm can reach more than 5.9 rad/(T·m), which is among the highest values reported for a paramagnetic glass at this wavelength. The refractive index was found to be 1.7545 at 633 nm. Our experimental results show that Al₂O₃-B₂O₃-SiO₂-30.6RE₂O₃ glasses with high Dy₂O₃ content are expected to be a promising material for highly effective Faraday isolators and modulators of electromagnetic radiation in the 2 μm region.
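A quick worked example of what such a Verdet constant implies (the field value below is a hypothetical example, not from the paper): the Faraday rotation is θ = V·B·L, so the glass length needed for the 45° rotation an optical isolator requires follows directly:

```python
import math

# Faraday rotation theta = V * B * L, using the reported Verdet constant
# at 1950 nm; the flux density is a hypothetical example value.
V = 5.9    # Verdet constant, rad/(T*m)
B = 1.0    # axial magnetic flux density, T

# Length giving the 45-degree (pi/4 rad) rotation an isolator requires.
L45 = (math.pi / 4) / (V * B)
print(f"length for 45 deg rotation at {B} T: {L45 * 100:.1f} cm")
```

At stronger fields the required length shrinks proportionally, which is what makes a high Verdet constant attractive for compact fiber isolators.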

Keywords: oxide glass, magneto-optical, dysprosium, erbium, Faraday rotator, boron-aluminosilicate system

Procedia PDF Downloads 114
205 Construction and Cross-Linking of Polyelectrolyte Multilayers Based on Polysaccharides as Antifouling Coatings

Authors: Wenfa Yu, Thuva Gnanasampanthan, John Finlay, Jessica Clarke, Charlotte Anderson, Tony Clare, Axel Rosenhahn

Abstract:

Marine biofouling is a worldwide problem with vast economic and ecological costs. Historically it was combated with toxic coatings such as tributyltin. As those coatings are now banned, finding environmentally friendly antifouling solutions has become an urgent topic. In this study, antifouling coatings consisting of the naturally occurring polysaccharides hyaluronic acid (HA), alginic acid (AA), and chitosan (Ch) and the polyelectrolyte polyethylenimine (PEI) are constructed as polyelectrolyte multilayers (PEMs) by the layer-by-layer (LbL) method. LbL PEM construction is a straightforward way to assemble biomacromolecular coatings on surfaces. The advantages of PEMs include ease of handling, highly diverse composition, and precise control over the thickness. PEMs have been widely employed in medical applications, and there are numerous studies regarding their protein adsorption, elasticity, and cell adhesive properties. By adjusting the coating composition, the termination layer charge, the coating morphology, and the cross-linking method, it is possible to prepare PEM coatings with low marine biofouling. In this study, using spin coating technology, PEM construction was achieved with smooth multilayers with a roughness as low as 2 nm rms and a highly reproducible thickness of around 50 nm. To obtain stability in seawater, the multilayers were covalently cross-linked either thermally or chemically. The cross-linking method affected the surface energy, which was reflected in the water contact angle: thermal cross-linking led to hydrophobic surfaces, and chemical cross-linking generated hydrophilic surfaces. The coatings were then evaluated regarding their protein resistance and resistance to biological species.
While the hydrophobic, thermally cross-linked PEM had low resistance towards proteins, the resistance of the chemically cross-linked PEM depended strongly on the PEM termination layer and the charge of the protein: opposite charge caused high adsorption and like charge low adsorption, indicating that electrostatic interaction plays a crucial role in the protein adsorption process. Ulva linza was chosen as the biological species for the antifouling performance evaluation. Despite its poor resistance towards protein adsorption, the thermally cross-linked PEM showed good resistance against the settlement of Ulva spores, whereas the chemically cross-linked multilayers showed poor resistance regardless of the termination layer. Marine species adhesion is a complex process: although it involves proteins as bioadhesives, protein resistance on its own is not a full indicator of antifouling performance. The species pre-select the surface, responding to cues such as surface energy, chemistry, or charge, which makes it difficult for any single factor to determine antifouling performance. Preparing a PEM coating is a comprehensive task involving the choice of polyelectrolyte combination, termination layer, and cross-linking method. These decisions affect PEM properties such as surface energy and charge, which is crucial, since biofouling is a process that responds to surface properties in a highly sensitive and dynamic way.

Keywords: hyaluronic acid, polyelectrolyte multilayers, protein resistance, Ulva linza zoospores

Procedia PDF Downloads 164
204 The High Potential and the Little Use of Brazilian Class Actions for Prevention and Penalization Due to Workplace Accidents in Brazil

Authors: Sandra Regina Cavalcante, Rodolfo A. G. Vilela

Abstract:

Introduction: Work accidents and occupational diseases are a major public health problem around the world and the main health problem of workers, with high social and economic costs. Brazil has shown progress over the last years, with the development of a regulatory system to improve safety and quality of life in the workplace. However, the situation is far from acceptable, because occurrences remain high and there is a great gap between legislation and reality, generated by the low level of voluntary compliance with the law. Brazilian law provides procedural legal instruments both to compensate the damage caused to the worker's health and to prevent future injuries. In the judiciary, the preventive idea is embodied in collective litigation, effected through Brazilian class actions. Inhibitory injunctions may both impose improvements to the working environment and order the interruption of an activity or a ban on a machine that puts workers at risk. Both the Labor Prosecution Office and trade unions have standing to bring this type of action, which may also seek payment of compensation for collective moral damage. Objectives: To verify how class actions (known as ‘public civil actions’), regulated in the Brazilian legal system to protect diffuse, collective and homogeneous rights, are being used to protect workers' health and safety. Methods: The authors identified and evaluated decisions of the Brazilian Superior Labor Court involving collective actions and work accidents. The timeframe chosen was December 2015. The online jurisprudence database available for public consultation on the court's website was consulted. The data were categorized by result (court application rejected or accepted), request type, amount of compensation, and author of the action, in addition to examining the reasoning used by the judges.
Results: The High Court issued 21,948 decisions in December 2015, of which 1,448 judgments (6.6%) concerned work accidents and only 20 (0.09%) collective actions. After analyzing these 20 decisions, it was found that the judgments granted compensation for collective moral damage (85%) and/or obligations to act, that is, changes to improve prevention and safety (71%). The actions had been filed mainly by the Labor Prosecution Office (83%), with the remainder filed by unions (17%). The compensation for collective moral damage averaged 250,000 reais (about US$65,000), although a great range of values was found and several kinds of situations were remedied by this compensation. This is the court of last resort for this kind of lawsuit; all decisions were well founded and at least partially granted the requests made for protection of the working environment. Conclusions: When triggered, the labor court system provides the requested collective protection in class actions. The amounts awarded in collective actions are significant and have social and economic repercussions, stimulating employers to improve the working environment conditions of their companies. Collective actions are more efficient for prevention than reparatory individual lawsuits, but they have been underutilized, mainly by unions, and their use needs to be intensified.
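The reported sample proportions can be recomputed directly from the decision counts:

```python
# Recompute the sample proportions reported for the December 2015 decisions
# of the Superior Labor Court.
total_decisions = 21948
work_accident = 1448
collective = 20

print(f"work accidents: {100 * work_accident / total_decisions:.1f}%")
print(f"collective actions: {100 * collective / total_decisions:.2f}%")
```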

Keywords: Brazilian Class Action, collective action, work accident penalization, workplace accident prevention, workplace protection law

Procedia PDF Downloads 273
203 The Practices Perspective in Communication, Consumer and Cultural Studies: A Post-Heideggerian Narrative

Authors: Tony Wilson

Abstract:

This paper sets out a practices perspective, or practice theory, which has become pervasive from business to sociological studies. In doing so, it locates the perspective historically (in the work of the philosopher Heidegger) and provides a contemporary illustration of its application to communication, consumer and cultural studies, central to this conference theme. The structured account of practices (articulated in eight ‘axioms’) presented towards the conclusion of this paper is an initial statement, planned to encourage further detailed qualitative and systematic research in areas of interest to the conference. Practice theories of the equipped and situated construction of participatory meaning (as in media and marketing consuming) are frequently characterized as lacking common ground or core principles. This paper explores whether, by retracing a journey to earlier philosophical underwriting, a shared territory promoting new research can be located in current philosophical hermeneutics. Moreover, through returning to hermeneutic first principles, the paper shows that a series of spatio-temporal metaphors becomes available, appropriate to analyzing communication as a process across the disciplines in which it is considered. Thus one can argue, for instance, that media users engage (enter) digital text from their diverse ‘horizons of expectation’ in a productive, enlarging ‘fusion’ of horizons of understanding, thereby ‘projecting’ a new narrative, integrated in a ‘hermeneutic circle’ of meaning. A politics of communication studies may contest a horizon of understanding, so engaging in critical ‘distancing’. Marketing’s consumers can occupy particular places on a horizon of understanding. Media users pass over the borders of changing, revised perspectives. Practices research can now not only be discerned in multiple disciplines but equally crosses disciplines.
The ubiquitous practice of media use by managers and visitors in a shopping mall - the mediatization of malls - calls for investigation not just with media studies expertise but also from an interpretive marketing perspective. How have mediated identities of person or place been changed? Emphasizing the understanding of entities in a material environment as ‘equipment’, practices theory enables the quantitative correlation of use with demographic variables as a ‘Zeug Score’. Human behavior is fundamentally habitual, shaped by its tacit assumptions and occasionally interrupted by reflection. Practices theory acknowledges such action to be minimally monitored yet nonetheless considers it as constructing narrative. Thus presented in research, ‘storied’ behavior can be seen to be (in)formed and shaped by a shifting hierarchy of ‘horizons’, or of perspectives from the habituated to the reflective, rather than by a single seamless narrative. Taking a communication practices perspective here avoids conflating tacit, transformative and theoretical understanding in research. In short, a historically grounded and unifying statement of contemporary practices theory will enhance its potential as a tool in communication, consumer and cultural research, landscaping the interpretative horizons of human behaviour by exploring widely the culturally (in)formed narratives equipping and incorporated (reflectively or unreflectively) in people’s everyday lives.

Keywords: communication, consumer, cultural practices, hermeneutics

Procedia PDF Downloads 269
202 Feasibility of an Extreme Wind Risk Assessment Software for Industrial Applications

Authors: Francesco Pandolfi, Georgios Baltzopoulos, Iunio Iervolino

Abstract:

The impact of extreme winds on industrial assets and the built environment is gaining increasing attention from stakeholders, including the corporate insurance industry. This has led to a progressively more in-depth study of building vulnerability and fragility to wind. Wind vulnerability models are used in probabilistic risk assessment to relate a loss metric to an intensity measure of the natural event, usually a gust or a mean wind speed. In fact, vulnerability models can be integrated with the wind hazard, which consists of associating a probability to each intensity level in a time interval (e.g., by means of return periods), to provide an assessment of future losses due to extreme wind. This has also given impetus to world- and regional-scale wind hazard studies. Another approach often adopted for the probabilistic description of building vulnerability to wind is the use of fragility functions, which provide the conditional probability that selected building components will exceed certain damage states, given wind intensity. In fact, in the wind engineering literature, it is more common to find structural system- or component-level fragility functions than wind vulnerability models for an entire building. Loss assessment based on component fragilities requires logical combination rules that define the building’s damage state given the damage state of each component, and the availability of a consequence model that provides the losses associated with each damage state. When risk calculations are based on numerical simulation of a structure’s behavior during extreme wind scenarios, the interaction of component fragilities is intertwined with the computational procedure. However, simulation-based approaches are usually computationally demanding and case-specific. In this context, the present work introduces the ExtReMe wind risk assESsment prototype Software, ERMESS, which is being developed at the University of Naples Federico II.
ERMESS is a wind risk assessment tool for insurance applications to industrial facilities, collecting a wide assortment of available wind vulnerability models and fragility functions to facilitate their incorporation into risk calculations based on in-built or user-defined wind hazard data. This software implements an alternative method for building-specific risk assessment based on existing component-level fragility functions and on a number of simplifying assumptions for their interactions. The applicability of this alternative procedure is explored by means of an illustrative proof-of-concept example, which considers four main building components, namely: the roof covering, roof structure, envelope wall and envelope openings. The application shows that, despite the simplifying assumptions, the procedure can yield risk evaluations that are comparable to those obtained via more rigorous building-level simulation-based methods, at least in the considered example. The advantage of this approach is shown to lie in the fact that a database of building component fragility curves can be put to use for the development of new wind vulnerability models to cover building typologies not yet adequately covered by existing works and whose rigorous development is usually beyond the budget of portfolio-related industrial applications.
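The risk calculation this abstract describes - integrating a wind vulnerability model against a hazard curve expressed through annual exceedance rates (the reciprocal of return periods) - can be sketched numerically. The hazard and vulnerability curves below are hypothetical placeholders for illustration only, not models or data from ERMESS.

```python
import numpy as np

def expected_annual_loss(speeds, exceedance_rate, vulnerability):
    """Integrate the mean loss ratio against the wind hazard.

    speeds          : increasing grid of gust speeds (m/s)
    exceedance_rate : annual rate of exceeding each speed (1 / return period)
    vulnerability   : mean loss ratio (0-1) at each speed
    """
    # Occurrence density of intensity: minus the derivative of the
    # exceedance-rate curve with respect to wind speed.
    density = -np.gradient(exceedance_rate, speeds)
    integrand = vulnerability * density
    # Trapezoidal rule over the speed grid.
    return float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(speeds)))

# Illustrative inputs (hypothetical shapes, for demonstration only)
v = np.linspace(20.0, 80.0, 121)                 # gust speeds, m/s
lam = 0.5 * np.exp(-(v - 20.0) / 10.0)           # annual exceedance rates
vuln = 1.0 / (1.0 + np.exp(-(v - 55.0) / 5.0))   # logistic loss-ratio model

eal = expected_annual_loss(v, lam, vuln)
print(f"Expected annual loss ratio: {eal:.4f}")
```

The same integral is what a component-fragility approach must reproduce after the component damage states have been combined into a building-level loss metric.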

Keywords: component wind fragility, probabilistic risk assessment, vulnerability model, wind-induced losses

Procedia PDF Downloads 181
201 Leveraging Digital Cyber Technology for Self-Care and Improved Management of DMPA-SC Clients

Authors: Oluwaseun Adeleke, Grace Amarachi Omenife, Jennifer Adebambo, Mopelola Raji, Anthony Nwala, Mogbonjubade Adesulure

Abstract:

Introduction: The incorporation of digital technology into healthcare systems is instrumental in transforming the delivery, management, and overall experience of healthcare, and it holds the potential to scale up access through the over 200 million active mobile phones in use in Nigeria. Digital tools enable increased access to care, stronger client engagement, progress in research and data-driven insights, and more effective promotion of self-care and do-it-yourself practices. The Delivering Innovation in Self-Care (DISC) project has, since 2021, played a pivotal role in granting women greater autonomy over their sexual and reproductive health (SRH) through a variety of approaches, including information and training to self-inject the contraceptive DMPA-SC. To optimize its outcomes, the project also leverages digital technology platforms: social media (Facebook, Instagram, and the Meet Tina chatbot via WhatsApp), the Customer Relationship Management (CRM) application Freshworks, and Viamo. Methodology: The project has been successful at optimizing one-to-one interaction in digital cyberspace to sensitize individuals effectively about self-injection and provide linkages to SI services. This platform employs the Freshworks CRM software application, along with specially trained personnel known as Cyber IPC Agents, and DHIS calling centers. Integration of the Freshworks CRM software with social media allows a direct connection with clients to address emerging issues, schedule follow-ups, send reminders to improve compliance with self-injection schedules, enhance the overall user experience for self-injection (SI) clients, and generate comprehensive reports and analytics on client interactions.
Interactions cover a range of topics, including how to use SI, learning more about SI, side effects and their management, accessing services, fertility, ovulation, other family planning methods, and inquiries related to sexual and reproductive health, and an address log is used to connect clients with nearby facilities or online pharmacies. Results: Between March and September, a total of 5,403 engagements were recorded. Among these, 4,685 were satisfactorily resolved. Since the program's inception, digital advertising has created 233,633,075 impressions, reached 12,715,582 persons, and resulted in 3,394,048 clicks. Conclusion: Leveraging digital technology has proven to be an invaluable tool in client management and in improving the client experience. The use of cyber technology has enabled the successful development and maintenance of client relationships, which have been effective at providing support, facilitating delivery of and compliance with DMPA-SC self-injection services, and ensuring overall client satisfaction. Concurrently, qualitative data, including user experience feedback, has enabled the derivation of crucial insights that inform the decision-making process and guide the normalization of self-care behavior.

Keywords: self-care, DMPA-SC self-injection, digital technology, cyber technology, Freshworks CRM software

Procedia PDF Downloads 67
200 Application of Self-Efficacy Theory in Counseling Deaf and Hard of Hearing Students

Authors: Nancy A. Delich, Stephen D. Roberts

Abstract:

This case study explores using self-efficacy theory in counseling deaf and hard of hearing students in one California school district. Self-efficacy is described as the confidence a student has for performing a set of skills required to succeed at a specific task. When students need to learn a skill, self-efficacy can be a major factor in influencing behavioral change. Self-efficacy is domain specific, meaning that students can have high confidence in their abilities to accomplish a task in one domain, while at the same time having low confidence in their abilities to accomplish another task in a different domain. The communication isolation experienced by deaf and hard of hearing children and adolescents can negatively impact their belief about their ability to navigate life challenges. There is a need to address issues that impact deaf and hard of hearing students’ social-emotional development. Failure to address these needs may result in depression, suicidal ideation, and anxiety among other mental health concerns. Self-efficacy training can be used to address these socio-emotional developmental issues with this population. Four sources of experiences are applied during an intervention: (a) enactive mastery experience, (b) vicarious experience, (c) verbal persuasion, and (d) physiological and affective states. This case study describes the use of self-efficacy training with a coed group of 12 deaf and hard of hearing high school students who experienced bullying at school. Beginning with enactive mastery experience, the counselor introduced the topic of bullying to the group. The counselor educated the students about the different types of bullying while teaching them the terminology, signs and their meanings. The most effective way to increase self-efficacy is through extensive practice. To better understand these concepts, the students practiced through role-playing with the goal of developing self-advocacy skills. 
Vicarious experience is the perception that students have about their capabilities. Viewing other students advocating for themselves, cognitively rehearsing what actions they will and will not take, and teaching each other how to stand up against bullying can strengthen their belief in successfully overcoming bullying. The third source of self-efficacy beliefs is verbal persuasion. It occurs when others express belief in the capabilities of the student. Didactic training and pedagogic materials on bullying were employed as part of the group counseling sessions. The fourth source of self-efficacy appraisals is physiological and affective states. Students expect positive emotions to be associated with successful skilled performance. When students practice new skills, the counselor can apply several strategies to enhance self-efficacy while reducing and controlling emotional and physical states. The intervention plan incorporated all four sources of self-efficacy training during several interactive group sessions regarding bullying. There was an increased understanding around the issues of bullying, resulting in the students’ belief of their ability to perform protective behaviors and deter future occurrences. The outcome of the intervention plan resulted in a reduction of reported bullying incidents. In conclusion, self-efficacy training can be an effective counseling and teaching strategy in addressing and enhancing the social-emotional functioning with deaf and hard of hearing adolescents.

Keywords: counseling, self-efficacy, bullying, social-emotional development, mental health, deaf and hard of hearing students

Procedia PDF Downloads 351
199 Application of Pedicled Perforator Flaps in Large Cavities of the Breast

Authors: Neerja Gupta

Abstract:

Objective: Reconstruction of large cavities of the breast without contralateral symmetrisation. Background: Reconstruction of the breast includes a wide spectrum of procedures, from displacement to regional and distant flaps. Pedicled perforator flaps cover a wide spectrum of reconstructive surgery for all quadrants of the breast, especially in patients with comorbidities. These axial flaps, singly or as an adjunct, are based on a near-constant perforator vessel; a ratio of 2:1 at its entry into the flap is good to maintain vascularity. The perforators of the lateral chest wall, viz. LICAP and LTAP, have overlapping perforasomes without clear demarcation. LTAP is localized in the narrow zone between the lateral breast fold and the anterior axillary line, 2.5-3.8 cm from the fold. MICAP are localized 1-2 cm from the sternum. Being 1-2 mm in diameter, a single perforator is good to maintain the flap. LICAP has a dominant perforator in the 6th-11th spaces, while LTAP has higher-placed dominant perforators in the 4th and 5th spaces. Methodology: Six consecutive patients who underwent reconstruction of the breast with pedicled perforator flaps were retrospectively analysed. Selection of the flap was based on the size and location of the tumour, anticipated volume loss, willingness to undergo contralateral symmetrisation, cosmetic expectations, and finances available. Three patients underwent vertical LTAP, the distal limit of the flap being the inframammary crease; three patients underwent MICAP, oriented along the axis of the rib, the distal limit being the anterior axillary line. Preoperative identification was done using a unidirectional handheld Doppler. The flap was raised caudal to cranial, the pivot point of rotation being the vessel's entry into the skin. The donor area is determined by the skin pinch. Flap harvest time was 20-25 minutes. Intraoperative vascularity was assessed with dermal bleed. The patients' immediate pre-operative, post-operative, and follow-up photographs were compared independently by two breast surgeons.
Patients were given the licensed BREAST-Q questionnaire for scoring. Results: The median age of the six patients was 46. Each patient had a hospital stay of 24 hours. None of the patients was willing to have contralateral symmetrisation. Specimen dimensions ranged from 8x6.8x4 cm to 19x16x9 cm. The breast volume reconstructed ranged from 30 to 45 percent. All wide excisions had free margins on frozen section. The mean flap dimensions were 12x5x4.5 cm. One LTAP underwent marginal necrosis and delayed wound healing due to seroma. Three patients had phyllodes tumours, of which one was borderline and two were benign on final histopathology; the other three patients had invasive ductal cancer and have completed their radiation. At the median follow-up of 7 months, satisfaction scores were 90 for physical wellbeing and 85 for surgical results. Surgeons scored fair to good on the Harvard scale. Conclusion: Pedicled perforator flaps are a valuable option for defects of up to 3/8th of breast volume. LTAP is preferred for tumours in the central, upper, and outer quadrants of the breast and MICAP for the inner and lower quadrants. The vascularity of the flap depends on the angiosomal territories and on adequate venous and cavity drainage.

Keywords: breast, oncoplasty, pedicled, perforator

Procedia PDF Downloads 187
198 Learning-Teaching Experience about the Design of Care Applications for Nursing Professionals

Authors: A. Gonzalez Aguna, J. M. Santamaria Garcia, J. L. Gomez Gonzalez, R. Barchino Plata, M. Fernandez Batalla, S. Herrero Jaen

Abstract:

Background: Computer science is a field that transcends other disciplines of knowledge because it can support all kinds of physical and mental tasks. Health centres have a growing number and complexity of technological devices, and the population consumes and demands services derived from technology. Nursing education plans have also included competencies related to new technologies, and courses about them are even offered to health professionals. However, nurses still limit their performance to the use and evaluation of products previously built. Objective: Develop a teaching-learning methodology for acquiring skills in designing applications for care. Methodology: Blended learning teaching with a group of graduate nurses through official training within a Master's degree. The study sample was selected by intentional sampling without exclusion criteria. The study covers 2015 to 2017. The teaching sessions included a four-hour face-to-face class and between one and three tutorials. Assessment was carried out by a written test consisting of the preparation of an IEEE 830 standard specification document, where the subject chosen by the student had to be a problem in the area of care. Results: The sample comprised 30 students: 10 men and 20 women. Nine students had a degree in nursing, 20 a diploma in nursing, and one a degree in computer engineering. Two students had obtained a nursing specialty through residency and two through equivalent recognition by the exceptional route. Except for the engineer, no subject had previously received training in this regard. The whole sample enrolled in the course received the classroom teaching session, had access to the teaching material through a virtual area, and maintained at least one tutoring session. The maximum was three tutorials, totalling one hour. Among the material available for consultation was an example document drawn up following the IEEE standard on an issue not related to care.
The test to measure competence was completed by the whole group and evaluated by a multidisciplinary teaching team of two computer engineers and two nurses. The engineers evaluated the correctness of the characteristics of the document and the degree of comprehension shown in elaborating the problem and solution; the nurses assessed the relevance of the chosen problem statement; the foundation, originality, and correctness of the proposed solution; and the validity of the application for clinical practice in care. The average grade was 8.1 out of 10 points, with a range between 6 and 10. The selected topics rarely coincided among students. Examples of care areas selected are care plans, family and community health, delivery care, administration, and even robotics for care. Conclusion: The applied learning-teaching methodology for the design of technologies demonstrates success in the training of nursing professionals. The role of the expert is essential to create applications that satisfy the needs of end users. Nursing has the possibility, the competence, and the duty to participate in the process of construction of the technological tools that are going to have an impact on the care of people, families, and the community.

Keywords: care, learning, nursing, technology

Procedia PDF Downloads 136
197 Combined Treatment with Microneedling and Chemical Peels Improves Periorbital Wrinkles and Skin Laxity

Authors: G. Kontochristopoulos, T. Spiliopoulos, V. Markantoni, E. Platsidaki, A. Kouris, E. Balamoti, C. Bokotas, G. Haidemenos

Abstract:

Introduction: There is high patient demand for periorbital rejuvenation, since the facial area is often the first to show visible signs of aging. With advancing age, marked changes sometimes occur in the skin, fat, muscle, and bone of the periorbital region, resulting in wrinkles and skin laxity. These changes are among the easiest to correct using several minimally invasive techniques, which have become increasingly popular over the last decade. Lasers, radiofrequency, botulinum toxin, fat grafting, and fillers are available treatments, sometimes in combination with traditional blepharoplasty. This study attempts to show the benefits of a minimally invasive approach to periorbital wrinkles and skin laxity that combines microneedling and 10% trichloroacetic acid (TCA) peels. Method: Eleven female patients aged 34-72 enrolled in the study. They all gave informed consent after receiving detailed information regarding the treatment procedure. Exclusion criteria were previous treatment for the same condition in the past six months, pregnancy, allergy or hypersensitivity to the components, and infection, inflammation, or photosensitivity in the affected region. All patients had diffuse periorbital wrinkles and mild to moderate upper or lower eyelid skin laxity. They were treated with the handheld Automatic Microneedle Therapy System followed by topical application of a 10% trichloroacetic acid solution. Needling at a 0.25 mm depth was performed in both lateral (x-y) directions. Subsequently, the peeling agent was applied to each periorbital area for five minutes. Patients underwent the above combination every two weeks for a series of four treatments and were then followed up regularly every month for two months. The effect was photo-documented.
A Physician's and a Patient's Global Assessment Scale was used to evaluate the efficacy of the treatment (0-25% indicated poor response, 25-50% fair, 50-75% good, and 75-100% excellent response). Safety was assessed by monitoring early and delayed adverse events. Results: At the end of the study, almost all patients demonstrated significant aesthetic improvement. Physicians assessed a fair and a good improvement in 9 (81.8%) and 2 (18.2%) participants, respectively. The Patients' Global Assessment rated a fair and a good response in 6 (54.5%) and 5 (45.5%) participants, respectively. The procedure was well tolerated, and all patients were satisfied. Mild discomfort and transient erythema were quite common during or immediately after the procedure but were only temporary. During the monthly follow-up, no complications or scars were observed. Conclusions: Microneedling is known as a simple, office-based collagen induction therapy. A low-concentration TCA solution applied to an epidermis made more permeable by microneedling can reach the dermis more effectively. In the present study, chemical peels with 10% TCA acted as an adjuvant to microneedling, causing controlled skin damage and promoting regeneration and rejuvenation of tissues. This combined therapy improved periorbital fine lines, wrinkles, and the overall appearance of the skin. Thus it constitutes an alternative treatment for periorbital skin aging, with encouraging results and minor side-effects.

Keywords: chemical peels, microneedling, periorbital wrinkles, skin laxity

Procedia PDF Downloads 354
196 Numerical Solution of Momentum Equations Using Finite Difference Method for Newtonian Flows in Two-Dimensional Cartesian Coordinate System

Authors: Ali Ateş, Ansar B. Mwimbo, Ali H. Abdulkarim

Abstract:

The general transport equation has a wide range of applications in fluid mechanics and heat transfer problems. When the variable φ in this equation, which represents a flow property, is taken to be a fluid velocity component, the general transport equation turns into the momentum equations, better known as the Navier-Stokes equations. For these non-linear differential equations, numerical solutions are more frequently sought than analytical ones. The finite difference method is a commonly used numerical solution method. Using velocity and pressure gradients instead of stress tensors in these equations decreases the number of unknowns, and by adding the continuity equation to the system, the number of equations is made equal to the number of unknowns. In this situation, velocity and pressure components emerge as the two important parameters, and in the solution of the differential equation system, velocities and pressures must be solved together. However, when pressure and velocity values are solved jointly at the same nodal points of the considered grid system, some problems arise. To overcome this, a staggered grid system is a preferred solution method. Various algorithms have been developed for computer solutions on staggered grids; of these, the two most commonly used are the SIMPLE and SIMPLER algorithms. In this study, the Navier-Stokes equations were numerically solved for a Newtonian, incompressible, laminar flow with body and gravitational forces neglected, in a hydrodynamically fully developed region, in a two-dimensional Cartesian coordinate system. The finite difference method was chosen as the solution method. This is a parametric study in which varying values of velocity components, pressure, and Reynolds numbers were used. The differential equations were discretized using the central difference and hybrid schemes, and the discretized equation system was solved by the Gauss-Seidel iteration method.
SIMPLE and SIMPLER were used as solution algorithms. The results obtained with the central difference and hybrid discretization methods were compared, and the SIMPLE and SIMPLER solution algorithms were compared to each other. It was observed that the hybrid discretization method gave better results over a larger area. Furthermore, it can be said that, despite some disadvantages, the SIMPLER algorithm is more practical as a computer solution algorithm and gives results in a shorter time. For this study, a code was developed in the Delphi programming language. The values obtained by the computer program were converted into graphs and discussed. During plotting, the quality of the graphs was increased by adding intermediate values to the obtained results using the Lagrange interpolation formula. The numbers of grid cells and nodes for the solution were estimated. At the same time, to show that the obtained results are satisfactory, a grid-independence (GCI) analysis of the solution domain was carried out for coarse, medium, and fine grid systems. When the graphs and program outputs were compared with similar studies, highly satisfactory results were observed.
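As a minimal illustration of the Gauss-Seidel inner solve used on the discretized system, the sketch below applies Gauss-Seidel sweeps to a 2D Laplace problem with fixed boundary values, of the kind SIMPLE-type algorithms solve for the pressure correction. Python is used here only for brevity (the study's own code was written in Delphi), and the grid size, tolerance, and boundary values are illustrative assumptions.

```python
import numpy as np

def gauss_seidel_laplace(phi, tol=1e-6, max_iter=10_000):
    """In-place Gauss-Seidel sweeps for the 2D Laplace equation on a
    uniform grid with fixed (Dirichlet) boundary values.

    Each interior node is replaced by the average of its four
    neighbours, using the newest available values as soon as they are
    computed -- the defining feature of Gauss-Seidel iteration.
    Returns the number of sweeps needed to converge.
    """
    for it in range(max_iter):
        max_change = 0.0
        for i in range(1, phi.shape[0] - 1):
            for j in range(1, phi.shape[1] - 1):
                new = 0.25 * (phi[i + 1, j] + phi[i - 1, j]
                              + phi[i, j + 1] + phi[i, j - 1])
                max_change = max(max_change, abs(new - phi[i, j]))
                phi[i, j] = new
        if max_change < tol:
            return it + 1
    return max_iter

# Toy problem: top wall held at 1, the other three walls at 0.
phi = np.zeros((20, 20))
phi[0, :] = 1.0
iters = gauss_seidel_laplace(phi)
print(iters, phi[10, 10])
```

In a SIMPLE or SIMPLER loop, a solve like this is repeated every outer iteration, which is why the choice of inner solver and discretization scheme matters for run time.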

Keywords: finite difference method, GCI analysis, numerical solution of the Navier-Stokes equations, SIMPLE and SIMPLER algorithms

Procedia PDF Downloads 390
195 Stuck Spaces as Moments of Learning: Uncovering Threshold Concepts in Teacher Candidate Experiences of Teaching in Inclusive Classrooms

Authors: Joy Chadwick

Abstract:

There is no doubt that classrooms of today are more complex and diverse than ever before. Preparing teacher candidates to meet these challenges is essential to ensure the retention of teachers within the profession and to ensure that graduates begin their teaching careers with the knowledge and understanding of how to effectively meet the diversity of students they will encounter. Creating inclusive classrooms requires teachers to have a repertoire of effective instructional skills and strategies. Teachers must also have the mindset to embrace diversity and value the uniqueness of individual students in their care. This qualitative study analyzed teacher candidates' experiences as they completed a fourteen-week teaching practicum while simultaneously completing a university course focused on inclusive pedagogy. The research investigated the challenges and successes teacher candidates had in navigating the translation of theory related to inclusive pedagogy into their teaching practice. Applying threshold concept theory as a framework, the research explored the troublesome concepts, liminal spaces, and transformative experiences as connected to inclusive practices. Threshold concept theory suggests that within all disciplinary fields, there exist particular threshold concepts that serve as gateways or portals into previously inaccessible ways of thinking and practicing. It is in these liminal spaces that conceptual shifts in thinking and understanding and deep learning can occur. The threshold concept framework provided a lens to examine teacher candidate struggles and successes with the inclusive education course content and the application of this content to their practicum experiences. A qualitative research approach was used, which included analyzing twenty-nine course reflective journals and six follow-up one-to-one semi-structured interviews. The journals and interview transcripts were coded and themed using NVivo software.
Threshold concept theory was then applied to the data to uncover the liminal or stuck spaces of learning and the ways in which the teacher candidates navigated those challenging places of teaching. The research also sought to uncover potential transformative shifts in teacher candidate understanding as connected to teaching in an inclusive classroom. The findings suggested that teacher candidates experienced difficulties when they did not feel they had the knowledge, skill, or time to meet the needs of the students in the way they envisioned they should. To navigate the frustration of this thwarted vision, they relied on present and previous course content and experiences, collaborative work with other teacher candidates and their mentor teachers, and a proactive approach to planning for students. Transformational shifts were most evident in their ability to reframe their perceptions of children from a deficit or disability lens to a strength-based belief in the potential of students. It was evident that through their course work and practicum experiences, their beliefs regarding struggling students shifted as they saw the value of embracing neurodiversity, the importance of relationships, and planning for and teaching through a strength-based approach. Research findings have implications for teacher education programs and for understanding threshold concepts theory as connected to practice-based learning experiences.

Keywords: inclusion, inclusive education, liminal space, teacher education, threshold concepts, troublesome knowledge

Procedia PDF Downloads 79
194 Analysis of Elastic-Plastic Deformation of Reinforced Concrete Shear-Wall Structures under Earthquake Excitations

Authors: Oleg Kabantsev, Karomatullo Umarov

Abstract:

The engineering analysis of earthquake consequences demonstrates significantly different levels of damage to load-bearing systems of different types. Buildings with reinforced concrete columns and separate shear walls receive the highest level of damage. Traditional methods for predicting damage under earthquake excitations do not answer the question of why reinforced concrete frames with shear-wall bearing systems are more vulnerable. Thus, studying the formation and accumulation of damage in reinforced concrete frame structures with shear walls requires new methods of assessing the stress-strain state, as well as new approaches to calculating the distribution of forces and stresses in the load-bearing system that account for the various mechanisms of elastic-plastic deformation of reinforced concrete columns and walls. The results of research into the processes of non-linear deformation of structures up to failure (collapse) make it possible to substantiate the characteristics of the limit states of the various structures forming an earthquake-resistant load-bearing system. The research into the elastic-plastic deformation processes of reinforced concrete frames with shear walls is carried out on the basis of experimentally established parameters of the limit deformations of concrete and reinforcement under dynamic excitations. Limit values of the deformations are defined for conditions under which local damage of the maximum permissible level forms in the structures. The research is performed by numerical methods using ETABS software. The results indicate that, under earthquake excitations, plastic deformations of various levels form in the various element groups of the frame-with-shear-wall load-bearing system.
During the main period of seismic excitation, insignificant volumes of plastic deformation, significantly below the permissible level, occur in the shear-wall elements of the load-bearing system. At the same time, plastic deformations form in the columns without exceeding the permissible value. At the final stage of seismic excitation, the level of plastic deformation in the shear walls reaches values corresponding to the plasticity coefficient of concrete, which is less than the maximum permissible value. This volume of plastic deformation leads to an increase in the general deformations of the bearing system. With the specified deformation of the shear walls, plastic deformations exceeding the limiting values develop in the concrete columns, which leads to their collapse. Based on the results presented in this study, it can be concluded that applying a seismic-force-reduction factor that is common to the whole load-bearing system does not correspond to the real conditions of formation and accumulation of damage in the elements of that system. Using a single seismic-force-reduction factor leads to errors in predicting the seismic resistance of reinforced concrete load-bearing systems. In order to provide the required level of seismic resistance in buildings with reinforced concrete columns and separate shear walls, it is necessary to use values of the seismic-force-reduction factor differentiated by structural group.
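The effect of differentiating the reduction factor can be illustrated with the standard relation in which the design force is the elastic seismic demand divided by the reduction factor R. The elastic demands and R values below are hypothetical numbers chosen only to show the mechanism the abstract describes, not results from the study.

```python
# Illustrative comparison of a single vs. a differentiated
# seismic-force-reduction factor R. All numbers are hypothetical.

def design_force(elastic_demand, reduction_factor):
    """Reduced design force: elastic seismic demand divided by R."""
    return elastic_demand / reduction_factor

# Hypothetical elastic demands per structural group, kN
elastic = {"shear_walls": 1200.0, "columns": 400.0}

# One factor for the whole load-bearing system
single_R = 4.0
single = {k: design_force(v, single_R) for k, v in elastic.items()}

# Differentiated factors: columns, which accumulate plastic
# deformation first, are assigned a smaller (more conservative) R
diff_R = {"shear_walls": 4.0, "columns": 2.0}
differentiated = {k: design_force(v, diff_R[k]) for k, v in elastic.items()}

print(single)          # columns designed for 100.0 kN
print(differentiated)  # columns designed for 200.0 kN -> more capacity
```

Under the differentiated factors the columns are designed for twice the force, which is the kind of added capacity the conclusion argues is needed to prevent their premature collapse.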

Keywords: reinforced concrete structures, earthquake excitation, plasticity coefficients, seismic-force-reduction factor, nonlinear dynamic analysis

Procedia PDF Downloads 205
193 Treatment of Neuronal Defects by Bone Marrow Stem Cells Differentiation to Neuronal Cells Cultured on Gelatin-PLGA Scaffolds Coated with Nano-Particles

Authors: Alireza Shams, Ali Zamanian, Atefehe Shamosi, Farnaz Ghorbani

Abstract:

Introduction: Although the application of new strategies remains a remarkable challenge for the treatment of disabilities due to neuronal defects, progress in nanomedicine and tissue engineering is suggesting new medical methods. One promising strategy for reconstruction and regeneration of nervous tissue is replacement of lost or damaged cells using specific scaffolds after compressive, ischemic, and traumatic injuries of the central nervous system. Furthermore, the ultrastructure, composition, and arrangement of tissue scaffolds affect cell grafts. We followed the implantation and differentiation of mesenchymal stem cells into neural cells on gelatin/poly(lactic-co-glycolic acid) (PLGA) scaffolds coated with iron nanoparticles. The aim of this study was to evaluate the capability of stem cells to differentiate into motor neuron-like cells under topographical cues and morphogenic factors. Methods and Materials: Bone marrow mesenchymal stem cells (BMMSCs) were obtained by primary culture of adult rat bone marrow harvested from the femur by the flushing method. BMMSCs were incubated in DMEM/F12 (Gibco) with 15% FBS and 100 U/ml pen/strep as media. BMMSCs were then seeded on Gel/PLGA scaffolds and tissue culture polystyrene (TCP) embedded and incorporated with iron nanoparticles (FeNPs) (Fe3O4, Mw = 270.30 g/mol). For neuronal differentiation, 2×10^5 BMMSCs were seeded on Gel/PLGA/FeNPs scaffolds and cultured for 7 days, with 0.5 µmol retinoic acid, 100 µmol ascorbic acid, 10 ng/ml basic fibroblast growth factor (Sigma, USA), 250 μM isobutylmethylxanthine, 100 μM 2-mercaptoethanol, and 0.2% B27 (Invitrogen, USA) added to the media. Proliferation of BMMSCs was assessed using the MTT cell-survival assay. The morphology of BMMSCs and scaffolds was investigated by scanning electron microscopy. Expression of neuron-specific markers was studied by immunohistochemistry.
Data were analyzed by analysis of variance, and statistical significance was determined by Tukey’s test. Results: Our results revealed that differentiation and survival of BMMSCs into motor neuron-like cells were better on Gel/PLGA/FeNPs, a biocompatible and biodegradable scaffold, than on Gel/PLGA without FeNPs or on TCP. FeNPs increased the mechanical strength of the scaffolds but decreased their absorption capacity. Well-defined, oriented pores formed in the scaffolds by the FeNPs may activate differentiation and synchronize cells by acting as a mechanoreceptor. The inductive effect of the magnetic FeNPs, via one-way flow channels in the scaffolds, helps guide the cells and can direct the growth of their processes. Discussion: This investigation evaluated the biological properties of BMMSCs and the effects of FeNPs dispersed under a magnetic field. The in vitro study showed that the Gel/PLGA/FeNPs scaffold provided a suitable structure for differentiation of motor neuron-like cells. It could be a promising candidate for enhancing repair and regeneration in neural defects. Dynamic and static magnetic fields for inducing and organizing cells may provide better results in further experimental studies.

Keywords: differentiation, mesenchymal stem cells, nanoparticles, neuronal defects, scaffolds

Procedia PDF Downloads 166
192 TNF Modulation of Cancer Stem Cells in Renal Clear Cell Carcinoma

Authors: Rafia S. Al-lamki, Jun Wang, Simon Pacey, Jordan Pober, John R. Bradley

Abstract:

Tumor necrosis factor alpha (TNF), signaling through TNFR2, may act as an autocrine growth factor for renal tubular epithelial cells. Clear cell renal carcinomas (ccRCC) contain cancer stem cells (CSCs) that give rise to progeny which form the bulk of the tumor. CSCs are rarely in cell cycle and, as non-proliferating cells, resist most chemotherapeutic agents; recurrence after chemotherapy may therefore result from the survival of CSCs. Therapeutic targeting of both CSCs and the more differentiated bulk tumor populations may provide a more effective strategy for treatment of RCC. In this study, we hypothesized that TNFR2 signaling induces CSCs in ccRCC to enter the cell cycle, so that treatment with ligands that engage TNFR2 will render CSCs susceptible to chemotherapy. To test this hypothesis, we utilized wild-type TNF (wtTNF) or muteins selective for TNFR1 (R1TNF) or TNFR2 (R2TNF) to treat either short-term organ cultures of ccRCC and adjacent normal kidney (NK) tissue or cultures of CD133+ cells isolated from ccRCC and adjacent NK, hereafter referred to as stem cell-like cells (SCLCs). The effect of cyclophosphamide (CP), currently an effective anticancer agent, was tested on CD133+ SCLCs from ccRCC and NK before and after R2TNF treatment. Responses to TNF were assessed by flow cytometry (FACS), immunofluorescence, quantitative real-time PCR, TUNEL, and cell viability assays. The cytotoxic effect of CP was analyzed by Annexin V and propidium iodide staining with FACS. In addition, we assessed the effect of TNF on differentiation of isolated SCLCs using a three-dimensional (3D) culture system. Clinical samples of ccRCC contain a greater number of SCLCs than NK, and the number of SCLCs increases with higher tumor grade. Isolated SCLCs express stemness markers (Oct4, Nanog, Sox2, Lin28) but not differentiation markers (cytokeratin, CD31, CD45, and EpCAM).
In ccRCC organ cultures, wtTNF and R2TNF increase CD133 and TNFR2 expression and promote cell cycle entry, whereas wtTNF and R1TNF increase TNFR1 expression and promote cell death of SCLCs. Similar findings were observed in SCLCs isolated from NK, but the effect was greater in SCLCs isolated from ccRCC. CP distinctly triggered apoptotic and necrotic cell death in SCLCs pre-treated with R2TNF compared to CP treatment alone, with SCLCs from ccRCC more sensitive to CP than SCLCs from NK. Furthermore, TNF promotes differentiation of SCLCs to an epithelial phenotype in 3D cultures, confirmed by cytokeratin expression and loss of the stemness markers Nanog and Sox2. The differentiated cells show positive expression of TNF and TNFR2. These findings provide evidence that selective engagement of TNFR2 drives CSCs to proliferation and differentiation, and that targeting cycling cells with a TNFR2 agonist in combination with anti-cancer agents may be a potential therapy for RCC.

Keywords: cancer stem cells, ccRCC, cell cycle, cell death, TNF, TNFR1, TNFR2, CD133

Procedia PDF Downloads 262
191 Ethical Decision-Making in AI and Robotics Research: A Proposed Model

Authors: Sylvie Michel, Emmanuelle Gagnou, Joanne Hamet

Abstract:

Researchers in the fields of AI and robotics frequently encounter ethical dilemmas throughout their research endeavors. Various ethical challenges have been pinpointed in the existing literature, including biases and discriminatory outcomes, diffusion of responsibility, and a deficit of transparency within AI operations. This research aims to identify the ethical quandaries faced by researchers and shed light on the mechanisms behind ethical decision-making in the research process. By synthesizing insights from existing literature and acknowledging prevalent shortcomings, such as overlooking the heterogeneous nature of decision-making, non-cumulative results, and a lack of consensus on numerous factors due to limited empirical research, the objective is to conceptualize and validate a model. This model incorporates influences from individual perspectives and situational contexts, considering potential moderating factors in the ethical decision-making process. Qualitative analyses were conducted based on direct observation, over several months, of an AI/robotics research team focusing on collaborative robotics. Subsequently, semi-structured interviews with 16 team members were conducted. The entire process took place during the first semester of 2023. Observations were analyzed using an analysis grid, and the interviews underwent thematic analysis using NVivo software. An initial finding involves identifying the ethical challenges that AI/robotics researchers confront, underlining a disparity between practical applications and theoretical considerations regarding ethical dilemmas in AI. Notably, AI researchers prioritize the publication and recognition of their work, which sparks these ethical inquiries.
Furthermore, this article illustrates that researchers tend to embrace a consequentialist ethical framework concerning safety (for humans engaging with robots/AI), worker autonomy in relation to robots, and the societal implications of labor (can robots displace jobs?). A second significant contribution entails proposing a model for ethical decision-making within the AI/robotics research sphere. The proposed model adopts a process-oriented approach, delineating various research stages (topic proposal, hypothesis formulation, experimentation, conclusion, and valorization). Across these stages and the ethical queries they entail, a four-component understanding of ethical decision-making is presented: recognition of the moral quandary; moral judgment, signifying the decision-maker's aptitude to discern the morally righteous course of action; moral intention, reflecting the ability to prioritize moral values above others; and moral behavior, denoting the application of moral intention to the situation. Variables such as political inclinations ((anti-)capitalism, environmentalism, veganism) seem to wield significant influence. Moreover, age emerges as a noteworthy moderating factor. AI and robotics researchers are continually confronted with ethical dilemmas during their research endeavors, necessitating thoughtful decision-making. The contribution involves introducing a contextually tailored model, derived from meticulous observations and insightful interviews, that enables identification of the factors shaping ethical decision-making at different stages of the research process.
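The staged, four-component model can be sketched as a simple data structure. The stage names and the four decision components come from the abstract; the `moderators` field and the idea of scoring each component per stage are illustrative assumptions, not part of the authors' instrument:

```python
from dataclasses import dataclass, field

# Research stages of the process-oriented model (from the abstract).
STAGES = ["topic proposal", "hypothesis formulation", "experimentation",
          "conclusion", "valorization"]

# Four components of ethical decision-making (from the abstract).
COMPONENTS = ["moral recognition", "moral judgment",
              "moral intention", "moral behavior"]


@dataclass
class EthicalAssessment:
    """One ethical-decision record for a single research stage."""
    stage: str
    # Each component starts unassessed (None); a study instrument
    # could later fill in qualitative codes or ratings.
    components: dict = field(
        default_factory=lambda: {c: None for c in COMPONENTS})
    # Hypothetical moderating variables, e.g. age, political inclination.
    moderators: dict = field(default_factory=dict)


# One assessment per stage of the research process.
assessments = [EthicalAssessment(stage=s) for s in STAGES]
for a in assessments:
    print(a.stage, "->", list(a.components))
```

The design choice here is simply that moderators attach to the whole record while the four components are tracked per stage, mirroring the model's claim that the same decision components recur at every stage of the research process.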

Keywords: ethical decision making, artificial intelligence, robotics, research

Procedia PDF Downloads 79