Search results for: feature for feature match
126 Cognitive Dissonance in Robots: A Computational Architecture for Emotional Influence on the Belief System
Authors: Nicolas M. Beleski, Gustavo A. G. Lugo
Abstract:
Robotic agents are taking on increasingly important roles in society. To make these robots and agents more autonomous and efficient, their systems have grown considerably complex and convoluted. This growth in complexity has led researchers to investigate ways to explain the AI behavior behind these systems in search of more trustworthy interactions. A current problem in explainable AI is the inner workings of the logic inference process and how to conduct a sensitivity analysis of the process of valuation and alteration of beliefs. In a social HRI (human-robot interaction) setup, theory of mind is crucial to ease the intentionality gap; to achieve it, we should be able to infer over observed human behaviors, such as cases of cognitive dissonance. One specific case inspired by human cognition is the role emotions play in our belief system and the effects caused when observed behavior does not match the expected outcome. In such scenarios emotions can make a person wrongly assume the antecedent P for an observed consequent Q and, as a result, incorrectly assert that P is true. This form of cognitive dissonance, where an unproven cause is taken as truth, induces changes in the belief base which can directly affect future decisions and actions. If we aim to be inspired by human thought in order to apply levels of theory of mind to these artificial agents, we must find the conditions to replicate these observable cognitive mechanisms. To this end, a computational architecture is proposed to model the modulating effect emotions have on the belief system, how it affects the logic inference process, and consequently the decision making of an agent. To validate the model, an experiment based on the prisoner's dilemma is currently under development.
The hypothesis to be tested involves two main points: how emotions, modeled as internal argument strength modulators, can alter inference outcomes, and how explainable outcomes can be produced under specific forms of cognitive dissonance.
Keywords: cognitive architecture, cognitive dissonance, explainable AI, sensitivity analysis, theory of mind
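As a minimal illustration of the stated hypothesis (not the authors' architecture, which is still under development), emotions can be sketched as multiplicative strength modulators on an abductive inference: given the rule P → Q and an observed Q, abduction proposes P, and an emotional gain can push P past the acceptance threshold, reproducing the "affirming the consequent" form of cognitive dissonance. The function names, threshold, and numeric values below are illustrative assumptions:

```python
# Hypothetical sketch: emotions as multiplicative strength modulators
# over an abductive hypothesis. Threshold and gains are assumptions.

ACCEPT_THRESHOLD = 0.7

def abduce(rule_strength: float, emotion_gain: float) -> bool:
    """Return True if antecedent P is (perhaps wrongly) asserted
    after observing consequent Q, given the strength of rule P -> Q
    and the agent's current emotional gain."""
    modulated = min(1.0, rule_strength * emotion_gain)
    return modulated >= ACCEPT_THRESHOLD

# A neutral agent rejects the weak abductive inference...
print(abduce(rule_strength=0.5, emotion_gain=1.0))   # False
# ...but a strong emotional state pushes the same argument over threshold.
print(abduce(rule_strength=0.5, emotion_gain=1.6))   # True
```

In this toy form, an "emotional" agent asserts P on the same evidence a neutral agent rejects, which is the belief-base change the abstract describes.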
Procedia PDF Downloads 132
125 Ranking Theory - The Paradigm Shift in Statistical Approach to the Issue of Ranking in a Sports League
Authors: E. Gouya Bozorg
Abstract:
The ranking of sports teams, in particular soccer teams, is of primary importance in professional sports. However, it is still based on classical statistics and on models outside the area of mathematics. Despite the expectations held of them, rigorous mathematics and statistics have not been able to engage effectively with the issue of ranking, a situation that calls for serious examination. The purpose of this study is to change the approach so as to bring mathematics proper closer to use in ranking. We recommend theoretical mathematics as a good option because it can hermeneutically obtain the theoretical concepts and criteria needed for ranking from the everyday language of a league. We have proposed a framework that puts the issue of ranking into a new space, applied here to soccer as a case study. This is an experimental and theoretical study of ranking in a professional soccer league based on theoretical mathematics, followed by theoretical statistics. First, we gave the theoretical definition of the constant Є = 1.33, the 'golden number' of a soccer league. Then, we defined the 'efficiency of a team' by this number and the formula μ = (Pts / (k·Є)) − 1, in which Pts is the number of points obtained by a team in k games played. Moreover, the index k·Є has been used to show the theoretical median line in the league table and to compare top teams and bottom teams. A theoretical coefficient σ = 1 / (1 + (Ptx / Ptxn)) has also been defined for every match between teams x and xn holding Ptx and Ptxn points respectively; it gives a performance point resulting in a special ranking for the league, and it has been particularly useful in evaluating the performance of weaker teams. The theory has been examined against the statistical data of 4 major European leagues over the period 1998-2014.
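The indices defined above are simple enough to sketch directly; the code below restates them with ASCII names (E for Є) and illustrative point totals, not data from the four leagues studied:

```python
# Sketch of the abstract's ranking indices:
# league constant E = 1.33, team efficiency mu = Pts/(k*E) - 1,
# match coefficient sigma = 1/(1 + Ptx/Ptxn).

E = 1.33  # the 'golden number' of a soccer league, per the abstract

def efficiency(pts: float, k: int) -> float:
    """Efficiency mu of a team with pts points after k games played."""
    return pts / (k * E) - 1

def match_coefficient(ptx: float, ptxn: float) -> float:
    """Performance coefficient sigma for a match between teams x and xn
    holding Ptx and Ptxn points respectively."""
    return 1 / (1 + ptx / ptxn)

def theoretical_median(k: int) -> float:
    """k*E, the theoretical median line of the league table."""
    return k * E

# Example: a team sits above the median line exactly when mu > 0.
print(round(efficiency(40, 30), 3))   # 40 points in 30 games
print(round(theoretical_median(30), 1))
```

Note that σ is larger when the opponent xn holds more points, which is what lets the index credit weaker teams for good results against strong ones.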
Results of this study showed that the issue of ranking depends on appropriate theoretical indicators of a league. These indicators allowed us to find different forms of ranking of teams in a league, including the 'special table' of a league. Furthermore, on this basis the issue of a team's record has been revised and amended. In addition, the theory of ranking can be used to compare and classify different leagues and tournaments. Experimental results obtained from the archival statistics of major professional leagues over the past two decades have confirmed the theory. This topic introduces a new theory for the ranking of a soccer league, which can also be used to compare different leagues and tournaments.
Keywords: efficiency of a team, ranking, special table, theoretical mathematics
Procedia PDF Downloads 418
124 To Investigate a Discharge Planning Connect with Long Term Care 2.0 Program in a Medical Center in Taiwan
Authors: Chan Hui-Ya, Ding Shin-Tan
Abstract:
Background and Aim: Discharge planning is considered helpful in reducing hospital length of stay and readmission rate, thereby increasing satisfaction with healthcare for patients and professionals. In order to decrease the waiting time for long-term care and boost the quality of care after discharge from the hospital, the Ministry of Health and Welfare in Taiwan initiated the program 'discharge planning connects with long-term care 2.0 services' in 2017. The purpose of this study is to investigate the outcome of the pilot of this program in a medical center. Methods: By purposive sampling, the study chose five wards in a medical center as pilot units. The researchers compared the beds in service, the number of cases transferred to the long-term care center, and the monthly transfer rates between the pilot units and the other units, and analyzed the basic data, long-term care service needs, and approved service items of cases transferred to the long-term care center in the pilot units. Results: From June to September 2017, a total of 92 referrals were made, and 51 patients were enrolled into the pilot program. There was a significant difference in transfer rate between the pilot units and the other units (χ² = 702.6683, p < 0.001). Only 20 cases (39.2% success rate) were approved for some of the long-term care service items in the pilot units. The most frequently approved item was respite care service (n = 13; 65%), although it ranked third among needs on the service lists during the linking process. Among the reasons given by patients who cancelled the request, 38.71% related to services that could not match the patients' needs and expectations. Conclusion: The results indicate a need to modify the long-term care services to fit the needs of cases.
The researchers suggest estimating potential cases by screening data from hospital information systems and hiring more case managers according to the service time of potential cases. Meanwhile, strategies such as shortening the assessment scale and authorizing hospital case managers to approve some items of long-term care should be considered.
Keywords: discharge planning, long-term care, case manager, patient care
Procedia PDF Downloads 286
123 Evaluation of Different Food Baits by Using Kill Traps for the Control of Lesser Bandicoot Rat (Bandicota bengalensis) in Field Crops of Pothwar Plateau, Pakistan
Authors: Nadeem Munawar, Iftikhar Hussain, Tariq Mahmood
Abstract:
The lesser bandicoot rat (Bandicota bengalensis) is widely distributed and a serious agricultural pest in Pakistan. It is well adapted to the rice-wheat-sugarcane cropping systems of Punjab, Sindh, and Khyber Pakhtunkhwa and the wheat-groundnut cropping system of the Pothwar area, inflicting heavy losses on these crops. The comparative efficacies of four food baits (onion, guava, potato, and peanut butter smeared on bread/chapatti) were tested in multiple feeding tests for kill trapping of this rat species in the Pothwar Plateau from October 2013 to July 2014 at the sowing, tillering, flowering, and maturity stages of the wheat, groundnut, and millet crops. The results revealed that guava was the most preferred bait of the four, presumably due to its particular taste and smell. Among the tested baits, guava also scored the highest trapping success of 16.94 ± 1.42 percent, followed by peanut butter, potato, and onion with trapping successes of 10.52 ± 1.30, 7.82 ± 1.21, and 4.5 ± 1.10 percent, respectively. Across crop stages and seasons, the highest trapping success was achieved at the maturity stages of the crops, presumably due to higher surface activity of the rat under favorable climatic conditions, good shelter, and food abundance. Moreover, the maturity stage of the wheat crop coincided with the spring breeding season, and the maturity stages of millet and groundnut matched the monsoon/autumn breeding peak of the lesser bandicoot rat in the Pothwar area. The order of preference among the four baits tested was guava > peanut butter > potato > onion. The study recommends that farmers periodically carry out rodent trapping at the beginning of each crop season and during the non-breeding seasons of this rodent pest, when populations are low in number and restricted to crop boundary vegetation, particularly during very hot and cold months.
Keywords: Bandicota bengalensis, efficacy, food baits, Pothwar
Procedia PDF Downloads 269
122 Crime Prevention with Artificial Intelligence
Authors: Mehrnoosh Abouzari, Shahrokh Sahraei
Abstract:
Today, with the increase in the quantity, severity, and variety of crimes, crime prevention faces a serious challenge: human resources alone, using traditional methods, will not be effective. One of the developments of the modern world is the presence of artificial intelligence in various fields, including criminal law. In fact, the use of artificial intelligence in criminal investigations and in fighting crime is a necessity in today's world, and its application goes far beyond, and is even separate from, that of other technologies in the struggle against crime; moreover, its application in criminal science extends beyond prevention to the prediction of crime. Crime prevention addresses three factors (the offense, the offender, and the victim) by changing their conditions: on the assumption that the offender is rational, it increases the cost and risk of crime so that he desists from delinquency, or it makes the victim aware of self-care and of the possibility of exposure to danger, or it makes crimes more difficult to commit. Meanwhile, artificial intelligence in the field of combating crime and social harm, like an all-seeing eye unrestricted by time and place, looks to the future and predicts the occurrence of a possible crime, thereby preventing crimes from occurring. The purpose of this article is to collect and analyze the studies conducted on the use of artificial intelligence in predicting and preventing crime: how capable is this technology of predicting crime and preventing it? The results show that the artificial intelligence technologies in use are capable of predicting and preventing crime and can find patterns in large data sets far more efficiently than humans.
In crime prediction and prevention, the term artificial intelligence refers to the increasing use of technologies that apply algorithms to large sets of data to assist or replace police. In our discussion, artificial intelligence is used for predicting and preventing crime, including predicting the time and place of future criminal activities, effectively identifying patterns, and accurately predicting future behavior through data mining, machine learning, deep learning, data analysis, and neural networks. Because the knowledge of criminologists can provide insight into risk factors for criminal behavior, among other issues, computer scientists can match this knowledge with the datasets that artificial intelligence draws on.
Keywords: artificial intelligence, criminology, crime, prevention, prediction
Procedia PDF Downloads 77
121 Association of 105A/C IL-18 Gene Single Nucleotide Polymorphism with House Dust Mite Allergy in an Atopic Filipino Population
Authors: Eisha Vienna M. Fernandez, Cristan Q. Cabanilla, Hiyasmin Lim, John Donnie A. Ramos
Abstract:
Allergy is a multifactorial disease affecting a significant proportion of the population. It develops through the interaction of allergens and the presence of certain polymorphisms in various susceptibility genes. In this study, the correlation between the 105A/C single nucleotide polymorphism (SNP) of the IL-18 gene and house dust mite-specific IgE in allergic and non-allergic Filipino populations was investigated. Atopic status was defined by a serum total IgE concentration of ≥100 IU/mL, while house dust mite allergy was defined by a specific IgE value ≥ +1 SD of the IgE of non-atopic participants. Two hundred twenty matched-pair Filipino cases and controls aged 6-60 were the subjects of this investigation. The levels of total IgE and specific IgE were measured using Enzyme-Linked Immunosorbent Assay (ELISA), while Polymerase Chain Reaction – Restriction Fragment Length Polymorphism (PCR-RFLP) analysis was used for SNP detection. Sensitization profiles of the allergic patients revealed that 97.3% were sensitized to Blomia tropicalis, 40.0% to Dermatophagoides farinae, and 29.1% to Dermatophagoides pteronyssinus. Multiple sensitization to HDMs was also observed in 47.27% of the atopic participants. The cases exhibited the allergy classes of the atopic triad (allergic asthma: 48.18%; allergic rhinitis: 62.73%; atopic dermatitis: 19.09%), with two or all of these atopic states concurrent in 26.36% of the cases. A greater proportion of the atopic participants with allergic asthma and allergic rhinitis were sensitized to D. farinae and D. pteronyssinus, while more of those with atopic dermatitis were sensitized to D. pteronyssinus than to D. farinae. Results show an overrepresentation of the allele “A” of the 105A/C IL-18 gene SNP in both the case and control groups of the population.
The genotype that predominates in the population is the heterozygous “AC”, followed by the homozygous wild type “AA”, with the homozygous variant “CC” the least common. The study confirmed a positive association between serum specific IgE against B. tropicalis and D. pteronyssinus and the allele “C” (Bt P = 0.021, Dp P = 0.027) and the “AC” genotype (Bt P = 0.003, Dp P = 0.026). Findings also revealed that the genotypes “AA” (OR: 1.217; 95% CI: 0.701-2.113) and “CC” (OR: 3.5; 95% CI: 0.727-16.849) increase the risk of developing allergy. This indicates that the 105A/C IL-18 gene SNP is a candidate genetic marker for HDM allergy among Filipino patients.
Keywords: house dust mite allergy, interleukin-18 (IL-18), single nucleotide polymorphism
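The odds ratios and confidence intervals quoted above come from standard 2×2 case-control arithmetic; the sketch below shows that computation with the Woolf log method on made-up counts (the study's genotype tables are not reproduced here):

```python
# Hedged sketch: odds ratio and 95% CI from a 2x2 genotype-by-status
# table, as used in case-control association studies. The counts are
# illustrative placeholders, not the study's data.
import math

def odds_ratio_ci(a, b, c, d):
    """a, b = genotype carriers among cases/controls;
    c, d = non-carriers among cases/controls.
    Returns (OR, lower95, upper95) via the Woolf log method."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi

orr, lo, hi = odds_ratio_ci(30, 20, 80, 90)
print(round(orr, 3), round(lo, 3), round(hi, 3))
```

A confidence interval that spans 1.0, as reported here for both “AA” and “CC”, means the elevated odds ratio is not statistically significant at the 5% level.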
Procedia PDF Downloads 459
120 Effect of Different Ground Motion Scaling Methods on Behavior of 40 Story RC Core Wall Building
Authors: Muhammad Usman, Munir Ahmed
Abstract:
The demand for high-rise buildings has grown rapidly during the past decades, and the design of these buildings using RC core walls has become widespread in many countries. RC core wall (RCCW) buildings comprise a central core wall and boundary columns joined through post-tensioned slabs at the floor levels. The core wall usually provides greater stiffness than the collective stiffness of the boundary columns; hence, the core wall dominantly resists lateral loading, i.e., wind or earthquake load. Non-linear response history analysis (NLRHA) is currently the most refined seismic design procedure for high-rise buildings, and modern tools for nonlinear response history analysis and performance-based design have given designers more confidence in these structures. NLRHA requires the selection and scaling of ground motions to match the design spectrum for site-specific conditions. Designers use several techniques for scaling ground motion records (time series); time domain and frequency domain scaling are the most common, each with its own benefits and drawbacks. Because NLRHA is a lengthy process, only one technique can feasibly be applied. To the best of the authors' knowledge, no consensus on the best procedures for the selection and scaling of ground motions is available in the literature. This research aims to identify the most suitable ground motion scaling technique specifically for the design of 40-story high-rise RCCW buildings. The seismic response of a 40-story RCCW building is checked by applying both frequency domain and time domain scaling, for sites selected in three critical seismic zones of Pakistan. The results indicate extensive variation in the seismic response of the building between these scalings.
A consensus on the subject still needs to be built by investigating various sites and building heights.
Keywords: 40-story RC core wall building, nonlinear response history analysis, ground motions, time domain scaling, frequency domain scaling
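To make the time domain option concrete: one common variant multiplies the whole record by a single factor chosen so that its response spectrum best fits the design spectrum over a period range. The sketch below computes that least-squares factor on made-up spectral ordinates (not site data from the study, and the paper's own scaling criteria may differ):

```python
# Illustrative time-domain scaling sketch: a single scalar s applied
# to the record, chosen to minimize the least-squares misfit between
# s * record spectrum and the target design spectrum.
# The spectra below are invented arrays for demonstration.

def scale_factor(record_sa, target_sa):
    """Least-squares scalar s minimizing sum((s*r - t)^2) over ordinates."""
    num = sum(r * t for r, t in zip(record_sa, target_sa))
    den = sum(r * r for r in record_sa)
    return num / den

record = [0.8, 0.6, 0.4, 0.3]   # record spectral accelerations (g)
target = [1.2, 0.9, 0.6, 0.45]  # design spectrum ordinates (g)

s = scale_factor(record, target)
print(round(s, 6))  # here every target ordinate is exactly 1.5x the record
scaled = [s * r for r in record]
```

Frequency domain (spectral matching) methods instead adjust the record's frequency content, which changes the waveform itself rather than just its amplitude; that difference is one source of the response variation the abstract reports.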
Procedia PDF Downloads 132
119 Study of Proton-9,11Li Elastic Scattering at 60~75 MeV/Nucleon
Authors: Arafa A. Alholaisi, Jamal H. Madani, M. A. Alvi
Abstract:
The radial form of the nuclear matter distribution, the charge, and the shape of nuclei are essential properties of nuclei and hence of great interest in several areas of nuclear physics research. More than three decades of experiments have employed leptonic probes (such as muons and electrons) to explore nuclear charge distributions, whereas hadronic probes (for example alpha particles and protons) have been used to investigate nuclear matter distributions. In this paper, p-9,11Li elastic scattering differential cross sections in the energy range 60-75 MeV/nucleon have been studied by means of the Coulomb-modified Glauber scattering formalism. By applying the semi-phenomenological Bhagwat-Gambhir-Patil (BGP) nuclear density for the loosely bound, neutron-rich 11Li nucleus, the estimated matter radius is found to be 3.446 fm, which is quite large compared to the known experimental value of 3.12 fm. The results of a microscopic optical model calculation based on the Bethe-Brueckner-Hartree-Fock (BHF) formalism have also been compared. It should be noted that in most phenomenological density models used to reproduce the p-11Li differential elastic scattering cross section data, the calculated matter radius lies between 2.964 and 3.55 fm. The calculated results with the phenomenological BGP model density, and with the nucleon density calculated in the relativistic mean-field (RMF) approach, reproduce the p-9Li and p-11Li experimental data quite nicely compared to Gaussian-Gaussian or Gaussian-Oscillator densities at all energies under consideration. In the approach described here, no free or adjustable parameter has been employed to reproduce the elastic scattering data, as against the well-known optical model based studies that involve at least four to six adjustable parameters to match the experimental data.
The calculated reaction cross sections σR for p-11Li at these energies are quite large compared to the estimated values reported in earlier works, though so far no experimental studies have been performed to measure them.
Keywords: Bhagwat-Gambhir-Patil density, Coulomb modified Glauber model, halo nucleus, optical limit approximation
Procedia PDF Downloads 163
118 The Thinking of Dynamic Formulation of Rock Aging Agent Driven by Data
Authors: Longlong Zhang, Xiaohua Zhu, Ping Zhao, Yu Wang
Abstract:
The construction of mines, railways, highways, water conservancy projects, etc., has formed a large number of high, steep slope wounds in China. Under the premise of slope stability and safety, repairing these wound spaces at minimum cost, in a green manner close to the natural state, has become a new problem. In situ element testing and analysis, monitoring, field quantitative factor classification, and assignment evaluation produce vast amounts of data. Data processing and analysis will inevitably differentiate the morphology, mineral composition, and physicochemical properties of rock wounds, by which appropriate restoration techniques and materials can be dynamically matched. In the present research, based on a grid partition of the slope surface, the combined oxide content of the rock minerals (SiO₂, CaO, MgO, Al₂O₃, Fe₃O₄, etc.) was tested, and the hardness and breakage of the rock texture were classified and assigned values. The data on essential factors were interpolated and normalized in GIS, forming a differential zoning map of the slope space. According to the physical and chemical properties and spatial morphology of the rocks in different zones, organic acids (plant waste fruit, fruit residue, etc.), natural mineral powders (zeolite, apatite, kaolin, etc.), water-retaining agent, and plant gum (melon powder) were mixed in different proportions to form rock aging agents. Spraying aging agents with different formulas on the slopes in different sections can effectively age the fresh rock wound, providing convenience for seed implantation and reducing the transformation of heavy metals in the rocks. Through much practical engineering experience, a dynamic data platform of the rock aging agent formula system has been formed, which provides materials for the restoration of different slopes.
It will also provide a guideline for the mixed use of various natural materials to solve the complex, non-uniform ecological restoration problem.
Keywords: data-driven, dynamic state, high steep slope, rock aging agent, wounds
Procedia PDF Downloads 116
117 NanoFrazor Lithography for Advanced 2D and 3D Nanodevices
Authors: Zhengming Wu
Abstract:
NanoFrazor lithography systems were developed as a first true alternative or extension to standard maskless nanolithography methods such as electron beam lithography (EBL). In contrast to EBL, they are based on thermal scanning probe lithography (t-SPL). Here a heatable ultra-sharp probe tip with an apex of a few nm is used for patterning and simultaneously inspecting complex nanostructures. The heat impact of the probe on a thermally responsive resist generates these high-resolution nanostructures. The patterning depth of each individual pixel can be controlled with better than 1 nm precision using an integrated in-situ metrology method. Furthermore, the inherent imaging capability of the NanoFrazor technology allows markerless overlay, which has been achieved with sub-5 nm accuracy, and it supports stitching layout sections together with < 10 nm error. Pattern transfer from such resist features at below 10 nm resolution has been demonstrated. The technology has proven its value as an enabler of new kinds of ultra-high-resolution nanodevices as well as for improving the performance of existing device concepts. The application range for this new nanolithography technique is very broad, spanning from ultra-high-resolution 2D and 3D patterning to the chemical and physical modification of matter at the nanoscale. Nanometer-precise markerless overlay and non-invasiveness to sensitive materials are among the key strengths of the technology. However, while patterning below 10 nm resolution is achieved, significantly increasing the patterning speed at the expense of resolution is not feasible using the heated tip alone. Towards this end, an integrated laser write head for direct laser sublimation (DLS) of the thermal resist has been introduced for significantly faster patterning of micrometer- to millimeter-scale features.
Remarkably, the areas patterned by the tip and the laser are seamlessly stitched together, and both processes work on the very same resist material, enabling a true mix-and-match process with no development or other processing steps in between. The presentation will include examples of (i) high-quality metal contacting of 2D materials, (ii) tuning photonic molecules, (iii) generating nanofluidic devices, and (iv) generating spintronic circuits. Some of these applications have been enabled only by the unique capabilities of NanoFrazor lithography, such as the absence of damage from a charged particle beam.
Keywords: nanofabrication, grayscale lithography, 2D materials device, nano-optics, photonics, spintronic circuits
Procedia PDF Downloads 72
116 Revision of Arthroplasty in Rheumatoid and Osteoarthritis: Methotrexate and Radiographic Lucency in RA Patients
Authors: Mike T. Wei, Douglas N. Mintz, Lisa A. Mandl, Arielle W. Fein, Jayme C. Burket, Yuo-Yu Lee, Wei-Ti Huang, Vivian P. Bykerk, Mark P. Figgie, Edward F. Di Carlo, Bruce N. Cronstein, Susan M. Goodman
Abstract:
Background/Purpose: Rheumatoid arthritis (RA) patients have excellent total hip arthroplasty (THA) survival, and methotrexate (MTX), an anti-inflammatory disease-modifying drug which may affect bone resorption, may play a role. The purpose of this study is to determine the diagnoses leading to revision THA (rTHA) in RA patients and to assess the association of radiographic lucency with MTX use. Methods: All patients with a validated diagnosis of RA in the institution's THA registry undergoing rTHA from May 2007 to February 2011 were eligible. The diagnosis leading to rTHA and medication use were determined by chart review. Osteolysis was evaluated on available radiographs by measuring maximum lucency in each Gruen zone. Differences between RA patients with and without MTX in osteolysis, demographics, and medications were assessed with chi-squared, Fisher's exact, or Mann-Whitney U tests as appropriate. The error rate for multiple comparisons of lucency in the different Gruen zones was corrected via false discovery rate methods. A secondary analysis was performed to determine differences in diagnoses leading to revision between RA patients and matched OA controls (2:1 match by sex and age ± 5 years). OA exclusion criteria included the presence of rheumatic diseases, use of MTX, and lack of records. Results: 51 RA rTHA were identified and compared with 103 OA. Mean age was 57.7 years for RA v 59.4 years for OA (p = 0.240). 82.4% of RA were female v 83.5% of OA (p = 0.859). RA had lower BMI than OA (25.5 v 28.2; p = 0.166). There was no difference in diagnosis leading to rTHA, including infection (RA 3.9 v OA 6.8%; p = 0.719) or dislocation (RA 23.5 v OA 23.3%; p = 0.975). There was no significant difference in the length of time the implant was in place before revision: RA 11.0 v OA 8.8 years (p = 0.060). Among RA patients with and without MTX, there was no difference in the use of biologics (30.0 v 43.3%, p = 0.283), steroids (47.6 v 50.0%, p = 0.867), or bisphosphonates (23.8 v 33.3%, p = 0.543).
There was no difference in rTHA diagnosis with or without MTX, including loosening (52.4 v 56.7%, p = 0.762), and no significant difference in lucencies with MTX use in any Gruen zone. Patients with MTX had femoral stem subsidence of 3.7 mm v no subsidence without MTX (p = 0.006). Conclusion: There was no difference in the diagnoses leading to rTHA in RA and OA, although implants in RA trended toward longer time in place before rTHA. In this small retrospective study, there were no significant differences in radiographic lucency associated with MTX exposure among RA patients. The significance of the subsidence finding is not clear. Further study of arthroplasty survival in RA patients is warranted.
Keywords: hip arthroplasty, methotrexate, revision arthroplasty, rheumatoid arthritis
Procedia PDF Downloads 250
115 Evaluation and Proposal for Improvement of the Flow Measurement Equipment in the Bellavista Drinking Water System of the City of Azogues
Authors: David Quevedo, Diana Coronel
Abstract:
This article evaluates the drinking water system in the Bellavista sector of the city of Azogues, with the purpose of determining the appropriate equipment to record the actual consumption flows of the inhabitants of that sector. Given that the study area lies in a rural and economically disadvantaged zone, there is an urgent need to establish a control system for drinking water consumption in order to conserve and manage this vital resource in the best possible way, considering that the water source supplying the sector is approximately 9 km away. The research began with the collection of cartographic, demographic, and statistical data on the sector, determining the coverage area, the population projection, and a provision that guarantees a drinking water supply meeting the needs of the sector's inhabitants. Hydraulic modeling with EPANET 2.0, the United States Environmental Protection Agency's software for modeling drinking water distribution systems, yielded theoretical hydraulic data, which were used to design and justify the most suitable measuring equipment for the Bellavista drinking water system. Assuming a minimum service life of 30 years for the drinking water system, future flow rates were calculated for the design of the macro-measuring device. After analyzing the network, it became evident that the Bellavista sector has an average consumption of 102.87 liters per person per day; however, since Ecuadorian regulations recommend a provision of 180 liters per person per day for the geographical conditions of the sector, this value was used for the analysis. With all the collected and calculated information, it was concluded that the Bellavista drinking water system needs a 125 mm electromagnetic macro-measuring device for the first three five-year periods of its service life and a 150 mm diameter device for the following three.
The importance of having equipment that provides real and reliable data is that it allows control of water consumption by the population of the sector, measured through micro-measuring devices installed at the entrance of each household, whose readings should match those of the macro-measuring device placed after the storage tank outlet, in order to control losses due to leaks in the drinking water system or illegal connections.
Keywords: macro-meter, hydraulics, provision, water
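The sizing logic described above reduces to a population projection and a per-capita provision converted to a flow. The sketch below shows that arithmetic; the base population and growth rate are hypothetical placeholders, not Bellavista's census figures (only the 180 L/person/day provision and the 30-year life come from the abstract):

```python
# Back-of-envelope demand sketch: 180 L/person/day provision over a
# 30-year design life. Population figures are illustrative, not the
# sector's actual data.

PROVISION_L = 180  # L per person per day (Ecuadorian norm cited above)

def future_population(p0: float, rate: float, years: int) -> float:
    """Geometric population projection."""
    return p0 * (1 + rate) ** years

def mean_demand_lps(population: float) -> float:
    """Average daily demand converted to liters per second."""
    return population * PROVISION_L / 86_400  # seconds per day

p30 = future_population(2_000, 0.015, 30)  # hypothetical sector
print(round(p30))                      # projected inhabitants at year 30
print(round(mean_demand_lps(p30), 2))  # mean demand in L/s
```

The macro-meter diameter is then picked so that this design flow (plus peaking factors) falls inside the meter's accurate measuring range, which is why the abstract specifies one diameter for the first three five-year periods and a larger one afterwards.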
Procedia PDF Downloads 74
114 Geographic Information System-Based Map for Best Suitable Place for Cultivating Permanent Trees in South-Lebanon
Authors: Allaw Kamel, Al-Chami Leila
Abstract:
It is important to reduce human influence on natural resources by identifying appropriate land uses, and it is essential to carry out scientific land evaluation. Such analysis identifies the main factors of agricultural production and enables decision makers to develop crop management so as to increase land capability; the key is to match the type and intensity of land use with its natural capability. Therefore, in order to benefit from these areas and invest in them to obtain good agricultural production, they must be fully organized and managed. Lebanon suffers from unorganized agricultural land use. We take south Lebanon as the study area; it has the most fertile ground and a variety of crops. The study aims to identify and locate the most suitable areas for cultivating thirteen types of permanent trees: apples, avocados, stone fruits in coastal regions, stone fruits in mountain regions, bananas, citrus, loquats, figs, pistachios, mangoes, olives, pomegranates, and grapes. Several geographical factors are taken as criteria for selecting the best locations for cultivation: soil, rainfall, pH, temperature, and elevation are the main inputs to the final map. The input data for each factor are managed, visualized, and analyzed using a Geographic Information System (GIS). GIS management tools are implemented to produce input maps identifying suitable areas for each index, and the combination of the different index maps generates the final output map of suitable places for the best permanent tree productivity. The output map is reclassified into three suitability classes: low, moderate, and high. Results show different locations suitable for different kinds of trees.
Results also reflect the importance of GIS in helping decision makers find the most suitable location for every tree, to obtain more productivity and a variety of crops. Keywords: agricultural production, crop management, geographical factors, Geographic Information System, GIS, land capability, permanent trees, suitable location
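The layer-combination workflow described, reclassifying each factor map and overlaying them into a low/moderate/high suitability map, can be sketched outside GIS software with NumPy; the weights and class thresholds below are illustrative assumptions, not the study's actual values:

```python
import numpy as np

def suitability_map(layers, weights, thresholds=(0.33, 0.66)):
    """Combine normalized factor rasters (values in 0-1) into a weighted
    suitability score, then reclassify into low/moderate/high (1/2/3)."""
    score = sum(w * layer for layer, w in zip(layers, weights))
    classes = np.digitize(score, thresholds) + 1  # 1=low, 2=moderate, 3=high
    return score, classes

# Illustrative random rasters standing in for soil, rainfall, temperature
rng = np.random.default_rng(0)
soil, rain, temp = (rng.random((4, 4)) for _ in range(3))
score, classes = suitability_map([soil, rain, temp], [0.4, 0.35, 0.25])
print(classes)
```

In practice each input raster would first be normalized from its native units (mm of rainfall, degrees, pH) onto a common 0-1 suitability scale for the crop in question.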
Procedia PDF Downloads 142
113 Microscale observations of a gas cell wall rupture in bread dough during baking and confrontation to 2/3D Finite Element simulations of stress concentration
Authors: Kossigan Bernard Dedey, David Grenier, Tiphaine Lucas
Abstract:
Bread dough is often described as a dispersion of gas cells in a continuous gluten/starch matrix. The final bread crumb structure is strongly related to gas cell wall (GCW) rupture during baking. At the end of proofing and during baking, part of the thinnest GCWs between expanding gas cells is reduced to a gluten film about the size of a starch granule. When this size is reached, gluten and starch granules must be considered as interacting phases in order to account for heterogeneities and appropriately describe GCW rupture. Among the experimental investigations carried out to assess GCW rupture, none has observed GCW rupture under baking conditions at the GCW scale. In addition, attempts to numerically understand GCW rupture are usually not performed at the GCW scale and often treat GCWs as continuous. The most relevant paper that accounted for heterogeneities dealt with gluten/starch interactions and their impact on the mechanical behavior of dough films; however, stress concentration in the GCW was not discussed. In this study, both experimental and numerical approaches were used to better understand GCW rupture in bread dough during baking. Experimentally, a macroscope placed in front of a two-chamber device was used to observe the rupture of a real GCW 200 micrometers in thickness. Special attention was paid to mimicking baking conditions as far as possible (temperature, gas pressure and moisture). Various pressure differences between the two sides of the GCW were applied, and different modes of fracture initiation and propagation in GCWs were observed. Numerically, the impact of gluten/starch interactions (cohesion or non-cohesion) and of the rheological moduli ratio on the mechanical behavior of the GCW under unidirectional extension was assessed in 2D/3D. A non-linear viscoelastic and hyperelastic approach was adopted to match the finite strains involved in the GCW during baking. Stress concentration within the GCW was identified.
Simulated stress concentrations were discussed in the light of the GCW failures observed in the device. The gluten/starch granule interactions and the rheological modulus ratio were found to have a great effect on the stress levels possibly reached in the GCW. Keywords: dough, experimental, numerical, rupture
Procedia PDF Downloads 122
112 Narrative Family Therapy and the Treatment of Perinatal Mood and Anxiety Disorders
Authors: Jamie E. Banker
Abstract:
For many families, pregnancy and the postpartum period are filled with both anticipation and change. For some pregnant or postpartum women, this time is marked by the onset of a mood or anxiety disorder. Experiencing a mood or anxiety disorder during this time differs from depression or anxiety at other times of life, not only because of the physical changes occurring in the mother's body but also because of the mental and physical preparation necessary to redefine family roles and responsibilities and to develop new identities in this life transition. The presence of a mood or anxiety disorder can influence the way a mother defines herself and can complicate her understanding of her abilities and competencies as a mother. The complexity of experiencing a mood or anxiety disorder in the midst of these changes necessitates treatment interventions that match both the symptomatology and the psychological adjustments. This study explores the use of narrative family therapy techniques when treating a mother who is experiencing postpartum depression. Externalization is a common narrative family therapy technique that can help clients separate their identity from the problems they are experiencing. This is crucial for a new mother who is in the middle of defining her identity during the transition to parenthood. The goal of this study is to examine how externalization techniques help postpartum women separate their mood and anxiety symptoms from their identity as mothers. An exploratory case study was conducted in a single setting, a private practice therapy office, and explored how a narrative family therapy approach can be used to treat perinatal mood and anxiety disorders. The therapy sessions were audio recorded and transcribed. Constructivism and narrative theory are used as theoretical frameworks, and data from the therapy sessions and a follow-up survey were triangulated and analyzed.
During the course of treatment, the participant reported using the new externalizing labels for her symptoms. Within one month of treatment, the participant reported that she could stop herself from thinking the harmful thoughts faster, and within three months the harmful thoughts went away. The main themes in this study were building courage and less self-blame. This case highlights the role narrative family therapy can play in the treatment of perinatal mood and anxiety disorders and the importance of separating a woman's mood from her identity as a mother. This conceptual framework was beneficial to the postpartum mother in treating perinatal mood and anxiety disorder symptoms. Keywords: externalizing techniques, narrative family therapy, perinatal mood and anxiety disorders, postpartum depression
Procedia PDF Downloads 275
111 Gaze Behaviour of Individuals with and without Intellectual Disability for Nonaccidental and Metric Shape Properties
Authors: S. Haider, B. Bhushan
Abstract:
The eye gaze behaviour of individuals with and without intellectual disability is investigated in an eye-tracking study in terms of sensitivity to nonaccidental (NAP) and metric (MP) shape properties. Total fixation time is used as an indirect measure of attention allocation. Studies have found mean reaction times for nonaccidental properties to be shorter than for metric properties when the MP and NAP differences were equalized. Methods: Twenty-five individuals with intellectual disability (mild and moderate levels of mental retardation) and twenty-seven normal individuals were compared on mean total fixation duration, accuracy level, and mean reaction time for mild NAP, extreme NAP, and metric properties of images. 2D images of cylinders were adapted and made into forced-choice match-to-sample tasks. A Tobii TX300 Eye Tracker was used to record total fixation duration, with data obtained from the Areas of Interest (AOI). Variable trial duration (total reaction time of each participant) and fixed trial duration (data taken at each second from one to fifteen seconds) were used for analyses. The groups did not differ in terms of fixation times (fixed or variable) across any of the three image manipulations, but they differed in terms of reaction time and accuracy. Normal individuals had longer reaction times than individuals with intellectual disability across all types of images. The groups differed significantly on the accuracy measure across all image types, with normal individuals performing better across all three types of images. Mild NAP vs. metric differences: there was a significant difference between mild NAP and metric image properties in terms of reaction times. Mild NAP images had significantly longer reaction times than metric images for normal individuals, but this difference was not found for individuals with intellectual disability. Mild NAP images had significantly better accuracy levels than metric images for both groups.
In conclusion, the type of image manipulation did not result in differences in attention allocation between individuals with and without intellectual disability. Mild nonaccidental properties facilitate better accuracy than metric properties in both groups, but this advantage is seen only in the normal group in terms of mean reaction time. Keywords: eye gaze fixations, eye movements, intellectual disability, stimulus properties
Procedia PDF Downloads 553
110 Remote Sensing Application in Environmental Researches: Case Study of Iran Mangrove Forests Quantitative Assessment
Authors: Neda Orak, Mostafa Zarei
Abstract:
Environmental assessment is an important part of environmental management, and various methods and techniques have been produced and implemented for it. Remote sensing (RS) is widely used in many scientific and research fields such as geology, cartography, geography, agriculture, forestry, land use planning, and the environment. It can reveal cyclical changes in earth surface objects, and it can delimit earth phenomena on the basis of recorded changes and deviations in electromagnetic reflectance. This research assesses mangrove forests using RS techniques. Its aim was a quantitative analysis of the mangrove forests in the Basatin and Bidkhoon estuaries, performed with Landsat satellite images from 1975 to 2013 matched to ground control points. This area holds the last mangrove distribution in the northern hemisphere, so the work can provide a good basis for better management of this important ecosystem. Landsat has provided researchers with valuable images for earth change detection. This research used the MSS, TM, ETM+, and OLI sensors from 1975, 1990, 2000, and 2003-2013. Changes were studied, after essential corrections such as error fixing, band combination, and georeferencing to the 2012 image as the base image, by maximum likelihood supervised classification and the IPVI index. A 2004 Google Earth image and ground points collected by GPS (2010-2012) were used for comparison with the changes obtained from the satellite images. Results showed the mangrove area in Bidkhoon in 2012 was 1119072 m2 by GPS, 1231200 m2 by maximum likelihood supervised classification, and 1317600 m2 by IPVI. The Basatin areas are, respectively, 466644 m2, 88200 m2, and 63000 m2. Final results show the forests have declined naturally, and in Basatin due to human activities; the deficit was offset by planting over many years, although the trend has been declining again in recent years. Satellite images thus have a high capability for estimating such environmental processes.
This research showed a high correlation between the images and indexes, such as IPVI and NDVI, and the ground control points. Keywords: IPVI index, Landsat sensor, maximum likelihood supervised classification, Nayband National Park
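The vegetation indices named above (IPVI and the related NDVI) are simple band ratios of near-infrared (NIR) and red reflectance; a minimal sketch with illustrative reflectance values (not the study's data):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index, in [-1, 1]."""
    return (nir - red) / (nir + red)

def ipvi(nir, red):
    """Infrared Percentage Vegetation Index, in [0, 1]; equals (NDVI + 1) / 2."""
    return nir / (nir + red)

# Illustrative reflectances: dense mangrove canopy vs. bare mudflat
nir = np.array([0.45, 0.30])
red = np.array([0.08, 0.25])
print(ndvi(nir, red))  # noticeably higher for the vegetated pixel
print(ipvi(nir, red))
```

IPVI carries the same information as NDVI but stays non-negative, which can simplify thresholding when classifying vegetated versus non-vegetated pixels.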
Procedia PDF Downloads 294
109 How Childhood Trauma Changes the Recovery Models
Authors: John Michael Weber
Abstract:
The following research results spanned six months and 175 people addicted to some form of substance, from alcohol to heroin. One question was asked, and the answers were remarkable and consistent: participants were asked what their first life memory was. The following work details the results of this writer's answer to his own question and those of the 175 that followed. A constant pattern took shape throughout the bio-psycho-social assessments: these addicts had "first memories" that were vivid, took place between the ages of three and six years old, and, to a person, were traumatic. This writer's personal search into his childhood was not to find an excuse for the way he became, but to explain the reason for becoming an addict. To treat addiction, these memories, which have caused Post Traumatic Stress Disorder (PTSD), must be recognized as the catalyst that sparked a predisposition. Cognitive Behavioral Therapy (CBT), integrated with treatment specifically focused on PTSD, gives the addict a better chance at recovery without relapse. This paper presents the findings on the first memories of the addicts assessed and proposes the best treatment plan for such an addict, considering the childhood trauma in congruence with treatment of the Substance Use Disorder (SUD). It is the hope of this author that the knowledge that trauma is one of the main catalysts for addiction will allow therapists to provide better treatment and reduce relapse from abstinence from drugs and alcohol. This research led this author to believe that if treatment of childhood trauma is not a priority, the twelve steps of Alcoholics Anonymous, specifically steps 4 and 5, will not be thoroughly addressed, and the odds of relapse increase. With this knowledge, parents can be educated on childhood trauma and the effect it has on their children.
Parents could be mindful of the fact that the things they perceive as traumatic do not match what a child, in the developmental years, absorbs as traumatic. It is this author's belief that what has become the status quo in treatment facilities has not been working for a long time, and for that reason things need to change. Relapse has been woven into the fabric of standard operating procedure, and that, in this author's view, is not necessary. Childhood trauma is not being addressed early in recovery, and that creates an environment of inevitable relapse. This paper will explore how to break away from the status quo and rethink the current "evidence-based treatments." This ends the abstract, with hopes that an interest has been piqued to read on. Keywords: childhood, trauma, treatment, addiction, change
Procedia PDF Downloads 79
108 The Aspect of Animal Welfare in Garut Ram’s Event (Seni Ketangkasan Domba Garut) in Indonesia
Authors: Aliyatul Widyan, Denie Heriyadi, An An Nurmeidiansyah
Abstract:
The Garut sheep is a sheep commodity originally from West Java, Indonesia; it specifically combines rumpung ears (less than 4 cm) or ngadaun hiris ears (4-8 cm) with a ngabuntut bagong or ngabuntut beurit tail. One element of West Java's cultural diversity is the Garut Ram's Art and Fighting Contest, a competitive fighting activity between sheep from Garut. The method used was a survey, observing and directly interviewing the farmers who competed in the event. The activity incorporates aspects of animal welfare in the assessment of the fighting sheep: health (10%), performance and body conformation, called adeg-adeg (25%), courage (10%), field technique, called teknik pamidangan (30%), and crash technique (25%). The health assessment is conducted during registration, when the owner must show a letter issued by the related agency declaring that the sheep is eligible to compete, and health is also assessed at fighting time. Adeg-adeg assesses the conformity of the Garut ram's body posture from its physical appearance, covering body posture, horns, and face. Teknik pamidangan is assessed by the harmony between the music and the movement of the sheep in carrying out the attack. Courage is assessed based on mental condition and stamina at fighting time. In addition to these assessments, the activity has other cultural and artistic components, such as the audience, called bobotoh; the clothes worn, called pangsi; tarumpah, or sandals; belts and headgear, called totopong; hats, called laken; the instructor of the match; and the nayaga, a group of people who play traditional Sundanese music to accompany the activity.
In terms of animal welfare, the crash (stroke) technique accounts for only around 25% of the assessment, so the beauty of this art is not measured by the technical crash alone; health, courage, body conformation, and technique in the field together hold the highest weight in the assessment, at 75%. The event is thus very different from sports such as boxing, taekwondo, karate, or other martial sports, which are based 100% on stroke or crash technique. The local cultural value of the Garut Ram's Art and Fighting Contest results in an art grounded in local animal welfare. Keywords: Garut sheep, Indonesia, Garut Ram's Art and Fighting Contest, animal welfare
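The assessment weights listed above (health 10%, adeg-adeg 25%, courage 10%, teknik pamidangan 30%, crash technique 25%) sum to 100% and can be read as a weighted score; a sketch with hypothetical judge marks on a 0-100 scale (the marks are invented for illustration):

```python
WEIGHTS = {
    "health": 0.10,
    "adeg-adeg": 0.25,          # performance and body conformation
    "courage": 0.10,
    "teknik pamidangan": 0.30,  # harmony of music and movement in the field
    "technical crash": 0.25,
}
assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9  # weights cover 100%

def total_score(marks):
    """Weighted total from per-criterion marks (each 0-100)."""
    return sum(WEIGHTS[k] * marks[k] for k in WEIGHTS)

# Hypothetical marks for one ram
marks = {"health": 90, "adeg-adeg": 80, "courage": 85,
         "teknik pamidangan": 75, "technical crash": 70}
print(total_score(marks))
```

Note how the non-crash criteria dominate: a ram strong in health, conformation, courage, and field technique can score well even with modest crash marks.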
Procedia PDF Downloads 308
107 Estimation of Relative Subsidence of Collapsible Soils Using Electromagnetic Measurements
Authors: Henok Hailemariam, Frank Wuttke
Abstract:
Collapsible soils are weak soils that appear stable in their natural, normally dry state, but rapidly deform under saturation (wetting), generating large and unexpected settlements which often have disastrous consequences for structures unwittingly built on such deposits. In this study, a prediction model for the relative subsidence of stressed collapsible soils based on dielectric permittivity measurement is presented. Unlike most existing methods for soil subsidence prediction, this model does not require moisture content as an input parameter, thus providing the opportunity to obtain an accurate estimation of the relative subsidence of collapsible soils using dielectric measurement only. The prediction model is developed from an existing relative subsidence prediction model (which is dependent on soil moisture condition) and an advanced theoretical frequency- and temperature-dependent electromagnetic mixing equation (which effectively removes the moisture content dependence of the original relative subsidence prediction model). For large-scale sub-surface soil exploration, spatial sub-surface soil dielectric data over wide areas and large depths of weak (collapsible) soil deposits can be obtained using non-destructive high-frequency electromagnetic (HF-EM) measurement techniques such as ground penetrating radar (GPR). For laboratory or small-scale in-situ measurements, techniques such as an open-ended coaxial line with widely applicable time domain reflectometry (TDR) or vector network analysers (VNAs) are usually employed to obtain the soil dielectric data. By using soil dielectric data obtained from small- or large-scale non-destructive HF-EM investigations, the new model can effectively predict the relative subsidence of weak soils without the need to extract samples for moisture content measurement.
Some of the resulting benefits are the preservation of the undisturbed nature of the soil, as well as a reduction in investigation costs and analysis time in the identification of weak (problematic) soils. The accuracy of prediction of the presented model is assessed by conducting relative subsidence tests on a collapsible soil at various initial soil conditions, and a good match between the model predictions and the experimental results is obtained. Keywords: collapsible soil, dielectric permittivity, moisture content, relative subsidence
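The general idea, linking bulk permittivity to volumetric water content through a mixing relation, can be illustrated with the simple CRIM model. This is not the paper's advanced frequency- and temperature-dependent equation, and the phase permittivities below are typical textbook assumptions:

```python
import math

def water_content_crim(eps_bulk, porosity,
                       eps_solid=5.0, eps_water=80.0, eps_air=1.0):
    """Invert the CRIM mixing model for volumetric water content theta:
    sqrt(eps_bulk) = (1-n)*sqrt(eps_s) + (n-theta)*sqrt(eps_a) + theta*sqrt(eps_w)
    where n is porosity. Phase permittivities are illustrative defaults."""
    num = (math.sqrt(eps_bulk)
           - (1 - porosity) * math.sqrt(eps_solid)
           - porosity * math.sqrt(eps_air))
    return num / (math.sqrt(eps_water) - math.sqrt(eps_air))

# Dry collapsible soil (low bulk eps) vs. wetted state, porosity 0.45
print(water_content_crim(4.0, 0.45))   # near zero when dry
print(water_content_crim(20.0, 0.45))  # substantially wetted
```

The contrast between water (eps around 80) and the dry matrix (eps of a few units) is what makes the dielectric route to wetting state, and hence to collapse potential, so sensitive.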
Procedia PDF Downloads 363
106 Generating 3D Battery Cathode Microstructures using Gaussian Mixture Models and Pix2Pix
Authors: Wesley Teskey, Vedran Glavas, Julian Wegener
Abstract:
Generating battery cathode microstructures is an important area of research, given the proliferation of automotive batteries. Currently, finite element analysis (FEA) is often used to simulate battery cathode microstructures before physical batteries are manufactured and tested to verify the simulation results. Unfortunately, a key drawback of FEA is that it is very slow in terms of computational runtime. Generative AI offers the key advantage of speed compared to FEA, and because of this, generative AI can evaluate very large numbers of candidate microstructures. Given AI-generated candidate microstructures, a subset of the promising ones can be selected for further validation using FEA. Leveraging the speed advantage of AI allows for a better final microstructural selection, because high speed allows many more candidate microstructures to be evaluated. In the approach presented, 3D battery cathode candidate microstructures are generated using Gaussian Mixture Models (GMMs) and pix2pix. The approach first uses GMMs to generate a population of spheres (representing the "active material" of the cathode). Once spheres have been sampled from the GMM, they are placed within a microstructure. Subsequently, pix2pix sweeps over the 3D microstructure iteratively, slice by slice, adding detail to determine which portions of the microstructure will become electrolyte and which will become binder. In this manner, each subsequent slice of the microstructure is evaluated by pix2pix, with the previously processed layers of the microstructure as inputs. By feeding previously fully processed layers into pix2pix, pix2pix can be used to ensure candidate microstructures represent a realistic physical reality.
More specifically, for the microstructure to represent a realistic physical reality, the locations of electrolyte and binder in each layer of the microstructure must reasonably match the locations of electrolyte and binder in previous layers, ensuring geometric continuity. Using the approach outlined above, a 10x to 100x speed increase was achieved when generating candidate microstructures with AI compared to an FEA-only approach for this task. A key metric for evaluating microstructures was the specific power the battery microstructures would be able to produce. The best generative AI result obtained was a 12% increase in specific power for a candidate microstructure compared to what an FEA-only approach was capable of producing; this 12% increase was verified by FEA simulation. Keywords: finite element analysis, gaussian mixture models, generative design, Pix2Pix, structural design
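The first stage, sampling active-material spheres from a GMM, can be sketched as follows. The two-component mixture parameters and the box size here are illustrative assumptions, not values fitted to real cathode data:

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative 2-component GMM over sphere radius (micrometers):
# a population of small particles and a population of large ones.
weights = np.array([0.7, 0.3])
means = np.array([3.0, 8.0])
stds = np.array([0.5, 1.5])

def sample_spheres(n, box=50.0):
    """Sample n spheres: radii drawn from the GMM, centres uniform in a
    cubic domain of side `box` (micrometers)."""
    comp = rng.choice(len(weights), size=n, p=weights)  # pick a component
    radii = rng.normal(means[comp], stds[comp])         # radius per sphere
    radii = np.clip(radii, 0.5, None)                   # no degenerate particles
    centres = rng.uniform(0.0, box, size=(n, 3))
    return centres, radii

centres, radii = sample_spheres(200)
print(radii.mean())  # close to the mixture mean 0.7*3 + 0.3*8 = 4.5
```

In the full pipeline these spheres would be voxelized into a 3D grid, after which the pix2pix stage labels the remaining voxels slice by slice as electrolyte or binder.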
Procedia PDF Downloads 109
105 Analysis and Optimized Design of a Packaged Liquid Chiller
Authors: Saeed Farivar, Mohsen Kahrom
Abstract:
The purpose of this work is to develop a physical simulation model for studying the effect of various design parameters on the performance of packaged liquid chillers. This paper presents a steady-state model for predicting the performance of a packaged liquid chiller over a wide range of operating conditions. The model inputs are the inlet conditions and geometry; the outputs include system performance variables such as power consumption, coefficient of performance (COP), and the states of the refrigerant through the refrigeration cycle. A computer model that simulates the steady-state cyclic performance of a vapor compression chiller is developed for the purpose of performing detailed physical design analysis of actual industrial chillers. The model can be used for design optimization and for detailed energy efficiency analysis of packaged liquid chillers. The simulation model takes into account all chiller components, such as the compressor, the shell-and-tube condenser and evaporator heat exchangers, the thermostatic expansion valve, and the connection pipes and tubing, through thermo-hydraulic modeling of the heat transfer, fluid flow, and thermodynamic processes in each of them. To verify the validity of the developed model, a 7.5 USRT packaged liquid chiller was used, and a laboratory test stand for bringing the chiller to its standard steady-state performance condition was built. Experimental results obtained from testing the chiller under various load and temperature conditions are shown to be in good agreement with those obtained from simulating the performance of the chiller using the computer prediction model. An entropy-minimization-based optimization analysis is performed based on the developed analytical performance model of the chiller.
The variation of design parameters in the construction of the shell-and-tube condenser and evaporator heat exchangers is studied using the developed performance, optimization, and simulation model, and a best-match condition between the physical design and construction of the chiller heat exchangers and its compressor is found to exist. It is expected that manufacturers of chillers and research organizations interested in developing energy-efficient design and analysis of compression chillers can take advantage of the presented study and its results. Keywords: optimization, packaged liquid chiller, performance, simulation
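The headline performance variable, COP, follows directly from the energy balance of the vapor-compression cycle; a minimal sketch, with illustrative R134a-like specific enthalpies rather than outputs of the authors' model:

```python
def chiller_cop(h_evap_in, h_evap_out, h_comp_out):
    """Vapor-compression cycle COP from specific enthalpies (kJ/kg).
    Expansion through the thermostatic valve is isenthalpic, so the
    enthalpy at the evaporator inlet equals that at the condenser outlet."""
    q_evap = h_evap_out - h_evap_in   # refrigeration effect per kg
    w_comp = h_comp_out - h_evap_out  # compressor work per kg
    return q_evap / w_comp

# Illustrative enthalpies for an R134a-like cycle
cop = chiller_cop(h_evap_in=256.0, h_evap_out=400.0, h_comp_out=430.0)
print(f"COP = {cop:.2f}")  # (400 - 256) / (430 - 400) = 4.8
```

The design trade-off studied in the paper shows up here directly: larger heat-exchanger surfaces raise the refrigeration effect and lower compressor lift, both of which push the COP upward at the cost of hardware.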
Procedia PDF Downloads 278
104 The Changing Landscape of Fire Safety in Covered Car Parks with the Arrival of Electric Vehicles
Authors: Matt Stallwood, Michael Spearpoint
Abstract:
In 2020, the UK government announced that sales of new petrol and diesel cars would end in 2030, and battery-powered cars made up 1 in 8 new cars sold in 2021, more than the total from the previous five years. The guidance across the UK for the fire safety design of covered car parks is changing in response to the projected rapid growth in electric vehicle (EV) use. This paper discusses the current knowledge on the fire safety concerns posed by EVs, in particular those powered by lithium-ion batteries, when considering the likelihood of vehicle ignition, fire severity, and the spread of fire to other vehicles. The paper builds on previous work that has investigated the frequency of fires starting in cars powered by internal combustion engines (ICE), the hazard posed by such fires in covered car parks, and the potential for neighboring vehicles to become involved in an incident. Historical data has been used to determine the ignition frequency of ICE car fires, whereas such data is scarce for EV fires. Should a fire occur, the fire development has conventionally been assessed to match a 'medium' growth rate and to have a 95th percentile peak heat release rate of 9 MW. The paper examines recent literature in which researchers have measured the burning characteristics of EVs to assess whether these values need to change. These findings are used to assess the risk posed by EVs compared to ICE vehicles. The paper examines the new design guidance being issued by organizations across the UK, such as fire and rescue services, insurers, local government bodies, and regulators, and discusses the impact it is having on the arrangement of parking bays, particularly in residential and mixed-use buildings.
For example, the paper illustrates how updated guidance published by the Fire Protection Association (FPA) on the installation of sprinkler systems has increased the hazard classification of parking buildings, which can have a considerable impact on the feasibility of a building meeting all its design intents when specifying water supply tanks. Guidance on the provision of smoke ventilation systems and on structural fire resistance is also presented. The paper points to where further research is needed on the fire safety risks posed by EVs in covered car parks, to ensure that any guidance is commensurate with the need to provide an adequate level of life and property safety in the built environment. Keywords: covered car parks, electric vehicles, fire safety, risk
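The 'medium' growth rate cited above is conventionally a t-squared design fire, Q = alpha * t^2; the coefficient alpha ≈ 0.0117 kW/s² is the usual convention for 'medium' growth and is an assumption here, as the abstract does not state it. A sketch of how long such a fire takes to reach the 9 MW peak:

```python
import math

ALPHA_MEDIUM = 0.0117  # kW/s^2, conventional 'medium' t-squared coefficient

def hrr(t_seconds, alpha=ALPHA_MEDIUM, peak_kw=9000.0):
    """Heat release rate of a t-squared design fire, capped at its peak."""
    return min(alpha * t_seconds ** 2, peak_kw)

def time_to_peak(peak_kw=9000.0, alpha=ALPHA_MEDIUM):
    """Time (s) for the growing fire to reach its peak heat release rate."""
    return math.sqrt(peak_kw / alpha)

print(f"HRR at 5 min: {hrr(300):.0f} kW")      # 0.0117 * 300^2 = 1053 kW
print(f"Time to 9 MW: {time_to_peak():.0f} s")  # roughly 877 s (~15 min)
```

If EV test data were to justify a faster growth coefficient or a higher peak, both design values slot directly into this curve, which in turn drives sprinkler and smoke-ventilation sizing.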
Procedia PDF Downloads 73
103 The Maps of Meaning (MoM) Consciousness Theory
Authors: Scott Andersen
Abstract:
Perhaps simply and rather unadornedly, consciousness is the holding of multiple goals for action and the continuous adjudication of such goals to implement action, referred to here as the Maps of Meaning (MoM) Consciousness Theory. The MoM theory triangulates through three parallel corollaries: action (behavior), mechanism (morphology/pathophysiology), and goals (teleology). (1) An organism's consciousness contains fluid, nested goals. These goals are not intentionality but intersectionality, embodiment meeting the world, i.e., Darwinian inclusive fitness or randomization, then survival of the fittest. These goals form via gradual descent under inclusive fitness, the goals being the abstraction of a 'match' between the evolutionary environment and the organism. Human consciousness implements the brain efficiency hypothesis: genetics, epigenetics, and experience crystallize efficiencies, necessitating not what is best or objective but what is fit, i.e., perceived efficiency based on one's adaptive environment. These efficiencies are objectively arbitrary, but they determine the operation and level of one's consciousness, termed extreme thrownness. Since inclusive fitness drives efficiencies in physiologic mechanism, morphology, and behavior (action) and originates one's goals, embodiment is necessarily entangled with human consciousness, as it is the intersection of mechanism or action (both necessitating embodiment) occurring in the world that determines fitness. Perception is the operant process of consciousness and is the consciousness's de facto goal adjudication process. Goal operationalization is fundamentally efficiency-based via one's unique neuronal mapping, a byproduct of genetics, epigenetics, and experience. Perception involves information intake and information discrimination, equally underpinned by the efficiencies of inclusive fitness via extreme thrownness. Perception is not a 'frame rate,' but Bayesian priors of efficiency based on one's extreme thrownness.
Consciousness, including human consciousness, is modular (i.e., it has a scalar level of richness, which builds up like building blocks) and dimensionalized (i.e., cognitive abilities become possible as emergent phenomena at various modularities, like stratified factors in factor analysis). The meta-dimensions of human consciousness seemingly include intelligence quotient, personality (five-factor model), richness of perception intake, and richness of perception discrimination, among other potentialities. Future consciousness research should utilize factor analysis to parse the modularities and dimensions of human consciousness, and animal models. Keywords: consciousness, perception, prospection, embodiment
Procedia PDF Downloads 62
102 Learnings From Sri Lanka: Theorizing of Grassroots Women’s Participation in NGO Peacebuilding Activism Against Transnational and Third-World Feminist Perspectives
Authors: Piumi L. Denagamage, Vibusha Madanayake
Abstract:
At the end of a 30-year civil war in Sri Lanka in 2009, Non-Governmental Organizations (NGOs) played a prominent role in post-war development and peacebuilding. Women were a major “beneficiary” of NGO activities on socio-economic empowerment, capacity building for advocacy, and grassroots participation in activism. Undoubtedly, their contribution to Sri Lanka’s post-war transition is tremendous. As development practitioners and researchers who have worked closely with several international and national NGOs in Sri Lanka’s post-war setting, the authors, while practicing self-reflexivity, intend to theorize the grey literature prepared by NGOs against the theoretical frameworks of Transnational and Third World feminisms. Using examples of the grassroots activities conducted by the NGOs with war-affected women, the paper questions whether Colombo-based feminism represents the lived realities of grassroots women at the transnational level. It argues that Colombo-based feminists use their power and exposure to Western feminist approaches to portray diverse forms of oppression women face at grassroots levels, their needs for advocacy, and different modes of resistance on the ground. Many NGOs depend on international donor funding for their grassroots work, which also contributes to their utilization of Western-led knowledge. Despite their efforts to “save marginalized women from oppression,” these modes of intervention are often rejected by the public, including women at local levels. This has also resulted in the rejection of feminism entirely as a culturally root-less alien Western ideology. The analysis connects with the Transnational and Third World theoretical feminist perspectives to problematize the power relations between Western knowledge systems and the lived experiences of grassroots women in the peacebuilding process through NGO activism in Sri Lanka. 
It also emphasizes that the infiltration of Western knowledge through NGOs has led grassroots women to participate only by adjusting their lived experiences to match that alien knowledge, rather than theorizing based on their own lived realities. While sharing the concern that NGOs’ power to adopt Western knowledge systems is often unchecked and unmitigated, the paper underscores the importance of adopting methods of alternative theorizing to ensure the meaningful participation of Third World women in peacebuilding.
Keywords: alternative theorizing, Colombo-based feminism, grassroots women in peacebuilding, NGO activism, transnational and third world feminisms
Procedia PDF Downloads 57101 An Alternative Credit Scoring System in China’s Consumer Lending Market: A System Based on Digital Footprint Data
Authors: Minjuan Sun
Abstract:
Ever since the late 1990s, China has experienced explosive growth in consumer lending, especially in short-term consumer loans, among which the growth rate of non-bank lending has surpassed that of bank lending due to developments in financial technology. On the other hand, China does not have a universal credit scoring and registration system that can guide lenders in credit evaluation and risk control; for example, an individual’s bank credit records are not visible to online lenders, and vice versa. Given this context, the purpose of this paper is three-fold. First, we explore if and how alternative digital footprint data can be utilized to assess borrowers’ creditworthiness. Then, we perform a comparative analysis of machine learning methods for the canonical problem of credit default prediction. Finally, we analyze, from an institutional point of view, the necessity of establishing a viable and nationally universal credit registration and scoring system utilizing online digital footprints, so that more people in China can have better access to the consumer loan market. Two different types of digital footprint data are matched with banks’ loan default records. Each separately captures distinct dimensions of a person’s characteristics, such as shopping patterns and certain aspects of personality or inferred demographics revealed by social media features like profile image and nickname. We find that both datasets can generate acceptable to excellent prediction results, and that the different types of data tend to complement each other to yield better performance.
The traditional types of data that banks normally use, such as income, occupation, and credit history, update over longer cycles and therefore cannot reflect more immediate changes, such as a deterioration in financial status caused by a business crisis; digital footprints, by contrast, can update daily, weekly, or monthly, and are thus capable of providing a more comprehensive profile of a borrower’s credit capabilities and risks. From this empirical and quantitative examination, we believe digital footprints can become an alternative information source for creditworthiness assessment, because of their near-universal data coverage and because they largely resolve the "thin-file" issue: digital footprints come in much larger volume and at higher frequency.
Keywords: credit score, digital footprint, Fintech, machine learning
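As a minimal illustration of how the competing machine learning methods for credit default prediction are typically compared, the sketch below computes ROC AUC, a standard discrimination metric for default models, via the Mann-Whitney rank statistic. All scores, labels, and borrower counts here are invented toy values, not the paper's data.

```python
# Hedged sketch: evaluating a credit-default classifier with ROC AUC.
# AUC = probability that a randomly chosen defaulter receives a higher
# risk score than a randomly chosen non-defaulter (ties count as 1/2).

def roc_auc(scores, labels):
    """scores: model risk scores; labels: 1 = default, 0 = repaid."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy "digital footprint" risk scores for 8 hypothetical borrowers.
scores = [0.9, 0.8, 0.75, 0.6, 0.55, 0.4, 0.3, 0.1]
labels = [1,   1,   0,    1,   0,    0,   0,   0]
print(round(roc_auc(scores, labels), 3))  # → 0.933
```

A model whose footprint-based scores rank defaulters above non-defaulters most of the time, as here, would fall in the "acceptable to excellent" range the abstract describes; 0.5 would indicate no discrimination.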
Procedia PDF Downloads 164100 Common Soccer Injuries and Their Risk Factors: A Systematic Review
Authors: C. Brandt, R. Christopher, N. Damons
Abstract:
Background: Soccer is one of the most common sports in the world. It is associated with a significant chance of injury either during training or during the course of an actual match. Studies on the epidemiology of soccer injuries have been widely conducted, but methodological appraisal is lacking to make evidence-based decisions. Objectives: The purpose of this study was to conduct a systematic review of common injuries in soccer and their risk factors. Methods: A systematic review was performed based on the Joanna Briggs Institute procedure for conducting systematic reviews. Databases such as SPORT Discus, Cinahl, Medline, Science Direct, PubMed, and grey literature were searched. The quality of selected studies was rated, and data extracted and tabulated. Plot data analysis was done, and incidence rates and odds ratios were calculated, with their respective 95% confidence intervals. I² statistic was used to determine the proportion of variation across studies. Results: The search yielded 62 studies, of which 21 were screened for inclusion. A total of 16 studies were included for the analysis, ten for qualitative and six for quantitative analysis. The included studies had, on average, a low risk of bias and good methodological quality. The heterogeneity amongst the pooled studies was, however, statistically significant (χ²-p value < 0.001). The pooled results indicated a high incidence of soccer injuries at an incidence rate of 6.83 per 1000 hours of play. The pooled results also showed significant evidence of risk factors and the likelihood of injury occurrence in relation to these risk factors (OR=1.12 95% CI 1.07; 1.17). Conclusion: Although multiple studies are available on the epidemiology of soccer injuries and risk factors, only a limited number of studies were of sound methodology to be included in a review. There was also significant heterogeneity amongst the studies. 
The incidence rate of common soccer injuries was found to be 6.83 per 1000 hours of play. This incidence rate is lower than the values reported by the majority of previous studies on the occurrence of common soccer injuries. The types of common soccer injuries found by this review support the injury pattern reported in the existing literature: muscle strains and ligament sprains of varying severity, especially in the lower limbs. The risk factors that emerged from this systematic review are predominantly intrinsic. They increase the risk of traumatic and overuse injuries of the lower extremities, such as hamstring and groin strains, knee and ankle sprains, and contusions.
Keywords: incidence, prevalence, risk factors, soccer injuries
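The two pooled summary statistics above can be recomputed from raw counts. The sketch below shows the standard formulas: incidence per 1000 exposure hours, and an odds ratio with a 95% confidence interval from a 2x2 table. The counts used are hypothetical round numbers chosen for illustration; the review's per-study data are not shown here.

```python
import math

def incidence_per_1000h(injuries, exposure_hours):
    """Injury incidence rate per 1000 player-hours."""
    return 1000.0 * injuries / exposure_hours

def odds_ratio_ci(a, b, c, d, z=1.96):
    """2x2 table: a = exposed injured, b = exposed uninjured,
    c = unexposed injured, d = unexposed uninjured.
    Returns (OR, lower, upper) using the log-OR standard error."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lower = math.exp(math.log(or_) - z * se)
    upper = math.exp(math.log(or_) + z * se)
    return or_, lower, upper

# e.g. 683 injuries over 100,000 player-hours yields the pooled rate of 6.83
print(incidence_per_1000h(683, 100_000))   # → 6.83
print(odds_ratio_ci(120, 880, 100, 900))   # hypothetical exposure table
```

An interval that excludes 1.0, like the review's OR of 1.12 (95% CI 1.07 to 1.17), indicates a statistically significant association between the risk factor and injury.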
Procedia PDF Downloads 18499 Review of the Model-Based Supply Chain Management Research in the Construction Industry
Authors: Aspasia Koutsokosta, Stefanos Katsavounis
Abstract:
This paper reviews the model-based qualitative and quantitative Operations Management research in the context of Construction Supply Chain Management (CSCM). The construction industry has traditionally been blamed for low productivity, cost and time overruns, waste, high fragmentation, and adversarial relationships. It has been slower than other industries to employ the Supply Chain Management (SCM) concept and to develop models that support decision-making and planning. Over the last decade, however, there has been a distinct shift from a project-based to a supply-based approach to construction management. CSCM has emerged as a promising new management tool for construction operations that improves the performance of construction projects in terms of cost, time, and quality. Modeling the Construction Supply Chain (CSC) offers the means to reap the benefits of SCM, make informed decisions, and gain competitive advantage. Different modeling approaches and methodologies have been applied in the multi-disciplinary and heterogeneous research field of CSCM. The literature review reveals that a considerable percentage of CSC modeling consists of conceptual or process models that discuss general management frameworks and do not relate to acknowledged soft OR methods. We particularly focus on the model-based quantitative research and categorize the CSCM models by their scope, mathematical formulation, structure, objectives, solution approach, software used, and decision level. Although over the last few years there has clearly been an increase in research papers on quantitative CSC models, we find that the relevant literature is very fragmented, with limited applications of simulation, mathematical programming, and simulation-based optimization. Most applications are project-specific or study only parts of the supply system.
Thus, some complex interdependencies within construction are neglected and the implementation of integrated supply chain management is hindered. We conclude this paper by giving future research directions and emphasizing the need to develop robust mathematical optimization models for the CSC. We stress that CSC modeling needs a multi-dimensional, system-wide, and long-term perspective. Finally, prior applications of SCM in other industries have to be taken into account when modeling CSCs, but only after their generic concepts have been reformed to match the unique characteristics of the construction industry.
Keywords: construction supply chain management, modeling, operations research, optimization, simulation
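To make the kind of mathematical programming model the review calls for concrete, the sketch below solves a deliberately tiny instance: allocating material shipments from two suppliers to two construction sites at minimum transport cost, subject to supplier capacities and site demands. All costs and capacities are invented, and brute-force enumeration stands in for the LP/MIP solver a real CSC model would use.

```python
# Hedged sketch: a toy supplier-to-site allocation problem, the smallest
# member of the transportation-problem family of CSC optimization models.

cost = {('S1', 'A'): 4, ('S1', 'B'): 6,   # unit transport cost supplier->site
        ('S2', 'A'): 5, ('S2', 'B'): 3}
supply = {'S1': 10, 'S2': 10}             # supplier capacities (truckloads)
demand = {'A': 8, 'B': 7}                 # site requirements (truckloads)

def best_allocation():
    """Brute-force search over integer shipment plans (fine at this scale)."""
    best, plan = float('inf'), None
    for s1a in range(min(supply['S1'], demand['A']) + 1):
        for s1b in range(min(supply['S1'] - s1a, demand['B']) + 1):
            s2a, s2b = demand['A'] - s1a, demand['B'] - s1b  # S2 covers the rest
            if s2a < 0 or s2b < 0 or s2a + s2b > supply['S2']:
                continue  # infeasible plan
            total = (cost[('S1', 'A')] * s1a + cost[('S1', 'B')] * s1b
                     + cost[('S2', 'A')] * s2a + cost[('S2', 'B')] * s2b)
            if total < best:
                best = total
                plan = {'S1A': s1a, 'S1B': s1b, 'S2A': s2a, 'S2B': s2b}
    return best, plan

print(best_allocation())  # cheapest supplier serves each site
```

The system-wide perspective the paper stresses enters exactly here: optimizing each site's orders in isolation can violate shared supplier capacities, which only a joint model captures.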
Procedia PDF Downloads 50398 Impact of Non-Parental Early Childhood Education on Digital Friendship Tendency
Authors: Sheel Chakraborty
Abstract:
Modern society in developed countries has distanced itself from the earlier norm of joint-family living, and with increasing economic pressure, parents' availability to their children during the infant years has been consistently decreasing over the past three decades. During the same period, the pre-primary education system, built mainly on the developmental psychology frameworks of Jean Piaget and Lev Vygotsky, has been promoted in the US through legislation and funding. Early care and education may have a positive impact on young minds, but the growing number of kids facing social challenges in making friendships in their teenage years raises serious concerns about its effectiveness. The survey-based primary research presented here shows that a statistically significant number of millennials between the ages of 10 and 25 prefer to build friendships virtually rather than through face-to-face interactions. Moreover, many teenagers depend more on virtual friends whom they have never met. Contrary to the belief that early social interactions in a non-home setup make kids confident and more prepared for the real world, many shy-natured kids seem to develop a sense of shakiness in forming social relationships, resulting in loneliness by the time they are young adults. Reflecting on George Herbert Mead’s theory of the self, made up of the “I” and the “Me”, most functioning homes provide the freedom and the forgiving, congenial environment required for building a toddler's “I”; daycare or preschool can barely match that. Social images created from the expectations perceived by a preschooler's “Me” in a non-home setting may interfere with, and greatly overpower, the formation of a confident “I”, creating a crisis around the inability to form friendships face to face when they grow older.
Though the pervasive nature of social media cannot be ignored, the non-parental early care and education practices adopted largely by the urban population created a favorable platform of teen psychology on which social media popularity thrived, especially by providing refuge to shy Gen-Z teenagers. This can explain why young adults today perceive social media as their preferred outlet of expression and a place to form dependable friendships, despite the risk of being cyberbullied.
Keywords: digital socialization, shyness, developmental psychology, friendship, early education
Procedia PDF Downloads 12897 Influence of Pretreatment Magnetic Resonance Imaging on Local Therapy Decisions in Intermediate-Risk Prostate Cancer Patients
Authors: Christian Skowronski, Andrew Shanholtzer, Brent Yelton, Muayad Almahariq, Daniel J. Krauss
Abstract:
Prostate cancer has the third highest incidence rate and is the second leading cause of cancer death for men in the United States. Of the diagnostic tools available for intermediate-risk prostate cancer, magnetic resonance imaging (MRI) provides superior soft tissue delineation, serving as a valuable tool for both diagnosis and treatment planning. Currently, there is minimal data regarding the practical utility of MRI in the evaluation of intermediate-risk prostate cancer. As such, the National Comprehensive Cancer Network's guidelines list MRI as optional in intermediate-risk prostate cancer evaluation. This project aims to elucidate whether MRI affects radiation treatment decisions for intermediate-risk prostate cancer. This was a retrospective study evaluating 210 patients with intermediate-risk prostate cancer treated with definitive radiotherapy at our institution between 2019 and 2020. NCCN risk stratification criteria were used to define intermediate-risk prostate cancer. Patients were divided into two groups: those with a pretreatment prostate MRI and those without. We compared the use of external beam radiotherapy, brachytherapy alone, brachytherapy boost, and androgen deprivation therapy between the two groups. Inverse probability of treatment weighting was used to match the two groups for age, comorbidity index, American Urological Association symptom index, pretreatment PSA, grade group, and percent core involvement on prostate biopsy. Wilcoxon rank-sum and chi-squared tests were used to compare continuous and categorical variables. Of the patients who met the study's eligibility criteria, 133 had a prostate MRI and 77 did not. Following propensity matching, there were no differences in baseline characteristics between the two groups.
There were no statistically significant differences in the treatments pursued between the two groups: 42% vs 47% were treated with brachytherapy alone, 40% vs 42% with external beam radiotherapy alone, 18% vs 12% with external beam radiotherapy plus a brachytherapy boost, and 24% vs 17% received androgen deprivation therapy in the non-MRI and MRI groups, respectively. This analysis suggests that pretreatment MRI does not significantly impact radiation therapy or androgen deprivation therapy decisions in patients with intermediate-risk prostate cancer. A pretreatment prostate MRI should be used judiciously and pursued only to answer a specific question whose answer is likely to affect the treatment decision. Further follow-up is needed to correlate MRI findings with specific oncologic outcomes.
Keywords: magnetic resonance imaging, prostate cancer, definitive radiotherapy, Gleason score 7
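The inverse probability of treatment weighting step used above reduces to simple arithmetic once propensity scores are in hand. The sketch below shows that core step: in the study these scores would come from a model of the matching covariates (age, comorbidity index, symptom index, PSA, grade group, core involvement), but here the scores, group assignments, and outcomes are all invented toy values.

```python
# Hedged sketch: IPTW weights and a weighted outcome rate. Each patient is
# weighted by the inverse probability of the treatment group they actually
# received, so the weighted groups resemble each other on the covariates.

def iptw_weights(treated, propensity):
    """Weight = 1/p for the MRI group, 1/(1-p) for the non-MRI group,
    where p is the estimated probability of having a pretreatment MRI."""
    return [1.0 / p if t else 1.0 / (1.0 - p)
            for t, p in zip(treated, propensity)]

def weighted_rate(outcomes, weights):
    """Weighted proportion of patients with the outcome."""
    return sum(o * w for o, w in zip(outcomes, weights)) / sum(weights)

treated    = [1, 1, 1, 0, 0, 0]            # 1 = had pretreatment MRI
propensity = [0.8, 0.6, 0.5, 0.4, 0.3, 0.2]
brachy     = [1, 0, 1, 1, 0, 0]            # 1 = brachytherapy alone

w = iptw_weights(treated, propensity)
mri_rate    = weighted_rate([b for b, t in zip(brachy, treated) if t],
                            [wi for wi, t in zip(w, treated) if t])
no_mri_rate = weighted_rate([b for b, t in zip(brachy, treated) if not t],
                            [wi for wi, t in zip(w, treated) if not t])
print(round(mri_rate, 3), round(no_mri_rate, 3))
```

The study's comparison of brachytherapy, external beam, and ADT rates between groups is precisely a comparison of such weighted rates, tested with chi-squared or Wilcoxon rank-sum statistics.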
Procedia PDF Downloads 92