Search results for: original metaphor
506 Hepatoprotective Assessment of L-Ascorbate 1-(2-Hydroxyethyl)-4,6-Dimethyl-1,2-Dihydropyrimidine-2-One in Toxic Liver Damage Test
Authors: Vladimir Zobov, Nail Nazarov, Alexandra Vyshtakalyuk, Vyacheslav Semenov, Irina Galyametdinova, Vladimir Reznik
Abstract:
The aim of this study was to investigate the hepatoprotective properties of the Xymedon derivative L-ascorbate 1-(2-hydroxyethyl)-4,6-dimethyl-1,2-dihydropyrimidine-2-one (XD), which exhibits high efficiency as an actoprotector. The study was carried out on 68 male albino rats weighing 250-400 g using preventive exposure to the test preparation. The effectiveness of XD was compared with that of Xymedon (the original substance) after administration of the compounds in identical doses; the maximum dose was 20 mg/kg. The animals orally received Xymedon or its derivative in doses of 10 and 20 mg/kg over 4 days. At 1-1.5 h after drug administration, CCl4 in vegetable oil (1:1) was given at a dose of 2 ml/kg. Controls received CCl4 but no hepatoprotectors. The intact control group consisted of rats receiving neither CCl4 nor the compounds under study. On the day after the last administration of CCl4 and the compounds under study, animals were exsanguinated under ether anesthesia, and blood and liver samples were taken for biochemical and histological analysis. Xymedon and XD, administered according to the preventive scheme, exerted hepatoprotective effects: Xymedon at the dose of 20 mg/kg, and XD at doses of 10 and 20 mg/kg. The drugs under study had different effects on the condition of the liver affected by CCl4 induction. Xymedon had a more pronounced effect both on the ALT level, which can be elevated not only due to destructive changes in hepatocytes but also as a manifestation of cholestasis, and on the serum total protein level, which reflects protein synthesis in the liver. XD had a more pronounced effect on the AST level, which is one of the markers of hepatocyte damage. The lower effective dose of XD (10 mg/kg, compared to 20 mg/kg for Xymedon) and its pronounced effect on AST, the hepatocyte cytolysis marker, are indicative of its higher preventive effectiveness compared to Xymedon. This work was performed with the financial support of the Russian Science Foundation (grant No. 14-50-00014). Keywords: hepatoprotectors, pyrimidine derivatives, toxic liver damage, xymedon
Procedia PDF Downloads 304
505 Using the Weakest Precondition to Achieve Self-Stabilization in Critical Networks
Authors: Antonio Pizzarello, Oris Friesen
Abstract:
Networks, such as the electric power grid, must demonstrate exemplary performance and integrity. Integrity depends on the quality of both the system design model and the deployed software. Integrity of the deployed software is key, for both the original versions and the many that arise through numerous maintenance activities. Current software engineering technology and practice do not produce adequate integrity. Distributed systems utilize networks where each node is an independent computer system. The connections between nodes are realized via a network that is normally redundantly connected to guarantee the presence of a path between two nodes in the case of failure of some branch. Furthermore, at each node, there is software which may fail. Self-stabilizing protocols are usually present that recognize failure in the network and perform a repair action that brings the node back to a correct state. These protocols, first introduced by E. W. Dijkstra, are currently present in almost all Ethernets. Superstabilizing protocols, capable of reacting to a change in the network topology due to the removal or addition of a branch, are less common but are theoretically defined and available. This paper describes how to use the Software Integrity Assessment (SIA) methodology to analyze self-stabilizing software. SIA is based on the UNITY formalism for parallel and distributed programming, which allows the analysis of code for verifying the progress property p leads-to q, which describes the progress of all computations starting in a state satisfying p to a state satisfying q via the execution of one or more system modules. As opposed to demonstrably inadequate test and evaluation methods, SIA allows the analysis and verification of any network self-stabilizing software, as well as any other software that is designed to recover from failure without external intervention by maintenance personnel. The model to be analyzed is obtained by automatic translation of the system code to a transition system that is based on the use of the weakest precondition. Keywords: network, power grid, self-stabilization, software integrity assessment, UNITY, weakest precondition
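For readers less familiar with the notation, the following is a minimal sketch of how the weakest precondition and the UNITY progress property mentioned above are commonly defined in the literature (textbook forms, not a formulation taken from this paper; s denotes a program statement and F a UNITY program):

```latex
% Weakest precondition of an assignment statement:
\[
  wp(x := e,\; q) \;=\; q[x := e]
\]
% UNITY "ensures" and "leads-to" (p \mapsto q), sketched for a program F:
\[
  p \;\mathbf{ensures}\; q \;\equiv\;
  \bigl(\forall\, s \in F:\ \{p \wedge \neg q\}\; s\; \{p \vee q\}\bigr)
  \;\wedge\;
  \bigl(\exists\, s \in F:\ \{p \wedge \neg q\}\; s\; \{q\}\bigr)
\]
\[
  p \;\mathbf{ensures}\; q \;\Rightarrow\; p \mapsto q, \qquad
  (p \mapsto r) \wedge (r \mapsto q) \;\Rightarrow\; p \mapsto q
\]
```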
Procedia PDF Downloads 225
504 Engineered Bio-Coal from Pressed Seed Cake for Removal of 2,4,6-Trichlorophenol with Parametric Optimization Using Box–Behnken Method
Authors: Harsha Nagar, Vineet Aniya, Alka Kumari, Satyavathi B.
Abstract:
In the present study, engineered bio-coal was produced from pressed seed cake, which is otherwise non-edible in origin. The production process involves slow pyrolysis wherein, based on the optimization of process parameters, a substantial reduction in H/C and O/C of 77% was achieved with respect to the original ratios of 1.67 and 0.8, respectively. The bio-coal so produced was found to have a higher heating value of 29899 kJ/kg, with a surface area of 17 m²/g and a pore volume of 0.002 cc/g. The functional characterization of the bio-coal and its subsequent modification were carried out to enhance its active sites, and the material was then used as an adsorbent for the removal of 2,4,6-Trichlorophenol (2,4,6-TCP) herbicide from an aqueous stream. The point of zero charge of the bio-coal was found at pH < 3, where its surface is positively charged and attracts anions, resulting in maximum 2,4,6-TCP adsorption at pH 2.0. The parametric optimization of the adsorption process was studied based on the Box-Behnken design with the desirability approach. The results showed optimum values of adsorption efficiency of 74.04% and uptake capacity of 118.336 mg/g for an initial concentration of 250 mg/l and a particle size of 0.12 mm, at pH 2.0 and 1 g/L of bio-coal loading. Negative Gibbs free energy change values indicated the feasibility of 2,4,6-TCP adsorption on the bio-coal. Decreasing ΔG values with the rise in temperature indicated high favourability at low temperatures. The equilibrium modeling results showed that both isotherms (Langmuir and Freundlich) accurately predicted the equilibrium data, which may be attributed to the different affinities of the functional groups of the bio-coal for 2,4,6-TCP removal. The possible mechanisms of 2,4,6-TCP adsorption are found to be physisorption (pore diffusion, π-π electron donor-acceptor interaction, H-bonding, and van der Waals dispersion forces) and chemisorption (chemical bonding with phenolic and amine groups), based on the kinetics data modeling. Keywords: engineered bio-coal, 2,4,6-trichlorophenol, Box-Behnken design, biosorption
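For reference, the uptake and isotherm expressions referred to in the abstract have the following standard textbook forms (generic equations, not fitted parameters from this study), where C0 and Ce are the initial and equilibrium concentrations, V the solution volume, and m the adsorbent mass:

```latex
\[
  q_e = \frac{(C_0 - C_e)\,V}{m}, \qquad
  \text{Removal}\,(\%) = \frac{C_0 - C_e}{C_0}\times 100
\]
\[
  \text{Langmuir:}\quad q_e = \frac{q_{max}\,K_L\,C_e}{1 + K_L\,C_e}
  \qquad
  \text{Freundlich:}\quad q_e = K_F\,C_e^{1/n}
\]
```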
Procedia PDF Downloads 117
503 Scoping Review of Biological Age Measurement Composed of Biomarkers
Authors: Diego Alejandro Espíndola-Fernández, Ana María Posada-Cano, Dagnóvar Aristizábal-Ocampo, Jaime Alberto Gallo-Villegas
Abstract:
Background: With the increase in life expectancy, aging has been a subject of frequent research, and multiple strategies have therefore been proposed to quantify the advance of the years based on the known physiology of human senescence. For several decades, attempts have been made to characterize these changes through the concept of biological age, which aims to integrate, in a measure of time, structural or functional variation captured through biomarkers, in comparison with simple chronological age. The objective of this scoping review is to deepen the updated concept of biological age measurement composed of biomarkers in the general population and to summarize recent evidence to identify gaps and priorities for future research. Methods: A scoping review was conducted according to the five-phase methodology developed by Arksey and O'Malley through a search of five bibliographic databases up to February 2021. Original articles were included, with no time or language limit, that described biological age composed of at least two biomarkers in people over 18 years of age. Results: 674 articles were identified, of which 105 were evaluated for eligibility and 65 were included with information on the measurement of biological age composed of biomarkers. Articles dating from 1974, from 15 nationalities, were found, most of them observational studies, in which clinical or paraclinical biomarkers were used, and 11 different methods for the calculation of composite biological age were reported. The outcomes reported were the relationship with the same measured biomarkers, specified risk factors, comorbidities, physical or cognitive functionality, and mortality. Conclusions: The concept of biological age composed of biomarkers has evolved since the 1970s, and multiple methods for its quantification have been described through the combination of different clinical and paraclinical variables from observational studies. Future research should consider the population characteristics and the choice of biomarkers against the proposed outcomes, to improve the understanding of aging variables and to direct effective strategies for a proper approach. Keywords: biological age, biological aging, aging, senescence, biomarker
Procedia PDF Downloads 188
502 Enhancing Self-Assessment and Management Potentials by Modifying Option Selections on Hartman’s Personality Test
Authors: Daniel L. Clinciu, IkromAbdulaev, Brian D. Oscar
Abstract:
Various personality profile tests are used to identify personality strengths and limits in individuals, helping both individuals and managers to optimize work and team effort in organizations. One such test, Hartman's personality profile, emphasizes four driving "core motives" influenced or affected by both strengths and limitations. The driving core motives are classified into four colors: Red, motivated by power; Blue, discipline and loyalty; White, peace; and Yellow, fun loving. Two shortcomings of Hartman's personality test are noted: 1) only one choice is allowed for every item/situation, and 2) a choice must be selected even if none applies. A test taker may be as much nurturing as he is opinionated, but since "opinionated" seems less attractive, the individual would likely select nurturing, causing a misidentification of personality strengths and limits. Since few individuals have a "strong" personality, it is difficult to assess their true personality strengths and limits when allowing only one choice or requiring unwanted choices, which undermines the potential of the test. We modified Hartman's personality profile to allow test takers either to make multiple choices for any item/situation or to leave it blank when not applicable. Sixty-eight participants (38 males and 30 females), 17-49 years old, from countries in Asia, Europe, N. America, CIS, Africa, Latin America, and Oceania were included. 58 participants (85.3%) reported that the modified test, allowing either multiple or no choices, better identified their personality strengths and limits, while 10 participants (14.7%) expressed that the original (one-choice) version is sufficient. The overall results show that our modified test enhanced the identification and balance of personality strengths and limits, aiding test takers, managers, and firms in better understanding personality strengths and limits, which is particularly useful in making task-related, teamwork, and management decisions. Keywords: organizational behavior, personality tests, personality limitations, personality strengths, task management, team work
Procedia PDF Downloads 363
501 DNA Barcoding of Selected Fin Fishes from New Calabar River in Rivers State, South-South Nigeria Using Cytochrome C Oxidase Subunit 1 Gene
Authors: Anukwu J. U., Nwamba H. O., Njom V. S., Achikanu C. E., Edoga C. O., Chiaha I. E.
Abstract:
A major environmental crisis is the loss of biodiversity, and the decline is most pronounced in fish populations. Although taxonomic history began 250 years ago, there are still undiscovered members of known species, and new species are waiting to be uncovered. The failure of the traditional taxonomic method to address this issue has resulted in the adoption of a molecular approach: DNA barcoding. It has been proposed that DNA barcoding using the mitochondrial cytochrome c oxidase subunit I (COI) gene has the capability to serve as a barcode for fish. The aim of this study was to use DNA barcoding in the identification of fish species in the New Calabar River, Rivers State. The BLAST result showed the correlation between the queried sequences and the biological sequences in the NCBI database. The names of the samples, percentage identity, predicted organisms, and GenBank accession numbers were clearly identified. A total of 18 sequences (all > 600 bp) belonging to 8 species, 7 genera, 7 families, and 5 orders were validated and submitted to the NCBI database. Each nucleotide peak was represented by a single colour with various percentage occurrences. Two (22%) out of the 9 original samples analyzed corresponded with the predicted organisms from the BLAST result. There were a total of 712 positions in the final dataset. Evolutionary analyses were conducted in MEGA11. Pairwise sequence alignment showed different consensus positions and a total of 30 mutations. There was one insertion, from Polynemus dubius, and 29 substitution mutations (15 transitions and 14 transversions). No deletions or nonsense codons were detected in any of the amplified sequences. This work will facilitate more research in other key areas such as the identification of mislabeled fish products, illegal trading of endangered species, and effective tracking of fish biodiversity. Keywords: DNA barcoding, Imo river, phylogenetic tree, mutation.
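As a simple illustration of the transition/transversion tallying mentioned above, the sketch below classifies mismatches between two aligned sequences; the two short sequences are made-up placeholders, not data from this study:

```python
# Minimal sketch (illustrative only): classify nucleotide substitutions between two
# aligned COI sequences as transitions (purine<->purine, pyrimidine<->pyrimidine)
# or transversions. The sequences below are made-up placeholders.

PURINES = {"A", "G"}
PYRIMIDINES = {"C", "T"}

def count_substitutions(ref: str, query: str):
    """Return (transitions, transversions) for two equal-length aligned sequences."""
    transitions = transversions = 0
    for a, b in zip(ref.upper(), query.upper()):
        if a == b or "-" in (a, b) or "N" in (a, b):
            continue  # identical site, alignment gap, or ambiguous base
        if ({a, b} <= PURINES) or ({a, b} <= PYRIMIDINES):
            transitions += 1
        else:
            transversions += 1
    return transitions, transversions

if __name__ == "__main__":
    ref   = "ATGGCACTTTTAGGTTTAC"
    query = "ATAGCGCTTCTAGGTTTGC"
    ts, tv = count_substitutions(ref, query)
    print(f"transitions={ts}, transversions={tv}")
```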
Procedia PDF Downloads 8
500 Driving towards Sustainability with Shared Electric Mobility: A Case Study of Time-Sharing Electric Cars on University’s Campus
Authors: Jiayi Pan, Le Qin, Shichan Zhang
Abstract:
Following the worldwide growing interest in the sharing economy, especially in China, innovations within the field are rapidly emerging. It is, therefore, appropriate to address the under-investigated sustainability issues related to the development of shared mobility. In 2019, Shanghai Jiao Tong University (SJTU) introduced one of the first on-campus Time-sharing Electric Car (TEC) schemes, which now counts about 4000 users. The increasing popularity of this original initiative highlights the necessity to assess its sustainability and find ways to extend the performance and availability of this new transport option. This study used an online questionnaire survey on TEC usage and experience to collect answers among students and university staff. The study also conducted interviews with the TEC team in order to better understand its motivations and operating model. Data analysis underscores that TEC usage frequency is positively associated with a lower carbon footprint, showing that this scheme contributes to improving the environmental sustainability of transportation on campus. This study also demonstrates that TEC provides a convenient solution to those not owning a car in situations where soft mobility cannot satisfy their needs, thus contributing to a globally positive assessment of TEC in the social domains of sustainability. As SJTU's TEC project belongs to the non-profit sector and aims at serving current research, its economic sustainability is not among the main preoccupations, and TEC, along with similar projects, could greatly benefit from this study's findings to better evaluate the overall benefits and develop operation on a larger scale. This study suggests various ways to further improve the TEC user experience and enhance its promotion. This research arguably provides meaningful insights on the position of shared transportation within transport mode choice and on how to assess the overall sustainability of such innovations. Keywords: shared mobility, sharing economy, sustainability assessment, sustainable transportation, urban electric transportation
Procedia PDF Downloads 215
499 An Exploration of Architecture Design Methods in Urban Fringe Belt Based on Typo-Morphological Research: A Case of Expansion Project of the Second Middle School in Xuancheng, China
Authors: Dong Yinan, Zhou Zijie
Abstract:
The urban fringe belt is an important part of urban morphology research. Unlike the relatively fixed central district of a city, the position of the fringe belt changes. In the process of urban expansion, the original fringe belt is likely to be absorbed by the newly built city and may even become a new public center. During this change, we face the dialectic between restoring the organicity of the old urban form and creating a new urban image. There is much relevant research at the urban scale, but at the building scale few design methods have been proposed; as a result, some new individual buildings fail to match the overall urban planning intent. The expansion project of the second middle school in Xuancheng faces this situation. The existing campus is located in the southern fringe belt of Xuancheng, Anhui province, China, adjacent to farmland and ponds. Under the Xuancheng urban plan, the farmland and ponds will be transformed into a large lake, around which a new public center will be built; the expansion of the school becomes an important part of the boundary of the new public center. Therefore, the expansion project faces challenges at both the urban and the building scale. At the urban scale, we analyze and summarize the fringe belt characteristics through a reading of the existing and future urban organism, in order to determine the form of the expansion project. Meanwhile, at the building scale, we study different types of school buildings and select an appropriate type that satisfies both the urban form and the school function. This research attempts to investigate design methods based on an under-construction project in Xuancheng, a historic city in southeast China. It also aims to bridge the gap from urban design to individual building design through typo-morphological research. Keywords: design methods, urban fringe belt, typo-morphological research, middle school
Procedia PDF Downloads 508
498 Brown Macroalgae L. hyperborea as Natural Cation Exchanger and Electron Donor for the Treatment of a Zinc and Hexavalent Chromium Containing Galvanization Wastewater
Authors: Luciana P. Mazur, Tatiana A. Pozdniakova, Rui A. R. Boaventura, Vitor J. P. Vilar
Abstract:
The electroplating industry requires large amounts of process water, which generates a large volume of wastewater loaded with heavy metals. Two different wastewaters were collected at a company's wastewater treatment plant, one after the use of zinc in the metal plating process and the other after the use of chromium. The main characteristics of the Zn(II) and Cr(VI) wastewaters are: pH = 6.7/5.9; chemical oxygen demand = 55/<5 mg/L; sodium, potassium, magnesium, and calcium ion concentrations of 326/28, 4/28, 11/7, and 46/37 mg/L, respectively; Zn(II) = 11 mg/L and Cr(VI) = 39 mg/L. Batch studies showed that L. hyperborea can serve as a natural cation exchanger for heavy metal uptake, mainly due to the presence of negatively charged functional groups on the surface of the biomass. Beyond that, L. hyperborea can be used as a natural electron donor for the reduction of hexavalent chromium to trivalent chromium in acidic medium through the oxidation of the biomass, and Cr(III) can be further bound to the negatively charged functional groups. The uptake capacity of Cr(III) by the oxidized biomass after Cr(VI) reduction was higher than by the alga in its original form. This can be attributed to the oxidation of the biomass during Cr(VI) reduction, which makes additional active sites available for Cr(III) binding. The brown macroalga Laminaria hyperborea was packed in a fixed-bed column in order to evaluate the feasibility of the system for the continuous treatment of the two galvanization wastewaters. The column, with an internal diameter of 4.8 cm, was packed with 59 g of algae up to a bed height of 27 cm. The operation strategy adopted for the treatment of the two wastewaters consisted of: i) treatment of the Zn(II) wastewater in the first sorption cycle; ii) desorption of pre-loaded Zn(II) using a 1.0 M HCl solution; iii) treatment of the Cr(VI) wastewater, taking advantage of the acidic conditions of the column after the desorption cycle, for the reduction of Cr(VI) to Cr(III) in the presence of the electrons resulting from the biomass oxidation. This cycle ends when all the oxidizable groups are used. Keywords: biosorption, brown marine macroalgae, zinc, chromium
Procedia PDF Downloads 324
497 Enhancing Model Interoperability and Reuse by Designing and Developing a Unified Metamodel Standard
Authors: Arash Gharibi
Abstract:
Mankind has always used models to solve problems. Essentially, models are simplified versions of reality, whose need stems from having to deal with complexity; many processes or phenomena are too complex to be described completely. Thus, a fundamental model requirement is that it contains the characteristic features that are essential in the context of the problem to be solved or described. Models are used in virtually every scientific domain to deal with various problems. During recent decades, the number of models has increased exponentially. Publication of models as part of original research has traditionally been in scientific periodicals, series, monographs, agency reports, national journals, and laboratory reports. This makes it difficult for interested groups and communities to stay informed about the state of the art. During the modeling process, many important decisions are made which impact the final form of the model. Without a record of these considerations, the final model remains ill-defined and open to varying interpretations. Unfortunately, the details of these considerations are often lost, or, where information about a model does exist, it is likely to be written intuitively in different layouts and to different degrees of detail. To overcome these issues, different domains have attempted to implement their own approaches to preserving model information in the form of model documentation. The most frequently cited model documentation approaches are domain specific, are not applicable to existing models, and do not permit evolutionary flexibility or intrinsic corrections and improvements. These issues all stem from a lack of unified standards for model documentation. As a way forward, this research proposes a new standard for capturing and managing model information in a unified way, so that interoperability and reusability of models become possible. This standard will also be evolutionary, meaning that members of the modeling community can contribute to its ongoing development and improvement. In this paper, three of the most common current metamodels are reviewed and, according to the pros and cons of each, a new metamodel is proposed. Keywords: metamodel, modeling, interoperability, reuse
Procedia PDF Downloads 198
496 Psychometric Properties of the Social Skills Rating System: Teacher Version
Authors: Amani Kappi, Ana Maria Linares, Gia Mudd-Martin
Abstract:
Children with Attention Deficit Hyperactivity Disorder (ADHD) are more likely to develop social skills deficits that can lead to academic underachievement, peer rejection, and maladjustment. Surveying teachers about the social skills of children with ADHD is a significant factor in identifying whether the children will be diagnosed with social skills deficits. The teacher-specific version of the Social Skills Rating System scale (SSRS-T) has been used as a screening tool for children's social behaviors. The psychometric properties of the SSRS-T have been evaluated in various populations and settings, for example when used by teachers to assess the social skills of children with learning disabilities. However, few studies have examined the psychometric properties of the SSRS-T when used to assess children with ADHD. The purpose of this study was to examine the psychometric properties of the SSRS-T and two SSRS-T subscales, Social Skills and Problem Behaviors. This was a secondary analysis of longitudinal data from the Fragile Families and Child Well-Being Study. The study included a sample of 194 teachers who used the SSRS-T to assess the social skills of children aged 8 to 10 years with ADHD. Exploratory principal components factor analysis was used to assess the construct validity of the SSRS-T scale. Cronbach's alpha was used to assess the internal consistency reliability of the total SSRS-T scale and the subscales. Item analyses included item-item intercorrelations, item-to-subscale correlations, and changes in Cronbach's alpha with item deletion. The internal consistency reliability results for both the total scale and the subscales were acceptable. The results of the exploratory factor analysis supported the five factors of the SSRS-T (Cooperation, Self-control, Assertion, Internalizing behaviors, and Externalizing behaviors) reported in the original version. Findings indicated that the SSRS-T is a reliable and valid tool for assessing the social behaviors of children with ADHD. Keywords: ADHD, children, social skills, SSRS-T, psychometric properties
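As a reminder of the reliability statistic used above, the sketch below computes Cronbach's alpha from an item-score matrix using the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of the total score); the simulated score matrix is a made-up placeholder, not data from the study:

```python
# Minimal sketch (not the study's code): Cronbach's alpha for a set of scale items.
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """scores: 2-D array, rows = respondents, columns = items."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of summed scale score
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    latent = rng.normal(size=(194, 1))                   # shared underlying trait
    items = latent + 0.5 * rng.normal(size=(194, 10))    # 10 correlated items
    print(f"Cronbach's alpha = {cronbach_alpha(items):.3f}")
```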
Procedia PDF Downloads 132
495 Hydrological Revival Possibilities for River Assi: A Tributary of the River Ganga in the Middle Ganga Basin
Authors: Anurag Mishra, Prabhat Kumar Singh, Anurag Ohri, Shishir Gaur
Abstract:
Streams and rivulets are crucial in maintaining river networks and their hydrology, influencing downstream ecosystems, and connecting different watersheds of urban and rural areas. The river Assi, an urban river and once a lifeline for the locals, has degraded over time. Evidence, such as the presence of paleochannels and the patterns of water bodies and settlements, suggests that the river Assi was initially an alluvial stream or rivulet that originated near the Rishi Durvasha Ashram near Prayagraj, flowing approximately 120 km before joining the river Ganga at Assi ghat in Varanasi. Presently, a major challenge is that nearly 90% of its original channel has silted up and disappeared, with only the last 8 km retaining some semblance of a river. It is possible that the river Assi initially branched off from the river Ganga and functioned as a Yazoo stream. In this study, paleochannels of the river Assi were identified using Landsat 5 imagery and the SRTM DEM. The study employed the Normalized Difference Vegetation Seasonality Index (NDVSI) and Principal Component Analysis (PCA) of the Normalized Difference Vegetation Index (NDVI) to detect these paleochannels. The average elevation of the sub-basin at the Durvasha Rishi Ashram of the river Assi is 96 meters, while it reduces to 80 meters near its confluence with the Ganga in Varanasi, resulting in a 16-meter elevation drop along its course. There are 81 subbasins covering an area of 83,241 square kilometers. It is possible that, due to the increased resistance to the flow of the river Assi near the urban areas of Varanasi, a new channel, the Morwa, has originated at an elevation of 87 meters, meeting the river Varuna at an elevation of 79 meters, a difference in elevation of 8 meters. Furthermore, the study explored the possibility of restoring the paleochannel of the river Assi and nearby ponds and water bodies to improve the river's base flow and overall hydrological conditions. Keywords: River Assi, small river restoration, paleochannel identification, remote sensing, GIS
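As a small illustration of the vegetation-index step mentioned above, the sketch below computes NDVI = (NIR - Red) / (NIR + Red) from two reflectance arrays; for Landsat 5 TM these would typically be band 3 (Red) and band 4 (NIR), and the random arrays here are placeholders, not the study's imagery:

```python
# Minimal sketch (illustrative, not the study's processing chain): NDVI from
# red and near-infrared reflectance arrays.
import numpy as np

def ndvi(red: np.ndarray, nir: np.ndarray) -> np.ndarray:
    red = red.astype(float)
    nir = nir.astype(float)
    denom = nir + red
    with np.errstate(invalid="ignore", divide="ignore"):
        out = (nir - red) / denom
    return np.where(denom == 0, np.nan, out)  # mask pixels with no signal

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    red = rng.uniform(0.02, 0.3, size=(100, 100))   # placeholder Red reflectance
    nir = rng.uniform(0.1, 0.6, size=(100, 100))    # placeholder NIR reflectance
    index = ndvi(red, nir)
    print("mean NDVI over the scene:", float(np.nanmean(index)))
```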
Procedia PDF Downloads 73
494 Status of Alien Invasive Trees on the Grassland Plateau in Nyika National Park
Authors: Andrew Kanzunguze, Sopani Sichinga, Paston Simkoko, George Nxumayo, Cosmas, V. B. Dambo
Abstract:
Early detection of plant invasions is a necessary prerequisite for effective invasive plant management in protected areas. This study was conducted to determine the distribution and abundance of alien invasive trees in Nyika National Park (NNP). Data on species presence and abundance were collected from belt transects (n=31) in a 100 square kilometer area on the central plateau. The data were tested for normality using the Shapiro-Wilk test; the Mann-Whitney test was carried out to compare frequencies and abundances between the species, and geographical information systems were used for spatial analyses. Results revealed that Black Wattle (Acacia mearnsii), Mexican Pine (Pinus patula), and Himalayan Raspberry (Rubus ellipticus) were the main alien invasive trees on the plateau. A. mearnsii was localized in the areas where it was first introduced, whereas P. patula and R. ellipticus had spread beyond their original points of introduction. R. ellipticus occurred as dense, extensive (up to 50 meters) thickets on the margins of forest patches and pine stands, whilst P. patula trees were frequent in the valleys, occurring most densely (up to 39 stems per 100 square meters) south-west of Chelinda camp on the central plateau, with high variation in tree heights. Additionally, there were no significant differences in abundance between R. ellipticus (48) and P. patula (48) in the study area (p > 0.05). It was concluded that R. ellipticus and P. patula require more attention compared to A. mearnsii. However, further studies into the invasion ecology of both P. patula and R. ellipticus on the Nyika plateau are highly recommended, so as to assess the threat posed by these species to biodiversity and to recommend appropriate conservation measures in the national park. Keywords: alien-invasive trees, Himalayan raspberry, Nyika National Park, Mexican pine
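The statistical workflow described above (a normality check followed by a non-parametric comparison) can be sketched as follows; the transect counts are simulated placeholders, not the survey data:

```python
# Minimal sketch (made-up counts): Shapiro-Wilk normality test per species,
# then a Mann-Whitney U comparison of the two abundance samples.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
pinus_counts = rng.poisson(lam=5, size=31)   # stems per transect, species A
rubus_counts = rng.poisson(lam=6, size=31)   # stems per transect, species B

for name, counts in [("P. patula", pinus_counts), ("R. ellipticus", rubus_counts)]:
    w, p = stats.shapiro(counts)
    print(f"{name}: Shapiro-Wilk W={w:.3f}, p={p:.3f}")

u, p = stats.mannwhitneyu(pinus_counts, rubus_counts, alternative="two-sided")
print(f"Mann-Whitney U={u:.1f}, p={p:.3f}")
```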
Procedia PDF Downloads 210
493 Tracing Ethnic Identity through Prehistoric Paintings and Tribal Art in Central India
Authors: Indrani Chattopadhyaya
Abstract:
This paper seeks to examine how identity, the cultural self-image of a group of people, develops: how they live, think, and celebrate, and how they express their world view through language, gesture, symbols, and rituals. 'Culture' is a way of life, and 'identity' is the assertion of that cultural self-image practiced by the group. The way in which peoples live varies from time to time and from place to place. This variation is important for their identity. Archaeologists have classified these patterns of spatial variation as 'archaeological cultures.' These cultures are identified 'self-consciously' with a particular social group, indicating ethnicity. The ethnic identity expressed as archaeological cultures also legitimizes the claims of modern groups to territory. In prehistoric research on problems of ethnicity and multiculturalism, stylistic attributes significantly reflect both group membership and individuality. In India, anthropologists feel that though tribes have suffered relative isolation through history, they have remained an integral part of Indian civilization. The term 'tribe' calls for substitution with a more meaningful name with an indigenous flavour, 'Adivasi' (original inhabitants of the land). While studying prehistoric rock paintings from central India, in Sonbhadra (Uttar Pradesh) and Bhimbetka (Madhya Pradesh), one is struck by the similarity between the stylistic attributes of painted motifs in the prehistoric rock shelters and the present-day indigenous art of the Kol and Bhil tribes in the area, who have not seen these prehistoric rock paintings yet are carrying on the tradition of painting and decorating their houses in the same way. They worship concretionary sandstone blocks with triangular laminae as the Goddess, Devi, Shakti. This practice has been going on since the Upper Palaeolithic period, as confirmed by archaeological excavation. The past legitimizes the role of the present groups by allowing them to trace their roots to earlier times. Keywords: ethnic identity, hermeneutics, semiotics, Adivasi
Procedia PDF Downloads 310
492 The Role of Non-Governmental Organizations in Combating Human Trafficking in South India: An Overview
Authors: Kumudini Achchi
Abstract:
India, known for its rich cultural values, has given a special place to women, who have nevertheless also been victims of humiliation, torture, and exploitation. The major share of human trafficking goes to sex trafficking, which is recognised as the world's second largest social evil. The original form of sex trafficking in India is prostitution, with and without religious sanction. Today, the situation of such women has become an issue of human rights, as their rights are severely denied. This situation demands intervention to protect them from exploitation. NGOs are proactive initiatives which offer support to exploited women in the sex trade. To understand the intervention programs of NGOs in South India, a study was conducted covering four states and a union territory, considering 32 NGOs selected on the basis of their preparedness to participate in the research. A descriptive and diagnostic research design was adopted, along with an interview schedule as the tool for collecting data. The study reveals that these NGOs believe in the possibility of mainstreaming commercially sexually exploited women and were found to adopt seven types of programs: rescue, rehabilitation, reintegration, prevention, development, advocacy, and research. Each area involves different programs to reach and prepare the exploited women for mainstream society, which are discussed in the paper. Implementation of these programs is not an easy task for the organizations; rather, they face social, legal, financial, and political hardships which hinder successful operations. Rescue, advocacy, and research are the areas least adopted by the NGOs because of a lack of support as well as knowledge in these areas. Rehabilitation stands as the most adopted area in implementation. The paper further deals with the challenges in the implementation of the programs as well as remedial measures from a social work point of view against the Indian cultural background. Keywords: NGOs, commercially sexually exploited women, programmes, South India
Procedia PDF Downloads 248
491 Marketing Strategy of Agricultural Products in Remote Districts: A Case Study of Mudan Township, Taiwan
Authors: Ying-Hsiang Ho, Hsiao-Tseng Lin
Abstract:
Mudan Township is a remote mountainous area in Taiwan. In recent years, due to population migration, inconvenient transportation, the digital divide, and low production, the marketing of agricultural products has become a major issue. This research aims to develop a marketing strategy suitable for the agricultural products of rural areas. The main objective of this work is to conduct in-depth interviews with scholars and experts in the marketing field and, combined with the marketing 4P mix, to analyze and summarize possible marketing strategies for agricultural products in remote districts. The interviewees consisted of seven experts from industry with practical experience in producing, marketing, and selling agricultural products, and three professors with experience in teaching marketing management. The in-depth interviews were conducted for about an hour each using a pre-drafted interview outline. The results of the interviews were summarized by semantic analysis and presented within the marketing 4P framework. The results indicate that, in terms of product, high-quality products with original characteristics can be developed through the implementation of production history, organic certification, and cultural packaging. For place, we found that the use of emerging communities, an emphasis on cross-industry alliances, the improvement of the information application capabilities of rural households, production and marketing groups, and contract farming systems are the development priorities. In terms of promotion, the emphasis should be on the management of internet social media and word-of-mouth marketing. Mudan Township may also consider promoting agricultural products through special festivals such as the farmer's market, the wild ginger flower season, and the hot spring season. This research also proposes relevant recommendations for the government's public sector and for related industries as a reference for the promotion of agricultural products in remote areas. Keywords: marketing strategy, remote districts, agricultural products, in-depth interviews
Procedia PDF Downloads 128
490 Coping with the Stress and Negative Emotions of Care-Giving by Using Techniques from Seneca, Epictetus, and Marcus Aurelius
Authors: Arsalan Memon
Abstract:
There are many challenges that a caregiver faces in average everyday life. One such challenge is coping with the stress and negative emotions of caregiving. The Stoics (i.e., Lucius Annaeus Seneca [4 B.C.E. - 65 C.E.], Epictetus [50-135 C.E.], and Marcus Aurelius [121-180 C.E.]) have provided coping techniques that are useful for dealing with stress and negative emotions. This paper lists and explains some of the fundamental coping techniques provided by the Stoics. For instance, some Stoic coping techniques are as follows (the list is far from exhaustive): a) mindfulness: to the best of your ability, constantly being aware of your thoughts, habits, desires, norms, memories, likes/dislikes, beliefs, values, and of everything outside of you in the world; b) constantly adjusting one's expectations in accordance with reality; c) memento mori: constantly reminding oneself that death is inevitable and that death is not to be seen as evil; and d) praemeditatio malorum: constantly detaching oneself from everything that is dear to one, so that the least amount of suffering follows from the loss, damage, or ceasing to be of such entities. All coping techniques are extracted from the following original texts by the Stoics: Seneca's Letters to Lucilius, Epictetus' Discourses and the Encheiridion, and Marcus Aurelius' Meditations. One major finding is that the usefulness of each Stoic coping technique can be empirically tested by anyone, in the sense of applying it in one's own life, especially when facing real-life challenges. Another major finding is that all of the Stoic coping techniques are predicated upon, and follow from, one fundamental principle: constantly differentiate between what is and what is not in one's control. After differentiating, one should constantly habituate oneself to not trying to control things that are beyond one's control. For example, the following things are beyond one's control (all things being equal): death, certain illnesses, being born into a particular socio-economic family, etc. The conclusion is that if one habituates oneself by practicing, to the best of one's ability, both the fundamental Stoic principle and the Stoic coping techniques, then such habitual practice can eventually decrease the stress and negative emotions that one experiences as a caregiver. Keywords: care-giving, coping techniques, negative emotions, stoicism, stress
Procedia PDF Downloads 141
489 The Work Book Tool, a Lifelong Chronicle: Part of the "Designprogrammet" at the Design School of the University in Kalmar, Sweden
Authors: Henriette Jarild-Koblanck, Monica Moro
Abstract:
The research has been implemented for several years at Kalmar University, now Linnaeus University (LNU), within the Design Program (Designprogrammet). The Work Book tool was created using the framework of the Bologna Declaration. The project concerns primarily pedagogy and design methodology, focusing on how we evaluate artistic work processes and projects and on how we can develop the preconditions for cross-disciplinary work. The original idea of the Work Book springs from the steady habit of the Swedish researcher and now retired full professor and dean Henriette Koblanck, right from her childhood, of putting images, things, and colours in a notebook and writing down impressions and reflections. On this preliminary thought of making use of a work book, in a form freely chosen by the user, she began to develop the Design Program (Designprogrammet) that was applied at Kalmar University, now Linnaeus University (LNU), where she called on a number of professionals to collaborate, among them Monica Moro, an Italian designer, researcher, and teacher in the field of colour and shape. The educational intention is that the Work Book should become a tool that is both an inspiration for the processes of thinking and intuitive creation and a personal support for rational and technical thinking. The students were to use the Work Book not only to visually and graphically document their results from investigations, experiments, and thoughts, but also as a tool to present their work to others: fellow students, tutors and teachers, or other stakeholders with whom they discussed the proceedings. To help the students, a number of matrices oriented towards evaluating the projects under development were created, based on the Bologna Declaration. In conclusion, the feedback from the students is excellent; many are still using the Work Book as a professional tool, as in their words they consider it a rather accurate representation of their working process, and furthermore of themselves, so much so that many of them have used it as a portfolio when applying for jobs. Keywords: academic program, art, assessment of student's progress, Bologna Declaration, design, learning, self-assessment
Procedia PDF Downloads 338
488 Design and Development of Engine Valve Train Wear Test Rig for the Assessment of Valve Train Tribochemistry
Authors: V. Manjunath, C. V. Chandrashekara
Abstract:
Environmental authorities call for the use of lubricants with less effect on nature in terms of exhaust emissions, while engine users demand more mileage per liter of fuel without any compromise on engine durability. From this viewpoint, engine manufacturers require the optimum combination of materials and lubricant additive package to minimize friction and wear in engine components such as the piston, crankshaft, and valve train. Demands are placed on operation at higher speeds, loads, and temperatures and on extended replacement intervals for engine oil. Besides, it is necessary to accurately predict the lubricant life, or the replacement interval, to prevent failure of the lubrication and valve-train components. Experimental tribological evaluation of new engine oils requires a large amount of time and energy. Hence, a low-cost bench test is necessary for industries and original equipment manufacturers (OEMs) to study the performance of lubricants. The present work outlines the procedure for the design and development of a valve train wear rig (MCR) to simulate ASTM D6891 and to develop a new engine test for the Indian automobile sector to evaluate lubricants for the Indian automobile market. In order to improve the lubrication between the cam and follower of an internal combustion engine, the influence of materials, oil viscosity, and additives on the friction and wear characteristics is examined with the test rig by increasing the contact load at two different rotational speeds. The following results emerged from the experiments. Temperature, torque, speed, and wear plots are used to validate the data obtained from the newly developed multi-cam rig (MCR) with a follower running against a cast iron camshaft. Camshaft lobe wear is measured at seven different locations on the cam profile. The tribofilm formed using 5W-30 oil is evaluated and correlated with the standard test results. Keywords: ASTM D6891, multi-cam rig (MCR), 5W-30, cam profile
Procedia PDF Downloads 177
487 Homogenization of Cocoa Beans Fermentation to Upgrade Quality Using an Original Improved Fermenter
Authors: Aka S. Koffi, N’Goran Yao, Philippe Bastide, Denis Bruneau, Diby Kadjo
Abstract:
Cocoa beans (Theobroma cacao L.) are the main component of chocolate manufacturing, and the beans must first be correctly fermented. The traditional process for performing the first fermentation (lactic fermentation) often consists of confining the cacao beans using banana leaves or a fermentation basket, both of which lead to poor thermal insulation of the product and to an inability to mix it. A box fermenter reduces this loss by using wood of large thickness (e > 3 cm), but mixing to homogenize the product is still hard to perform. Automatic fermenters are not profitable for most producers. Heat (T > 45°C) and acidity produced during fermentation by the microbiological activity of yeasts and bacteria enable the emergence of the potential flavor and taste of the future chocolate. In this study, a cylindro-rotative fermenter (FCR-V1) was built, and coconut fibers were used in its structure to confine heat. An axis of rotation (360°) was integrated to facilitate the turning and homogenization of the beans in the fermenter. This axis allows the fermenter to be put in a vertical position during the anaerobic alcoholic phase of fermentation, and horizontally during the acetic phase, to take advantage of the mid-height filling. For the circulation of air flow during turning in the acetic phase, two woven rattan grids were made, one for the top and the second for the bottom of the fermenter. In order to reduce the air flow during the acetic phase, two airtight covers are put on the grid covers. The efficiency of turning by this kind of rotation, coupled with the homogenization of temperature resulting from the horizontal position of the fermenter in the acetic phase, contributes to obtaining a good proportion of well-fermented beans (83.23%). In addition, the beans' pH values ranged between 4.5 and 5.5. These values are ideal for the enzymatic activity involved in the production of the aromatic compounds inside the beans. The regularity of mass loss throughout fermentation makes it possible to predict the drying surface corresponding to the amount being fermented. Keywords: cocoa fermentation, fermenter, microbial activity, temperature, turning
Procedia PDF Downloads 263
486 Size Optimization of Microfluidic Polymerase Chain Reaction Devices Using COMSOL
Authors: Foteini Zagklavara, Peter Jimack, Nikil Kapur, Ozz Querin, Harvey Thompson
Abstract:
The invention and development of Polymerase Chain Reaction (PCR) technology have revolutionised molecular biology and molecular diagnostics. There is an urgent need to optimise the performance of these devices while reducing the total construction and operation costs. The present study proposes a CFD-enabled optimisation methodology for continuous flow (CF) PCR devices with a serpentine-channel structure, which enables the trade-offs between the competing objectives of DNA amplification efficiency and pressure drop to be explored. This is achieved by using a surrogate-enabled optimisation approach accounting for the geometrical features of a CF μPCR device, performing a series of simulations at a relatively small number of Design of Experiments (DoE) points with the use of COMSOL Multiphysics 5.4. The values of the objectives are extracted from the CFD solutions, and response surfaces are created using polyharmonic splines and neural networks. After creating the respective response surfaces, a genetic algorithm and a multi-level coordinate search optimisation function are used to locate the optimum design parameters. Both optimisation methods produced similar results for both the neural network and the polyharmonic spline response surfaces. The results indicate the possibility of improving the DNA amplification efficiency by ~2% in one PCR cycle when doubling the width of the microchannel to 400 μm while maintaining the height at the value of the original design (50 μm). Moreover, the increase in the width of the serpentine microchannel is combined with a decrease in its total length, in order to obtain the same residence times in all the simulations, resulting in a smaller total substrate volume (a 32.94% decrease). A multi-objective optimisation is also performed with the use of a Pareto front plot. Such knowledge will enable designers to maximise the amount of DNA amplified or to minimise the time taken throughout thermal cycling in such devices. Keywords: PCR, optimisation, microfluidics, COMSOL
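The surrogate-based workflow described above (sample a few DoE points with the expensive solver, fit a response surface, then optimise the cheap surrogate) can be sketched as follows; here a simple analytic function stands in for the COMSOL model, and the design variables, bounds, and objective are illustrative placeholders rather than the study's actual setup:

```python
# Minimal sketch (illustrative only): surrogate-based optimisation.
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.optimize import differential_evolution

def expensive_model(x):
    """Stand-in for a CFD run: returns a scalar 'cost' for (width_um, height_um)."""
    w, h = x
    return (w - 380.0) ** 2 / 1e4 + (h - 60.0) ** 2 / 1e2 + 0.01 * w * h / 1e3

# 1) Design of Experiments: a small random sample of the design space.
rng = np.random.default_rng(3)
bounds = np.array([[100.0, 500.0], [30.0, 100.0]])   # width, height ranges (um)
doe = rng.uniform(bounds[:, 0], bounds[:, 1], size=(40, 2))
responses = np.array([expensive_model(x) for x in doe])

# 2) Fit a polyharmonic (thin-plate spline) surrogate to the DoE responses.
surrogate = RBFInterpolator(doe, responses, kernel="thin_plate_spline")

# 3) Optimise the cheap surrogate instead of the expensive model.
result = differential_evolution(
    lambda x: float(surrogate(x.reshape(1, -1))[0]),
    bounds=list(map(tuple, bounds)),
    seed=3,
)
print("surrogate optimum (width, height):", result.x, "value:", result.fun)
```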
Procedia PDF Downloads 162
485 Study on Eco-Feedback of Thermal Comfort and Cost Efficiency for Low Energy Residence
Authors: Y. Jin, N. Zhang, X. Luo, W. Zhang
Abstract:
China, with an annual increase of 0.5-0.6 billion square meters of urban residential floor area, has seen enormous energy consumption by HVAC facilities and other appliances. In this regard, governments and researchers are encouraging the use of renewable energy, such as solar and geothermal energy, in houses. However, the high cost of equipment and low energy conversion result in very low acceptance by residents. So what is the equilibrium point of eco-feedback at which both economic benefit and thermal comfort are reached? That is the main question to be answered. In this paper, the object of study is a house with on-site solar PV and a solar heater, which has been evaluated as a low-energy building. Since the HVAC system is considered the main energy-consuming equipment, the residence was fitted with a 24-hour monitoring system to measure temperature, wind velocity, and energy input and output, with no HVAC system operating, for one month each in summer and winter. The thermal comfort period will be analyzed and confirmed; then the air-conditioner will be operated during the thermal discomfort periods for the following summer and winter months. The same data will be recorded to calculate the average monthly energy consumption required for whole-day thermal comfort. Finally, two analyses will be carried out: 1) the building thermal simulation performed by computer at the design stage will be compared with the temperatures actually measured after construction; 2) the cost of the renewable energy facilities and the power consumption will be converted into a cost-efficiency rate to assess the feasibility of renewable energy input for the residence. The results of the experiment showed that a certain deviation exists between the actual measured data and the simulated data for human thermal comfort, especially in the summer period. Moreover, the investment cost is high for a house in the target city, Guilin, with at least 11 years needed for cost recovery. The conclusion is that the eco-feedback of a low-energy residence should never consider only its net energy value; cost efficiency is also the critical factor in making renewable energy acceptable to the public. Keywords: cost efficiency, eco-feedback, low energy residence, thermal comfort
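The cost-efficiency rate mentioned above is, in its simplest form, a payback calculation; the sketch below illustrates this with entirely hypothetical figures (the equipment cost and annual savings are not values from this study):

```python
# Minimal sketch with made-up numbers: simple payback estimate of the kind
# implied by the "at least 11 years of cost recovery" statement,
# payback_years = extra capital cost / annual energy-cost savings.

def simple_payback(capital_cost: float, annual_savings: float) -> float:
    """Years needed for cumulative savings to cover the up-front investment."""
    if annual_savings <= 0:
        raise ValueError("annual savings must be positive")
    return capital_cost / annual_savings

if __name__ == "__main__":
    pv_and_heater_cost = 55_000.0   # hypothetical equipment cost (CNY)
    yearly_savings = 4_800.0        # hypothetical avoided electricity cost (CNY/year)
    print(f"payback ~ {simple_payback(pv_and_heater_cost, yearly_savings):.1f} years")
```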
Procedia PDF Downloads 257
484 Effect of Modulation Factors on Tomotherapy Plans and Their Quality Analysis
Authors: Asawari Alok Pawaskar
Abstract:
This study was aimed at investigating the discrepancies observed for helical tomotherapy plans in quality assurance (QA) done with the IBA Matrix. A selection of tomotherapy plans that initially failed the Matrix QA process was chosen for this investigation. These plans failed the fluence analysis as assessed using gamma criteria (3%, 3 mm). Each of these plans was modified (keeping the planning constraints the same), and the beamlets were rebatched and reoptimized. By increasing and decreasing the modulation factor, the fluence in a circumferential plane, as measured with a diode array, was assessed. A subset of these plans was investigated using varied pitch values. The factors examined for each plan were point doses, fluences, leaf opening times, planned leaf sinograms, and uniformity indices. In order to ensure that the treatment constraints remained the same, the dose-volume histograms (DVHs) of all the modulated plans were compared to the original plan. It was observed that a large increase in the modulation factor did not significantly improve DVH uniformity but reduced the gamma analysis pass rate. It also increased the treatment delivery time by slowing down the gantry rotation speed, which in turn increased the maximum-to-mean non-zero leaf open time ratio. Increasing and decreasing the pitch value did not substantially change the treatment time, but the delivery accuracy was adversely affected. This may be due to many other factors, such as the complexity of the treatment plan and the site. Patient sites included in this study were head and neck, breast, and abdomen. The impact of leaf timing inaccuracies on plans was greater at higher modulation factors. Point-dose measurements were seen to be less susceptible to changes in pitch and modulation factors. An initial modulation factor given to the optimizer such that the TPS-generated 'actual' modulation factor fell within the range of 1.4 to 2.5 resulted in an improved deliverable plan. Keywords: dose volume histogram, modulation factor, IBA Matrix, tomotherapy
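For readers unfamiliar with the gamma criteria quoted above, the sketch below implements a simplified one-dimensional global gamma index with 3%/3 mm tolerances; it is an illustration of the concept, not the IBA analysis software, and the dose profiles are made-up placeholders:

```python
# Minimal sketch: simplified 1-D global gamma index (dose difference normalised
# to the maximum reference dose; distance-to-agreement in mm).
import numpy as np

def gamma_1d(ref_dose, eval_dose, positions, dose_crit=0.03, dta_mm=3.0):
    ref_dose = np.asarray(ref_dose, float)
    eval_dose = np.asarray(eval_dose, float)
    positions = np.asarray(positions, float)
    norm = dose_crit * ref_dose.max()
    gammas = np.empty_like(ref_dose)
    for i, (x_r, d_r) in enumerate(zip(positions, ref_dose)):
        dd = (eval_dose - d_r) / norm        # dose-difference term
        dx = (positions - x_r) / dta_mm      # distance-to-agreement term
        gammas[i] = np.sqrt(dd ** 2 + dx ** 2).min()
    return gammas

if __name__ == "__main__":
    x = np.arange(0.0, 100.0, 1.0)                       # positions in mm
    reference = 2.0 * np.exp(-((x - 50.0) / 20.0) ** 2)  # placeholder profile
    measured = np.roll(reference, 1) * 1.02              # 1 mm shift + 2% offset
    g = gamma_1d(reference, measured, x)
    print(f"gamma pass rate (gamma <= 1): {100.0 * np.mean(g <= 1.0):.1f}%")
```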
Procedia PDF Downloads 178
483 Simulation Study on Effects of Surfactant Properties on Surfactant Enhanced Oil Recovery from Fractured Reservoirs
Authors: Xiaoqian Cheng, Jon Kleppe, Ole Torsaeter
Abstract:
One objective of this work is to analyze the effects of surfactant properties (viscosity, concentration, and adsorption) on surfactant enhanced oil recovery at the laboratory scale. The other objective is to obtain the functional relationships between surfactant properties and the ultimate oil recovery and oil recovery rate. A core is cut into two parts through the middle to imitate a matrix with a horizontal fracture. An injector and a producer are placed at the left and right sides of the fracture, respectively. The middle slice of the core, whose size is 4 cm x 0.1 cm x 4.1 cm, is used as the model in this paper, and the space of the fracture in the middle is 0.1 cm. The original properties of the matrix, brine, and oil in the base case are from the Ekofisk Field. The properties of the surfactant are from the literature. Eclipse is used as the simulator. The results are as follows: 1) The viscosity of the surfactant solution has a positive linear relationship with the surfactant oil recovery time, and the relationship between viscosity and oil production rate is an inverse function. The viscosity of the surfactant solution has no obvious effect on the ultimate oil recovery. Since most surfactants have little effect on the viscosity of brine, the viscosity of the surfactant solution is not a key parameter in surfactant screening for surfactant flooding in fractured reservoirs. 2) An increase in surfactant concentration results in a decrease in the oil recovery rate and an increase in the ultimate oil recovery; however, no simple functions could describe these relationships. An economic study should be conducted because of the prices of surfactant and oil. 3) In the study of surfactant adsorption, it is assumed that the matrix wettability is changed to water-wet when surfactant adsorption is at its maximum in all cases, and the ratio of surfactant adsorption to surfactant concentration (Cads/Csurf) is used to estimate the functional relationships. The results show that the relationship between ultimate oil recovery and Cads/Csurf is a logarithmic function, while the oil production rate has a positive linear relationship with exp(Cads/Csurf). The work here could be used as a reference for surfactant screening in surfactant enhanced oil recovery from fractured reservoirs, and the functional relationships between surfactant properties and the oil recovery rate and ultimate oil recovery help to improve upscaling methods. Keywords: fractured reservoirs, surfactant adsorption, surfactant concentration, surfactant EOR, surfactant viscosity
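The functional relationships reported above can be recovered from simulation output by simple curve fitting; the sketch below shows this with synthetic data (the Cads/Csurf values, recovery figures, and fitted coefficients are placeholders, not results from this study):

```python
# Minimal sketch with synthetic data: fit ultimate recovery as a logarithmic
# function of Cads/Csurf and production rate as a linear function of
# exp(Cads/Csurf), using scipy.optimize.curve_fit.
import numpy as np
from scipy.optimize import curve_fit

def log_model(ratio, a, b):
    return a + b * np.log(ratio)

def exp_linear_model(ratio, c, d):
    return c + d * np.exp(ratio)

ratio = np.linspace(0.1, 1.5, 15)                 # hypothetical Cads/Csurf values
rng = np.random.default_rng(4)
recovery = 0.55 + 0.08 * np.log(ratio) + rng.normal(0, 0.005, ratio.size)
rate = 1.2 + 0.6 * np.exp(ratio) + rng.normal(0, 0.05, ratio.size)

p_log, _ = curve_fit(log_model, ratio, recovery)
p_exp, _ = curve_fit(exp_linear_model, ratio, rate)
print("ultimate recovery fit (a, b):", np.round(p_log, 3))
print("production rate fit   (c, d):", np.round(p_exp, 3))
```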
Procedia PDF Downloads 174
482 Waste Derived from Refinery and Petrochemical Plants Activities: Processing of Oil Sludge through Thermal Desorption
Authors: Anna Bohers, Emília Hroncová, Juraj Ladomerský
Abstract:
Oil sludge, whose main characteristic is high acidity, is a waste product generated by the operation of refinery and petrochemical plants. A former refinery and petrochemical plant, Petrochema Dubová, is present in Slovakia as well. Its activity was to process crude oil through sulfonation and adsorption technology for the production of lubricating and special oils, synthetic detergents, and special white oils for cosmetic and medical purposes. Seventy years ago, in the period when this historical acid sludge burden was created, production took preference over environmental awareness. That is the reason why, as in many countries, a historical environmental burden is still present in Slovakia: 229 211 m3 of oil sludge in the middle of the Nízke Tatry mountain range National Park. None of the treatment methods tried, biological or non-biological, proved suitable for processing or recovery, owing to several factors such as strong aggressiveness and difficulty of handling due to its sludgy, liquid state. As a potential solution, incineration was also tested, but it was not proven to be a suitable method, as the concentration of SO2 in the combustion gases was too high, and it was not possible to decrease it below the acceptable value of 2000 mg.mn-3. That is the reason why the operation of the incineration plant was terminated, and the acid sludge landfills remain to this day. The objective of this paper is to present a new possibility for the processing and valorization of the acid sludgy waste. The processing of the oil sludge was performed through an effective separation, thermal desorption technology, through which it is possible to split the sludgy material into the matrix (soil, sediments) and the organic contaminants. In order to boost the efficiency of the processing of acid sludge through thermal desorption, the work will present the possible application of an original technology, the Method of Blowing Decomposition, for recovering the organic matter into technological lubricating oil. Keywords: hazardous waste, oil sludge, remediation, thermal desorption
Procedia PDF Downloads 200
481 Impact of Treatment of Fragility Fractures Due to Osteoporosis as an Economic Burden Worldwide: A Systematic Review
Authors: Fabiha Tanzeem
Abstract:
BACKGROUND: Osteoporosis is a skeletal disease associated with a reduction in bone mass and deterioration of bone microstructure and tissue. Fragility fracture is the most significant complication of osteoporosis. The increasing prevalence of fragility fractures presents a growing burden on the global economy. There is a rapidly evolving need to improve awareness of the costs associated with these types of fractures and to review current policies and practices for the prevention and management of the disease. This systematic review identifies and describes the direct and indirect costs associated with osteoporotic fragility fractures from a global perspective, based on the included studies. The review also determines whether the costs required for the treatment of fragility fractures due to osteoporosis impose an economic burden on the global healthcare system. METHODS: Four major databases were systematically searched for English-language studies of the direct and indirect costs of osteoporotic fragility fractures. PubMed, Cochrane Library, Embase and Google Scholar were searched for suitable articles published between 1990 and July 2020. RESULTS: The original search yielded 1166 papers; from these, 27 articles were selected for this review according to the inclusion and exclusion criteria. In the 27 studies, the highest direct costs were associated with the treatment of pelvic fractures, with the majority of the expenditure due to hospitalization and surgical treatment. Most of the included articles were from developed countries. CONCLUSION: This review indicates the significance of the global economic burden of osteoporosis, although more research is needed in developing countries. Direct costs were the main reported expenditure for the treatment of fragility fractures in this review. The healthcare costs incurred globally could be significantly reduced by implementing measures to effectively prevent the disease. Further research is required on raising awareness in children and adults by improving the quality of the information available, and on standardising policies and the planning of services.
Keywords: systematic review, osteoporosis, cost of illness
Procedia PDF Downloads 169
480 Non-State Actors and Their Liabilities in International Armed Conflicts
Authors: Shivam Dwivedi, Saumya Kapoor
Abstract:
The Israeli Supreme Court, in Public Committee against Torture in Israel v. Government of Israel, observed the presence of non-state actors in cross-border terrorist activities, thereby making the role of non-state actors in terrorism a central question within the scope of International Humanitarian Law. Non-state actors and their role in a conflict have also been addressed in the Tadic case, decided by the International Criminal Tribunal for the former Yugoslavia. However, there are still lacunae in International Humanitarian Law when it comes to determining the nature of a conflict, especially when non-state groups act within the territory of various states, for example, the Taliban in Afghanistan or the groups operating in Ukraine and Georgia. Thus, the objective of this paper is to examine the ways by which non-state actors, particularly terrorist organizations, could be brought under the ambit of Additional Protocol I. Additional Protocol I is a 1977 amendment protocol to the Geneva Conventions relating to the protection of victims of international conflicts, which outlaws indiscriminate attacks on civilian populations, forbids the conscription of children and preserves various other human rights during war. In general, Additional Protocol I reaffirms the provisions of the original four Geneva Conventions. Since the provisions of Additional Protocol I apply only to International Armed Conflicts, the answer to the problem should lie in including the scope for ‘transnational armed conflict’ in the existing definition of ‘International Armed Conflict’ within Common Article 2 of the Geneva Conventions. This would broaden the applicability of the provisions to non-state groups and render an international character to such conflicts. Also, the status of non-state groups operating, or appearing to operate, should be determined by the test laid down by the International Court of Justice in the Nicaragua case, rather than under the Tadic case decided by the International Criminal Tribunal for the former Yugoslavia, in order to provide a comprehensive system to deal with such groups. The result of the above proposal, therefore, would enhance the scope of the application of International Humanitarian Law to non-state groups and individuals.
Keywords: Geneva Conventions, International Armed Conflict, International Humanitarian Law, non-state actors
Procedia PDF Downloads 379
479 Assessing an Instrument Usability: Response Interpolation and Scale Sensitivity
Authors: Betsy Ng, Seng Chee Tan, Choon Lang Quek, Peter Looker, Jaime Koh
Abstract:
The purpose of the present study was to determine which particular scale rating stands out for an instrument designed to assess student perceptions of various learning environments, namely face-to-face, online and blended. The original instrument used 5-point Likert items (1 = strongly disagree and 5 = strongly agree). Alternate versions were modified with a 6-point Likert scale and a bar scale rating. Participants were undergraduates at a local university who were involved in usability testing of the instrument in an electronic setting. They were presented with the 5-point, 6-point and percentage-bar (100-point) scale ratings in response to their perceptions of learning environments. The 5-point and 6-point Likert scales were presented in the form of radio button controls for each number, while the percentage-bar scale was presented with a sliding selection. Among these response formats, the 6-point Likert scale emerged as the best overall. When participants were confronted with the 5-point items, they tended to choose either 3 or 4, suggesting that data loss could occur due to the insensitivity of the instrument. The insensitivity of the instrument could be due to the discrete options, as evidenced by response interpolation. To avoid the constraint of discrete options, the percentage-bar scale rating was tested, but the participant responses were not well interpolated. The bar scale might have allowed a variety of responses without the constraint of a set of categorical options, but it seemed to reflect a lack of perceived and objective accuracy. The 6-point Likert scale was more likely to reflect a respondent’s perceived and objective accuracy as well as higher sensitivity. This finding supported the conclusion that 6-point Likert items provide a more accurate measure of a participant’s evaluation, whereas the 5-point and bar scale ratings might not accurately measure participants’ responses. This study highlighted the importance of the respondent’s perception of accuracy, the respondent’s true evaluation, and the scale’s ease of use. Implications and limitations of this study were also discussed.
Keywords: usability, interpolation, sensitivity, Likert scales, accuracy
Procedia PDF Downloads 406
478 Adolescent Obesity Leading to Adulthood Cardiovascular Diseases among Punjabi Population
Authors: Manpreet Kaur, Badaruddoza, Sandeep Kaur Brar
Abstract:
The increasing prevalence of adolescent obesity is one of the major causes of hypertension in adulthood. Various statistical methods have been applied to examine the performance of anthropometric indices for the identification of an adverse cardiovascular risk profile. The present work was undertaken to determine the significant traditional risk factors through principal component factor analysis (PCFA) among population-based Punjabi adolescents aged 10-18 years. Data were collected from adolescent children at different schools situated in urban areas of Punjab, India. Principal component factor analysis (PCFA) was applied to extract orthogonal components from the anthropometric and physiometric variables. Associations between components were explained by factor loadings. The PCFA extracted four factors, which explained 84.21%, 84.06% and 83.15% of the total variance of the 14 original quantitative traits among boys, girls and combined subjects, respectively. Factor 1 has high loadings of the traits that reflect adiposity, such as waist circumference, BMI and skinfolds, in both sexes. Waist circumference and body mass index are indicators of abdominal obesity, which increases the risk of cardiovascular disease, and the loadings of these two traits were highest in adolescent girls (WC = 0.924; BMI = 0.905). Therefore, factor 1 is a strong indicator of atherosclerosis risk in adolescents. Factor 2 is predominantly loaded with blood pressure and related traits (SBP, DBP, MBP and pulse rate), reflecting the risk of essential hypertension in adolescent girls and the combined subjects, whereas in boys factor 2 is loaded with obesity-related traits (weight and hip circumference). Comparably, factor 3 is loaded with blood pressures in boys and with height and WHR in girls, while factor 4 contains a high loading of pulse pressure among boys, girls and the combined group of adolescents.
Keywords: adolescent obesity, CVD, hypertension, Punjabi population
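For readers unfamiliar with the method, the sketch below shows how a four-factor, PCFA-style extraction with a varimax rotation can be reproduced in scikit-learn; the trait names and the randomly generated data are hypothetical placeholders, not the study's measurements.

```python
# A minimal sketch of a PCFA-style extraction; trait names and random data are
# hypothetical placeholders, not the study's measurements.
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA, FactorAnalysis

rng = np.random.default_rng(0)
traits = ["WC", "BMI", "skinfold_triceps", "skinfold_subscapular", "weight",
          "hip_circumference", "SBP", "DBP", "MBP", "pulse_rate", "height",
          "WHR", "pulse_pressure", "waist_height_ratio"]  # 14 illustrative traits
X = pd.DataFrame(rng.normal(size=(300, len(traits))), columns=traits)

Z = StandardScaler().fit_transform(X)     # standardize traits before extraction
pca = PCA(n_components=4).fit(Z)          # principal components
print("total variance explained:", pca.explained_variance_ratio_.sum())

# Orthogonal (varimax-rotated) factor loadings; a high absolute loading marks the
# traits that drive a factor, e.g. adiposity traits loading on Factor 1 above.
fa = FactorAnalysis(n_components=4, rotation="varimax").fit(Z)
loadings = pd.DataFrame(fa.components_.T, index=traits,
                        columns=[f"Factor{i + 1}" for i in range(4)])
print(loadings.round(2))
```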
Procedia PDF Downloads 372
477 Argumentation Frameworks and Theories of Judging
Authors: Sonia Anand Knowlton
Abstract:
With the rise of artificial intelligence, computer science is becoming increasingly integrated into virtually every area of life. Of course, the law is no exception. Through argumentation frameworks (AFs), computer scientists have used abstract algebra to structure the legal reasoning process in a way that allows conclusions to be drawn from a formalized system of arguments. In AFs, arguments compete against each other for logical success and are related to one another through a binary attack relation. The prevailing arguments make up the preferred extension of the given argumentation framework, telling us which set of arguments must be accepted from a logical standpoint. There have been several developments of AFs since their original conception in the early 1990s, in efforts to make them more closely aligned with the human reasoning process. Generally, these developments have sought to add nuance to the factors that influence the logical success of competing arguments (e.g., giving an argument more logical strength based on the underlying value it promotes). The most cogent development was the Extended Argumentation Framework (EAF), in which attacks can themselves be attacked by other arguments, and the promotion of different competing values can be formalized within the system. This article applies the logical structure of EAFs to current theoretical understandings of judicial reasoning, contributing simultaneously to theories of judging and to the evolution of AFs. The argument is that the main limitation of EAFs, when applied to judicial reasoning, is that they require judges to assign values to different arguments themselves and then to order these values lexically in order to determine the given framework’s preferred extension. Drawing on John Rawls’ Theory of Justice, the examination that follows asks whether values are lexically orderable and commensurable to this extent. The analysis then suggests a potential extension of the EAF system with an approach that formalizes different “planes of attack” for competing arguments that promote lexically ordered values. This article concludes with a summary of how these insights contribute to theories of judging and of legal reasoning more broadly, specifically in indeterminate cases where judges must turn to value-based approaches.
Keywords: computer science, mathematics, law, legal theory, judging
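To make the abstract machinery concrete, here is a minimal sketch of a plain Dung-style argumentation framework and a brute-force computation of its preferred extensions (maximal admissible sets). The three arguments and the attack relation are invented for illustration, and the sketch deliberately omits the value orderings and attacks-on-attacks of the EAF discussed above.

```python
# A minimal sketch of a plain Dung-style argumentation framework (not the EAF
# value-ordering machinery discussed above): brute-force preferred extensions.
from itertools import combinations

arguments = {"A", "B", "C"}                 # hypothetical toy framework
attacks = {("A", "B"), ("B", "C")}          # A attacks B, B attacks C

def conflict_free(s):
    # no member of s attacks another member of s
    return not any((x, y) in attacks for x in s for y in s)

def defends(s, arg):
    # every attacker of arg is counter-attacked by some member of s
    attackers = {x for (x, y) in attacks if y == arg}
    return all(any((z, x) in attacks for z in s) for x in attackers)

def admissible(s):
    return conflict_free(s) and all(defends(s, a) for a in s)

admissible_sets = [set(c) for r in range(len(arguments) + 1)
                   for c in combinations(sorted(arguments), r)
                   if admissible(set(c))]
# preferred extensions = admissible sets that are maximal w.r.t. set inclusion
preferred = [s for s in admissible_sets
             if not any(s < t for t in admissible_sets)]
print(preferred)  # expected: [{'A', 'C'}] -- A is unattacked and defends C against B
```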
Procedia PDF Downloads 60