Search results for: Trend Detection
680 Authorship Attribution Using Sociolinguistic Profiling When Considering Civil and Criminal Cases
Authors: Diana A. Sokolova
Abstract:
This article is devoted to one of the possibilities for identifying the author of an oral or written text: sociolinguistic profiling. Sociolinguistic profiling is utilized as a forensic linguistics technique to identify individuals through language patterns, particularly in criminal cases. It examines how social factors influence language use. This study aims to showcase the significance of linguistic profiling for attributing authorship of texts and emphasizes the necessity for its continuous enhancement while considering its strengths and weaknesses. The study employs semantic-syntactic, lexical-semantic, linguopragmatic, logical, presupposition, authorization, and content analysis methods to investigate linguistic profiling. The research highlights the relevance of sociolinguistic profiling in authorship attribution and underscores the importance of ongoing refinement of the technique, considering its limitations. This study emphasizes the practical application of linguistic profiling in legal settings and underscores the impact of social factors on language use, contributing to the field of forensic linguistics. Data collection involves gathering oral and written texts from criminal and civil court cases to analyze language patterns for authorship attribution. The collected data are analyzed using various linguistic analysis methods to identify individual characteristics and patterns that can aid in authorship attribution. The study addresses the effectiveness of sociolinguistic profiling in identifying authors of texts and explores the impact of social factors on language use in legal contexts. In spite of its advantages, challenges in linguistic profiling have spurred debates and controversies in academic circles, legal environments, and the public sphere. This research therefore highlights the significance of sociolinguistic profiling in authorship attribution and emphasizes the need for further development of the method, considering its strengths and weaknesses.
Keywords: authorship attribution, detection of identifying, dialect, features, forensic linguistics, social influence, sociolinguistics, unique speech characteristics
Procedia PDF Downloads 366
679 Submarine Topography and Beach Survey of Gang-Neung Port in South Korea, Using Multi-Beam Echo Sounder and Shipborne Mobile Light Detection and Ranging System
Authors: Won Hyuck Kim, Chang Hwan Kim, Hyun Wook Kim, Myoung Hoon Lee, Chan Hong Park, Hyeon Yeong Park
Abstract:
We conducted a submarine topography and beach survey between December 2015 and January 2016 using a multi-beam echo sounder, the EM3001 (Kongsberg Corporation), and a Shipborne Mobile LiDAR System. Our survey area was Anmok Beach in Gangneung, South Korea. We built the Shipborne Mobile LiDAR System for these surveys. The Shipborne Mobile LiDAR System includes a LiDAR (RIEGL LMS-420i), an IMU (Inertial Measurement Unit, MAGUS Inertial+) and RTK-GNSS (Real Time Kinematic Global Navigation Satellite System, Leica GS15 GS25) for beach measurement, motion compensation of the LiDAR, and precise positioning. The Shipborne Mobile LiDAR System scans the beach from a moving vessel using a laser. We mounted the Shipborne Mobile LiDAR System on the top of the vessel. Before the beach survey, we conducted an IMU calibration survey of eight circles to stabilize the IMU heading. This survey should be carried out as close as possible to the beach, but our vessel could not come closer to the beach because of objects in the water. At the same time, we conducted a submarine topography survey using the multi-beam echo sounder EM3001. A multi-beam echo sounder is a device that observes and records submarine topography using sound waves. We mounted the multi-beam echo sounder on the left side of the vessel. The vessel was equipped with a motion sensor, DGNSS (Differential Global Navigation Satellite System), and an SV (sound velocity) sensor for the vessel's motion compensation, the vessel's position, and the sound velocity of seawater. The Shipborne Mobile LiDAR System reduced the time consumed by the beach survey compared with previous conventional methods of beach survey.
Keywords: Anmok, beach survey, Shipborne Mobile LiDAR System, submarine topography
Procedia PDF Downloads 429
678 The Impact of Encapsulated Raspberry Juice on the Surface Colour of Enriched White Chocolate
Authors: Ivana Loncarevic, Biljana Pajin, Jovana Petrovic, Aleksandar Fistes, Vesna Tumbas Saponjac, Danica Zaric
Abstract:
Chocolate is a complex rheological system usually defined as a suspension consisting of non-fat particles dispersed in cocoa butter as a continuous fat phase. Dark chocolate possesses polyphenols as major constituents whose dietary consumption has been associated with beneficial effects. Milk chocolate is formulated with a lower percentage of cocoa bean liquor than dark chocolate and often contains lower amounts of polyphenols, while in white chocolate the fat-free cocoa solids are left out completely. Following the current trend of development of functional foods, there is an idea to create enriched white chocolate with the addition of encapsulated bioactive compounds from berry fruits. The aim of this study was to examine the surface colour of white chocolate enriched with the addition of 6, 8, and 10% of raspberry juice encapsulated in maltodextrins, in order to preserve the stability, bioactivity, and bioavailability of the active ingredients. The surface colour of the samples was measured with a MINOLTA Chroma Meter CR-400 (Minolta Co., Ltd., Osaka, Japan) using D65 lighting, a 2º standard observer angle, and an 8-mm aperture in the measuring head. The following CIELab colour coordinates were determined: L* – lightness, a* – redness to greenness, and b* – yellowness to blueness. The addition of the raspberry encapsulate led to the creation of a new type of enriched chocolate. The raspberry encapsulate changed the lightness (L*), a* (red tone), and b* (yellow tone) values measured on the surface of the enriched chocolate in accordance with the applied concentrations. White chocolate had the significantly (p < 0.05) highest L* (74.6) and b* (20.31) values of all samples, indicating the bright surface of the white chocolate as well as a high share of a yellow tone. At the same time, white chocolate had a negative a* value (-1.00) on its surface, which includes green tones. The raspberry juice encapsulate had the darkest surface, with the significantly (p < 0.05) lowest L* value (42.75), and increasing its concentration in the enriched chocolates decreased their L* values. The chocolate with 6% of encapsulate had a significantly (p < 0.05) higher L* value (60.56) than the enriched chocolates with 8% of encapsulate (53.57) and 10% of encapsulate (51.01). The a* value measured on the surface of white chocolate is negative (-1.00), tending towards green tones. The raspberry juice encapsulate increased the red tone in the enriched chocolates in accordance with the added amounts (23.22, 30.85, and 33.32 in the enriched chocolates with 6, 8, and 10% encapsulated raspberry juice, respectively). The presence of yellow tones in the enriched chocolates decreased significantly (p < 0.05) with the addition of the encapsulate (b* value 5.21), from 10.01 in the enriched chocolate with the minimal amount of raspberry juice encapsulate to 8.91 in the chocolate with the maximum concentration of raspberry juice encapsulate. The addition of encapsulated raspberry juice to white chocolate led to the creation of a new type of enriched chocolate with an attractive colour. The research in this paper was conducted within the project titled ‘Development of innovative chocolate products fortified with bioactive compounds’ (Innovation Fund Project ID 50051).
Keywords: color, encapsulated raspberry juice, polyphenols, white chocolate
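A quick way to see how far the enriched chocolates drift from plain white chocolate is the CIE76 colour difference ΔE*ab computed from the reported L*, a*, b* coordinates. The sketch below is a minimal illustration in Python using values taken from this abstract; the common rule of thumb that ΔE* of roughly 2-3 is already visually noticeable is general colourimetry lore, not a figure from the study.

```python
import math

def delta_e_cie76(lab1, lab2):
    """CIE76 colour difference between two (L*, a*, b*) triples."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))

# Surface colours reported in the abstract (L*, a*, b*)
white_chocolate = (74.6, -1.00, 20.31)
enriched_10pct  = (51.01, 33.32, 8.91)   # 10% encapsulated raspberry juice

print(delta_e_cie76(white_chocolate, enriched_10pct))  # ~43, far above the ~2-3 visibility threshold
```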
Procedia PDF Downloads 183
677 Seashore Debris Detection System Using Deep Learning and Histogram of Gradients-Extractor Based Instance Segmentation Model
Authors: Anshika Kankane, Dongshik Kang
Abstract:
Marine debris has a significant influence on coastal environments, damaging biodiversity and causing loss and damage to the marine and ocean sectors. A functional, cost-effective, and automatic approach is proposed to address this problem. Computer vision combined with a deep learning-based model is proposed to identify and categorize marine debris of seven kinds at different beach locations in Japan. This research compares state-of-the-art deep learning models with a suggested model architecture that is utilized as a feature extractor for debris categorization. The model is proposed to detect seven categories of litter using a manually constructed debris dataset, with the help of Mask R-CNN for instance segmentation and a shape matching network called HOGShape, so that the detected debris can be cleaned up in time by clean-up organizations using the warning notifications of the system. The manually constructed dataset for this system was created by annotating images taken by a fixed KaKaXi camera using the CVAT annotation tool with seven kinds of category labels. A HOG feature extractor pre-trained with LIBSVM is used along with multiple template matching between HOG maps of images and HOG maps of templates to improve the predicted masked images obtained via Mask R-CNN training. This system intends to alert clean-up organizations in a timely manner with warning notifications using live recorded beach debris data. The suggested network improves the misclassified debris masks of debris objects with different illuminations, shapes, and viewpoints, and of litter with occlusions and vague visibility.
Keywords: computer vision, debris, deep learning, fixed live camera images, histogram of gradients feature extractor, instance segmentation, manually annotated dataset, multiple template matching
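The HOG-plus-template-matching idea described above can be prototyped with off-the-shelf libraries. The sketch below is a hypothetical illustration, not the authors' HOGShape network: it computes a dense HOG map for a scene and for a debris template with scikit-image and slides the template's map over the scene's with OpenCV's normalized cross-correlation. Image paths and the matching threshold are placeholders.

```python
import cv2
import numpy as np
from skimage.feature import hog

def hog_map(gray):
    """Return the HOG visualization image, used here as a dense gradient map."""
    _, vis = hog(gray, orientations=9, pixels_per_cell=(8, 8),
                 cells_per_block=(2, 2), visualize=True)
    return vis.astype(np.float32)

def match_debris(scene_gray, template_gray, threshold=0.6):
    """Slide the template's HOG map over the scene's HOG map."""
    scene_h, tmpl_h = hog_map(scene_gray), hog_map(template_gray)
    scores = cv2.matchTemplate(scene_h, tmpl_h, cv2.TM_CCOEFF_NORMED)
    ys, xs = np.where(scores >= threshold)   # candidate debris locations
    return list(zip(xs, ys))

# Usage (paths are placeholders):
# scene = cv2.imread("beach_frame.jpg", cv2.IMREAD_GRAYSCALE)
# tmpl  = cv2.imread("bottle_template.jpg", cv2.IMREAD_GRAYSCALE)
# print(match_debris(scene, tmpl))
```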
Procedia PDF Downloads 106
676 Perception of the End of a Same Sex Relationship and Preparation towards It: A Qualitative Research about Anticipation, Coping and Conflict Management against the Backdrop of Partial Legal Recognition
Authors: Merav Meiron-Goren, Orna Braun-Lewensohn, Tal Litvak-Hirsh
Abstract:
In recent years, there has been an increasing tendency towards separation and divorce in relationships. Nevertheless, many couples in a first marriage do not anticipate this as a probable possibility and do not make any preparation for it. Same sex couples establishing a family encounter a much more complicated situation than do heterosexual couples. Although there is a trend towards legal recognition of same sex marriage, many countries, including Israel, do not recognize it. The absence of legal recognition or the existence of partial recognition creates complexity for these couples. They have to fight for their right to establish a family, like the recognition of the biological child of a woman, as a child of her woman spouse too, or the option of surrogacy for a male couple who want children, and more. The lack of legal recognition is burden on the lives of these couples. In the absence of clear norms regarding the conduct of the family unit, the couples must define for themselves the family structure, and deal with everyday dilemmas that lack institutional solutions. This may increase the friction between the two couple members, and it is one of the factors that make it difficult for them to maintain the relationship. This complexity exists, perhaps even more so, in separation. The end of relationship is often accompanied by a deep crisis, causing pain and stress. In most cases, there are also other conflicts that must be settled. These are more complicated when rights are in doubt or do not exist at all. Complex issues for separating same sex couples may include matters of property, recognition of parenthood, and care and support for the children. The significance of the study is based on the fact that same sex relationships are becoming more and more widespread, and are an integral part of the society. Even so, there is still an absence of research focusing on such relationships and their ending. The objective of the study is to research the perceptions of same sex couples regarding the possibility of separation, preparing for it, conflict management and resolving disputes through the separation process. It is also important to understand the point of view of couples that have gone through separation, how they coped with the emotional and practical difficulties involved in the separation process. The doctoral research will use a qualitative research method in a phenomenological approach, based on semi-structured in-depth interviews. The interviewees will be divided into three groups- at the beginning of a relationship, during the separation crisis and after separation, with a time perspective, with about 10 couples from each group. The main theoretical model serving as the basis of the study will be the Lazarus and Folkman theory of coping with stress. This model deals with the coping process, including cognitive appraisal of an experience as stressful, appraisal of the coping resources, and using strategies of coping. The strategies are divided into two main groups, emotion-focused forms of coping and problem-focused forms of coping.Keywords: conflict management, coping, legal recognition, same-sex relationship, separation
Procedia PDF Downloads 142
675 Forensic Medical Capacities of Research of Saliva Stains on Physical Evidence after Washing
Authors: Saule Mussabekova
Abstract:
Recent advances in genetics have sharply increased the capacity to form reliable evidence in forensic examinations. Thus, traces of biological origin are important sources of information about a crime. Currently, sexual offenses have increased around the world, and among them are those in which the criminals use various detergents to remove traces of their crime. A feature of modern synthetic detergents is the presence of biological additives: enzymes. Enzymes purposefully destroy stains of biological origin. To study the nature and extent of the impact of modern washing powders on saliva stains on physical evidence, specially prepared test specimens of different types of tissue to which saliva had been applied were examined. Materials and Methods: Washing machines from well-known household appliance manufacturers, with different production characteristics, and advertised brands of washing powder were used for test washing. Over 3,500 experimental samples were tested. After washing, the traces of saliva were identified using modern forensic medical research methods. Results: The influence of different washing programs, types of washing machines, and washing powders on establishing and identifying saliva stains on physical evidence during washing was tested and its dependence revealed. The results of experimental and practical expert studies have shown that in most cases it is not possible to draw conclusions in the identification of saliva traces on physical evidence after washing. This is a consequence of the effect of biological additives and other additional factors on traces of saliva during washing. Conclusions: On the basis of the results of the study, the feasibility of examining saliva traces on physical evidence after washing is established. The use of modern molecular genetic methods makes it possible to partially solve the problems arising in the study of washed evidence. Additional study of physical evidence after washing facilitates the detection and investigation of sexual offenses against women and children.
Keywords: saliva research, modern synthetic detergents, laundry detergents, forensic medicine
Procedia PDF Downloads 216
674 SLAPP Suits: An Encroachment On Human Rights Of A Global Proportion And What Can Be Done About It
Authors: Laura Lee Prather
Abstract:
A functioning democracy is defined by various characteristics, including freedom of speech, equality, human rights, the rule of law, and many more. Lawsuits brought to intimidate speakers, drain the resources of community members, and silence journalists and others who speak out in support of matters of public concern are an abuse of the legal system and an encroachment on human rights. The impact can have a broad chilling effect, deterring others from speaking out against abuse. This article aims to suggest ways to address this form of judicial harassment. In 1988, University of Denver professors George Pring and Penelope Canan coined the term “SLAPP” when they brought to light a troubling trend of people getting sued for speaking out about matters of public concern. Their research demonstrated that thousands of people engaging in public debate and citizen involvement in government have been and will be the targets of multi-million-dollar lawsuits for the purpose of silencing them and dissuading others from speaking out in the future. SLAPP actions chill information and harm the public at large. Professors Pring and Canan catalogued a tsunami of SLAPP suits filed by public officials, real estate developers, and businessmen against environmentalists, consumers, women’s rights advocates, and more. SLAPPs are now seen in every region of the world as a means to intimidate people into silence and are viewed as a global affront to human rights. Anti-SLAPP laws are the antidote to SLAPP suits; while commonplace in the United States, they are only recently being considered in the EU and the UK. This researcher studied more than thirty years of Anti-SLAPP legislative policy in the U.S., the call for evidence and the resultant EU Commission’s Anti-SLAPP Directive and Member States Recommendations, and the call for evidence by the UK Ministry of Justice, its response, and the Model Anti-SLAPP law presented to the UK Parliament, and conducted dozens of interviews with NGOs throughout the EU, UK, and US to identify varying approaches to SLAPP lawsuits, public policy, and support for SLAPP victims. This paper identifies best practices taken from the US, EU, and UK that can be implemented globally to help combat SLAPPs by: (1) raising awareness about SLAPPs, how to identify them, and how to recognize habitual abusers of the court system; (2) engaging governments in the policy discussion on combatting SLAPPs and supporting SLAPP victims; (3) educating judges in recognizing SLAPPs and providing general training on encroachments on human rights; and (4) holding accountable lawyers who ravage the rule of law.
Keywords: Anti-SLAPP Laws and Policy, Comparative media law and policy, EU Anti-SLAPP Directive and Member Recommendations, International Human Rights of Freedom of Expression
Procedia PDF Downloads 68
673 Highly Responsive p-NiO/n-rGO Heterojunction Based Self-Powered UV Photodetectors
Authors: P. Joshna, Souvik Kundu
Abstract:
Detection of ultraviolet (UV) radiation is very important as it has exhibited a profound influence on humankind and other existences, including military equipment. In this work, a self-powered UV photodetector was reported based on oxides heterojunctions. The thin films of p-type nickel oxide (NiO) and n-type reduced graphene oxide (rGO) were used for the formation of p-n heterojunction. Low-Cost and low-temperature chemical synthesis was utilized to prepare the oxides, and the spin coating technique was employed to deposit those onto indium doped tin oxide (ITO) coated glass substrates. The top electrode platinum was deposited utilizing physical vapor evaporation technique. NiO offers strong UV absorption with high hole mobility, and rGO prevents the recombination rate by separating electrons out from the photogenerated carriers. Several structural characterizations such as x-ray diffraction, atomic force microscope, scanning electron microscope were used to study the materials crystallinity, microstructures, and surface roughness. On one side, the oxides were found to be polycrystalline in nature, and no secondary phases were present. On the other side, surface roughness was found to be low with no pit holes, which depicts the formation of high-quality oxides thin films. Whereas, x-ray photoelectron spectroscopy was employed to study the chemical compositions and oxidation structures. The electrical characterizations such as current-voltage and current response were also performed on the device to determine the responsivity, detectivity, and external quantum efficiency under dark and UV illumination. This p-n heterojunction device offered faster photoresponse and high on-off ratio under 365 nm UV light illumination of zero bias. The device based on the proposed architecture shows the efficacy of the oxides heterojunction for efficient UV photodetection under zero bias, which opens up a new path towards the development of self-powered photodetector for environment and health monitoring sector.Keywords: chemical synthesis, oxides, photodetectors, spin coating
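For readers wanting to reproduce the figure-of-merit calculations named above, the standard relations are responsivity R = I_ph / P_opt, external quantum efficiency EQE = R·h·c / (q·λ), and specific detectivity D* = R·√A / √(2·q·I_dark) under the common shot-noise-limited assumption. The Python sketch below uses illustrative numbers only; the abstract does not report the measured currents, power, or device area.

```python
import math

H = 6.626e-34   # Planck constant, J*s
C = 3.0e8       # speed of light, m/s
Q = 1.602e-19   # electron charge, C

def responsivity(i_photo, i_dark, p_opt):
    """A/W: net photocurrent divided by incident optical power."""
    return (i_photo - i_dark) / p_opt

def eqe(r, wavelength_m):
    """External quantum efficiency (dimensionless)."""
    return r * H * C / (Q * wavelength_m)

def detectivity(r, area_cm2, i_dark):
    """Jones; assumes dark-current (shot-noise) limited operation."""
    return r * math.sqrt(area_cm2) / math.sqrt(2 * Q * i_dark)

# Illustrative numbers (NOT from the study): 365 nm illumination at zero bias
r = responsivity(i_photo=5e-6, i_dark=5e-8, p_opt=1e-5)
print(r, eqe(r, 365e-9), detectivity(r, area_cm2=0.01, i_dark=5e-8))
```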
Procedia PDF Downloads 123
672 Developing a Roadmap by Integrating of Environmental Indicators with the Nitrogen Footprint in an Agriculture Region, Hualien, Taiwan
Authors: Ming-Chien Su, Yi-Zih Chen, Nien-Hsin Kao, Hideaki Shibata
Abstract:
The major component of the atmosphere is nitrogen, yet atmospheric nitrogen has limited availability for biological use. Human activities have produced different types of nitrogen related compounds such as nitrogen oxides from combustion, nitrogen fertilizers from farming, and the nitrogen compounds from waste and wastewater, all of which have impacted the environment. Many studies have indicated the N-footprint is dominated by food, followed by housing, transportation, and goods and services sectors. To solve the impact issues from agricultural land, nitrogen cycle research is one of the key solutions. The study site is located in Hualien County, Taiwan, a major rice and food production area of Taiwan. Importantly, environmentally friendly farming has been promoted for years, and an environmental indicator system has been established by previous authors based on the concept of resilience capacity index (RCI) and environmental performance index (EPI). Nitrogen management is required for food production, as excess N causes environmental pollution. Therefore it is very important to develop a roadmap of the nitrogen footprint, and to integrate it with environmental indicators. The key focus of the study thus addresses (1) understanding the environmental impact caused by the nitrogen cycle of food products and (2) uncovering the trend of the N-footprint of agricultural products in Hualien, Taiwan. The N-footprint model was applied, which included both crops and energy consumption in the area. All data were adapted from government statistics databases and crosschecked for consistency before modeling. The actions involved with agricultural production were evaluated and analyzed for nitrogen loss to the environment, as well as measuring the impacts to humans and the environment. The results showed that rice makes up the largest share of agricultural production by weight, at 80%. The dominant meat production is pork (52%) and poultry (40%); fish and seafood were at similar levels to pork production. The average per capita food consumption in Taiwan is 2643.38 kcal capita−1 d−1, primarily from rice (430.58 kcal), meats (184.93 kcal) and wheat (ca. 356.44 kcal). The average protein uptake is 87.34 g capita−1 d−1, and 51% is mainly from meat, milk, and eggs. The preliminary results showed that the nitrogen footprint of food production is 34 kg N per capita per year, congruent with the results of Shibata et al. (2014) for Japan. These results provide a better understanding of the nitrogen demand and loss in the environment, and the roadmap can furthermore support the establishment of nitrogen policy and strategy. Additionally, the results serve to develop a roadmap of the nitrogen cycle of an environmentally friendly farming area, thus illuminating the nitrogen demand and loss of such areas.Keywords: agriculture productions, energy consumption, environmental indicator, nitrogen footprint
Procedia PDF Downloads 302
671 Coupling Static Multiple Light Scattering Technique With the Hansen Approach to Optimize Dispersibility and Stability of Particle Dispersions
Authors: Guillaume Lemahieu, Matthias Sentis, Giovanni Brambilla, Gérard Meunier
Abstract:
Static Multiple Light Scattering (SMLS) has been shown to be a straightforward technique for the characterization of colloidal dispersions without dilution, as multiply scattered light in backscattered and transmitted mode is directly related to the concentration and size of scatterers present in the sample. In this view, the use of SMLS for stability measurement of various dispersion types has already been widely described in the literature. Indeed, starting from a homogeneous dispersion, the variation of backscattered or transmitted light can be attributed to destabilization phenomena, such as migration (sedimentation, creaming) or particle size variation (flocculation, aggregation). In a view to investigating more on the dispersibility of colloidal suspensions, an experimental set-up for “at the line” SMLS experiment has been developed to understand the impact of the formulation parameters on particle size and dispersibility. The SMLS experiment is performed with a high acquisition rate (up to 10 measurements per second), without dilution, and under direct agitation. Using such experimental device, SMLS detection can be combined with the Hansen approach to optimize the dispersing and stabilizing properties of TiO₂ particles. It appears that the dispersibility and the stability spheres generated are clearly separated, arguing that lower stability is not necessarily a consequence of poor dispersibility. Beyond this clarification, this combined SMLS-Hansen approach is a major step toward the optimization of dispersibility and stability of colloidal formulations by finding solvents having the best compromise between dispersing and stabilizing properties. Such study can be intended to find better dispersion media, greener and cheaper solvents to optimize particles suspensions, reduce the content of costly stabilizing additives or satisfy product regulatory requirements evolution in various industrial fields using suspensions (paints & inks, coatings, cosmetics, energy).Keywords: dispersibility, stability, Hansen parameters, particles, solvents
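The Hansen side of this combined approach reduces to a distance calculation in (δD, δP, δH) space. The sketch below illustrates the standard Hansen distance Ra and the relative energy difference RED = Ra/R0; the particle-sphere centre, the solvent values, and the interaction radius are placeholders for illustration, not data from the study.

```python
import math

def hansen_distance(solute, solvent):
    """Ra = sqrt(4*(dD1-dD2)^2 + (dP1-dP2)^2 + (dH1-dH2)^2), all in MPa^0.5."""
    dD1, dP1, dH1 = solute
    dD2, dP2, dH2 = solvent
    return math.sqrt(4 * (dD1 - dD2) ** 2 + (dP1 - dP2) ** 2 + (dH1 - dH2) ** 2)

def red(solute, solvent, r0):
    """Relative Energy Difference: values below 1 suggest a good solvent."""
    return hansen_distance(solute, solvent) / r0

# Placeholder Hansen parameters (dD, dP, dH) and interaction radius R0:
particle_sphere = (17.0, 8.0, 9.0)   # hypothetical sphere centre fitted from SMLS screening
acetone         = (15.5, 10.4, 7.0)  # literature-style solvent values, for illustration only
print(red(particle_sphere, acetone, r0=6.0))
```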
Procedia PDF Downloads 109
670 Resting-State Functional Connectivity Analysis Using an Independent Component Approach
Authors: Eric Jacob Bacon, Chaoyang Jin, Dianning He, Shuaishuai Hu, Lanbo Wang, Han Li, Shouliang Qi
Abstract:
Objective: Refractory epilepsy is a complicated type of epilepsy that can be difficult to diagnose. Recent technological advancements have made resting-state functional magnetic resonance imaging (rsfMRI) a vital technique for studying brain activity. However, there is still much to learn about rsfMRI. Investigating rsfMRI connectivity may aid in the detection of abnormal activities. In this paper, we propose studying the functional connectivity of rsfMRI candidates to diagnose epilepsy. Methods: 45 rsfMRI candidates, comprising 26 with refractory epilepsy and 19 healthy controls, were enrolled in this study. A data-driven approach known as independent component analysis (ICA) was used to achieve our goal. First, rsfMRI data from both patients and healthy controls were analyzed using group ICA. The components that were obtained were then spatially sorted to find and select meaningful ones. A two-sample t-test was also used to identify abnormal networks in patients relative to healthy controls. Finally, based on the fractional amplitude of low-frequency fluctuations (fALFF), a chi-square statistical test was used to distinguish the network properties of the patient and healthy control groups. Results: The two-sample t-test analysis yielded abnormal clusters in the default mode network, including the left superior temporal lobe and the left supramarginal gyrus. The right precuneus was found to be abnormal in the dorsal attention network. In addition, the frontal cortex showed an abnormal cluster in the medial temporal gyrus. In contrast, the temporal cortex showed abnormal clusters in the right middle temporal gyrus and the right fronto-operculum gyrus. Finally, the chi-square statistical test was significant, producing a p-value of 0.001 for the analysis. Conclusion: This study offers evidence that investigating rsfMRI connectivity provides an excellent diagnostic option for refractory epilepsy.
Keywords: ICA, RSN, refractory epilepsy, rsfMRI
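As an illustration of the data-driven pipeline described above (not the group-ICA toolbox the authors presumably used), the sketch below runs spatial ICA on a preprocessed time-by-voxel matrix with scikit-learn and then compares a per-subject component metric between groups with a two-sample t-test; all data here are random stand-ins.

```python
import numpy as np
from sklearn.decomposition import FastICA
from scipy import stats

def spatial_ica(data_2d, n_components=20, seed=0):
    """data_2d: (timepoints, voxels) rsfMRI matrix; returns (voxels, components) spatial maps."""
    ica = FastICA(n_components=n_components, random_state=seed, max_iter=1000)
    ica.fit(data_2d)                      # unmix along the time dimension
    return ica.components_.T              # spatial maps, one column per component

def group_difference(metric_patients, metric_controls):
    """Two-sample t-test on a per-subject component metric (e.g. fALFF or map loading)."""
    return stats.ttest_ind(metric_patients, metric_controls, equal_var=False)

# Toy example with random data standing in for preprocessed rsfMRI:
rng = np.random.default_rng(0)
maps = spatial_ica(rng.standard_normal((200, 5000)), n_components=10)
print(maps.shape)  # (5000, 10)
print(group_difference(rng.normal(1.0, 0.2, 26), rng.normal(0.8, 0.2, 19)))
```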
Procedia PDF Downloads 76
669 Attitude and Knowledge of Primary Health Care Physicians and Local Inhabitants about Leishmaniasis and Sandfly in West Alexandria, Egypt
Authors: Randa M. Ali, Naguiba F. Loutfy, Osama M. Awad
Abstract:
Background: Leishmaniasis is a worldwide disease affecting 88 countries; it is estimated that about 350 million people are at risk of leishmaniasis. Overall prevalence is 12 million people, with annual mortality of about 60,000. Annual incidence is 1,500,000 cases of cutaneous leishmaniasis (CL) worldwide and half a million cases of visceral leishmaniasis (VL). Objectives: The objective of this study was to assess primary health care physicians' (PHPs') knowledge and attitude about leishmaniasis and to assess the awareness of local inhabitants about the disease and its vector in four areas in west Alexandria, Egypt. Methods: This study was a cross-sectional survey conducted in four PHC units in west Alexandria. All physicians currently working in these units during the study period were invited to participate in the study; only 20 PHPs completed the questionnaire. 60 local inhabitants were selected randomly from the four areas of the study, 15 from each area. Data were collected through two different specially designed questionnaires. Results: 11 (55%) of the physicians had satisfactory knowledge; they answered more than 9 (60%) questions out of a total of 14 questions about leishmaniasis and sandfly. The second part of the questionnaire concerned the attitude of the primary health care physicians about leishmaniasis: 17 (85%) had a good attitude and 3 (15%) had a poor attitude. The second questionnaire showed that the awareness of local inhabitants about leishmaniasis and the sandfly as a vector of the disease is poor and needs to be corrected. Most of the respondents (90%) had not heard about leishmaniasis, and only 3 (5%) of the interviewed inhabitants said they knew the sandfly and its role in the transmission of leishmaniasis. Conclusions: Knowledge and attitudes of physicians are acceptable. However, there is room for improvement, which could be achieved through formal training courses and distribution of guidelines, in addition to raising the awareness of primary health care physicians about the importance of early detection and notification of cases of leishmaniasis. Moreover, health education for raising awareness of the public regarding the vector and the disease is necessary, because related studies have demonstrated that if inhabitants do not perceive mosquitoes to be responsible for diseases such as malaria, they do not take enough measures to protect themselves against the vector.
Keywords: leishmaniasis, PHP, knowledge, attitude, local inhabitants
Procedia PDF Downloads 449
668 Kernel-Based Double Nearest Proportion Feature Extraction for Hyperspectral Image Classification
Authors: Hung-Sheng Lin, Cheng-Hsuan Li
Abstract:
Over the past few years, kernel-based algorithms have been widely used to extend some linear feature extraction methods, such as principal component analysis (PCA), linear discriminant analysis (LDA), and nonparametric weighted feature extraction (NWFE), to their nonlinear versions, kernel principal component analysis (KPCA), generalized discriminant analysis (GDA), and kernel nonparametric weighted feature extraction (KNWFE), respectively. These nonlinear feature extraction methods can detect nonlinear directions with the largest nonlinear variance or the largest class separability based on the given kernel function. Moreover, they have been applied to improve target detection and image classification for hyperspectral images. The double nearest proportion feature extraction (DNP) can effectively reduce the overlap effect and has good performance in hyperspectral image classification. The DNP structure is an extension of the k-nearest neighbor technique. For each sample, there are two corresponding nearest proportions of samples, the self-class nearest proportion and the other-class nearest proportion. The term “nearest proportion” used here considers both local information and more global information. With these settings, the effect of the overlap between the sample distributions can be reduced. Usually, the maximum likelihood estimator and the related unbiased estimator are not ideal estimators in high-dimensional inference problems, particularly in small data-size situations. Hence, an improved estimator obtained by shrinkage estimation (regularization) is proposed. Based on the DNP structure, LDA is included as a special case. In this paper, the kernel method is applied to extend DNP to kernel-based DNP (KDNP). In addition to the advantages of DNP, KDNP surpasses DNP in the experimental results. According to the experiments on real hyperspectral image data sets, the classification performance of KDNP is better than that of PCA, LDA, NWFE, and their kernel versions, KPCA, GDA, and KNWFE.
Keywords: feature extraction, kernel method, double nearest proportion feature extraction, kernel double nearest feature extraction
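KDNP itself is not available in standard libraries, but the kernel trick it relies on is easy to demonstrate. The sketch below contrasts linear PCA with kernel PCA (one of the baselines mentioned above) and LDA on a toy nonlinearly separable dataset standing in for hyperspectral pixels; it is illustrative only and does not implement the double nearest proportion structure.

```python
from sklearn.datasets import make_circles
from sklearn.decomposition import PCA, KernelPCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# Toy nonlinearly separable data standing in for hyperspectral pixels
X, y = make_circles(n_samples=600, factor=0.3, noise=0.1, random_state=0)

for name, reducer in [("PCA", PCA(n_components=2)),
                      ("KernelPCA (RBF)", KernelPCA(n_components=2, kernel="rbf", gamma=5.0))]:
    Z = reducer.fit_transform(X)
    acc = cross_val_score(KNeighborsClassifier(3), Z, y, cv=5).mean()
    print(f"{name}: mean kNN accuracy = {acc:.3f}")

# LDA as the supervised linear baseline (included as a special case of DNP per the abstract)
print("LDA:", cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5).mean())
```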
Procedia PDF Downloads 344
667 Controlled Growth of Au Hierarchically Ordered Crystals Architectures for Electrochemical Detection of Traces of Molecules
Authors: P. Bauer, K. Mougin, V. Vignal, A. Buch, P. Ponthiaux, D. Faye
Abstract:
Nowadays, noble metallic nanostructures with unique morphology are widely used as new sensors due to their fascinating optical, electronic and catalytic properties. Among various shapes, dendritic nanostructures have attracted much attention because of their large surface-to-volume ratio, high sensitivity and special texture with sharp tips and nanoscale junctions. Several methods have been developed to fabricate those specific structures such as electrodeposition, photochemical way, seed-mediated growth or wet chemical method. The present study deals with a novel approach for a controlled growth pattern-directed organisation of Au flower-like crystals (NFs) deposited onto stainless steel plates to achieve large-scale functional surfaces. This technique consists in the deposition of a soft nanoporous template on which Au NFs are grown by electroplating and seed-mediated method. Size, morphology, and interstructure distance have been controlled by a site selective nucleation process. Dendritic Au nanostructures have appeared as excellent Raman-active candidates due to the presence of very sharp tips of multi-branched Au nanoparticles that leads to a large local field enhancement and a good SERS sensitivity. In addition, these structures have also been used as electrochemical sensors to detect traces of molecules present in a solution. A correlation of the number of active sites on the surface and the current charge by both colorimetric method and cyclic voltammetry of gold structures have allowed a calibration of the system. This device represents a first step for the fabrication of MEMs platform that could ultimately be integrated into a lab-on-chip system. It also opens pathways to several technologically large-scale nanomaterials fabrication such as hierarchically ordered crystal architectures for sensor applications.Keywords: dendritic, electroplating, gold, template
Procedia PDF Downloads 186
666 Vehicles Analysis, Assessment and Redesign Related to Ergonomics and Human Factors
Authors: Susana Aragoneses Garrido
Abstract:
Every day, roads are the scene of numerous accidents involving vehicles, producing thousands of deaths and serious injuries all over the world. Investigations have revealed that Human Factors (HF) are one of the main causes of road accidents in modern societies. Distracted driving (including external or internal aspects of the vehicle), which is considered a human factor, is a serious and emergent risk to road safety. Consequently, further analysis of this issue is essential due to its importance to today’s society. The objectives of this investigation are the detection and assessment of the HF in order to provide solutions (including a better vehicle design) which might mitigate road accidents. The methodology of the project is divided into different phases. First, a statistical analysis of public databases from Spain and the UK is provided. Second, data are classified in order to analyse the major causes involved in road accidents. Third, a simulation of different paths and vehicles is presented. The causes related to the HF are assessed by Failure Mode and Effects Analysis (FMEA). Fourth, different car models are evaluated using the Rapid Upper Limb Assessment (RULA). Additionally, the JACK SIEMENS PLM tool is used with the intention of evaluating the Human Factor causes and providing the redesign of the vehicles. Finally, improvements in the car design are proposed with the intention of reducing the implication of HF in traffic accidents. The results from the statistical analysis, the simulations, and the evaluations confirm that accidents are an important issue in today’s society, especially accidents caused by HF such as distractions. The results explore the reduction of external and internal HF through a global risk analysis of vehicle accidents. Moreover, the evaluation of the different car models using the RULA method and JACK SIEMENS PLM proves the importance of a good adjustment of the driver’s seat in order to avoid harmful postures and therefore distractions. For this reason, a car redesign is proposed for the driver to acquire the optimum position, consequently reducing the human factors in road accidents.
Keywords: vehicle analysis, assessment, ergonomics, car redesign
Procedia PDF Downloads 335
665 Development and Validation of a Carbon Dioxide TDLAS Sensor for Studies on Fermented Dairy Products
Authors: Lorenzo Cocola, Massimo Fedel, Dragiša Savić, Bojana Danilović, Luca Poletto
Abstract:
An instrument for the detection and evaluation of gaseous carbon dioxide in the headspace of closed containers has been developed in the context of Packsensor Italian-Serbian joint project. The device is based on Tunable Diode Laser Absorption Spectroscopy (TDLAS) with a Wavelength Modulation Spectroscopy (WMS) technique in order to accomplish a non-invasive measurement inside closed containers of fermented dairy products (yogurts and fermented cheese in cups and bottles). The purpose of this instrument is the continuous monitoring of carbon dioxide concentration during incubation and storage of products over a time span of the whole shelf life of the product, in the presence of different microorganisms. The instrument’s optical front end has been designed to be integrated in a thermally stabilized incubator. An embedded computer provides processing of spectral artifacts and storage of an arbitrary set of calibration data allowing a properly calibrated measurement on many samples (cups and bottles) of different shapes and sizes commonly found in the retail distribution. A calibration protocol has been developed in order to be able to calibrate the instrument on the field also on containers which are notoriously difficult to seal properly. This calibration protocol is described and evaluated against reference measurements obtained through an industry standard (sampling) carbon dioxide metering technique. Some sets of validation test measurements on different containers are reported. Two test recordings of carbon dioxide concentration evolution are shown as an example of instrument operation. The first demonstrates the ability to monitor a rapid yeast growth in a contaminated sample through the increase of headspace carbon dioxide. Another experiment shows the dissolution transient with a non-saturated liquid medium in presence of a carbon dioxide rich headspace atmosphere.Keywords: TDLAS, carbon dioxide, cups, headspace, measurement
Procedia PDF Downloads 324
664 Jurisdiction Conflicts in Contracts of International Maritime Transport: The Application of the Forum Selection Clause in Brazilian Courts
Authors: Renan Caseiro De Almeida, Mateus Mello Garrute
Abstract:
The world walks to be ever more globalised. This trend promotes an increase on the number of transnational commercial transactions. The main modal for carriage of goods is by sea, and many countries have their economies dependent on the maritime freightage – it could be because they exercise largely this activity or because they follow the tendency of using the maritime logistic widely. Among these ones, Brazil is included. This nation counts with sixteen ports with good capacities, which receive most of the international income by sea. It is estimated that 85 per cent of the total influx of goods in Brazil is by maritime modal, leaving mere 15 per cent for the other ones. This made it necessary to develop maritime law in international and national basis, to create a standard to be applied with the intention to harmonize the transnational carriage of goods by sea. Maritime contracts are very specific and have interesting peculiarities, but in their range, little research has been made on what causes the main divergences when it comes to international contracts: the jurisdiction conflict. Likewise any other international contract, it is common for the parties to set a forum selection clause to choose the forum which will be able to judge the litigations that could rise from a maritime transport contract and, consequently, also which law should be applied to the cases. However, the forum choice in Brazil has always been somewhat polemical – not only in the maritime law sphere - for sometimes national tribunals overlook the parties’ choice and call the competence for themselves. In this sense, it is interesting to mention that the Mexico Convention of 1994 about the law applicable to international contracts did not gain strength in Brazil, nor even reached the Congress to be considered for ratification. Furthermore, it is also noteworthy that Brazil has a new Civil Procedure Code, which was put into reinforcement in 2016 bringing new legal provisions specifically about the forum selection. This represented a mark in the national legal system in this matter. Therefore, this paper intends to give an insight through Brazilian jurisprudence, making an analysis of how this issue has been treated on litigations about maritime contracts in the national tribunals, as well as the solutions found by the Brazilian legal system for the jurisdiction conflicts in those cases. To achieve the expected results, the hypothetical-deductive method will be used in combination with researches on doctrine and legislations. Also, jurisprudential research and case law study will have a special role, since the main point of this paper is to verify and study the position of the courts in Brazil in a specific matter. As a country of civil law, the Brazilian judges and tribunals are very attached to the rules displayed on codes. However, the jurisprudential understanding has been changing during the years and with the advent of the new rules about the applicable law and forum selection clause, it is noticeable that new winds are being blown.Keywords: applicable law, forum selection clause, international business, international maritime contracts, litigation in courts
Procedia PDF Downloads 274
663 Phantom and Clinical Evaluation of Block Sequential Regularized Expectation Maximization Reconstruction Algorithm in Ga-PSMA PET/CT Studies Using Various Relative Difference Penalties and Acquisition Durations
Authors: Fatemeh Sadeghi, Peyman Sheikhzadeh
Abstract:
Introduction: The Block Sequential Regularized Expectation Maximization (BSREM) reconstruction algorithm was recently developed to suppress excessive noise by applying a relative difference penalty. The aim of this study was to investigate the effect of various strengths of the noise penalization factor in the BSREM algorithm under different acquisition durations and lesion sizes in order to determine an optimum penalty factor by considering both quantitative and qualitative image evaluation parameters in clinical use. Materials and Methods: The NEMA IQ phantom and 15 clinical whole-body patients with prostate cancer were evaluated. The phantom and patients were injected with Gallium-68 Prostate-Specific Membrane Antigen (68Ga-PSMA) and scanned on a non-time-of-flight Discovery IQ Positron Emission Tomography/Computed Tomography (PET/CT) scanner with BGO crystals. The data were reconstructed using BSREM with β-values of 100-500 at intervals of 100. These reconstructions were compared to OSEM as a widely used reconstruction algorithm. Following the standard NEMA measurement procedure, background variability (BV), recovery coefficient (RC), contrast recovery (CR), and residual lung error (LE) were measured from phantom data, and signal-to-noise ratio (SNR), signal-to-background ratio (SBR), and tumor SUV were measured from clinical data. Qualitative features of the clinical images were visually ranked by one nuclear medicine expert. Results: The β-value acts as a noise suppression factor, so BSREM showed decreasing image noise with an increasing β-value. BSREM with a β-value of 400 at a decreased acquisition duration (2 min/bp) produced approximately the same noise level as OSEM at an increased acquisition duration (5 min/bp). For the β-value of 400 at a 2 min/bp duration, SNR increased by 43.7% and LE decreased by 62%, compared with OSEM at a 5 min/bp duration. In both phantom and clinical data, an increase in the β-value translated into a decrease in SUV. The lowest levels of SUV and noise were reached with the highest β-value (β=500), resulting in the highest SNR and lowest SBR due to the greater noise reduction than SUV reduction at the highest β-value. In the comparison of BSREM reconstructions with different β-values, the relative difference in the quantitative parameters was generally larger for smaller lesions. As the β-value decreased from 500 to 100, the increase in CR was 160.2% for the smallest sphere (10 mm) and 12.6% for the largest sphere (37 mm), and the trend was similar for SNR (-58.4% and -20.5%, respectively). BSREM was visually ranked higher than OSEM in all qualitative features. Conclusions: The BSREM algorithm, using more iterations, leads to greater quantitative accuracy without excessive noise, which translates into higher overall image quality and lesion detectability. This improvement can be used to shorten the acquisition time.
Keywords: BSREM reconstruction, PET/CT imaging, noise penalization, quantification accuracy
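For reference, the relative difference penalty whose strength the β-value controls is commonly written, following the relative difference prior of Nuyts et al., in the generic form below; this is the textbook formulation under the usual notation, not necessarily the scanner vendor's exact implementation.

```latex
R(f) \;=\; \beta \sum_{j} \sum_{k \in N_j} w_{jk}\,
\frac{(f_j - f_k)^2}{\,f_j + f_k + \gamma\,\lvert f_j - f_k \rvert\,}
```

Here f_j and f_k are neighbouring voxel values, N_j is the neighbourhood of voxel j, w_jk are neighbourhood weights, and γ controls edge preservation.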
Procedia PDF Downloads 96
662 Off-Line Text-Independent Arabic Writer Identification Using Optimum Codebooks
Authors: Ahmed Abdullah Ahmed
Abstract:
The task of recognizing the writer of a handwritten text has been an attractive research problem in the document analysis and recognition community, with applications in handwriting forensics, paleography, document examination, and handwriting recognition. This research presents an automatic method for writer recognition from digitized images of unconstrained writings. Although great effort has been made in previous studies to come up with various methods, their performance, especially in terms of accuracy, has fallen short, and room for improvement is still wide open. The proposed technique employs optimal codebook-based writer characterization, where each writing sample is represented by a set of features computed from two codebooks, beginning and ending. Unlike most of the classical codebook-based approaches, which segment the writing into graphemes, this study is based on fragmenting particular areas of writing, namely the beginning and ending strokes. The proposed method starts with contour detection to extract significant information from the handwriting; curve fragmentation is then employed to divide the beginning and ending zones of the handwriting into small fragments. The similar fragments of beginning strokes are grouped together to create the beginning cluster, and similarly, the ending strokes are grouped to create the ending cluster. These two clusters lead to the development of two codebooks (beginning and ending) by choosing the center of every group of similar fragments. The writings under study are then represented by computing the probability of occurrence of codebook patterns. The probability distribution is used to characterize each writer. Two writings are then compared by computing distances between their respective probability distributions. The evaluations were carried out on the standard ICFHR dataset of 206 writers using the beginning and ending codebooks separately. Finally, the ending codebook achieved the highest identification rate of 98.23%, which is the best result so far on the ICFHR dataset.
Keywords: off-line text-independent writer identification, feature extraction, codebook, fragments
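The final comparison step described above, representing each writing by the occurrence probabilities of codebook patterns and comparing the distributions, can be sketched as follows. Codebook construction (contour detection, fragmentation, clustering) is omitted; fragments are assumed to be already assigned to their nearest codebook entry, and the chi-square distance is one common choice that the abstract does not name explicitly.

```python
import numpy as np

def occurrence_distribution(fragment_ids, codebook_size):
    """Normalized histogram of codebook-pattern occurrences for one writing sample."""
    hist = np.bincount(fragment_ids, minlength=codebook_size).astype(float)
    return hist / max(hist.sum(), 1.0)

def chi_square_distance(p, q, eps=1e-12):
    """Smaller distance suggests the two writings are more likely by the same writer."""
    return 0.5 * np.sum((p - q) ** 2 / (p + q + eps))

# Toy example: fragment-to-codeword assignments for two writings, codebook of 64 entries
rng = np.random.default_rng(1)
writer_a = occurrence_distribution(rng.integers(0, 64, 400), 64)
writer_b = occurrence_distribution(rng.integers(0, 64, 350), 64)
print(chi_square_distance(writer_a, writer_b))
```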
Procedia PDF Downloads 512
661 Discovering the Effects of Meteorological Variables on the Air Quality of Bogota, Colombia, by Data Mining Techniques
Authors: Fabiana Franceschi, Martha Cobo, Manuel Figueredo
Abstract:
Bogotá, the capital of Colombia, is its largest city and one of the most polluted in Latin America due to the fast economic growth over the last ten years. Bogotá has been affected by high pollution events which led to high concentrations of PM10 and NO2, exceeding the local 24-hour legal limits (100 and 150 µg/m³, respectively). The most important pollutants in the city are PM10 and PM2.5 (which are associated with respiratory and cardiovascular problems), and it is known that their concentrations in the atmosphere depend on local meteorological factors. Therefore, it is necessary to establish a relationship between the meteorological variables and the concentrations of atmospheric pollutants such as PM10, PM2.5, CO, SO2, NO2, and O3. This study aims to determine the interrelations between meteorological variables and air pollutants in Bogotá using data mining techniques. Data from 13 monitoring stations were collected from the Bogotá Air Quality Monitoring Network for the period 2010-2015. The Principal Component Analysis (PCA) algorithm was applied to obtain primary relations between all the parameters, and afterwards, the K-means clustering technique was implemented to corroborate those relations found previously and to find patterns in the data. PCA was also used on a per-shift basis (morning, afternoon, night, and early morning) to validate possible variations of the previous trends, and on a per-year basis to verify that the identified trends remained throughout the study period. Results demonstrated that wind speed, wind direction, temperature, and NO2 are the most influential factors on PM10 concentrations. Furthermore, it was confirmed that high-humidity episodes increased PM2.5 levels. It was also found that there are directly proportional relationships between O3 levels and wind speed and radiation, while there is an inverse relationship between O3 levels and humidity. Concentrations of SO2 increase with the presence of PM10 and decrease with wind speed and wind direction. The results also showed a decreasing trend in pollutant concentrations over the last five years. Also, in rainy periods (March-June and September-December), some trends regarding precipitation were stronger. Results obtained with K-means demonstrated that it was possible to find patterns in the data; they also showed similar conditions and data distributions among the Carvajal, Tunal, and Puente Aranda stations, as well as between Parque Simon Bolivar and Las Ferias. It was verified that the aforementioned trends prevailed during the study period by applying the same technique per year. It was concluded that the PCA algorithm is useful to establish preliminary relationships among variables, and K-means clustering to find patterns in the data and understand its distribution. The discovery of patterns in the data allows using these clusters as an input to an Artificial Neural Network prediction model.
Keywords: air pollution, air quality modelling, data mining, particulate matter
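The two data-mining steps described above map directly onto standard tooling. The sketch below is a minimal illustration: the column names and file path are placeholders for the Bogotá monitoring-network variables, and the component and cluster counts are arbitrary choices, not those of the study.

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Placeholder column names; the real records would come from the Bogota Air Quality Monitoring Network
COLS = ["PM10", "PM2_5", "NO2", "SO2", "O3", "CO",
        "wind_speed", "wind_dir", "temperature", "humidity", "radiation", "precipitation"]

def pca_kmeans(csv_path, n_components=3, n_clusters=4):
    df = pd.read_csv(csv_path)[COLS].dropna()
    X = StandardScaler().fit_transform(df)             # put variables on a common scale
    pca = PCA(n_components=n_components).fit(X)
    scores = pca.transform(X)
    print("Explained variance ratio:", pca.explained_variance_ratio_)
    print(pd.DataFrame(pca.components_, columns=COLS))  # loadings: components x variables
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(scores)
    return scores, labels

# scores, labels = pca_kmeans("bogota_2010_2015.csv")
```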
Procedia PDF Downloads 258
660 Multimodal Ophthalmologic Evaluation Can Detect Retinal Injuries in Asymptomatic Patients With Primary Antiphospholipid Syndrome
Authors: Taurino S. R. Neto, Epitácio D. S. Neto, Flávio Signorelli, Gustavo G. M. Balbi, Alex H. Higashi, Mário Luiz R. Monteiro, Eloisa Bonfá, Danieli C. O. Andrade, Leandro C. Zacharias
Abstract:
Purpose: To perform a multimodal evaluation, including the use of Optical Coherence Angiotomography (OCTA), in patients with primary antiphospholipid syndrome (PAPS) without ocular complaints and to compare them with healthy individuals. Methods: A complete structural and functional ophthalmological evaluation using OCTA and microperimetry (MP) exam in patients with PAPS, followed at a tertiary rheumatology outpatient clinic, was performed. All ophthalmologic manifestations were recorded and then statistical analysis was performed for comparative purposes; p <0.05 was considered statistically significant. Results: 104 eyes of 52 subjects (26 patients with PAPS without ocular complaints and 26 healthy individuals) were included. Among PAPS patients, 21 were female (80.8%) and 21 (80.8%) were Caucasians. Thrombotic PAPS was the main clinical criteria manifestation (100%); 65.4% had venous and 34.6% had arterial thrombosis. Obstetrical criteria were present in 34.6% of all thrombotic PAPS patients. Lupus anticoagulant was present in all patients. 19.2% of PAPS patients presented ophthalmologic findings against none of the healthy individuals. The most common retinal change was paracentral acute middle maculopathy (PAMM) (3 patients, 5 eyes), followed by drusen-like deposits (1 patient, 2 eyes) and pachychoroid pigment epitheliopathy (1 patient, 1 eye). Systemic hypertension and hyperlipidaemia were present in 100% of the PAPS patients with PAMM, while only six patients (26.1%) with PAPS without PAMM presented these two risk factors together. In the quantitative OCTA evaluation, we found significant differences between PAPS patients and controls in both the superficial vascular complex (SVC) and deep vascular complex (DVC) in the high-speed protocol, as well as in the SVC in the high-resolution protocol. In the analysis of the foveal avascular zone (FAZ) parameters, the PAPS group had a larger area of FAZ in the DVC using the high-speed method compared to the control group (p=0.047). In the quantitative analysis of the MP, the PAPS group had lower central (p=0.041) and global (p<0.001) retinal sensitivity compared to the control group, as well as in the sector analysis, with the exception of the inferior sector. In the quantitative evaluation of fixation stability, there was a trend towards worse stability in the PAPS subgroup with PAMM in both studied methods. Conclusions: PAMM was observed in 11.5% of PAPS patients with no previous ocular complaints. Systemic hypertension concomitant with hyperlipidemia was the most commonly associated risk factor for PAMM in patients with PAPS. PAPS patients present lower vascular density and retinal sensitivity compared to the control group, even in patients without PAMM.Keywords: antiphospholipid syndrome, optical coherence angio tomography, optical coherence tomography, retina
Procedia PDF Downloads 80
659 Conditions That Brought Bounce-Back in Southern Europe: An Inter-Temporal and Cross-National Analysis on Female Labour Force Participation with Fuzzy Set Qualitative Comparative Analysis
Authors: A. Onur Kutlu, H. Tolga Bolukbasi
Abstract:
Since the 1990s, governments, international organizations, and scholars have drawn increasing attention to the significance of women in the labour force. While advanced industrial countries in North-Western Europe and North America managed to increase female labour force participation (FLFP) in the early post-World War Two period, the emerging economies of the 1970s were able to increase FLFP only a decade later. Among these areas, Southern Europe features a wave of remarkable bounce-backs in FLFP. However, despite striking similarities between the features in Southern Europe and those in Turkey, Turkey has not been able to pull women into the labour force. Despite a host of institutional similarities, Turkey has failed to reach the level of her Southern European neighbours. This paper addresses the puzzle of why Turkey lags behind in FLFP in comparison to her Southern European neighbours. There are signs showing that FLFP is currently reaching a critical threshold at a time when structural factors may allow an upward trend. What is not known, however, is the constellation of conditions which may bring rising FLFP in Turkey. In order to gain analytical leverage from similar transitions in countries that share similar labour market and welfare state regime characteristics, this paper first identifies the conditions in Southern Europe that brought rising FLFP, to be able to explore the prospects for Turkey. Second, this paper takes these variables in fuzzy set Qualitative Comparative Analysis (fsQCA) as conditions which can potentially explain the outcome of rising FLFP in Portugal, Spain, Italy, Greece, and Turkey. The purpose here is to identify any causal pathways that may exist leading to rising FLFP in Southern Europe. In order to do so, this study analyses two time periods in all cases, which represent different periods for different countries. The first period is identified on the basis of low FLFP and the second period on the basis of the transition to significantly higher FLFP. Third, the conditions are treated following the standard procedures in fsQCA, which provide equifinal solutions: two distinct paths to higher levels of FLFP in Southern Europe, each of which may potentially increase FLFP in Turkey. Based on this analysis, this paper proposes that there exist two distinct paths leading to higher levels of FLFP in Southern Europe. Among these paths, the salience of left parties emerges as a sufficient condition. In cases where this condition was not present, a second path combining enlarging service sector employment, increased tertiary education among women, and increased childcare enrolment rates led to increasing FLFP.
Keywords: female labour force participation, fsQCA, Southern Europe, Turkey
Procedia PDF Downloads 326658 Elevated Creatinine Clearance and Normal Glomerular Filtration Rate in Patients with Systemic Lupus Erythematosus
Authors: Stoyanka Vladeva, Elena Kirilova, Nikola Kirilov
Abstract:
Background: Creatinine clearance is a widely used value for estimating the GFR. Increased creatinine clearance is often called hyperfiltration and is usually seen during pregnancy and in patients with diabetes mellitus preceding diabetic nephropathy. It may also occur with large dietary protein intake or with plasma volume expansion. Renal injury in lupus nephritis is known to affect the glomerular, tubulointerstitial, and vascular compartments. However, high creatinine clearance has not been reported in patients with SLE. Target: Follow-up of creatinine clearance values in patients with systemic lupus erythematosus without a history of kidney injury. Material and methods: We observed the creatinine, creatinine clearance, GFR and dipstick protein values of 7 women (with a mean age of 42.71 years) with systemic lupus erythematosus. Patients with active lupus were tested monthly over a period of 13 months. Creatinine clearance was estimated by the Cockcroft-Gault equation in ml/sec. GFR was estimated by the MDRD (Modification of Diet in Renal Disease) formula in ml/min/1.73 m2. Proteinuria was defined as present when dipstick protein was > 1+. Results: In all patients without a history of kidney injury we found elevated creatinine clearance levels, but GFR remained within the reference range. Two of the patients were in remission, while the other five patients had clinically and immunologically active lupus. Three of the patients had a permanent presence of high creatinine clearance levels and proteinuria. Two of the patients had periodically elevated creatinine clearance without proteinuria. These results show that kidney disturbances may be caused by the vascular changes typical of SLE. Glomerular hyperfiltration can be a result of focal segmental glomerulosclerosis caused by a reduction in renal mass. Lupus nephropathy is probably preceded not only by glomerular vascular changes but also by tubular vascular changes. Using only the GFR is not a sufficient method to detect these primary functional disturbances. Conclusion: For early detection of kidney injury in patients with SLE, we determined that following up creatinine clearance values could be helpful.Keywords: systemic Lupus erythematosus, kidney injury, elevated creatinine clearance level, normal glomerular filtration rate
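The two renal-function estimates named above are standard published equations: the Cockcroft-Gault formula for creatinine clearance and the four-variable MDRD formula for estimated GFR. A minimal sketch of both follows, assuming serum creatinine in mg/dL; the patient values are hypothetical, and the division by 60 only converts the conventional mL/min result to the ml/sec unit reported here.

```python
# Sketch of the two renal-function formulas named in the abstract
# (standard published equations; the patient values below are hypothetical).

def cockcroft_gault(age, weight_kg, scr_mg_dl, female):
    """Creatinine clearance in mL/min (divide by 60 for mL/sec as reported above)."""
    crcl = ((140 - age) * weight_kg) / (72 * scr_mg_dl)
    return crcl * 0.85 if female else crcl

def mdrd_gfr(age, scr_mg_dl, female, black=False):
    """Estimated GFR in mL/min/1.73 m2 (4-variable MDRD equation)."""
    gfr = 175 * scr_mg_dl ** -1.154 * age ** -0.203
    if female:
        gfr *= 0.742
    if black:
        gfr *= 1.212
    return gfr

# Hypothetical patient: 42-year-old woman, 65 kg, serum creatinine 0.7 mg/dL
crcl_ml_min = cockcroft_gault(42, 65, 0.7, female=True)
print(round(crcl_ml_min, 1), "mL/min =", round(crcl_ml_min / 60, 2), "mL/sec")
print(round(mdrd_gfr(42, 0.7, female=True), 1), "mL/min/1.73 m2")
```

With these illustrative values the clearance is elevated while the estimated GFR stays in the reference range, which is the pattern the abstract describes.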
Procedia PDF Downloads 270657 The 'Toshi-No-Sakon' Phenomenon: A Trend in Japanese Family Formations
Authors: Franco Lorenzo D. Morales
Abstract:
‘Toshi-no-sakon,’ which translates as ‘age-gap marriage,’ is a term that has been popularized by celebrity couples in the Japanese entertainment industry. Japan is distinct among developed nations for its rapidly aging population, declining marital and fertility rates, and the reinforcement of traditional gender roles. Statistical data show that the average age of marriage in Japan is increasing every year, indicating a growing tendency toward late marriage. As a result, the government has been trying to curb the declining trends by encouraging marriage and childbirth among the populace. This graduate thesis seeks to analyze the ‘toshi-no-sakon’ phenomenon in light of Japan’s current economic and social situation, and to see what the implications are for these kinds of married couples. This research also seeks to expound on age gaps within married couples, a factor rarely touched upon in Japanese family studies. A literature review was first performed in order to provide a framework to study ‘toshi-no-sakon’ from the perspective of four fields of study: marriage, family, aging, and gender. Numerous anonymous online statements by ‘toshi-no-sakon’ couples were then collected and analyzed, which brought to light a number of concerns. Couples wherein the husband is the older partner were prioritized in order to narrow the focus of the research, and ‘toshi-no-sakon’ is only considered when the couple’s age gap is ten years or more. Current findings suggest that one of the perceived merits for a woman marrying an older man is guaranteed financial security. However, this has been shown to be untrue, as a number of couples express concern regarding their financial situation, which could be attributed to the husband’s socio-economic status. Having an older husband who is approaching the age of retirement presents another dilemma, as the wife would be more obliged to provide care for her aging husband. This notion of the wife as caregiver likely stems from an arrangement once common in Japanese families in which the wife must primarily care for her husband’s elderly parents. Childbearing is another concern, as couples would be pressured to have a child right away due to the age of the husband, in addition to limiting the couple’s ideal number of children. This is another problematic aspect, as the husband would have to provide income until his child has finished their education, implying that retirement would have to be delayed indefinitely. It is highly recommended that future studies conduct face-to-face interviews with couples and families who fall under the category of ‘toshi-no-sakon’ in order to gain a more in-depth perspective into the phenomenon and to reveal any undiscovered trends. Cases wherein the wife is the older partner in the relationship should also be given focus in future studies involving ‘toshi-no-sakon’.Keywords: age gap, family structure, gender roles, marriage trends
Procedia PDF Downloads 364656 Effect of Phthalates on Male Infertility: Myth or Truth?
Authors: Rashmi Tomar, A. Srinivasan, Nayan K. Mohanty, Arun K. Jain
Abstract:
Phthalates have been used as additives in industrial products since the 1930s and are universally considered to be ubiquitous environmental contaminants. The general population is exposed to phthalates through consumer products, as well as diet and medical treatments. Animal studies showing the existence of an association between some phthalates and testicular toxicity have generated public and scientific concern about the potential adverse effects of environmental changes on male reproductive health. Unprecedented declines in fertility rates and semen quality have been reported during the last half of the 20th century in developed countries, and there is increasing interest in the potential relationship between exposure to environmental contaminants, including phthalates, and human male reproductive health. Phthalates may be associated with altered endocrine function and adverse effects on male reproductive development and function, but human studies are limited. The aim of the present study was the detection of phthalate compounds and the estimation of their metabolites in infertile and fertile males. Blood and urine samples were collected from 150 infertile patients and 75 fertile volunteers recruited through the Department of Urology, Safdarjung Hospital, New Delhi. Blood was collected in separate glass tubes from the antecubital vein of the patients, serum was separated, and phthalate levels in serum samples were estimated by gas chromatography/mass spectrometry following the detailed NIOSH/OSHA protocol. Urine of infertile and fertile subjects was collected and extracted using a solid-phase extraction method and analyzed by HPLC. In conclusion, to the best of our knowledge, the present study is the first human study to show the presence of phthalates in serum samples and of their metabolites in urine samples. Significant differences in several phthalates were observed between infertile and fertile healthy individuals.Keywords: Gas Chromatography, HPLC, male infertility, phthalates, serum, toxicity, urine
Procedia PDF Downloads 363655 Baricitinib Lipid-based Nanosystems as a Topical Alternative for Atopic Dermatitis Treatment
Authors: N. Garrós, P. Bustos, N. Beirampour, R. Mohammadi, M. Mallandrich, A.C. Calpena, H. Colom
Abstract:
Atopic dermatitis (AD) is a persistent skin condition characterized by chronic inflammation caused by an autoimmune response. It is a prevalent clinical issue that requires continual treatment to enhance the patient's quality of life. Systemic therapy often involves the use of glucocorticoids or immunosuppressants to manage symptoms. Our objective was to create and assess topical liposomal formulations containing Baricitinib (BNB), a reversible inhibitor of Janus-associated kinase (JAK), which is involved in various immune responses. These formulations were intended to address flare-ups and improve treatment outcomes for AD. We created three distinct liposomal formulations by combining different amounts of 1-palmitoyl-2-oleoyl-glycero-3-phosphocholine (POPC), cholesterol (CHOL), and ceramide (CER): (i) pure POPC, (ii) POPC mixed with CHOL (at a ratio of 8:2, mol/mol), and (iii) POPC mixed with CHOL and CER (at a ratio of 3.6:2.4:4.0 mol/mol/mol). We conducted various tests to determine the formulations' skin tolerance, irritancy capacity, and ability to cause erythema and edema on altered skin. We also assessed the transepidermal water loss (TEWL) and skin hydration of rabbits to evaluate the efficacy of the formulations. Histological analysis, the HET-CAM test, and the modified Draize test were all used in the evaluation process. The histological analysis revealed that the POPC and POPC:CHOL liposomes did not damage the tissue structures. The HET-CAM test showed no irritation effect caused by any of the three liposomes, and the modified Draize test showed a good Draize score for erythema and edema. Liposome POPC effectively counteracted the impact of xylol on the skin, and no erythema or edema was observed during the study. TEWL values were constant for all the liposomes, with values similar to the negative control (within the range of 8 to 15 g/h·m2, a healthy value for rabbits), whereas the positive control showed a significant increase. The skin hydration values were constant and followed the trend of the negative control, while the positive control showed a steady increase during the tolerance study. In conclusion, the developed formulations containing BNB exhibited no harmful or irritating effects: they did not demonstrate any irritant potential in the HET-CAM test, and the POPC and POPC:CHOL liposomes did not cause any structural alteration according to the histological analysis. These positive findings suggest that additional research is necessary to evaluate the efficacy of these liposomal formulations in animal models of the disease, including mutant animals. Furthermore, before proceeding to clinical trials, biochemical investigations should be conducted to better understand the mechanisms of action involved in these formulations.Keywords: baricitinib, HET-CAM test, histological study, JAK inhibitor, liposomes, modified draize test
Procedia PDF Downloads 92654 Magnitude of Meconium Stained Amniotic Fluid and Associated Factors among Women Who Gave Birth in North Shoa Zone Hospital’s Amhara Region Ethiopia 2022
Authors: Mitiku Tefera
Abstract:
Background: Meconium-stained amniotic fluid is one of the primary causes of birth asphyxia. Each year, over five million neonatal deaths occur worldwide due to meconium-stained amniotic fluid, with 90% of these deaths due to birth asphyxia. In Ethiopia, meconium-stained amniotic fluid is under-investigated, particularly in the North Shoa Zone of the Amhara region. Objective: The aim of this study was to assess the magnitude of meconium-stained amniotic fluid and associated factors among women who gave birth in North Shoa Zone Hospitals, Amhara Region, Ethiopia, in 2022. Methods: An institution-based, cross-sectional study was conducted among 628 women who gave birth at North Shoa Zone Hospitals, Amhara, Ethiopia. The study was conducted from 08 June to 08 August 2022. Two-stage cluster sampling was used to recruit study participants. Data were collected using a structured interviewer-administered questionnaire and chart review, entered into Epi-Data Version 4.6, and exported to SPSS Version 25. Logistic regression was employed, and a p-value <0.05 was considered significant. Result: The magnitude of meconium-stained amniotic fluid was 30.3%. Women presenting with a normal hematocrit level were 83% less likely to develop meconium-stained amniotic fluid. A mid-upper arm circumference of less than 22.9 cm (AOR=1.9; 95% CI: 1.18-3.20), obstructed labor (AOR=3.6; 95% CI: 1.48-8.83), prolonged labor ≥ 15 hr (AOR=7.5; 95% CI: 7.68-13.3), premature rupture of the membranes (AOR=1.7; 95% CI: 3.22-7.40), fetal tachycardia (AOR=6.2; 95% CI: 2.41-16.3) and bradycardia (AOR=3.1; 95% CI: 1.93-5.28) were significantly associated with meconium-stained amniotic fluid. Conclusion: The magnitude of meconium-stained amniotic fluid was high. In this study, a MUAC value <22.9 cm, obstructed and prolonged labor, PROM, bradycardia, and tachycardia were factors associated with meconium-stained amniotic fluid. A follow-up study and pooling of similar articles are recommended for better evidence, along with enhancing intrapartum services and strengthening early detection of meconium-stained amniotic fluid for the health of the mother and baby.Keywords: women, meconium-stained amniotic fluid, magnitude, Ethiopia
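The adjusted odds ratios and 95% confidence intervals above are the exponentiated coefficients of a multivariable logistic regression. A minimal sketch of that computation is shown below using statsmodels; the synthetic data, variable names and effect sizes are assumptions for illustration only and do not reproduce the study's dataset or results.

```python
# Sketch: deriving adjusted odds ratios (AOR) with 95% CIs from a
# multivariable logistic regression on synthetic, hypothetical data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 628  # sample size reported in the abstract
predictors = ["muac_lt_22_9cm", "obstructed_labor", "prolonged_labor_ge_15hr",
              "prom", "fetal_tachycardia", "fetal_bradycardia"]
df = pd.DataFrame({p: rng.integers(0, 2, n) for p in predictors})

# Synthetic outcome: meconium-stained amniotic fluid (1 = present)
logit = -1.5 + 0.8 * df["prolonged_labor_ge_15hr"] + 0.6 * df["obstructed_labor"]
df["msaf"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(df[predictors])
fit = sm.Logit(df["msaf"], X).fit(disp=0)

aor = np.exp(fit.params).rename("AOR")                        # adjusted odds ratios
ci = np.exp(fit.conf_int()).rename(columns={0: "2.5%", 1: "97.5%"})
print(pd.concat([aor, ci], axis=1).round(2))
```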
Procedia PDF Downloads 128653 Bioaccumulation and Forensic Relevance of Gunshot Residue in Forensically Relevant Blowflies
Authors: Michaela Storen, Michelle Harvey, Xavier Conlan
Abstract:
Gun violence internationally is increasing at an unprecedented level, becoming a favoured means of executing violence against another individual. Not only is this putting a strain on forensic scientists who attempt to determine the cause of death in circumstances where firearms have been involved, but it also highlights the need for an alternative technique for identifying a gunshot wound when other established techniques have been exhausted. A corpse may be colonized by necrophagous insects following death, and this close association between the time of death and insect colonization makes entomological samples valuable evidence when remains become decomposed beyond toxicological utility. Entomotoxicology provides the potential for the identification of toxins in a decomposing corpse, with recent research uncovering the capability of entomotoxicology to detect gunshot residue (GSR) in a corpse. However, shortcomings of the limited literature available on this topic have not been addressed: bioaccumulation, detection limits, and sensitivity to gunshots have not been considered thus far, leaving questions as to the applicability of this new technique in the forensic context. Larvae were placed on meat contaminated with GSR at different concentrations and compared to a control meat sample to establish the uptake of GSR by the larvae, with bioaccumulation established by placing the larvae on fresh, uncontaminated meat for a period of time before analysis using ICP-MS. The findings on Pb, Ba, and Sb at each stage of the lifecycle and their bioaccumulation in the larvae will be presented. In addition, throughout the experiments described above, larvae were washed once, twice and three times to evaluate the effectiveness of existing entomological practices in removing external toxins from specimens prior to entomotoxicological analysis. Analysis of these larval washes will be presented. By addressing these points, this research extends the utility of entomotoxicology in cause-of-death investigations, provides an additional source of evidence for forensic scientists in circumstances involving a gunshot wound on a corpse, and assesses the effectiveness of current entomology collection protocols.Keywords: bioaccumulation, chemistry, entomology, gunshot residue, toxicology
Procedia PDF Downloads 81652 Developing a Leukemia Diagnostic System Based on Hybrid Deep Learning Architectures in Actual Clinical Environments
Authors: Skyler Kim
Abstract:
An early diagnosis of leukemia has always been a challenge for doctors and hematologists. On a worldwide basis, approximately 350,000 new cases were reported in 2012, and diagnosing leukemia has been time-consuming and inefficient because of an endemic shortage of flow cytometry equipment in current clinical practice. As the number of medical diagnosis tools increased and a large volume of high-quality data was produced, there was an urgent need for more advanced data analysis methods. One of these methods is the AI approach, which has become a major trend in recent years, and several research groups have been working on developing such diagnostic models. However, designing and implementing a leukemia diagnostic system in real clinical environments based on a deep learning approach with larger sets remains complex. Leukemia is a major hematological malignancy that results in mortality and morbidity across different ages. We selected acute lymphocytic leukemia to develop our diagnostic system since it is the most common type of childhood leukemia, accounting for 74% of all children diagnosed with leukemia. The results from this development work can be applied to all other types of leukemia. To develop our model, the Kaggle dataset was used, which consists of 15,135 total images; 8,491 of these are images of abnormal cells, and 5,398 are normal. In this paper, we design and implement a leukemia diagnostic system in a real clinical environment based on deep learning approaches with larger sets. The proposed diagnostic system detects and classifies leukemia. Different from other AI approaches, we explore hybrid architectures to improve on current performance. First, we developed two independent convolutional neural network models: VGG19 and ResNet50. Then, using both VGG19 and ResNet50, we developed a hybrid deep learning architecture employing transfer learning techniques to extract features from each input image. In our approach, the features fused from specific abstraction layers serve as auxiliary features and lead to further improvement of the classification accuracy. Features extracted from the lower levels are combined into higher-dimensional feature maps to help improve the discriminative capability of intermediate features and also mitigate vanishing or exploding gradients. By comparing VGG19, ResNet50 and the proposed hybrid model, we concluded that the hybrid model had a significant advantage in accuracy. The detailed results of each model’s performance and their pros and cons will be presented at the conference.Keywords: acute lymphoblastic leukemia, hybrid model, leukemia diagnostic system, machine learning
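A minimal Keras sketch of the feature-fusion idea described above is given below: two ImageNet-pretrained backbones are used as frozen feature extractors, their pooled outputs are concatenated, and a small dense head performs the normal/abnormal classification. The input size, pooling choice and head layers are assumptions for illustration, not the authors' exact configuration.

```python
# Sketch of a hybrid VGG19 + ResNet50 feature-fusion classifier
# (layer choices and hyperparameters are assumptions, not the paper's exact setup).
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import VGG19, ResNet50

inputs = layers.Input(shape=(224, 224, 3))

# Two ImageNet-pretrained backbones used as frozen feature extractors (transfer learning)
vgg = VGG19(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
res = ResNet50(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
vgg.trainable = False
res.trainable = False

# Pool each backbone's final feature map and fuse the two vectors
vgg_feat = layers.GlobalAveragePooling2D()(vgg(inputs))
res_feat = layers.GlobalAveragePooling2D()(res(inputs))
fused = layers.Concatenate()([vgg_feat, res_feat])

# Small classification head: normal vs. abnormal (leukemic) cell
x = layers.Dense(256, activation="relu")(fused)
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(1, activation="sigmoid")(x)

model = Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```

Training then proceeds with the usual `model.fit` call on the image dataset; fine-tuning the upper backbone layers after this warm-up stage is a common follow-on step.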
Procedia PDF Downloads 187651 Knowledge Management and Administrative Effectiveness of Non-teaching Staff in Federal Universities in the South-West, Nigeria
Authors: Nathaniel Oladimeji Dixon, Adekemi Dorcas Fadun
Abstract:
Educational managers have observed a downward trend in the administrative effectiveness of non-teaching staff in federal universities in South-west Nigeria. This is evident in the low-quality service delivery of administrators and the unaccomplished institutional goals and missions of higher education. Scholars have thus indicated the need for the deployment and adoption of a practice that encourages information collection and sharing among stakeholders with a view to improving service delivery and outcomes. This study examined the extent to which knowledge management correlated with the administrative effectiveness of non-teaching staff in federal universities in South-west Nigeria. The study adopted the survey design. Three federal universities (the University of Ibadan, Federal University of Agriculture, Abeokuta, and Obafemi Awolowo University) were purposively selected because administrative ineffectiveness was more pronounced among non-teaching staff in government-owned universities, and these federal universities were long established. Proportional and stratified random sampling was adopted to select 1,156 non-teaching staff across the three universities along the three existing layers of the non-teaching staff: secretarial (senior=311; junior=224), non-secretarial (senior=147; junior=241) and technicians (senior=130; junior=103). The Knowledge Management Practices Questionnaire with four sub-scales: knowledge creation (α=0.72), knowledge utilization (α=0.76), knowledge sharing (α=0.79) and knowledge transfer (α=0.83); and the Administrative Effectiveness Questionnaire with four sub-scales: communication (α=0.84), decision implementation (α=0.75), service delivery (α=0.81) and interpersonal relationship (α=0.78) were used for data collection. Data were analyzed using descriptive statistics, Pearson product-moment correlation and multiple regression at the 0.05 level of significance, while qualitative data were content analyzed. About 59.8% of the non-teaching staff exhibited a low level of knowledge management. The indices of administrative effectiveness of non-teaching staff were rated as follows: service delivery (82.0%), communication (78.0%), decision implementation (71.0%) and interpersonal relationship (68.0%). Knowledge management had significant relationships with the indices of administrative effectiveness: service delivery (r=0.82), communication (r=0.81), decision implementation (r=0.80) and interpersonal relationship (r=0.47). Knowledge management had a significant joint prediction of administrative effectiveness (F(4;1151)=0.79, R=0.86), accounting for 73.0% of its variance. Knowledge sharing (β=0.38), knowledge transfer (β=0.26), knowledge utilization (β=0.22), and knowledge creation (β=0.06) made relatively significant contributions to administrative effectiveness. Lack of team spirit and withdrawal syndrome are the major perceived constraints on knowledge management practices among the non-teaching staff. Knowledge management positively influenced the administrative effectiveness of the non-teaching staff in federal universities in South-west Nigeria. There is a need to ensure that the non-teaching staff imbibe team spirit and embrace teamwork with a view to eliminating their withdrawal syndromes. Besides, knowledge management practices should be deployed into the administrative procedures of the university system.Keywords: knowledge management, administrative effectiveness of non-teaching staff, federal universities in South-west Nigeria, knowledge creation, knowledge utilization, effective communication, decision implementation
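The reliability coefficients (Cronbach's α) and the joint multiple regression reported above follow standard computations. The sketch below shows both on synthetic data; the item scores, variable names and effect sizes are assumptions for illustration and do not reproduce the study's figures.

```python
# Sketch: Cronbach's alpha for one sub-scale and an OLS multiple regression
# of administrative effectiveness on four knowledge-management dimensions
# (all data below are synthetic and hypothetical).
import numpy as np
import pandas as pd
import statsmodels.api as sm

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(1)
n = 1156  # sample size reported in the abstract
latent = rng.normal(size=n)

# Synthetic 5-item sub-scale driven by one latent trait
scale = pd.DataFrame({f"item{i}": latent + rng.normal(scale=1.0, size=n) for i in range(1, 6)})
print("Cronbach's alpha:", round(cronbach_alpha(scale), 2))

# Synthetic predictors and outcome for the joint-prediction model
km = pd.DataFrame({
    "knowledge_creation":    rng.normal(size=n),
    "knowledge_utilization": rng.normal(size=n),
    "knowledge_sharing":     rng.normal(size=n),
    "knowledge_transfer":    rng.normal(size=n),
})
admin_eff = 0.4 * km["knowledge_sharing"] + 0.3 * km["knowledge_transfer"] + rng.normal(size=n)

ols = sm.OLS(admin_eff, sm.add_constant(km)).fit()
print("R-squared:", round(ols.rsquared, 2), " F:", round(ols.fvalue, 2))
print(ols.params.round(2))  # per-dimension contributions (unstandardized)
```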
Procedia PDF Downloads 102