Search results for: analytical streamflow distribution
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7171

601 Population Centralization in Urban Area and Metropolitans in Developing Countries: A Case Study of Urban Centralization in Iran

Authors: Safar Ghaedrahmati, Leila Soltani

Abstract:

Population centralization in urban areas and metropolises, especially in developing countries such as Iran, aggravates metropolitan problems. For several decades, the population of cities in developing countries, including Iran, has grown faster than the total national population. Whereas in developed countries the growth of big cities began decades ago and generally allowed for controlled, planned urban expansion, the opposite holds in developing countries, where rapid urbanization is characterized by unplanned urban expansion. Developing metropolitan cities have enormous difficulty coping with both natural population growth and physical urban expansion. Iranian cities have been at the heart of the economic and cultural changes that followed the Islamic revolution of 1979, and they increasingly exert influence through political-economic arrangements and, chiefly, urban management structures. These structural features have driven the growth of cities and urbanization (in number, population, and physical extent) and the main problems within them. At the same time, the absence of birth control policies and the deceptive attractions of cities, particularly big cities, caused the birth rate to shoot up, mainly in rural regions and small cities. The population of Iran has increased rapidly since 1956. The 1956 and 1966 decennial censuses counted 18.9 million and 25.7 million people, respectively, a 3.1% annual growth rate over 1956-1966. The 1976 and 1986 censuses counted 33.7 and 49.4 million, respectively, corresponding to annual growth rates of 2.7% and 3.9% over 1966-1976 and 1976-1986. The 1996 count put Iran's population at 60 million, a 1.96% annual growth rate over 1986-1996, and the 2006 count put it at 72 million.
A recent major policy of urban economic and industrial decentralization is a persistent government program. The policy responds to the massive growth of Tehran in recent years, up to 9 million by 2010. Part of the capital's growth resulted from the lack of economic opportunities elsewhere; to redress Tehran's growing primacy and the domestic pressures it is undergoing, decentralization is to be implemented as quickly as possible. The research is applied, data were collected from documentary sources, and the methods of analysis combine population analysis with urban system and urban distribution system analysis.
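The census growth rates quoted above follow from the standard compound-growth formula; a minimal sketch (function name and millions-of-people convention are illustrative, not from the paper):

```python
def annual_growth_rate(p_start, p_end, years):
    # Compound annual growth rate between two census counts
    return (p_end / p_start) ** (1 / years) - 1

# 1956 census: 18.9 million; 1966 census: 25.7 million
rate = annual_growth_rate(18.9, 25.7, 10)
print(round(rate * 100, 1))  # -> 3.1 (% per year), matching the quoted figure
```

The same formula reproduces the 1.96% rate quoted for 1986-1996 (49.4 to 60 million).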

Keywords: population centralization, cities of Iran, urban centralization, urban system

Procedia PDF Downloads 291
600 Estimating Affected Croplands and Potential Crop Yield Loss of an Individual Farmer Due to Floods

Authors: Shima Nabinejad, Holger Schüttrumpf

Abstract:

Farmers living in flood-prone areas such as coasts are exposed to storm surges intensified by climate change. Crop cultivation is their most important economic activity, and during flooding agricultural land is subject to inundation. Moreover, overflowing saline water causes more severe damage than riverine flooding: agricultural crops are more vulnerable to salinity than other land uses, so economic damage may persist for years after a flood and affect farmers' decision-making in the following years. It is therefore essential to assess to what extent agricultural areas are flooded and how large the associated flood damage to each individual farmer is. To address these questions, we integrated farmers' decision-making at farm scale with flood risk management. The integrated model comprises identification of hazard scenarios, failure analysis of structural measures, derivation of hydraulic parameters for the inundated areas, and analysis of the economic damage experienced by each farmer. The present study has two aims: first, to investigate the flooded cropland and potential crop damage for the whole area; second, to compare them among farmers' fields for three flood scenarios that differ in the breach locations of the flood protection structure. The spatial distribution of farmers' fields and cultivated crops was fed into the flood risk model, and a 100-year storm surge hydrograph was selected as the flood event. The study area was Pellworm Island, located in the German Wadden Sea National Park and surrounded by the North Sea. Because of the high salt content of North Sea water, crops cultivated on Pellworm Island are 100% destroyed by storm surges, which was taken into account in developing the depth-damage curve for the consequence analysis.
As a result, inundated cropland and economic damage to crops were estimated for the whole island and compared for six selected farmers under three flood scenarios. The results demonstrate the significance and flexibility of the proposed model for flood risk assessment of flood-prone areas by integrating flood risk management and decision-making.
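Under the island's 100%-loss assumption, the crop depth-damage curve collapses to a step function; a minimal sketch (the function name and the example crop value are illustrative, not the study's model):

```python
def crop_damage(depth_m, crop_value):
    # Saline storm-surge water destroys the crop entirely at any
    # positive inundation depth (the Pellworm assumption above)
    return crop_value if depth_m > 0 else 0.0

print(crop_damage(0.0, 1500.0))  # dry field: no loss
print(crop_damage(0.4, 1500.0))  # any inundation: total loss
```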

Keywords: crop damages, flood risk analysis, individual farmer, inundated cropland, Pellworm Island, storm surges

Procedia PDF Downloads 253
599 Inbreeding Study Using Runs of Homozygosity in Nelore Beef Cattle

Authors: Priscila A. Bernardes, Marcos E. Buzanskas, Luciana C. A. Regitano, Ricardo V. Ventura, Danisio P. Munari

Abstract:

The best linear unbiased predictor (BLUP) is a method commonly used in the genetic evaluation of breeding programs. However, this approach can raise inbreeding coefficients in the population because of the intensive use of a few bulls with high genetic potential, which usually share some degree of relatedness. High levels of inbreeding are associated with low genetic viability, fertility, and performance for some economically important traits and should therefore be constantly monitored. Unreliable pedigree data can also lead to misleading results. Genomic information (i.e., single nucleotide polymorphisms, SNPs) is a useful tool for estimating the inbreeding coefficient. Runs of homozygosity (ROH) have been used to evaluate homozygous segments inherited through direct or collateral inbreeding and allow inferences about the population's selection history. This study aimed to evaluate ROH and inbreeding in a population of Nelore beef cattle. A total of 814 animals were genotyped with the Illumina BovineHD BeadChip. Quality control excluded SNPs located in non-autosomal regions or with unknown position, SNPs with a Hardy-Weinberg equilibrium p-value below 10⁻⁵ or a call rate below 0.98, and samples with a call rate below 0.90. After quality control, 809 animals and 509,107 SNPs remained for analysis. For the ROH analysis, the PLINK software was used, considering segments of at least 50 SNPs with a minimum length of 1 Mb in each animal. The inbreeding coefficient was calculated as the ratio between the sum of all ROH lengths and the size of the whole genome (2,548,724 kb). A total of 25,711 ROH were observed, with mean, median, minimum, and maximum lengths of 3.34 Mb, 2 Mb, 1 Mb, and 80.8 Mb, respectively. The number of SNPs in ROH segments varied from 50 to 14,954. The largest total ROH extent was observed in one animal, with 634 Mb (24.88% of the genome).
Four bulls were among the 10 animals with the longest total ROH extent, with 11% of their ROH longer than 10 Mb. Segments longer than 10 Mb indicate recent inbreeding; the results therefore point to the intensive use of a few sires in the studied data. The distribution of ROH along the chromosomes showed that chromosomes 5 and 6 carried a larger number of segments than the other chromosomes. The mean, median, minimum, and maximum inbreeding coefficients were 5.84%, 5.40%, 0.00%, and 24.88%, respectively. Although the mean inbreeding was low, the ROH indicate a recent and intensive use of a few sires, which should be avoided for the genetic progress of the breed.
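The genomic inbreeding coefficient described above is the summed ROH length divided by the genome length; a minimal sketch using the study's genome size (the example segment list is illustrative):

```python
GENOME_KB = 2_548_724  # whole-genome length used in the study (kb)

def f_roh(roh_lengths_kb):
    # Inbreeding coefficient: fraction of the genome covered by ROH
    return sum(roh_lengths_kb) / GENOME_KB

# An animal whose ROH sum to 634 Mb (634,000 kb):
print(round(f_roh([634_000]) * 100, 2))  # -> 24.88 (% of the genome)
```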

Keywords: autozygosity, Bos taurus indicus, genomic information, single nucleotide polymorphism

Procedia PDF Downloads 143
598 Time-Interval between Rectal Cancer Surgery and Reintervention for Anastomotic Leakage and the Effects of a Defunctioning Stoma: A Dutch Population-Based Study

Authors: Anne-Loes K. Warps, Rob A. E. M. Tollenaar, Pieter J. Tanis, Jan Willem T. Dekker

Abstract:

Anastomotic leakage after colorectal cancer surgery remains a severe complication. Early diagnosis and treatment are essential to prevent further adverse outcomes. The literature suggests that earlier reintervention is associated with better survival, but anastomotic leakage can occur at a highly variable time interval from the index surgery. This study aims to evaluate the time interval between rectal cancer resection with primary anastomosis and reoperation in relation to short-term outcomes, stratified for the use of a defunctioning stoma. Methods: Data on all primary rectal cancer patients who underwent elective resection with primary anastomosis during 2013-2019 were extracted from the Dutch ColoRectal Audit. Analyses were stratified for defunctioning stoma. Anastomotic leakage was defined as a defect of the intestinal wall or an abscess at the site of the colorectal anastomosis for which a reintervention was required within 30 days. Primary outcomes were new stoma construction, mortality, ICU admission, prolonged hospital stay, and readmission. The association between time to reoperation and outcome was evaluated in three ways: per 2 days, before versus on or after postoperative day 5, and during primary admission versus readmission. Results: In total, 10,772 rectal cancer patients underwent resection with primary anastomosis. A defunctioning stoma was made in 46.6% of patients. These patients had a lower anastomotic leakage rate (8.2% vs. 11.6%, p < 0.001) and less often underwent a reoperation (45.3% vs. 88.7%, p < 0.001). Early reoperations (< 5 days) had the highest complication and mortality rates; thereafter, adverse outcomes were spread more evenly over the 30-day postoperative period for patients with a defunctioning stoma. The median time interval from primary resection to reoperation was 7 days (IQR 4-14) for defunctioning stoma patients versus 5 days (IQR 3-13) for no-defunctioning stoma patients.
Mortality rates after primary resection and after reoperation were comparable between groups (defunctioning vs. no-defunctioning stoma: 1.0% vs. 0.7%, P=0.106, and 5.0% vs. 2.3%, P=0.107, respectively). Conclusion: This study demonstrated that early reinterventions after anastomotic leakage are associated with worse outcomes (i.e., mortality). One explanation may be that the combination of a physiological dip in the cellular immune response and the release of cytokines following surgery, together with the release of endotoxins from bacteremia originating from the leak, leads to a more profound sepsis. Another is that early leaks are not contained within the pelvis, again producing a more profound sepsis that requires early reoperation. Leakage with or without a defunctioning stoma resulted in different types of reintervention and different time intervals between surgery and reoperation.

Keywords: rectal cancer surgery, defunctioning stoma, anastomotic leakage, time-interval to reoperation

Procedia PDF Downloads 122
597 Influence of Atmospheric Circulation Patterns on Dust Pollution Transport during the Harmattan Period over West Africa

Authors: Ayodeji Oluleye

Abstract:

This study used thirty years (1983-2012) of Total Ozone Mapping Spectrometer (TOMS) Aerosol Index (AI) and reanalysis data to investigate the influence of atmospheric circulation on dust transport during the Harmattan period over West Africa. Harmattan dust mobilization and the atmospheric circulation pattern were evaluated using a kernel density estimate, which shows where most points are concentrated between the variables. The evolution of the Inter-Tropical Discontinuity (ITD), sea surface temperature (SST) over the Gulf of Guinea, and the North Atlantic Oscillation (NAO) index during the Harmattan period (November-March) was also analyzed, and the average ITD positions, SST, and NAO were examined on a daily basis. Pearson correlation analysis was employed to assess the effect of atmospheric circulation on Harmattan dust transport. The results show that the departure (increase) of TOMS AI values from the long-term mean (1.64) occurred from around 21 December, marking the dust-rich days of the winter period. Strong TOMS AI signals were observed from January to March, with the maximum in the later months (February and March). The inter-annual variability of TOMS AI revealed dust-rich years in 1984-1985, 1987-1988, 1997-1998, 1999-2000, and 2002-2004; a markedly dust-poor year occurred in 2005-2006 across all periods. The study found strong north-easterly (NE) trade winds over most of the Sahelian region of West Africa during the winter months, with the maximum wind speed reaching 8.61 m/s in January. The strength of the NE winds determines the extent of dust transport to the coast of the Gulf of Guinea during winter. This study confirms that the presence of the Harmattan depends strongly on the SST over the Atlantic Ocean and on the ITD position. The loci of the average SST and ITD positions over West Africa can be described by polynomial functions.
The study concludes that the evolution of the near-surface wind field at 925 hPa and the variations in SST and ITD position are the major large-scale atmospheric circulation systems driving the emission, distribution, and transport of Harmattan dust aerosols over West Africa. The influence of the NAO on Harmattan dust transport over the region, however, was found to be less significant.
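The Pearson correlation used above to relate circulation variables to dust transport can be sketched in a few lines; the wind and AI values below are made up for illustration, not the study's data:

```python
import math

def pearson_r(x, y):
    # Pearson product-moment correlation coefficient
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

wind_ms = [5.1, 6.0, 7.2, 8.0, 8.6, 6.5]  # NE wind speed (m/s), illustrative
toms_ai = [1.2, 1.5, 1.9, 2.3, 2.6, 1.6]  # TOMS Aerosol Index, illustrative
print(round(pearson_r(wind_ms, toms_ai), 2))  # close to +1: stronger winds, more dust
```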

Keywords: atmospheric circulation, dust aerosols, Harmattan, West Africa

Procedia PDF Downloads 303
596 Lessons Learned from Push-Plus Implementation in Northern Nigeria

Authors: Aisha Giwa, Mohammed-Faosy Adeniran, Olufunke Femi-Ojo

Abstract:

Four decades ago, the World Health Organization (WHO) launched the Expanded Programme on Immunization (EPI). The EPI blueprint laid out the technical and managerial functions necessary to routinely vaccinate children with a limited number of vaccines, protecting against diphtheria, tetanus, whooping cough, measles, polio, and tuberculosis, and to prevent maternal and neonatal tetanus by vaccinating women of childbearing age with tetanus toxoid. Despite global efforts, routine immunization (RI) coverage in two WHO regions, the African Region and the South-East Asia Region, still falls short of its targets. The WHO Regional Director for Africa therefore declared 2012 the year for intensifying RI in these regions, which coincided with the WHO Executive Board's declaration of polio as a programmatic emergency. To intensify routine immunization, the National Routine Immunization Strategic Plan (2013-2015) stated as its core priority ensuring 100% adequacy and availability of vaccines for safe immunization. To achieve 100% availability, the "PUSH System" and then "Push-Plus" were adopted for vaccine distribution, replacing the inefficient "PULL" method. The NPHCDA plays the key coordinating role in advocacy, capacity building, engagement of third-party logistics (3PL) providers for the states, and monitoring and evaluation of the vaccine delivery process. eHealth Africa (eHA) is a 3PL service provider engaged by State Primary Health Care Boards (SPHCDB) to ensure vaccine availability through the Vaccine Direct Delivery (VDD) project, which is essential to successful routine immunization services. The VDD project ensures the availability and adequate supply of high-quality vaccines and immunization-related materials to last-mile facilities.
eHA's commitment to the VDD project motivated an assessment of overall project performance, an evaluation of the process to suggest improvements, and a review of general impact across Kano State (where eHA has transitioned operations to the state), Bauchi State (where eHA currently manages delivery to all LGAs except three managed by the state), Sokoto State (where eHA currently covers all LGAs), and Zamfara State (currently in-sourced and managed solely by the state).

Keywords: cold chain logistics, health supply chain system strengthening, logistics management information system, vaccine delivery traceability and accountability

Procedia PDF Downloads 287
595 Implementation of Nutritional Awareness Programme on Eating Habits of Primary School Children

Authors: Gulcin Satir, Ahmet Yildirim

Abstract:

Globally, including in Turkey, health problems associated with malnutrition and nutrient deficiencies in childhood will remain major public health problems in the future. Nutrition is a major environmental influence on physical and mental growth and development in early life, and many studies support the contribution of nutritional knowledge to children's wellbeing and school performance. The purpose of this study was to examine the nutritional knowledge and eating habits of primary school children and to investigate differences in these variables by socioeconomic status. A quasi-experimental one-group pretest/posttest study was conducted in five primary schools, totaling 200 grade-4 children aged 9-10 years, to determine the effect of a nutritional awareness programme on the eating habits of primary school children. The schools were chosen according to the parents' social and demographic characteristics. The nutritional awareness education programme focused on a healthy lifestyle, covering beneficial foods, eating habits, personal hygiene, and physical activity, and consisted of eight lessons. The teaching approaches included interactive teaching, role-playing, demonstration, small-group discussions, questioning, and feedback. Lessons were given twice a week for four weeks. Each lesson lasted 45-60 minutes, with the first 5 minutes used for pre-assessment and the last 5 minutes for post-assessment. The data were checked for normality with the Kolmogorov-Smirnov test, and a paired t-test was used to evaluate the effectiveness of the education programme and to compare the above-mentioned variables in each school separately before and after the lessons.
The paired t-tests, conducted separately for each school, showed that after eight lessons there was on average a 25-32% increase in students' nutritional knowledge regardless of the school they attended, and this increase was significant (P < 0.01). The increase in nutritional awareness was thus similar across the five schools despite their different socioeconomic status. The study suggests that involving children directly in lessons helps build the nutritional awareness that leads to healthy eating habits, and concludes that nutritional awareness is a valuable tool for changing eating habits. The findings will inform the development of nutrition education programmes for healthy living and obesity prevention in children.

Keywords: children, nutritional awareness, obesity, socioeconomic status

Procedia PDF Downloads 131
594 Spectrogram Pre-Processing to Improve Isotopic Identification to Discriminate Gamma and Neutrons Sources

Authors: Mustafa Alhamdi

Abstract:

The industrial application of classifying gamma-ray and neutron events is investigated in this study using deep machine learning. Identification with convolutional and recurrent neural networks has shown significant improvements in prediction accuracy across a variety of applications. The ability to identify isotope type and activity from spectral information depends on the feature extraction method, followed by classification. The features extracted from the spectrum profiles seek patterns and relationships that represent the actual spectral energy in a low-dimensional space; increasing the separation between classes in feature space improves the achievable classification accuracy. Feature extraction by a neural network is nonlinear, involving a variety of transformations and mathematical optimization, whereas principal component analysis relies on linear transformations to extract features and thereby improve classification accuracy. In this paper, the isotope spectrum information was preprocessed by computing its frequency components over time and using them as the training dataset. The Fourier transform used to extract the frequency components was optimized with a suitable windowing function. Training and validation samples of different isotope profiles interacting with a CdTe crystal were simulated using Geant4, and the readout electronic noise was simulated by optimizing the mean and variance of a normal distribution. Ensemble learning, combining the votes of many models, improved the classification accuracy of the neural networks. A single-prediction approach to discriminating gamma and neutron events with deep learning achieved high accuracy. The findings show that classification accuracy improves when the spectrogram preprocessing stage is applied to the gamma and neutron spectra of different isotopes.
Tuning the deep learning models by hyperparameter optimization enhanced the separation in the latent space and made it possible to extend the number of isotopes detected in the training database. Ensemble learning contributed significantly to the final prediction.
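The spectrogram preprocessing described above (a windowed Fourier transform of the time profile) can be sketched as follows; the synthetic pulse, window length, and hop size are illustrative, not the paper's detector settings:

```python
import numpy as np

def spectrogram(signal, win=64, hop=32):
    # Magnitude spectrogram: Hann-windowed frames -> FFT magnitudes
    w = np.hanning(win)
    frames = [signal[i:i + win] * w
              for i in range(0, len(signal) - win + 1, hop)]
    return np.abs(np.fft.rfft(frames, axis=1))

# Synthetic decaying pulse standing in for a simulated detector trace
t = np.linspace(0.0, 1.0, 512)
pulse = np.exp(-5.0 * t) * np.sin(2 * np.pi * 40.0 * t)
S = spectrogram(pulse)
print(S.shape)  # (time frames, frequency bins) -> input features for the classifier
```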

Keywords: machine learning, nuclear physics, Monte Carlo simulation, noise estimation, feature extraction, classification

Procedia PDF Downloads 140
593 The Sr-Nd Isotope Data of the Platreef Rocks from the Northern Limb of the Bushveld Igneous Complex: Evidence of Contrasting Magma Composition and Origin

Authors: Tshipeng Mwenze, Charles Okujeni, Abdi Siad, Russel Bailie, Dirk Frei, Marcelene Voigt, Petrus Le Roux

Abstract:

The Platreef is a platinum group element (PGE) deposit in the northern limb of the Bushveld Igneous Complex (BIC), emplaced as a series of mafic and ultramafic sills between the Main Zone (MZ) and the country rocks. The PGE mineralisation in the Platreef is hosted in different rock types, and its distribution and style vary with depth and along strike. This study contributes to the understanding of the processes involved in the genesis of the Platreef. Twenty-four Platreef samples (2 harzburgites, 4 olivine pyroxenites, 17 feldspathic pyroxenites, and 1 gabbronorite) and a few MZ samples (1 gabbronorite and 1 leucogabbronorite) were collected as quarter cores from four drill cores (TN754, TN200, SS339, and OY482) and analysed for whole-rock Sr-Nd isotope data. The results show positive ɛNd values (+3.53 to +7.51) for the harzburgites, suggesting that their parental magmas derived from the depleted mantle. The remaining Platreef rocks have negative ɛNd values (-2.91 to -22.88) and show significant variation in Sr-Nd isotopic composition. The first group of Platreef samples has relatively high isotopic compositions (ɛNd = -2.91 to -5.68; ⁸⁷Sr/⁸⁶Srᵢ = 0.709177-0.711998). The second group has Sr ratios (⁸⁷Sr/⁸⁶Srᵢ = 0.709816-0.712106) overlapping the first group but slightly lower ɛNd values (-7.44 to -8.39). The third group has lower ɛNd values (-10.82 to -14.32) and lower Sr ratios (⁸⁷Sr/⁸⁶Srᵢ = 0.707545-0.710042) than the two groups above. One Platreef sample has an ɛNd value (-5.26) in the range of the first group, but its Sr ratio (0.707281) is the lowest of all, even compared with the third group. Five other Platreef samples have anomalous ɛNd or Sr ratios that make it difficult to place their isotopic compositions relative to the other samples.
These isotopic variations indicate both multiple sources and multiple magma chambers, in which different styles of crustal contamination operated during the evolution of these magmas prior to their emplacement as sills in the Platreef setting. Furthermore, the MZ rocks have different Sr-Nd isotopic compositions (OY482 gabbronorite: ɛNd = +0.65, ⁸⁷Sr/⁸⁶Srᵢ = 0.711746; TN754 leucogabbronorite: ɛNd = -7.44, ⁸⁷Sr/⁸⁶Srᵢ = 0.709322), indicating not only different MZ magma chambers but also magmas different from those of the Platreef. Although the Platreef is still considered a single stratigraphic unit in the northern limb of the BIC, its genesis involved multiple magmatic processes that evolved independently of each other.
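For reference, the ɛNd notation used above expresses the deviation of a sample's ¹⁴³Nd/¹⁴⁴Nd ratio from the chondritic uniform reservoir (CHUR) in parts per 10,000. A present-day sketch, ignoring the age correction the study would apply to obtain initial values (the sample ratios below are illustrative, not the Platreef data):

```python
CHUR_143ND_144ND = 0.512638  # widely used present-day CHUR reference ratio

def epsilon_nd(sample_143nd_144nd, chur=CHUR_143ND_144ND):
    # Parts-per-10,000 deviation of the sample ratio from CHUR
    return (sample_143nd_144nd / chur - 1.0) * 1e4

print(round(epsilon_nd(0.512830), 1))  # positive -> depleted-mantle-like source
print(round(epsilon_nd(0.511900), 1))  # negative -> crustal contribution
```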

Keywords: crustal contamination styles, magma chambers, magma sources, multiple sills emplacement

Procedia PDF Downloads 158
592 A Critical Analysis of the Creation of Geoparks in Brazil: Challenges and Possibilities

Authors: Isabella Maria Beil

Abstract:

The International Geoscience and Geoparks Programme (IGGP) was officially created in 2015 by the United Nations Educational, Scientific and Cultural Organization (UNESCO) to enhance the protection of geological heritage and fill gaps in the World Heritage Convention. According to UNESCO, a Global Geopark is a unified area where sites and landscapes of international geological significance are managed under a concept of sustainable development, with tourism as a main activity for developing new sources of revenue. Currently (November 2022), UNESCO recognizes 177 Global Geoparks, of which more than 50% are in Europe, 40% in Asia, 6% in Latin America, and the remaining 4% in Africa and Anglo-Saxon America. This picture reveals a very uneven geographical distribution of these areas across the planet. There are currently three Geoparks in Brazil; the first was accepted into the Global Geoparks Network in 2006, and only fifteen years later did two other Brazilian Geoparks obtain the UNESCO title. This paper therefore aims to provide an overview of the current geopark situation in Brazil and to identify the main challenges faced in implementing these areas in the country. To this end, Brazil's history and main characteristics regarding the development of geoparks over the years are briefly presented. Then, the results of interviews with those responsible for each of the current 29 aspiring geoparks in Brazil are presented. Finally, the main challenges related to implementing Geoparks in the country are listed. The interview answers revealed conflicts and problems that hinder both the start of a Geopark project and its continuity and implementation.
It is clear that engaging multiple social actors, or stakeholders, with the Geopark, one of UNESCO's guidelines, is among its most complex aspects. Chief among the challenges is the difficulty of establishing solid partnerships, which directly reflects divergences between the different social actors and their goals. This difficulty arises for several reasons. One is that the investment in a Geopark project can be high, and investors often expect a short-term financial return. In addition, political support from the public sector is often costly as well, since the possible results and positive influences of a Geopark in a given area will only materialize during future mandates. These results demonstrate that research on Geoparks goes far beyond the geological perspective linked to its origins and is deeply embedded in political and economic issues.

Keywords: Brazil, geoparks, tourism, UNESCO

Procedia PDF Downloads 81
591 Integration, a Tool to Develop Critical Thinking Skills of Undergraduate Veterinary Students

Authors: M. L. W. P. De Silva, R. A. C. Rabel, N. Smith, L. McIntyre, T. J. Parkinson, K. A. N. Wijayawardhane

Abstract:

Curricular integration is an important concept in medical education for developing students' ability to make connections between different medical disciplines. Problem-Based Learning (PBL) is one of the vehicles through which such integration can be achieved. During the recent review of the veterinary curriculum at the University of Peradeniya, a series of courses in Integrative Veterinary Science (IVS) was introduced, with PBL as the primary teaching methodology. The objective of this study was to evaluate students' opinions of PBL as a teaching method; it should be noted that, within the context of secondary and tertiary education in Sri Lanka, this was an entirely novel learning experience for the students. Opinions were sought at the conclusion of IVS sessions in which students of semesters 2, 4, 6, and 7 (of an 8-semester program) were exposed to two 2-hour PBL-based case scenarios. The case scenarios in semesters 2, 4, and 7 used material previously developed by an experienced PBL practitioner, whilst the material for semester 6 was prepared de novo by a less experienced practitioner. Each student (semester 2: n=38; 4: n=37; 6: n=55; 7: n=40) completed a questionnaire asking whether: (i) the course had improved their critical thinking skills; (ii) the learning environment was sufficiently comfortable to express and share opinions; (iii) there was sufficient facilitator guidance; (iv) the online study environment enhanced learning; and (v) they were satisfied overall with the PBL approach and the IVS concept. Responses were given on a 5-point Likert scale (strongly agree (SA), agree (A), neutral (N), disagree (D), strongly disagree (SD)); SA and A responses were summed to give an overall 'satisfactory' response, and results were subjected to frequency-distribution analysis. A total of 88.5% of students gave SA+A scores for overall satisfaction.
The proportion of SA+A scores differed between semesters: 95% of semester 2, 4, and 7 students gave SA+A scores, whereas only 69% of semester 6 students did so for their sessions. Overall, 96% of students gave SA+A scores to the question on improvement of critical thinking skills; semester 6 students' scores were marginally, but not significantly, lower (91% SA+A) than those of the other semesters. The difference between semester 6 and the other semesters may be attributable to the different PBL material used and/or the different experience levels of the practitioners who developed it. The use of PBL for teaching the IVS curriculum-integration courses was well received by the students in terms of overall satisfaction and perceived improvement in critical thinking skills; importantly, this was achieved with a methodology that was entirely novel to the students. Finally, delivery of the PBL medium was readily mastered by the practitioner to whom it was also a novel methodology.

Keywords: critical thinking skills, integration, problem based learning, veterinary education

Procedia PDF Downloads 128
590 Housing Recovery in Heavily Damaged Communities in New Jersey after Hurricane Sandy

Authors: Chenyi Ma

Abstract:

Background: The second costliest hurricane in U.S. history, Sandy made landfall in southern New Jersey on October 29, 2012, and struck the entire state with high winds and torrential rains. The disaster killed more than 100 people, left more than 8.5 million households without power, and damaged or destroyed more than 200,000 homes across the state. Immediately after the disaster, public policy support was provided in nine coastal counties that contained 98% of the housing units with major or severe damage in NJ overall. These programs included the Individuals and Households Assistance Program, the Small Business Loan Program, the National Flood Insurance Program, and the Federal Emergency Management Agency (FEMA) Public Assistance Grant Program. In the most severely affected counties, additional funding was provided through the Community Development Block Grant Reconstruction, Rehabilitation, Elevation, and Mitigation Program and the Homeowner Resettlement Program. How these policies, individually and as a whole, affected housing recovery across communities with different socioeconomic and demographic profiles has not yet been studied, particularly in relation to damage levels. The concept of community social vulnerability has been widely used to explain many aspects of natural disasters; nevertheless, how communities are vulnerable has been less fully examined. Community resilience has been conceptualized as a protective factor against the negative impacts of disasters; however, how community resilience buffers the effects of vulnerability is not yet known. Because housing recovery is a dynamic social and economic process that varies according to context, this study examined the path from community vulnerability and resilience to housing recovery, looking at both community characteristics and policy interventions.
Sample/Methods: This retrospective longitudinal case study compared a literature-identified set of pre-disaster community characteristics, the effects of multiple public policy programs, and a set of time-variant community resilience indicators against changes in housing stock (operationally defined as building permits as a percent of total occupied housing units/households) between 2010 and 2014, two years before and two years after Hurricane Sandy. The sample consisted of 51 municipalities in the nine counties, in which between 4% and 58% of housing units suffered either major or severe damage. Structural equation modeling (SEM) was used to determine the path from vulnerability to housing recovery, via the multiple public programs, separately and as a whole, and via the community resilience indicators. The spatial analytical tool ArcGIS 10.2 was used to show the spatial relations between housing recovery patterns and community vulnerability and resilience. Findings: Holding damage levels constant, communities with higher proportions of Hispanic households had significantly lower levels of housing recovery, while communities with higher proportions of households containing an adult over age 65 had significantly higher levels of housing recovery. The contrast was partly due to the different levels of total public support the two types of community received. Further, while the public policy programs individually mediated the negative associations between African American and female-headed households and housing recovery, communities with larger proportions of African American, female-headed, and Hispanic households were “vulnerable” to lower levels of housing recovery because they lacked sufficient public program support. Even so, higher employment rates and incomes buffered vulnerability to lower housing recovery. Because housing is the "wobbly pillar" of the welfare state, the housing needs of these particular groups should be more fully addressed by disaster policy.
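The operational measure of recovery used above (building permits as a percent of occupied housing units) is simple arithmetic; a sketch with made-up figures, not the study's data:

```python
def recovery_indicator(building_permits, occupied_units):
    """Housing recovery as defined operationally in the study:
    building permits as a percent of total occupied housing units."""
    return 100.0 * building_permits / occupied_units

# Hypothetical municipality (illustrative figures only, not study data)
pre_sandy = recovery_indicator(building_permits=120, occupied_units=8000)   # 2010
post_sandy = recovery_indicator(building_permits=480, occupied_units=8000)  # 2014
change = post_sandy - pre_sandy
print(f"2010: {pre_sandy:.1f}%  2014: {post_sandy:.1f}%  change: {change:+.1f} pts")
```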

Keywords: community social vulnerability, community resilience, hurricane, public policy

Procedia PDF Downloads 366
589 A Cross-Sectional Study on Evaluation of Studies Conducted on Women in Turkey

Authors: Oya Isik, Filiz Yurtal, Kubilay Vursavus, Muge K. Davran, Metehan Celik, Munire Akgul, Olcay Karacan

Abstract:

This study aimed to discuss the causes of women's problems by bringing together different disciplines engaged in women's studies. Further objectives were to share information and experiences about women across disciplines in order to solve these problems, and to reach the relevant task areas and decision-making mechanisms in practice. For this purpose, the proceedings presented at the Second Congress of Women's Studies, held in Adana, Turkey, on 28-30 November 2018, were evaluated. The document analysis model, one of the qualitative research methods, was used in the evaluation of the congress proceedings. A total of 86 papers were presented at the congress, and the topic distributions of the papers were determined. At the evaluation stage, the papers were classified according to their subjects, and descriptive analyses were performed on them. According to the analysis results, 64% of the 86 papers presented at the congress were review-based and 36% were research-based studies. When the distribution of these papers was examined by subject, the biggest share, at 34.9% (13 review-based and 17 research-based papers), examined women's issues through sociology, psychology, and philosophy. This was followed by economy, employment, organization, and non-governmental organizations with 20.9% (9 review-based and 9 research-based papers), arts and literature with 17.4% (15 review-based papers), and law with 12.8% (11 review-based papers). The smallest shares of the congress were politics with one review-based paper (1.2%), health with two research-based papers (2.3%), history with two review-based papers (2.3%), religion with two review-based and one research-based paper (3.5%), and media-communication with two review-based and two research-based papers (4.7%). In the papers categorized under main headings, women were examined in terms of gender and gender roles.
According to the results, it was determined that discrimination against women continued, changes in laws were not sufficiently put into practice, women's levels of education and economic independence were insufficient, and violence against women continued to increase. To eliminate these problems and raise society's awareness, it was concluded that scientific studies should be supported. Furthermore, support policies should be realized jointly for women and men to make women visible in public life, and harassment and assault against women should not be tolerated or mitigated for any reason or by any group. It was also determined that women in Turkey should attain a better position in the social, cultural, psychological, economic, and educational spheres, and that future studies should be carried out to improve women's rights and to create a positive perspective.

Keywords: gender, gender roles, sociology, psychology and philosophy, women studies

Procedia PDF Downloads 136
588 Made on Land, Ends Up in the Water: "iCLARE" Intelligent Remediation System for Removal of Harmful Contaminants in Water Using Modified Reticulated Vitreous Carbon Foam

Authors: Sabina Żołędowska, Tadeusz Ossowski, Robert Bogdanowicz, Jacek Ryl, Paweł Rostkowski, Michał Kruczkowski, Michał Sobaszek, Zofia Cebula, Grzegorz Skowierzak, Paweł Jakóbczyk, Lilit Hovhannisyan, Paweł Ślepski, Iwona Kaczmarczyk, Mattia Pierpaoli, Bartłomiej Dec, Dawid Nidzworski

Abstract:

The circular economy of water presents a pressing environmental challenge for our society. Water contains various harmful substances, such as drugs, antibiotics, hormones, and dioxins, which can pose silent threats. Water pollution has severe consequences for aquatic ecosystems: it disrupts their balance by harming aquatic plants, animals, and microorganisms. Water pollution also poses significant risks to human health, as exposure to toxic chemicals through contaminated water can have long-term health effects, such as cancer, developmental disorders, and hormonal imbalances. However, effective remediation systems can be implemented to remove these contaminants using electrocatalytic processes, which offer an environmentally friendly alternative to other treatment methods; one of them is the innovative iCLARE system. The project's primary focus revolves around a few main topics: reactor design and construction, selection of a specific type of reticulated vitreous carbon (RVC) foam, analytical studies of harmful contaminant parameters, and AI implementation. This high-performance electrochemical reactor will be built on a novel type of electrode material. The proposed approach utilizes reticulated vitreous carbon (RVC) foams with deposited modified metal oxides (MMO) and diamond thin films. This setup is characterized by a highly developed surface area and satisfactory mechanical and electrochemical properties, designed for high electrocatalytic process efficiency. The consortium validated the electrode modification methods that form the base of the iCLARE product and established the procedures for the detection of chemicals: deposition of the metal oxides WO3 and V2O5, and deposition of boron-doped diamond/nanowall structures by a CVD process.
The chosen electrodes (porous Ferroterm electrodes) were stress-tested for various conditions that might occur inside the iCLARE machine: corrosion, long-term changes in the electrode surface structure during electrochemical processes, and energy efficacy, using cyclic polarization and electrochemical impedance spectroscopy (before and after electrolysis) as well as dynamic electrochemical impedance spectroscopy (DEIS). This tool allows real-time monitoring of changes at the electrode/electrolyte interphase. On the other hand, the toxicity of iCLARE chemicals and electrolysis products is evaluated before and after treatment using MARA examination (IBMM) and HPLC-MS-MS (NILU), providing information about the harmfulness of the electrode material and the efficiency of the iCLARE system in the disposal of pollutants. Implementation of the data into a system that uses artificial intelligence, and assessment of the possibility of practical application, is in progress (SensDx).

Keywords: wastewater treatment, RVC, electrocatalysis, paracetamol

Procedia PDF Downloads 74
587 Application of Nuclear Magnetic Resonance (1H-NMR) in the Analysis of Catalytic Aquathermolysis: Colombian Heavy Oil Case

Authors: Paola Leon, Hugo Garcia, Adan Leon, Samuel Munoz

Abstract:

Enhanced oil recovery by steam injection was considered a process that generated only physical recovery mechanisms. However, there is evidence of the occurrence of a series of chemical reactions, called aquathermolysis, which generate hydrogen sulfide, carbon dioxide, methane, and lower-molecular-weight hydrocarbons. These reactions can be favored by the addition of a catalyst during steam injection; in this way, it is possible to achieve in situ upgrading of the original oil through increased production of lower-molecular-weight molecules. This additional effect could increase the oil recovery factor and reduce costs in the transport and refining stages. Therefore, this research has focused on the experimental evaluation of catalytic aquathermolysis on a Colombian heavy oil of 12.8°API. The effects of three different catalysts, reaction time, and temperature were evaluated in a batch microreactor. The changes in the Colombian heavy oil were quantified through proton nuclear magnetic resonance (1H-NMR). Interpretation of the relaxation times and absorption intensities allowed identification of the distribution of functional groups in the base oil and the upgraded oils. Additionally, the average number of aliphatic carbons in alkyl chains, the number of substituted rings, and the aromaticity factor were established as average structural parameters in order to simplify the compositional analysis of the samples. The first experimental stage proved that each catalyst develops a different reaction mechanism. The aromaticity factor increases in the order of the salts used: Mo > Fe > Ni. However, the upgraded oil obtained with iron naphthenate tends to form a higher content of mono-aromatic and a lower content of poly-aromatic compounds. On the other hand, the results obtained from the second phase of experiments suggest that the upgraded oils show a smaller difference in the length of alkyl chains in the range of 240° to 270°C.
This parameter has lower values at 300°C, which indicates that alkylation or cleavage reactions of the alkyl chains govern at higher reaction temperatures. The presence of condensation reactions is supported by the behavior of the aromaticity factor and the production of bridge carbons between aromatic rings (RCH₂). Finally, a greater dispersion is observed in the aliphatic hydrogens, indicating that the alkyl chains have greater reactivity than the aromatic structures.

Keywords: catalyst, upgrading, aquathermolysis, steam

Procedia PDF Downloads 102
586 Multibody Constrained Dynamics of Y-Method Installation System for a Large Scale Subsea Equipment

Authors: Naeem Ullah, Menglan Duan, Mac Darlington Uche Onuoha

Abstract:

The lowering of subsea equipment into deep waters is a challenging job due to the harsh offshore environment. Many researchers have introduced various installation systems to deploy the payload safely into the deep oceans. In general practice, dual floating vessels are not employed owing to the safety risks and hazards caused by the dynamical effects arising from mutual interaction between the bodies. However, on optimal grounds such as economy, the Y-method, in which two conventional tugboats support the equipment by two independent strands connected to a tri-plate above it, has been employed to study the multibody dynamics of dual barge lifting operations. In this study, the two tugboats and the suspended payload (the Y-method) are deployed for the lowering of subsea equipment into deep waters as a multibody dynamic system. Two wire ropes are used for the lifting and installation operation in this Y-method installation system. Six degrees of freedom (dof) for each body are considered to establish a coupled 18-dof multibody model by the embedding (velocity transformation) technique. The fundamental advantage of this technique is that the constraint forces are eliminated directly, and no extra computational effort is required for their elimination. The inertial frame of reference is taken at the water surface as the time-independent frame of reference, and floating frames of reference are introduced in each body as time-dependent frames in order to formulate the velocity transformation matrix. The local transformation of the generalized coordinates to the inertial frame of reference is executed by applying the Euler angle approach. Spherical joints are articulated amongst the bodies as the kinematic joints.
The hydrodynamic force, the two strand forces, the hydrostatic force, and the mooring forces are taken into consideration as the external forces. The radiation part of the hydrodynamic force is obtained by employing the Cummins equation. The wave-exciting part of the hydrodynamic force is obtained by using force response amplitude operators (RAOs) computed with the solver OpenFOAM. The strand force is obtained by treating the wire rope as an elastic spring. The nonlinear hydrostatic force is obtained by the pressure integration technique at each time step of the wave movement. The mooring forces are evaluated using Faltinsen's analytical approach. The fourth-order Runge-Kutta method is employed to evaluate the coupled equations of motion obtained for the 18-dof multibody model. The results are correlated with a simulated OrcaFlex model. Moreover, the results from the OrcaFlex model are compared with the MOSES model from previous studies. The multibody dynamic simulation (MBDS) of the single barge lifting operation from former studies is compared with that of the established dual barge lifting operation. The dynamics of the dual barge lifting operation are found to be larger in magnitude than those of the single barge lifting operation. It is noticed that the traction at the top connection point of the cable decreases with increasing cable length, and it becomes almost constant after passing through the splash zone.
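The fourth-order Runge-Kutta integration used above can be sketched generically; here it integrates a single-dof damped oscillator as a stand-in for the coupled 18-dof equations of motion (the coefficients are illustrative assumptions, not the paper's model):

```python
def rk4_step(f, t, y, h):
    """One fourth-order Runge-Kutta step for the system y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
    k3 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
    k4 = f(t + h, [yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

# Stand-in dynamics: m*x'' + c*x' + k*x = 0, written as first-order state [x, v]
m, c, k = 1.0, 0.2, 4.0

def eom(t, state):
    x, v = state
    return [v, -(c * v + k * x) / m]

state, t, h = [1.0, 0.0], 0.0, 0.01
for _ in range(1000):  # integrate to t = 10 s
    state = rk4_step(eom, t, state, h)
    t += h
print(f"x(10) = {state[0]:.4f}")  # amplitude decayed from the initial 1.0
```

In the paper's setting, `state` would be the 18 generalized coordinates and their velocities, and `eom` would assemble the hydrodynamic, strand, hydrostatic, and mooring forces at each step.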

Keywords: dual barge lifting operation, Y-method, multibody dynamics, shipbuilding, installation of subsea equipment

Procedia PDF Downloads 197
585 Semi-Empirical Modeling of Heat Inactivation of Enterococci and Clostridia During the Hygienisation in Anaerobic Digestion Process

Authors: Jihane Saad, Thomas Lendormi, Caroline Le Marechal, Anne-marie Pourcher, Céline Druilhe, Jean-louis Lanoiselle

Abstract:

Agricultural anaerobic digestion consists of the conversion of animal slurry and manure into biogas and digestate. These inputs need, however, to be treated at 70 ºC for 60 min before anaerobic digestion according to the European regulations (EC n°1069/2009 & EU n°142/2011). The impact of such heat treatment on the fate of bacteria has been poorly studied up to now. Moreover, a recent study¹ has shown that enterococci and clostridia are still detected despite the application of such thermal treatment, questioning the relevance of this approach for the hygienisation of digestate. The aim of this study is to establish the heat inactivation kinetics of two species of enterococci (Enterococcus faecalis and Enterococcus faecium) and two species of clostridia (Clostridioides difficile and Clostridium novyi, as a non-toxic model for Clostridium botulinum of group III). A pure culture of each strain was prepared in a specific sterile medium at concentrations of 10⁴ – 10⁷ MPN/mL (most probable number), depending on the bacterial species. Bacterial suspensions were then filled into sterilized capillary tubes and placed in a water or oil bath at the desired temperature for a specific period of time. Each bacterial suspension was enumerated using an MPN approach, and tests were repeated three times for each temperature/time couple. The inactivation kinetics of the four indicator bacteria are described using the Weibull model and the classical Bigelow model of first-order kinetics. The Weibull model takes biological variation with respect to thermal inactivation into account and is basically a statistical model of the distribution of inactivation times; the classical first-order approach is a special case of the Weibull model. The heat treatment at 70 ºC / 60 min contributes a reduction greater than 5 log10 for E. faecium and E. faecalis. However, it results in a reduction of only about 0.7 log10 for C. difficile and an increase of 0.5 log10 for C. novyi.
Application of treatments at higher temperatures is required to reach a reduction greater than or equal to 3 log10 for C. novyi (such as 30 min / 100 ºC, 13 min / 105 ºC, 3 min / 110 ºC, and 1 min / 115 ºC), raising the question of the relevance of applying the 70 ºC / 60 min heat treatment to these spore-forming bacteria. To conclude, the heat treatment (70 ºC / 60 min) defined by the European regulation is sufficient to inactivate non-sporulating bacteria, whereas higher temperatures (> 100 ºC) are required for spore-forming bacteria to reach a 3 log10 reduction (sporicidal activity).
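The two models compared above can be written compactly: the Weibull model gives log10(N(t)/N0) = −(t/δ)^p, and the Bigelow first-order model is the special case p = 1, where δ equals the decimal reduction time D. A minimal sketch with illustrative parameters (not the fitted values from this study):

```python
def weibull_log_reduction(t, delta, p):
    """log10(N(t)/N0) under the Weibull model; p = 1 recovers the classical
    Bigelow first-order model with D-value = delta (same time units as t)."""
    return -((t / delta) ** p)

# Illustrative parameters only (not this study's fitted values)
print(weibull_log_reduction(t=60, delta=10, p=1.0))  # first-order: -6.0 log10
print(weibull_log_reduction(t=60, delta=10, p=0.5))  # concave survival curve
```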

Keywords: heat treatment, enterococci, clostridia, inactivation kinetics

Procedia PDF Downloads 99
584 Effect of Laser Ablation OTR Films and High Concentration Carbon Dioxide for Maintaining the Freshness of Strawberry ‘Maehyang’ for Export in Modified Atmosphere Condition

Authors: Hyuk Sung Yoon, In-Lee Choi, Min Jae Jeong, Jun Pill Baek, Ho-Min Kang

Abstract:

This study was conducted to improve the storability of strawberry 'Maehyang' for export by using suitable laser-ablated oxygen transmission rate (OTR) films and by evaluating the effectiveness of high carbon dioxide. Strawberries were grown by a hydroponic system in Gyeongsangnam-do province. They were packed in laser-ablated OTR films (Daeryung Co., Ltd.) of 1,300 cc, 20,000 cc, 40,000 cc, 80,000 cc, and 100,000 cc•m-2•day•atm. A CO2 injection (30%) treatment used the 20,000 cc•m-2•day•atm OTR film, and a perforated film served as the control. Temperature conditions simulated shipping and distribution from Korea to Singapore: storage at 3 ℃ (13 days), 10 ℃ (one hour), and 8 ℃ (7 days), for 20 days in total. The fresh weight loss rate remained under 1%, the maximum permissible weight loss, in all treated OTR films except the perforated control during storage. The carbon dioxide concentration within the packages over the storage period remained below the maximum tolerated CO2 concentration (15%) in the treated OTR films, and in the high-OTR films (20,000 cc to 100,000 cc) it stayed below 3%. The 1,300 cc film maintained a suitable carbon dioxide range (over 5% and under 15%) from day 5 of storage until the end of the experiment; in the CO2 injection treatment, the concentration dropped quickly to around 15% after 1 day of storage and remained near that level thereafter. The oxygen concentration was maintained between 10 and 15% in the 1,300 cc and CO2 injection treatments, but stayed at 19 to 21% in the other treatments. The ethylene concentration was much higher in the CO2 injection treatment than in the OTR treatments; among the OTR treatments, 1,300 cc showed the highest ethylene concentration and the 20,000 cc film the lowest. Firmness was best maintained in the 1,300 cc treatment, with no significant differences among the other OTR treatments. Visual quality was best in the 20,000 cc treatment, which remained marketable through 20 days of storage.
The 20,000 cc and perforated films performed better than the other treatments with respect to off-odor, whereas the 1,300 cc and CO2 injection treatments developed a strong off-odor that persisted even after 10 minutes. Based on the difference between Hunter 'L' and 'a' values from a chroma meter, color development was delayed in the 1,300 cc and CO2 injection treatments, while the other treatments showed no significant differences. The results indicate that freshness was best maintained at 20,000 cc•m-2•day•atm. Although the 1,300 cc and CO2 injection treatments established an appropriate MA condition, they showed darkening of the strawberry calyx and excessive reduction of coloring due to the high carbon dioxide concentration during storage. Thus, while the 1,300 cc and CO2 injection treatments had been considered appropriate for export to Singapore, the results showed otherwise. These results reflect the cultivar characteristics of strawberry 'Maehyang'.
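The 1% maximum-permissible-weight-loss criterion applied above is a simple percentage check; a minimal sketch with hypothetical package weights (not measured data):

```python
def fresh_weight_loss_pct(initial_g, final_g):
    """Percent fresh weight loss over the storage period."""
    return 100.0 * (initial_g - final_g) / initial_g

MAX_PERMISSIBLE = 1.0  # percent, the threshold used in the study

# Hypothetical package weights in grams (illustrative only)
loss = fresh_weight_loss_pct(initial_g=250.0, final_g=248.2)
print(f"loss = {loss:.2f}%  within limit: {loss <= MAX_PERMISSIBLE}")
```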

Keywords: carbon dioxide, firmness, shelf-life, visual quality

Procedia PDF Downloads 393
583 Platelet Volume Indices: Emerging Markers of Diabetic Thrombocytopathy

Authors: Mitakshara Sharma, S. K. Nema

Abstract:

Diabetes mellitus (DM) is a metabolic disorder of pandemic proportions, incurring significant morbidity and mortality due to associated vascular angiopathies. Platelet-related thrombogenesis plays a key role in the pathogenesis of these complications. Most patients with type II DM suffer from preventable vascular complications, and early diagnosis can help manage these successfully. These complications are attributed to platelet activation, which can be recognised by an increase in the platelet volume indices (PVI), viz. mean platelet volume (MPV) and platelet distribution width (PDW). This study was undertaken with the aim of finding a relationship between PVI and the vascular complications of diabetes mellitus, their importance as a causal factor in these complications, and their use as markers for early detection of impending vascular complications in patients with poor glycaemic status. This cross-sectional study was conducted over 2 years with a total of 930 subjects. The subjects were segregated into three groups on the basis of glycosylated haemoglobin (HbA1c): (a) diabetic subjects, (b) non-diabetic subjects, and (c) subjects with impaired fasting glucose (IFG), with 300 individuals each in the IFG and non-diabetic groups and 330 individuals in the diabetic group. The diabetic group was further divided into two groups: (a) diabetic subjects with diabetes-related vascular complications and (b) diabetic subjects without diabetes-related vascular complications. Samples for HbA1c and platelet indices were collected using ethylenediaminetetraacetic acid (EDTA) as the anticoagulant and processed on a SYSMEX XS-800i autoanalyser. The study revealed a stepwise increase in PVI from non-diabetics to IFG to diabetics. The MPV and PDW of diabetics, IFG subjects, and non-diabetics were 17.60 ± 2.04, 11.76 ± 0.73, 9.93 ± 0.64 and 19.17 ± 1.48, 15.49 ± 0.67, 10.59 ± 0.67, respectively, with a significant p value of 0.00 and a significant positive correlation (MPV-HbA1c r = 0.951; PDW-HbA1c r = 0.875).
However, a significant negative correlation was found between glycaemic levels and total platelet count (PC-HbA1c r = -0.164). The MPV and PDW of subjects with and without diabetes-related complications were (15.14 ± 1.04) fl and (17.51 ± 0.39) fl, and (18.96 ± 0.83) fl and (20.09 ± 0.98) fl, respectively, with a significant p value of 0.00. The current study demonstrates raised platelet indices and reduced platelet counts in association with rising glycaemic levels and diabetes-related vascular complications across the study groups, and shows that platelet morphology is altered with increasing glycaemic levels. These changes can be detected by measurement of the PVI, which are simple, cost-effective, and effortless indicators of impending vascular complications in patients with deranged glycaemic control. PVI should be researched and explored further as surrogate markers to develop a clinical tool for early recognition of diabetes-related vascular changes and thereby help prevent them. They can prove especially useful in developing countries with limited resources. This multi-parameter, comprehensive, adequately powered study represents a pioneering effort in India, as both platelet indices (MPV and PDW) and platelet count have been evaluated together for the first time in diabetics, non-diabetics, patients with IFG, and diabetic patients with and without diabetes-related vascular complications.
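The reported MPV-HbA1c association is a standard Pearson product-moment correlation; a self-contained sketch on synthetic value pairs (not patient data):

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient of two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Synthetic HbA1c (%) and MPV (fl) pairs, illustrative only
hba1c = [5.2, 5.8, 6.4, 7.5, 8.9, 10.1]
mpv = [9.8, 10.1, 11.5, 13.0, 16.2, 17.9]
print(f"r = {pearson_r(hba1c, mpv):.3f}")  # strongly positive, as in the study
```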

Keywords: diabetes, HbA1C, IFG, MPV, PDW, PVI

Procedia PDF Downloads 232
582 Lamellodiscus spp. (Monogenoidea: Diplectanidae) Infecting the Gill Lamellae of Porgies (Spariformes: Sparidae) in Dakar Coast

Authors: Sikhou Drame, Arfang Diamanka

Abstract:

In Senegal, the fishing sector plays an important role in socio-economic development. However, it is experiencing enormous difficulties caused by the scarcity of fish on the Senegalese coast and the overexploitation of fishery resources. Based on this observation, the authorities are betting on the development of aquaculture. It is in this context that the exploitation of fish from the highly consumed family Sparidae remains a good option; indeed, the Sparidae have good characteristics for farming at sea. However, parasites can proliferate and destroy the efforts made to culture fish in confined areas. Knowledge of these parasites, in particular the monogeneans, which are highly specific to sparid fishes, will allow a better understanding of the bio-ecology of these fish. The aim of this work was to better characterize the main parasitic monogeneans of the genus Lamellodiscus on sparid fish of the genus Pagrus harvested in Senegal. The first step was to identify, from morpho-anatomical characters, monogeneans of the genus Lamellodiscus, gill parasites collected from three host species: Pagrus caeruleostictus, Pagrus auriga, and Pagrus africanus; the second was to evaluate the spatial and temporal distribution of parasitic indices at two Dakar landing sites (Soumbédioune and Yarakh); and the third was to determine their host specificity. The fish examined were purchased directly from the landing sites in Dakar and then transported to the laboratory, where they were identified and dissected. The gills were examined under a magnifying glass, and the monogeneans were harvested, fixed in 70% ethanol, and then mounted between slide and coverslip. The identification of the parasites is based on the observation of morpho-anatomical characters and on measurements of the sclerified organs of the haptor and the male copulatory organ.
In total, out of the 90 individuals examined (Pagrus auriga (30), Pagrus africanus (30), and Pagrus caeruleostictus (30)), 6 species of monogeneans of the genus Lamellodiscus (Monogenea, Diplectanidae) were obtained: L. sarculus, L. sigillatus, L. vicinus, L. rastellus, L. africanus n. sp., and L. yarakhensis n. sp. Our results show that small-sized specimens (15-20 cm) are the most infested. The values of infestation intensity and abundance are higher in fish from Yarakh and also during the cold season. The species Pagrus caeruleostictus records the highest parasitic loads in the two localities. The majority of the parasites identified have a strict (oioxenous) specificity. It appears from this study that fish of the genus Pagrus are highly parasitized by monogeneans of the genus Lamellodiscus, with an overall prevalence of 87.78%. Each infested fish harbours an average of 30 monogeneans of the genus Lamellodiscus.
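The parasitic indices used above follow the standard definitions: prevalence = infested hosts / hosts examined, mean intensity = parasites / infested hosts, and mean abundance = parasites / hosts examined. A sketch using the totals reported in the abstract (the parasite total is inferred from the stated mean intensity, so it is an assumption):

```python
def parasitic_indices(hosts_examined, hosts_infested, total_parasites):
    """Standard epidemiological indices for parasite infestation."""
    prevalence = 100.0 * hosts_infested / hosts_examined
    mean_intensity = total_parasites / hosts_infested
    mean_abundance = total_parasites / hosts_examined
    return prevalence, mean_intensity, mean_abundance

# From the abstract: 90 fish examined, 87.78% prevalence (i.e., 79 infested),
# ~30 monogeneans per infested fish; the parasite total below is inferred.
prev, intensity, abundance = parasitic_indices(90, 79, 79 * 30)
print(f"prevalence = {prev:.2f}%  intensity = {intensity:.1f}  abundance = {abundance:.1f}")
```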

Keywords: monogeneans, Lamellodiscus, Dakar coast, genus Pagrus

Procedia PDF Downloads 65
581 An Intelligence-Led Methodology for Detecting Dark Actors in Human Trafficking Networks

Authors: Andrew D. Henshaw, James M. Austin

Abstract:

Introduction: Human trafficking is an increasingly serious transnational criminal enterprise and social security issue. Despite ongoing efforts to mitigate the phenomenon and a significant expansion of security scrutiny over past decades, it is not receding. This is true for many nations in Southeast Asia, widely recognized as the global hub for trafficked persons, including men, women, and children. Human trafficking is difficult to address because there are numerous drivers, causes, and motivators for it to persist, such as non-military and non-traditional security challenges, i.e., climate change, global-warming displacement, and natural disasters; these make displaced persons and refugees particularly vulnerable. The issue is so large that conservative estimates put its value at more than $150 billion per year (Niethammer, 2020), spanning sexual slavery and exploitation, forced labor in construction, mining, and conflict roles, and the forced marriage of girls and women. Coupled with corruption throughout military, police, and civil authorities around the world, and the active hands of powerful transnational criminal organizations, it is likely that such figures are grossly underestimated, as human trafficking is misreported, under-detected, and deliberately obfuscated to protect those profiting from it. For example, the 2022 UN report on human trafficking shows a 56% reduction in convictions in that year alone (UNODC, 2022). Our Approach: To better understand this, our research utilizes a bespoke methodology. Applying a JAM (Juxtaposition Assessment Matrix), which we previously developed to detect flows of dark money around the globe (Henshaw, A & Austin, J, 2021), we now focus on the human trafficking paradigm. Indeed, utilizing the JAM methodology has identified key indicators of human trafficking not previously explored in depth.
Being a set of structured analytical techniques that provide panoramic interpretations of the subject matter, this iteration of the JAM further incorporates behavioral and driver indicators, including the employment of Open-Source Artificial Intelligence (OS-AI) across multiple collection points. The extracted behavioral data was then applied to identify non-traditional indicators as they contribute to human trafficking. Furthermore, as the JAM OS-AI analyses data from the inverted position, i.e., the viewpoint of the traffickers, it examines the behavioral and physical traits required to succeed. This transposed examination of the requirements for success delivers potential leverage points for exploitation in the fight against human trafficking in a novel way. Findings: Our approach identified innovative datasets that have previously been overlooked or, at best, undervalued. For example, the JAM OS-AI approach identified critical 'dark agent' lynchpins within human trafficking networks; our preliminary data suggests these have gone unnoticed in part because, in extant research, 'dark agents' have been difficult to detect and much harder to connect directly to the actors and organizations in human trafficking networks. Our research demonstrates that new investigative techniques such as the OS-AI-aided JAM introduce a powerful toolset to increase understanding of human trafficking and transnational crime and to illuminate networks that, to date, have avoided global law enforcement scrutiny.

Keywords: human trafficking, open-source intelligence, transnational crime, human security, international human rights, intelligence analysis, JAM OS-AI, dark money

Procedia PDF Downloads 84
580 Robust Electrical Segmentation for Zone Coherency Delimitation Based on Multiplex Graph Community Detection

Authors: Noureddine Henka, Sami Tazi, Mohamad Assaad

Abstract:

The electrical grid is a highly intricate system designed to transfer electricity from production areas to consumption areas. The Transmission System Operator (TSO) is responsible for ensuring the efficient distribution of electricity and maintaining the grid's safety and quality. However, due to the increasing integration of intermittent renewable energy sources, there is a growing level of uncertainty, which requires a faster, more responsive approach. A potential solution is electrical segmentation, which involves creating coherence zones where electrical disturbances mainly remain within the zone. Indeed, by means of coherent electrical zones, it becomes possible to focus solely on a sub-zone, reducing the range of possibilities and aiding in managing uncertainty. It allows faster execution of operational processes and easier learning for supervised machine learning algorithms. Electrical segmentation can be applied to various applications, such as electrical control, minimizing electrical loss, and ensuring voltage stability. Since the electrical grid can be modeled as a graph, where the vertices represent electrical buses and the edges represent electrical lines, identifying coherent electrical zones can be seen as a clustering task on graphs, generally called community detection. Nevertheless, a critical criterion for the zones is their ability to remain resilient to the electrical evolution of the grid over time. This evolution is due to the constant changes in electricity generation and consumption, which are reflected in graph structure variations as well as line flow changes. One approach to creating a resilient segmentation is to design robust zones under various circumstances. This issue can be represented through a multiplex graph, where each layer represents a specific situation that may arise on the grid. Consequently, resilient segmentation can be achieved by conducting community detection on this multiplex graph. 
The multiplex graph is composed of multiple graphs, and all the layers share the same set of vertices. Our proposal involves a model that utilizes a unified representation computed by flattening all layers. This unified representation can be penalized to obtain K connected components representing the robust electrical segmentation clusters. We compare our robust segmentation to a segmentation based on a single reference situation. The robust segmentation proves its relevance by producing clusters with high intra-cluster electrical perturbation and low variance of electrical perturbation. Our experiments show when, and in which contexts, robust electrical segmentation is beneficial.
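The flatten-then-penalize pipeline described above can be sketched on a toy multiplex graph. The layer weights, the six-bus topology, and the weakest-edge-removal rule standing in for the penalization step are illustrative assumptions, not the authors' model:

```python
from collections import defaultdict

def flatten_layers(layers):
    """Sum per-layer edge weights of a multiplex graph (shared vertex set)
    into one flattened weighted edge dictionary {(u, v): weight}."""
    flat = defaultdict(float)
    for layer in layers:
        for (u, v), w in layer.items():
            flat[tuple(sorted((u, v)))] += w
    return dict(flat)

def connected_components(nodes, edges):
    """Plain depth-first search over the surviving edges."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, comps = set(), []
    for n in nodes:
        if n in seen:
            continue
        comp, stack = set(), [n]
        while stack:
            x = stack.pop()
            if x in comp:
                continue
            comp.add(x)
            stack.extend(adj[x] - comp)
        seen |= comp
        comps.append(sorted(comp))
    return comps

def segment_into_k_zones(nodes, flat, k):
    """Drop the weakest flattened edges (a simple stand-in for the
    penalization step) until k connected components remain."""
    edges = dict(flat)
    for (u, v), _ in sorted(flat.items(), key=lambda kv: kv[1]):
        if len(connected_components(nodes, edges)) >= k:
            break
        del edges[(u, v)]
    return connected_components(nodes, edges)

# Toy two-layer multiplex over buses 0..5; each layer maps edge -> weight.
layer1 = {(0, 1): 3.0, (1, 2): 3.0, (2, 3): 0.2, (3, 4): 3.0, (4, 5): 3.0}
layer2 = {(0, 1): 2.0, (1, 2): 2.0, (2, 3): 0.1, (3, 4): 2.0, (4, 5): 2.0}

nodes = range(6)
zones = segment_into_k_zones(nodes, flatten_layers([layer1, layer2]), k=2)
print(zones)
```

Here the weak coupling between buses 2 and 3 in both layers is where the toy grid splits into two coherent zones.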

Keywords: community detection, electrical segmentation, multiplex graph, power grid

Procedia PDF Downloads 65
579 Formulation and In Vivo Evaluation of Salmeterol Xinafoate Loaded MDI for Asthma Using Response Surface Methodology

Authors: Paresh Patel, Priya Patel, Vaidehi Sorathiya, Navin Sheth

Abstract:

The aim of the present work was to fabricate a Salmeterol Xinafoate (SX) metered dose inhaler (MDI) for asthma and to evaluate SX-loaded solid lipid nanoparticles (SLNs) for pulmonary delivery. Solid lipid nanoparticles can be used to deliver particles to the lungs via MDI. A modified solvent emulsification diffusion technique was used to prepare the SX-loaded SLNs, using Compritol 888 ATO as lipid, Tween 80 as surfactant, D-mannitol as cryoprotecting agent, and L-leucine to improve aerosolization behaviour. A Box-Behnken design with 17 runs was applied. 3-D response surface plots and contour plots were drawn, and the optimized formulation was selected on the basis of minimum particle size and maximum entrapment efficiency (% EE). The formulations were also characterized by % yield, in vitro diffusion study, scanning electron microscopy, X-ray diffraction, DSC, and FTIR. Particle size and zeta potential were analyzed with a Zetatrac particle size analyzer, and aerodynamic properties were determined with a cascade impactor. Pre-convulsion time was examined for the control and treatment groups and compared with the marketed product group. The MDI was evaluated by leakage test, flammability test, spray test, and content per puff. Within the experimental design, particle size and % EE were found to range between 119-337 nm and 62.04-76.77%, respectively, for the solvent emulsification diffusion technique. Morphologically, the particles had a spherical shape and uniform distribution. DSC and FTIR studies showed no interaction between drug and excipients. Zeta potential indicated good stability of the SLNs. The respirable fraction was found to be 52.78%, indicating delivery to the deep parts of the lung such as the alveoli. The animal study showed that the fabricated MDI protected the lungs against histamine-induced bronchospasm in guinea pigs. The MDI showed sphericity of particles in the spray pattern, 96.34% content per puff, and was non-flammable. SLNs prepared by the solvent emulsification diffusion technique provide a desirable size for deposition into the alveoli. 
This delivery platform opens up a wide range of treatment applications for pulmonary diseases like asthma via solid lipid nanoparticles.
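The 17-run Box-Behnken design mentioned above decomposes, for three factors, into 12 edge-midpoint runs (±1 on each pair of factors, 0 on the third) plus 5 replicated centre points. A minimal sketch in coded units follows; mapping the coded levels back to actual factor settings (e.g., lipid, surfactant, or L-leucine amounts) is an assumption, not taken from the abstract:

```python
from itertools import combinations

def box_behnken(n_factors, n_center):
    """Generate a Box-Behnken design in coded units: edge-midpoint runs
    (+/-1 on each factor pair, 0 elsewhere) plus replicated centre points."""
    runs = []
    for i, j in combinations(range(n_factors), 2):
        for a in (-1, 1):
            for b in (-1, 1):
                run = [0] * n_factors
                run[i], run[j] = a, b
                runs.append(run)
    runs += [[0] * n_factors for _ in range(n_center)]
    return runs

design = box_behnken(n_factors=3, n_center=5)
print(len(design))  # 12 edge runs + 5 centre runs = 17
```

The coded rows would then be translated to real factor levels, and the measured responses (particle size, % EE) fitted with a quadratic response-surface model to draw the 3-D surface and contour plots.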

Keywords: salmeterol xinafoate, solid lipid nanoparticles, box-behnken design, solvent emulsification diffusion technique, pulmonary delivery

Procedia PDF Downloads 443
578 Experimental Investigation on the Effect of Prestress on the Dynamic Mechanical Properties of Conglomerate Based on 3D-SHPB System

Authors: Wei Jun, Liao Hualin, Wang Huajian, Chen Jingkai, Liang Hongjun, Liu Chuanfu

Abstract:

The Kuqa Piedmont is rich in oil and gas resources and has great development potential in the Tarim Basin, China. However, a huge thick gravel layer has developed there, with high gravel content, wide distribution, and large variation in gravel size, resulting in strong heterogeneity. As a result, the drill string experiences severe vibration and the drill bit wears rapidly while drilling, which greatly reduces rock-breaking efficiency; meanwhile, the rock at the bottom of the hole is subjected to a complex load state combining impact and three-dimensional in-situ stress. The dynamic mechanical properties of conglomerate, the main component of the gravel layer, and their influencing factors are the basis for engineering design, efficient rock-breaking methods, and theoretical research. Limited by previous experimental techniques, few works on conglomerate have been published, especially under dynamic load. Based on this, a 3D SHPB system, in which three-dimensional prestress can be applied to simulate in-situ stress conditions, is adopted for dynamic testing of the conglomerate. The results show that the dynamic strength is markedly higher than the static strength: with zero three-dimensional prestress and loading strain rates of 81.25~228.42 s⁻¹, the true triaxial equivalent strength is 167.17~199.87 MPa, and the dynamic-to-static strength growth factor is 1.61~1.92. The higher the impact velocity, the greater the loading strain rate, and the higher the dynamic strength and failure strain, all of which increase linearly. There is a critical prestress in the impact direction and in its perpendicular direction. In the impact direction, while the prestress is less than the critical value, the dynamic strength and loading strain rate increase linearly; beyond it, the strength decreases slightly and the strain rate decreases rapidly. 
In the direction perpendicular to the impact load, the strength increases and the strain rate decreases linearly before the critical prestress is reached; after that, the trends reverse. The dynamic strength of the conglomerate can be reduced appropriately by reducing the amplitude of the impact load, so that the service life of rock-breaking tools can be prolonged while drilling in gravel-rich strata. This research has important reference significance for speed-increasing technology and theoretical research while drilling in gravel layers.
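The reported linear growth of dynamic strength with loading strain rate can be illustrated with an ordinary least-squares fit. The intermediate readings below are hypothetical values interpolated across the reported ranges (81.25~228.42 s⁻¹, 167.17~199.87 MPa), not the authors' raw data:

```python
def linear_fit(xs, ys):
    """Ordinary least-squares fit y = a*x + b; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# Hypothetical readings spanning the reported ranges (endpoints from the
# abstract, interior points invented for illustration only).
strain_rate = [81.25, 120.0, 160.0, 200.0, 228.42]   # s^-1
strength = [167.17, 176.0, 184.5, 193.0, 199.87]     # MPa

slope, intercept = linear_fit(strain_rate, strength)
print(f"dynamic strength ~ {slope:.3f} * strain_rate + {intercept:.1f} MPa")
```

A positive slope of this kind quantifies the rate sensitivity; with measured data it would also let the critical prestress be read off as the point where the fitted trend breaks down.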

Keywords: huge thick gravel layer, conglomerate, 3D SHPB, dynamic strength, deformation characteristics, prestress

Procedia PDF Downloads 192
577 Charcoal Traditional Production in Portugal: Contribution to the Quantification of Air Pollutant Emissions

Authors: Cátia Gonçalves, Teresa Nunes, Inês Pina, Ana Vicente, C. Alves, Felix Charvet, Daniel Neves, A. Matos

Abstract:

The production of charcoal relies on rudimentary technologies using traditional brick kilns. Charcoal is produced under pyrolysis conditions: breaking down the chemical structure of biomass at high temperature in the absence of air. The amount of the pyrolysis products (charcoal, pyroligneous extract, and flue gas) depends on various parameters, including temperature, time, pressure, kiln design, and wood characteristics like the moisture content. This activity is recognized for its inefficiency and high pollution levels, but it is poorly characterized. It is nevertheless widely distributed and a vital economic activity in certain regions of Portugal, playing a relevant role in the management of woody residues. The location of the units determines the biomass used for charcoal production. The Portalegre district, in the Alto Alentejo region (Portugal), is a good example: an essentially rural area with a predominant farming, agricultural, and forestry profile, and with a significant charcoal production activity. In this district, a recent inventory identifies almost 50 charcoal production units, equivalent to more than 450 kilns, of which 80% appear to be in operation. A field campaign was designed with the objective of determining the composition of the emissions released during a charcoal production cycle. A total of 30 samples of particulate matter and 20 gas samples in Tedlar bags were collected. Particulate and gas samplings were performed in parallel, two in the morning and two in the afternoon, alternating the inlet heads (PM₁₀ and PM₂.₅) of the particulate sampler. The gas and particulate samples were collected in the plume, as close as possible to the chimney emission point. The biomass (dry basis) used in the carbonization process was a mixture of cork oak (77 wt.%), holm oak (7 wt.%), stumps (11 wt.%), and charred wood (5 wt.%) from previous carbonization processes. 
A cylindrical batch kiln (80 m³), 4.5 m in diameter and 5 m in height, was used in this study. The composition of the gases was determined by gas chromatography, while the particulate samples (PM₁₀, PM₂.₅) were subjected to different analytical techniques (thermo-optical transmission technique, ion chromatography, HPAE-PAD, and GC-MS after solvent extraction) after prior gravimetric determination, to study their organic and inorganic constituents. The charcoal production cycle presents widely varying operating conditions, which are reflected in the composition of the gases and particles produced and emitted throughout the process. The concentration of PM₁₀ and PM₂.₅ in the plume ranged between 0.003 and 0.293 g m⁻³ and between 0.004 and 0.292 g m⁻³, respectively. On average, total carbon accounts for 65% of PM₁₀ and 56% of PM₂.₅, inorganic ions for 2.8% and 2.3%, and sugars for 1.27% and 1.21%, respectively. The organic fraction studied so far includes more than 30 aliphatic compounds and 20 PAHs. The emission factors of particulate matter for charcoal production in the traditional kiln were 33 g/kg and 27 g/kg of wood (dry basis) for PM₁₀ and PM₂.₅, respectively. With the data obtained in this study, it is possible to fill the lack of information about the environmental impact of traditional charcoal production in Portugal. Acknowledgment: The authors thank FCT – Portuguese Science Foundation, I.P., and the Ministry of Science, Technology and Higher Education of Portugal for financial support within the scope of the projects CHARCLEAN (PCIF/GVB/0179/2017) and CESAM (UIDP/50017/2020 + UIDB/50017/2020).
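The emission factors above follow from a simple mass balance: emitted mass (plume concentration × flue-gas volume) divided by the dry mass of wood carbonized. The cycle totals below are hypothetical and chosen only so the sketch lands near the reported 33 g/kg for PM₁₀:

```python
def emission_factor(pm_conc_g_m3, flue_volume_m3, wood_dry_kg):
    """EF in g per kg of dry wood: emitted mass / dry-wood mass, where
    emitted mass = mean plume concentration x total flue-gas volume."""
    return pm_conc_g_m3 * flue_volume_m3 / wood_dry_kg

# Hypothetical cycle totals (NOT the campaign's raw data): a mid-range
# PM10 plume concentration, an assumed flue-gas volume, and wood charge.
ef_pm10 = emission_factor(pm_conc_g_m3=0.15,
                          flue_volume_m3=1.1e6,
                          wood_dry_kg=5000)
print(f"EF(PM10) ~ {ef_pm10:.0f} g/kg wood (dry basis)")
```

In practice the flue-gas volume would be integrated from flow measurements over the full multi-day cycle rather than assumed.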

Keywords: brick kilns, charcoal, emission factors, PAHs, total carbon

Procedia PDF Downloads 132
576 A Comprehensive Study on Freshwater Aquatic Life Health Quality Assessment Using Physicochemical Parameters and Planktons as Bio Indicator in a Selected Region of Mahaweli River in Kandy District, Sri Lanka

Authors: S. M. D. Y. S. A. Wijayarathna, A. C. A. Jayasundera

Abstract:

The Mahaweli River is the longest and largest river in Sri Lanka and the major drinking water source for a large portion of the 2.5 million inhabitants of the Central Province. The aim of this study was to determine the water quality and aquatic life health quality in a selected region of the Mahaweli River. Six sampling locations (Site 1: 7° 16' 50" N, 80° 40' 00" E; Site 2: 7° 16' 34" N, 80° 40' 27" E; Site 3: 7° 16' 15" N, 80° 41' 28" E; Site 4: 7° 14' 06" N, 80° 44' 36" E; Site 5: 7° 14' 18" N, 80° 44' 39" E; Site 6: 7° 13' 32" N, 80° 46' 11" E) with various anthropogenic activities on the bank of the river were selected, from Tennekumbura Bridge to Victoria Reservoir, for a period of three months. Temperature, pH, Electrical Conductivity (EC), Total Dissolved Solids (TDS), Dissolved Oxygen (DO), 5-day Biological Oxygen Demand (BOD5), Total Suspended Solids (TSS), hardness, anion concentrations, and metal concentrations were measured according to standard methods as physicochemical parameters. Planktons were considered as the biological parameters. Using a plankton net (20 µm mesh size), surface water samples were collected into acid-washed dried vials and stored in an ice box during transportation. The diversity and abundance of planktons were identified under the light microscope within 4 days of sample collection, using standard manuals of plankton identification. Almost all the measured physicochemical parameters were within the CEA standard limits for aquatic life, Sri Lanka Standards (SLS), or the World Health Organization's guidelines for drinking water. The concentration of orthophosphate ranged from 0.232 to 0.708 mg L⁻¹ and exceeded the CEA standard limit for aquatic life (0.400 mg L⁻¹) at Site 1 and Site 2, where there is high disturbance from cultivation and nearby households. 
Pearson correlation analysis (significant at p < 0.05) showed that some physicochemical parameters (temperature, DO, TDS, TSS, phosphate, sulphate, chloride, fluoride, and sodium) were significantly correlated with the distribution of some plankton species, such as Aulocoseira, Navicula, Synedra, Pediastrum, Fragilaria, Selenastrum, Oscillataria, Tribonema, and Microcystis. Furthermore, species that appear in blooms (Aulocoseira), indicate organic pollutants (Navicula), or thrive in phosphate-rich eutrophic water (Microcystis) were found, indicating deteriorated water quality in the Mahaweli River due to agricultural activities, solid waste disposal, and the release of domestic effluents. Therefore, it is necessary to improve environmental monitoring and management to control further deterioration of the water quality of the river.
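The Pearson analysis above uses the standard product-moment coefficient, which is straightforward to reproduce. The site-wise orthophosphate and Microcystis values below are hypothetical illustrations, not the study's measurements:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient between two series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical site-wise values: orthophosphate (mg/L) vs Microcystis
# abundance (cells/mL) at six sites; invented for illustration only.
phosphate = [0.23, 0.30, 0.41, 0.52, 0.61, 0.71]
microcystis = [120, 160, 300, 420, 510, 640]

r = pearson_r(phosphate, microcystis)
print(f"r = {r:.3f}")
```

With real data, the coefficient would be paired with a significance test at p < 0.05, as in the study.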

Keywords: bio indicator, environmental variables, planktons, physicochemical parameters, water quality

Procedia PDF Downloads 99
575 Rapid Soil Classification Using Computer Vision with Electrical Resistivity and Soil Strength

Authors: Eugene Y. J. Aw, J. W. Koh, S. H. Chew, K. E. Chua, P. L. Goh, Grace H. B. Foo, M. L. Leong

Abstract:

This paper presents the evaluation of various soil testing methods, such as the four-probe soil electrical resistivity method and the cone penetration test (CPT), that can complement a newly developed rapid soil classification scheme using computer vision, to improve the accuracy and productivity of on-site classification of excavated soil. In Singapore, excavated soils from the local construction industry are transported to Staging Grounds (SGs) to be reused as fill material for land reclamation. Excavated soils are mainly categorized into two groups (“Good Earth” and “Soft Clay”) based on particle size distribution (PSD) and water content (w) from soil investigation reports and on-site visual surveys, such that proper treatment and usage can be exercised. However, this process is time-consuming and labor-intensive. Thus, a rapid classification method is needed at the SGs. Four-probe soil electrical resistivity and CPT were evaluated for their feasibility as suitable additions to the computer vision system to further develop this innovative non-destructive and instantaneous classification method. The computer vision technique comprises soil image acquisition using an industrial-grade camera; image processing and analysis via calculation of Grey Level Co-occurrence Matrix (GLCM) textural parameters; and decision-making using an Artificial Neural Network (ANN). It was found from the previous study that the ANN model coupled with the apparent electrical resistivity of soil (ρ) can classify soils into “Good Earth” and “Soft Clay” in less than a minute, with an accuracy of 85% based on selected representative soil images. To further improve the technique, the following three items were targeted to be added to the computer vision scheme: ρ measured using a set of four probes arranged in Wenner’s array, the soil strength measured using a modified mini cone penetrometer, and w measured using a set of time-domain reflectometry (TDR) probes. 
Laboratory proof-of-concept was conducted through a series of seven tests with three types of soils – “Good Earth”, “Soft Clay”, and a mix of the two. Validation was performed against the PSD and w of each soil type obtained from conventional laboratory tests. The results show that ρ, w, and CPT measurements can be collectively analyzed to classify soils into “Good Earth” or “Soft Clay” and are feasible as complementary methods to the computer vision system.
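For the four-probe measurement, the apparent resistivity in a Wenner array follows the standard relation ρ = 2πa·V/I, where a is the equal probe spacing, V the voltage across the inner pair, and I the injected current. A minimal sketch with a hypothetical probe reading, not a value from the study:

```python
from math import pi

def wenner_resistivity(spacing_m, voltage_v, current_a):
    """Apparent resistivity for a Wenner four-probe array:
    rho = 2 * pi * a * V / I  (ohm-m)."""
    return 2 * pi * spacing_m * voltage_v / current_a

# Hypothetical reading (NOT from the study): 0.05 m probe spacing,
# 0.8 V across the inner probes, 10 mA injected current.
rho = wenner_resistivity(spacing_m=0.05, voltage_v=0.8, current_a=0.01)
print(f"apparent resistivity ~ {rho:.1f} ohm-m")
```

The resulting ρ would then be fed, alongside the GLCM textural features, into the ANN classifier described above.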

Keywords: computer vision technique, cone penetration test, electrical resistivity, rapid and non-destructive, soil classification

Procedia PDF Downloads 226
574 A Preliminary Study on the Effects of Lung Impact on Ballistic Thoracic Trauma

Authors: Amy Pullen, Samantha Rodrigues, David Kieser, Brian Shaw

Abstract:

The aim of the study was to determine if a projectile interacting with the lungs increases the severity of injury in comparison to a projectile interacting with the ribs or intercostal muscle. This comparative study employed a 10% gelatine-based model with either porcine ribs or balloons embedded to represent a lung. Four sample groups containing five samples each were evaluated: control (plain gel), intercostal impact, rib impact, and lung impact. Two ammunition natures were evaluated at a range of 10 m: 5.56x45mm and 7.62x51mm. Aspects of projectile behavior were quantified, including exiting projectile weight, location of yawing, projectile fragmentation and distribution, location and area of the temporary cavity, permanent cavity formation, and overall energy deposition. Major findings included the lung group showing a higher percentage of the projectile weight exiting the block than the intercostal and rib groups, but similar to the control, for the 5.56mm ammunition. For the 7.62mm ammunition, however, the lung was shown to have a higher percentage of the projectile weight exit the block than the control, intercostal, and ribs. The total weight of projectile fragments as a function of penetration depth revealed large fluctuations and significant intra-group variation for both ammunition natures. Despite the lack of a clear trend, both plots show that the lung leads to greater projectile fragments exiting the model. The lung was shown to have a later center of the temporary cavity than the control, intercostal, and ribs for both ammunition types. It was also shown to have a similar temporary cavity volume to the control, intercostal, and ribs for the 5.56mm ammunition, and a similar temporary cavity to the intercostal for the 7.62mm ammunition. The lung was shown to leave a similar projectile tract to the control, intercostal, and ribs for both ammunition types. 
It was also shown to have larger shear planes than the control and the intercostal, but similar to the ribs, for the 5.56mm ammunition, whereas it showed smaller shear planes than the control but similar shear planes to the intercostal and ribs for the 7.62mm ammunition. The lung was shown to have less energy deposited than the control, intercostal, and ribs for both ammunition types. This comparative study provides insights into the influence of the lungs on thoracic gunshot trauma. It indicates that the lung limits projectile deformation and causes a later onset of yawing, and consequently limits the energy deposited along the wound tract, creating a deeper and smaller cavity. This suggests that lung impact creates an altered pattern of local energy deposition within the target, which will affect the severity of trauma.

Keywords: ballistics, lung, trauma, wounding

Procedia PDF Downloads 164
573 Frequency Interpretation of a Wave Function, and a Vertical Waveform Treated as a 'Quantum Leap'

Authors: Anthony Coogan

Abstract:

Born’s probability interpretation of wave functions would have led to nearly identical results had he chosen a frequency interpretation instead. Logically, Born may have assumed that only one electron was under consideration, making it nonsensical to propose a frequency wave. The author’s suggestion: the actual experimental results were not of a single electron; rather, they were groups of reflected x-ray photons. The vertical waveform used by Schrödinger in his Particle in the Box theory makes sense if it was intended to represent a quantum leap. The author extended the single vertical panel to form a bar chart: separate panels would represent different energy levels. The proposed bar chart would be populated by reflected photons. Expansion of basic ideas: Part of Schrödinger’s ‘Particle in the Box’ theory may be valid despite negative criticism. The waveform used in the diagram is vertical, which may seem absurd because real waves decay at a measurable rate, rather than instantaneously. However, there may be one notable exception. Supposedly, following from the theory, the Uncertainty Principle was derived – may a quantum leap not be represented as an instantaneous waveform? The great Schrödinger must have had some reason to suggest a vertical waveform if the prevalent belief was that they did not exist. Complex waveforms representing a particle are usually assumed to be continuous. The actual observations made were x-ray photons, some of which had struck an electron, been reflected, and then moved toward a detector. From Born’s perspective, doing similar work in the years in question, 1926-7, he would also have considered a single electron – leading him to choose a probability distribution. Probability distributions appear very similar to frequency distributions, but the former are considered to represent the likelihood of future events. 
Born’s interpretation of the results of quantum experiments led (or perhaps misled) many researchers into claiming that humans can influence events just by looking at them, e.g. collapsing complex wave functions by 'looking at the electron to see which slit it emerged from', while in reality light reflected from the electron moved in the observer’s direction after the electron had moved away. Astronomers may say that they 'look out into the universe', but this is logic opposed to the views of Newton, Hooke, and many observers such as Romer, in that light carries information from a source or reflector to an observer, rather than the reverse. Conclusion: Due to the controversial nature of these ideas, especially their implications for the complex numbers used in applications in science and engineering, some time may pass before any consensus is reached.

Keywords: complex wave functions not necessary, frequency distributions instead of wave functions, information carried by light, sketch graph of uncertainty principle

Procedia PDF Downloads 194
572 Building User Behavioral Models by Processing Web Logs and Clustering Mechanisms

Authors: Madhuka G. P. D. Udantha, Gihan V. Dias, Surangika Ranathunga

Abstract:

Today's websites contain very interesting applications, but there are only a few methodologies for analyzing user navigation through a website and determining whether the website is being put to correct use. Web logs are usually consulted only after a major attack or malfunction occurs, yet they record a wealth of interesting information about the users of a system. Analyzing web logs has become a challenge due to the huge log volume, and finding interesting patterns is difficult because of the size, distribution, and importance of minor details in each log. Web logs thus contain very important data about users and the site that are not being put to good use. Retrieving interesting information from logs gives an idea of what users need, groups users according to their various needs, and helps improve the site to make it effective and efficient. The model we built is able to detect attacks or malfunctioning of the system and perform anomaly detection. Logs become more complex as the volume of traffic and the size and complexity of the website grow. Unsupervised techniques are used in this fully automated solution; expert knowledge is only used in validation. In our approach, we first clean and purify the logs to bring them to a common platform with a standard format and structure. After the cleaning module, the web session builder is executed. It outputs two files: a Web Sessions file and an Indexed URLs file. The Indexed URLs file contains the list of URLs accessed and their indices, and the Web Sessions file lists the indices of each web session. Then the DBSCAN and EM algorithms are used iteratively and recursively to get the best clustering of the web sessions. Using homogeneity, completeness, V-measure, intra- and inter-cluster distance, and the silhouette coefficient as parameters, these algorithms self-evaluate to feed better parameter values into subsequent runs. If a cluster is found to be too large, micro-clustering is used. 
Using the Cluster Signature Module, the clusters are annotated with a unique signature called a fingerprint. In this module, each cluster is fed to an Associative Rule Learning Module. If it outputs confidence and support values of 1 for an access sequence, that sequence is a potential signature for the cluster. The occurrences of the access sequence are then checked in the other clusters; if it is found to be unique to the cluster under consideration, the cluster is annotated with the signature. These signatures are used for anomaly detection, preventing cyber attacks, real-time dashboards that visualize users accessing web pages, predicting user actions, and various other applications in finance, university websites, news and media websites, etc.
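The DBSCAN stage of the pipeline can be sketched with a minimal, dependency-free implementation over two-dimensional session features. The feature choice (normalised duration and page count), the eps and min_pts values, and the session data below are illustrative assumptions; points labelled -1 are noise, i.e., candidate anomalies:

```python
def dbscan(points, eps, min_pts):
    """Minimal DBSCAN over feature vectors; returns one label per point,
    with -1 marking noise (potential anomalies)."""
    def neighbours(i):
        return [j for j, q in enumerate(points)
                if sum((a - b) ** 2 for a, b in zip(points[i], q)) <= eps ** 2]

    labels = [None] * len(points)
    cluster = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nbrs = neighbours(i)
        if len(nbrs) < min_pts:
            labels[i] = -1          # provisional noise
            continue
        cluster += 1
        labels[i] = cluster
        seeds = list(nbrs)
        while seeds:
            j = seeds.pop()
            if labels[j] == -1:     # noise becomes a border point
                labels[j] = cluster
            if labels[j] is not None:
                continue
            labels[j] = cluster
            jn = neighbours(j)
            if len(jn) >= min_pts:  # core point: expand the cluster
                seeds.extend(jn)
    return labels

# Hypothetical normalised (duration, page-count) features for 7 sessions:
# two dense behaviour groups plus one outlier session.
sessions = [(0.1, 0.2), (0.15, 0.22), (0.12, 0.18),
            (0.8, 0.9), (0.82, 0.88), (0.79, 0.91),
            (0.5, 0.05)]
labels = dbscan(sessions, eps=0.1, min_pts=2)
print(labels)
```

In the full system, the resulting clusters would then be scored with silhouette, homogeneity, and V-measure to tune eps and min_pts on the next iteration.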

Keywords: anomaly detection, clustering, pattern recognition, web sessions

Procedia PDF Downloads 279