Search results for: spatial classification
396 Decolonizing Print Culture and Bibliography Through Digital Visualizations of Artists’ Books at the University of Miami
Authors: Alejandra G. Barbón, José Vila, Dania Vazquez
Abstract:
This study seeks to contribute to the advancement of library and archival sciences in the areas of records management, knowledge organization, and information architecture, particularly focusing on the enhancement of bibliographical description through the incorporation of visual interactive designs aimed at enriching the library users’ experience. In an era of heightened awareness about the legacy of hiddenness across special and rare collections in libraries and archives, along with the need for inclusivity in academia, the University of Miami Libraries has embarked on an innovative project that intersects the realms of print culture, decolonization, and digital technology. This proposal presents an initiative to revitalize the study of Artists’ Books collections by employing digital visual representations to decolonize the bibliographic records of some of the most distinctive materials and foster a more holistic understanding of cultural heritage. Artists' Books, a dynamic and interdisciplinary art form, challenge conventional bibliographic classification systems, making them ripe for the exploration of alternative approaches. This project involves the creation of a digital platform that combines multimedia elements for digital representations, interactive information retrieval systems, innovative information architecture, current bibliographic cataloging and metadata initiatives, and collaborative curation to transform how we engage with and understand these collections. By embracing the potential of technology, we aim to transcend traditional constraints and address the historical biases that have influenced bibliographic practices. In essence, this study showcases a groundbreaking endeavor at the University of Miami Libraries that seeks not only to enhance bibliographic practices but also to confront the legacy of hiddenness across special and rare collections in libraries and archives while strengthening conventional bibliographic description. By embracing digital visualizations, we aim to provide new pathways for understanding Artists' Books collections in a manner that is more inclusive, dynamic, and forward-looking. This project exemplifies the University’s dedication to fostering critical engagement, embracing technological innovation, and promoting diverse and equitable classifications and representations of cultural heritage.
Keywords: decolonizing bibliographic cataloging frameworks, digital visualizations information architecture platforms, collaborative curation and inclusivity for records management, engagement and accessibility increasing interaction design and user experience
Procedia PDF Downloads 72
395 Theta-Phase Gamma-Amplitude Coupling as a Neurophysiological Marker in Neuroleptic-Naive Schizophrenia
Authors: Jun Won Kim
Abstract:
Objective: Theta-phase gamma-amplitude coupling (TGC) was used as a novel evidence-based tool to reflect the dysfunctional cortico-thalamic interaction in patients with schizophrenia. However, to the best of our knowledge, no studies have reported the diagnostic utility of TGC in the resting-state electroencephalogram (EEG) of neuroleptic-naive patients with schizophrenia compared to healthy controls. Thus, the purpose of this EEG study was to understand the underlying mechanisms in patients with schizophrenia by comparing resting-state TGC between the two groups and to evaluate the diagnostic utility of TGC. Method: The subjects included 90 patients with schizophrenia and 90 healthy controls. All patients were diagnosed with schizophrenia according to the criteria of the Diagnostic and Statistical Manual of Mental Disorders, 4th edition (DSM-IV) by two independent psychiatrists using semi-structured clinical interviews. Because patients were either drug-naïve (first episode) or had not been taking psychoactive drugs for one month before the study, we could exclude the influence of medications. Six frequency bands were defined for spectral analyses: delta (1–4 Hz), theta (4–8 Hz), slow alpha (8–10 Hz), fast alpha (10–13.5 Hz), beta (13.5–30 Hz), and gamma (30–80 Hz). The spectral power of the EEG data was calculated with the fast Fourier transform using the 'spectrogram.m' function of the signal processing toolbox in Matlab. An analysis of covariance (ANCOVA) was performed to compare the TGC results between the groups, adjusted using a Bonferroni correction (P < 0.05/19 = 0.0026). Receiver operating characteristic (ROC) analysis was conducted to examine the discriminating ability of the TGC data for schizophrenia diagnosis. Results: The patients with schizophrenia showed a significant increase in the resting-state TGC at all electrodes. The delta, theta, slow alpha, fast alpha, and beta powers showed low accuracies of 62.2%, 58.4%, 56.9%, 60.9%, and 59.0%, respectively, in discriminating the patients with schizophrenia from the healthy controls. The ROC analysis performed on the TGC data generated the most accurate result among the EEG measures, displaying an overall classification accuracy of 92.5%. Conclusion: As TGC includes phase, which carries information about neuronal interactions in the EEG recording, TGC is expected to be useful for understanding the mechanisms of the dysfunctional cortico-thalamic interaction in patients with schizophrenia. The resting-state TGC value was increased in the patients with schizophrenia compared to that in the healthy controls and had a higher discriminating ability than the other parameters. These findings may be related to the compensatory hyper-arousal patterns of the dysfunctional default-mode network (DMN) in schizophrenia. Further research exploring the association between TGC and medical or psychiatric conditions that may confound EEG signals will help clarify the potential utility of TGC.
Keywords: quantitative electroencephalography (QEEG), theta-phase gamma-amplitude coupling (TGC), schizophrenia, diagnostic utility
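A note on computation: the abstract does not give the exact TGC estimator, but a common choice for phase-amplitude coupling is the mean-vector-length modulation index. The following minimal Python sketch applies that estimator to a synthetic signal, using the theta (4–8 Hz) and gamma (30–80 Hz) bands defined above; the data and sampling rate are fabricated for illustration only.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, lo, hi, fs, order=4):
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def tgc_modulation_index(eeg, fs):
    """Mean-vector-length coupling between theta phase and gamma amplitude."""
    theta_phase = np.angle(hilbert(bandpass(eeg, 4, 8, fs)))
    gamma_amp = np.abs(hilbert(bandpass(eeg, 30, 80, fs)))
    return np.abs(np.mean(gamma_amp * np.exp(1j * theta_phase))) / np.mean(gamma_amp)

# Synthetic check: gamma bursts locked to the theta phase should raise the index.
fs = 250
t = np.arange(0, 20, 1 / fs)
theta = np.sin(2 * np.pi * 6 * t)
coupled_gamma = (1 + theta) * np.sin(2 * np.pi * 40 * t)
eeg = theta + 0.5 * coupled_gamma + 0.1 * np.random.randn(t.size)
print(tgc_modulation_index(eeg, fs))
```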
Procedia PDF Downloads 140
394 Heat Vulnerability Index (HVI) Mapping in Extreme Heat Days Coupled with Air Pollution Using Principal Component Analysis (PCA) Technique: A Case Study of Amiens, France
Authors: Aiman Mazhar Qureshi, Ahmed Rachid
Abstract:
Extreme heat events are an emerging human environmental health concern in dense urban areas due to anthropogenic activities. High spatial and temporal resolution heat maps are important for urban heat adaptation and mitigation, helping to indicate hotspots that require the attention of city planners. The Heat Vulnerability Index (HVI) is an important approach used by decision-makers and urban planners to identify heat-vulnerable communities and areas that require heat stress mitigation strategies. Amiens is a medium-sized French city, where the average temperature has been increasing since the year 2000 by +1°C. Extreme heat events were recorded in the month of July for the last three consecutive years, 2018, 2019 and 2020. Poor air quality, especially ground-level ozone, has been observed mainly during the same hot period. In this study, we evaluated the HVI in Amiens during the extreme heat days recorded in those three years (2018, 2019, 2020). The Principal Component Analysis (PCA) technique is used for fine-scale vulnerability mapping. The main data considered for developing the HVI model are (a) socio-economic and demographic data; (b) air pollution; (c) land use and cover; (d) elderly heat illness; (e) socially vulnerable groups; (f) remote sensing data (land surface temperature (LST), mean elevation, NDVI and NDWI). The output maps identified the hot zones through comprehensive GIS analysis. The resultant map shows that high HVI exists in three typical areas: (1) where the population density is quite high and the vegetation cover is small, (2) artificial surfaces (built-up areas), and (3) industrial zones that release thermal energy and ground-level ozone, while areas with low HVI are located in natural landscapes such as rivers and grasslands. The study also illustrates the system theory with a causal diagram after data analysis, where anthropogenic activities and air pollution appear in correspondence with extreme heat events in the city. Our suggested index can be a useful tool to guide urban planners, municipalities, decision-makers and public health professionals in targeting areas at high risk of extreme heat and air pollution for future adaptation and mitigation interventions.
Keywords: heat vulnerability index, heat mapping, heat health-illness, remote sensing, urban heat mitigation
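As a rough illustration of the PCA step described above, the Python sketch below builds a composite index from standardized indicators; the indicator names, the 80% variance cutoff, and the weighting of components by explained variance are assumptions for demonstration, not the authors' exact recipe.

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

# Hypothetical per-grid-cell indicators (column names are illustrative only).
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "pop_density": rng.random(100),
    "pct_elderly": rng.random(100),
    "ozone":       rng.random(100),
    "lst":         rng.random(100),   # land surface temperature
    "ndvi":        rng.random(100),   # vegetation cover
})

z = StandardScaler().fit_transform(df)   # standardize all indicators
pca = PCA().fit(z)
scores = pca.transform(z)

# Keep enough components to explain 80% of variance, weight by variance share.
k = int(np.searchsorted(np.cumsum(pca.explained_variance_ratio_), 0.8)) + 1
hvi = (scores[:, :k] * pca.explained_variance_ratio_[:k]).sum(axis=1)
df["HVI"] = (hvi - hvi.min()) / (hvi.max() - hvi.min())   # rescale to 0..1
print(df["HVI"].describe())
```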
Procedia PDF Downloads 147
393 Advantages of Computer Navigation in Knee Arthroplasty
Authors: Mohammad Ali Al Qatawneh, Bespalchuk Pavel Ivanovich
Abstract:
Computer navigation has been introduced in total knee arthroplasty to improve the accuracy of the procedure. Computer navigation improves the accuracy of bone resection in the coronal and sagittal planes. It has also been noted that it normalizes the rotational alignment of the femoral component and fully assesses and balances the deformation of soft tissues in the coronal plane. This work is devoted to the advantages of using computer navigation technology in total knee arthroplasty in 62 patients (11 men and 51 women) suffering from gonarthrosis, aged 51 to 83 years, operated on using a computer navigation system and followed up to 3 years from the moment of surgery. During the examination, the deformity variant was determined, and radiometric parameters of the knee joints were measured using the Knee Society Score (KSS), Functional Knee Society Score (FKSS), and Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC) scales. Functional stress tests were also performed to assess the stability of the knee joint in the frontal plane and functional indicators of the range of motion. After surgery, improvement was observed on all scales: first, the WOMAC values decreased 5.90-fold, with the median value falling to 11 points (p < 0.001); second, KSS increased 3.91-fold, reaching 86 points (p < 0.001); and third, FKSS increased 2.08-fold, reaching 94 points (p < 0.001). After TKA, axis deviation of the lower limbs of more than 3 degrees was observed in 4 patients (6.5%) and frontal instability of the knee joint in only 2 cases (3.2%). The incidence of sagittal instability of the knee joint after the operation was 9.6%. The range of motion increased 1.25-fold; the volume of movement averaged 125 degrees (p < 0.001). Computer navigation increases the accuracy of the spatial orientation of the endoprosthesis components in all planes, reduces the variability of the axis of the lower limbs to within ±3°, allows the best results of surgical interventions to be achieved, and can be used to solve most basic tasks, yielding excellent and good outcomes in 100% of cases according to the WOMAC scale. With diaphyseal deformities of the femur and/or tibia, as well as with obstruction of their medullary canal, the use of computer navigation is the method of choice. The use of computer navigation prevents the occurrence of flexion contracture and hyperextension of the knee joint during the distal sawing of the femur. Using the navigation system achieves high-precision implantation of the endoprosthesis; in addition, it achieves an adequate balance of the ligaments, which contributes to the stability of the joint, reduces pain, and allows a good functional result of the treatment to be achieved.
Keywords: knee joint, arthroplasty, computer navigation, advantages
Procedia PDF Downloads 89
392 Defining the Vibrancy of the Temple Square: A Case of Car Street Udupi, Karnataka
Authors: Nivedhitha Venkatakrishnan
Abstract:
Walking down a busy temple street in India is an experience of a lifetime. Temple streets are among the most energetic places, not only because of the divinity but also because the streets themselves provide a place for people to relax, meet, shop, linger, or just walk around; these activities create a set of experiences that result in lasting memories. Thinking of any temple street in India, the images that come to mind are the elegantly sculpted Gopurams (gateways) that depict the craftsmanship and the history of the place, people taking a holy dip in the water, the aroma of agarbathis (incense) and flowers with the divine Vedic chants, the sound of the temple bell, and a flock of pigeons flying from the niches of the Gopuram with the sun in the backdrop. Together these give a feeling of pulsing energy that brings life to these streets. A temple street missing even one of these factors would look dead, and something essential would be missing from the scene of one's experience. These temple streets traditionally cater not only to religious purposes but to a wide range of activities. A vibrant street that facilitates such activities is preferred by the public any day. The research seeks to understand and define vibrancy in the Indian context. What is vibrancy? What brings in the feeling of vibrancy, liveliness, energy? Is it the built structure and the city? Or is it the people? Or is it the activity? Or is it the built structure, city, people, and activity put together that bring the sense of vibrancy to a place? How can vibrancy be defined? Is it measurable? To answer these questions, the case of Car Street, Udupi, Karnataka is taken. The research is carried out in two stages. Stage One makes use of ethnographic fieldwork as its basic method, complemented by structured field observations using a behavioral mapping procedure on the streets. Stage Two utilizes the collected survey data. This stage seeks to understand what design characteristics and furniture arrangements are associated with the stationary, social and gathering activities of people, by each cultural group and by all groups collectively. The main conclusion from this research is that retail activities remain the main concern of people in cultural streets. Management and higher-level planning of retail activities on the streets could encourage and motivate prospective shops to enrich the trade variety of the street, which provides a means for social and cultural diversity. In addition to business activities, spatial design characteristics are found to have an influence on people's behavior and activity. The findings of this research suggest that retail and business activities, together with the design and skillful management of the public areas, could support a wider range of static and social activities among people of various ethnic backgrounds.
Keywords: activity, liveliness, temple street, vibrancy
Procedia PDF Downloads 156
391 Assessment of Indoor Air Pollution in Naturally Ventilated Dwellings of Mega-City Kolkata
Authors: Tanya Kaur Bedi, Shankha Pratim Bhattacharya
Abstract:
The US Environmental Protection Agency defines indoor air quality as “the air quality within and around buildings, especially as it relates to the health and comfort of building occupants”. According to the 2021 report by the Energy Policy Institute at the University of Chicago, residents of India, a country which is home to the highest levels of air pollution in the world, lose about 5.9 years of life expectancy due to poor air quality, and yet the country has numerous dwellings dependent on natural ventilation. Currently, the urban population spends 90% of its time indoors; this scenario raises a concern for occupant health and well-being. This study attempts to demonstrate the causal relationship between indoor air pollution and its determining aspects. Detailed indoor air pollution audits were conducted in residential buildings located in Kolkata, India in the months of December and January 2021. According to the air pollution knowledge assessment city program in India, Kolkata is also the second most polluted mega-city after Delhi. Although the air pollution levels are alarming year-round, the winter months are the most crucial due to unfavourable environmental conditions. While emissions remain typically constant throughout the year, cold air is denser and moves more slowly than warm air, trapping the pollution in place for much longer, and consequently it is breathed in at a higher rate than in summer. The air pollution monitoring period was selected considering environmental factors and major pollution contributors like traffic and road dust. This study focuses on the relationship between the built environment and the spatial-temporal distribution of air pollutants in and around it. The measured parameters include temperature, relative humidity, air velocity, particulate matter, volatile organic compounds, formaldehyde, and benzene. A total of 56 rooms were audited, selectively targeting the most dominant middle-income group in the urban area of the metropolis. The data collection was conducted using a set of instruments positioned in the human breathing zone. The study assesses the relationship between indoor air pollution levels and factors determining natural ventilation and air pollution dispersion, such as the surrounding environment, dominant wind, openable window to floor area ratio, windward or leeward side openings, natural ventilation type in the room (single-sided or cross-ventilation), floor height, residents' cleaning habits, etc.
Keywords: indoor air quality, occupant health, air pollution, architecture, urban environment
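The relationships the study assesses could be quantified with a multiple regression of pollutant levels on ventilation factors; a purely hypothetical sketch (all variable names and coefficients invented, not the study's data) might look like this.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 56  # the study audited 56 rooms

# Invented room-level audit table for illustration.
audit = pd.DataFrame({
    "window_floor_ratio": rng.uniform(0.02, 0.20, n),
    "cross_ventilation":  rng.integers(0, 2, n),   # 1 = cross, 0 = single-sided
    "floor_height":       rng.integers(1, 10, n),
})
audit["pm25"] = (80 - 150 * audit["window_floor_ratio"]
                 - 8 * audit["cross_ventilation"] + rng.normal(0, 5, n))

X = sm.add_constant(audit[["window_floor_ratio", "cross_ventilation", "floor_height"]])
print(sm.OLS(audit["pm25"], X).fit().summary())
```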
Procedia PDF Downloads 106
390 Comparing Deep Architectures for Selecting Optimal Machine Translation
Authors: Despoina Mouratidis, Katia Lida Kermanidis
Abstract:
Machine translation (MT) is a very important task in Natural Language Processing (NLP). MT evaluation is crucial in MT development, as it constitutes the means to assess the success of an MT system and also helps improve its performance. Several methods have been proposed for the evaluation of MT systems. Some of the most popular ones in automatic MT evaluation are score-based, such as the BLEU score, and others are based on lexical or syntactic similarity between the MT outputs and the reference, involving higher-level information like part-of-speech tagging (POS). This paper presents a language-independent machine learning framework for classifying pairwise translations. This framework uses vector representations of two machine-produced translations, one from a statistical machine translation model (SMT) and one from a neural machine translation model (NMT). The vector representations consist of automatically extracted word embeddings and string-like language-independent features. These vector representations are used as input to a multi-layer neural network (NN) that models the similarity between each MT output and the reference, as well as between the two MT outputs. To evaluate the proposed approach, a professional translation and a "ground-truth" annotation are used. The parallel corpora used are English-Greek (EN-GR) and English-Italian (EN-IT), in the educational domain and of informal genres (video lecture subtitles, course forum text, etc.) that are difficult to translate reliably. Three basic deep learning (DL) architectures were tested with this schema: (i) fully-connected dense, (ii) Convolutional Neural Network (CNN), and (iii) Long Short-Term Memory (LSTM). Experiments show that all tested architectures achieved better results when compared against those of some well-known basic approaches, such as Random Forest (RF) and Support Vector Machine (SVM). Better accuracy results are obtained when LSTM layers are used in the schema. In terms of balance across classes, better results are obtained when dense layers are used, as the model then correctly classifies more sentences of the minority class (SMT). For a more integrated analysis of the accuracy results, a qualitative linguistic analysis is carried out. In this context, problems have been identified with some figures of speech, such as metaphors, and with certain linguistic phenomena, such as paronyms. It is quite interesting to find out why all the classifiers led to worse accuracy results in Italian as compared to Greek, taking into account that the linguistic features employed are language-independent.
Keywords: machine learning, machine translation evaluation, neural network architecture, pairwise classification
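Of the three architectures, the fully-connected dense variant is the easiest to sketch. Below is a hedged Keras outline of a pairwise classifier over concatenated vector representations; the feature dimensionality, layer sizes, and training settings are placeholders, not the authors' configuration.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Hypothetical inputs: per sentence pair, concatenated embeddings of the SMT
# output, the NMT output and the reference, plus string-similarity features.
dim = 300 * 3 + 20
X = np.random.rand(1000, dim).astype("float32")
y = np.random.randint(0, 2, 1000)   # 1 = NMT output preferred, 0 = SMT

model = keras.Sequential([
    keras.Input(shape=(dim,)),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.3),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),   # pairwise preference
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, validation_split=0.2, verbose=0)
```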
Procedia PDF Downloads 130
389 Machine Learning Prediction of Diabetes Prevalence in the U.S. Using Demographic, Physical, and Lifestyle Indicators: A Study Based on NHANES 2009-2018
Authors: Oluwafunmibi Omotayo Fasanya, Augustine Kena Adjei
Abstract:
To develop a machine learning model to predict diabetes (DM) prevalence in the U.S. population using demographic characteristics, physical indicators, and lifestyle habits, and to analyze how these factors contribute to the likelihood of diabetes. We analyzed data from 23,546 participants aged 20 and older, who were non-pregnant, from the 2009-2018 National Health and Nutrition Examination Survey (NHANES). The dataset included key demographic (age, sex, ethnicity), physical (BMI, leg length, total cholesterol [TCHOL], fasting plasma glucose), and lifestyle indicators (smoking habits). A weighted sample was used to account for NHANES survey design features such as stratification and clustering. A classification machine learning model was trained to predict diabetes status. The target variable was binary (diabetes or non-diabetes) based on fasting plasma glucose measurements. The following models were evaluated: Logistic Regression (baseline), Random Forest Classifier, Gradient Boosting Machine (GBM), and Support Vector Machine (SVM). Model performance was assessed using accuracy, F1-score, AUC-ROC, and precision-recall metrics. Feature importance was analyzed using SHAP values to interpret the contributions of variables such as age, BMI, ethnicity, and smoking status. The Gradient Boosting Machine (GBM) model outperformed the other classifiers with an AUC-ROC score of 0.85. Feature importance analysis revealed the following key predictors: Age: the most significant predictor, with diabetes prevalence increasing with age, peaking around the 60s for males and 70s for females. BMI: higher BMI was strongly associated with a higher risk of diabetes. Ethnicity: Black participants had the highest predicted prevalence of diabetes (14.6%), followed by Mexican-Americans (13.5%) and Whites (10.6%). TCHOL: diabetics had lower total cholesterol levels, particularly among White participants (mean decline of 23.6 mg/dL). Smoking: smoking showed a slight increase in diabetes risk among Whites (0.2%) but had a limited effect in other ethnic groups. Using machine learning models, we identified key demographic, physical, and lifestyle predictors of diabetes in the U.S. population. The results confirm that diabetes prevalence varies significantly across age, BMI, and ethnic groups, with lifestyle factors such as smoking contributing differently by ethnicity. These findings provide a basis for more targeted public health interventions and resource allocation for diabetes management.
Keywords: diabetes, NHANES, random forest, gradient boosting machine, support vector machine
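A minimal sketch of the winning pipeline (a GBM evaluated by AUC-ROC, with SHAP feature attribution) is shown below on synthetic stand-in data; NHANES survey weighting and preprocessing are omitted, so this is illustrative only and assumes the shap package is installed.

```python
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Stand-in for the design matrix (age, BMI, ethnicity dummies, TCHOL, smoking).
X, y = make_classification(n_samples=5000, n_features=8, n_informative=5,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

gbm = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
print("AUC-ROC:", roc_auc_score(y_te, gbm.predict_proba(X_te)[:, 1]))

explainer = shap.TreeExplainer(gbm)          # per-feature contributions
shap_values = explainer.shap_values(X_te)
print("mean |SHAP| per feature:", np.abs(shap_values).mean(axis=0))
```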
Procedia PDF Downloads 4
388 Seeking Compatibility between Green Infrastructure and Recentralization: The Case of Greater Toronto Area
Authors: Sara Saboonian, Pierre Filion
Abstract:
There are two distinct planning approaches attempting to transform the North American suburb so as to reduce its adverse environmental impacts. The first one, the recentralization approach, proposes intensification, multi-functionality and more reliance on public transit and walking. It thus offers an alternative to the prevailing low density, spatial specialization and automobile dependence of the North American suburb. The second approach concentrates instead on the provision of green infrastructure, which relies on natural systems rather than on highly engineered solutions to deal with the infrastructure needs of suburban areas. There are tensions between these two approaches, as recentralization generally overlooks green infrastructure, which can be space-consuming (as in the case of water retention systems) and thus conflicts with the intensification goals of recentralization. The research investigates three Canadian planned suburban centres in the Greater Toronto Area, where recentralization is the current planning practice, despite rising awareness of the benefits of green infrastructure. Methods include reviewing the literature on green infrastructure planning, a critical analysis of the Ontario provincial plans for recentralization, surveying residents' preferences regarding alternative suburban development models, and interviewing officials who deal with the local planning of the three centres. The case studies expose the difficulties in creating planned suburban centres that accommodate green infrastructure while adhering to recentralization principles. Until now, planners have mostly focussed on recentralization at the expense of green infrastructure. In this context, the frequent lack of compatibility between recentralization and the space requirements of green infrastructure explains the limited presence of such infrastructure in planned suburban centres. Finally, while much attention has been given in the planning discourse to the economic and lifestyle benefits of recentralization, much less has been said of the wide range of advantages of green infrastructure, which explains limited public mobilization over the development of green infrastructure networks. The paper will concentrate on ways of combining recentralization with green infrastructure strategies and identify the aspects of the two approaches that are most compatible with each other. The outcome of such blending will marry high-density, public-transit-oriented developments, which generate walkability and street-level animation, with the presence of green space, naturalized settings and reliance on renewable energy. The paper will advance a planning framework that will fuse green infrastructure with recentralization, thus ensuring the achievement of higher density and reduced reliance on the car along with the provision of critical ecosystem services throughout cities. This will support and enhance the objectives of both green infrastructure and recentralization.
Keywords: environmental-based planning, green infrastructure, multi-functionality, recentralization
Procedia PDF Downloads 131
387 Plastic Waste Sorting by the People of Dakar
Authors: E. Gaury, P. Mandausch, O. Picot, A. R. Thomas, L. Veisblat, L. Ralambozanany, C. Delsart
Abstract:
In Dakar, demographic and spatial growth was accompanied by a 50% increase in household waste in the city between 1988 and 2008. In addition, a change in the nature of household waste was observed between 1990 and 2007. The share of plastic increased by 15% between 2004 and 2007 in Dakar. Plastics represent the seventh largest category of household waste produced per year in Senegal. The share of plastic in household and similar waste is 9% in Senegal. Waste management in the city of Dakar is a complex process involving a multitude of formal and informal actors with different perceptions and objectives. The objective of this study was to understand the motivations that could lead to sorting action, as well as the perception of plastic waste sorting within the Dakar population (households and institutions). The problem addressed by this study was as follows: what factors may play a role in the sorting action? In an attempt to answer this, two approaches were developed: (1) An exploratory qualitative study using semi-structured interviews with two groups of individuals concerned with the sorting of plastic waste: on the one hand, the experts in charge of waste management, and on the other, the households that produce plastic waste. This study served as the basis for formulating the hypotheses and thus for the quantitative analysis. (2) A quantitative study using a questionnaire survey among households producing plastic waste in order to test the previously formulated hypotheses. The objective was to obtain quantitative results representative of the population of Dakar concerning the behavior and the process inherent in the adoption of the plastic waste sorting action. The exploratory study shows that the perception of state responsibility varies between institutions and households. Public institutions perceive this as a shared responsibility because the problem of plastic waste affects many sectors (health, environmental education, etc.). Their involvement is geared more towards raising awareness and educating young people. As state action is limited, the emergence of private companies in this sector seems logical, as they are setting up collection networks to develop a recycling activity. The state plays a moral support role in these activities and encourages companies to do more. The quantitative analysis of the action of sorting plastic waste by the population of Dakar demonstrated the attitudes and constraints inherent in the adoption of plastic waste sorting. Cognitive attitude, knowledge, and visible consequences were shown to correlate positively with sorting behavior. Thus, it would seem that the population of Dakar is more sensitive to what they see and what they know when adopting sorting behavior. It was also shown that the strongest constraints that could slow down sorting behavior were the complexity of the process, the time required, and the lack of infrastructure in which to deposit plastic waste.
Keywords: behavior, Dakar, plastic waste, waste management
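The reported attitude-behavior correlations suggest a model along the lines of a logistic regression of sorting behavior on attitude and constraint scores. The sketch below is purely illustrative: the variables and coefficients are invented, and no claim is made about the authors' actual specification.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 400
cognitive = rng.integers(1, 6, n)    # knowledge / visible-consequences score
constraint = rng.integers(1, 6, n)   # perceived complexity and time cost
logit_p = -1.0 + 0.6 * cognitive - 0.5 * constraint
sorts = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))   # 1 = sorts plastic waste

X = sm.add_constant(np.column_stack([cognitive, constraint]))
result = sm.Logit(sorts, X).fit(disp=0)
print(result.params)   # expect a positive attitude effect, negative constraint effect
```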
Procedia PDF Downloads 93
386 Effect of Human Use, Season and Habitat on Ungulate Densities in Kanha Tiger Reserve
Authors: Neha Awasthi, Ujjwal Kumar
Abstract:
The density of large carnivores is primarily dictated by the density of their prey. Therefore, optimal management of ungulate populations permits harbouring viable large carnivore populations within protected areas. Ungulate density is likely to respond to regimes of protection and vegetation types. This has generated the need among conservation practitioners to obtain strata-specific seasonal species densities for habitat management. Kanha Tiger Reserve (KTR), of 2074 km2 area, comprises two distinct management strata: the core (940 km2), devoid of human settlements, and the buffer (1134 km2), which is a multiple-use area. In general, four habitat strata, grassland, sal forest, bamboo-mixed forest and miscellaneous forest, are present in the reserve. A stratified sampling approach was used to assess (a) the impact of human use and (b) the effect of habitat and season on ungulate densities. From 2013 to 2016, ungulates were surveyed in the winter and summer of each year with an effort of 1200 km walked along 200 spatial transects distributed throughout Kanha Tiger Reserve. We used a single detection function for each species within each habitat stratum for each season to estimate species-specific seasonal density, using program DISTANCE. Our key results show that the core area had 4.8 times higher wild ungulate biomass compared with the buffer zone, highlighting the importance of undisturbed areas. Chital was found to be the most abundant species, with a density of 30.1 (SE 4.34)/km2 and contributing 33% of the biomass, with a habitat preference for grassland. Unlike other ungulates, gaur, being a megaherbivore, showed a major seasonal shift in density from bamboo-mixed and sal forest in summer to miscellaneous forest in winter. Maximum diversity and ungulate biomass were supported by grassland, followed by bamboo-mixed habitat. Our study stresses the importance of inviolate core areas for achieving high wild ungulate densities and for maintaining populations of endangered and rare species. Grasslands account for 9% of the core area of KTR maintained in an arrested stage of succession; therefore, enhancing this habitat would maintain ungulate diversity and density and cater to the needs of the only surviving population of the endangered barasingha and the grassland specialist, the blackbuck. We show the relevance of different habitat types for differential seasonal use by ungulates and attempt to interpret this in the context of the nutrition and cover needs of wild ungulates. Management for an optimal habitat mosaic that maintains ungulate diversity and maximizes ungulate biomass is recommended.
Keywords: distance sampling, habitat management, ungulate biomass, diversity
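Program DISTANCE fits a detection function to perpendicular sighting distances and converts it to a density estimate. A minimal half-normal version of that calculation is sketched below on synthetic distances; the 1200 km effort figure is taken from the abstract, everything else is fabricated.

```python
import numpy as np

rng = np.random.default_rng(2)
d = np.abs(rng.normal(0.0, 0.05, 200))   # perpendicular distances (km), synthetic

# Half-normal detection g(x) = exp(-x^2 / (2 sigma^2)); the MLE of sigma is closed-form.
sigma_hat = np.sqrt(np.mean(d ** 2))
esw = sigma_hat * np.sqrt(np.pi / 2)     # effective strip half-width (km)
L = 1200.0                               # total transect effort (km), as in the study
density = d.size / (2 * esw * L)         # detections per km^2
print(f"sigma = {sigma_hat:.3f} km, ESW = {esw:.3f} km, D = {density:.1f} /km^2")
```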
Procedia PDF Downloads 302
385 Argos System: Improvements and Future of the Constellation
Authors: Sophie Baudel, Aline Duplaa, Jean Muller, Stephan Lauriol, Yann Bernard
Abstract:
Argos is the main satellite telemetry system used by the wildlife research community since its creation in 1978, for animal tracking and scientific data collection all around the world, to analyze and understand animal migrations and behavior. Marine mammal biology is one of the major disciplines that has benefited from Argos telemetry, and conversely, the marine mammal biologists' community has contributed a lot to the growth and development of Argos use cases. The Argos constellation, with 6 satellites in orbit in 2017 (Argos 2 payloads on NOAA 15 and NOAA 18, Argos 3 payloads on NOAA 19, SARAL, METOP A and METOP B), is being extended in the following years with an Argos 3 payload on METOP C (launch in October 2018) and Argos 4 payloads on Oceansat 3 (launch in 2019), CDARS in December 2021 (to be confirmed), METOP SG B1 in December 2022, and METOP-SG-B2 in 2029. Argos 4 will allow more frequency bands (600 kHz for Argos4NG, instead of 110 kHz for Argos 3), a new modulation dedicated to animal (sea turtle) tracking allowing very low transmission power transmitters (50 to 100 mW) with very low data rates (124 bps), enhancement of high data rates (1200-4800 bps), and improved downlink performance, all contributing to enhancing the system capacity (50,000 active beacons per month instead of 20,000 today). In parallel with this 'institutional Argos' constellation, in the context of a miniaturization trend in the space industry intended to reduce costs and multiply satellites to serve more and more societal needs, the French Space Agency CNES, which designs the Argos payloads, is innovating and launching the Argos ANGELS project (Argos NEO Generic Economic Light Satellites). ANGELS will lead to a nanosatellite prototype with an Argos NEO instrument (30 cm x 30 cm x 20 cm) that will be launched in 2019. In the meantime, the design of the renewal of the Argos constellation, called Argos for Next Generations (Argos4NG), is on track and will be operational in 2022. Based on Argos 4 and benefiting from the feedback of the ANGELS project, this constellation will allow a revisit time of fewer than 20 minutes on average between two satellite passes and will also bring more frequency bands to improve the overall capacity of the system. The presentation will give an overview of the Argos system, present and future, and the new capacities coming with it. On top of that, use cases of two Argos hardware modules will be presented: the goniometer pathfinder, allowing recovery of Argos beacons at sea or on the ground within a 100 km radius horizon-free circle around the beacon location, and the new Argos 4 chipset called 'Artic', already available and tested by several manufacturers.
Keywords: Argos satellite telemetry, marine protected areas, oceanography, maritime services
Procedia PDF Downloads 179
384 A Review of Gas Hydrate Rock Physics Models
Authors: Hemin Yuan, Yun Wang, Xiangchun Wang
Abstract:
Gas hydrate is drawing attention due to its enormous worldwide abundance, almost twice the conventional hydrocarbon reserves, making it a potential alternative source of energy. It is widely distributed in permafrost and continental ocean shelves, and many countries have launched national programs for investigating gas hydrate. Gas hydrate is mainly explored through seismic methods, which include bottom simulating reflectors (BSR), amplitude blanking, and polarity reversal. These seismic methods are effective at finding gas hydrate formations but usually carry large uncertainties when applied to invert the micro-scale petrophysical properties of the formations, due to a lack of constraints. Rock physics modeling links the micro-scale structures of the rocks to their macro-scale elastic properties and can work as an effective constraint for the seismic methods. A number of rock physics models have been proposed for gas hydrate modeling, addressing different mechanisms and applications. However, these models are generally not well classified, and it can be difficult to determine the appropriate model for a specific study. Moreover, since the modeling usually involves multiple models and steps, it is difficult to determine the source of uncertainties. To solve these problems, we summarize the developed models/methods and make four classifications of the models according to the hydrate micro-scale morphology in sediments, the purpose of reservoir characterization, the stage of gas hydrate generation, and the lithology type of the hosting sediments. Some sub-categories may overlap each other, but they have different priorities. Besides, we also analyze the priorities of different models, bring up their shortcomings, and explain the appropriate application scenarios. Moreover, by comparing the models, we summarize a general workflow of the modeling procedure, which includes rock matrix forming, dry rock frame generating, pore fluid mixing, and final fluid substitution in the rock frame. These procedures have been widely used in various gas hydrate modeling studies and have been confirmed to be effective. We also analyze the potential sources of uncertainties in each modeling step, which enables us to clearly recognize the potential uncertainties in the modeling. In the end, we explicate the general problems of the current models, including the influences of pressure and temperature, pore geometry, hydrate morphology, and rock structure change during gas hydrate dissociation and re-generation. We also point out that attenuation is severely affected by gas hydrate in sediments and may work as an indicator to map gas hydrate concentration. Our work classifies rock physics models of gas hydrate into different categories, generalizes the modeling workflow, and analyzes the modeling uncertainties and potential problems, which can facilitate the rock physics characterization of gas hydrate-bearing sediments and provide hints for future studies.
Keywords: gas hydrate, rock physics model, modeling classification, hydrate morphology
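The final fluid-substitution step of the workflow summarized above is commonly carried out with Gassmann's equation, with the rock matrix modulus obtained from a mineral-mixing bound average. A small sketch with illustrative moduli (not calibrated to hydrate-bearing sediments):

```python
def voigt_reuss_hill(f1, k1, k2):
    """Two-phase mineral mixing (e.g., quartz + clay) for the rock matrix modulus."""
    voigt = f1 * k1 + (1 - f1) * k2
    reuss = 1 / (f1 / k1 + (1 - f1) / k2)
    return 0.5 * (voigt + reuss)

def gassmann_ksat(k_dry, k_min, k_fl, phi):
    """Saturated bulk modulus from the dry frame via Gassmann's equation."""
    b = 1 - k_dry / k_min
    return k_dry + b ** 2 / (phi / k_fl + (1 - phi) / k_min - k_dry / k_min ** 2)

# Illustrative values in GPa; porosity as a fraction.
k_min = voigt_reuss_hill(0.8, 36.6, 20.9)   # quartz/clay mix
k_sat = gassmann_ksat(k_dry=5.0, k_min=k_min, k_fl=2.25, phi=0.4)
print(f"K_mineral = {k_min:.2f} GPa, K_sat = {k_sat:.2f} GPa")
```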
Procedia PDF Downloads 157
383 Winter Wheat Yield Forecasting Using Sentinel-2 Imagery at the Early Stages
Authors: Chunhua Liao, Jinfei Wang, Bo Shan, Yang Song, Yongjun He, Taifeng Dong
Abstract:
Winter wheat is one of the main crops in Canada. Forecasting within-field variability of winter wheat yield at the early stages is essential for precision farming. However, crop yield modelling based on high spatial resolution satellite data is generally affected by the lack of continuous satellite observations, resulting in reduced generalization ability of the models and increased difficulty of crop yield forecasting at the early stages. In this study, the correlations between Sentinel-2 data (vegetation indices and reflectance) and yield data collected by a combine harvester were investigated, and a generalized multivariate linear regression (MLR) model was built and tested with data acquired in different years. It was found that the four-band reflectance (blue, green, red, near-infrared) performed better than the vegetation indices (NDVI, EVI, WDRVI and OSAVI) in wheat yield prediction. The optimum phenological stage for wheat yield prediction with the highest accuracy was the period from the end of flowering to the beginning of the filling stage. The best MLR model was therefore built to predict wheat yield before harvest using Sentinel-2 data acquired at the end of the flowering stage. Further, to improve the yield prediction at the early stages, three simple unsupervised domain adaptation (DA) methods were adopted to transform the reflectance data at the early stages to the optimum phenological stage. Winter wheat yield prediction using multiple vegetation indices showed higher accuracy than using a single vegetation index. The optimum stage for winter wheat yield forecasting varied between fields when using vegetation indices, while it was consistent when using multispectral reflectance, and the optimum stage for winter wheat yield prediction was at the end of the flowering stage. The average testing RMSE of the MLR model at the end of the flowering stage was 604.48 kg/ha. Near the booting stage, the average testing RMSE of yield prediction using the best MLR was reduced to 799.18 kg/ha when applying the mean matching domain adaptation approach to transform the data to the target domain (the end of flowering), compared to 1140.64 kg/ha using the original data with the models developed at the booting stage directly ('MLR at the early stage'). This study demonstrated that simple mean matching (MM) performed better than the other DA methods, and that 'DA then MLR at the optimum stage' performed better than 'MLR directly at the early stages' for winter wheat yield forecasting. The results indicate that domain adaptation has great potential in near real-time crop yield forecasting at the early stages using remote sensing data.
Keywords: wheat yield prediction, domain adaptation, Sentinel-2, within-field scale
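A compact sketch of the 'DA then MLR at the optimum stage' idea: fit the regression on optimum-stage reflectance, then shift the early-stage bands into that domain before predicting. Interpreting mean matching as a per-band mean shift is our assumption, and all data below are synthetic.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)

# Synthetic 4-band reflectance (blue, green, red, NIR) per pixel.
X_flowering = rng.uniform(0.02, 0.45, (500, 4))            # optimum-stage domain
y = X_flowering @ np.array([-900, 400, -1200, 2500]) + 4000 + rng.normal(0, 300, 500)
X_booting = X_flowering * 0.8 + 0.05                       # shifted early-stage domain

mlr = LinearRegression().fit(X_flowering, y)               # model at the optimum stage

# Mean matching: align each early-stage band mean with the target-domain mean.
X_adapted = X_booting + (X_flowering.mean(axis=0) - X_booting.mean(axis=0))
yield_early = mlr.predict(X_adapted)                       # forecast near booting
print(yield_early[:5])
```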
Procedia PDF Downloads 63
382 Development of a Framework for Assessing Public Health Risk Due to Pluvial Flooding: A Case Study of Sukhumvit, Bangkok
Authors: Pratima Pokharel
Abstract:
When sewers overflow due to rainfall in urban areas, public health risks arise when individuals are exposed to the contaminated floodwater. Nevertheless, the extent to which such infections pose a risk to public health is still unclear. This study analyzed reported diarrheal cases by month and age in Bangkok, Thailand. The results showed that more cases are reported in the wet season than in the dry season. It was also found that in Bangkok, the probability of infection with diarrheal diseases in the wet season is higher for the age group between 15 and 44. The probability of infection is highest for children under 5 years, but they are not influenced by wet weather. Further, this study introduced vulnerability factors that lead to health risks from urban flooding, and identified some vulnerability variables that contribute to health risks from flooding. Thus, for the vulnerability analysis, the study chose two variables that contribute to health risk: economic status and age. Assuming that people's economic status depends on the type of house they live in, the study shows the spatial distribution of economic status in the vulnerability maps. The vulnerability map shows that people living in Sukhumvit have low vulnerability to health risks with respect to the types of houses they live in. In addition, the probability of infection with diarrhea was analyzed by age. Moreover, a field survey was carried out to validate the vulnerability of people. It showed that health vulnerability depends on economic status, income level, and education. The results depict that people with low income and poor living conditions are more vulnerable to health risks. Further, the study carried out 1D hydrodynamic advection-dispersion modelling with 2-year rainfall events to simulate the dispersion of fecal coliform concentration in the drainage network, as well as 1D/2D hydrodynamic modelling to simulate the overland flow. The 1D results show higher concentrations for dry weather flows and a large dilution of concentration at the commencement of a rainfall event, resulting in a drop in concentration due to the runoff generated after rainfall; the model also produced flood depth, flood duration, and fecal coliform concentration maps, which were transferred to ArcGIS to produce hazard and risk maps. In addition, the study ran 5-year and 10-year rainfall simulations to show the variation in health hazards and risks. It was found that even though the hazard coverage is highest with the 10-year rainfall event among the three rainfall events, the risk was observed to be the same for the 5-year and 10-year rainfall events.
Keywords: urban flooding, risk, hazard, vulnerability, health risk, framework
Procedia PDF Downloads 74
381 Landscape Pattern Evolution and Optimization Strategy in Wuhan Urban Development Zone, China
Abstract:
With the rapid urbanization process in China, environmental protection is under severe pressure. Analyzing and optimizing the landscape pattern is thus an important measure to ease the pressure on the ecological environment. This paper takes the Wuhan Urban Development Zone as the research object and studies its landscape pattern evolution and a quantitative optimization strategy. First, remote sensing image data from 1990 to 2015 were interpreted using Erdas software. Next, landscape pattern indices at the landscape, class, and patch levels were studied based on Fragstats. Then, five ecological environment indicators based on the National Environmental Protection Standard of China were selected to evaluate the impact of landscape pattern evolution on the ecological environment. Besides, the cost distance analysis of ArcGIS was applied to simulate wildlife migration, thereby indirectly measuring the improvement of ecological environment quality. The results show that the area of construction land increased by 491%, while bare land, sparse grassland, forest, farmland, and water decreased by 82%, 47%, 36%, 25%, and 11%, respectively; they were mainly converted into construction land. At the landscape level, the landscape indices all showed a downward trend: the number of patches (NP), landscape shape index (LSI), connection index (CONNECT), Shannon's diversity index (SHDI), and aggregation index (AI) decreased by 2778, 25.7, 0.042, 0.6, and 29.2%, respectively, indicating that the NP, the degree of aggregation, and the landscape connectivity declined. At the class level, for construction land and forest, CPLAND, TCA, AI, and LSI increased, but the distribution statistics of core area (CORE_AM) decreased; for farmland, water, sparse grassland, and bare land, CPLAND, TCA, DIVISION, patch density (PD), and LSI declined, yet patch fragmentation and CORE_AM increased. At the patch level, the patch area, patch perimeter, and shape index of water, farmland, and bare land continued to decline. The three indices of forest patches increased overall, those of sparse grassland decreased as a whole, and those of construction land increased. It is obvious that urbanization greatly influenced the landscape evolution. The ecological diversity and landscape heterogeneity of ecological patches clearly dropped, and the Habitat Quality Index continuously declined by 14%. Therefore, an optimization strategy based on greenway network planning is raised for discussion. This paper contributes to the study of landscape pattern evolution in planning and design and to research on the spatial layout of urbanization.
Keywords: landscape pattern, optimization strategy, ArcGIS, Erdas, landscape metrics, landscape architecture
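Two of the Fragstats-style metrics used above, NP and SHDI, are straightforward to reproduce from a classified raster; the toy sketch below uses an invented land-cover grid with illustrative class codes.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(4)
raster = rng.integers(0, 3, (200, 200))   # 0=water, 1=forest, 2=construction

# NP: number of patches per class via connected-component labelling.
for cls in np.unique(raster):
    _, n_patches = ndimage.label(raster == cls)
    print(f"class {cls}: NP = {n_patches}")

# SHDI: Shannon's diversity index over class area proportions.
_, counts = np.unique(raster, return_counts=True)
p = counts / counts.sum()
print(f"SHDI = {-np.sum(p * np.log(p)):.3f}")
```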
Procedia PDF Downloads 163
380 Artificial Intelligence: Reimagining Education
Authors: Silvia Zanazzi
Abstract:
Artificial intelligence (AI) has become an integral part of our world, transitioning from scientific exploration to practical applications that impact daily life. The emergence of generative AI is reshaping education, prompting new questions about the role of teachers, the nature of learning, and the overall purpose of schooling. While AI offers the potential to optimize teaching and learning processes, concerns about discrimination and bias arising from training data and algorithmic decisions persist. There is a risk of a disconnect between the rapid development of AI and the goal of building inclusive educational environments. The prevailing discourse on AI in education often prioritizes efficiency and individual skill acquisition. This narrow focus can undermine the importance of collaborative learning and shared experiences. A growing body of research challenges this perspective, advocating for AI that enhances, rather than replaces, human interaction in education. This study critically examines the relationship between AI and education. A review of existing research will identify both the potential benefits and the risks of AI implementation. The goal is to develop a framework that supports the ethical and effective integration of AI into education, ensuring it serves the needs of all learners. The theoretical reflection will be developed on the basis of a review of national and international scientific literature on artificial intelligence in education. The primary objective is to curate a selection of critical contributions from diverse disciplinary perspectives and/or an inter- and transdisciplinary viewpoint, providing a state-of-the-art overview and a critical analysis of potential future developments. Subsequently, the thematic analysis of these contributions will enable the creation of a framework for understanding and critically analyzing the role of artificial intelligence in schools and education, highlighting promising directions and potential pitfalls. The expected results are (1) a classification of the cognitive biases present in representations of AI in education and the associated risks, and (2) a categorization of potentially beneficial interactions between AI applications and teaching and learning processes, including those already in use or under development. While not exhaustive, the proposed framework will serve as a guide for critically exploring the complexity of AI in education. It will help to reframe the dystopian visions often associated with technology and facilitate discussions on fostering synergies that balance the 'dream' of quality education for all with the realities of AI implementation. The discourse on artificial intelligence in education, by highlighting reductionist models rooted in fragmented and utilitarian views of knowledge, has the merit of stimulating the construction of alternative perspectives that can 'return' teaching and learning to education, human growth, and the well-being of individuals and communities.
Keywords: education, artificial intelligence, teaching, learning
Procedia PDF Downloads 19
379 Evaluation of Low-Global Warming Potential Refrigerants in Vapor Compression Heat Pumps
Authors: Hamed Jafargholi
Abstract:
Global warming presents an immense environmental threat, causing detrimental impacts on ecological systems and putting coastal areas at risk. Implementing efficient measures to minimize greenhouse gas emissions and the use of fossil fuels is essential to reducing global warming. Vapor compression heat pumps provide a practical method for harnessing energy from waste heat sources and reducing energy consumption. However, traditional working fluids used in these heat pumps generally have a significant global warming potential (GWP), which might cause severe greenhouse effects if they are released. The emphasis on low-GWP (below 150) refrigerants aims to further the development of vapor compression heat pumps. A classification system for vapor compression heat pumps is offered, with different boundaries based on the required heat temperature and advancements in heat pump technology. A heat pump can be classified as a low temperature heat pump (LTHP), medium temperature heat pump (MTHP), high temperature heat pump (HTHP), or ultra-high temperature heat pump (UHTHP). The HTHP/UHTHP border is 160 °C, and the MTHP/HTHP and LTHP/MTHP limits are 100 and 60 °C, respectively. The refrigerant is one of the most important parts of a vapor compression heat pump system. Presently, the main criteria for choosing a refrigerant are based on ozone depletion potential (ODP) and GWP, with GWP as low as possible and ODP equal to zero. Pure low-GWP refrigerants, such as natural refrigerants (R718 and R744), hydrocarbons (R290, R600), hydrofluorocarbons (R152a and R161), hydrofluoroolefins (R1234yf, R1234ze(E)), and hydrochlorofluoroolefins (R1233zd(E)), were selected as candidates for vapor compression heat pump systems based on these selection principles. The performance, characteristics, and potential uses of these low-GWP refrigerants in heat pump systems are investigated in this paper. As vapor compression heat pumps with pure low-GWP refrigerants become more common, more and more low-grade heat can be recovered, meaning that energy consumption would decrease. The research outputs showed that the refrigerants R718 for UHTHP application, R1233zd(E) for HTHP application, R600, R152a, R161, and R1234ze(E) for MTHP application, and R744, R290, and R1234yf for LTHP application are appropriate. The selection of an appropriate refrigerant should, in fact, take into consideration two different points of view, environmental and thermodynamic. It might be argued that, depending on the situation, a trade-off between these two should always be considered. The environmental consideration is now far stronger than it was previously, according to the European Union regulations. This will promote sustainable energy consumption and social development in addition to assisting in the reduction of greenhouse gas emissions and the management of global warming.
Keywords: vapor compression, global warming potential, heat pumps, greenhouse
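The temperature classification given above (class borders at 60, 100 and 160 °C) can be captured in a few lines; whether each border value belongs to the lower or the upper class is our assumption.

```python
def classify_heat_pump(supply_temp_c: float) -> str:
    """Class boundaries as stated in the text: 60, 100 and 160 deg C."""
    if supply_temp_c <= 60:
        return "LTHP"    # low temperature heat pump
    if supply_temp_c <= 100:
        return "MTHP"    # medium temperature heat pump
    if supply_temp_c <= 160:
        return "HTHP"    # high temperature heat pump
    return "UHTHP"       # ultra-high temperature heat pump

for t in (45, 85, 140, 180):
    print(t, classify_heat_pump(t))
```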
Procedia PDF Downloads 32
378 A Crowdsourced Homeless Data Collection System and Its Econometric Analysis: Strengthening Inclusive Public Administration Policies
Authors: Praniil Nagaraj
Abstract:
This paper proposes a method to collect homeless data using crowdsourcing and presents an approach to analyze the data, demonstrating its potential to strengthen existing and future policies aimed at promoting socio-economic equilibrium. This paper's contributions can be categorized into three main areas. Firstly, a unique method for collecting homeless data is introduced, utilizing a user-friendly smartphone app (currently available for Android). The app enables the general public to quickly record information about homeless individuals, including the number of people and details about their living conditions. The collected data, including date, time, and location, is anonymized and securely transmitted to the cloud. It is anticipated that an increasing number of users motivated to contribute to society will adopt the app, thus expanding the data collection efforts. Duplicate data is addressed through simple classification methods, and historical data is utilized to fill in missing information. The second contribution of this paper is the description of the data analysis techniques applied to the collected data. By combining this new data with existing information, statistical regression analysis is employed to gain insights into various aspects, such as distinguishing between unsheltered and sheltered homeless populations, as well as examining their correlation with factors like unemployment rates, housing affordability, and labor demand. Initial data is collected in San Francisco, while pre-existing information is drawn from three cities: San Francisco, New York City, and Washington D.C., facilitating the simulations. The third contribution focuses on demonstrating the practical implications of the data processing results. The challenges faced by key stakeholders, including charitable organizations and local city governments, are taken into consideration. Two case studies are presented as examples. The first case study explores improving the efficiency of the distribution of food and necessities, as well as medical assistance, driven by charitable organizations. The second case study examines the correlation between micro-geographic budget expenditure by local city governments and homeless information to justify budget allocation and expenditures. The ultimate objective of this endeavor is to enable the continuous enhancement of the quality of life for the underprivileged. It is hoped that through increased crowdsourcing of data from the public, the Generosity Curve and the Need Curve will intersect, leading to a better world for all.
Keywords: crowdsourcing, homelessness, socio-economic policies, statistical analysis
Procedia PDF Downloads 41377 Kinematical Analysis of Tai Chi Chuan Players during Gait and Balance Test and Implication in Rehabilitation Exercise
Authors: Bijad Alqahtani, Graham Arnold, Weijie Wang
Abstract:
Background—Tai Chi Chuan (TCC) is a type of traditional Chinese martial art and is considered beneficial to physical fitness. Advanced motion analysis techniques are routinely used in clinical assessment, yet little research has so far addressed the biomechanical assessment of TCC players in terms of gait and balance using motion analysis. Objectives—The aim of this study was to investigate whether TCC improves lower limb condition and balance ability using state-of-the-art motion analysis technologies, i.e., a motion capture system, electromyography, and force platforms. Methods—Twenty TCC practitioners (9 male, 11 female), aged 42-77 years and weighing 56.2-119 kg, and eighteen age-matched Non-TCC participants (7 male, 11 female), aged 43-78 years and weighing 50-110 kg, were recruited as a control group. Their gait and balance data were collected using Vicon Nexus® to obtain gait parameters and kinematic parameters of the hip, knee, and ankle joints in three planes for both limbs. Participants stood on force platforms to perform a single-leg balance test and then walked along a 10 m walkway at their comfortable speed. Participants performed 5 trials of single-leg balance on the dominant side, 3 trials of the four square step balance test, and 10 trials of walking. From the recorded trials, three good ones were analyzed using the Vicon Plug-in-Gait model to obtain gait parameters, e.g., walking speed, cadence, and stride length, and joint parameters, e.g., joint angles, forces, and moments. Results—Comparing the temporal-spatial variables of the TCC and Non-TCC subjects revealed a significant difference (p < 0.05) between the groups. Moreover, TCC participants showed significant differences in ankle, hip, and knee joint kinematics in the sagittal, coronal, and transverse planes, such as ankle angle (19.90±19.54 deg for TCC versus 15.34±6.50 deg for Non-TCC) and knee angle (14.96±6.40 deg for TCC versus 17.63±5.79 deg for Non-TCC) in the transverse plane. The groups also differed significantly in the single-leg balance test: TCC participants maintained single-leg stance longer (20.85±10.53 s) than the Non-TCC group (13.39±8.78 s). No significant difference was found between the groups in the four square step balance test. Conclusion—Significant differences were found between Tai Chi Chuan and Non-Tai Chi Chuan participants in various aspects of gait analysis and balance testing. Based on biomechanical parameters such as joint kinematics, gait parameters, and the single-leg stance balance test, these findings suggest that Tai Chi Chuan could improve lower limb condition and reduce the risk of falls in the elderly. Keywords: gait analysis, kinematics, single leg stance, Tai Chi Chuan
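A sketch of the kind of between-group comparison reported above, in Python; the means, standard deviations, and sample sizes are the single-leg stance values from the abstract, but the choice of Welch's unequal-variance t-test is an assumption, since the abstract does not state which test was used:

    from scipy import stats

    # Single-leg stance times as reported: TCC 20.85 ± 10.53 s (n = 20),
    # Non-TCC 13.39 ± 8.78 s (n = 18).
    t, p = stats.ttest_ind_from_stats(
        mean1=20.85, std1=10.53, nobs1=20,
        mean2=13.39, std2=8.78, nobs2=18,
        equal_var=False,  # Welch's test: do not assume equal variances
    )
    print(f"t = {t:.2f}, p = {p:.3f}")  # p < 0.05, consistent with the reported difference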
Procedia PDF Downloads 124376 Artificial Law: Legal AI Systems and the Need to Satisfy Principles of Justice, Equality and the Protection of Human Rights
Authors: Begum Koru, Isik Aybay, Demet Celik Ulusoy
Abstract:
The discipline of law is quite complex and has its own terminology. Apart from written legal rules, there is also living law, which refers to legal practice. Basic legal rules aim at the happiness of individuals in social life and have different characteristics in different branches, such as public or private law. On the other hand, law is a national phenomenon: the law of one nation and the legal system applied on the territory of another nation may be completely different, and people who are experts in a particular field of law in one country may have insufficient expertise in the law of another. Today, in addition to the local nature of law, international and even supranational legal rules are applied in order to protect basic human values and ensure the protection of human rights around the world. Systems that offer algorithmic solutions to legal problems using artificial intelligence (AI) tools may well produce very meaningful results in terms of human rights. However, the algorithms should not be developed by computer experts alone; they also need the contribution of people who are familiar with the law, values, judicial decisions, and even the social and political culture of the society for which they will provide solutions. Otherwise, even if an algorithm works perfectly, it may not be compatible with the values of the society in which it is applied. The latest developments involving the use of AI techniques in legal systems indicate that artificial law will emerge as a new field within the discipline of law. AI systems are already being applied in the field of law, with examples such as prediction of judicial decisions, text summarization, decision support systems, and classification of documents. Algorithms for legal systems employing AI tools, especially for predicting judicial decisions and for decision support, have the capacity to produce automatic decisions instead of judges. When the judge is removed from this equation, artificial intelligence-made law, created by an intelligent algorithm on its own, emerges, whether the domain is national or international law. The aim of this work is to provide a general analysis of this new topic. Such an analysis requires both a literature survey and perspectives from both computer experts and lawyers. In some societies, the use of prediction or decision support systems may be useful for integrating international human rights safeguards; in this case, artificial law can serve to produce more comprehensive and human rights-protective results than written or living law. In non-democratic countries, it may even be argued that direct decisions and artificial intelligence-made law would be more protective than a mere decision 'support' system. Since the values of law are directed towards 'human happiness or well-being', AI algorithms should always be capable of serving this purpose and be grounded in the rule of law, the principles of justice and equality, and the protection of human rights. Keywords: AI and law, artificial law, protection of human rights, AI tools for legal systems
Procedia PDF Downloads 73375 Identifying Protein-Coding and Non-Coding Regions in Transcriptomes
Authors: Angela U. Makolo
Abstract:
Protein-coding and Non-coding regions determine the biology of a sequenced transcriptome. Research advances have shown that Non-coding regions are important in disease progression and clinical diagnosis, yet existing bioinformatics tools have targeted Protein-coding regions alone, creating challenges for gaining biological insights from transcriptome sequence data. These tools are also limited to computationally intensive sequence alignment, which is inadequate and less accurate for identifying both Protein-coding and Non-coding regions. Alignment-free techniques can overcome this limitation. This study was therefore designed to develop an efficient, sequence alignment-free model for identifying both Protein-coding and Non-coding regions in sequenced transcriptomes. Feature grouping and randomization procedures were applied to the input transcriptomes (37,503 data points). Successive iterations were carried out to compute the gradient vector that converged the developed Protein-coding and Non-coding Region Identifier (PNRI) model to the approximate coefficient vector. The logistic regression algorithm was used with a sigmoid activation function, and a parameter vector was estimated for every sample among the 37,503 data points in a bid to reduce the generalization error and cost. Maximum Likelihood Estimation (MLE) was used for parameter estimation by taking the log-likelihood of six features and combining them into a summation function. Dynamic thresholding was used to classify the Protein-coding and Non-coding regions, and the Receiver Operating Characteristic (ROC) curve was determined. The generalization performance of PNRI was assessed in terms of F1 score, accuracy, sensitivity, and specificity, and its average generalization performance was determined using a benchmark of multi-species organisms. The generalization error for identifying Protein-coding and Non-coding regions decreased from 0.514 to 0.508 and then to 0.378 after three iterations. The cost (the difference between the predicted and the actual outcome) also decreased, from 1.446 to 0.842 and then to 0.718 over the first, second, and third iterations. The iterations terminated at the 390th epoch with an error of 0.036 and a cost of 0.316. The computed elements of the parameter vector that maximized the objective function were 0.043, 0.519, 0.715, 0.878, 1.157, and 2.575. The PNRI achieved an area under the ROC curve of 0.97, indicating strong predictive ability, and identified Protein-coding and Non-coding regions with an F1 score of 0.970, accuracy of 0.969, sensitivity of 0.966, and specificity of 0.973. On 13 non-human multi-species model organisms, the average generalization performance of the traditional method was 74.4%, while that of the developed model was 85.2%, making the developed model better at identifying Protein-coding and Non-coding regions in transcriptomes. The developed model efficiently identifies Protein-coding and Non-coding transcriptomic regions and could be used in genome annotation and in the analysis of transcriptomes. Keywords: sequence alignment-free model, dynamic thresholding classification, input randomization, genome annotation
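A minimal sketch of the core of such a model, in Python: logistic regression on six features trained by gradient ascent on the log-likelihood, with a threshold applied afterwards. The data, learning rate, and fixed 0.5 threshold are placeholders (the PNRI uses dynamic thresholding), and the reference weights merely echo the reported parameter vector for illustration:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 6))                 # six features per region (placeholder)
    w_ref = np.array([0.043, 0.519, 0.715, 0.878, 1.157, 2.575])  # reported vector
    y = (sigmoid(X @ w_ref) > 0.5).astype(float)   # synthetic coding/non-coding labels

    w = np.zeros(6)
    lr = 0.1                                       # placeholder learning rate
    for epoch in range(390):                       # abstract reports termination at epoch 390
        p = sigmoid(X @ w)
        w += lr * X.T @ (y - p) / len(y)           # ascend the log-likelihood gradient

    pred = (sigmoid(X @ w) >= 0.5).astype(float)   # fixed threshold stands in for dynamic one
    print("training accuracy:", (pred == y).mean())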
Procedia PDF Downloads 66374 Reflections of Narrative Architecture in Transformational Representations on the Architectural Design Studio
Authors: M. Mortas, H. Asar, P. Dursun Cebi
Abstract:
The visionary works of architectural representation in the present situation of the 21st century are practiced through methodologies that try to expose the intellectual and theoretical essences of futurologist positions revealed by this era's interactions. The expansion of the conceptual and contextual inputs of an architectural design representation depends on the depth of its critical attitudes, its interaction with concepts such as experience, meaning, affection, psychology, perception, and aura, and its communication with spatial, cultural, and environmental factors. The purpose of this research is to offer methodological application areas for the design dimensions of experiential practices in architectural design studios, by focusing on the architectural representative narrations of 'transformation,' 'metamorphosis,' 'morphogenesis,' 'in-betweenness,' 'superposition,' and 'intertwine,' which affect and are affected by today's spatiotemporal hybridizations of architecture. The narrative representations and visual theory paradigms of the designers are examined under the main title of 'transformation' in order to investigate the dismantlings and decodings of these visionary and critical representations. The case studies are drawn from Neil Spiller, Bryan Cantley, Perry Kulper, and Dan Slavinsky's transformative, morphogenetic representations. The theoretical dismantlings and decodings obtained from these artists' contemporary architectural representations are applied in structural design studios as alternative methodologies for approaching architectural design processes, in order to enrich, differentiate, diversify, and 'transform' the design process precedents used so far. The research aims to show architecture students how they can reproduce, rethink, and reimagine their own representative lexicons, and thus the languages of their architectural imaginations, with regard to the newly perceived tectonics of prosthetics, biotechnology, synchronicity, nanotechnology, or machinery in various experiential design workshops. The methodology of this work can be thought of as revealing the technical and theoretical tools, lexicons, and meanings of the contemporary-visionary architectural representations of our decade, drawing on the essential contents of hermeneutics, etymology, existentialism, post-humanism, phenomenology, and avant-gardism to reinterpret the transformative representations of the visual theorists of our decade. The value of this study may lie in revealing the superposed and overlapping atmospheres of futurologist architectural representations for students who need to rethink transcultural, deterritorialized, and post-humanist critical theories in order to create and use their own representative visual lexicons for their architectural soft machines and beings by criticizing the now, so as to be imaginative for the future of architecture. Keywords: architectural design studio, visionary lexicon, narrative architecture, transformative representation
Procedia PDF Downloads 141373 Seasonal Variability of M₂ Internal Tides Energetics in the Western Bay of Bengal
Authors: A. D. Rao, Sachiko Mohanty
Abstract:
Internal Waves (IWs) are generated by the flow of the barotropic tide over rapidly varying and steep topographic features such as the continental shelf slope, subsurface ridges, and seamounts. IWs of tidal frequency are generally known as internal tides. These waves have a significant influence on the vertical density structure and hence cause mixing in the region. Such waves are also important in submarine acoustics, underwater navigation, offshore structures, ocean mixing, and biogeochemical processes over the shelf-slope region. The seasonal variability of internal tides in the Bay of Bengal, with special emphasis on their energetics, is examined using the three-dimensional MITgcm model. The numerical simulations are performed for different periods covering August-September 2013, November-December 2013, and March-April 2014, representing the monsoon, post-monsoon, and pre-monsoon seasons, respectively, during which high-temporal-resolution in-situ data sets are available. The model is first validated through spectral estimates of density and the baroclinic velocities. From these estimates, it is inferred that internal tides at the semi-diurnal frequency dominate in both observations and model simulations for November-December and March-April, whereas in August the spectral maximum occurs near the inertial frequency at all available depths. The observed vertical structure of the baroclinic velocities and their magnitudes are well captured by the model. EOF analysis is performed to decompose the zonal and meridional baroclinic tidal currents into vertical modes. The analysis suggests that about 70-80% of the total variance comes from the Mode-1 semi-diurnal internal tide in both the observations and the model simulations. The first three modes are sufficient to describe most of the variability of the semidiurnal internal tides, as they represent 90-95% of the total variance in all seasons. The phase speed, group speed, and wavelength are found to be maximum in the post-monsoon season compared to the other two seasons. The model simulations suggest that internal tides are generated all along the shelf-slope regions and propagate away from the generation sites in all months. The simulated energy dissipation rate indicates that it peaks at the generation sites, so local mixing due to the internal tide is maximum there. The spatial distribution of available potential energy is found to be maximum in November (20 kg/m²) in the northern BoB and minimum in August (14 kg/m²). Detailed energy budget calculations are made for all seasons and the results analysed. Keywords: available potential energy, baroclinic energy flux, internal tides, Bay of Bengal
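A sketch of the EOF decomposition used to split baroclinic currents into vertical modes, done here via SVD in Python; the synthetic velocity matrix (a dominant Mode-1 structure oscillating at the M2 period plus noise) is a placeholder for the observed or modelled currents, and the amplitudes are invented:

    import numpy as np

    rng = np.random.default_rng(1)
    nt, nz = 2000, 24                     # hourly time samples x depth levels (placeholders)
    z = np.linspace(0.0, np.pi, nz)
    t = np.arange(nt)
    # Mode-1 vertical structure sin(z) oscillating at the M2 period (12.42 h), plus noise.
    u = 0.3 * np.outer(np.sin(2.0 * np.pi * t / 12.42), np.sin(z))
    u += 0.03 * rng.normal(size=(nt, nz))

    u -= u.mean(axis=0)                   # anomalies: remove the time mean at each depth
    U, s, Vt = np.linalg.svd(u, full_matrices=False)
    var_frac = s**2 / np.sum(s**2)        # variance explained by each vertical mode (EOF)
    print("Modes 1-3 variance fractions:", np.round(var_frac[:3], 3))

With real data the fractions would resemble the abstract's figures (roughly 70-80% in Mode-1 and 90-95% in the first three modes); the synthetic example above is deliberately Mode-1 dominated.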
Procedia PDF Downloads 168372 Approach to Freight Trip Attraction Areas Classification, in Developing Countries
Authors: Adrián Esteban Ortiz-Valera, Angélica Lozano
Abstract:
In developing countries, informal trade is significant but has been little studied in the urban freight transport (UFT) context, although it poses a challenge due to the unaccounted-for demand it produces and the operational limitations it imposes. Hence, UFT operational improvements (initiatives) and freight attraction models for developing countries must consider informal trade. A four-phase approach for characterizing commercial areas in developing countries (considering both formal and informal establishments) is proposed and applied to ten areas in Mexico City. This characterization is required to calculate real freight trip attraction and then select and/or adapt suitable initiatives. Phase 1 delimits the study area; the following information is obtained for each establishment in a potential area: location (geographic coordinates), industrial sector, industrial subsector, and number of employees. Phase 2 characterizes the study area and proposes a set of indicators, which allows a broad view of the operations and constraints of UFT in the area. Phase 3 classifies the study area according to seven indicators, each representing a level of conflict in the area due to the presence of formal (registered) and informal establishments on sidewalks and streets, affecting urban freight transport and other activities. Phase 4 determines preliminary initiatives that could be implemented in the study area to improve UFT operation; relating the indicators to candidate initiatives allows a preliminary selection. This relation requires knowing: a) the problems in the area (congested streets, lack of parking space for freight vehicles, etc.); b) the factors that limit initiatives due to informal establishments (streets with reduced space for freight vehicles; inability to move or park during certain periods, among others); c) the problems in the area due to its physical characteristics; and d) the factors that limit initiatives due to the area's regulations. Several differences were observed among the study areas: as the indicator values increase, the areas tend to be less ordered and the limitations on initiatives grow, leaving fewer feasible initiatives. In ordered areas (similar to the commercial areas of developed countries), current techniques for estimating freight trip attraction (FTA) can be applied directly; however, in areas where the level of order is lower due to the presence of informal trade, this is not recommended because the real FTA would not be estimated. Therefore, a technique that considers the characteristics of such areas in developing countries when collecting data and estimating FTA is required. This estimation can serve as the basis for proposing feasible initiatives for such zones. The proposed approach provides a wide view of the needs of commercial areas in developing countries; knowledge of these needs would allow UFT operation to be improved and its negative impacts minimized. Keywords: freight initiatives, freight trip attraction, informal trade, urban freight transport
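One way Phase 3's indicator-based classification might be expressed in code (a Python sketch; the indicator names, binary scoring, cut-offs, and class labels are all assumptions, since the abstract only states that seven conflict indicators are used):

    def classify_area(indicators: dict) -> str:
        """Score a commercial area by how many of seven conflict indicators are
        present. Higher scores suggest a less ordered area, where fewer UFT
        initiatives are feasible and standard FTA estimation is unreliable."""
        score = sum(bool(v) for v in indicators.values())
        if score <= 2:
            return "ordered"        # standard FTA estimation techniques apply
        if score <= 4:
            return "semi-ordered"
        return "disordered"         # FTA underestimated without informal trade data

    area = {  # hypothetical indicator readings for one study area
        "sidewalk_occupation": True, "street_vending": True,
        "illegal_parking": True, "loading_conflicts": False,
        "restricted_access": False, "congestion": True,
        "regulatory_gaps": False,
    }
    print(classify_area(area))  # -> semi-ordered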
Procedia PDF Downloads 139371 Socio-Economic and Psychological Factors of Moscow Population Deviant Behavior: Sociological and Statistical Research
Authors: V. Bezverbny
Abstract:
The relevance of the project stems from the steady growth of deviant behavior statistics among Moscow citizens. In recent years, the socioeconomic health, wealth, and life expectancy of Moscow residents have been improving steadily, yet crime and drug addiction have grown seriously. Another serious problem for Moscow is the economic stratification of the population: the cost of otherwise identical residential areas differs by a factor of 2.5. The project is aimed at comprehensive research and the development of a methodology for evaluating the main factors and causes of growing deviant behavior in Moscow. Its main objective is to find links between the quality of the urban environment and the dynamics of citizens' deviant behavior at the regional and municipal levels, using statistical research methods and GIS modeling. The research conducted made it possible to: 1) evaluate the dynamics of deviant behavior across Moscow's administrative districts; 2) describe the causes of increasing crime, drug addiction, alcoholism, and suicide tendencies among the city population; 3) develop a classification of city districts based on the crime rate; 4) create a statistical database containing the main indicators of deviant behavior among the Moscow population in 2010-2015, including information on crime levels, alcoholism, drug addiction, and suicides; 5) present statistical indicators characterizing the dynamics of deviant behavior as the city's territory expands; 6) analyze the main sociological theories and factors of deviant behavior in order to specify the types of deviation; 7) consider the main theoretical statements of urban sociology devoted to the causes of deviant behavior in megalopolis conditions. To explore how the factors of deviant behavior are differentiated, a questionnaire was developed, and a sociological survey involving more than 1,000 people from different districts of the city was conducted. The survey made it possible to study the socio-economic and psychological factors of deviant behavior; it also included Moscow residents' open-ended answers about the most pressing problems in their districts and their reasons for wanting to leave. The results of the survey lead to the conclusion that the main factors of deviant behavior in Moscow are a high level of social inequality, large numbers of illegal migrants and homeless people, the proximity of large transport hubs and stations, ineffective police work, the availability of alcohol and accessibility of drugs, a low level of psychological comfort for Moscow citizens, and a large number of construction projects. Keywords: deviant behavior, megapolis, Moscow, urban environment, social stratification
Procedia PDF Downloads 191370 Household Climate-Resilience Index Development for the Health Sector in Tanzania: Use of Demographic and Health Surveys Data Linked with Remote Sensing
Authors: Heribert R. Kaijage, Samuel N. A. Codjoe, Simon H. D. Mamuya, Mangi J. Ezekiel
Abstract:
There is strong evidence that the climate has changed significantly, affecting various sectors including public health. The recommended feasible solution is to adopt development trajectories that combine mitigation and adaptation measures to improve resilience pathways. This approach demands consideration of the complex interactions between climate and social-ecological systems. While other sectors such as agriculture and water have developed climate resilience indices, the public health sector in Tanzania is still lagging behind. The aim of this study was to find out how Demographic and Health Surveys (DHS) linked with remote sensing (RS) technology and meteorological information can be used as tools to inform climate change resilient development and evaluation in the health sector. A methodological review was conducted in which a number of studies were content-analyzed to find appropriate indicators and indices for household climate resilience and an approach for integrating them. These indicators were critically reviewed, listed, filtered, and their sources determined. Preliminary identification and ranking of indicators were conducted using a participatory pairwise-weighting approach with national stakeholders selected from meetings and conferences on human health and climate change sciences in Tanzania. DHS datasets were retrieved from the MEASURE Evaluation project, processed, and critically analyzed for possible climate change indicators, and other sources for indicators of climate change exposure were also identified. For preliminary reporting, the operationalization of selected indicators was discussed to produce the methodological approach to be used in a resilience comparative analysis study. It was found that the household climate resilience index depends on the combination of three indices, namely Household Adaptive and Mitigation Capacity (HC), Household Health Sensitivity (HHS), and Household Exposure Status (HES). It was also found that DHS alone cannot support resilience evaluation unless integrated with other data sources, notably flooding data as a measure of vulnerability, remote sensing imagery of the Normalized Difference Vegetation Index (NDVI), and meteorological data (deviation from rainfall patterns). It can be concluded that if these indices retrieved from DHS data sets are computed and scientifically integrated, they can produce a single climate resilience index, and resilience maps could be generated at different spatial and temporal scales to enhance targeted interventions for climate resilient development and evaluation. However, further studies are needed to test the sensitivity of the index in a resilience comparative analysis among selected regions. Keywords: climate change, resilience, remote sensing, demographic and health surveys
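A sketch of how the three sub-indices might be combined into a single household score, in Python; the aggregation rule (a weighted average with sensitivity and exposure inverted), the equal weights, and the 0-1 scaling are all assumptions, as the abstract only states that the three indices are integrated:

    def climate_resilience_index(hc: float, hhs: float, hes: float,
                                 weights=(1/3, 1/3, 1/3)) -> float:
        """Combine sub-indices scaled to [0, 1].
        hc:  adaptive/mitigation capacity (higher = more resilient)
        hhs: health sensitivity           (higher = less resilient)
        hes: exposure status              (higher = less resilient)
        """
        w_hc, w_hhs, w_hes = weights
        return w_hc * hc + w_hhs * (1.0 - hhs) + w_hes * (1.0 - hes)

    # Example household: strong capacity, moderate sensitivity, high exposure.
    print(round(climate_resilience_index(0.8, 0.4, 0.7), 2))  # -> 0.57

Computing this score per surveyed household would then allow the resilience maps mentioned above to be drawn at different spatial scales.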
Procedia PDF Downloads 164369 Microplastics Accumulation and Abundance Standardization for Fluvial Sediments: Case Study for the Tena River
Authors: Mishell E. Cabrera, Bryan G. Valencia, Anderson I. Guamán
Abstract:
Human dependence on plastic products has led to global pollution by plastic particles ranging in size from 0.001 to 5 millimeters, called microplastics (hereafter, MPs). The abundance of microplastics is used as an indicator of pollution. However, reports of pollution (abundance of MPs) in river sediments do not consider that the accumulation of sediments and MPs depends on the energy of the river: the abundance of microplastics will be underestimated if the sediments analyzed come from places where the river flows with high energy, and overestimated if the sediment comes from places where the river flows with less energy. This bias can generate an error greater than 300% of the MP value reported for the same river, and it should increase when two rivers with different characteristics are compared. Sections where the river flows with higher energy allow sands to be deposited and limit the accumulation of MPs, while sections where the same river has lower energy allow fine sediments such as clays and silts to be deposited and should facilitate the accumulation of MP particles. That is, the abundance of MPs in the same river is under-represented when the sediment analyzed is sand and over-represented when it is silt or clay. The present investigation establishes a protocol that incorporates sample granulometry to calibrate MP quantification and eliminate this over- or under-representation bias (hereafter, granulometric bias). A total of 30 samples were collected, five in each of six work zones. The slope of the sampling points was less than 8 degrees, classifying them as low-slope areas according to the Van Zuidam slope classification. Blanks were used during sampling to estimate possible MP contamination. Samples were dried at 60 degrees Celsius for three days. A flotation technique using sodium metatungstate solution with a density of 2 g/mL was employed to isolate the MPs. For organic matter digestion, 30% hydrogen peroxide and Fenton's reagent were used at a ratio of 6:1 for 24 hours. The samples were stained with rose bengal at a concentration of 200 mg/L and then dried in an oven at 60 degrees Celsius for 1 hour before being identified and photographed under a stereomicroscope (eyepiece magnification 10x, zoom magnification 4x, objective lens magnification 0.35x) for analysis in ImageJ. A total of 630 MP fibers were identified, mainly red, black, blue, and transparent, with an overall average length of 474.310 µm and an overall median length of 368.474 µm. The particle size of the 30 samples was determined using 100 g per sample and sieves with the following apertures: 2 mm, 1 mm, 500 µm, 250 µm, 125 µm, and 63 µm. This sieving allowed a visual evaluation and a more precise quantification of the microplastics present, while the sediment weight in each fraction was recorded, revealing a clear pattern: as the amount of sediment in the < 63 µm fraction increases, a significant increase in the number of MP particles is observed. Keywords: microplastics, pollution, sediments, Tena River
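A sketch of a granulometry-aware standardization of MP counts, in Python; normalizing counts by the mass of the fine (< 63 µm) fraction is an assumption made to illustrate the bias correction, not the authors' published formula, and the fraction masses are invented:

    def standardized_mp_abundance(mp_count: int, fraction_mass_g: dict) -> float:
        """Report MPs per gram of fine (<63 um) sediment rather than per bulk
        sample, so silt/clay-rich (low-energy) and sandy (high-energy) samples
        become comparable."""
        fine_mass = fraction_mass_g.get("<63um", 0.0)
        if fine_mass == 0.0:
            raise ValueError("no fine fraction recovered; abundance undefined")
        return mp_count / fine_mass

    # 100 g sample sieved at 2 mm, 1 mm, 500/250/125/63 um (masses illustrative).
    fractions = {"2mm": 35.0, "1mm": 25.0, "500um": 18.0,
                 "250um": 12.0, "125um": 6.0, "<63um": 4.0}
    print(standardized_mp_abundance(21, fractions))  # -> 5.25 MPs per g of fine sediment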
Procedia PDF Downloads 71368 Dimensionality Reduction in Modal Analysis for Structural Health Monitoring
Authors: Elia Favarelli, Enrico Testi, Andrea Giorgetti
Abstract:
Autonomous structural health monitoring (SHM) of structures and bridges has become a topic of paramount importance for maintenance and safety reasons. This paper proposes a set of machine learning (ML) tools to perform automatic feature selection and anomaly detection in a bridge from vibrational data, and compares different feature extraction schemes to increase accuracy and reduce the amount of data collected. As a case study, the Z-24 bridge is considered because of its extensive database of accelerometric data in both standard and damaged conditions. The proposed framework starts from the first four fundamental frequencies extracted through operational modal analysis (OMA) and clustering, followed by density-based time-domain filtering (tracking). The extracted fundamental frequencies are then fed to a dimensionality reduction block implemented through two different approaches: feature selection (an intelligent multiplexer) that tries to estimate the most reliable frequencies based on statistical features (i.e., mean value, variance, kurtosis), and feature extraction (an auto-associative neural network (ANN)) that combines the fundamental frequencies to extract new damage-sensitive features in a low-dimensional feature space. Finally, one-class classifier (OCC) algorithms perform anomaly detection, trained with standard-condition points and tested with both normal and anomalous ones. In particular, a new anomaly detection strategy is proposed, namely one class classifier neural network two (OCCNN2), which exploits the classification capability of standard classifiers in an anomaly detection problem by finding the standard class (the boundary of the feature space in normal operating conditions) through a two-step approach: coarse and fine boundary estimation. The coarse estimation uses classic OCC techniques, while the fine estimation is performed through a feedforward neural network (NN) that exploits the boundaries estimated in the coarse step. The detection algorithms are then compared with known methods based on principal component analysis (PCA), kernel principal component analysis (KPCA), and the auto-associative neural network (ANN). In many cases, the proposed solution increases performance with respect to the standard OCC algorithms in terms of F1 score and accuracy. In particular, by evaluating the correct features, anomalies can be detected with an accuracy and an F1 score greater than 96% with the proposed method. Keywords: anomaly detection, frequencies selection, modal analysis, neural network, sensor network, structural health monitoring, vibration measurement
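A sketch of the anomaly detection stage, in Python, with scikit-learn's OneClassSVM standing in for the coarse OCC step (the paper's OCCNN2 then refines the boundary with a feedforward NN, omitted here); the frequency values, noise level, and damage shift are illustrative, not taken from the Z-24 database:

    import numpy as np
    from sklearn.svm import OneClassSVM

    rng = np.random.default_rng(2)
    freqs = np.array([3.9, 5.0, 9.8, 10.3])   # illustrative fundamental frequencies (Hz)
    # Rows = monitoring windows, columns = the four tracked frequencies.
    f_standard = rng.normal(loc=freqs, scale=0.05, size=(500, 4))
    f_test = np.vstack([
        rng.normal(loc=freqs, scale=0.05, size=(50, 4)),          # healthy windows
        rng.normal(loc=freqs * 0.97, scale=0.05, size=(50, 4)),   # downward shift: damage
    ])

    occ = OneClassSVM(nu=0.05, gamma="scale").fit(f_standard)  # coarse boundary estimation
    pred = occ.predict(f_test)                                 # +1 = standard, -1 = anomaly
    print("flagged anomalies:", int((pred == -1).sum()), "of", len(f_test))

Training uses standard-condition windows only, mirroring the one-class setup described above; damaged-condition windows appear only at test time.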
Procedia PDF Downloads 122367 Pharmacovigilance in Hospitals: Retrospective Study at the Pharmacovigilance Service of UHE-Oran, Algeria
Authors: Nadjet Mekaouche, Hanane Zitouni, Fatma Boudia, Habiba Fetati, A. Saleh, A. Lardjam, H. Geniaux, A. Coubret, H. Toumi
Abstract:
Medicines have undeniably played a major role in prolonging life expectancy and improving quality of life. While the efficacy of a drug remains a lever for innovation, its benefit/risk balance is not always assured, and it does not always have the expected effects. Prior to marketing, knowledge about adverse drug reactions is incomplete; once on the market, phase IV drug studies begin. Over the years, the drug is prescribed, under far less controlled conditions, to a large number of very heterogeneous patients, often in combination with other drugs. It is at this point that previously unknown adverse effects may appear, hence the need for a pharmacovigilance system. Pharmacovigilance encompasses all methods for detecting, evaluating, informing about, and preventing the risks of adverse drug reactions. The most severe adverse events frequently occur in hospital, and a significant proportion of adverse events result in hospitalizations. In addition, the consequences of hospital adverse events in terms of length of stay, mortality, and costs are considerable. It therefore appears necessary to develop 'hospital pharmacovigilance' aimed at reducing the incidence of adverse reactions in hospitals. The most widely used monitoring method in pharmacovigilance is spontaneous notification; however, underreporting of adverse drug reactions is common in many countries and is a major obstacle to pharmacovigilance assessment. It is in this context that this study describes the experience of the pharmacovigilance service at the University Hospital of Oran (EHUO). This retrospective study, covering 2011 to 2017, was carried out on archived records of declarations collected by the EHUO Pharmacovigilance Department. Reports were collected by two methods: spontaneous notification and active pharmacovigilance targeting certain clinical services. We counted 217 declarations, involving 56% female and 44% male patients. Age ranged from 5 to 78 years, with an average of 46 years. The most common adverse reaction was drug-induced toxidermia. The drugs in question were, according to the ATC classification, essentially anti-infectives, followed by anticancer drugs. Regarding the evolution of declarations by year, a low notification rate was noted in 2011; we therefore set up an active approach in some services, where a designated resident attended staff meetings every week. This resulted in an increase in the number of reports, which came essentially from the services where the active approach was implemented. These findings highlight the need for ongoing communication among all relevant health actors to stimulate reporting and secure drug treatments. Keywords: adverse drug reactions, hospital, pharmacovigilance, spontaneous notification
Procedia PDF Downloads 169