Search results for: artificial intelligence and genetic algorithms
278 The Development of a Group Counseling Program for Elderly Caregivers Based on Person-Centered Theory to Promote the Resilience Quotient in Elderly People
Authors: Jirapan Khruesarn, Wimwipa Boonklin
Abstract:
Background: Thailand currently has an aging population. In 2017, the elderly population was over 11.14 million, and the number of elderly people is expected to increase by a further 8.39 million. Some elderly people grumble to themselves and have conflicts with their offspring or those close to them, which is a source of stress, so mental health promotion should be given to the elderly to help them cope with these changes. Given the family structure of Thai society, family members typically act as caregivers for the elderly. Therefore, a group counseling program based on person-centered theory was developed for elderly caregivers to promote the mental health of older people in Na Kaeo Municipality, Kau Ka District, Lampang Province, and elderly care behavior was compared before and after participation. Methods: The study involved 20 elderly caregivers and compared care behaviors before and after the group program, using the following methods. Step 1: Establish a framework for evaluating elderly care behaviors and develop a group counseling program to promote mental health for the elderly in four areas: 1) body, 2) willpower, 3) social and community management, and 4) organizing the learning process. Step 2: Assess elderly care behaviors using "The behavior assessment on caring for the elderly", assess the mental health power level of the elderly, run the counseling program nine times, and compare elderly care behaviors and mental health levels before and after the group program. Results: The study developed a group counseling program to promote the resilience quotient in elderly people, and the results can be summarized as follows: 1) Before the elderly's caregivers joined the group counseling program, mental health promotion behaviors of the elderly were at a high level (3.32); after the program, they were at a high level of (3.44).
2) Before the elderly's caregivers attended the group counseling program, the mean mental health score of the elderly was 47.85 percent with a standard deviation of 0.21 percent; afterwards, the elderly had a higher score of 51.45 percent. In summary, after the elderly caregivers joined the group, the elderly scored higher in all aspects of mental health promotion, with statistical significance at the 0.05 level. This shows that the program fits personal and community conditions in promoting the mental health of the elderly, because the underlying theory holds that humans have the ability to use their intelligence to solve problems and make decisions effectively; members of the group counseling program ventured to express grievances, while the counselor acted as a facilitator focusing on personal development by building relationships among people. In other words, the factor contributing to higher levels of elderly care behavior is group counseling, which is not a hypothetical process but focuses on building relationships based on mutual trust and unconditional acceptance. Keywords: group counseling based on person-centered theory, elderly person, resilience quotient: RQ, caregiver
Procedia PDF Downloads 91
277 Sweepline Algorithm for Voronoi Diagram of Polygonal Sites
Authors: Dmitry A. Koptelov, Leonid M. Mestetskiy
Abstract:
The Voronoi diagram (VD) of a finite set of disjoint simple polygons, called sites, is a partition of the plane into loci (one region per site) consisting of the points that are closer to a given site than to all others. A set of polygons is a universal model for many applications in engineering, geoinformatics, design, computer vision, and graphics. VD construction for polygons is usually done by reduction to the task of constructing the VD of segments, for which effective O(n log n) algorithms exist for n segments. The reduction also includes preprocessing (constructing segments from the polygons' sides) and postprocessing (constructing each polygon's locus by merging the loci of its sides). This approach does not take into account two specific properties of the resulting segment sites. Firstly, all these segments are connected pairwise at the vertices of the polygons. Secondly, the interior of the polygon lies on one side of each segment, and the polygon is obviously included in its own locus. Using these properties in the VD construction algorithm is a way to reduce computation. This article proposes an algorithm for the direct construction of the VD of polygonal sites. The algorithm is based on the sweepline paradigm, which allows these properties to be taken into account effectively. The solution is again performed via reduction. Preprocessing constructs the set of sites from the vertices and edges of the polygons; each site is oriented so that the interior of the polygon lies to its left. The proposed algorithm then constructs the VD for the set of oriented sites with the sweepline paradigm. Postprocessing selects the edges of this VD formed by the centers of empty circles touching different polygons. The efficiency gain of the proposed sweepline algorithm over the general Fortune algorithm rests on two fundamental points: 1. The algorithm constructs only those VD edges that lie outside the polygons; the concept of oriented sites avoids constructing VD edges located inside them. 2. The event list in the sweepline algorithm has a special property: the majority of events are connected with "medium" polygon vertices, where one incident polygon side lies behind the sweepline and the other in front of it. The proposed algorithm processes such events in constant time rather than in logarithmic time, as in the general Fortune algorithm. The proposed algorithm is fully implemented and tested on a large number of examples, and its high reliability and efficiency are also confirmed by computational experiments with complex sets of several thousand polygons. It should be noted that, despite the considerable time that has passed since the publication of Fortune's algorithm in 1986, a full-scale implementation of that algorithm for an arbitrary set of segment sites has not been made. The proposed algorithm fills this gap for an important special case: a set of sites formed by polygons. Keywords: Voronoi diagram, sweepline, polygon sites, Fortune's algorithm, segment sites
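The locus idea underlying the abstract above can be illustrated with a brute-force sketch (not the authors' sweepline algorithm): a query point belongs to the locus of the polygon whose boundary is nearest, where distance to a polygon is the minimum distance to its boundary segments. The polygons and query points below are made-up toy values.

```python
# Brute-force illustration of polygon loci: assign a point to the
# nearest polygonal site by minimum distance to its boundary segments.
import math

def point_segment_dist(p, a, b):
    """Euclidean distance from point p to segment ab."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    # Clamp the projection parameter to stay on the segment.
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def dist_to_polygon(p, poly):
    """Distance from p to the closed polygonal boundary."""
    return min(point_segment_dist(p, poly[i], poly[(i + 1) % len(poly)])
               for i in range(len(poly)))

def nearest_site(p, polygons):
    """Index of the polygon whose boundary is closest to p (its locus)."""
    return min(range(len(polygons)), key=lambda i: dist_to_polygon(p, polygons[i]))

# Two unit squares, well separated along the x-axis.
sites = [[(0, 0), (1, 0), (1, 1), (0, 1)],
         [(5, 0), (6, 0), (6, 1), (5, 1)]]
print(nearest_site((2.0, 0.5), sites))  # closer to the left square -> 0
```

The sweepline algorithm of the abstract computes the boundaries between such loci directly, without any point-by-point classification.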
Procedia PDF Downloads 177
276 Green Organic Chemistry, a New Paradigm in Pharmaceutical Sciences
Authors: Pesaru Vigneshwar Reddy, Parvathaneni Pavan
Abstract:
Green organic chemistry, one of the most researched topics nowadays, has been in demand since the 1990s. Organic chemicals are important starting materials for a great number of major chemical industries: the production of organic chemicals as raw materials or reagents for other applications is a major manufacturing sector covering polymers, pharmaceuticals, pesticides, paints, artificial fibers, food additives, etc. Organic synthesis on a large scale, compared to the laboratory scale, involves the use of energy, basic chemical ingredients from the petrochemical sector, and catalysts, and, after the end of the reaction, separation, purification, storage, packing, distribution, etc. These processes pose many health and safety problems for workers, in addition to the environmental problems caused by the use and disposal of chemicals as waste. Green chemistry, with its 12 principles, would like to change the conventional ways that have been used for decades to make synthetic organic chemicals, starting with the use of less toxic starting materials. Green chemistry aims to increase the efficiency of synthetic methods, use less toxic solvents, reduce the number of stages in synthetic routes, and minimize waste as far as practically possible. In this way, organic synthesis becomes part of the effort for sustainable development. Green chemistry is also interested in research and innovative alternatives on many practical aspects of organic synthesis in university and institutional research laboratories. By changing the methodologies of organic synthesis, health and safety will be advanced at the small-scale laboratory level and extended to industrial large-scale production through new techniques.
Three key developments in green chemistry are the use of supercritical carbon dioxide as a green solvent, aqueous hydrogen peroxide as an oxidising agent, and the use of hydrogen in asymmetric synthesis. Green chemistry also focuses on replacing traditional heating with modern methods such as microwave heating, so that the carbon footprint is reduced as far as possible. A further benefit is reduced environmental pollution through less toxic reagents, minimization of waste, and more biodegradable by-products. This paper considers some of the basic principles, approaches, and early achievements of green chemistry as a branch of chemistry that studies the laws governing chemical reactions, together with a summary of the green chemistry principles. A discussion of the E-factor, the old and new syntheses of ibuprofen, microwave techniques, and some recent advances is also included. Keywords: energy, E-factor, carbon footprint, microwave, sonochemistry, advancement
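The E-factor mentioned above is a simple ratio: kilograms of waste produced per kilogram of product. A minimal sketch, with hypothetical masses chosen only for illustration:

```python
# Sheldon's E-factor: mass of waste per mass of product (lower is greener).
def e_factor(total_input_mass_kg, product_mass_kg):
    """All input mass that does not end up in the product is waste."""
    waste = total_input_mass_kg - product_mass_kg
    return waste / product_mass_kg

# A hypothetical process consuming 120 kg of inputs to make 20 kg of product:
print(e_factor(120.0, 20.0))  # -> 5.0, i.e. 5 kg of waste per kg of product
```

Pharmaceutical syntheses historically show E-factors of 25 or higher, which is why waste-minimizing redesigns such as the newer ibuprofen route are a standard green chemistry example.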
Procedia PDF Downloads 306
275 Cultural Innovation in Uruena: A Path Against Depopulation
Authors: S. Sansone-Casaburi
Abstract:
The pandemic the world is going through is causing important changes in the daily life of all cities, which can translate into opportunities to resolve pending issues, among others the town-city relationship and sustainability. On the one hand, the city continues to be the center of attention, and the countryside is assumed to be the supplier of food. However, the temporary closure of cities highlighted the importance of the rural environment, and many people are reassessing it as an alternative place to live. Furthermore, the countryside is not simply the home and center of activity of the people who inhabit it; it belongs to all citizens, both rural and urban. On the other hand, the pandemic is an opportunity to meet the sustainable development goals. Sustainable development is understood as the capital to be transferred to future generations, made up of three types of wealth: natural capital (the environment), human capital (people, relationships, culture), and artificial or built capital, made up of buildings and infrastructure, or of cities and towns. The 'new normal' can mean going back to the countryside, but not to a merely agricultural place: rather, to a sustainable, affordable, and healthy place which, with the appropriate infrastructure, allows remote work, a new post-COVID-19 modality. The contribution of this research is towards the recovery of traditional villages, from the perspective of populations that have managed to maintain their vitality with innovative solutions. It is assumed that innovation is a path for the recovery of traditional villages, so we ask: what conditions are necessary for innovation to be successful and sustainable?
Several variables were found in the research, among them culture, so the objective of this article is to understand Uruena, a town in the province of Valladolid which, with only 182 inhabitants, houses five museums and twelve bookstores that make up the first Villa del Libro in Spain. The methodology used is mixed, inductive and deductive, and the results were captured in a formula for villages that innovate through culture: PIc = Pt + C [E (Aec) + S (pp) + A (T + s + t + enc)], where the culturally innovative villages PIc are the result of traditional villages Pt that, through a cultural innovation C, integrate economic and cultural activities E (Aec) in the economic sphere; public and private actors S (pp) in the social sphere; and, in the environmental sphere (A), territory (T), services (s), technology (t), and natural and built spaces (enc). The analysis focuses on determining what makes the structure of culturally innovative villages sustainable and on understanding the variables that make up that structure, in order to verify whether they can be applied in other contexts to repower abandoned places and provide a solution for people who migrate there; that is, to learn from what has been done in order to replicate it in similar cases. Keywords: culture as innovation, depopulation, sustainability, traditional villages
Procedia PDF Downloads 88
274 Performance Parameters of an Abbreviated Breast MRI Protocol
Authors: Andy Ho
Abstract:
Breast cancer is a common cancer in Australia. Early diagnosis is crucial for improving patient outcomes, as later-stage detection correlates with poorer prognoses. While multiparametric MRI offers superior sensitivity in detecting invasive and high-grade breast cancers compared to conventional mammography, its extended scan duration and high costs limit widespread application. As a result, full protocol MRI screening is typically reserved for patients at elevated risk. Recent advancements in imaging technology have facilitated the development of Abbreviated MRI protocols, which dramatically reduce scan times (<10 minutes compared to >30 minutes for full protocol). The potential for Abbreviated MRI to offer a more time- and cost-efficient alternative has implications for improving patient accessibility, reducing appointment durations, and enhancing compliance—especially relevant for individuals requiring regular annual screening over several decades. The purpose of this study is to assess the diagnostic efficacy of Abbreviated MRI for breast cancer screening among high-risk patients at the Royal Prince Alfred Hospital (RPA). This study aims to determine the sensitivity, specificity, and inter-reader variability of Abbreviated MRI protocols when interpreted by subspecialty-trained Breast Radiologists. A systematic review of the RPA’s electronic Picture Archive and Communication System identified high-risk patients, defined by Australian ‘Medicare Benefits Schedule’ criteria, who underwent Breast MRI from 2021 to 2022. Eligible participants included asymptomatic patients under 50 years old referred by the High-Risk Clinic due to a high-risk genetic profile or relevant familial history. The MRIs were anonymized, randomized, and interpreted by four Breast Radiologists, each independently completing standardized proforma evaluations. Radiological findings were compared against histopathology as the gold standard or follow-up imaging if biopsies were unavailable. 
Statistical metrics, including sensitivity, specificity, and inter-reader variability, were assessed. Fleiss' kappa analysis demonstrated fair inter-reader agreement (kappa = 0.25; 95% CI: 0.19–0.32; p < 0.0001). The sensitivity for detecting malignancies was 0.75, with a specificity of 0.84. These findings underline the potential of Abbreviated MRI as a screening tool with good specificity, though the reduced sensitivity highlights the importance of robust radiologist training and consistent evaluation standards. Abbreviated MRI protocols show promise as a viable screening option for high-risk patients, combining reduced scan times with acceptable diagnostic accuracy. Further work to refine interpretation practices and optimize training is essential to maximize the protocol's utility in routine clinical screening and to facilitate broader accessibility. Keywords: abbreviated, breast, cancer, MRI
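The sensitivity and specificity figures above come directly from a 2x2 confusion table of reads against the histopathology gold standard. A minimal sketch, with hypothetical counts chosen to reproduce the reported 0.75 / 0.84 values:

```python
# Sensitivity and specificity from confusion-table counts.
def sensitivity(tp, fn):
    """True-positive rate: TP / (TP + FN)."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True-negative rate: TN / (TN + FP)."""
    return tn / (tn + fp)

# Hypothetical example: 3 of 4 cancers detected,
# 84 of 100 cancer-free patients correctly cleared.
print(sensitivity(3, 1))    # -> 0.75
print(specificity(84, 16))  # -> 0.84
```

In a screening context the trade-off runs the other way around: missed cancers (false negatives) are costlier than recalls (false positives), which is why the abstract flags the reduced sensitivity as the figure needing improvement.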
Procedia PDF Downloads 11
273 A Feature Clustering-Based Sequential Selection Approach for Color Texture Classification
Authors: Mohamed Alimoussa, Alice Porebski, Nicolas Vandenbroucke, Rachid Oulad Haj Thami, Sana El Fkihi
Abstract:
Color and texture are highly discriminant visual cues that provide essential information in many types of images. Color texture representation and classification is therefore one of the most challenging problems in computer vision and image processing applications. Color textures can be represented in different color spaces by using multiple image descriptors, which generate a high-dimensional set of texture features. In order to reduce the dimensionality of the feature set, feature selection techniques can be used. The goal of feature selection is to find a relevant subset of the original feature space that can improve the accuracy and efficiency of a classification algorithm. Traditionally, feature selection focuses on removing irrelevant features, neglecting possible redundancy between relevant ones. This is why some feature selection approaches use feature clustering analysis to aid and guide the search. These techniques can be divided into two categories. i) Feature clustering-based ranking algorithms use feature clustering as an analysis step that comes before feature ranking: after dividing the feature set into groups, these approaches perform a feature ranking in order to select the most discriminant feature of each group. ii) Feature clustering-based subset search algorithms can use feature clustering following one of three strategies: as an initial step that comes before the search, coupled and combined with the search, or as an alternative and replacement for the search. In this paper, we propose a new feature clustering-based sequential selection approach for the purpose of color texture representation and classification. Our approach is a three-step algorithm. First, irrelevant features are removed from the feature set by means of a class-correlation measure. Then, using a new automatic feature clustering algorithm, the feature set is divided into several feature clusters.
Finally, a sequential search algorithm, based on a filter model and a separability measure, builds a relevant and non-redundant feature subset: at each step, a feature is selected, and the features of the same cluster are removed and thus not considered thereafter. This significantly speeds up the selection process, since a large number of redundant features are eliminated at each step. The proposed algorithm uses clustering coupled and combined with the search. Experiments using a combination of two well-known texture descriptors, namely Haralick features extracted from Reduced Size Chromatic Co-occurrence Matrices (RSCCMs) and features extracted from Local Binary Pattern (LBP) image histograms, on five color texture data sets (Outex, NewBarktex, Parquet, Stex and USPtex) demonstrate the efficiency of our method compared to seven state-of-the-art methods in terms of accuracy and computation time. Keywords: feature selection, color texture classification, feature clustering, color LBP, chromatic co-occurrence matrix
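The relevance-then-redundancy idea described above can be sketched in a few lines. This is an assumed simplification, not the authors' exact algorithm: relevance is measured by correlation with the class label, and redundancy is handled by skipping any candidate too correlated with an already-selected feature (standing in for "removing the rest of the cluster").

```python
# Greedy relevance/redundancy feature selection sketch.
import math

def pearson(x, y):
    """Pearson correlation of two equal-length numeric sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - my) ** 2 for b in y))
    return num / den if den else 0.0

def select_features(features, labels, redundancy_threshold=0.9):
    """Take features in decreasing order of |correlation with the class|,
    skipping any feature too correlated with one already kept."""
    relevance = [abs(pearson(f, labels)) for f in features]
    order = sorted(range(len(features)), key=lambda i: -relevance[i])
    selected = []
    for i in order:
        if all(abs(pearson(features[i], features[j])) < redundancy_threshold
               for j in selected):
            selected.append(i)
    return selected

labels = [0, 0, 1, 1, 1, 0]
f0 = [1.0, 1.1, 3.0, 3.2, 2.9, 0.9]   # informative feature
f1 = [2.0, 2.2, 6.1, 6.4, 5.8, 1.8]   # near-duplicate of f0 (redundant)
f2 = [0.3, 0.1, 0.2, 0.4, 0.1, 0.3]   # uncorrelated noise
print(select_features([f0, f1, f2], labels))  # keeps one of f0/f1, plus f2
```

The speed-up claimed in the abstract comes from discarding a whole cluster of redundant features per selection step instead of re-testing each one individually.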
Procedia PDF Downloads 136
272 Association between Physical Inactivity and Sedentary Behaviours with Risk of Hypertension among Sedentary Occupation Workers: A Cross-Sectional Study
Authors: Hanan Badr, Fahad Manee, Rao Shashidhar, Omar Bayoumy
Abstract:
Introduction: Hypertension is a major risk factor for cardiovascular diseases and stroke and a leading worldwide cause of disability-adjusted life years and mortality. Adopting an unhealthy lifestyle is thought to be associated with developing hypertension regardless of predisposing genetic factors. This study aimed to examine the association of recreational physical activity (RPA) and sedentary behaviors with the risk of hypertension among ministry employees, whose work involves no occupational physical activity (PA), and to scrutinize participants' time spent in RPA and sedentary behaviors on working days and weekend days. Methods: A cross-sectional study was conducted among 2562 randomly selected employees working at ten randomly selected ministries in Kuwait. To obtain a representative sample, proportional allocation was used to define the number of participants from each ministry. A self-administered questionnaire collected data about participants' socio-demographic characteristics, health status, and 24-hour time use during a regular working day and a weekend day. The time use covered a list of 20 different daily activities. The New Zealand Physical Activity Questionnaire-Short Form (NZPAQ-SF) was used to assess the level of RPA; the scale generates three categories according to the number of hours of RPA per week: relatively inactive, relatively active, and highly active. Gender-matched trained nurses performed anthropometric measurements (weight and height) and measured blood pressure (two readings) using an automatic blood pressure monitor (95% accuracy compared to a calibrated mercury sphygmomanometer). Results: Participants' mean age was 35.3±8.4 years, with almost equal gender distribution. About 13% of the participants were smokers, and 75% were overweight. Almost 10% reported doctor-diagnosed hypertension.
Among those who did not, the mean systolic blood pressure was 119.9±14.2 and the mean diastolic blood pressure was 80.9±7.3. Moreover, 73.9% of participants were relatively physically inactive and 18% were highly active. Mean systolic and diastolic blood pressure showed a significant inverse association with the level of RPA (blood pressure means were 123.3/82.8 among the relatively inactive, 119.7/80.4 among the relatively active, and 116.6/79.6 among the highly active). Furthermore, RPA occupied 1.6% and 1.8% of working and weekend days, respectively, while sedentary behaviors (watching TV, using electronics for social media or entertainment, etc.) occupied 11.2% and 13.1%, respectively. Sedentary behaviors were significantly associated with high levels of systolic and diastolic blood pressure. Binary logistic regression revealed that physical inactivity (OR=3.13, 95% CI: 2.25-4.35) and sedentary behaviors (OR=2.25, 95% CI: 1.45-3.17) were independent risk factors for high systolic and diastolic blood pressure after adjustment for other covariates. Conclusions: Physical inactivity and a sedentary lifestyle were associated with a high risk of hypertension. Further research examining the independent role of RPA in improving blood pressure levels, and the cultural and occupational barriers to practicing RPA, is recommended. Policies promoting PA in the workplace should be enacted, which might help decrease the risk of hypertension among sedentary occupation workers. Keywords: physical activity, sedentary behaviors, hypertension, workplace
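The odds ratios reported above come from multivariable logistic regression, which is not reproduced here; but the unadjusted building block, an odds ratio with its Wald 95% confidence interval from a 2x2 exposure-outcome table, can be sketched directly. The counts below are hypothetical.

```python
# Odds ratio and 95% CI from a 2x2 table (unadjusted Wald interval).
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Table [[a, b], [c, d]]: a = exposed cases, b = exposed controls,
    c = unexposed cases, d = unexposed controls.
    Returns (OR, ci_low, ci_high) via the log-OR standard error."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    log_or = math.log(or_)
    return or_, math.exp(log_or - z * se), math.exp(log_or + z * se)

# Hypothetical counts: 40 hypertensive / 160 normotensive among inactive
# participants, versus 20 / 180 among active participants.
or_, lo, hi = odds_ratio_ci(40, 160, 20, 180)
print(round(or_, 2), round(lo, 2), round(hi, 2))  # -> 2.25 1.26 4.01
```

The study's reported intervals are narrower or shifted because the regression adjusts for covariates such as age, smoking, and body weight.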
Procedia PDF Downloads 178
271 Enzymatic Determination of Limonene in Red Clover Genotypes
Authors: Andrés Quiroz, Emilio Hormazabal, Ana Mutis, Fernando Ortega, Manuel Chacón-Fuentes, Leonardo Parra
Abstract:
Red clover (Trifolium pratense L.) is an important forage species in the temperate regions of the world. The main limitation of this species worldwide is a lack of persistence, related to high plant mortality due to a complex of biotic and abiotic factors that limits its life span to two or three seasons. Because of the importance of red clover in Chile, a red clover breeding program was started at the INIA Carillanca Research Center in 1989, with the main objectives of improving plant survival, forage yield, and persistence. The main selection criteria for new varieties have been agronomic parameters and biotic factors. The main biotic factor associated with red clover mortality in Chile is Hylastinus obscurus (Coleoptera: Curculionidae); both larvae and adults feed on the roots, weakening and subsequently killing clover plants. Pesticides have not been successful in controlling infestations of this root borer, so alternative control strategies are a high priority for red clover producers. The role of semiochemicals in the interaction between H. obscurus and red clover plants has been widely studied by our group; specifically, limonene, which elicits repellency in the root borer, has been identified from red clover foliage. Limonene is generated in the plant by two independent biosynthetic pathways: the mevalonate pathway, whose enzymes are localized in the cytosol, and the deoxyxylulose phosphate pathway, whose enzymes are found in plastids. In summary, limonene can be determined by an enzymatic bioassay using GPP as the substrate and by limonene synthase expression. Therefore, the main objective of this work was to study the genetic variation of limonene in material provided by INIA's red clover breeding program.
Protein extraction was carried out by homogenizing 250 mg of leaf tissue, suspending it in 6 mL of extraction buffer (PEG 1500, PVP-30, 20 mM MgCl2 and antioxidants), and stirring on ice for 20 min. After centrifugation, 2.5 mL aliquots were desalted on PD-10 columns, resulting in a final volume of 3.5 mL. Protein determination was performed according to Bradford with BSA as a standard. Monoterpene synthase assays were performed with 50 µL of protein extract transferred into gas-tight 2 mL crimp-seal vials after the addition of 4 µL MgCl₂ and 41 µL assay buffer. The assay was started by adding 5 µL of a GPP solution, and the mixture was incubated for 30 min at 40 °C. Biosynthesized limonene was quantified on a GC equipped with a chiral column, using synthetic R- and S-limonene standards. This work reports the enzymatic production of R- and S-limonene by different Superqueli-Carillanca genotypes. Preliminary results showed significant differences in limonene content among the genotypes analyzed. These results constitute an important basis for selecting genotypes with a high content of this monoterpene, which is repellent to H. obscurus. Keywords: head space, limonene enzymatic determination, red clover, Hylastinus obscurus
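Quantification against synthetic standards, as in the GC step above, typically uses an external calibration curve: fit a least-squares line through the peak areas of the standards, then invert it for the sample. This is a generic sketch of that assumed workflow, not the authors' exact method; all numbers are hypothetical.

```python
# External calibration: fit area = m * conc + b, then invert for a sample.
def fit_line(xs, ys):
    """Ordinary least squares for y = m*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    m = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return m, my - m * mx

def quantify(area, m, b):
    """Concentration corresponding to an observed peak area."""
    return (area - b) / m

# Hypothetical limonene standards: concentration (ng/uL) vs peak area.
conc = [0.0, 5.0, 10.0, 20.0]
area = [2.0, 52.0, 102.0, 202.0]   # perfectly linear: area = 10*conc + 2
m, b = fit_line(conc, area)
print(quantify(152.0, m, b))       # sample with area 152 -> 15.0 ng/uL
```

With a chiral column, R- and S-limonene elute as separate peaks, so each enantiomer gets its own calibration and quantification.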
Procedia PDF Downloads 266
270 Executive Leadership in Kinesiology, Exercise and Sport Science: The Five 'C' Concept
Authors: Jim Weese
Abstract:
The Kinesiology, Exercise and Sport Science environment remains an excellent venue for leadership research. Prescribed leadership (coaching), emergent leadership (players and organizations), and executive leadership are all popular themes in the research literature. Leadership remains a popular area of inquiry in the sport management domain, as well as an interesting area for practitioners who wish to heighten their leadership practices and effectiveness. Given the competing demands for attention and resources, the need for effective leadership in these areas may be at an all-time high. The presenter has extensive research and practical experience in the area and has developed his concept based on the latest leadership literature; he refers to it as the Five 'C's of Leadership. These components, noted below, have been empirically validated and have served as the foundation for extensive consulting with academic, sport, and business leaders. Credibility (C1) is considered the foundation of leadership. There are two components to this area, namely: (a) leaders being respected for having the relevant knowledge, insights, and experience to be seen as credible sources of information, and (b) followers perceiving the leader as a person of character: someone who is honest, reliable, consistent, and trustworthy. Compelling Vision (C2) refers to the leader's ability to focus the attention of followers on a desired end goal. Effective leaders understand trends and developments in their industry; they also listen attentively to the needs and desires of their stakeholders and use their own instincts and experience to shape these ideas into an inspiring vision that is effectively and continuously communicated. Charismatic Communicator (C3) refers to the leader's ability to communicate formally and informally with members. Leaders must deploy mechanisms and communication techniques that keep their members informed and engaged.
Effective leaders sprinkle in 'proof points' that reinforce the vision's relevance and/or the unit's progress towards its attainment. Contagious Enthusiasm (C4) draws on the emotional intelligence literature as it relates to exciting and inspiring followers. Effective leaders demonstrate a level of care, commitment, and passion for their people, and feelings of engagement permeate the group. These leaders genuinely care about the task at hand, and for the people working to make it a reality. Culture Builder (C5) is the capstone component of the model and is critical to long-term success and survival. Organizational culture refers to the dominant beliefs, values and attitudes of members of a group or organization. Some have suggested that developing and/or embedding a desired culture is the most important responsibility for a leader. The author outlines his Five 'C's of Leadership concept and provides direct application to executive leadership in Kinesiology, Exercise and Sport Science. Keywords: effectiveness, leadership, management, sport
Procedia PDF Downloads 300
269 Colored Image Classification Using Quantum Convolutional Neural Networks Approach
Authors: Farina Riaz, Shahab Abdulla, Srinjoy Ganguly, Hajime Suzuki, Ravinesh C. Deo, Susan Hopkins
Abstract:
Quantum machine learning has recently received significant attention. Numerous quantum machine learning (QML) models have been created and are being tested for various types of data, including text and images. Images are exceedingly complex data components that demand more processing power. Despite being mature, classical machine learning still has difficulties with big-data applications. Furthermore, quantum technology has revolutionized how machine learning is thought of, by employing quantum features to address optimization issues. Since quantum hardware is currently extremely noisy, it is not practicable to run machine learning algorithms on it without risking inaccurate results. To discover the advantages of quantum versus classical approaches, this research has concentrated on colored image data. Deep learning classification models are currently being created on quantum platforms, but they are still at a very early stage. Black-and-white benchmark image datasets like MNIST and Fashion-MNIST have been used in recent research. MNIST and CIFAR-10 were compared for binary classification, but the comparison showed that MNIST was classified more accurately than colored CIFAR-10. This research evaluates the performance of a QML algorithm on the colored benchmark dataset CIFAR-10 to advance QML's real-time applicability. However, deep learning classification models such as the Quantum Convolutional Neural Network (QCNN) have not previously been developed for colored images to determine how much better they are than classical models; only a few models, such as quantum variational circuits, take colored images. The methodology adopted in this research is a hybrid approach using PennyLane as a simulator. To process the 10 classes of CIFAR-10, the image data were converted to greyscale 28 × 28-pixel images, and 10,000 test and 50,000 training images were used.
The objective of this work is to determine how much the quantum approach can outperform a classical approach on a comprehensive dataset of color images. After pre-processing the 50,000 images on a classical computer, the QCNN model adopted a hybrid method: the images were encoded into a quantum simulator for feature extraction using quantum gate rotations, and the measurements were carried out on the classical computer after the rotations were applied. According to the results, the QCNN approach is ~12% more effective than traditional classical CNN approaches, and applying data augmentation may increase the accuracy further. This study has demonstrated that quantum machine and deep learning models can be relatively superior to classical machine learning approaches in terms of processing speed and accuracy when used to perform classification on colored classes. Keywords: CIFAR-10, quantum convolutional neural networks, quantum deep learning, quantum machine learning
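The "encode pixels as gate rotations, then measure" step described above can be illustrated with a classically simulated, single-qubit sketch. This is an assumed simplification of angle encoding, not the paper's circuit: a pixel value in [0, 1] is mapped to an RY rotation angle, and the probability of measuring |1> becomes the extracted feature. No quantum SDK is needed for one qubit.

```python
# Single-qubit angle encoding, simulated classically.
import math

def ry_encode(pixel):
    """Map a pixel in [0, 1] to theta = pi * pixel and return P(|1>)
    after RY(theta) applied to |0>, which is sin^2(theta / 2)."""
    theta = math.pi * pixel
    return math.sin(theta / 2) ** 2

print(ry_encode(0.0))  # -> 0.0 (black pixel: qubit stays in |0>)
print(ry_encode(1.0))  # -> 1.0 (white pixel: qubit rotated to |1>)
print(ry_encode(0.5))  # mid-grey: equal superposition, P(|1>) ~ 0.5
```

In a full QCNN, one such rotation is applied per pixel (or per patch) across many qubits, with entangling gates between the encoding and measurement layers.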
Procedia PDF Downloads 129
268 Hybrid Precoder Design Based on Iterative Hard Thresholding Algorithm for Millimeter Wave Multiple-Input-Multiple-Output Systems
Authors: Ameni Mejri, Moufida Hajjaj, Salem Hasnaoui, Ridha Bouallegue
Abstract:
Recent technological advances have made millimeter wave (mmWave) communication possible. Due to the huge amount of spectrum available in mmWave frequency bands, this promising candidate is considered a key technology for the deployment of 5G cellular networks. To enhance system capacity and achieve spectral efficiency, very large antenna arrays are employed in mmWave systems to exploit array gain. However, it has been shown that conventional beamforming strategies are not suitable for mmWave hardware implementation. Therefore, new features are required for mmWave cellular applications. Unlike traditional multiple-input-multiple-output (MIMO) systems, for which fully digital precoders suffice, MIMO at mmWave is different because of digital precoding limitations: fully digital precoding requires a large number of radio frequency (RF) chains, each with its own signal mixers and analog-to-digital converters. As RF chain cost and power consumption are high, an alternative is needed. Although the hybrid precoding architecture, which combines a baseband precoder with an RF precoder, has been regarded as the best solution, the optimal design of hybrid precoders remains an open problem. According to the mapping strategy from RF chains to the antenna elements, there are two main categories of hybrid precoding architecture. The partially-connected structure, a hybrid precoding sub-array architecture, reduces hardware complexity by using a smaller number of phase shifters, at the cost of some beamforming gain. In this paper, we treat the hybrid precoder design in mmWave MIMO systems as a matrix factorization problem and adopt the alternating minimization principle to solve it. Further, we present our proposed algorithm for the partially-connected structure, which is based on the iterative hard thresholding method.
Through simulation results, we show that our hybrid precoding algorithm provides significant performance gains over existing algorithms. We also show that the proposed approach significantly reduces computational complexity. Furthermore, valuable design insights are provided when the proposed algorithm is used to compare the partially-connected and fully-connected hybrid precoding structures in simulation.
Keywords: alternating minimization, hybrid precoding, iterative hard thresholding, low-complexity, millimeter wave communication, partially-connected structure
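The iterative hard thresholding operator at the heart of the proposed design can be sketched on a generic sparse least-squares problem. This is a minimal illustration of IHT, not the authors' mmWave precoder algorithm; the sensing matrix, sparsity level, and step-size rule are assumptions for the demo.

```python
import numpy as np

def hard_threshold(x, k):
    """Keep the k largest-magnitude entries of x, zero the rest."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]
    out[idx] = x[idx]
    return out

def iht(A, y, k, n_iter=200):
    """Iterative hard thresholding for min ||y - Ax||^2 s.t. ||x||_0 <= k:
    a gradient step on the least-squares objective followed by projection
    onto the set of k-sparse vectors."""
    mu = 1.0 / np.linalg.norm(A, 2) ** 2   # conservative step size
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = hard_threshold(x + mu * A.T @ (y - A @ x), k)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((60, 100)) / np.sqrt(60)   # random sensing matrix
x_true = np.zeros(100)
x_true[[3, 17, 42, 88]] = [2.0, -1.5, 1.0, 3.0]    # 4-sparse ground truth
y = A @ x_true
x_hat = iht(A, y, k=4)
print(np.linalg.norm(y - A @ x_hat))  # residual shrinks toward zero
```

In the precoder-design setting, an analogous projection enforces the hardware constraint (which antenna connects to which RF chain) instead of plain sparsity.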
Procedia PDF Downloads 321
267 Thermal and Visual Comfort Assessment in Office Buildings in Relation to Space Depth
Authors: Elham Soltani Dehnavi
Abstract:
In today’s compact cities, bringing daylight and fresh air into buildings is a significant challenge, but it also presents opportunities to reduce energy consumption by reducing the need for artificial lighting and mechanical systems. Simple adjustments to building form can contribute to their efficiency. This paper examines how the relationship between the width and depth of rooms in office buildings affects visual and thermal comfort, and consequently energy savings. Based on these evaluations, we can determine the best location for sedentary areas in a room. We can also propose improvements to occupant experience and minimize the difference between predicted and measured performance in buildings by changing other design parameters, such as natural ventilation strategies, glazing properties, and shading. This study investigates spatial daylighting and thermal comfort for a range of room configurations using computer simulations, and then suggests the best depth for optimizing both daylighting and thermal comfort, and consequently energy performance, in each room type. The Window-to-Wall Ratio (WWR) is 40%, with a 0.8 m window sill and a 0.4 m window head. Other parameters are fixed according to building codes and standards, and the simulations are run for Seattle, USA. The simulation results are presented as evaluation grids using thresholds for different metrics: Daylight Autonomy (DA), spatial Daylight Autonomy (sDA), Annual Sunlight Exposure (ASE), and Daylight Glare Probability (DGP) for visual comfort, and Predicted Mean Vote (PMV), Predicted Percentage of Dissatisfied (PPD), occupied Thermal Comfort Percentage (occTCP), over-heated percent, under-heated percent, and Standard Effective Temperature (SET) for thermal comfort, all extracted from Grasshopper scripts. The simulation tools are Grasshopper plugins such as Ladybug, Honeybee, and EnergyPlus.
According to the results, some metrics change little along the room depth while others change significantly, so the evaluation grids can be overlapped to determine the comfort zone. The overlapped grids contain 8 metrics, and the pixels that meet all 8 metrics' thresholds define the comfort zone. With these overlapped maps, we can determine the comfort zones inside rooms and locate sedentary areas there. Other parts of the room can serve tasks that are not performed permanently, that need lower or higher amounts of daylight, or for which thermal comfort is less critical to user experience. The results can be compiled into a table to be used as a guideline by designers in the early stages of the design process.
Keywords: occupant experience, office buildings, space depth, thermal comfort, visual comfort
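The overlap step can be sketched directly: each metric produces a boolean pass/fail grid over the analysis points, and the comfort zone is their logical AND. The grids and thresholds below are invented for illustration (two metrics instead of the study's eight).

```python
import numpy as np

rows, cols = 4, 6  # analysis grid along room width (rows) x depth (cols)

# Invented example grids for two of the study's eight metrics: daylight
# autonomy (DA, % of occupied hours >= 300 lux) falls with depth, while
# PPD (% of dissatisfied occupants) is lowest in the middle of the room.
depth = np.tile(np.arange(cols), (rows, 1))
da = 90 - 12 * depth                    # 90, 78, 66, 54, 42, 30 along depth
ppd = 6 + 2 * np.abs(depth - 2)         # 10,  8,  6,  8, 10, 12 along depth

# A pixel belongs to the comfort zone only if it passes every threshold;
# the real study ANDs eight such masks instead of two.
masks = [da >= 50, ppd <= 10]
comfort_zone = np.logical_and.reduce(masks)
print(comfort_zone[0])  # one row of pass/fail values along the room depth
```

Adding further metrics only appends more boolean masks to the list, so the approach scales directly to the eight metrics used in the study.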
Procedia PDF Downloads 183
266 Biosocial Determinants of Maternal and Child Health in Northeast India: A Case Study
Authors: Benrithung Murry
Abstract:
This paper highlights the biosocial determinants of health-seeking behavior in tribal population groups of northeast India, focusing on maternal and child health. The northeastern region of India is a conglomeration of several ethnic groups, most of which are scheduled as tribal groups. A total of 750 ever-married women of reproductive age (15-49 years) from three tribal groups of Nagaland, India were interviewed using a pre-tested and modified maternal health schedule. Data pertaining to the reproductive performance of the mothers and the health status of their children were collected from 12 villages of Dimapur district, Nagaland, India. The sample comprises 212 Angami, 267 Ao, and 271 Sumi women, all belonging to tribal populations of Northeast India. Sex ratios in the 15-49 years age group in these three populations are 1018.18, 1086.69, and 1106.92, respectively. 90% of the households in the study are nuclear families, with about 10% falling below the poverty line as per the cutoffs for India. The female literacy level in these population groups is higher than the national average of 65.46%; however, about 30% of all married women are not engaged in any sort of earnings. Total fertility rates of these populations are alarming (Total Fertility Rate ≥ 6) and far from the replacement fertility level, while infant mortality rates are much lower than the national average of 34 per 1000. The perception and practice of maternal health care in this region is unimpressive despite the availability of medical amenities. Only 3% of mothers in the study reported four antenatal checkups during their last two pregnancies. Other mothers reported 1 to 3 antenatal checkups, but about 25% of them never visited a doctor during the entire pregnancy. About 15% of mothers never took a tetanus injection, while 40% never took iron-folic acid supplements during pregnancy.
Almost half of all women and their husbands do not use birth control measures even for the spacing of children, which has an immense impact on prenatal mortality, mainly due to deliberate abortions: prenatal mortality among the Angami, Ao, and Sumi populations is 44.88, 31.88, and 54.98 per 1000 live births, respectively. The steep decline in fertility levels in most countries is a consequence of the increasing use of modern methods of contraception. However, among users of birth control measures in these populations, most couples adopt them only after they have the desired number of children, so their use has no substantial influence in reducing fertility. It is also seen that the majority of children were only partially vaccinated. With many deliveries taking place at home, many newborns are not administered polio vaccine at birth. Two-thirds of all children do not have complete basic immunization against polio, diphtheria, tetanus, pertussis, tuberculosis (bacillus Calmette-Guerin), and hepatitis, among others. Certain adherence to traditional beliefs and customs, apart from socio-economic factors, is believed to be operating in these populations, determining their health-seeking behavior. While a more in-depth study combining biological, socio-cultural, economic, and genetic factors is suggested, there is an urgent need for intervention in these populations to address the poor maternal and child health status.
Keywords: case study, health behavior, mother and child, Northeast India
Procedia PDF Downloads 129
265 Modeling Diel Trends of Dissolved Oxygen for Estimating the Metabolism in Pristine Streams in the Brazilian Cerrado
Authors: Wesley A. Saltarelli, Nicolas R. Finkler, Adriana C. P. Miwa, Maria C. Calijuri, Davi G. F. Cunha
Abstract:
The metabolism of streams is an indicator of ecosystem disturbance due to the influences of the catchment on the structure of the water bodies. The study of respiration and photosynthesis allows the estimation of energy fluxes through food webs and the analysis of autotrophic and heterotrophic processes. We aimed to evaluate the metabolism of streams located in the Brazilian savannah, the Cerrado (Sao Carlos, SP), by determining and modeling the daily changes of dissolved oxygen (DO) in the water during one year. Three water bodies with minimal anthropogenic interference in their surroundings were selected: Espraiado (ES), Broa (BR), and Canchim (CA). Every two months, water temperature, pH, and conductivity are measured with a multiparameter probe. Nitrogen and phosphorus forms are determined according to standard methods. Canopy cover percentages are estimated in situ with a spherical densiometer. Stream flows are quantified through the conservative tracer (NaCl) method. For the metabolism study, DO (PME MiniDOT) and light (Odyssey Photosynthetic Active Radiation) sensors log data every ten minutes for at least three consecutive days. The reaeration coefficient (k2) is estimated through the tracer gas (SF6) method. Finally, we model the variations in DO concentrations and calculate the rates of gross and net primary production (GPP and NPP) and respiration based on the one-station method described in the literature. Three sampling campaigns were carried out, in October and December 2015 and February 2016 (the next will be in April, June, and August 2016). The results from the first two periods are already available. The mean water temperatures in the streams were 20.0 +/- 0.8 C (Oct) and 20.7 +/- 0.5 C (Dec). In general, electrical conductivity values were low (ES: 20.5 +/- 3.5 uS/cm; BR: 5.5 +/- 0.7 uS/cm; CA: 33 +/- 1.4 uS/cm). The mean pH values were 5.0 (BR), 5.7 (ES), and 6.4 (CA).
The mean concentrations of total phosphorus were 8.0 ug/L (BR), 66.6 ug/L (ES), and 51.5 ug/L (CA), whereas soluble reactive phosphorus concentrations were always below 21.0 ug/L. The BR stream had the lowest concentration of total nitrogen (0.55 mg/L) compared to CA (0.77 mg/L) and ES (1.57 mg/L). The average discharges were 8.8 +/- 6 L/s (ES), 11.4 +/- 3 L/s (BR), and 2.4 +/- 0.5 L/s (CA). The average percentages of canopy cover were 72% (ES), 75% (BR), and 79% (CA). Significant daily changes were observed in the DO concentrations, reflecting predominantly heterotrophic conditions (respiration exceeded gross primary production, with negative net primary production). GPP varied from 0-0.4 g/m2.d (Oct and Dec), and R varied from 0.9-22.7 g/m2.d (Oct) and from 0.9-7 g/m2.d (Dec). The predominance of heterotrophic conditions suggests increased vulnerability of these ecosystems to artificial inputs of organic matter that would demand oxygen. The investigation of metabolism in pristine streams can help define natural reference conditions of trophic state.
Keywords: low-order streams, metabolism, net primary production, trophic state
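The one-station method rests on the diel DO balance dDO/dt = GPP(t) - ER + k2 (DOsat - DO). A minimal forward-Euler sketch of that balance is shown below; the parameter values, the half-sine light curve, and the function names are invented for illustration and are not the streams' measured rates.

```python
import numpy as np

def simulate_do(do0, gpp, er, k2, do_sat, dt_min=10, hours=24):
    """Forward-Euler integration of the one-station DO balance:
    dDO/dt = GPP(t) - ER + k2 * (DO_sat - DO).

    gpp is a function of hour-of-day (g O2/m3/d); er and k2 are constants
    (per day); dt matches the 10-minute logging interval in the study.
    """
    dt = dt_min / (60 * 24)                       # timestep in days
    n = int(hours * 60 / dt_min)
    do = np.empty(n + 1)
    do[0] = do0
    for i in range(n):
        t = (i * dt_min / 60) % 24                # hour of day
        do[i + 1] = do[i] + dt * (gpp(t) - er + k2 * (do_sat - do[i]))
    return do

# Light-limited GPP: zero at night, half-sine over a 12 h photoperiod.
gpp = lambda t: 4.0 * np.sin(np.pi * (t - 6) / 12) if 6 <= t <= 18 else 0.0
diel = simulate_do(do0=7.5, gpp=gpp, er=6.0, k2=15.0, do_sat=8.0)
print(diel.min(), diel.max())  # heterotrophic streams stay below saturation
```

In practice the method is run in reverse: GPP, ER, and k2 are fitted so that the modeled curve matches the logged diel DO series.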
Procedia PDF Downloads 258
264 Population Diversity Studies in Dendrocalamus strictus Roxb. (Nees.) Through Morphological Parameters
Authors: Anugrah Tripathi, H. S. Ginwal, Charul Kainthola
Abstract:
Bamboos are valuable resources with the potential to meet current economic, environmental, and social needs. Bamboo has played a key role in human livelihoods since ancient times. Distributed across diverse areas of the globe, bamboo is an important natural resource for hundreds of millions of people. In some Asian countries and in the northeastern part of India, bamboo is the basis of life on many fronts. India possesses the largest bamboo-bearing area in the world and great species richness, but this rich genetic resource and its diversity have dwindled in natural forests due to forest fire, overexploitation, lack of proper management policies, and gregarious flowering behavior. Bamboos, well known for their peculiar morphology, show variation at many scales. Among the various bamboo species, Dendrocalamus strictus is the most abundant bamboo resource in India; it is a deciduous, solid, and densely tufted bamboo. The species thrives across wide geographical and climatic gradients and consequently exhibits a significant amount of variation among populations of different origins for numerous morphological features. Morphological parameters are the front-line criteria for the selection and improvement of any forestry species. Diversity in eight important morphological characters of D. strictus was studied across 16 populations from wide geographical locations of India following INBAR standards. Among the 16 populations studied, three, viz. DS06 (Gaya, Bihar), DS15 (Mirzapur, Uttar Pradesh), and DS16 (Bhogpur, Pinjore, Haryana), were found superior, with higher mean values for the parametric characters (clump height, number of culms per clump, clump circumference, internode diameter, and internode length) and higher sums of ranks for the non-parametric characters (straightness, disease and pest incidence, and branching pattern). All of these parameters showed ample variation and significant differences among the studied populations. Variation in morphological characters is common in widely distributed species and is usually evident at various levels, both between and within populations. Such characters are of paramount importance for growth, biomass, and quick production gains. The present study also provides a basis for selecting populations on these morphological parameters and an overview of the best-performing populations for growth and biomass accumulation. Some of the studied parameters also suggest mechanisms for selecting and sustainably harvesting clumps through simple silvicultural systems, so that they can be properly managed in homestead gardens for community use as well as by commercial growers to meet the requirements of industries and other stakeholders.
Keywords: Dendrocalamus strictus, homestead garden, gregarious flowering, stakeholders, INBAR
Procedia PDF Downloads 76
263 The Potential for Maritime Tourism: An African Perspective
Authors: Lynn C. Jonas
Abstract:
The African continent is rich in coastal history, heritage, and culture, presenting immense potential for the development of maritime tourism. Shipping and its related components are generally associated with the maritime industry, while tourism's link is through the various forms of nautical tourism. Activities may include cruising, yachting, visits to lighthouses, ports, and harbors, and excursions to related sites of cultural, historical, or ecological significance. Hundreds of years of exploration left a string of shipwrecks along the continent's coastal areas as explorers pursued trade routes between Europe, Africa, and the Far East. These shipwrecks present diving opportunities on artificial reefs and marine heritage to be explored in various ways in the maritime cultural zones. Along the South African coast, for example, six Portuguese shipwrecks highlight the Bartolomeu Dias legacy of exploration, and there are a number of warships in Tanzanian waters. Furthermore, decades of colonial rule have left the continent with an intricate cultural heritage, enmeshed in European languages and architecture and interlinked with, in many instances, the hard-fought independence of littoral states. There is potential for coastal trails to be developed to follow these historical events, as at one point in history France had colonized 35 African states and Britain subsequently colonized 32. Countries such as Cameroon still carry the Francophone-versus-Anglophone legacy of this shift in colonizers. Beyond the colonized history of the African continent, there is the uncomfortable heritage of the slave trade. To a certain extent, these coastal slave trade posts are considered attractive to a niche tourism audience; however, there is potential for education and interpretive measures to grow this as a tourism product.
Notwithstanding these potential opportunities, there are numerous challenges to consider, such as poor maritime infrastructure and maritime security concerns, including piracy and transnational crimes such as weapons and migrant smuggling and drug and human trafficking. These and related maritime issues contribute to concerns over the porous nature of African ocean gateways, adding to security concerns for tourists. This theoretical paper considers these trends and how they may contribute to the growth and development of maritime tourism on the African continent. African perspectives on the growth potential of tourism in coastal and marine spaces are needed, particularly ones that embrace the continent's tumultuous past as part of its heritage. This has the potential to contribute to a sense of ownership of opportunities.
Keywords: coastal trade routes, maritime tourism, shipwrecks, slave trade routes
Procedia PDF Downloads 19
262 Circulating Public Perception on Agroforestry: Discourse Networks Analysis Using Social Media and Online News Media in Four Countries of the Sahel Region
Authors: Luisa Müting, Wisnu Harto Adiwijoyo
Abstract:
Agroforestry systems transform the agricultural landscapes of the Sahel region of Africa, providing food and farming products consumed for subsistence or sold for income. In the increasingly dry climate of the Sahel, the spread of agroforestry practices is integral to policymakers' efforts to counteract land degradation and restore soils in the region. Several measures on agroforestry practices have been implemented in the region by governmental and non-governmental institutions in recent years. However, despite these efforts, past research shows that awareness of how policies and interventions are consumed and perceived by the public remains low. Therefore, interpreting public policy dilemmas by analyzing public perception of agroforestry concepts and practices is necessary. Public perceptions and discourses can be an essential driver of, or constraint on, the adoption of agroforestry practices in the region. Thus, understanding the public discourse behavior of crucial stakeholders could assist policymakers in developing inclusive and contextual policies relevant to agroforestry adoption in the Sahel region. To answer how information about agroforestry spreads and is perceived by the public, this research draws on online conversation data, as internet usage has increased drastically over the past decade, with 33 percent of the population now connected to the internet. Social media data from Facebook were gathered daily between April 2021 and April 2022 in Djibouti, Senegal, Mali, and Nigeria, countries selected for their share of active internet users compared to other countries in the Sahel region. A systematic methodology was applied to the extracted social media data using discourse network analysis (DNA). The data were then clustered by type of agroforestry practice, sentiment, and country.
Additionally, this research extracted text data from online news media during the same period to pinpoint events related to agroforestry. Preliminary results indicate that tree management, crop and livestock integration, diversification of species and genetic resources, and interactions and productivity across the agricultural system are the most notable themes in agroforestry-related conversations within the four countries. Approximately 84 percent of the discussions were still dominated by big actors, such as NGOs or government bodies. Furthermore, as a subject of communication within agroforestry discourse, the Great Green Wall initiative generated almost 60 percent positive sentiment within the captured social media data, achieving significantly greater outreach than general agroforestry topics. This study provides scholars and policymakers with a springboard for further research or policy design on agroforestry in the four countries of the Sahel region, based on systematically gathered, previously uncaptured data from the internet.
Keywords: Sahel, Djibouti, Senegal, Mali, Nigeria, social network analysis, public discourse analysis, sentiment analysis, content analysis, social media, online news, agroforestry, land restoration
Procedia PDF Downloads 101
261 Radar on Bike: Coarse Classification based on Multi-Level Clustering for Cyclist Safety Enhancement
Authors: Asma Omri, Noureddine Benothman, Sofiane Sayahi, Fethi Tlili, Hichem Besbes
Abstract:
Cycling, a popular mode of transportation, can also be perilous due to cyclists' vulnerability to collisions with vehicles and obstacles. This paper presents an innovative radar-based cyclist safety system designed to offer real-time collision risk warnings to cyclists. The system incorporates a low-power radar sensor affixed to the bicycle and connected to a microcontroller, and it leverages radar point cloud detections, a clustering algorithm, and a supervised classifier. These algorithms are optimized for efficiency to run on TI's AWR1843BOOST radar, using a coarse classification approach that distinguishes between cars, trucks, two-wheeled vehicles, and other objects. To enhance the performance of the clustering stage, we propose a 2-level clustering approach that builds on density-based spatial clustering of applications with noise (DBSCAN). The objective is to first cluster objects by velocity and then refine the analysis by clustering by position: the first level identifies groups of objects with similar velocities and movement patterns, and the second level refines the analysis by considering the spatial distribution of these objects, taking the first-level clusters as input. Our proposed technique surpasses the classical DBSCAN algorithm on clustering quality metrics, including homogeneity, completeness, and V-measure. Relevant cluster features are extracted and used to classify objects with an SVM classifier. Potential obstacles are identified based on their velocity and proximity to the cyclist. To optimize the system, we used the View of Delft dataset for hyperparameter selection and SVM classifier training. The system's performance was assessed using our own collected dataset of radar point clouds synchronized with a camera on an Nvidia Jetson Nano board.
The radar-based cyclist safety system is a practical solution that can be easily installed on any bicycle and connected to smartphones or other devices, offering real-time feedback and navigation assistance to cyclists. We conducted experiments to validate the system's feasibility, achieving 85% accuracy on the classification task. This system has the potential to significantly reduce the number of accidents involving cyclists and enhance their safety on the road.
Keywords: 2-level clustering, coarse classification, cyclist safety, warning system based on radar technology
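The 2-level idea (DBSCAN on velocity, then DBSCAN on position within each velocity group) can be sketched with scikit-learn. The function name, epsilon values, and synthetic scene below are assumptions for illustration, not the paper's tuned pipeline.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def two_level_cluster(points, eps_v=1.0, eps_xy=5.0, min_samples=3):
    """2-level clustering of radar detections [x, y, radial velocity]:
    level 1 groups points by velocity, level 2 refines each velocity
    group by spatial position; -1 marks noise."""
    labels = -np.ones(len(points), dtype=int)
    v_labels = DBSCAN(eps=eps_v, min_samples=min_samples).fit_predict(
        points[:, 2:3])
    next_id = 0
    for v in set(v_labels) - {-1}:
        idx = np.flatnonzero(v_labels == v)
        xy_labels = DBSCAN(eps=eps_xy, min_samples=min_samples).fit_predict(
            points[idx, :2])
        for c in set(xy_labels) - {-1}:
            labels[idx[xy_labels == c]] = next_id
            next_id += 1
    return labels

# Synthetic scene: two objects moving at ~0 m/s and two at ~10 m/s,
# each pair spatially far apart, so four distinct clusters are expected.
rng = np.random.default_rng(1)
clouds = [np.c_[rng.normal(cx, 0.5, (6, 1)), rng.normal(cy, 0.5, (6, 1)),
                rng.normal(v, 0.1, (6, 1))]
          for cx, cy, v in [(0, 0, 0), (50, 50, 0), (0, 50, 10), (50, 0, 10)]]
pts = np.vstack(clouds)
labels = two_level_cluster(pts)
print(len(set(labels) - {-1}))  # 4
```

A single-level spatial DBSCAN would merge co-located detections with different velocities; splitting on velocity first is what separates, say, a parked car from a passing one.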
Procedia PDF Downloads 79
260 The Advancement of Smart Cushion Product and System Design Enhancing Public Health and Well-Being at Workplace
Authors: Dosun Shin, Assegid Kidane, Pavan Turaga
Abstract:
According to the National Institutes of Health, living a sedentary lifestyle leads to a number of health issues, including increased risk of cardiovascular disease, type 2 diabetes, obesity, and certain types of cancer. This project brings together experts in multiple disciplines, combining product design, sensor design, algorithms, and health intervention studies to develop a product and system that helps reduce the amount of time spent sitting at the workplace. This paper illustrates ongoing improvements to the prototypes the research team developed in initial research, including working prototypes with a software application that were developed and demonstrated for users. Additional modifications were made to improve functionality, aesthetics, and ease of use, which are discussed in this paper. Extending the foundations created in the initial phase, our approach sought to further improve the product by conducting additional human factors research, studying deficiencies in competitive products, testing various materials and forms, developing working prototypes, and obtaining feedback from additional potential users. The solution consists of an aesthetically pleasing seat cover cushion that easily attaches to common office chairs found in most workplaces, ensuring a wide variety of people can use the product. The cushion discreetly contains sensors that track when the user sits on the chair, sending information to a phone app that reminds users to stand up and move around after sitting for a set amount of time. The team also analyzed typical office aesthetics and selected materials, colors, and forms that complemented the working environment. Comfort and ease of use remained high priorities as the design team sought to provide a product and system that integrates into the workplace.
As the research team continues to test, improve, and implement this solution for the sedentary workplace, it seeks to create a viable product that acts as an impetus for a more active workday and lifestyle, further decreasing the proliferation of chronic disease and health issues for sedentary workers. This paper illustrates in detail the processes of engineering, product design, methodology, and testing results.
Keywords: anti-sedentary work behavior, new product development, sensor design, health intervention studies
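The reminder behavior described above (accumulate continuous sitting time from periodic sensor readings, fire a stand-up prompt after a set threshold) can be sketched as a small state machine. The class name, sampling scheme, and 30-minute default are invented for illustration; the paper does not specify its trigger logic at this level of detail.

```python
class SitTimer:
    """Minimal sketch of the cushion app's reminder logic (names invented):
    track the start of a continuous sitting bout and signal a reminder
    once it exceeds a configurable threshold."""

    def __init__(self, threshold_s=30 * 60):
        self.threshold_s = threshold_s
        self.sitting_since = None  # timestamp when the current bout began

    def update(self, occupied, now):
        """Feed one pressure-sensor sample; return True when a reminder is due."""
        if not occupied:
            self.sitting_since = None      # standing up resets the bout
            return False
        if self.sitting_since is None:
            self.sitting_since = now       # a new sitting bout begins
        return now - self.sitting_since >= self.threshold_s

timer = SitTimer(threshold_s=1800)
print(timer.update(True, 0.0))       # just sat down: no reminder
print(timer.update(True, 1800.0))    # 30 minutes later: reminder fires
```

On the device, `update` would be driven by the sensor's sampling loop, with the reminder pushed to the phone app over the microcontroller's radio link.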
Procedia PDF Downloads 158
259 Enhancement of Hardness Related Properties of Grey Cast Iron Powder Reinforced AA7075 Metal Matrix Composites Through T6 and T8 Heat Treatments
Authors: S. S. Sharma, P. R. Prabhu, K. Jagannath, Achutha Kini U., Gowri Shankar M. C.
Abstract:
In the present global scenario, aluminum alloys are gaining the attention of many innovators as competing structural materials for automotive and space applications. Compared with other candidate alloys, the 7xxx series aluminum alloys in particular have been studied intensively because of benefits such as moderate strength, good deformation characteristics, excellent corrosion resistance, and affordable cost. 7075 Al-alloys have been used in the transportation industry for the fabrication of several types of automobile parts, such as wheel covers, panels, and structures. It is expected that substituting such aluminum alloys for steels will result in great improvements in energy economy, durability, and recyclability. However, it is necessary to improve the strength and formability of aluminium alloys at low temperatures for still better applications. Aluminum-zinc-magnesium alloys, with or without other alloying additions, denoted the 7xxx series, are medium-strength heat-treatable alloys. Cu, Mn, and Si are the other solute elements that contribute to the improvement in mechanical properties achievable by selecting and tailoring the heat treatment process. By applying suitable treatments such as age hardening, or cold-deformation-assisted heat treatments known as low-temperature thermomechanical treatments (LTMT), the desired properties can be incorporated. T6 is the age hardening or precipitation hardening process with an artificial aging cycle, whereas T8 comprises an LTMT treatment aged artificially after X% cold deformation. When cold deformation is applied after solution treatment, hardness-related properties such as wear resistance, yield and ultimate strength, and toughness increase at the expense of ductility. During precipitation hardening, both the hardness and strength of the samples increase. A decreasing peak hardness with increasing aging temperature is the well-known behavior of age-hardenable alloys.
The peak hardness increases further when room-temperature deformation is combined with age hardening, known as thermomechanical treatment. Considering these aspects, heat treatments were performed and hardness, tensile strength, wear resistance, and the distribution pattern of the reinforcement in the matrix were evaluated. Increases in hardness of 2 to 2.5 times and 3 to 3.5 times, compared with the as-cast composite, are reported for the age hardening and LTMT treatments, respectively. Better distribution of reinforcements in the matrix, a nearly two-fold increase in strength levels, and up to a five-fold increase in wear resistance were also observed in the present study.
Keywords: reinforcement, precipitation, thermomechanical, dislocation, strain hardening
Procedia PDF Downloads 311
258 High Throughput LC-MS/MS Studies on Sperm Proteome of Malnad Gidda (Bos Indicus) Cattle
Authors: Kerekoppa Puttaiah Bhatta Ramesha, Uday Kannegundla, Praseeda Mol, Lathika Gopalakrishnan, Jagish Kour Reen, Gourav Dey, Manish Kumar, Sakthivel Jeyakumar, Arumugam Kumaresan, Kiran Kumar M., Thottethodi Subrahmanya Keshava Prasad
Abstract:
Spermatozoa are highly specialized, transcriptionally and translationally inactive haploid male gametes. Understanding the sperm proteome is indispensable for exploring the mechanisms of sperm motility and fertility. Although there are a large number of human sperm proteomic studies, in-depth proteomic information on Bos indicus spermatozoa is not yet well established. Therefore, we profiled the sperm proteome of the indigenous cattle breed Malnad Gidda (Bos indicus) using high-resolution mass spectrometry. In the current study, two semen ejaculates from 3 breeding bulls were collected using the artificial vagina method. Spermatozoa were isolated using 45% Percoll purification. Protein was extracted using lysis buffer containing 2% sodium dodecyl sulphate (SDS), and the protein concentration was estimated. Fifty micrograms of protein from each individual were pooled for downstream processing. The pooled sample was fractionated using SDS-polyacrylamide gel electrophoresis, followed by in-gel digestion. The peptides were subjected to C18 StageTip clean-up and analyzed on an Orbitrap Fusion Tribrid mass spectrometer interfaced with a Proxeon Easy-nLC II system (Thermo Scientific, Bremen, Germany). We identified a total of 6773 peptides with 28426 peptide spectral matches, belonging to 1081 proteins. Gene Ontology analysis was carried out to determine the biological processes, molecular functions, and cellular components associated with the sperm proteins. The biological processes chiefly represented in our data are oxidation-reduction (5%), spermatogenesis (2.5%), and spermatid development (1.4%). The highlighted molecular functions are ATP and GTP binding (14%), and the most prominent cellular components in our data were the nuclear membrane (1.5%), acrosomal vesicle (1.4%), and motile cilium (1.3%). Seventeen percent of the sperm proteins identified in this study were involved in metabolic pathways.
To the best of our knowledge, these data represent the first total sperm proteome of the indigenous cattle breed Malnad Gidda. We believe that our preliminary findings could provide a strong base for a future understanding of bovine sperm proteomics.
Keywords: Bos indicus, Malnad Gidda, mass spectrometry, spermatozoa
Procedia PDF Downloads 196
257 Model-Driven and Data-Driven Approaches for Crop Yield Prediction: Analysis and Comparison
Authors: Xiangtuo Chen, Paul-Henry Cournéde
Abstract:
Crop yield prediction is a paramount issue in agriculture. The main idea of this paper is to find an efficient way to predict corn yield from meteorological records. The prediction models used in this paper can be classified into model-driven and data-driven approaches, according to their modeling methodologies. Model-driven approaches are based on mechanistic crop modeling: they describe crop growth in interaction with the environment as a dynamical system. However, the calibration of such a dynamical system is difficult, because it turns out to be a multidimensional non-convex optimization problem. An original contribution of this paper is to propose a statistical methodology, Multi-Scenarios Parameters Estimation (MSPE), for the parametrization of potentially complex mechanistic models from a new type of dataset (climatic data and final yields in many situations). It is tested with CORNFLO, a crop model for maize growth. The data-driven approach to yield prediction, on the other hand, is free of the complex biophysical processes but imposes strict requirements on the dataset. A second contribution of the paper is the comparison of the model-driven method with classical data-driven methods. For this purpose, we consider two classes of regression methods: methods derived from linear regression (Ridge and Lasso regression, principal components regression, and partial least squares regression) and machine learning methods (random forest, k-nearest neighbors, artificial neural networks, and SVM regression). The dataset consists of 720 records of corn yield at the county scale provided by the United States Department of Agriculture (USDA), together with the associated climatic data. A 5-fold cross-validation process and two accuracy metrics, the root mean square error of prediction (RMSEP) and the mean absolute error of prediction (MAEP), were used to evaluate the crop prediction capacity.
The results show that among the data-driven approaches, random forest is the most robust and generally achieves the best prediction error (MAEP 4.27%). It also outperforms our model-driven approach (MAEP 6.11%). However, the ability to calibrate the mechanistic model from easily accessible datasets offers several side perspectives: the mechanistic model can potentially help to underline the stresses suffered by the crop or to identify biological parameters of interest for breeding purposes. For this reason, an interesting perspective is to combine these two types of approaches.
Keywords: crop yield prediction, crop model, sensitivity analysis, parameter estimation, particle swarm optimization, random forest
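The two accuracy metrics used above can be sketched in a few lines of plain Python. This is a minimal illustration only; in particular, it assumes (as is common, though not stated in the abstract) that MAEP is expressed as a percentage of the mean observed yield.

```python
import math

def rmsep(y_true, y_pred):
    """Root mean square error of prediction, in the units of the yield."""
    n = len(y_true)
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n)

def maep(y_true, y_pred):
    """Mean absolute error of prediction, here as a percentage of the
    mean observed yield (an assumption; the abstract reports MAEP in %)."""
    n = len(y_true)
    mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / n
    return 100.0 * mae / (sum(y_true) / n)

# Toy example: county yields (t/ha) and cross-validated predictions.
observed = [9.8, 10.4, 8.9, 11.2]
predicted = [9.5, 10.9, 9.1, 10.8]
errors = (rmsep(observed, predicted), maep(observed, predicted))
```

In a full evaluation these functions would be applied to the held-out fold of each of the 5 cross-validation splits and the results averaged.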
Procedia PDF Downloads 231
256 Airon Project: IoT-Based Agriculture System for the Optimization of Irrigation Water Consumption
Authors: África Vicario, Fernando J. Álvarez, Felipe Parralejo, Fernando Aranda
Abstract:
The irrigation systems of traditional agriculture, such as gravity-fed irrigation, waste a great deal of water because, in general, there is no control over the amount of water supplied relative to the water actually needed. The AIRON project tries to solve this problem by implementing an IoT-based system that instruments the irrigation plots with sensors, so that the state of the crops and the amount of water used for irrigation can be monitored remotely. The IoT system consists of a sensor network that measures soil humidity, weather conditions (temperature, relative humidity, wind, and solar radiation), and the irrigation water flow. Communication between this network and a central gateway is conducted by means of long-range wireless links whose technology depends on the characteristics of the irrigation plot. The main objective of the AIRON project is to deploy an IoT sensor network in two different plots of the irrigation community of Aranjuez, in the Spanish region of Madrid. The first plot is 2 km away from the central gateway, so LoRa has been used as the base communication technology. The challenge in this plot is the absence of mains electric power, so devices with energy-saving modes have had to be used to maximize the lifetime of the external batteries. An ESP32 SoC board with a LoRa module is employed in this case to gather data from the sensor network and send them to a gateway consisting of a Raspberry Pi with a LoRa HAT. The second plot is located 18 km from the gateway, a range that hampers the use of LoRa technology. To establish reliable communication in this case, the Long-Term Evolution (LTE) standard is used, which makes it possible to reach much greater distances over the cellular network. As mains electric power is available in this plot, a Raspberry Pi has been used instead of the ESP32 board to collect the sensor data.
All data received from the two plots are stored on a proprietary server located at the irrigation management company's headquarters. The analysis of these data by means of machine learning algorithms, currently under development, should allow a short-term prediction of the irrigation water demand that would significantly reduce the waste of this increasingly valuable natural resource. The major finding of this work is the real possibility of deploying a remote sensing system for irrigated plots using commercial off-the-shelf (COTS) devices, easily scalable and adaptable to design requirements such as the distance to the control center or the availability of mains electric power at the site.
Keywords: internet of things, irrigation water control, LoRa, LTE, smart farming
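On the gateway side, each LoRa or LTE message must be parsed into named readings before being stored on the server. The sketch below is purely illustrative and not the project's actual wire format: it assumes a hypothetical comma-separated ASCII payload carrying a node identifier followed by four sensor values.

```python
def decode_payload(raw: bytes) -> dict:
    """Decode a hypothetical ASCII payload of the form
    'node_id,soil_moisture,temperature,humidity,flow'."""
    node, soil, temp, rh, flow = raw.decode("ascii").split(",")
    return {
        "node": node,
        "soil_moisture_pct": float(soil),  # volumetric soil moisture, %
        "temperature_c": float(temp),      # air temperature, degrees C
        "humidity_pct": float(rh),         # relative humidity, %
        "flow_l_min": float(flow),         # irrigation water flow, L/min
    }

# One reading as it might arrive from the ESP32 node in the first plot.
reading = decode_payload(b"plot1,34.5,21.0,55.2,3.8")
```

A real deployment would likely use a compact binary encoding to save LoRa airtime, but the decoding step on the gateway is structurally the same.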
Procedia PDF Downloads 84
255 Superparamagnetic Core Shell Catalysts for the Environmental Production of Fuels from Renewable Lignin
Authors: Cristina Opris, Bogdan Cojocaru, Madalina Tudorache, Simona M. Coman, Vasile I. Parvulescu, Camelia Bala, Bahir Duraki, Jeroen A. Van Bokhoven
Abstract:
The tremendous achievements in the development of modern society, embodied in ever more sophisticated materials and systems, are largely based on non-renewable resources. Consequently, after more than two centuries of intensive development, we are faced with, among other things, decreasing fossil fuel reserves, an increased impact of greenhouse gases on the environment, and economic effects caused by fluctuations in oil and mineral resource prices. The use of biomass may solve part of these problems, and recent analyses have demonstrated that, from the perspective of reducing carbon dioxide emissions, its valorization may bring important advantages, provided that genetically modified fast-growing trees or wastes are used as the primary sources. In this context, the abundance and complex structure of lignin offer various possibilities for exploitation. However, its transformation into fuels or chemicals involves complex chemistry: the cleavage of C-O and C-C bonds and the alteration of functional groups. Chemistry has offered various solutions in this sense; however, despite intense work, many drawbacks still limit industrial application. The proposed technologies have mainly considered homogeneous catalysts, meaning expensive noble-metal-based systems that are hard to recover at the end of the reaction. Also, the reactions were carried out in organic solvents, which are no longer acceptable from an environmental point of view. To avoid these problems, the concept of this work was to investigate the synthesis of superparamagnetic core-shell catalysts for the fragmentation of lignin directly in the aqueous phase. The magnetic nanoparticles were covered with a nanoshell of an oxide (niobia) with a double role: to protect the magnetic nanoparticles and to generate a proper (acidic) catalytic function. On this composite, cobalt nanoparticles were deposited in order to catalyze C-C bond splitting.
To this end, we developed a protocol to prepare multifunctional, magnetically separable nano-composite Co@Nb2O5@Fe3O4 catalysts. We also established an analytical protocol for the identification and quantification of the fragments resulting from lignin depolymerization in both the liquid and solid phases. The fragmentation of various lignins occurred on the prepared materials in high yields and with very good selectivity towards the desired fragments. Optimization of the catalyst composition indicated a cobalt loading of 4 wt% as optimal. Working at 180 °C and 10 atm H2, this catalyst allowed a lignin conversion of up to 60%, leading to a mixture containing over 96% C20-C28 and C29-C37 fragments, which were then completely fragmented to C12-C16 in a second stage. The investigated catalysts were completely recyclable, and no leaching of their constituent elements was detected by inductively coupled plasma optical emission spectrometry (ICP-OES).
Keywords: superparamagnetic core-shell catalysts, environmental production of fuels, renewable lignin, recyclable catalysts
Procedia PDF Downloads 328
254 Ethicality of Algorithmic Pricing and Consumers’ Resistance
Authors: Zainab Atia, Hongwei He, Panagiotis Sarantopoulos
Abstract:
Over the past few years, firms have witnessed a massive increase in sophisticated algorithmic deployment, which has become quite pervasive in modern society. With the wide availability of data for retailers, the ability to track consumers using algorithmic pricing has become an integral option on online platforms. As more companies transform their businesses and rely on massive technological advancement, algorithmic pricing systems have attracted attention and seen wide adoption, with many benefits and challenges accompanying their usage. With the overall aim of increasing organizational profits, algorithmic pricing has become a sound option, enabling suppliers to cut costs, allowing better services, improving efficiency and product availability, and enhancing the overall consumer experience. The adoption of algorithms in retail has been pioneered and widely discussed in the literature across varied fields, including marketing, computer science, engineering, economics, and public policy. What is more alarming today, however, is the limited understanding of this technology and of its ethical influence on consumers' perceptions and behaviours. Indeed, due to algorithmic ethical concerns, consumers are sometimes reluctant to share their personal data with retailers, which reduces retention and leads to negative consumer outcomes. This, in turn, raises the question of whether firms can still achieve consumer acceptance of such technologies while minimizing the ethical transgressions that accompany their deployment. Given the modest amount of recent research in marketing and consumer behaviour, the current research advances the literature on algorithmic pricing, pricing ethics, consumers' perceptions, and price fairness.
With its empirical focus, this paper aims to contribute to the literature by applying the distinction between the two common types of algorithmic pricing, dynamic and personalized, while measuring their relative effects on consumers' behavioural outcomes. From a managerial perspective, this research offers significant implications for providing a better human-machine interactive environment (whether online or offline) that improves both overall business performance and consumer wellbeing. By allowing more transparent pricing systems, businesses can harness ethical strategies that foster consumer loyalty and extend post-purchase behaviour. Thus, by finding the correct balance of pricing measures, whether dynamic or personalized (or both), managers can approach consumers more ethically while taking their expectations and responses into account.
Keywords: algorithmic pricing, dynamic pricing, personalized pricing, price ethicality
Procedia PDF Downloads 91
253 Measurement of Fatty Acid Changes in Post-Mortem Belowground Carcass (Sus-scrofa) Decomposition: A Semi-Quantitative Methodology for Determining the Post-Mortem Interval
Authors: Nada R. Abuknesha, John P. Morgan, Andrew J. Searle
Abstract:
Information regarding the post-mortem interval (PMI) is vital in criminal investigations to establish a time frame when reconstructing events. The PMI is defined as the time period that has elapsed between the occurrence of death and the discovery of the corpse. Adipocere, commonly referred to as 'grave wax', is formed when post-mortem adipose tissue is converted into a solid material heavily composed of fatty acids. Adipocere is of interest to forensic anthropologists, as its formation can slow down the decomposition process. Therefore, analysing the changes in fatty acid patterns during early decomposition may make it possible to estimate the period of burial, and hence the PMI. The current study investigated the fatty acid composition and patterns in buried pig fat tissue, in an attempt to determine whether particular patterns of fatty acid composition are associated with the duration of burial and hence may be used to estimate the PMI. Adipose tissue from the abdominal region of the domestic pig (Sus scrofa) was used to model the human decomposition process. A 17 x 20 cm piece of pork belly was buried in a shallow artificial grave, and weekly samples (n = 3) of the buried pig fat tissue were collected over an 11-week period. The marker fatty acids palmitic (C16:0), oleic (C18:1n-9), and linoleic (C18:2n-6) acid were extracted from the buried pig fat tissue and analysed as fatty acid methyl esters by gas chromatography. Levels of the marker fatty acids were quantified from their respective standards. The concentrations of C16:0 (69.2 mg/mL) and C18:1n-9 (44.3 mg/mL) at time zero exhibited significant fluctuations during the burial period: levels rose (to 116 and 60.2 mg/mL, respectively) and then fell from the second week onward to reach 19.3 and 18.3 mg/mL, respectively, at week 6.
Levels showed another increase at week 9 (66.3 and 44.1 mg/mL, respectively), followed by a gradual decrease at week 10 (20.4 and 18.5 mg/mL, respectively). A sharp increase was observed in the final week (131.2 and 61.1 mg/mL, respectively). Conversely, the levels of C18:2n-6 remained more or less constant throughout the study. In addition to the fluctuations in concentration, several new fatty acids appeared in the later weeks, while other fatty acids that were detectable in the time-zero sample were lost. There are thus several possible ways to use fatty acid analysis as a basic technique for approximating the PMI: the quantification of marker fatty acids, and the detection of selected fatty acids that either disappear or appear during the burial period. This pilot study indicates that this may be a potential semi-quantitative methodology for determining the PMI. Ideally, the analysis of particular fatty acid patterns in the early stages of decomposition could become an additional tool to the techniques already available for estimating the PMI of a corpse.
Keywords: adipocere, fatty acids, gas chromatography, post-mortem interval
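The two strands of the proposed semi-quantitative method, quantifying marker fatty acids against their standards and tracking which acids appear or disappear between weeks, can be sketched as follows. This is an illustrative sketch only: the single-point external-standard formula is a common GC quantification convention and is an assumption here, not a detail given in the abstract.

```python
def quantify(peak_area_sample, peak_area_standard, conc_standard_mg_ml):
    """Single-point external-standard quantification (mg/mL), assuming a
    linear detector response through the origin."""
    return conc_standard_mg_ml * peak_area_sample / peak_area_standard

def profile_changes(week_a_acids, week_b_acids):
    """Fatty acids that appeared or were lost between two sampling weeks."""
    appeared = sorted(set(week_b_acids) - set(week_a_acids))
    lost = sorted(set(week_a_acids) - set(week_b_acids))
    return appeared, lost

# Hypothetical chromatogram: sample peak area 500 vs. a 10 mg/mL standard
# with peak area 1000.
c16_conc = quantify(500, 1000, 10.0)  # -> 5.0 mg/mL
appeared, lost = profile_changes(
    {"C16:0", "C18:1n-9", "C18:2n-6"},
    {"C16:0", "C18:2n-6", "C14:0"},
)
```

In practice a multi-point calibration curve would normally replace the single-point formula, but the appeared/lost bookkeeping is the same.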
Procedia PDF Downloads 131
252 One Species into Five: Nucleo-Mito Barcoding Reveals Cryptic Species in 'Frankliniella Schultzei Complex': Vector for Tospoviruses
Authors: Vikas Kumar, Kailash Chandra, Kaomud Tyagi
Abstract:
The insect order Thysanoptera comprises small insects commonly called thrips. Among insect vectors, thrips are the only ones capable of transmitting tospoviruses (genus Tospovirus, family Bunyaviridae), which affect various crops. Currently, fifteen species of the subfamily Thripinae (Thripidae) have been reported as vectors of tospoviruses. Frankliniella schultzei, reported to act as a vector for at least five tospoviruses, has been suspected to be a complex of more than one species. It is a long-standing unresolved issue: two species, F. schultzei Trybom and F. sulphurea Schmutz, were erected from South Africa and Sri Lanka, respectively. These two species were considered valid until 1968, when sulphurea was treated as a colour morph (pale form) and synonymised under schultzei (dark form); nevertheless, some thrips workers have continued to consider them valid species. Parallel studies have indicated that the brown form of schultzei is a vector of tospoviruses while the yellow form is a non-vector; however, recent studies have also documented yellow populations as vectors. In view of all these facts, it is highly important to establish clearly whether these colour forms represent true species or merely different populations with different vector capacities, and whether there is hidden diversity in the 'Frankliniella schultzei species complex'. In this study, we examine the complex through a molecular lens, with DNA data from India, Australia, and Africa. A total of fifty-five specimens were collected from diverse locations in India and Australia. We generated molecular data using partial fragments of the mitochondrial cytochrome c oxidase I gene (mtCOI) and the 28S rRNA gene. The COI dataset comprised seventy-four sequences, of which fifty-five were generated in the current study and the others retrieved from NCBI.
All four tree-construction methods (neighbor-joining, maximum parsimony, maximum likelihood, and Bayesian analysis) yielded the same tree topology and recovered five cryptic species with high genetic divergence. For rDNA, there were forty-five sequences, of which thirty-nine were generated in the current study and the others retrieved from NCBI. The four tree-building methods yielded four cryptic species with high bootstrap support values/posterior probabilities. Here we could not recover one cryptic species from South Africa, as we could not generate rDNA data from South Africa and no rDNA sequences from the African region were available in the database. The results of multiple species delimitation methods (barcode index numbers, automatic barcode gap discovery, general mixed Yule-coalescent, and Poisson tree processes) also supported the phylogenetic data, producing 5 and 4 Molecular Operational Taxonomic Units (MOTUs) for the mtCOI and 28S datasets, respectively. These results indicate that F. sulphurea may be a valid species; however, more morphological and molecular data are required on specimens from the type localities of these two species, along with comparison with the type specimens.
Keywords: DNA barcoding, species complex, thrips, species delimitation
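Distance-based delimitation of the kind used above rests on pairwise sequence divergence. As a rough, hypothetical sketch (not the actual BIN/ABGD/GMYC/PTP algorithms), the following computes uncorrected p-distances on aligned COI fragments and groups sequences into MOTUs with a simple greedy threshold rule; the default 3% threshold is a commonly cited COI barcoding convention, assumed here rather than taken from the study.

```python
def p_distance(seq_a, seq_b):
    """Uncorrected pairwise distance between two aligned, equal-length sequences."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to equal length")
    diffs = sum(a != b for a, b in zip(seq_a, seq_b))
    return diffs / len(seq_a)

def greedy_motus(sequences, threshold=0.03):
    """Greedy grouping: a sequence joins the first cluster that already
    contains a member within the divergence threshold. A toy stand-in for
    formal delimitation methods such as ABGD or PTP."""
    clusters = []
    for name, seq in sequences.items():
        for cluster in clusters:
            if any(p_distance(seq, sequences[m]) <= threshold for m in cluster):
                cluster.append(name)
                break
        else:
            clusters.append([name])
    return clusters
```

Real delimitation methods model the transition between within-species and between-species divergence rather than fixing a single cutoff, which is why the study compares several of them.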
Procedia PDF Downloads 128
251 Stochastic Pi Calculus in Financial Markets: An Alternate Approach to High Frequency Trading
Authors: Jerome Joshi
Abstract:
The paper presents the modelling of financial markets using the stochastic pi calculus. The stochastic pi calculus is mainly used for biological applications; however, its features also promote its use in financial markets, most prominently in high frequency trading. The trading system can be broadly classified into the exchange, market makers or intermediary traders, and fundamental traders. The exchange is where the trade is executed, and the two types of traders act as market participants in the exchange. High frequency trading, with its complex networks and numerous market participants (intermediary and fundamental traders), is difficult to model. It involves participants exploiting complex trading algorithms and high execution speeds to carry out large volumes of trades. To earn a profit on each trade, a trader must frequently be at the top of the order book, executing or processing multiple trades simultaneously. This requires highly automated systems as well as the right sentiment to outperform other traders. However, always being at the top of the book is not best for the trader either, since this behaviour caused the outbreak of the 'hot-potato effect', which in turn demands a better and more efficient model. The model's characteristics should be such that it is flexible and has diverse applications; therefore, a model with applications in a similarly difficult field should be chosen. It should also be flexible in simulation, so that it can be extended and adapted for future research, and equipped with tools that allow it to be used effectively in finance. In this respect, the stochastic pi calculus seems an ideal fit for financial applications, owing to its track record in biology.
It is an extension of the original pi calculus and acts as a solution and an alternative to the previously flawed approach, provided its application is suitably extended. The model focuses on solving the problem that led to the 'Flash Crash', namely the hot-potato effect. It consists of small subsystems that can be integrated to form a large system, and it is designed in such a way that the behaviour of 'noise traders' is treated as a random process, or noise, in the system. To better understand the problem, the modelling takes a broad view encompassing the trader, the system, and the market participants. The paper goes on to explain trading in exchanges, types of traders, high frequency trading, the Flash Crash, the hot-potato effect, the evaluation of orders, and time delay in further detail. Future work will need to focus on the calibration of the modules so that they interact correctly with one another. This model, with its application extended, would provide a basis for further research in finance and computing.
Keywords: concurrent computing, high frequency trading, financial markets, stochastic pi calculus
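The notion of "being at the top of the order book" can be made concrete with a toy price-time order book. This is an illustrative sketch, not part of the paper's stochastic pi calculus model: it only tracks the best bid, the best ask, and the spread, omitting quantities and order matching for brevity.

```python
import heapq

class OrderBook:
    """Toy limit order book: the best bid is the highest buy price,
    the best ask the lowest sell price."""

    def __init__(self):
        self._bids = []  # max-heap, stored as negated prices
        self._asks = []  # min-heap

    def submit(self, side, price):
        if side == "buy":
            heapq.heappush(self._bids, -price)
        elif side == "sell":
            heapq.heappush(self._asks, price)
        else:
            raise ValueError("side must be 'buy' or 'sell'")

    def best_bid(self):
        return -self._bids[0] if self._bids else None

    def best_ask(self):
        return self._asks[0] if self._asks else None

    def spread(self):
        if self._bids and self._asks:
            return self.best_ask() - self.best_bid()
        return None

book = OrderBook()
for side, price in [("buy", 99.5), ("buy", 99.8), ("sell", 100.1), ("sell", 100.4)]:
    book.submit(side, price)
```

In the hot-potato scenario, intermediaries repeatedly cross this spread to pass inventory among themselves, which is the behaviour the paper's process model aims to capture.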
Procedia PDF Downloads 77
250 Design and Development of Graphene Oxide Modified by Chitosan Nanosheets Showing pH-Sensitive Surface as a Smart Drug Delivery System for Control Release of Doxorubicin
Authors: Parisa Shirzadeh
Abstract:
Traditional drug delivery, in which drugs are taken by patients in multiple doses at specified intervals, no longer meets the needs of modern drug delivery. Today we deal with a huge number of recombinant peptide and protein drugs and analogues of the body's hormones, most of which are made with genetic engineering techniques, and many of which are used to treat critical diseases such as cancer. Because of the limitations of the traditional method, researchers have sought ways to solve its problems, and these efforts led to controlled drug release systems, which have many advantages: with controlled release, the drug concentration in the body is kept at a defined level and reached in a shorter time. Compared with carbon nanotubes, graphene is a natural material that is biodegradable, non-toxic, and less expensive, making it cost-effective for industrialization. Moreover, the highly reactive and extensive surfaces of graphene sheets make graphene easier to modify than carbon nanotubes. Graphene oxide is usually synthesized with concentrated oxidizers such as sulfuric acid, nitric acid, and potassium permanganate, based on the Hummers method. Compared with pristine graphene, the resulting graphene oxide is heavier and carries carboxyl, hydroxyl, and epoxy groups; it is therefore very hydrophilic, dissolving easily in water to give a stable solution. Because the hydroxyl, carboxyl, and epoxy groups on the surface are highly reactive, they can react with other functional groups such as amines, esters, and polymers, bringing new features to the graphene surface.
In fact, the creation of hydroxyl, carboxyl, and epoxy groups, that is, graphene oxidation, is the first step in introducing other functional groups onto the graphene surface. Chitosan is a natural polymer that does not cause toxicity in the body; thanks to its chemical structure and its OH and NH groups, it is suitable for binding to graphene oxide and increasing its solubility in aqueous solutions. In this work, graphene oxide (GO) was covalently modified with chitosan (CS) and developed for the controlled release of doxorubicin (DOX). GO was produced by the Hummers method under acidic conditions and then chlorinated with oxalyl chloride to increase its reactivity towards amines. Next, in the presence of chitosan, an amidation reaction was performed to form amide linkages, and doxorubicin was attached to the carrier surface by π-π interaction in phosphate buffer. GO, GO-CS, and GO-CS-DOX were characterized by FT-IR, Raman, TGA, and SEM, and the loading and release capacities were determined by UV-visible spectroscopy. The loading results showed a high DOX absorption capacity (99%), and pH-dependent release of DOX from the GO-CS nanosheets was identified at pH 5.3 and 7.4, with a faster release rate under acidic conditions.
Keywords: graphene oxide, chitosan, nanosheet, controlled drug release, doxorubicin
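The loading and release figures obtained from UV-visible spectroscopy can be sketched as simple mass balances. This is an illustrative calculation only; the formulas below (loading efficiency from the unbound drug left in the supernatant, cumulative release as a percentage of the loaded drug) are standard conventions assumed here, not details given in the abstract.

```python
def loading_efficiency(dox_added_mg, dox_unbound_mg):
    """Percentage of added DOX bound to the GO-CS carrier."""
    return 100.0 * (dox_added_mg - dox_unbound_mg) / dox_added_mg

def cumulative_release(released_mg_per_interval, dox_loaded_mg):
    """Cumulative release (%) after each sampling interval."""
    total, curve = 0.0, []
    for released in released_mg_per_interval:
        total += released
        curve.append(100.0 * total / dox_loaded_mg)
    return curve

# Hypothetical numbers: 10 mg DOX added, 0.1 mg left unbound (99% loading);
# release sampled over three intervals at acidic pH.
efficiency = loading_efficiency(10.0, 0.1)
acidic_curve = cumulative_release([2.0, 1.5, 1.0], 9.9)
```

Comparing such cumulative curves at pH 5.3 and pH 7.4 is what establishes the pH dependence reported above.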
Procedia PDF Downloads 120
249 Leveraging Power BI for Advanced Geotechnical Data Analysis and Visualization in Mining Projects
Authors: Elaheh Talebi, Fariba Yavari, Lucy Philip, Lesley Town
Abstract:
The mining industry generates vast amounts of data, necessitating robust data management systems and advanced analytics tools to support better decision-making in the development of mining production and the maintenance of safety. This paper highlights the advantages of Power BI, a powerful business intelligence tool, over traditional Excel-based approaches for effectively managing and harnessing mining data. Power BI enables professionals to connect and integrate multiple data sources, ensuring real-time access to up-to-date information, and its interactive visualizations and dashboards offer an intuitive interface for exploring and analyzing geotechnical data. Advanced analytics, a collection of data analysis techniques for improving decision-making, leverages some of the most complex techniques in data science and is used for everything from detecting data errors and ensuring data accuracy to directing the development of future project phases. However, while Power BI is a robust tool, specific visualizations required by geotechnical engineers may face limitations. This paper therefore studies the use of Python or R programming within a Power BI dashboard to enable advanced analytics, additional functionality, and customized visualizations. The dashboard provides comprehensive tools for analyzing and visualizing key geotechnical data, including spatial representation on maps, field and laboratory test results, and subsurface rock and soil characteristics. Advanced visualizations such as borehole logs and stereonets were implemented with Python programming within the Power BI dashboard, enhancing the understanding and communication of geotechnical information. Moreover, the dashboard's flexibility allows additional data and visualizations to be incorporated according to the project scope and the available data, such as pit design, rockfall analyses, rock mass characterization, and drone data.
This further enhances the dashboard's usefulness in future projects, including the operation, development, closure, and rehabilitation phases, and helps minimize the need for multiple software programs in a project. This geotechnical dashboard in Power BI serves as a user-friendly solution for analyzing, visualizing, and communicating both new and historical geotechnical data, aiding informed decision-making and efficient project management throughout the various project stages. Its ability to generate dynamic reports and share them collaboratively with clients further enhances decision-making processes and facilitates effective communication within geotechnical projects in the mining industry.
Keywords: geotechnical data analysis, Power BI, visualization, decision-making, mining industry
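In a Power BI Python visual, the selected fields arrive in the script as a pandas DataFrame named dataset; the same kind of preprocessing can be sketched in plain Python. The borehole records and field names below are hypothetical, used only to illustrate the sort of aggregation (logged thickness per lithology) that might feed a borehole-log visual.

```python
# Hypothetical borehole interval records, shaped like rows that might
# arrive from a Power BI dataset (one row per logged interval).
intervals = [
    {"hole_id": "BH01", "from_m": 0.0, "to_m": 3.5, "lithology": "soil"},
    {"hole_id": "BH01", "from_m": 3.5, "to_m": 12.0, "lithology": "shale"},
    {"hole_id": "BH02", "from_m": 0.0, "to_m": 5.0, "lithology": "soil"},
    {"hole_id": "BH02", "from_m": 5.0, "to_m": 9.5, "lithology": "shale"},
]

def thickness_by_lithology(rows):
    """Total logged thickness (m) per lithology across all boreholes."""
    totals = {}
    for row in rows:
        lith = row["lithology"]
        totals[lith] = totals.get(lith, 0.0) + (row["to_m"] - row["from_m"])
    return totals

summary = thickness_by_lithology(intervals)
```

Inside Power BI, the equivalent one-liner on the provided DataFrame would typically be a pandas groupby, with matplotlib rendering the result in the visual.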
Procedia PDF Downloads 92