Search results for: Adult dataset
115 A Foucauldian Analysis of Child Play: Case Study of a Preschool in the United States
Authors: Meng Wang
Abstract:
Historically, young members of society (children) have been oppressed by adults through direct violent acts. Direct violence was evident in rampant child labor and child maltreatment cases. Since the United Nations' acknowledgment of the rights of children, it has been publicly believed that children are protected against direct physical violence. Nevertheless, this paper argues from Foucauldian and disability studies standpoints that, as in earlier times, children today are oppressed objects in the context of child play, which is constructed by adults to substitute for direct violence in regulating children. In particular, this paper suggests that, on the one hand, preschool play is a new way for adults to oppress preschoolers and regulate society as a whole; on the other hand, preschoolers are taught how to play as an acquired skill and master self-regulation through play. There is a line of contemporary research that centers on child play from a social constructivist perspective. Yet current teaching practices pertaining to child play, including guided play and free play, in fact serve the interests of adults and society at large. By acknowledging and deconstructing the prevalence of 'evidence-based best practice' in the early childhood education field within Western society, a reconstruction of the child-adult power relation could be achieved and alternative truths could be found in early childhood education. To support this argument, an ongoing observational case study is being conducted in a preschool setting in the United States. The age range of the children is 2.5 to 4 years. Approximately 10 children (5 boys) are participating in this case study. Observation is conducted throughout the weekdays as children follow the classroom routine with a lead and an assistant teacher. The classroom teachers are interviewed about their classroom management strategies. Preliminary findings of this case study suggest that preschool teachers tended to use scenarios from preschoolers' dramatic play to impart core cultural values to young children. These values were pre-determined by adults. In addition, if young children failed to follow teachers' guidance on playing in the 'correct' way, they ran the risk of being excluded from the play scenario by peers and adults. Furthermore, this study tends to indicate that, through child play, preschoolers are obliged to develop an internal violence system, that is, a self-regulation skill to regulate their own behavior; if this internal system is judged unestablished based on various assessments by adults, there are potential consequences of negative labeling and disabling of young children by adults. In conclusion, this paper applies Foucauldian analysis to the context of child play. At present, within preschool, child play is not as free as it seems to be. Young children are expected to perform cultural tasks through play activities designed by adults. Adults use child play as a technology of governmentality to further predict and regulate future society at large.
Keywords: child play, developmentally appropriate practice, DAP, poststructuralism, technologies of governmentality
Procedia PDF Downloads 155
114 Electrical Decomposition of Time Series of Power Consumption
Authors: Noura Al Akkari, Aurélie Foucquier, Sylvain Lespinats
Abstract:
Load monitoring is a management process for energy consumption aimed at energy savings and energy efficiency. Non-Intrusive Load Monitoring (NILM) is one method of load monitoring used for disaggregation purposes. NILM is a technique for identifying individual appliances based on the analysis of whole-residence data retrieved from the main power meter of the house. Our NILM framework starts with data acquisition, followed by data preprocessing, then event detection and feature extraction, and finally general appliance modeling and identification. The event detection stage is a core component of the NILM process, since event detection techniques lead to the extraction of appliance features. Appliance features are required for the accurate identification of household devices. In this research work, we aim to develop a new event detection methodology with accurate load disaggregation to extract appliance features. The extracted time-domain features are used to tune general appliance models for the appliance identification and classification steps. We use unsupervised algorithms such as Dynamic Time Warping (DTW). The proposed method relies on detecting the areas of operation of each residential appliance based on power demand, and then detecting the times at which each selected appliance changes state. In order to fit the capabilities of existing smart meters in practice, we work on low-sampling-rate data with a frequency of 1/60 Hz. The data is simulated with the Load Profile Generator software (LPG), which had not previously been considered for NILM purposes in the literature. LPG is a numerical software tool that uses behaviour simulation of the people inside a house to generate residential energy consumption data. The proposed event detection method targets low-consumption loads that are difficult to detect. It also facilitates the extraction of the specific features used for general appliance modeling. In addition, the identification process includes unsupervised techniques such as DTW. To the best of our knowledge, few unsupervised techniques have been employed with low-sampling-rate data, in comparison to the many supervised techniques used for such cases. We extract the power interval within which the selected appliance operates, along with a time vector of the values delimiting the state transitions of the appliance. Appliance signatures are then formed from the extracted power, geometrical, and statistical features. Afterwards, these signatures are used to tune general model types for appliance identification using unsupervised algorithms. The method is evaluated using both data simulated with LPG and the real-world Reference Energy Disaggregation Dataset (REDD). For that, we compute performance metrics based on the confusion matrix, considering accuracy, precision, recall, and error rate. The performance of our methodology is then compared with other detection techniques previously reported in the literature, such as detection techniques based on statistical variations and abrupt changes (Variance Sliding Window and Cumulative Sum).
Keywords: electrical disaggregation, DTW, general appliance modeling, event detection
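As an illustration of the DTW matching step described above, here is a minimal sketch of the classic dynamic-programming DTW distance applied to 1-D power signatures; the toy sequences, appliance templates, and nearest-template decision are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def dtw_distance(x, y):
    """Classic dynamic-programming DTW between two 1-D power sequences."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            D[i, j] = cost + min(D[i - 1, j],      # insertion
                                 D[i, j - 1],      # deletion
                                 D[i - 1, j - 1])  # match
    return D[n, m]

# Toy example: match an extracted signature against two hypothetical templates.
signature = np.array([0, 120, 118, 121, 0], dtype=float)   # watts, 1/60 Hz samples
fridge    = np.array([0, 115, 117, 0], dtype=float)
kettle    = np.array([0, 2000, 1995, 0], dtype=float)
label = "fridge" if dtw_distance(signature, fridge) < dtw_distance(signature, kettle) else "kettle"
print(label)  # -> fridge
```

Because DTW tolerates stretching along the time axis, a signature sampled at 1/60 Hz can still be matched against a template whose state transitions fall on slightly different samples.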
Procedia PDF Downloads 78
113 Gut Microbial Dynamics in a Mouse Model of Inflammation-Linked Carcinogenesis as a Result of Diet Supplementation with Specific Mushroom Extracts
Authors: Alvarez M., Chapela M. J., Balboa E., Rubianes D., Sinde E., Fernandez de Ana C., Rodríguez-Blanco A.
Abstract:
The gut microbiota plays an important role, as gut inflammation can contribute to colorectal cancer development; however, this role is still not fully understood, and tools able to prevent this progression are yet to be developed. The main objective of this study was to monitor the effects of a mushroom extracts formulation on the gut microbial community composition of an azoxymethane (AOM)/dextran sodium sulfate (DSS) mouse model of inflammation-linked carcinogenesis. For the in vivo study, 41 adult male mice of the C57BL/6 strain were obtained. In 36 of them, a state of colon carcinogenesis was induced by a single intraperitoneal administration of AOM at a dose of 12.5 mg/kg; the control group animals received the same volume of 0.9% saline instead. DSS is an extremely toxic sulfated polysaccharide that causes chronic inflammation of the colon mucosa, favoring the appearance of severe colitis and the production of tumors induced by AOM. Induction by AOM/DSS is an interesting platform for chemopreventive intervention studies. Here, the model was used to monitor gut microbiota changes resulting from supplementation with a specific mushroom extracts formulation previously shown to have prebiotic activity. The animals were divided into three groups: (i) cancer + mushroom extracts formulation experimental group, administered the MicoDigest2.0 mushroom extracts formulation developed by Hifas da Terra S.L., dissolved in drinking water at an estimated concentration of 100 mg/ml; (ii) cancer control group, administered normal water without any treatment; and (iii) healthy control group, animals in which cancer was not induced and which received no treatment in drinking water. This treatment was maintained for a period of 3 months, after which the animals were sacrificed to obtain tissues that were subsequently analyzed to verify the effects of the mushroom extract formulation. A microbiological analysis was carried out to compare the microbial communities present in the intestines of the mice belonging to each of the study groups. For this, massive sequencing by molecular analysis of the 16S gene was used (Ion Torrent technology). Initially, DNA extraction and metagenomic libraries were prepared using the 16S Metagenomics kit, always following the manufacturer's instructions. This kit amplifies 7 of the 9 hypervariable regions of the 16S gene, which are then sequenced. Finally, the data obtained were compared with a database that makes it possible to determine the degree of similarity of the sequences obtained with a wide range of bacterial genomes. The results showed that, similarly to certain natural compounds that prevent colorectal tumorigenesis, the mushroom formulation enriched the Firmicutes and Proteobacteria phyla and depleted Bacteroidetes. Therefore, it was demonstrated that consumption of the mushroom extracts formulation developed could promote the recovery of the microbial balance that is disrupted in this mouse model of carcinogenesis. More preclinical and clinical studies are needed to validate this promising approach.
Keywords: carcinogenesis, microbiota, mushroom extracts, inflammation
Procedia PDF Downloads 149
112 Species Profiling of Scarab Beetles with the Help of Light Trap in Western Himalayan Region of Uttarakhand
Authors: Ajay Kumar Pandey
Abstract:
White grub (Coleoptera: Scarabaeidae), locally known as Kurmula, Pagra, or Chinchu, is a major destructive pest in the western Himalayan region of the Uttarakhand state of India. Various crops such as cereals (upland paddy, wheat, and barley), vegetables (capsicum, cabbage, tomato, cauliflower, carrot, etc.) and some pulses (pigeon pea, green gram, black gram) are grown with limited availability of primary resources. Among the various limitations to successful cultivation of these crops, white grub has proved a major constraint for all crops grown in the hilly areas. The losses incurred due to white grubs are huge in the case of commercial crops like sugarcane, groundnut, potato, maize, and upland rice. Moreover, it has proved a major constraint on potato production in the mid and higher hills of India. Adults emerge in May-June following the onset of the monsoon and thereafter defoliate apple, apricot, plum, and walnut during the night, while 2nd and 3rd instar grubs feed on the live roots of cultivated as well as non-cultivated crops from August to January. A survey was conducted in the hilly (Pauri and Tehri) as well as plain areas (Haridwar district) of Uttarakhand state. Beetles were collected from various locations from August to September over five consecutive years with the help of light traps and directly from host plants. Grubs were also collected by excavating one-square-meter areas at different locations and were reared in the laboratory to identify the adults. During collection, diseased or dead cadavers were also collected, brought to the laboratory, and the causal organisms identified. A total of 25 white grub species were identified, of which Holotrichia longipennis, Anomala dimidiata, Holotrichia lineatopennis, Maladera insanabilis, and Brahmina sp. form a pest complex in different areas of Uttarakhand, where they cause severe damage to various crops. During the survey, it was observed that white grub beetles vary in their preference of host plant, and even in their choice of the fruit and leaves of a host plant. It was also observed that a white grub species identified as Lepidiota mansueta Burmeister was causing severe havoc to the sugarcane crop grown in the major sugarcane-growing belt of Haridwar district. The study further revealed that Bacillus cereus, Beauveria bassiana, Metarhizium anisopliae, Steinernema, and Heterorhabditis are the major disease-causing agents in the immature stages of white grub under the rain-fed conditions of Uttarakhand; they caused 15.55 to 21.63 percent natural mortality of grubs, with an average of 18.91 percent. Among these microorganisms, B. cereus was found to be significantly more efficient (7.03 percent mortality) than the entomopathogenic fungi (3.80 percent mortality) and nematodes (3.20 percent mortality).
Keywords: Lepidiota, profiling, Uttarakhand, whitegrub
Procedia PDF Downloads 220
111 Healthcare Utilization and Costs of Specific Obesity Related Health Conditions in Alberta, Canada
Authors: Sonia Butalia, Huong Luu, Alexis Guigue, Karen J. B. Martins, Khanh Vu, Scott W. Klarenbach
Abstract:
Obesity-related health conditions impose a substantial economic burden on payers due to increased healthcare use. Estimates of the healthcare resource use and costs associated with obesity-related comorbidities are needed to inform policies and interventions targeting these conditions. Methods: Adults living with obesity were identified (a procedure-related body mass index code for class 2/3 obesity between 2012 and 2019 in Alberta, Canada; excluding those with bariatric surgery), and outcomes were compared over 1 year (2019/2020) between those who had and did not have specific obesity-related comorbidities. The probability of using a healthcare service (based on the odds ratio of a zero [OR-zero] cost) was compared; 95% confidence intervals (CI) were reported. Logistic regression and a generalized linear model with log link and gamma distribution were used for total healthcare cost comparisons ($CDN); cost ratios and estimated cost differences (95% CI) were reported. Potential socio-demographic and clinical confounders were adjusted for, and incremental cost differences were representative of a referent case. Results: A total of 220,190 adults living with obesity were included; 44% had hypertension, 25% had osteoarthritis, 24% had type-2 diabetes, 17% had cardiovascular disease, 12% had insulin resistance, 9% had chronic back pain, and 4% of females had polycystic ovarian syndrome (PCOS). The probability of hospitalization, ED visits, and ambulatory care was higher in those with each of the following obesity-related comorbidities than in those without: chronic back pain (hospitalization: 1.8-times [OR-zero: 0.57 [0.55/0.59]] / ED visit: 1.9-times [OR-zero: 0.54 [0.53/0.56]] / ambulatory care visit: 2.4-times [OR-zero: 0.41 [0.40/0.43]]), cardiovascular disease (2.7-times [OR-zero: 0.37 [0.36/0.38]] / 1.9-times [OR-zero: 0.52 [0.51/0.53]] / 2.8-times [OR-zero: 0.36 [0.35/0.36]]), osteoarthritis (2.0-times [OR-zero: 0.51 [0.50/0.53]] / 1.4-times [OR-zero: 0.74 [0.73/0.76]] / 2.5-times [OR-zero: 0.40 [0.40/0.41]]), type-2 diabetes (1.9-times [OR-zero: 0.54 [0.52/0.55]] / 1.4-times [OR-zero: 0.72 [0.70/0.73]] / 2.1-times [OR-zero: 0.47 [0.46/0.47]]), hypertension (1.8-times [OR-zero: 0.56 [0.54/0.57]] / 1.3-times [OR-zero: 0.79 [0.77/0.80]] / 2.2-times [OR-zero: 0.46 [0.45/0.47]]), PCOS (not significant / 1.2-times [OR-zero: 0.83 [0.79/0.88]] / not significant), and insulin resistance (1.1-times [OR-zero: 0.88 [0.84/0.91]] / 1.1-times [OR-zero: 0.92 [0.89/0.94]] / 1.8-times [OR-zero: 0.56 [0.54/0.57]]). After fully adjusting for potential confounders, the total healthcare cost ratio was higher in those with each of the following obesity-related comorbidities than in those without: chronic back pain (1.54-times [1.51/1.56]), cardiovascular disease (1.45-times [1.43/1.47]), osteoarthritis (1.36-times [1.35/1.38]), type-2 diabetes (1.30-times [1.28/1.31]), hypertension (1.27-times [1.26/1.28]), PCOS (1.08-times [1.05/1.11]), and insulin resistance (1.03-times [1.01/1.04]). Conclusions: Adults with obesity who have specific obesity-related health conditions have a higher probability of healthcare use and incur greater costs than those without these comorbidities; incremental costs are larger when other obesity-related health conditions are not adjusted for. In the referent case, hypertension was the costliest condition (44% had this condition, with an additional annual cost of $715 [$678/$753]). If these findings hold for the Canadian population, hypertension in persons with obesity represents an estimated additional annual healthcare cost of $2.5 billion among adults living with obesity (based on an adult obesity rate of 26%). The results of this study can inform decision-making on investment in interventions that are effective in treating obesity and its complications.
Keywords: administrative data, healthcare cost, obesity-related comorbidities, real world evidence
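To make the cost model concrete, the following is a minimal sketch of a gamma GLM with log link of the kind named in the Methods, using statsmodels; the data frame, covariates, and coefficient values are synthetic stand-ins, not the Alberta administrative data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic stand-in for the administrative data (illustration only).
rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "hypertension": rng.integers(0, 2, n),
    "age": rng.integers(18, 90, n),
    "female": rng.integers(0, 2, n),
})
mu = np.exp(6.0 + 0.24 * df.hypertension + 0.01 * df.age)   # mean cost via log link
df["cost"] = rng.gamma(shape=2.0, scale=mu / 2.0)            # gamma-distributed costs

# Gamma GLM with log link: exp(coefficient) is the adjusted cost ratio.
fit = smf.glm("cost ~ hypertension + age + female", data=df,
              family=sm.families.Gamma(link=sm.families.links.Log())).fit()
print(np.exp(fit.params["hypertension"]))   # cost ratio, hypertension vs. none
```

With a log link, exponentiated coefficients read directly as multiplicative cost ratios, matching the "1.27-times" style of reporting above.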
Procedia PDF Downloads 149
110 Deep Learning Framework for Predicting Bus Travel Times with Multiple Bus Routes: A Single-Step Multi-Station Forecasting Approach
Authors: Muhammad Ahnaf Zahin, Yaw Adu-Gyamfi
Abstract:
Bus transit is a crucial component of transportation networks, especially in urban areas. Any intelligent transportation system must have accurate real-time information on bus travel times, since it minimizes waiting times for passengers at stations along a route, improves service reliability, and significantly optimizes travel patterns. Bus agencies must enhance the quality of their information service to serve their passengers better and draw in more travelers, since people waiting at bus stops are frequently anxious about when the bus will arrive at their starting point and when it will reach their destination. To address this issue, different models for predicting bus travel times have been developed recently, but most of them focus on smaller road networks because of their relatively subpar performance on large networks in high-density urban areas. This paper develops a deep learning-based architecture using a single-step multi-station forecasting approach to predict average bus travel times for numerous routes, stops, and trips on a large-scale network using heterogeneous bus transit data collected from the GTFS database. Data was gathered over one week from multiple bus routes in Saint Louis, Missouri. In this study, a Gated Recurrent Unit (GRU) neural network was employed to predict the mean vehicle travel times for different hours of the day at multiple stations along multiple routes. The historical time steps and the prediction horizon were set to 5 and 1, respectively, meaning that five hours of historical average travel time data were used to predict the average travel time for the following hour. Spatial and temporal information and the historical average travel times were captured from the dataset as model input parameters. The station distances and sequence numbers were used as adjacency matrices for the spatial inputs, and the time of day (hour) was considered for the temporal inputs. Other inputs, including volatility information such as the standard deviation and variance of journey durations, were also included in the model to make it more robust. The model's performance was evaluated using the mean absolute percentage error (MAPE). The observed prediction errors for the various routes, trips, and stations remained consistent throughout the day. The results showed that the developed model could predict travel times more accurately during peak traffic hours, with a MAPE of around 14%, and performed less accurately during the latter part of the day. In the context of a complicated transportation network in a high-density urban area, the model demonstrated its applicability to real-time travel time prediction for public transportation and ensured a high quality of predictions.
Keywords: gated recurrent unit, mean absolute percentage error, single-step forecasting, travel time prediction
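A minimal sketch of the single-step GRU setup described above (5 historical steps in, 1 step out); the feature count, layer width, and training details are illustrative assumptions rather than the authors' exact architecture.

```python
import numpy as np
import tensorflow as tf

# Shapes follow the abstract: 5 historical hourly steps in, 1 step out.
HIST, HORIZON, N_FEATURES = 5, 1, 4   # e.g. avg travel time, hour, distance, stop sequence

model = tf.keras.Sequential([
    tf.keras.Input(shape=(HIST, N_FEATURES)),
    tf.keras.layers.GRU(64),
    tf.keras.layers.Dense(HORIZON),   # next-hour average travel time
])
model.compile(optimizer="adam", loss="mae",
              metrics=[tf.keras.metrics.MeanAbsolutePercentageError()])

# Toy data standing in for the GTFS-derived features.
X = np.random.rand(256, HIST, N_FEATURES).astype("float32")
y = np.random.rand(256, HORIZON).astype("float32")
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
```

Reporting the MeanAbsolutePercentageError metric during training mirrors the MAPE evaluation used in the study.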
Procedia PDF Downloads 72
109 Rumen Epithelium Development of Bovine Fetuses and Newborn Calves
Authors: Juliana Shimara Pires Ferrão, Letícia Palmeira Pinto, Francisco Palma Rennó, Francisco Javier Hernandez Blazquez
Abstract:
The ruminant stomach is a complex, multi-chambered organ. Although the true stomach (abomasum) is fully differentiated and functional at birth, the same does not occur with the rumen chamber. At this moment, rumen papillae are small or nonexistent. The papillae only fully develop after weaning and during calf growth. Papillae development and ruminal epithelium specialization during fetal growth and at birth must be two interdependent processes that prepare the rumen to adapt to adult ruminant feeding. The microscopic study of the rumen epithelium at these early phases of life is important to understand how this structure prepares the rumen for the subsequent weaning process and its functional activation. Samples of ruminal mucosa of bovine fetuses (110- and 150-day-old) and newborn calves were collected (dorsal and ventral portions) and processed for light and electron microscopy and immunohistochemistry. The basal cell layer of the stratified squamous epithelium in the different ruminal portions of the fetuses was thicker than in the same portions of the newborn calves. The superficial and intermediate epithelial layers of the 150-day-old fetuses were thicker than those found at the other two ages studied. At this age (150 days), the dermal papillae begin to invade the intermediate epithelial layer, which gradually disappears in newborn calves. At birth, the ruminal papillae project from the epithelial surface, probably through regression of the epithelial cells (transitory cells) surrounding the dermal papillae. The PCNA cell proliferation index (%) was calculated for all epithelial samples. The 150-day-old fetuses showed increased cell proliferation in the basal cell layer (dorsal portion: 84.2%; ventral portion: 89.8%) compared with the other ages studied. Newborn calves showed an intermediate index (dorsal portion: 65.1%; ventral portion: 48.9%), whereas the 110-day-old fetuses had the lowest proliferation index (dorsal portion: 57.2%; ventral portion: 20.6%). Regarding the transitory epithelium, the 110-day-old fetuses showed the lowest proliferation index (dorsal portion: 44.6%; ventral portion: 20.1%), the 150-day-old fetuses showed an intermediate index (dorsal portion: 57.5%; ventral portion: 71.1%), and newborn calves presented a higher index (dorsal portion: 75.1%; ventral portion: 19.6%). Under TEM, the 110- and 150-day-old fetuses presented a thicker, poorly organized basal cell layer, with large nuclei and dense cytoplasm. In newborn calves, the basal cell layer was more organized and had fewer layers, but was typically similar in both regions of the rumen. In the transitory epithelium, fetuses displayed larger cells than those found in newborn calves, with less electron-dense cytoplasm than that found in the basal cells. The ruminal dorsal portion has an overall higher cell proliferation rate than the ventral portion. Thus, we can infer that the dorsal portion may have higher cell activity than the ventral portion during ruminal development. Moreover, the basal cell layer is thicker in the 110- and 150-day-old fetuses than in the newborn calves. The transitory epithelium, which is much reduced at birth, may have a structural support function for the developing dermal papillae. When it regresses or is sheared off, the papillae are "carved out" from the surrounding epithelial layer.
Keywords: bovine, calf, epithelium, fetus, hematoxylin-eosin, immunohistochemistry, TEM, rumen
Procedia PDF Downloads 388
108 Creative Mapping Landuse and Human Activities: From the Inventories of Factories to the History of the City and Citizens
Authors: R. Tamborrino, F. Rinaudo
Abstract:
Digital technologies offer the possibility of effectively converting historical archives into instruments of knowledge that provide a guide to the interpretation of historical phenomena. The digital conversion and management of such documents make it possible to add other sources in a unique and coherent model that permits the intersection of different data, opening up new interpretations and understandings. Urban history uses, among other sources, the inventories that register human activities in a specific space (e.g., cadastres, censuses, etc.). The geographic localisation of that information within cartographic supports allows the comprehension and visualisation of specific relationships between different historical realities, registering both the urban space and the people living there. These links, which merge data and documentation of different natures through a new organisation of the information, can suggest new interpretations of other related events. For all these kinds of analysis, GIS platforms today represent the most appropriate answer. The design of the related databases is the key to realising the ad-hoc instrument that facilitates the analysis and intersection of data of different origins. Moreover, GIS has become the digital platform to which other kinds of data visualisation can be added. This research deals with the industrial development of Turin at the beginning of the 20th century. A census of factories carried out just prior to WWI provides the opportunity to test the potential of GIS platforms for analysing urban landscape modifications during the first industrial development of the town. The inventory includes data about location, activities, and people. The GIS is shaped in a creative way, linking different sources and digital systems with the aim of creating a new type of platform conceived as an interface integrating different kinds of data visualisation. The data processing allows this information to be linked to the urban space, and the growth of the city at that time to be visualised. The sources related to the urban landscape development of that period are of different natures. The emerging necessity to build, enlarge, modify, and join different buildings to boost industrial activities, in line with their fast development, is recorded in the official permissions delivered by the municipality and now stored in the Historical Archive of the Municipality of Turin. Those documents, which are reports and drawings, contain numerous data on the buildings themselves, including the block where the plot is located, the district, and the people involved, such as the owner, the investor, and the engineer or architect designing the industrial building. All these collected data offer the possibility of first re-building the process of change of the urban landscape using GIS and 3D modelling technologies, thanks to access to the drawings (2D plans, sections, and elevations) showing the previous and the planned situations. Furthermore, they give access to information for different queries of the linked dataset that could be useful for different kinds of research, such as economic, biographical, architectural, or demographic studies. By superimposing a layer of the present city, the past meets the present-day industrial heritage, and people meet urban history.
Keywords: digital urban history, census, digitalisation, GIS, modelling, digital humanities
Procedia PDF Downloads 191
107 4D Monitoring of Subsurface Conditions in Concrete Infrastructure Prior to Failure Using Ground Penetrating Radar
Authors: Lee Tasker, Ali Karrech, Jeffrey Shragge, Matthew Josh
Abstract:
Monitoring the deterioration of concrete infrastructure is an important assessment task for engineers, yet detecting deterioration within a structure can be difficult. If a failure crack, or fluid seepage through such a crack, is observed from the surface, the source location of the deterioration is often not known. Geophysical methods are used to assist engineers in assessing the subsurface condition of materials. Techniques such as Ground Penetrating Radar (GPR) provide information on the location of buried infrastructure such as pipes and conduits, the positions of reinforcements within concrete blocks, and regions of voids/cavities behind tunnel lining. This experiment underlines the application of GPR as an infrastructure-monitoring tool to highlight and monitor regions of possible deterioration within a concrete test wall due to an increase in the generation of fractures, in particular during a period of applied load up to and including structural failure. A three-point load was applied to a concrete test wall of dimensions 1700 × 600 × 300 mm in increments of 10 kN, until the wall structurally failed at 107.6 kN. At each increment of applied load, the load was kept constant and the wall was scanned using GPR along profile lines across the wall surface. The measured radar amplitude responses of the GPR profiles at each applied load interval were reconstructed into depth-slice grids and presented at fixed depth-slice intervals. The corresponding depth-slices were subtracted between datasets to compare the radar amplitude responses and to monitor for changes in the radar amplitude response. At lower values of applied load (0-60 kN), few changes were observed in the differences of radar amplitude response between datasets. At higher values of applied load (100 kN), closer to structural failure, larger differences in radar amplitude response between datasets were highlighted in the GPR data, with up to a 300% increase in radar amplitude response at some locations between the 0 kN and 100 kN radar datasets. Distinct regions were observed in the 100 kN difference dataset (i.e., 100 kN - 0 kN) close to the location of the final failure crack. The key regions observed were a conical feature located between approximately 3.0-12.0 cm depth from the surface and a vertical linear feature located at approximately 12.1-21.0 cm depth from the surface. These key regions have been interpreted as locations exhibiting an increased change in pore space due to increased mechanical loading, locations displaying an increase in the volume of micro-cracks, or locations showing the development of a larger macro-crack. The experiment showed that GPR is a useful geophysical monitoring tool to assist engineers in highlighting and monitoring regions of large change in radar amplitude response that may be associated with locations of significant internal structural change (e.g., crack development). GPR is a non-destructive technique that is fast to deploy in a production setting. GPR can assist in reducing risk and costs in future infrastructure maintenance programs by highlighting and monitoring locations within the structure exhibiting large changes in radar amplitude over calendar time.
Keywords: 4D GPR, engineering geophysics, ground penetrating radar, infrastructure monitoring
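The depth-slice differencing described above can be illustrated with a short sketch; the grid sizes, the synthetic amplitude cubes, and the 100% anomaly threshold are assumptions for demonstration only.

```python
import numpy as np

# Depth-slice grids of radar amplitude: shape (n_slices, ny, nx), one cube per load step.
# Synthetic stand-ins for the 0 kN and 100 kN surveys described above.
rng = np.random.default_rng(1)
amp_0kN   = rng.random((20, 60, 170)) + 1.0      # baseline (unloaded) survey
amp_100kN = amp_0kN.copy()
amp_100kN[3:12, 25:35, 80:90] *= 4.0             # a 300% local increase, as near failure

# Slice-by-slice difference and percent change relative to the unloaded state.
diff = amp_100kN - amp_0kN
pct_change = 100.0 * diff / amp_0kN

# Flag cells whose amplitude response grew by more than 100% (possible cracking).
anomaly_mask = pct_change > 100.0
for k in range(anomaly_mask.shape[0]):
    if anomaly_mask[k].any():
        print(f"depth slice {k}: {anomaly_mask[k].sum()} anomalous cells")
```

Repeating the subtraction for every load increment against the 0 kN baseline yields the 4D (space plus load/time) picture of where amplitude change concentrates.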
Procedia PDF Downloads 179
106 Online Monitoring and Control of Continuous Mechanosynthesis by UV-Vis Spectrophotometry
Authors: Darren A. Whitaker, Dan Palmer, Jens Wesholowski, James Flaherty, John Mack, Ahmad B. Albadarin, Gavin Walker
Abstract:
Traditional mechanosynthesis has been performed by either ball milling or manual grinding. However, neither of these techniques allows the easy application of process control. The temperature may change unpredictably due to friction in the process, so the amount of energy transferred to the reactants is intrinsically non-uniform. Recently, it has been shown that the use of twin-screw extrusion (TSE) can overcome these limitations. Additionally, TSE provides a platform for continuous synthesis or manufacturing, as it is an open-ended process, with feedstocks at one end and product at the other. Several materials, including metal-organic frameworks (MOFs), co-crystals, and small organic molecules, have been produced mechanochemically using TSE. The described advantages of TSE are offset by drawbacks such as increased process complexity (a large number of process parameters) and variation in feedstock flow impacting product quality. To handle these drawbacks, this study utilizes UV-Vis spectrophotometry (InSpectroX, ColVisTec) as an online tool to gain real-time information about the quality of the product. This is combined with real-time process information in an advanced process control system (PharmaMV, Perceptive Engineering), allowing full supervision and control of the TSE process. Further, by characterizing the dynamic behavior of the TSE, a model predictive controller (MPC) can be employed to ensure the process remains under control when perturbed by external disturbances. Two reactions were studied: a Knoevenagel condensation of barbituric acid and vanillin, and the direct amidation of hydroquinone by ammonium acetate to form N-acetyl-para-aminophenol (APAP), commonly known as paracetamol. Both reactions could be carried out continuously using TSE; nuclear magnetic resonance (NMR) spectroscopy was used to confirm the percentage conversion of starting materials to product. This information was used to construct partial least squares (PLS) calibration models within the PharmaMV development system, which relate the acquired UV-Vis spectrum to the percentage conversion to product. Once this was complete, the model was deployed within the PharmaMV real-time system to carry out automated optimization experiments to maximize the percentage conversion over a set of process parameters in a design-of-experiments (DoE) style methodology. With the optimum set of process parameters established, a series of PRBS (pseudo-random binary sequence) process response tests around the optimum was conducted. The resulting dataset was used to build a statistical model and an associated MPC. The controller maximizes product quality while ensuring the process remains at the optimum even as disturbances, such as raw material variability, are introduced into the system. To summarize, a combination of online spectral monitoring and advanced process control was used to develop a robust system for the optimization and control of two TSE-based mechanosynthetic processes.
Keywords: continuous synthesis, pharmaceutical, spectroscopy, advanced process control
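As a sketch of the PLS calibration step, the following relates synthetic UV-Vis spectra to NMR-style percent-conversion labels with scikit-learn; the spectra, the number of latent variables, and the cross-validation scheme are illustrative assumptions, not the PharmaMV implementation.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in: 120 UV-Vis spectra (256 wavelength channels) with
# NMR-derived percent-conversion labels, mimicking the calibration data above.
rng = np.random.default_rng(0)
conversion = rng.uniform(0, 100, size=120)                    # % conversion (from NMR)
basis = rng.random(256)                                       # product absorption shape
spectra = np.outer(conversion, basis) + rng.normal(0, 5.0, (120, 256))

# PLS compresses the full spectrum into a few latent variables tied to conversion.
pls = PLSRegression(n_components=5)
print(cross_val_score(pls, spectra, conversion, cv=5, scoring="r2").mean())

pls.fit(spectra, conversion)
new_spectrum = spectra[:1]                  # a spectrum acquired online during extrusion
print(pls.predict(new_spectrum).ravel())    # real-time conversion estimate
```

Once deployed, each online spectrum is pushed through the fitted model to give the real-time conversion estimate that the optimizer and MPC act upon.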
Procedia PDF Downloads 178
105 Assessing the Environmental Efficiency of China’s Power System: A Spatial Network Data Envelopment Analysis Approach
Authors: Jianli Jiang, Bai-Chen Xie
Abstract:
The climate issue has aroused global concern. Achieving sustainable development is a good path for countries to mitigate environmental and climatic pressures, although there are many difficulties. The first step towards sustainable development is to evaluate the environmental efficiency of the energy industry with proper methods. The power sector is a major source of CO2, SO2, and NOx emissions. Evaluating the environmental efficiency (EE) of power systems is the premise for alleviating the dire situation of energy and the environment. Data Envelopment Analysis (DEA) has been widely used in efficiency studies. However, measuring the efficiency of a system (be it a nation, region, sector, or business) is a challenging task. Classic DEA treats the decision-making units (DMUs) as independent, which neglects the interaction between DMUs. Ignoring these inter-regional links may result in a systematic bias in the efficiency analysis; for instance, the renewable power generated in a certain region may benefit the adjacent regions, while the SO2 and CO2 emissions act oppositely. This study proposes a spatial network DEA (SNDEA) with a slack measure that can capture the spatial spillover effects of inputs/outputs among DMUs. This approach is used to study the EE of China's power system, which consists of generation, transmission, and distribution departments, using a panel dataset from 2014 to 2020. In the empirical example, the energy and patent inputs, the undesirable CO2 output, and the renewable energy (RE) power variables are tested for a significant spatial spillover effect. Compared with the classic network DEA, the SNDEA result shows an obvious difference, tested by the global Moran's I index. From a dynamic perspective, the EE of the power system experiences a visible surge from 2015, then a sharp downtrend from 2019, following the same trend as the power transmission department. This phenomenon benefits from the market-oriented reform of the Chinese power grid enacted in 2015. The rapid decline in the environmental efficiency of the transmission department in 2020 was mainly due to the COVID-19 epidemic, which seriously hindered economic development. The EE of the power generation department shows an overall declining trend, which is reasonable once RE power is taken into consideration. The installed capacity of RE power in 2020 was 4.40 times that of 2014, while power generation was 3.97 times; in other words, power generation per unit of installed capacity shrank. In addition, the consumption cost of renewable power increases rapidly with the increase in RE power generation. These two aspects cause the EE of the power generation department to show a declining trend. By incorporating the interactions among inputs/outputs into the DEA model, this paper proposes an efficiency evaluation method within the DEA framework that sheds some light on efficiency evaluation in regional studies. Furthermore, the SNDEA model and the spatial DEA concept can be extended to other fields, such as industry, country, and so on.
Keywords: spatial network DEA, environmental efficiency, sustainable development, power system
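A minimal sketch of the global Moran's I statistic used above to test for spatial dependence; the toy weight matrix and efficiency scores are hypothetical.

```python
import numpy as np

def morans_i(x, W):
    """Global Moran's I for values x under a spatial weight matrix W."""
    x = np.asarray(x, dtype=float)
    z = x - x.mean()                       # deviations from the mean
    n = len(x)
    return (n / W.sum()) * (z @ W @ z) / (z @ z)

# Toy example: 4 regions in a line, rook-contiguity weights, row-standardized.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
W /= W.sum(axis=1, keepdims=True)

efficiency = np.array([0.82, 0.79, 0.55, 0.50])   # hypothetical EE scores
print(morans_i(efficiency, W))   # > 0 suggests spatial clustering of efficiency
```

A significantly positive value indicates that efficient regions neighbor efficient regions, which is exactly the kind of spillover structure the SNDEA is built to respect.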
Procedia PDF Downloads 109
104 Plasmonic Biosensor for Early Detection of Environmental DNA (eDNA) Combined with Enzyme Amplification
Authors: Monisha Elumalai, Joana Guerreiro, Joana Carvalho, Marta Prado
Abstract:
The popularity of DNA biosensors has been increasing over the past few years. Traditional analytical techniques tend to require complex steps and expensive equipment; DNA biosensors, however, have the advantage of being simple, fast, and economical. Additionally, the combination of DNA biosensors with nanomaterials offers the opportunity to improve the selectivity, sensitivity, and overall performance of the devices. DNA biosensors are based on oligonucleotides as sensing elements. These oligonucleotides are highly specific to complementary DNA sequences, resulting in the hybridization of the strands. DNA biosensors are not only an advantage in the clinical field but are also applicable in numerous research areas, such as food analysis and environmental control. Zebra mussels (ZM), Dreissena polymorpha, are an invasive species responsible for enormous negative impacts on the environment and ecosystems. Generally, ZM are detected when adults or macroscopic larvae are observed; however, at that stage it is too late to avoid the harmful effects. Therefore, there is a need to develop an analytical tool for the early detection of ZM. Here, we present a portable plasmonic biosensor for the detection of environmental DNA (eDNA) released to the environment by this invasive species. The plasmonic DNA biosensor combines gold nanoparticles as transducer elements, due to their excellent optical properties and high sensitivity. The detection strategy is based on the immobilization of a short base-pair DNA sequence on the nanoparticle surface, followed by specific hybridization in the presence of a complementary target DNA. The hybridization events are tracked by the optical response provided by the nanospheres and their surrounding environment. The DNA sequences (synthetic target and probes) used to detect zebra mussel were designed using Geneious software in order to maximize specificity. Moreover, to increase the optical response, enzymatic amplification of the DNA may be used. The gold nanospheres were synthesized and characterized by UV-visible spectrophotometry and transmission electron microscopy (TEM). The obtained nanospheres present a maximum localized surface plasmon resonance (LSPR) peak position of around 519 nm and a diameter of 17 nm. The DNA probes, modified with a sulfur group at one end of the sequence, were then loaded onto the gold nanospheres at different ionic strengths and DNA probe concentrations. The optimal DNA probe loading will be selected based on the stability of the optical signal, followed by the hybridization study. The hybridization process leads to either nanoparticle dispersion or aggregation, depending on the presence or absence of the target DNA. Finally, this detection system will be integrated into an optical sensing platform. Considering that the developed device will be used in the field, it should fulfill the inexpensive and portability requirements. Sensing devices based on specific DNA detection hold great potential and can be exploited for sensing applications in loco.
Keywords: ZM DNA, DNA probes, nicking enzyme, gold nanoparticles
Procedia PDF Downloads 245
103 Effective Health Promotion Interventions Help Young Children to Maximize Their Future Well-Being by Early Childhood Development
Authors: Nadeesha Sewwandi, Dilini Shashikala, R. Kanapathy, S. Viyasan, R. M. S. Kumara, Duminda Guruge
Abstract:
Early childhood development is important to the emotional, social, and physical development of young children, and it has a direct effect on their overall development and on the adults they become. Play is very important to optimal child development, including skill development, social development, imagination, and creativity, and it fulfills a baby's inborn need to learn. A health promotion approach therefore empowers people regarding early childhood development. The play area is a new concept, and this study focuses on how play areas help early childhood development in rural villages in Sri Lanka. The study was conducted with a children's society in a rural village called Welankulama in Sri Lanka. A survey was conducted with the children's society about the emotional, social, and physical development of young children (under age eight) in the village, using questionnaires. It showed that most children under eight years of age in this village had a poor level of emotional, social, and physical development. The children's society then sought determinants of this problem; among them, they prioritized determinants such as parental interactions, the learning environment, and social interaction, and addressed them using an innovative concept called the play area. In this village, there is a common place serving as a play area under a big tamarind tree. It consists of a playhouse, innovative toys, a mobile library, etc. Twice a week, children, parents, and grandparents gather at this place. Collective feeding takes place there once a week, conducted by several mothers' groups in the village. Grandparents mostly teach handicrafts, and it is a very good place for them to share their experiences with all. Healthy competitions were conducted through play to motivate the children. A happy calendar (recording the children's mood) was marked by the children before and after coming to the play area. In terms of results, qualitative changes held a significant place in this study. By learning about colors and counting through play, children developed thinking and reasoning skills. Children widened their imagination by means of storytelling. We observed good development of fine and gross motor skills in two differently abled children in this village. Children learned to empathize with other people, and learned sharing, collaboration, teamwork, and the following of rules. Through role playing, children also gained knowledge about fairness, obtained insight into the right ways of displaying emotions such as stress, fear, anger, and frustration, and developed knowledge of how to manage their feelings. The reading and writing ability of the children improved by 83% because of the mobile library. The weight of children in the village increased by 81%. Happiness increased by 76% among children in the society. Play is very important for learning during the early childhood period. Health promotion interventions play a major role in early childhood development; they help children adjust to the school setting and enhance their learning readiness, learning behaviors, and problem-solving skills.
Keywords: early childhood development, health promotion approach, play and learning, working with children
Procedia PDF Downloads 139
102 Empirical Decomposition of Time Series of Power Consumption
Authors: Noura Al Akkari, Aurélie Foucquier, Sylvain Lespinats
Abstract:
Load monitoring is a management process for energy consumption aimed at energy savings and energy efficiency. Non-Intrusive Load Monitoring (NILM) is one method of load monitoring used for disaggregation purposes. NILM is a technique for identifying individual appliances based on the analysis of whole-residence data retrieved from the main power meter of the house. Our NILM framework starts with data acquisition, followed by data preprocessing, then event detection and feature extraction, and finally general appliance modeling and identification. The event detection stage is a core component of the NILM process, since event detection techniques lead to the extraction of appliance features. Appliance features are required for the accurate identification of household devices. In this research work, we aim to develop a new event detection methodology with accurate load disaggregation to extract appliance features. The extracted time-domain features are used to tune general appliance models for the appliance identification and classification steps. We use unsupervised algorithms such as Dynamic Time Warping (DTW). The proposed method relies on detecting the areas of operation of each residential appliance based on power demand, and then detecting the times at which each selected appliance changes state. In order to fit the capabilities of existing smart meters in practice, we work on low-sampling-rate data with a frequency of 1/60 Hz. The data is simulated with the Load Profile Generator software (LPG), which had not previously been considered for NILM purposes in the literature. LPG is a numerical software tool that uses behaviour simulation of the people inside a house to generate residential energy consumption data. The proposed event detection method targets low-consumption loads that are difficult to detect. It also facilitates the extraction of the specific features used for general appliance modeling. In addition, the identification process includes unsupervised techniques such as DTW. To the best of our knowledge, few unsupervised techniques have been employed with low-sampling-rate data, in comparison to the many supervised techniques used for such cases. We extract the power interval within which the selected appliance operates, along with a time vector of the values delimiting the state transitions of the appliance. Appliance signatures are then formed from the extracted power, geometrical, and statistical features. Afterwards, these signatures are used to tune general model types for appliance identification using unsupervised algorithms. The method is evaluated using both data simulated with LPG and the real-world Reference Energy Disaggregation Dataset (REDD). For that, we compute performance metrics based on the confusion matrix, considering accuracy, precision, recall, and error rate. The performance of our methodology is then compared with other detection techniques previously reported in the literature, such as detection techniques based on statistical variations and abrupt changes (Variance Sliding Window and Cumulative Sum).
Keywords: general appliance model, non-intrusive load monitoring, event detection, unsupervised techniques
Procedia PDF Downloads 82
101 Timely Palliative Screening and Interventions in Oncology
Authors: Jaci Marie Mastrandrea, Rosario Haro
Abstract:
Background: The National Comprehensive Cancer Network (NCCN) recommends that healthcare institutions have established processes for integrating palliative care (PC) into cancer treatment and that all cancer patients be screened for PC needs upon initial diagnosis as well as throughout the entire continuum of care (National Comprehensive Cancer Network, 2021). Early PC screening and intervention is directly associated with improved patient outcomes. The Sky Lakes Cancer Treatment Center (SLCTC) is an institution that has access to PC services yet did not have protocols in place for identifying patients with palliative needs or a standardized referral process. The aim of this quality improvement project was to improve early access to PC services by establishing a standardized screening and referral process for outpatient oncology patients. Method: The sample population included all adult patients with an oncology diagnosis who presented to the SLCTC for treatment during the project timeline. The "Palliative and Supportive Needs Assessment" (PSNA) screening tool was developed from validated, evidence-based PC referral criteria. The tool was initially implemented using paper forms, and data were collected over a period of eight weeks. Patients were screened by nurses on the SLCTC oncology treatment team; nurses responsible for screening received an educational in-service prior to implementation. Patients with a PSNA score of three or higher received an educational handout and education about PC and symptom management. A score of five or higher indicates that PC referral is strongly recommended, and the patient's EHR is flagged for the oncology provider to review orders for PC referral. The PSNA tool was approved by Sky Lakes administration for full integration into Epic-Beacon. The project lead collaborated with the Sky Lakes information systems team and representatives from Epic on the tool's aesthetics and functionality within the Epic system. SLCTC nurses and physicians were educated on how to document the PSNA within Epic and where to view results. Results: Prior to the implementation of the PSNA screening tool, the SLCTC had zero referrals to PC in the previous year, excluding referrals to hospice. Data were collected from the completed screening assessments of 100 patients under active treatment at the SLCTC. Seventy-three percent of patients met the criteria for PC referral with a score greater than or equal to three. Of those patients who met the referral criteria, 53.4% (39 patients) were referred for a palliative and supportive care consultation. Patients who were not referred to PC upon meeting the criteria were flagged in Epic for re-screening within one to three months. Patients with lung cancer, chronic hematologic malignancies, breast cancer, and gastrointestinal malignancy most frequently met the criteria for PC referral and scored highest overall on the 0-12 scale. Conclusion: The implementation of a standardized PC screening tool at the SLCTC significantly increased awareness of PC needs among cancer patients in the outpatient setting. Additionally, data derived from this quality improvement project support the national recommendation for PC to be an integral component of cancer treatment across the entire continuum of care.
Keywords: oncology, palliative and supportive care, symptom management, outpatient oncology, palliative screening tool
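A minimal sketch of the PSNA triage thresholds described above (a score of 3 or higher triggers education, 5 or higher flags the chart for referral review); the function name and action labels are illustrative, not the actual Epic build.

```python
# Illustrative triage logic following the thresholds stated in the abstract.
def psna_actions(score: int) -> list[str]:
    actions = []
    if score >= 3:
        actions.append("provide palliative-care education and handout")
    if score >= 5:
        actions.append("flag EHR for provider to review palliative referral order")
    if not actions:
        actions.append("rescreen at a later visit")
    return actions

for s in (1, 3, 7):
    print(s, "->", psna_actions(s))
```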
Procedia PDF Downloads 112
100 Comparison of Machine Learning-Based Models for Predicting Streptococcus pyogenes Virulence Factors and Antimicrobial Resistance
Authors: Fernanda Bravo Cornejo, Camilo Cerda Sarabia, Belén Díaz Díaz, Diego Santibañez Oyarce, Esteban Gómez Terán, Hugo Osses Prado, Raúl Caulier-Cisterna, Jorge Vergara-Quezada, Ana Moya-Beltrán
Abstract:
Streptococcus pyogenes is a gram-positive bacterium involved in a wide range of diseases and is a major human-specific bacterial pathogen. In Chile, the 'Ministerio de Salud' (Ministry of Health) declared an alert this year due to the increase in strains throughout the year. This increase can be attributed to a multitude of factors, including antimicrobial resistance (AMR) and virulence factors (VF). Understanding these VF and AMR is crucial for developing effective strategies and improving public health responses. Moreover, experimental identification and characterization of these pathogenic mechanisms are labor-intensive and time-consuming. Therefore, new computational methods are required to provide robust techniques for accelerating this identification. Advances in machine learning (ML) algorithms represent an opportunity to refine and accelerate the discovery of VF associated with Streptococcus pyogenes. In this work, we evaluate the accuracy of various machine learning models in predicting the virulence factors and antimicrobial resistance of Streptococcus pyogenes, with the objective of providing new methods for identifying the pathogenic mechanisms of this organism. Our comprehensive approach involved the download of 32,798 GenBank files of S. pyogenes from the NCBI dataset, coupled with the incorporation of data from the Virulence Factor Database (VFDB) and the Comprehensive Antibiotic Resistance Database (CARD), which contains AMR gene sequences and resistance profiles. These datasets provided labeled examples of both virulent and non-virulent genes, enabling a robust foundation for feature extraction and model training. We employed preprocessing, characterization, and feature extraction techniques on the primary nucleotide/amino acid sequences and selected the optimal ones for model training. The feature set was constructed using sequence-based descriptors (e.g., k-mers and one-hot encoding) and functional annotations based on database prediction. The ML models compared include logistic regression, decision trees, support vector machines, and neural networks, among others. The results of this work show some differences in accuracy between the algorithms; these differences allow us to identify aspects that represent unique opportunities for more precise and efficient characterization and identification of VF and AMR. This comparative analysis underscores the value of integrating machine learning techniques in predicting S. pyogenes virulence and AMR, offering potential pathways to more effective diagnostic and therapeutic strategies. Future work will focus on incorporating additional omics data, such as transcriptomics, and exploring advanced deep learning models to further enhance predictive capabilities.
Keywords: antibiotic resistance, Streptococcus pyogenes, virulence factors, machine learning
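As a sketch of the sequence-based feature extraction mentioned above, the following builds normalized k-mer frequency vectors and fits one of the compared model families (logistic regression); the sequences and labels are toy examples, not VFDB/CARD data.

```python
from itertools import product
import numpy as np
from sklearn.linear_model import LogisticRegression

def kmer_counts(seq: str, k: int = 3) -> np.ndarray:
    """Count-vectorize a DNA sequence over all 4^k possible k-mers."""
    index = {"".join(p): i for i, p in enumerate(product("ACGT", repeat=k))}
    v = np.zeros(len(index))
    for i in range(len(seq) - k + 1):
        kmer = seq[i:i + k]
        if kmer in index:            # skip ambiguous bases such as 'N'
            v[index[kmer]] += 1
    return v / max(v.sum(), 1)       # normalize counts to frequencies

# Toy labeled example (illustrative sequences; 1 = virulence gene, 0 = not).
X = np.stack([kmer_counts("ATGGCGTACGTTAGC"), kmer_counts("ATGCCCCCCGGGGAA")])
y = np.array([1, 0])

clf = LogisticRegression().fit(X, y)
print(clf.predict(X))
```

The same fixed-length k-mer vectors can be fed unchanged to the other model families compared in the study, which is what makes the head-to-head accuracy comparison straightforward.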
Procedia PDF Downloads 31
99 Vehicle Timing Motion Detection Based on Multi-Dimensional Dynamic Detection Network
Authors: Jia Li, Xing Wei, Yuchen Hong, Yang Lu
Abstract:
Detecting vehicle behavior has always been a focus of intelligent transportation, but with the explosive growth in the number of vehicles and the complexity of the road environment, the vehicle behavior videos captured by traditional surveillance have become insufficient for the study of vehicle behavior. The traditional method of manually labeling vehicle behavior is too time-consuming and labor-intensive, while existing object detection and tracking algorithms have poor practicability and low behavior localization rates. This paper proposes a vehicle behavior detection algorithm based on a dual-stream convolutional network and a multi-dimensional video dynamic detection network. In the videos, the straight-line behavior of a vehicle is treated as background behavior; changing lanes, turning, and turning around are set as target behaviors. The purpose of this model is to automatically mark the target behaviors of vehicles in untrimmed videos. First, the target behavior proposals in the long video are extracted through the dual-stream convolutional network. The model uses the dual-stream convolutional network to generate a one-dimensional action score waveform and then extracts segments with scores above a given threshold M as preliminary vehicle behavior proposals. Second, the preliminary proposals are pruned and identified using the multi-dimensional video dynamic detection network. Drawing on hierarchical reinforcement learning, the multi-dimensional network includes a Timer module and a Spacer module, where the Timer module mines temporal information in the video stream and the Spacer module extracts spatial information in the video frames. The Timer and Spacer modules are implemented with Long Short-Term Memory (LSTM) units and start from an all-zero hidden state. The Timer module uses the Transformer mechanism to extract timing information from the video stream and extracts features by linear mapping and other methods. Finally, the model fuses the temporal and spatial information and obtains the location and category of the behavior through the softmax layer. This paper uses recall and precision to measure the performance of the model. Extensive experiments show that, on the dataset of this paper, the proposed model has obvious advantages over the existing state-of-the-art behavior detection algorithms. When the temporal Intersection over Union (TIoU) threshold is 0.5, the mean average precision (mAP) reaches 36.3% (the mAP of the baselines is 21.5%). In summary, this paper proposes a vehicle behavior detection model based on a multi-dimensional dynamic detection network, introducing spatial and temporal information to extract vehicle behaviors in long videos. Experiments show that the proposed algorithm is advanced and accurate for vehicle timing behavior detection. In the future, the focus will be on simultaneously detecting the timing behavior of multiple vehicles in complex traffic scenes (such as a busy street) while ensuring accuracy.
Keywords: vehicle behavior detection, convolutional neural network, long short-term memory, deep learning
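The thresholding of the one-dimensional action score waveform into preliminary proposals can be sketched as follows; the score values and threshold are illustrative stand-ins for the dual-stream network's output.

```python
import numpy as np

def extract_proposals(scores: np.ndarray, threshold: float):
    """Return (start, end) frame-index pairs where the score stays above threshold."""
    above = scores > threshold
    proposals, start = [], None
    for t, flag in enumerate(above):
        if flag and start is None:
            start = t                          # segment opens
        elif not flag and start is not None:
            proposals.append((start, t - 1))   # segment closes
            start = None
    if start is not None:
        proposals.append((start, len(scores) - 1))
    return proposals

# Toy 1-D action-score waveform over video frames.
scores = np.array([0.1, 0.2, 0.7, 0.8, 0.9, 0.3, 0.1, 0.6, 0.7, 0.2])
print(extract_proposals(scores, threshold=0.5))   # -> [(2, 4), (7, 8)]
```

Each returned segment is then passed to the multi-dimensional network for pruning and classification, so the threshold M trades recall in this first stage against the pruning load in the second.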
Procedia PDF Downloads 130
98 Categorical Metadata Encoding Schemes for Arteriovenous Fistula Blood Flow Sound Classification: Scaling Numerical Representations Leads to Improved Performance
Authors: George Zhou, Yunchan Chen, Candace Chien
Abstract:
Kidney replacement therapy is the current standard of care for end-stage renal diseases. In-center or home hemodialysis remains an integral component of the therapeutic regimen. Arteriovenous fistulas (AVF) make up the vascular circuit through which blood is filtered and returned. Naturally, AVF patency determines whether adequate clearance and filtration can be achieved and directly influences clinical outcomes. Our aim was to build a deep learning model for automated AVF stenosis screening based on the sound of blood flow through the AVF. A total of 311 patients with AVF were enrolled in this study. Blood flow sounds were collected using a digital stethoscope. For each patient, blood flow sounds were collected at 6 different locations along the patient’s AVF: artery, anastomosis, distal vein, middle vein, proximal vein, and venous arch. A total of 1866 sounds were collected. The blood flow sounds are labeled as “patent” (normal) or “stenotic” (abnormal), with labels validated by concurrent ultrasound. Our dataset included 1527 “patent” and 339 “stenotic” sounds. We show that blood flow sounds vary significantly along the AVF. For example, the blood flow sound is loudest at the anastomosis site and softest at the cephalic arch. Contextualizing the sound with location metadata significantly improves classification performance. How to encode and incorporate categorical metadata is an active area of research. Herein, we study ordinal (i.e., integer) encoding schemes. The numerical representation is concatenated to the flattened feature vector. We train a vision transformer (ViT) on spectrogram image representations of the sound and demonstrate that using scalar multiples of our integer encodings improves classification performance. Models are evaluated using a 10-fold cross-validation procedure. The baseline performance of our ViT without any location metadata achieves an AuROC and AuPRC of 0.68 ± 0.05 and 0.28 ± 0.09, respectively. Using the encodings Artery: 0; Arch: 1; Proximal: 2; Middle: 3; Distal: 4; Anastomosis: 5, the ViT achieves an AuROC and AuPRC of 0.69 ± 0.06 and 0.30 ± 0.10, respectively. Using the encodings Artery: 0; Arch: 10; Proximal: 20; Middle: 30; Distal: 40; Anastomosis: 50, the ViT achieves an AuROC and AuPRC of 0.74 ± 0.06 and 0.38 ± 0.10, respectively. Using the encodings Artery: 0; Arch: 100; Proximal: 200; Middle: 300; Distal: 400; Anastomosis: 500, the ViT achieves an AuROC and AuPRC of 0.78 ± 0.06 and 0.43 ± 0.11, respectively. Interestingly, we see that using increasing scalar multiples of our integer encoding scheme (i.e., encoding “venous arch” as 1, 10, or 100) results in progressively improved performance. In theory, the integer values do not matter since we are optimizing the same loss function; the model can learn to increase or decrease the weights associated with location encodings and converge on the same solution. However, in the setting of limited data and computation resources, increasing the importance at initialization either leads to faster convergence or helps the model escape a local minimum.Keywords: arteriovenous fistula, blood flow sounds, metadata encoding, deep learning
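A minimal sketch, in PyTorch, of the encoding scheme described above: a scaled ordinal location code concatenated to the flattened feature vector before the classification head. The module and dimension names are illustrative assumptions, not the authors' implementation:

```python
import torch
import torch.nn as nn

LOCATION_CODES = {"artery": 0, "arch": 1, "proximal": 2,
                  "middle": 3, "distal": 4, "anastomosis": 5}

class LocationConditionedHead(nn.Module):
    """Classification head that appends a scaled ordinal location code
    to the flattened ViT feature vector (names are hypothetical)."""
    def __init__(self, feat_dim, scale=100.0):
        super().__init__()
        self.scale = scale
        self.fc = nn.Linear(feat_dim + 1, 2)  # patent vs. stenotic

    def forward(self, features, location_ids):
        code = location_ids.float().unsqueeze(1) * self.scale
        return self.fc(torch.cat([features, code], dim=1))

# Toy usage: a batch of 4 spectrogram feature vectors from a ViT backbone.
features = torch.randn(4, 768)
locations = torch.tensor([LOCATION_CODES["artery"], LOCATION_CODES["arch"],
                          LOCATION_CODES["distal"], LOCATION_CODES["anastomosis"]])
head = LocationConditionedHead(feat_dim=768, scale=100.0)
print(head(features, locations).shape)  # torch.Size([4, 2])
```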
Procedia PDF Downloads 88
97 A Geographic Information System Mapping Method for Creating Improved Satellite Solar Radiation Dataset Over Qatar
Authors: Sachin Jain, Daniel Perez-Astudillo, Dunia A. Bachour, Antonio P. Sanfilippo
Abstract:
The future of solar energy in Qatar is evolving steadily. Hence, high-quality spatial solar radiation data is of the utmost importance for any planning and commissioning of solar technology. Generally, two types of solar radiation data are available: satellite data and ground observations. Satellite solar radiation data is developed by physical and statistical models. Ground data is collected by solar radiation measurement stations. The ground data is of high quality; however, it is limited to distributed point locations, with a high cost of installation and maintenance for the ground stations. On the other hand, satellite solar radiation data is continuous and available throughout geographical locations, but it is relatively less accurate than ground data. To utilize the advantages of both, a product has been developed here which provides spatial continuity and higher accuracy than either dataset alone. The popular satellite database, the National Solar Radiation Database, NSRDB (PSM V3 model, spatial resolution: 4 km), is chosen here for merging with ground-measured solar radiation measurements in Qatar. The spatial distribution of ground solar radiation measurement stations is comprehensive in Qatar, with a network of 13 ground stations. The monthly average of the daily total Global Horizontal Irradiation (GHI) component from ground and satellite data is used for error analysis. Normalized root mean square error (NRMSE) values of 3.31%, 6.53%, and 6.63% for October, November, and December 2019, respectively, were observed when comparing in-situ and NSRDB data. The method is based on the Empirical Bayesian Kriging Regression Prediction model available in ArcGIS, ESRI. The workflow of the algorithm is based on the combination of regression and kriging methods. A regression model (OLS, ordinary least squares) is fitted between the ground and NSRDB data points. A semi-variogram model is fitted to the experimental semi-variogram obtained from the residuals. The kriging residuals obtained after fitting the semi-variogram model were added to the NSRDB predicted values obtained from the regression model to obtain the final predicted values. The NRMSE values obtained after merging are 1.84%, 1.28%, and 1.81% for October, November, and December 2019, respectively. One more explanatory variable, ground elevation, has been incorporated in the regression and kriging methods to reduce the error and to provide higher spatial resolution (30 m). The final GHI maps have been created after merging, and NRMSE values of 1.24%, 1.28%, and 1.28% have been observed for October, November, and December 2019, respectively. The proposed merging method has proven to be highly accurate. An additional method is also proposed here to generate calibrated maps by using the regression and kriging models, and further to use the calibrated model to generate solar radiation maps from the explanatory variables only, when not enough historical ground data is available for long-term analysis. The NRMSE values obtained after comparison of the calibrated maps with ground data are 5.60% and 5.31% for November and December 2019, respectively.Keywords: global horizontal irradiation, GIS, empirical bayesian kriging regression prediction, NSRDB
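A minimal sketch of the NRMSE comparison described above; the GHI values are invented, and normalization by the mean of the ground observations is one common convention (the abstract does not state which is used):

```python
import numpy as np

def nrmse(ground, satellite):
    """Normalized RMSE (%) between ground-measured and satellite GHI,
    normalized here by the mean of the ground observations."""
    ground, satellite = np.asarray(ground), np.asarray(satellite)
    rmse = np.sqrt(np.mean((satellite - ground) ** 2))
    return 100.0 * rmse / ground.mean()

# Hypothetical monthly-average daily GHI values (kWh/m^2) at 5 stations.
ground = [5.1, 5.3, 4.9, 5.0, 5.2]
nsrdb  = [5.3, 5.1, 5.1, 4.8, 5.4]
print(f"NRMSE: {nrmse(ground, nsrdb):.2f}%")
```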
Procedia PDF Downloads 89
96 Mondoc: Informal Lightweight Ontology for Faceted Semantic Classification of Hypernymy
Authors: M. Regina Carreira-Lopez
Abstract:
Lightweight ontologies seek to make concrete the union relationships between a parent node and a secondary node, also called a "child node". This logic relation (L) can be formally defined as a triple ontological relation (LO) equivalent to LO in ⟨LN, LE, LC⟩, where LN represents a finite set of nodes (N); LE is a set of entities (E), each of which represents a relationship between nodes to form a rooted tree of ⟨LN, LE⟩; and LC is a finite set of concepts (C), encoded in a formal language (FL). Mondoc enables more refined searches on semantic and classified facets for retrieving specialized knowledge about Atlantic migrations, from the Declaration of Independence of the United States of America (1776) to the end of the Spanish Civil War (1939). The model aims to increase documentary relevance by applying an inverse frequency of co-occurrent hypernymy phenomena to a concrete dataset of textual corpora, with the RMySQL package. Mondoc provides archival utilities implementing SQL programming code and allows data export to XML schemas, achieving semantic and faceted analysis of speech by analyzing keywords in context (KWIC). The methodology applies random and unrestricted sampling techniques with RMySQL to verify the resonance phenomena of inverse documentary relevance between the number of co-occurrences of the same term (t) in more than two documents of a set of texts (D). Secondly, the research also evidences that co-associations between (t) and its corresponding synonyms and antonyms (synsets) are also inverse. The results from grouping facets or polysemic words with synsets in more than two textual corpora within their syntagmatic context (nouns, verbs, adjectives, etc.) show how to proceed with semantic indexing of hypernymy phenomena for subject-heading lists and for authority lists for documentary and archival purposes. Mondoc contributes to the development of web directories and seems to achieve a proper and more selective search of e-documents (classification ontology). It can also foster online catalog production for semantic authorities, or concepts, through XML schemas, because its applications could be used for implementing data models, with a prior adaptation of the base ontology to structured meta-languages such as OWL and RDF (descriptive ontology). Mondoc serves the classification of concepts and applies a semantic indexing approach to facets. It enables information retrieval, as well as quantitative and qualitative data interpretation. The model reproduces a tuple ⟨LN, LE, LT, LCF, BKF⟩ where LN is a set of nodes that connect with other nodes to form a rooted tree in ⟨LN, LE⟩, LT specifies a set of terms, and LCF acts as a finite set of concepts, encoded in a formal language, L. Mondoc resolves only partial problems of linguistic ambiguity (in the case of synonymy and antonymy), but neither the pragmatic dimension of natural language nor the cognitive perspective is addressed. To achieve this goal, forthcoming programming developments should target oriented meta-languages with structured documents in XML.Keywords: hypernymy, information retrieval, lightweight ontology, resonance
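A minimal sketch of keyword-in-context (KWIC) extraction, one of the analyses Mondoc performs; the sample text and window size are illustrative only, and this is not the authors' RMySQL/SQL implementation:

```python
import re

def kwic(text, term, window=4):
    """Return keyword-in-context snippets: `window` tokens on each
    side of every occurrence of `term` (case-insensitive)."""
    tokens = re.findall(r"\w+", text.lower())
    hits = []
    for i, tok in enumerate(tokens):
        if tok == term.lower():
            left = tokens[max(0, i - window):i]
            right = tokens[i + 1:i + 1 + window]
            hits.append(" ".join(left) + " [" + tok + "] " + " ".join(right))
    return hits

doc = ("Migrants crossed the Atlantic after the war; Atlantic routes "
       "shaped the migrations recorded in the archives.")
for snippet in kwic(doc, "atlantic", window=3):
    print(snippet)
```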
Procedia PDF Downloads 125
95 Modern Detection and Description Methods for Natural Plants Recognition
Authors: Masoud Fathi Kazerouni, Jens Schlemper, Klaus-Dieter Kuhnert
Abstract:
'Green planet' is one of the Earth's names; the Earth is known as a terrestrial planet and, in another scientific interpretation, can be called the fifth largest planet of the solar system. Plants do not have a constant and steady distribution all around the world, and even plant species' variations are not the same within one specific region. The presence of plants is not limited to one field like botany; they appear in different fields such as literature and mythology, and they hold useful and inestimable historical records. No one can imagine the world without oxygen, which is produced mostly by plants. Their influence becomes even more manifest given that no other living species can exist on Earth without plants, as they also form the basic food staples. Regulation of the water cycle and oxygen production are other roles of plants, roles that affect environment and climate. Plants are the main components of agricultural activities, from which many countries benefit. Therefore, plants have an impact on the political and economic situations and the future of countries. Due to the importance of plants and their roles, the study of plants is essential in various fields, and consideration of their different applications leads to a focus on their details as well. Automatic recognition of plants is a novel field that can contribute to other research and future studies. Moreover, plants can survive in different places and regions by means of adaptations; adaptations are thus special factors that help them in hard life situations. Weather conditions are one of the parameters that affect plant life and their existence in an area. Recognition of plants under different weather conditions is a new window of research in the field. Only natural images make it possible to consider weather conditions as new factors, so the result will be a generalized and useful system. In order to have a general system, the distance from the camera to the plants is considered as another factor. The other factor considered is the change of light intensity in the environment, as it changes during the day. Adding these factors creates a substantial challenge in building an accurate and robust system. The development of an efficient plant recognition system is therefore essential and effective. One important component of a plant is the leaf, which can be used to implement automatic systems for plant recognition without any human interaction. Due to the nature of the images used, an investigation of plant characteristics is carried out, and leaves are the first characteristics selected as reliable parts. Four different plant species are specified with the goal of classifying them with an accurate system. The current paper is devoted to the principal directions of the proposed methods and implemented system, the image dataset, and the results. The procedure of the algorithm and classification is explained in detail. The first steps, feature detection and description of visual information, are performed using the Scale-Invariant Feature Transform (SIFT), HARRIS-SIFT, and FAST-SIFT methods. The accuracy of the implemented methods is computed. In addition to this comparison, the robustness and efficiency of the results under different conditions are investigated and explained.Keywords: SIFT combination, feature extraction, feature detection, natural images, natural plant recognition, HARRIS-SIFT, FAST-SIFT
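A minimal sketch of two of the named detector/descriptor combinations using OpenCV (plain SIFT, and FAST keypoints paired with SIFT descriptors); the image path is a placeholder, and this is not the authors' implementation:

```python
import cv2

# Load a leaf image in grayscale (path is a placeholder).
img = cv2.imread("leaf_sample.jpg", cv2.IMREAD_GRAYSCALE)
assert img is not None, "replace 'leaf_sample.jpg' with a real image path"

# Plain SIFT: keypoint detection and description in one step.
sift = cv2.SIFT_create()
kps_sift, desc_sift = sift.detectAndCompute(img, None)

# FAST-SIFT: FAST supplies the keypoints, SIFT computes the descriptors.
fast = cv2.FastFeatureDetector_create()
kps_fast = fast.detect(img, None)
kps_fast, desc_fast = sift.compute(img, kps_fast)

print(f"SIFT keypoints: {len(kps_sift)}, FAST-SIFT keypoints: {len(kps_fast)}")
```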
Procedia PDF Downloads 276
94 Experimental Research of Canine Mandibular Defect Construction with the Controlled Meshy Titanium Alloy Scaffold Fabricated by Electron Beam Melting Combined with BMSCs-Encapsulating Chitosan Hydrogel
Authors: Wang Hong, Liu Chang Kui, Zhao Bing Jing, Hu Min
Abstract:
Objective: We observed the repair effect on canine mandibular defects of a meshy Ti6Al4V scaffold fabricated by electron beam melting (EBM) combined with bone marrow mesenchymal stem cells (BMMSCs) encapsulated in chitosan hydrogel. Methods: Meshy titanium scaffolds were prepared by EBM of commercial Ti6Al4V powder. The length of the scaffolds was 24 mm, the width was 5 mm, and the height was 8 mm. The pore size and porosity were evaluated by scanning electron microscopy (SEM). Chitosan/Bio-Oss hydrogel was prepared from chitosan, β-sodium glycerophosphate and Bio-Oss powder. BMMSCs were harvested from canine iliac crests, seeded in the titanium scaffolds and encapsulated in the Chitosan/Bio-Oss hydrogel. The viability of the BMMSCs was evaluated by Cell Counting Kit-8 (CCK-8). The osteogenic differentiation ability was evaluated by alkaline phosphatase (ALP) activity and gene expression of OC, OPN and Col I. Combination was performed by injecting the BMMSCs/Chitosan/Bio-Oss hydrogel into the meshy Ti6Al4V scaffolds, where it solidified. Box-shaped bone defects, 24 mm long, were made at the mid-portion of the mandible of adult beagles. The defects were randomly filled with BMMSCs/Chitosan/Bio-Oss + titanium, Chitosan/Bio-Oss + titanium, or titanium alone, with autogenous iliac crest graft as the control group in 3 beagles. Radionuclide bone imaging was used to monitor new bone tissue at 2, 4, 8 and 12 weeks after surgery. CT examination was performed on the day of surgery and at 4, 12 and 24 weeks after surgery. The animals were sacrificed at 4, 12 and 24 weeks after surgery, and bone formation was evaluated by histology and micro-CT. Results: The pores of the scaffolds were interconnected; the pore size was about 1 mm and the average porosity was about 76%. The pore size of the hydrogel was 50-200 μm and its average porosity was approximately 90%. The hydrogel solidified at 37 ℃ within 10 minutes. The viability and the osteogenic differentiation ability of the BMMSCs were not affected by the titanium scaffolds or the hydrogel. Radionuclide bone imaging showed an increasing tendency of revascularization and bone regeneration in all groups at 2, 4 and 8 weeks after the operation, with no further changes at 12 weeks. The tendency was more obvious in the BMMSCs/Chitosan/Bio-Oss + titanium group and the autogenous group. CT, micro-CT and histology showed that new bone formation increased over time. More new bone was regenerated in the BMMSCs/Chitosan/Bio-Oss + titanium group and the autogenous group than in the other two groups. At 24 weeks, the autogenous group achieved bone union. The BMMSCs/Chitosan/Bio-Oss group showed extensive new bone formation around the scaffolds and more new bone inside the central pores of the scaffolds than the Chitosan/Bio-Oss + titanium group and the titanium group; the difference was significant. Conclusion: The titanium scaffolds fabricated by EBM had a controlled porous structure, good bone conduction and biocompatibility. The Chitosan/Bio-Oss hydrogel was injectable, plastic, thermosensitive and biocompatible. The meshy Ti6Al4V scaffold produced by EBM combined with BMMSCs encapsulated in chitosan hydrogel had a good capacity for mandibular bone defect repair.Keywords: mandibular reconstruction, tissue engineering, electron beam melting, titanium alloy
Procedia PDF Downloads 445
93 Gastroprotective Effect of Copper Complex On Indomethacin-Induced Gastric Ulcer In Rats. Histological and Immunohistochemical Study
Authors: Heba M. Saad Eldien, Ola Abdel-Tawab Hussein, Ahmed Yassein Nassar
Abstract:
Background: Indomethacin is a non-steroidal anti-inflammatory drug. Indomethacin induces injury to the gastrointestinal mucosa in experimental animals and humans, and its use is associated with a significant risk of hemorrhage, erosions and perforation of both gastric and intestinal ulcers. The anti-inflammatory action of copper complexes is an important component of their anti-ulcer effect, achieved through their intermediary role as a transport form of copper that allows activation of several copper-dependent enzymes. Therefore, several copper complexes have been synthesized and investigated as promising alternative anti-ulcer therapies. Aim of the work: The purpose of this study was to evaluate a copper chelating complex consisting of egg albumin and copper, as one of the copper peptides that can be used as an anti-inflammatory agent, effective in ameliorating the hazards of indomethacin on the histological structure of the fundus of the stomach, which could be added to raise the efficacy of the currently used simple and cheap gastric anti-inflammatory drug mucogel. Material & methods: This study was carried out on 40 adult male albino rats, divided equally into 4 groups: Group I (control group) received distilled water; Group II (indomethacin-treated group) received indomethacin (25 mg/kg body weight, oral intubation) once; Group III (mucogel-treated group) received 2 mL/rat once daily by oral intubation; Group IV (copper complex group) received 1 mL/rat of a preparation in which 30 gm of copper-albumin complex was mixed uniformly with mucogel to 100 mL. Treatment started six hours after induction of ulcers and continued until the 3rd day. The animals were sacrificed, and tissue was processed for light microscopy, transmission electron microscopy (TEM) and immunostaining for inducible nitric oxide synthase (iNOS). Results: The fundic mucosa of Group II showed exfoliation of the epithelial cells lining the glands, discontinuity of the surface epithelial cells (ulcer formation), vacuolation and detachment of cells, eosinophilic infiltration, and congestion of blood vessels in the lamina propria and submucosa. There was thickening and disarrangement of the mucosa, a weak positive reaction for PAS, and a marked increase in the collagen fibers of the lamina propria and the submucosa of the fundus. TEM revealed degeneration of chief and parietal cells, and there was a marked increase in the positive reaction for iNOS in all cells of the fundic gland. Group III showed reconstruction of the gastric glands with cystic dilatation and vacuolation, a moderate decrease in collagen fibers and reduced iNOS intensity, while Group IV showed healthy mucosa with normal surface lining epithelium and fundic glands, a strong positive reaction for PAS, a marked decrease in collagen fibers and a positive reaction for iNOS. TEM revealed regeneration of chief and parietal cells. Conclusion: Co-treatment with the copper-albumin complex seems to be useful for gastric ulcer treatment and ameliorates most of the hazards of indomethacin.Keywords: copper complex, gastric ulcer, indomethacin, rat
Procedia PDF Downloads 339
92 Early Diagnosis of Myocardial Ischemia Based on Support Vector Machine and Gaussian Mixture Model by Using Features of ECG Recordings
Authors: Merve Begum Terzi, Orhan Arikan, Adnan Abaci, Mustafa Candemir
Abstract:
Acute myocardial infarction is a major cause of death in the world; therefore, its fast and reliable diagnosis is a major clinical need. ECG is the most important diagnostic methodology used to make decisions about the management of cardiovascular diseases. In patients with acute myocardial ischemia, temporary chest pain together with changes in the ST segment and T wave of the ECG occur shortly before the start of myocardial infarction. In this study, a technique which detects changes in the ST/T sections of the ECG is developed for the early diagnosis of acute myocardial ischemia. For this purpose, a database of real ECG recordings was constituted, containing records from 75 patients presenting symptoms of chest pain who underwent elective percutaneous coronary intervention (PCI). 12-lead ECGs of the patients were recorded before and during the PCI procedure. Two ECG epochs, the pre-inflation ECG acquired before any catheter insertion and the occlusion ECG acquired during balloon inflation, are analyzed for each patient. By using the pre-inflation and occlusion recordings, ECG features that are critical in the detection of acute myocardial ischemia are identified, and the most discriminative features for the detection of acute myocardial ischemia are extracted. A classification technique based on the support vector machine (SVM) approach, operating with linear and radial basis function (RBF) kernels, is developed to detect ischemic events by using ST-T derived joint features from the non-ischemic and ischemic states of the patients. The dataset is randomly divided into training and testing sets, and the training set is used to optimize SVM hyperparameters by using the grid-search method and 10-fold cross-validation. SVMs are designed specifically for each patient by tuning the kernel parameters in order to obtain optimal classification performance. As a result of applying the developed classification technique to real ECG recordings, it is shown that the proposed technique provides highly reliable detection of anomalies in ECG signals. Furthermore, to develop a detection technique that can be used in the absence of an ECG recording obtained during the healthy stage, the detection of acute myocardial ischemia based on ECG recordings obtained during ischemia is also investigated. For this purpose, a Gaussian mixture model (GMM) is used to represent the joint pdf of the most discriminating ECG features of myocardial ischemia. Then, a Neyman-Pearson type of approach is developed to provide detection of outliers that would correspond to acute myocardial ischemia. The Neyman-Pearson decision strategy is applied by computing the average log-likelihood values of ECG segments and comparing them with a range of different threshold values. For different discrimination threshold values and numbers of ECG segments, probability of detection and probability of false alarm values are computed, and the corresponding ROC curves are obtained. The results indicate that an increasing number of ECG segments provides higher performance for GMM-based classification. Moreover, the comparison between the performance of SVM- and GMM-based classification showed that SVM provides higher classification performance over the ECG recordings of a considerable number of patients.Keywords: ECG classification, Gaussian mixture model, Neyman–Pearson approach, support vector machine
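A minimal sketch of the GMM-based outlier detection described above, using scikit-learn: fit a mixture to a reference feature distribution, then compare the average log-likelihood of new ECG segments against a threshold. All feature values and the threshold are invented for illustration:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Toy stand-ins: 2-D ST/T-derived feature vectors from reference segments.
reference_features = rng.normal(loc=[1.0, -0.5], scale=0.3, size=(200, 2))

# Fit a GMM to model the joint pdf of the reference feature distribution.
gmm = GaussianMixture(n_components=2, random_state=0).fit(reference_features)

def detect(segment_features, threshold):
    """Neyman-Pearson style decision: average log-likelihood of the
    ECG segments under the GMM, compared with a threshold; segments
    falling below the threshold are flagged as outliers."""
    avg_ll = gmm.score_samples(segment_features).mean()
    return avg_ll, avg_ll < threshold

test_segments = rng.normal(loc=[0.0, 0.0], scale=0.3, size=(10, 2))
print(detect(test_segments, threshold=-2.0))
```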
Procedia PDF Downloads 162
91 Deep Learning Based Text to Image Synthesis for Accurate Facial Composites in Criminal Investigations
Authors: Zhao Gao, Eran Edirisinghe
Abstract:
The production of an accurate sketch of a suspect based on a verbal description obtained from a witness is an essential task in most criminal investigations. The criminal investigation system employs specifically trained professional artists to manually draw a facial image of the suspect according to the descriptions of an eyewitness for subsequent identification. With the advancement of Deep Learning, Recurrent Neural Networks (RNN) have shown great promise in Natural Language Processing (NLP) tasks. Additionally, Generative Adversarial Networks (GAN) have proven to be very effective in image generation. In this study, a trained GAN conditioned on textual features, such as keywords automatically encoded from a verbal description of a human face using an RNN, is used to generate photo-realistic facial images for criminal investigations. The intention of the proposed system is to map corresponding features onto text generated from verbal descriptions. With this, it becomes possible to generate many reasonably accurate alternatives which the witness can use to identify a suspect. This reduces subjectivity in decision making by both the eyewitness and the artist, while giving the witness an opportunity to evaluate and reconsider decisions. Furthermore, the proposed approach benefits law enforcement agencies by reducing the time taken to physically draw each potential sketch, thus increasing response times and mitigating potentially malicious human intervention. With the publicly available 'CelebFaces Attributes Dataset' (CelebA), supplemented with verbal descriptions as training data, the proposed architecture is able to effectively produce facial structures from given text. Word embeddings are learnt by applying the RNN architecture to perform semantic parsing, the output of which is fed into the GAN for synthesizing photo-realistic images. Rather than the grid search method, a metaheuristic search based on genetic algorithms is applied to evolve the network, with the intent of achieving optimal hyperparameters in a fraction of the time of a typical brute-force approach. In addition to the 'CelebA' training database, further novel test cases are supplied to the network for evaluation: witness reports detailing criminals from Interpol or other law enforcement agencies are sampled on the network. Using the descriptions provided, samples are generated and compared with the ground-truth images of a criminal in order to calculate their similarity. Two factors are used for performance evaluation: the Structural Similarity Index (SSIM) and the Peak Signal-to-Noise Ratio (PSNR). High scores on these performance metrics should demonstrate the accuracy of the approach, in the hope of proving that it can be an effective tool for law enforcement agencies. The proposed approach to criminal facial image generation has the potential to increase the proportion of criminal cases that can ultimately be resolved using eyewitness information gathering.Keywords: RNN, GAN, NLP, facial composition, criminal investigation
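A minimal sketch of the SSIM/PSNR evaluation step using scikit-image; the arrays below are random stand-ins for a ground-truth mugshot and a generated composite (real use would load and align actual images):

```python
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

rng = np.random.default_rng(0)

# 64x64 grayscale stand-ins: "generated" is the truth plus mild noise.
truth = rng.random((64, 64))
generated = np.clip(truth + rng.normal(scale=0.05, size=(64, 64)), 0, 1)

ssim = structural_similarity(truth, generated, data_range=1.0)
psnr = peak_signal_noise_ratio(truth, generated, data_range=1.0)
print(f"SSIM: {ssim:.3f}, PSNR: {psnr:.1f} dB")
```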
Procedia PDF Downloads 162
90 Learning the History of a Tuscan Village: A Serious Game Using Geolocation Augmented Reality
Authors: Irene Capecchi, Tommaso Borghini, Iacopo Bernetti
Abstract:
An important tool for the enhancement of cultural sites is the serious game (SG), i.e., a game designed for educational purposes; SGs are applied to cultural sites through trivia, puzzles, and mini-games for participation in interactive exhibitions, mobile applications, and simulations of past events. The combination of Augmented Reality (AR) and digital cultural content has also produced examples of cultural heritage recovery and revitalization around the world. Through AR, the user perceives the information of the visited place in a more real and interactive way. Another interesting technological development for the revitalization of cultural sites is the combination of AR and the Global Positioning System (GPS), which, integrated, can enhance the user's perception of reality by providing historical and architectural information linked to specific locations organized along a route. To the authors' best knowledge, there are currently no applications that combine GPS-based AR and SGs for cultural heritage revitalization. The present research therefore focused on the development of an SG based on GPS and AR. The study area is the village of Caldana in Tuscany, Italy. Caldana is a fortified Renaissance village; the most important architectures are the walls, the church of San Biagio, the rectory, and the marquis' palace. The historical information derives from extensive research by the Department of Architecture at the University of Florence. The storyboard of the SG is based on the history of the three characters who built the village: marquis Marcello Agostini, who was commissioned by Cosimo I de Medici, Grand Duke of Tuscany, to build the village; his son Ippolito; and his architect Lorenzo Pomarelli. The three historical characters were modeled in 3D using the freeware MakeHuman and imported into Blender and Mixamo to associate a skeleton and blend shapes for gestural animations and lip movement during speech. The Unity Rhubarb Lip Syncer plugin was used for the lip-sync animation, and the historical costumes were created with Marvelous Designer. The application was developed using the Unity 3D graphics and game engine. The AR+GPS Location plugin was used to position the 3D historical characters based on GPS coordinates, and the ARFoundation library was used to display AR content. The SG is available in two versions: for children and for adults. The children's version consists of finding a digital treasure made up of valuable items and historical rarities. Players must find 9 village locations where 3D AR models of historical figures explaining the history of the village provide clues. To stimulate players, there are 3 levels of rewards, one for every 3 clues discovered; the rewards consist of AR masks for an archaeologist, a professor, and an explorer. In the adult version, the SG consists of finding the 16 historical landmarks in the village and learning historical and architectural information in an interactive and engaging way. The application is being tested on a sample of adults and children. Test subjects will be surveyed on a Likert scale to find out their perceptions of using the app and how the learning experience compares between the guided tour and interaction with the app.Keywords: augmented reality, cultural heritage, GPS, serious game
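A minimal sketch of the geolocation-trigger logic that the AR+GPS Location plugin provides inside Unity, expressed here in Python for illustration: a haversine distance test that spawns an AR character when the player is within a trigger radius. The coordinates and radius are invented placeholders:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS coordinates."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical anchor for one of the 9 clue locations (coordinates invented).
clue_location = (42.935, 10.905)
trigger_radius_m = 15.0

def should_spawn_character(player_lat, player_lon):
    """Spawn the AR character when the player is within the trigger radius."""
    return haversine_m(player_lat, player_lon, *clue_location) <= trigger_radius_m

print(should_spawn_character(42.9351, 10.9051))  # True: roughly 14 m away
```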
Procedia PDF Downloads 95
89 Health and Greenhouse Gas Emission Implications of Reducing Meat Intakes in Hong Kong
Authors: Cynthia Sau Chun Yip, Richard Fielding
Abstract:
High meat and especially red meat intakes are significantly and positively associated with a multiple burden of diseases and with high greenhouse gas (GHG) emissions. This study investigated population meat intake patterns in Hong Kong. It quantified the burden of disease and GHG emission outcomes by modelling the adjustment of Hong Kong population meat intakes to recommended healthy levels. It compared age- and sex-specific population meat, fruit and vegetable intakes, obtained from a population survey among adults aged 20 years and over in Hong Kong in 2005-2007, against the intake recommendations suggested in the Modelling System to Inform the Revision of the Australian Guide to Healthy Eating (AGHE-2011-MS) technical document. This study found that meat and meat-alternative intakes, especially red meat intakes, among Hong Kong males aged 20 years and over are significantly higher than recommended. Red meat intakes among females aged 50-69 years and other meat and alternative intakes among those aged 20-59 years are also higher than recommended. Taking the 2005-07 age- and sex-specific population meat intakes as baselines, three counterfactual scenarios of adjusting Hong Kong adult population meat intakes to AGHE-2011-MS and pre-2011 AGHE recommendations by the year 2030 were established. The consequent energy intake gaps were substituted with additional legume, fruit and vegetable intakes. To quantify the GHG emission outcomes associated with Hong Kong meat intakes, cradle-to-ready-to-eat lifecycle assessment emission modelling was used; a comparative risk assessment burden of disease model was used to quantify the health outcomes. This study found that adjusting meat intakes to recommended levels could reduce Hong Kong GHG emissions by 17%-44% compared against baseline meat intake emissions, and prevent 2,519 to 7,012 premature deaths in males and 53 to 1,342 in females, as well as multiple burdens of disease, when compared with the baseline meat intake scenario. Whereas previous co-benefit studies compared lump-sum meat intake reductions and outcome measures across the entire population, using emission factors and relative risks from individual studies, this study used age- and sex-specific input and output measures, with emission factors and relative risks obtained from high-quality meta-analyses and meta-reviews respectively, and took government dietary recommendations into account. Hence the evaluations in this study are of better quality and more reflective of real-life practices. Going beyond previous co-benefit studies, this study pinpointed age-, sex- and meat-type-specific intervention points and leverages. When compared with similar studies in Australia, this study also showed that intervention points and leverages among populations of different geographic and cultural backgrounds can differ, and that globalization also globalizes meat consumption emission effects. More region- and culture-specific evaluations are recommended to promote more sustainable meat consumption and enhance global food security.Keywords: burden of diseases, greenhouse gas emissions, Hong Kong diet, sustainable meat consumption
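A minimal sketch of the counterfactual emission arithmetic underlying such co-benefit models: intake per food group times a cradle-to-ready-to-eat emission factor, compared between baseline and adjusted diets. All numbers below are invented placeholders, not values from the study:

```python
# Illustrative arithmetic only: emission factors in kg CO2e per kg of food,
# intakes in kg per person per day (all values invented).
emission_factor = {"red_meat": 28.0, "poultry": 5.0, "legumes": 1.5}

baseline_intake = {"red_meat": 0.12, "poultry": 0.08, "legumes": 0.01}
counterfactual  = {"red_meat": 0.05, "poultry": 0.06, "legumes": 0.07}

def daily_emissions(intake):
    """Cradle-to-ready-to-eat emissions implied by a daily intake pattern."""
    return sum(intake[food] * emission_factor[food] for food in intake)

base, cf = daily_emissions(baseline_intake), daily_emissions(counterfactual)
print(f"Emission reduction: {100 * (base - cf) / base:.0f}%")
```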
Procedia PDF Downloads 311
88 Occipital Squama Convexity and Neurocranial Covariation in Extant Homo sapiens
Authors: Miranda E. Karban
Abstract:
A distinctive pattern of occipital squama convexity, known as the occipital bun or chignon, has traditionally been considered a derived Neandertal trait. However, some early modern and extant Homo sapiens share similar occipital bone morphology, showing pronounced internal and external occipital squama curvature and paralambdoidal flattening. It has been posited that these morphological patterns are homologous in the two groups, but this claim remains disputed. Many developmental hypotheses have been proposed, including assertions that the chignon represents a developmental response to a long and narrow cranial vault, a narrow or flexed basicranium, or a prognathic face. These claims, however, remain to be metrically quantified in a large subadult sample, and little is known about the feature’s developmental, functional, or evolutionary significance. This study assesses patterns of chignon development and covariation in a comparative sample of extant human growth study cephalograms. Cephalograms from a total of 549 European-derived North American subjects (286 male, 263 female) were scored on a 5-stage ranking system of chignon prominence. Occipital squama shape was found to exist along a continuum, with 34 subjects (6.19%) possessing defined chignons, and 54 subjects (9.84%) possessing very little occipital squama convexity. From this larger sample, those subjects represented by a complete radiographic series were selected for metric analysis. Measurements were collected from lateral and posteroanterior (PA) cephalograms of 26 subjects (16 male, 10 female), each represented at 3 longitudinal age groups. Age group 1 (range: 3.0-6.0 years) includes subjects during a period of rapid brain growth. Age group 2 (range: 8.0-9.5 years) includes subjects during a stage in which brain growth has largely ceased, but cranial and facial development continues. Age group 3 (range: 15.9-20.4 years) includes subjects at their adult stage. A total of 16 landmarks and 153 sliding semi-landmarks were digitized at each age point, and geometric morphometric analyses, including relative warps analysis and two-block partial least squares analysis, were conducted to study covariation patterns between midsagittal occipital bone shape and other aspects of craniofacial morphology. A convex occipital squama was found to covary significantly with a low, elongated neurocranial vault, and this pattern was found to exist from the youngest age group. Other tested patterns of covariation, including cranial and basicranial breadth, basicranial angle, midcoronal cranial vault shape, and facial prognathism, were not found to be significant at any age group. These results suggest that the chignon, at least in this sample, should not be considered an independent feature, but rather the result of developmental interactions relating to neurocranial elongation. While more work must be done to quantify chignon morphology in fossil subadults, this study finds no evidence to disprove the developmental homology of the feature in modern humans and Neandertals.Keywords: chignon, craniofacial covariation, human cranial development, longitudinal growth study, occipital bun
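A minimal sketch of a two-block partial least squares analysis of the kind described above, using scikit-learn on synthetic landmark data; the block sizes and data are invented, and this is not the study's geometric morphometric pipeline:

```python
import numpy as np
from sklearn.cross_decomposition import PLSCanonical

rng = np.random.default_rng(0)
n = 60  # subjects

# Synthetic stand-ins: flattened landmark coordinates for the midsagittal
# occipital outline (block 1) and the vault (block 2), constructed to
# share one underlying axis of covariation.
shared = rng.normal(size=(n, 1))
occipital = shared @ rng.normal(size=(1, 10)) + 0.3 * rng.normal(size=(n, 10))
vault = shared @ rng.normal(size=(1, 14)) + 0.3 * rng.normal(size=(n, 14))

# Two-block PLS finds paired axes maximising covariance between blocks.
pls = PLSCanonical(n_components=2).fit(occipital, vault)
scores_x, scores_y = pls.transform(occipital, vault)
r = np.corrcoef(scores_x[:, 0], scores_y[:, 0])[0, 1]
print(f"Correlation along the first PLS axis pair: {r:.2f}")
```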
Procedia PDF Downloads 201
87 Dietary Intakes and Associated Demographic, Behavioural and Other Health-Related Factors in Mexican College Students
Authors: Laura E. Hall, Joel Monárrez-Espino, Luz María Tejada Tayabas
Abstract:
College students are at risk of weight gain and poor dietary habits, and health behaviours established during this period have been shown to track into midlife. They may therefore be an important target group for health promotion strategies, yet there is a lack of literature regarding dietary intakes and associated factors in this group, particularly in middle-income countries such as Mexico. The aim of this exploratory research was to describe and compare reported dietary intakes among nursing and nutrition college students at two public universities in Mexico, and to explore the relationship between demographic, behavioural and other health-related factors and the risk of low diet quality. Mexican college students (n=444) majoring in nutrition or nursing at two urban universities completed questionnaires regarding dietary and health-related behaviours and risks. Dietary intake was assessed via 24-hour recall. Weight, height and abdominal circumference were measured. Descriptive statistics were reported, and nutrient intakes were compared between colleges and study tracks using Student's t-tests, odds ratios and Pearson chi-square tests. Two diet quality scores were constructed, and the relationship between demographic, behavioural and other health-related factors and the diet quality scores was explored using binary logistic regression. Analysis was performed using SPSS Statistics, with differences considered statistically significant at p<0.05. The response rate to the survey was 91%. When macronutrients were considered as a percentage of total energy, the majority of students had protein intakes within recommended ranges; however, one quarter of students had carbohydrate and fat intakes exceeding recommended levels, and three quarters had fibre intakes below recommendations. More than half of the students reported intakes of magnesium, zinc, vitamin A, folate and vitamin E that were below estimated average requirements. Students studying nutrition reported macronutrient and micronutrient intakes that were more compliant with recommendations compared to nursing students, and students studying in central-north Mexico were more compliant than those studying in southeast Mexico. Breakfast skipping (Adjusted Odds Ratio (OR) = 5.3; 95% Confidence Interval (CI) = 1.2-22.7), risk of anxiety (OR = 2.3; CI = 1.3-4.4), and university location (OR = 1.6; CI = 1.03-2.6) were associated with a greater risk of having a low macronutrient score. Caloric intakes <1800 kcal (OR = 5.8; CI = 3.5-9.7), breakfast skipping (OR = 3.7; CI = 1.4-10.3), vigorous exercise ≤1 h/week (OR = 2.6; CI = 1.3-5.2), soda consumption >250 mL/day (OR = 2.0; CI = 1.2-3.3), unhealthy diet perception (OR = 1.9; CI = 1.2-3.0), and university location (OR = 1.8; CI = 1.1-2.8) were significantly associated with greater odds of having a low micronutrient score. College students studying nursing and nutrition did not report ideal diets, and these students should not be overlooked in public health interventions. Differences in dietary intakes between universities and study tracks were evident, with more favourable profiles in nutrition compared to nursing students, and in central-north compared to southeast students. Further, demographic, behavioural and other health-related factors were associated with diet quality scores, warranting further research.Keywords: college student, diet quality, nutrient intake, young adult
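A minimal sketch of the binary logistic regression step: fitting the model and exponentiating coefficients to obtain odds ratios with 95% confidence intervals, here with statsmodels on synthetic stand-in variables (all values invented, not the study's data):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 400

# Synthetic stand-ins for two of the surveyed predictors.
breakfast_skipper = rng.integers(0, 2, n)
low_exercise = rng.integers(0, 2, n)
logit = -1.0 + 1.3 * breakfast_skipper + 0.7 * low_exercise
low_diet_quality = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

X = sm.add_constant(np.column_stack([breakfast_skipper, low_exercise]))
res = sm.Logit(low_diet_quality, X).fit(disp=False)

# Exponentiated coefficients give odds ratios with 95% CIs.
odds_ratios = np.exp(res.params)
conf = np.exp(res.conf_int())
for name, or_, (lo, hi) in zip(["intercept", "skips breakfast", "low exercise"],
                               odds_ratios, conf):
    print(f"{name}: OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```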
Procedia PDF Downloads 452
86 A Patient-Centered Approach to Clinical Trial Development: Real-World Evidence from a Canadian Medical Cannabis Clinic
Authors: Lucile Rapin, Cynthia El Hage, Rihab Gamaoun, Maria-Fernanda Arboleda, Erin Prosk
Abstract:
Introduction: Sante Cannabis (SC), a Canadian group of clinics dedicated to medical cannabis, based in Montreal and in the province of Quebec, has served more than 8000 patients seeking cannabis-based treatment over the past five years. As randomized clinical trials with natural medical cannabis are scarce, real-world evidence offers the opportunity to fill research gaps between scientific evidence and clinical practice. Data on the use of medical cannabis products from SC patients were prospectively collected, leading to a large real-world database on the use of medical cannabis. The aim of this study was to report information on the profiles of both patients and prescribed medical cannabis products at SC clinics, and to assess the safety of medical cannabis among Canadian patients. Methods: This is an observational retrospective study of 1342 adult patients who were authorized medical cannabis products between October 2017 and September 2019. Information regarding demographic characteristics, therapeutic indications for medical cannabis use, patterns in dosing and dosage forms of medical cannabis, and adverse effects over a one-year follow-up (initial and 4 follow-up (FUP) visits) was collected. Results: 59% of SC patients were female, with a mean age of 56.7 years (SD = 15.6, range = 19-97). Cannabis products were authorized mainly for patients with a diagnosis of chronic pain (68.8% of patients), cancer (6.7%), neurological disorders (5.6%), and mood disorders (5.4%). At the initial visit, a large majority (70%) of patients were authorized medical cannabis products exclusively, 27% were authorized a combination of pharmaceutical cannabinoids and medical cannabis, and 3% were prescribed pharmaceutical cannabinoids only. This pattern was recurrent over the one-year follow-up. Overall, oil was the preferred formulation (averaging 72.5% across visits), followed by a combination of oil and dry product (averaging 19%); other routes of administration accounted for less than 4%. Patients were predominantly prescribed products with a balanced THC:CBD ratio (59%-75% across visits). 28% of patients reported at least one adverse effect (AE) at the 3-month follow-up visit and 12% at the 6-month FUP visit. 84.8% of total AEs were mild and transient, and no serious AE was reported. Overall, the most common side effects reported were dizziness (11.95% of total AEs), drowsiness (11.4%), dry mouth (5.5%), nausea (4.8%), headaches (4.6%), cough (4.4%), anxiety (4.1%) and euphoria (3.5%); other adverse effects accounted for less than 3% of total AEs. Conclusion: Our results confirm that the primary area of clinical use for medical cannabis is in pain management. Patients in this cohort are largely utilizing plant-based cannabis oil products with a balanced THC:CBD ratio. Reported adverse effects were mild and included dizziness and drowsiness. This real-world data confirms the tolerable safety profile of medical cannabis and suggests medical indications not yet validated in controlled clinical trials. Such data offer an important opportunity for the investigation of the long-term effects of cannabinoid exposure in real-life conditions. Real-world evidence can be used to direct clinical trial research efforts on specific indications and dosing patterns for product development.Keywords: medical cannabis, safety, real-world data, Canada
Procedia PDF Downloads 132