Search results for: characteristic points

633 Walkability with the Use of Mobile Apps

Authors: Dimitra Riza

Abstract:

This paper examines different ways of exploring a city by using smartphone applications while walking, and the way this new attitude will change our perception of the urban environment. By referring to various examples of such applications, we consider the options and possibilities that open up with new technologies, their advantages and disadvantages, as well as ways of experiencing and interpreting the urban environment. The widespread use of smartphones has given access to information, maps, knowledge, etc. at all times and places. City tourism marketing takes advantage of this and promotes the city's attractions through technology. Mobile-mediated walking tours provide new possibilities and modify the way we used to explore cities, for instance by giving directions that make destinations easy to find, by displaying our exact location on the map, or by letting us create our own tours by picking points of interest and interconnecting them into a route. These apps are interactive, as they filter the user's interests, movements, etc. Discovering a city on foot and visiting interesting sites and landmarks has become very easy and has been revolutionized through the help of navigational and other applications. In contrast to the re-invention of the city suggested by Baudelaire's Flâneur in the 19th century, or to the construction of situations by the Situationists in the 1960s, the new technological means do not allow people to "get lost", as they follow and record our moves. In the case of strolling or drifting around the city, the option of "getting lost" is desired, as the goal is not wayfinding or the destination but the experience of walking itself. Getting lost is not always about dislocation; it is about getting a feel for the urban environment while experiencing it freely. So, on the one hand, walking is considered a physical and embodied experience, as the observer becomes an actor and participates with all his senses in the city's activities. On the other hand, the use of a screen turns out to be a disembodied experience of the urban environment, as we perceive it in a fragmented and distanced way. Our relation to the city resembles Alberti's isolated viewer, detached from any urban stage. The smartphone, even when we are present, acts as a mediator: we interact directly with it and only indirectly with the environment. Contrary to the Flâneur and the Situationists, who discovered the city with their own bodies, today the body itself is detached from that experience. While contemporary cities are becoming more walkable, new technological applications tend to open up every possibility for exploring them by suggesting multiple routes. Exploration becomes easier, but perception changes.

Keywords: body, experience, mobile apps, walking

Procedia PDF Downloads 408
632 The Analyzer: Clustering Based System for Improving Business Productivity by Analyzing User Profiles to Enhance Human Computer Interaction

Authors: Dona Shaini Abhilasha Nanayakkara, Kurugamage Jude Pravinda Gregory Perera

Abstract:

E-commerce platforms have revolutionized the shopping experience, offering convenient ways for consumers to make purchases. To improve interactions with customers and optimize marketing strategies, it is essential for businesses to understand user behavior, preferences, and needs on these platforms. This paper focuses on recommending that businesses customize interactions with users based on their behavioral patterns, leveraging data-driven analysis and machine learning techniques. Businesses can improve engagement and boost the adoption of e-commerce platforms by aligning behavioral patterns with user goals of usability and satisfaction. We propose TheAnalyzer, a clustering-based system designed to enhance business productivity by analyzing user profiles and improving human-computer interaction. TheAnalyzer seamlessly integrates with business applications, collecting relevant data points based on users' natural interactions without additional burdens such as questionnaires or surveys. It defines five key user analytics as features for its dataset, which are easily captured through users' interactions with e-commerce platforms. This research presents a study demonstrating the successful distinction of users into specific groups based on the five key analytics considered by TheAnalyzer. With the assistance of domain experts, customized business rules can be attached to each group, enabling TheAnalyzer to influence business applications and provide an enhanced, personalized user experience. The outcomes are evaluated quantitatively and qualitatively, demonstrating that utilizing TheAnalyzer's capabilities can optimize business outcomes, enhance customer satisfaction, and drive sustainable growth. The findings of this research contribute to the advancement of personalized interactions in e-commerce platforms. By leveraging user behavioral patterns and analyzing both new and existing users, businesses can effectively tailor their interactions to improve customer satisfaction and loyalty and ultimately drive sales.
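
As a rough illustration of the pipeline this abstract describes (data standardization, dimensionality reduction, and clustering of user analytics), the following Python sketch uses scikit-learn. The synthetic features and the choice of KMeans and PCA are our assumptions for illustration only; the paper does not name its specific algorithms.

```python
# Minimal sketch of a clustering pipeline of the kind TheAnalyzer describes:
# standardize five user analytics, reduce dimensionality, cluster users.
# The feature values below are synthetic placeholders, not from the paper.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.random((200, 5))  # 200 users x 5 behavioral analytics (synthetic)

X_std = StandardScaler().fit_transform(X)        # data standardization
X_2d = PCA(n_components=2).fit_transform(X_std)  # dimensionality reduction
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X_2d)
# Each resulting cluster can then be mapped to a customized business rule
# by a domain expert, as the abstract suggests.
print(np.bincount(labels))
```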

Keywords: data clustering, data standardization, dimensionality reduction, human computer interaction, user profiling

Procedia PDF Downloads 71
631 Positive Disruption: Towards a Definition of Artist-in-Residence Impact on Organisational Creativity

Authors: Denise Bianco

Abstract:

Several studies on innovation and creativity in organisations emphasise the need to expand horizons and take on alternative and unexpected views to produce something new. This paper theorises the potential impact artists can have as creative catalysts, working embedded in non-artistic organisations. It begins from an understanding that in today's ever-changing scenario, organisations are increasingly seeking to open up new creative thinking through deviant behaviours to produce innovation, and that art residencies need to be critically revised in this specific context in light of their disruptive potential. On the one hand, this paper builds upon recent contributions made on workplace creativity and the related concepts of deviance and disruption. Research suggests that creativity is likely to be lower in work contexts where utter conformity is a cardinal value and higher in work contexts that show some tolerance for uncertainty and deviance. On the other hand, this paper draws attention to the Artist-in-Residence as a vehicle for epistemic friction between divergent and convergent thinking, which allows the creation of unparalleled ways of knowing in the dailiness of situated and contextualised social processes. In order to do so, this contribution brings together insights from the most relevant theories on organisational creativity and unconventional agile methods such as Art Thinking, and direct insights from ethnographic fieldwork in the context of embedded art residencies within work organisations, to propose a redefinition of the Artist-in-Residence and its potential impact on organisational creativity. The result is a redefinition of the embedded Artist-in-Residence in organisational settings from a more comprehensive, multi-disciplinary, and relational perspective that builds on three focal points. First, the notion that organisational creativity is a dynamic and synergistic process throughout which an idea is framed by recurrent activities subjected to multiple influences. Second, the definition of the embedded Artist-in-Residence as an assemblage of dynamic, productive relations and unexpected possibilities for new networks of relationality that encourage the recombination of knowledge. Third, and most importantly, the acknowledgment that embedded residencies are, in essence, bi-cultural knowledge contexts where creativity flourishes as the result of open-to-change processes that are highly relational, constantly negotiated, and contextualised in time and space.

Keywords: artist-in-residence, convergent and divergent thinking, creativity, creative friction, deviance and creativity

Procedia PDF Downloads 94
630 Evaluation of the Irritation Potential of Three Topical Formulations of Minoxidil 5% Using Patch Test

Authors: Sule Pallavi, Shah Priyank, Thavkar Amit, Mehta Suyog, Rohira Poonam

Abstract:

Minoxidil is used topically to promote hair growth in the treatment of male androgenetic alopecia. The objective of this study was to compare the irritation potential of three conventional formulations of minoxidil 5% topical solution in a human patch test. The study was a single-centre, double-blind, non-randomized controlled study in 56 healthy adult Indian subjects. An occlusive patch test of 24 hours was performed with the three formulations of minoxidil 5% topical solution. The products tested were aqueous-based minoxidil 5% (AnasureTM 5%, Sun Pharma, India – Brand A), alcohol-based minoxidil 5% (Brand B) and aqueous-based minoxidil 5% (Brand C). Isotonic saline 0.9% and 1% w/w sodium lauryl sulphate were included as negative and positive controls, respectively. Patches were applied and removed after 24 hours. The skin reaction was assessed and clinically scored 24 hours after the removal of the patches under a constant artificial daylight source using the Draize scale (0-4 point scale for erythema/wrinkles/dryness and for oedema). A combined mean score up to 2.0/8.0 indicates a product is "non-irritant", a score between 2.0/8.0 and 4.0/8.0 indicates "mildly irritant", and a score above 4.0/8.0 indicates "irritant". Follow-up was scheduled after one week to confirm recovery from any reaction. The procedure of the patch test followed the principles outlined by the Bureau of Indian Standards (BIS) (IS 4011:2018; Methods of Test for Safety Evaluation of Cosmetics, 3rd revision). Fifty-six subjects with a mean age of 30.9 years (27 males and 29 females) participated in the study. The combined mean scores (± standard deviation) were: 0.13 ± 0.33 (Brand A), 0.39 ± 0.49 (Brand B), 0.22 ± 0.41 (Brand C), 2.91 ± 0.79 (positive control) and 0.02 ± 0.13 (negative control). The mean score of Brand A (Sun Pharma product) was significantly lower than that of Brand B (p=0.001) and comparable with that of Brand C (p=0.21). The combined mean erythema scores (± standard deviation) were: 0.09 ± 0.29 (Brand A), 0.27 ± 0.5 (Brand B), 0.18 ± 0.39 (Brand C), 2.02 ± 0.49 (positive control) and 0.0 ± 0.0 (negative control). The mean erythema score of Brand A was significantly lower than that of Brand B (p=0.01) and comparable with that of Brand C (p=0.16). Any reaction observed at 24 hours after patch removal subsided within a week. All three topical formulations of minoxidil 5% were non-irritant. Brand A of 5% minoxidil (Sun Pharma) was found to be less irritant than Brand B and Brand C based on the combined mean score and mean erythema score in the human patch test as per BIS IS 4011:2018.
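
The classification rule quoted in the abstract translates directly into a threshold function; the following Python sketch reproduces the study's stated cut-offs and applies them to the reported combined mean scores.

```python
def classify_irritancy(combined_mean_score: float) -> str:
    """Classify a Draize combined mean score (erythema + oedema, max 8.0)
    using the cut-offs quoted in the abstract."""
    if combined_mean_score <= 2.0:
        return "non-irritant"
    if combined_mean_score <= 4.0:
        return "mildly irritant"
    return "irritant"

# Combined mean scores reported in the study:
for name, score in [("Brand A", 0.13), ("Brand B", 0.39), ("Brand C", 0.22),
                    ("Positive control", 2.91), ("Negative control", 0.02)]:
    print(name, "->", classify_irritancy(score))
# All three minoxidil brands classify as non-irritant; the positive
# control (1% SLS) classifies as mildly irritant.
```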

Keywords: erythema, irritation, minoxidil, patch test

Procedia PDF Downloads 93
629 Research on Reducing Food Losses by Extending the Date of Minimum Durability on the Example of Cereal Products

Authors: Monika Trzaskowska, Dorota Zielinska, Anna Lepecka, Katarzyna Neffe-Skocinska, Beata Bilska, Marzena Tomaszewska, Danuta Kolozyn-Krajewska

Abstract:

Microbiological quality and food safety are important food characteristics. Regulation (EU) No 1169/2011 of the European Parliament and of the Council on the provision of food information to consumers introduces the obligation to provide information on the 'use-by' date or the date of minimum durability (DMD). The second term is the date until which a properly stored or transported foodstuff retains its physical, chemical, microbiological and organoleptic properties. The date should be preceded by 'best before'. It is used for durable products, e.g., pasta. In relation to reducing food losses, the question may be asked whether products retain quality and safety beyond the currently declared date of minimum durability. The aim of the study was to assess the sensory quality and microbiological safety of selected cereal products, i.e., pasta and millet, after the DMD. The scope of the study was to determine markers of microbiological quality, i.e., the total viable count (TVC), the number of bacteria from the Enterobacteriaceae family and the total yeast and mold count (TYMC), on the last day of the DMD and after 1 and 3 months of storage. In addition, the presence of Salmonella and Listeria monocytogenes was examined on the last day of the DMD. The sensory quality of the products was assessed by quantitative descriptive analysis (QDA); the intensity of 14 attributes and the overall quality were defined and determined. In the tested samples of millet and pasta, no pathogenic Salmonella or Listeria monocytogenes bacteria were found. The values of the selected quality and microbiological safety indicators on the last day of the DMD were in the range of about 1-3 log cfu/g, demonstrating the good microbiological quality of the tested food. Comparing the products, a higher number of microorganisms was found in the millet samples. After 3 months of storage, the TVC decreased in millet, while in pasta it increased. In both products, the number of bacteria from the Enterobacteriaceae family decreased. In contrast, the TYMC increased in the millet samples and decreased in pasta. The intensity of the sensory characteristics varied over the studied period; it remained at a similar level or increased. In millet, the odor and flavor of 'cooked porridge' were more intense 3 months after the DMD. Similarly, in the pasta, the smell and taste of 'cooked pasta' were more intense. To sum up, the researched products on the last day of the date of minimum durability were characterized by very good microbiological and sensory quality, which was maintained for 3 months after this date. Based on these results, the date of minimum durability of the tested products could be extended. The publication was financed on the basis of an agreement with the National Center for Research and Development No. Gospostrateg 1/385753/1/NCBR/2018 for the implementation and financing of the project under the strategic research and development program 'social and economic development of Poland in the conditions of globalizing markets – GOSPOSTRATEG – acronym PROM'.

Keywords: date of minimum durability, food losses, food quality and safety, millet, pasta

Procedia PDF Downloads 158
628 Evaluation of the Effect of Learning Disabilities and Accommodations on the Prediction of the Exam Performance: Ordinal Decision-Tree Algorithm

Authors: G. Singer, M. Golan

Abstract:

Providing students with learning disabilities (LD) with extra time to grant them equal access to the exam is a necessary but insufficient condition to compensate for their LD; there should also be a clear indication that the additional time was actually used. For example, if students with LD use more time than students without LD and yet receive lower grades, this may indicate that a different accommodation is required. If they achieve higher grades but use the same amount of time, then the effectiveness of the accommodation has not been demonstrated. The main goal of this study is to evaluate the effect of including parameters related to LD and extended exam time, along with other commonly used characteristics (e.g., student background and ability measures such as high-school grades), on the ability of ordinal decision-tree algorithms to predict exam performance. We use naturally occurring data collected from hundreds of undergraduate engineering students. The sub-goals are i) to examine the improvement in prediction accuracy when the indicator of exam performance includes 'actual time used' in addition to the conventional indicator (exam grade) employed in most research; ii) to explore the effectiveness of extended exam time on exam performance for different courses and for LD students with different profiles (i.e., sets of characteristics). This is achieved by using the patterns (i.e., subgroups) generated by the algorithms to identify pairs of subgroups that differ in just one characteristic (e.g., course or type of LD) but have different outcomes in terms of exam performance (grade and time used). Since grade and time used exhibit an ordinal form, we propose a method based on ordinal decision trees, which applies a weighted information-gain ratio (WIGR) measure for selecting the classifying attributes. Unlike other known ordinal algorithms, our method does not assume monotonicity in the data. The proposed WIGR is an extension of an information-theoretic measure, in the sense that it adjusts to the case of an ordinal target and takes into account the error severity between two different target classes. Specifically, we use ordinal C4.5, random-forest, and AdaBoost algorithms, as well as an ensemble technique composed of ordinal and non-ordinal classifiers. Firstly, we find that the inclusion of LD and extended exam-time parameters improves the prediction of exam performance (compared to specifications of the algorithms that do not include these variables). Secondly, when the indicator of exam performance includes 'actual time used' together with grade (as opposed to grade only), the prediction accuracy improves. Thirdly, our subgroup analyses show clear differences in the effect of extended exam time on exam performance among different courses and different student profiles. From a methodological perspective, we find that the ordinal decision-tree based algorithms outperform their conventional, non-ordinal counterparts. Further, we demonstrate that the ensemble-based approach leverages the strengths of each type of classifier (ordinal and non-ordinal) and yields better performance than each classifier individually.
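
The abstract does not give the exact WIGR formula, so the Python sketch below shows only one plausible form of a distance-weighted ordinal split criterion: misclassification severity is assumed proportional to the distance between ordinal classes, which is a common choice in the ordinal-classification literature but not necessarily the authors' definition.

```python
# Hedged sketch of a severity-weighted split criterion for an ordinal target.
# Assumption (not from the paper): confusing classes i and j is weighted by
# their ordinal distance |i - j|, giving a Gini-like, distance-aware impurity.
import numpy as np

def ordinal_impurity(y: np.ndarray, n_classes: int) -> float:
    """Gini-like impurity where the cost of confusing classes i and j
    grows with their ordinal distance |i - j|."""
    p = np.bincount(y, minlength=n_classes) / len(y)
    return sum(p[i] * p[j] * abs(i - j)
               for i in range(n_classes) for j in range(n_classes))

def weighted_gain(y: np.ndarray, mask: np.ndarray, n_classes: int) -> float:
    """Impurity reduction achieved by splitting y with a boolean mask."""
    left, right = y[mask], y[~mask]
    w = len(left) / len(y)
    return (ordinal_impurity(y, n_classes)
            - w * ordinal_impurity(left, n_classes)
            - (1 - w) * ordinal_impurity(right, n_classes))

y = np.array([0, 0, 1, 1, 2, 2, 2, 3])     # ordinal exam-performance classes
mask = np.array([True] * 4 + [False] * 4)  # one candidate attribute split
print(weighted_gain(y, mask, n_classes=4))
```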

Keywords: actual exam time usage, ensemble learning, learning disabilities, ordinal classification, time extension

Procedia PDF Downloads 99
627 Standardization of the Roots of Gnidia stenophylla Gilg: A Potential Medicinal Plant of South Eastern Ethiopia Traditionally Used as an Antimalarial

Authors: Mebruka Mohammed, Daniel Bisrat, Asfaw Debella, Tarekegn Birhanu

Abstract:

Lack of quality control standards for medicinal plants and their preparations is considered a major barrier to their integration into effective primary health care in Ethiopia. Poor-quality herbal preparations have led to countless adverse reactions, some extending to death. The exclusion of Ethiopian medicinal plants from the world's booming herbal market is another significant loss resulting from the absence of a herbal quality control system. Thus, in the present study, Gnidia stenophylla Gilg (a popular antimalarial plant of south-eastern Ethiopia) is standardized, and a full monograph is produced that can serve as a guideline for quality control of the crude drug. Morphologically, the roots were found to be cylindrical and tapering towards the end. They have a hard, corky and friable touch, with a saddle-brown color externally, and are relatively smooth and pale brown internally. They have a characteristic pungent odor and a very bitter taste. Microscopically, the roots showed lignified xylem vessels, wide medullary rays with some calcium oxalate crystals, reddish-brown secondary metabolite contents and slender, long fibres. Physicochemical standards were quantified as follows: foreign matter (5.25%), moisture content (6.69%), total ash (40.80%), acid-insoluble ash (8.00%), water-soluble ash (2.30%), alcohol-soluble extractive (15.27%), water-soluble extractive (10.98%), foaming index (100.01 ml/g), swelling index (7.60 ml/g). Phytochemically, phenols, flavonoids, steroids, tannins and saponins were detected in the root extract; TLC and HPLC fingerprints were produced, and an analytical marker was tentatively characterized as 3-(3,4-dihydro-3,5-dihydroxy-2-(4-hydroxy-5-methylhex-1-en-2-yl)-7-methoxy-4-oxo-2H-chromen-8-yl)-5-hydroxy-2-(4-hydroxyphenyl)-7-methoxy-4H-chromen-4-one. Regarding residues, pesticide (i.e., DDT, DDE, g-BHC) and radiochemical levels fell below the WHO limits, while heavy metals (i.e., Co, Ni, Cr, Pb, and Cu), total aerobic count and fungal load lay well above them. In conclusion, the result can be taken as a signal that employing non-standardized medicinal plants could pose many health risks to the Ethiopian people and to Africans at large (as 80% of the continent's inhabitants depend on them for primary health care). Therefore, following a more universal approach to herbal quality by adopting the WHO guidelines and developing monographs using the various quality parameters is essential to minimize quality breaches and promote effective herbal drug usage.

Keywords: Gnidia stenophylla Gilg, standardization/monograph, pharmacognostic, residue/impurity, quality

Procedia PDF Downloads 284
626 The Changing Landscape of Fire Safety in Covered Car Parks with the Arrival of Electric Vehicles

Authors: Matt Stallwood, Michael Spearpoint

Abstract:

In 2020, the UK government announced that sales of new petrol and diesel cars would end in 2030, and battery-powered cars made up 1 in 8 new cars sold in 2021 – more than the total from the previous five years. Guidance across the UK for the fire safety design of covered car parks is changing in response to the projected rapid growth in electric vehicle (EV) use. This paper discusses the current knowledge on the fire safety concerns posed by EVs, in particular those powered by lithium-ion batteries, when considering the likelihood of vehicle ignition, fire severity and the spread of fire to other vehicles. The paper builds on previous work that has investigated the frequency of fires starting in cars powered by internal combustion engines (ICE), the hazard posed by such fires in covered car parks and the potential for neighboring vehicles to become involved in an incident. Historical data has been used to determine the ignition frequency of ICE car fires, whereas such data is scarce when it comes to EV fires. Should a fire occur, the fire development has conventionally been assessed to match a 'medium' growth rate and to have a 95th percentile peak heat release rate of 9 MW. The paper examines recent literature in which researchers have measured the burning characteristics of EVs to assess whether these values need to be changed. These findings are used to assess the risk posed by EVs when compared with ICE vehicles. The paper examines the new design guidance being issued by various organizations across the UK, such as fire and rescue services, insurers, local government bodies and regulators, and discusses the impact these are having on the arrangement of parking bays, particularly in residential and mixed-use buildings. For example, the paper illustrates how updated guidance published by the Fire Protection Association (FPA) on the installation of sprinkler systems has increased the hazard classification of parking buildings, which can have a considerable impact on the feasibility of a building meeting all its design intents when specifying water supply tanks. Guidance on the provision of smoke ventilation systems and structural fire resistance is also presented. The paper points to where further research is needed on the fire safety risks posed by EVs in covered car parks. This will ensure that any guidance is commensurate with the need to provide an adequate level of life and property safety in the built environment.
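
The 'medium' growth rate mentioned above refers to the standard t-squared design fire, Q(t) = αt². Taking the usual medium-growth coefficient α ≈ 0.01172 kW/s² (a standard design value, not a figure from this paper), a short calculation gives the time for a fire to reach the 9 MW design peak.

```python
import math

alpha = 0.01172  # kW/s^2, standard "medium" t-squared growth coefficient
Q_peak = 9000.0  # kW, the 95th percentile peak HRR cited for a car fire

# Q(t) = alpha * t^2  =>  t = sqrt(Q / alpha)
t = math.sqrt(Q_peak / alpha)
print(f"time to reach 9 MW: {t:.0f} s (~{t / 60:.0f} min)")  # ~876 s, ~15 min
```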

Keywords: covered car parks, electric vehicles, fire safety, risk

Procedia PDF Downloads 70
625 Deep Learning Approach for Colorectal Cancer’s Automatic Tumor Grading on Whole Slide Images

Authors: Shenlun Chen, Leonard Wee

Abstract:

Tumor grading is an essential reference for colorectal cancer (CRC) staging and survival prognostication. The widely used World Health Organization (WHO) grading system defines the histological grade of CRC adenocarcinoma based on the density of glandular formation on whole slide images (WSI). Tumors are classified as well-, moderately-, poorly- or un-differentiated depending on the percentage of the tumor that is gland-forming: >95%, 50-95%, 5-50% and <5%, respectively. However, manually grading WSIs is a time-consuming process and can introduce observer error due to subjective judgment and unnoticed regions. Furthermore, pathologists' grading is usually coarse, while a finer and continuous differentiation grade may help to stratify CRC patients better. In this study, a deep learning based automatic differentiation grading algorithm was developed and evaluated by survival analysis. Firstly, a gland segmentation model was developed for segmenting gland structures. Gland regions of WSIs were delineated and used for differentiation annotation. Tumor regions were annotated by experienced pathologists as high-, medium-, low-differentiation and normal tissue, corresponding to tumor with clear, unclear and no gland structure, and non-tumor, respectively. A differentiation prediction model was then developed on these human annotations. Finally, all enrolled WSIs were processed by the gland segmentation model and the differentiation prediction model. The differentiation grade can be calculated from the deep learning models' predictions of tumor regions and tumor differentiation status according to the WHO definitions. If a patient had multiple WSIs, the highest differentiation grade was chosen. Additionally, the differentiation grade was normalized to a scale between 0 and 1. The Cancer Genome Atlas COAD project (TCGA-COAD) was enrolled in this study. For the gland segmentation model, the area under the receiver operating characteristic curve (ROC) reached 0.981 and accuracy reached 0.932 in the validation set. For the differentiation prediction model, ROC reached 0.983, 0.963, 0.963, 0.981 and accuracy reached 0.880, 0.923, 0.668, 0.881 for the low-, medium-, high-differentiation and normal tissue groups in the validation set. Four hundred and one patients were selected after removing WSIs without gland regions and patients without follow-up data. The concordance index reached 0.609. An optimized cut-off point of 51% was found by the 'Maxstat' method, almost the same as the WHO system's cut-off point of 50%. Both the WHO system's cut-off point and the optimized cut-off point performed impressively in Kaplan-Meier curves, and both p values of the log-rank test were below 0.005. In this study, the gland structure of WSIs and the differentiation status of tumor regions were proven to be predictable through deep learning methods. A finer and continuous differentiation grade can also be automatically calculated by the above models. The differentiation grade was proven to stratify CRC patients well in survival analysis, with an optimized cut-off point almost the same as that of the WHO tumor grading system. A tool for automatically calculating the differentiation grade may show potential in the field of therapy decision-making and personalized treatment.
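
The WHO thresholds quoted above map directly to a lookup rule; a minimal Python sketch:

```python
def who_grade(gland_forming_pct: float) -> str:
    """Map the gland-forming percentage of a CRC adenocarcinoma to the
    WHO histological grade, using the thresholds quoted in the abstract."""
    if gland_forming_pct > 95:
        return "well-differentiated"
    if gland_forming_pct >= 50:
        return "moderately-differentiated"
    if gland_forming_pct >= 5:
        return "poorly-differentiated"
    return "undifferentiated"

for pct in (98, 70, 20, 2):  # illustrative percentages
    print(f"{pct}% gland-forming -> {who_grade(pct)}")
```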

Keywords: colorectal cancer, differentiation, survival analysis, tumor grading

Procedia PDF Downloads 133
624 The Effect of the Precursor Powder Size on the Electrical and Sensor Characteristics of Fully Stabilized Zirconia-Based Solid Electrolytes

Authors: Olga Yu Kurapova, Alexander V. Shorokhov, Vladimir G. Konakov

Abstract:

Nowadays, due to their exceptional anion conductivity at high temperatures, cubic zirconia solid solutions stabilized by rare-earth and alkaline-earth metal oxides are widely used as solid electrolyte (SE) materials in different electrochemical devices such as gas sensors, oxygen pumps, solid oxide fuel cells (SOFC), etc. Intensive studies are currently being carried out in the field of novel fully stabilized zirconia based SE development. The use of precursor powders for SE manufacturing allows the microstructure and the electrical and sensor characteristics of zirconia based ceramics used as SEs to be predetermined. Thus, the goal of the present work was to investigate the effect of precursor powder size on the electrical and sensor characteristics of fully stabilized zirconia-based solid electrolytes with compositions of 0.08Y2O3·0.92ZrO2 (YSZ), 0.06Ce2O3·0.06Y2O3·0.88ZrO2 and 0.09Ce2O3·0.06Y2O3·0.85ZrO2. The synthesis of precursor powders with different mean particle sizes was performed by sol-gel synthesis in the form of reversed co-precipitation from aqueous solutions. The cakes were washed to neutral pH and pan-dried at 110 °C. YSZ ceramics were also obtained by conventional solid-state synthesis, including milling in a planetary mill. The powders were cold pressed into pellets 7.2 mm in diameter and ~4 mm thick at P ~16 kg/cm2 and then hydrostatically pressed. The pellets were annealed at 1600 °C for 2 hours. The phase composition of the as-synthesized SEs was investigated by X-ray photoelectron spectroscopy, ESCA (ESCA-5400 spectrometer, PHI), and X-ray diffraction analysis, XRD (Shimadzu XRD-6000). The following galvanic cell was used to investigate the SE sensor properties: O2 (PO2(1)), Pt | SE | Pt, (PO2(2) = 0.21 atm). The value of PO2(1) was set by mixing O2 and N2 in defined proportions with an accuracy of ±5%. The temperature was measured by a Pt/Pt-10% Rh thermocouple, and the cell electromotive force (EMF) was measured with ±0.1 mV accuracy. During operation at constant temperature, reproducibility was better than 5 mV. The asymmetric potential measured for all SEs appeared to be negligible. It was shown that the resistivity of YSZ ceramics decreases by about a factor of two as the mean agglomerate size decreases from 200-250 nm to 40 nm, likely due to decreases in both the surface and bulk resistivity of the grains. Thus, the overall decrease of grain size in ceramic SEs results in a significant decrease of the total ceramic resistivity, allowing sensor operation at lower temperatures. For the manufactured SEs, the oxygen ion transference number (tion) was estimated over the range 600-800 °C. YSZ ceramics manufactured from powders with a mean particle size of 40-140 nm show the highest values, i.e., 0.97-0.98. SEs manufactured from precursors with a mean particle size of 40-140 nm show better sensor characteristics, i.e., EMF dependencies on temperature and oxygen concentration, EMF deviation (ENernst - Ereal), tion and response time, than ceramics manufactured by conventional solid-state synthesis.
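
For an oxygen concentration cell of the type described (O2(PO2(1)), Pt | SE | Pt, O2(PO2(2))), the theoretical EMF follows the Nernst equation, E = (RT/4F)·ln(PO2(1)/PO2(2)). A short Python sketch with illustrative gas compositions (not values from the paper):

```python
import math

R, F = 8.314, 96485.0  # J/(mol K), C/mol

def nernst_emf(p_o2_test: float, p_o2_ref: float, temp_c: float) -> float:
    """Theoretical EMF (V) of an O2(p1), Pt | SE | Pt, O2(p2) cell,
    assuming pure ionic conduction (tion = 1)."""
    T = temp_c + 273.15
    return (R * T) / (4 * F) * math.log(p_o2_test / p_o2_ref)

# Illustrative values: 1% O2 test gas vs. an air reference at 700 degC
print(f"{nernst_emf(0.01, 0.21, 700.0) * 1000:.1f} mV")  # ~ -63.8 mV
```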

Keywords: oxygen sensors, precursor powders, sol-gel synthesis, stabilized zirconia ceramics

Procedia PDF Downloads 277
623 Detailed Sensitive Detection of Impurities in Waste Engine Oils Using Laser Induced Breakdown Spectroscopy, Rotating Disk Electrode Optical Emission Spectroscopy and Surface Plasmon Resonance

Authors: Cherry Dhiman, Ayushi Paliwal, Mohd. Shahid Khan, M. N. Reddy, Vinay Gupta, Monika Tomar

Abstract:

Laser based high-resolution spectroscopic experimental techniques, namely Laser Induced Breakdown Spectroscopy (LIBS), Rotating Disk Electrode Optical Emission Spectroscopy (RDE-OES) and Surface Plasmon Resonance (SPR), have been used for the compositional and degradation analysis of used engine oils. Engine oils are mainly composed of aliphatic and aromatic compounds, and their soot contains hazardous components in the form of fine, coarse and ultrafine particles consisting of wear metal elements. Such coarse particulate matter (PM) and toxic elements are extremely dangerous to human health and can cause respiratory and genetic disorders in humans. Combustible soot from thermal power plants, industry, aircraft, ships and vehicles can lead to environmental and climate destabilization; it contributes to global pollution of land, water and air and to global warming. The detection of such toxicants through elemental analysis is a very serious issue for the waste management of various organic and inorganic hydrocarbons and radioactive waste elements. In view of these points, the current study on used engine oils was performed. The fundamental characterization of the engine oils was conducted by measuring water content and kinematic viscosity, providing a crude analysis of the degradation of the used engine oil samples. Microscopic quantitative and qualitative analysis was provided by the RDE-OES technique, which confirmed the presence of elemental impurities through Pb, Al, Cu, Si, Fe, Cr, Na and Ba lines in the used waste engine oil samples at a few ppm. The presence of these elemental impurities was confirmed by LIBS spectral analysis at various atomic transition levels. The recorded transition lines of Pb confirmed that the maximum degradation was found in used engine oil samples no. 3 and 4. Apart from the basic tests, calculations of the dielectric constants and refractive indices of the engine oils were performed via SPR analysis.

Keywords: surface plasmon resonance, laser-induced breakdown spectroscopy, ICCD spectrometer, engine oil

Procedia PDF Downloads 136
622 Testing of Infill Walls with Joint Reinforcement Subjected to in Plane Lateral Load

Authors: J. Martin Leal-Graciano, Juan J. Pérez-Gavilán, A. Reyes-Salazar, J. H. Castorena, J. L. Rivera-Salas

Abstract:

The experimental results on the global behavior of twelve 1:2-scaled reinforced concrete frames subjected to in-plane lateral load are presented. The main objective was to generate experimental evidence on the use of steel bars within mortar bed-joints as shear reinforcement in infill walls. Similar to the Canadian and New Zealand standards, the Mexican code includes specifications for this type of reinforcement. However, these specifications were obtained through experimental studies of load-bearing walls, mainly confined walls. Little information is found in the existing literature about the effects of joint reinforcement on the seismic behavior of infill masonry walls. Consequently, the Mexican code establishes the same equations to estimate the contribution of joint reinforcement for both confined walls and infill walls. A confined masonry construction and a reinforced concrete frame infilled with masonry walls have similar appearances. However, substantial differences exist between these two construction systems, mainly related to the sequence of construction and to how these structures support vertical and lateral loads. To achieve the objective established, ten reinforced concrete frames with masonry infill walls were built and tested in pairs, with both specimens in each pair having identical characteristics except that one of them included joint reinforcement. The variables between pairs were the type of units, the size of the columns of the frame and the aspect ratio of the wall. All cases included tie-columns and tie-beams on the perimeter of the wall to anchor the joint reinforcement. Two bare frames with characteristics identical to the infilled frames were also tested, the purpose being to investigate the effects of the infill wall on the behavior of the system under in-plane lateral load. In addition, the experimental results were compared with the predictions of the Mexican code. All the specimens were tested in cantilever under reversible cyclic lateral load. To simulate gravity load, a constant vertical load was applied on top of the columns. The results indicate that the contribution of the joint reinforcement to lateral strength depends on the size of the columns of the frame. Larger columns produce a failure mode that is predominantly a sliding mode. Sliding inhibits the formation of new inclined cracks, which are necessary to activate (deform) the joint reinforcement. Regarding the effects of joint reinforcement on the performance of confined masonry walls, many findings were confirmed for infill walls: this type of reinforcement increases the lateral strength of the wall, produces more distributed cracking and reduces the width of the cracks. Moreover, it reduces the ductility demand of the system at maximum strength. The prediction of the lateral strength provided by the Mexican code is appropriate in some cases; however, the effect of the size of the columns on the contribution of joint reinforcement needs to be better understood.

Keywords: experimental study, infill wall, infilled frame, masonry wall

Procedia PDF Downloads 74
621 Thermodynamic Analyses of Information Dissipation along the Passive Dendritic Trees and Active Action Potential

Authors: Bahar Hazal Yalçınkaya, Bayram Yılmaz, Mustafa Özilgen

Abstract:

Brain information transmission in the neuronal network occurs in the form of electrical signals. The neural network transmits information between neurons, or between neurons and target cells, by moving charged particles in a voltage field; a fraction of the energy utilized in this process is dissipated via entropy generation. Exergy loss and entropy generation models demonstrate the inefficiencies of communication along the dendritic trees. In this study, neurons of 4 different animals were analyzed with a one-dimensional cable model with N=6 identical dendritic trees and M=3 orders of symmetrical branching. Each branch bifurcates symmetrically in accordance with the 3/2 power law in an infinitely long cylinder with the usual core conductor assumptions, where membrane potential is conserved in the core conductor at all branching points. In the model, exergy loss and entropy generation rates are calculated for each branch of equivalent cylinders of electrotonic length (L) ranging from 0.1 to 1.5 for four different dendritic branches: the input branch (BI), the sister branch (BS) and two cousin branches (BC-1 and BC-2). Thermodynamic analysis with data coming from two different cat motoneuron studies shows that in both experiments nearly the same amount of exergy is lost while nearly the same amount of entropy is generated. The guinea pig vagal motoneuron loses twofold more exergy compared to the cat models, and the squid exergy loss and entropy generation were nearly tenfold those of the guinea pig vagal motoneuron model. The thermodynamic analysis shows that the energy dissipated in the dendritic trees is directly proportional to the electrotonic length, exergy loss and entropy generation. Entropy generation and exergy loss show variability not only between vertebrates and invertebrates but also within the same class. Concurrently, the single action potential Na+ ion load, metabolic energy utilization and their thermodynamic aspects were assessed for the squid giant axon and mammalian motoneuron models. Energy is supplied to the neurons in the form of adenosine triphosphate (ATP). Exergy destruction and entropy generation upon ATP hydrolysis were calculated. ATP utilization, exergy destruction and entropy generation showed differences in each model depending on the variations in ion transport along the channels.
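
The 3/2 power law invoked in the cable model (Rall's rule) requires that at each branch point the parent diameter satisfies d_p^(3/2) = Σ d_i^(3/2) over the daughter branches. A worked Python sketch for the symmetric bifurcation assumed here (the parent diameter is an illustrative value, not from the paper):

```python
# Worked example of Rall's 3/2 power law used in the cable model above:
# at each branch point, d_parent^(3/2) = sum of d_child^(3/2).
# For a symmetric bifurcation into two equal daughter branches:
d_parent = 2.0                               # um, illustrative value
d_child = (d_parent ** 1.5 / 2) ** (2 / 3)   # solve 2 * d_c^(3/2) = d_p^(3/2)
print(f"daughter diameter: {d_child:.3f} um")  # ~1.260 um (= d_parent / 2^(2/3))
```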

Keywords: ATP utilization, entropy generation, exergy loss, neuronal information transmittance

Procedia PDF Downloads 391
620 Reducing Later Life Loneliness: A Systematic Literature Review of Loneliness Interventions

Authors: Dhruv Sharma, Lynne Blair, Stephen Clune

Abstract:

Later life loneliness is a social issue that is increasing alongside an upward global population trend. As a society, one way that we have responded to this social challenge is through developing non-pharmacological interventions such as befriending services, activity clubs, meet-ups, etc. Through a systematic literature review, this paper suggests that there is currently an underrepresentation of radical innovation, and an underutilization of digital technologies, in developing loneliness interventions for older adults. This paper examines intervention studies that were published in the English language, within peer-reviewed journals, between January 2005 and December 2014, across 4 electronic databases. In addition to academic databases, interventions found in grey literature in the form of websites, blogs, and Twitter were also included in the overall review. This approach yielded 129 interventions that were included in the study. A systematic approach allowed the minimization of any bias dictating the selection of interventions to study. A coding strategy based on a pattern analysis approach was devised to compare and contrast the loneliness interventions. Firstly, interventions were categorized on the basis of their objective to identify whether they were preventative, supportive, or remedial in nature. Secondly, depending on their scope, they were categorized as one-to-one, community-based, or group-based. It was also ascertained whether interventions represented an improvement, an incremental innovation, a major advance or a radical departure in comparison to the most basic form of a loneliness intervention. Finally, interventions were also assessed on the basis of the extent to which they utilized digital technologies. Individual visualizations representing the four levels of coding were created for each intervention, followed by an aggregated visual to facilitate analysis. To keep the inquiry within scope and to present a coherent view of the findings, the analysis was primarily concerned with the level of innovation and the use of digital technologies. This analysis highlights a weak but positive correlation between the level of innovation and the use of digital technologies in designing and deploying loneliness interventions, and also emphasizes how certain existing interventions could be tweaked to enable their migration from incremental to radical innovation, for example. The analysis also points out the value of including grey literature, especially from Twitter, in systematic literature reviews to get a contemporary view of the latest work in the area under investigation.

Keywords: ageing, loneliness, innovation, digital

Procedia PDF Downloads 119
619 Utilising Indigenous Knowledge to Design Dykes in Malawi

Authors: Martin Kleynhans, Margot Soler, Gavin Quibell

Abstract:

Malawi is one of the world's poorest nations and, consequently, the design of flood risk management infrastructure comes with a different set of challenges. There is a lack of good quality hydromet data, both in spatial coverage and in quality, and the challenge in the design of flood risk management infrastructure is compounded by the fact that maintenance is almost completely non-existent and that solutions have to be simple to be effective. Solutions should not require any further resources to remain functional after completion, and they should be resilient. They also have to be cost effective. The Lower Shire Valley of Malawi suffers from frequent flood events. Various flood risk management interventions were designed across the valley during the course of the Shire River Basin Management Project – Phase I, and due to the data-poor environment, indigenous knowledge was relied upon to a great extent for hydrological and hydraulic model calibration and verification. However, indigenous knowledge comes with the caveat that it is 'fuzzy' and that it can be manipulated for political reasons. The experience in the Lower Shire Valley suggests that indigenous knowledge is unlikely to invent a problem where none exists, but that flood depths and extents may be exaggerated to secure prioritization of the intervention. Indigenous knowledge relies on the memory of a community and cannot foresee events that exceed past experience, that could occur differently to those that have occurred in the past, or where flood management interventions change the flow regime. This complicates communication of planned interventions to local inhabitants. Indigenous knowledge is, for the most part, intuitive, but flooding can sometimes be counter-intuitive, and the rural poor may have a lower trust of technology. Due to a near complete lack of maintenance, infrastructure has to be designed with no moving parts and no requirement for energy inputs. This precludes pumps, valves, flap gates and sophisticated warning systems. Dyke designs during this project included 'flood warning spillways', which double up as pedestrian and animal crossing points and warn residents of impending dangerous water levels behind dykes before water levels that could cause a possible dyke failure are reached. Locally available materials and erosion protection using vegetation were used wherever possible to keep costs down.

Keywords: design of dykes in low-income countries, flood warning spillways, indigenous knowledge, Malawi

Procedia PDF Downloads 273
618 Maneuvering Modelling of a One-Degree-of-Freedom Articulated Vehicle: Modeling and Experimental Verification

Authors: Mauricio E. Cruz, Ilse Cervantes, Manuel J. Fabela

Abstract:

The evaluation of the maneuverability of road vehicles is generally carried out through the use of specialized computer programs due to the advantages they offer compared to the experimental method. These programs are based on purely geometric considerations of the characteristics of the vehicles, such as main dimensions, the location of the axles, and points of articulation, without considering parameters such as weight distribution and magnitude, tire properties, etc. In this paper, we address the problem of the maneuverability of a semi-trailer truck navigating urban streets, maneuvering yards, and parking lots, using the Ackermann principle to propose a kinematic model with which, through geometric considerations, it is possible to determine the space necessary to maneuver safely. The model was experimentally validated by conducting maneuverability tests with an articulated vehicle. The measurements were made with a GPS that provides the position, trajectory, and speed of the vehicle, an inertial measurement unit (IMU) that measures the accelerations and angular speeds of the semi-trailer, and an instrumented steering wheel that measures the steering wheel's angle of rotation, its angular velocity and the torque applied to it. To obtain the steering angle of the tires, a parameterization of the complete travel of the steering wheel and its equivalent at the tires was carried out. For the tests, 3 different angles were selected, and 3 turns were made for each angle in both directions of rotation (left and right turn). The results showed that the proposed kinematic model achieved 95% accuracy for speeds below 5 km/h. The experiments revealed that tighter maneuvers significantly increased the space required and that vehicle maneuverability was limited by the size of the semi-trailer. Maneuverability was also tested as a function of the vehicle load, and 3 different load levels were used: light, medium, and heavy. It was found that the internal turning radii also increased with the load, probably due to changes in the tires' adhesion to the pavement, since heavier loads produce larger wheel-road contact surfaces. The load was found to be an important factor affecting the precision of the model (up to 30%), and therefore it should be considered. The model obtained is expected to be used to improve maneuverability through a robust control system.
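
As a hedged illustration of the Ackermann-type geometry underlying such a kinematic model, the Python sketch below computes the tractor turning radius and the steady-state off-tracking of the semi-trailer. The dimensions are illustrative placeholders, not those of the test vehicle, and the kingpin is assumed to sit over the tractor's rear axle.

```python
import math

# Illustrative dimensions (assumptions, not the test vehicle's):
L_tractor = 4.0   # m, tractor wheelbase
L_trailer = 10.0  # m, kingpin-to-trailer-axle distance
delta = math.radians(20.0)  # steering angle of the front tires

# Ackermann geometry: rear-axle turning radius of the tractor
R_tractor = L_tractor / math.tan(delta)
# Steady-state trailer-axle radius (kingpin assumed over the rear axle)
R_trailer = math.sqrt(max(R_tractor**2 - L_trailer**2, 0.0))
off_tracking = R_tractor - R_trailer  # extra width swept toward the inside

print(f"tractor radius : {R_tractor:.2f} m")
print(f"trailer radius : {R_trailer:.2f} m")
print(f"off-tracking   : {off_tracking:.2f} m")
```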

Keywords: articulated vehicle, experimental validation, kinematic model, maneuverability, semi-trailer truck

Procedia PDF Downloads 114
617 Air Handling Units Power Consumption Using Generalized Additive Model for Anomaly Detection: A Case Study in a Singapore Campus

Authors: Ju Peng Poh, Jun Yu Charles Lee, Jonathan Chew Hoe Khoo

Abstract:

The emergence of digital twin technology, a digital replica of the physical world, has improved real-time access to data from sensors about the performance of buildings. This digital transformation has opened up many opportunities to improve the management of buildings by using the data collected to help monitor consumption patterns and energy leakages. One example is the integration of predictive models for anomaly detection. In this paper, we use the GAM (Generalised Additive Model) for anomaly detection in the power consumption pattern of Air Handling Units (AHU). There is ample research on the use of GAM for the prediction of power consumption at the office-building and nation-wide levels. However, there is limited illustration of its anomaly detection capabilities, of prescriptive analytics case studies, and of its integration with the latest developments in digital twin technology. In this paper, we applied the general GAM modelling framework to the historical data of the AHU power consumption and cooling load of a building, between Jan 2018 and Aug 2019, from an education campus in Singapore, to train prediction models that, in turn, yield predicted values and ranges. The historical data are seamlessly extracted from the digital twin for modelling purposes. We enhanced the utility of the GAM model by using it to power a real-time anomaly detection system based on the forward predicted ranges. The magnitude of deviation from the upper and lower bounds of the uncertainty intervals is used to inform and identify anomalous data points, all based on historical data, without explicit intervention from domain experts. Notwithstanding, the domain expert fits in through an optional feedback loop through which iterative data cleansing is performed. After an anomalously high or low level of power consumption is detected, a set of rule-based conditions is evaluated in real-time to help determine the next course of action for the facilities manager. The performance of GAM is then compared with other approaches to evaluate its effectiveness. Lastly, we discuss the successful deployment of this approach for the detection of anomalous power consumption patterns, illustrated with real-world use cases.
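
A minimal sketch of the GAM-based anomaly flagging described above, assuming the pyGAM library and synthetic data standing in for AHU power versus cooling load; the paper's actual model structure and features may differ.

```python
# Hedged sketch of GAM-based anomaly flagging, assuming the pyGAM library.
# The data here is synthetic, standing in for AHU power vs. cooling load.
import numpy as np
from pygam import LinearGAM, s

rng = np.random.default_rng(1)
cooling_load = rng.uniform(0, 100, 500)                  # kW, synthetic
power = 5 + 0.8 * cooling_load + rng.normal(0, 3, 500)   # kW, synthetic

gam = LinearGAM(s(0)).fit(cooling_load[:, None], power)
lower, upper = gam.prediction_intervals(cooling_load[:, None], width=0.95).T

# Flag observations outside the 95% prediction interval as anomalies;
# the deviation magnitude can then drive rule-based follow-up actions.
anomalous = (power < lower) | (power > upper)
print(f"{anomalous.sum()} of {len(power)} points flagged")
```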

Keywords: anomaly detection, digital twin, generalised additive model, GAM, power consumption, supervised learning

Procedia PDF Downloads 149
616 The Situation in Afghanistan as a Step Forward in Putting an End to Impunity

Authors: Jelena Radmanovic

Abstract:

On 5 March 2020, the International Criminal Court decided to authorize the investigation into the crimes allegedly committed on the territory of Afghanistan after 1 May 2003. This determination has raised several controversies, including the recently imposed sanctions by the United States, furthering the United States' long-standing rejection of the authority of the International Criminal Court. The purpose of this research is to address the said investigation in light of its importance for the prevention of impunity in cases where the perpetrators are nationals of Non-Party States to the Rome Statute. The difficulties that the International Criminal Court has been facing concerning the establishment of its jurisdiction in instances where an involved state is not a Party to the Rome Statute have become the most significant stumbling block undermining the importance, integrity, and influence of the Court. The Situation in Afghanistan raises even further concern, bearing in mind that the Prosecutor's Request for authorization of an investigation pursuant to Article 15, of 20 November 2017, was initially rejected with the 'interests of justice' as the applied rationale. The first method used for the present research is the description of the actual events surrounding the aforementioned decisions and the subsequent reactions in the international community, while with the second method – the method of conceptual analysis – the research addresses the decisions pertaining to the International Criminal Court's jurisdiction and attempts to present the Decision of 5 March 2020 as an example of good practice and a precedent that should be followed in all similar situations. The research parses the reasons used by the International Criminal Court, giving greater attention to the latter decision that authorized the investigation and to the points raised by the officials of the United States. It is a finding of this research that the International Criminal Court, together with other similar judicial instances (the Nuremberg and Tokyo Tribunals, the International Criminal Tribunal for the former Yugoslavia, the International Criminal Tribunal for Rwanda), has presented the world with the possibility of non-impunity, attempting to prosecute those responsible for the gravest of crimes known to humanity and showing that such persons should not enjoy the benefits of their immunities, with its focus primarily on the victims of such crimes. Whilst this issue will most certainly be addressed further in the future, with the situations that will be brought before the International Criminal Court, the present research attempts to point to the significance of the Situation in Afghanistan, of the International Criminal Court as such, and of international criminal justice as a whole, for the purpose of putting an end to impunity.

Keywords: Afghanistan, impunity, international criminal court, sanctions, United States

Procedia PDF Downloads 121
615 The Mapping of Pastoral Area as a Basis of Ecological for Beef Cattle in Pinrang Regency, South Sulawesi, Indonesia

Authors: Jasmal A. Syamsu, Muhammad Yusuf, Hikmah M. Ali, Mawardi A. Asja, Zulkharnaim

Abstract:

This study was conducted to identify and map pasture as an ecological base for beef cattle. A survey was carried out during the period April to June 2016 in Suppa, Mattirobulu, Pinrang district, South Sulawesi province. The mapping of the grazing area was conducted in several stages: inputting and tracking of data points into Google Earth Pro (version 7.1.4.1529); affirmation and confirmation of the tracking lines visualized by satellite, with various records at certain points; input of point and tracking data into the ArcMap application (ArcGIS version 10.1); processing of DEM/SRTM data (S04E119) with respect to the location of the grazing areas; creation of a contour map (5 m interval); and mapping of the land slope and land cover. The analysis of land cover, particularly the state of the vegetation, was done through the Normalized Difference Vegetation Index (NDVI) procedure, performed using Landsat-8 imagery. The results showed that the topography of the grazing areas consists of hills and some sloping and flat surfaces, with elevations varying from 74 to 145 m above sea level (asl), while the requirement for growing superior grasses and legumes is an altitude of up to 143-159 m asl. Slopes varied between 0 and >40% and were dominated by slopes of 0-15%, consistent with the maximum slope of 15% suitable for pasture. The NDVI values from the pasture image analysis ranged between 0.1 and 0.27, placing the vegetation cover of the pasture land in the low vegetation density category. 70% of the land was used for cattle grazing, while the remaining approximately 30% consisted of groves and forest, including water plants, providing shelter for the cattle during the heat and a drinking water supply. Seven types of graminae and five types of legume were dominant in the region. Proportionally, the graminae class dominated at 75.6% and legumes at 22.1%, with the remaining 2.3% consisting of other plants and trees growing in the region. The dominant weed species in the region were Chromolaena odorata and Lantana camara; besides these, there were six types of ground plants not included as forage fodder.
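
NDVI is computed from the red and near-infrared reflectances, NDVI = (NIR − Red)/(NIR + Red); for Landsat-8 OLI these are bands 4 and 5. A minimal NumPy sketch with illustrative reflectance values (not the study's data):

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """NDVI = (NIR - Red) / (NIR + Red). For Landsat-8 OLI, NIR is band 5
    and Red is band 4. Values around 0.1-0.27, as reported here, indicate
    low-density vegetation."""
    nir, red = nir.astype(float), red.astype(float)
    return (nir - red) / np.maximum(nir + red, 1e-9)  # guard divide-by-zero

# Illustrative 2x2 reflectance patches (not actual study data):
nir = np.array([[0.30, 0.35], [0.28, 0.40]])
red = np.array([[0.22, 0.20], [0.21, 0.18]])
print(ndvi(nir, red))  # values fall near the 0.1-0.27 range reported
```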

Keywords: pastoral, ecology, mapping, beef cattle

Procedia PDF Downloads 349
614 In vitro Establishment and Characterization of Oral Squamous Cell Carcinoma Derived Cancer Stem-Like Cells

Authors: Varsha Salian, Shama Rao, N. Narendra, B. Mohana Kumar

Abstract:

Evolving evidence proposes the existence of a highly tumorigenic subpopulation of undifferentiated, self-renewing cancer stem cells, responsible for resistance to conventional anti-cancer therapy, recurrence, metastasis and heterogeneous tumor formation. Importantly, the mechanisms exploited by cancer stem cells to resist chemotherapy are poorly understood. Oral squamous cell carcinoma (OSCC) is one of the most frequently diagnosed cancer types in India and is commonly associated with alcohol and tobacco use. Therefore, the isolation and in vitro characterization of cancer stem-like cells from patients with OSCC is a critical step in advancing the understanding of chemoresistance processes and in designing therapeutic strategies. With this, the present study aimed to establish and characterize cancer stem-like cells in vitro from OSCC. Primary cultures of cancer stem-like cell lines were established from tissue biopsies of patients with clinical evidence of an ulceroproliferative lesion and histopathological confirmation of OSCC. The viability of cells, assessed by trypan blue exclusion assay, was more than 95% at passage 1 (P1), P2 and P3. The replication rate was determined by plating cells in a 12-well plate and counting them at various time points of culture. Cells had marked proliferative activity, and the average doubling time was less than 20 hrs. After being cultured for 10 to 14 days, cancer stem-like cells gradually aggregated and formed sphere-like bodies. More spheroid bodies were observed when cultured in DMEM/F-12 under low serum conditions. Interestingly, cells with higher proliferative activity had a tendency to form more sphere-like bodies. The expression of specific markers, including membrane proteins and cell enzymes such as CD24, CD29, CD44, CD133, and aldehyde dehydrogenase 1 (ALDH1), is being explored for further characterization of the cancer stem-like cells. To summarize the findings, the establishment of OSCC-derived cancer stem-like cells may provide scope for better understanding the causes of recurrence and metastasis in oral epithelial malignancies. In particular, identification and characterization studies of cancer stem-like cells in the Indian population seem to be lacking, prompting the need for such studies in a population where alcohol consumption and tobacco chewing are major risk habits.
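
The doubling time quoted above follows from exponential growth, Td = t·ln 2 / ln(Nt/N0); a worked Python sketch with illustrative counts (not the study's data):

```python
import math

def doubling_time(n0: float, nt: float, hours: float) -> float:
    """Population doubling time Td = t * ln(2) / ln(Nt / N0),
    assuming exponential growth between the two cell counts."""
    return hours * math.log(2) / math.log(nt / n0)

# Illustrative counts: 1e4 cells growing to 6e4 over 48 h of culture
print(f"Td = {doubling_time(1e4, 6e4, 48):.1f} h")  # ~18.6 h, i.e. < 20 h
```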

Keywords: cancer stem-like cells, characterization, in vitro, oral squamous cell carcinoma

Procedia PDF Downloads 217
613 R&D Diffusion and Productivity in a Globalized World: Country Capabilities in an MRIO Framework

Authors: S. Jimenez, R. Duarte, J. Sanchez-Choliz, I. Villanua

Abstract:

There is a certain consensus in the economic literature about the factors that have influenced the historical differences in growth rates observed between developed and developing countries. However, it is less clear which elements have marked the different growth paths of developed economies in recent decades. R&D has always been seen as one of the major sources of technological progress and of productivity growth, which is directly influenced by technological developments. Following recent literature, 'innovation pushes the technological frontier forward' and also encourages future innovation through the creation of externalities. In other words, the productivity benefits of innovation are not fully appropriated by innovators; they also spread through the rest of the economy, fostering absorptive capacities, which have become especially important in a context of increasing fragmentation of production. This paper aims to contribute to this literature in two ways: first, by exploring alternative indexes of R&D flows embodied in inter-country, inter-sectoral flows of goods and services (as an approximation to technology spillovers) that capture the structural and technological characteristics of countries; and second, by analyzing the impact of direct and embodied R&D on the evolution of labor productivity at the country/sector level in recent decades. The traditional calculation through a multiregional input-output framework assumes that all countries have the same capability to absorb technology, but this is not the case: each country has different structural features and, as part of the literature claims, different capabilities. To capture these differences, we propose weights based on specialization-structure indexes: one related to the specialization of countries in high-tech sectors, and another based on a dispersion index. We propose these two measures because country capabilities can be captured in different ways: through specialization in knowledge-intensive sectors, such as Chemicals or Electrical Equipment, or through an intermediate technology effort spread across different sectors. Results suggest the increasing importance of country capabilities as trade openness increases. Moreover, looking at the country rankings, with high-tech-weighted embodied R&D countries such as China, Taiwan, and Germany rise into the top five despite not having the highest R&D expenditure intensities, underlining the importance of country capabilities. Additionally, through a fixed-effects panel data model we show that embodied R&D is indeed important in explaining labor productivity increases, even more so than direct R&D investment, reflecting that globalization matters more than has so far been acknowledged. It is true that most related analyses consider the effect of t-1 direct R&D intensity on economic growth. Nevertheless, in our view R&D evolves as a delayed flow, and some time must pass before its effects on the economy become visible, as some authors have already claimed. Our estimations tend to corroborate this hypothesis, yielding a lag of 4-5 years.
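As a minimal numerical sketch of the embodied-R&D accounting that underlies this kind of analysis (not the paper's own weighting scheme), direct R&D intensities are propagated through inter-sectoral flows via the Leontief inverse; all coefficients and the capability weights below are hypothetical:

```python
import numpy as np

# Toy 2-region x 2-sector MRIO example: embodied R&D is propagated
# through the Leontief inverse L = (I - A)^(-1)
A = np.array([[0.10, 0.05, 0.02, 0.01],
              [0.03, 0.12, 0.04, 0.02],
              [0.02, 0.01, 0.15, 0.06],
              [0.01, 0.03, 0.05, 0.10]])      # technical coefficients
y = np.array([100.0, 80.0, 120.0, 90.0])      # final demand by sector
r = np.array([0.04, 0.01, 0.03, 0.02])        # direct R&D per unit of output

L = np.linalg.inv(np.eye(4) - A)              # Leontief inverse
multipliers = r @ L                           # direct + indirect R&D per unit
embodied = multipliers * y                    # R&D embodied in final demand

# The paper weights flows by country capability; a hypothetical scalar
# weight per region (first/last two sectors) stands in here
w = np.array([1.2, 1.2, 0.8, 0.8])
print(embodied.round(2), (w * embodied).round(2))
```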

Keywords: economic growth, embodied, input-output, technology

Procedia PDF Downloads 120
612 Estimating the Ladder Angle and the Camera Position From a 2D Photograph Based on Applications of Projective Geometry and Matrix Analysis

Authors: Inigo Beckett

Abstract:

In forensic investigations, it is often the case that the most potentially useful recorded evidence derives from coincidental imagery, recorded immediately before or during an incident, and that during the incident (e.g., a 'failure' or fire event) the evidence is changed or destroyed. For an image analysis expert involved in photogrammetric analysis for civil or criminal proceedings, traditional computer vision methods involving calibrated cameras are often not appropriate, because image metadata cannot be relied upon. This paper presents an approach for resolving this problem, considering in particular, by way of a case study, the angle of a simple ladder shown in a photograph. The UK Health and Safety Executive (HSE) guidance document published in 2014 (INDG455) advises that a leaning ladder should be erected at 75 degrees to the horizontal. Personal injury cases can arise in the construction industry when a ladder is erected too steep or too shallow, and ad-hoc photographs of such ladders in their incident position provide a basis for analysis of their angle. This paper presents a direct approach for ascertaining the position of the camera and the angle of the ladder simultaneously from the photograph(s), by way of a workflow that encompasses a novel application of projective geometry and matrix analysis. Mathematical analysis shows that, for a given pixel ratio of directly measured collinear points (i.e., features that lie on the same line segment) in the 2D digital photograph with respect to a given viewing point, the 3D camera position is constrained to the surface of a sphere in the scene. Depending on what is known about the ladder, a further independent constraint can be enforced, narrowing the possible camera positions even more. Experiments were conducted using synthetic and real-world data. The synthetic data modeled a ladder standing on a horizontal floor plane and resting against a vertical wall. The real-world data were captured using an Apple iPhone 13 Pro together with 3D laser scan survey data, with a ladder placed at a known location and angle to the vertical. For each case, we calculated the camera positions and ladder angles using this method and cross-compared them against their respective 'true' values.
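The full sphere-constraint workflow above is beyond a short sketch, but the planar core of the problem can be illustrated: assuming four reference points with known metric coordinates lying in the same vertical plane as the ladder, a plane-to-image homography recovers the ladder's endpoints in that plane and hence its angle. All coordinates below are hypothetical; OpenCV supplies the homography estimation:

```python
import numpy as np
import cv2  # OpenCV

# Hypothetical pixel coordinates of four reference points with known metric
# positions (metres) in the same vertical plane as the ladder
img_pts   = np.array([[420, 910], [1530, 905], [1510, 180], [450, 190]],
                     dtype=np.float32)
plane_pts = np.array([[0.0, 0.0], [3.0, 0.0], [3.0, 2.4], [0.0, 2.4]],
                     dtype=np.float32)

H, _ = cv2.findHomography(img_pts, plane_pts)

def to_plane(p):
    """Map an image pixel into the ladder's plane via the homography."""
    v = H @ np.array([p[0], p[1], 1.0])
    return v[:2] / v[2]

# Hypothetical imaged endpoints of the ladder (foot and top)
foot, top = to_plane((500, 900)), to_plane((620, 250))
dx, dy = top - foot
angle = np.degrees(np.arctan2(abs(dy), abs(dx)))
print(f"Ladder angle to the horizontal: {angle:.1f} deg (HSE advises 75 deg)")
```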

Keywords: image analysis, projective geometry, homography, photogrammetry, ladders, forensics, mathematical modeling, planar geometry, matrix analysis, collinear points, cameras, photographs

Procedia PDF Downloads 45
611 Laminar Periodic Vortex Shedding over a Square Cylinder in Pseudoplastic Fluid Flow

Authors: Shubham Kumar, Chaitanya Goswami, Sudipto Sarkar

Abstract:

Pseudoplastic fluid flow (n < 1, n being the power-law index) is found in the food, pharmaceutical, and process industries and has a very complex flow nature. To our knowledge, little research has been done on this kind of flow, even at very low Reynolds numbers. In the present computation, we consider unsteady laminar flow over a square cylinder in a pseudoplastic flow environment. For Newtonian fluids, the laminar vortex-shedding range lies between Re = 47 and 180. Here we consider Re = 100 (Re = U∞a/ν, where U∞ is the free-stream velocity, a is the side of the cylinder, and ν is the kinematic viscosity of the fluid). The power-law index ranges from close to Newtonian (n = 0.8) to very high pseudoplasticity (n = 0.1). The flow domain was constructed using Gambit 2.2.30, which was also used to generate the mesh and impose the boundary conditions. In all cases, the domain size is 36a × 16a with 280 × 192 grid points in the streamwise and flow-normal directions, respectively. The domain and grid resolution were selected after a thorough grid-independence study at n = 1.0. Fine, uniform grid spacing is used close to the square cylinder to capture the upper and lower shear layers shed from the cylinder; away from the cylinder the grid is non-uniform and stretched in all directions. A velocity inlet (u = U∞), a pressure outlet (Neumann condition), and symmetry conditions (free-slip, du/dy = 0, v = 0) at the upper and lower domain boundaries are used in the simulation, with a no-slip wall (u = v = 0) on the square cylinder surface. The fully conservative 2-D unsteady Navier-Stokes equations are discretized and solved with Ansys Fluent 14.5. The SIMPLE algorithm, implemented in the finite-volume framework and the default pressure-velocity coupling scheme in Fluent, is selected for this purpose. The results obtained for Newtonian fluid flow agree well with previous work, supporting Fluent's usefulness in academic research. A detailed analysis of the instantaneous and time-averaged flow fields is presented for both Newtonian and pseudoplastic flows. The drag coefficient is observed to increase continuously as n is reduced, and the vortex-shedding behaviour changes at n = 0.4 due to flow instability. These are some of the notable findings for the laminar periodic vortex-shedding regime in a pseudoplastic flow environment.
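The pseudoplastic behaviour studied here is conventionally described by the Ostwald-de Waele power-law model, μ_app = K·γ̇^(n−1); a minimal sketch (the consistency index K and the shear rates are illustrative) shows how apparent viscosity falls with shear rate as n decreases:

```python
import numpy as np

def apparent_viscosity(gamma_dot, K=1.0, n=0.5):
    """Ostwald-de Waele power-law model: mu_app = K * gamma_dot**(n - 1).
    n < 1 gives shear-thinning (pseudoplastic) behaviour."""
    return K * np.power(gamma_dot, n - 1.0)

shear_rates = np.array([0.1, 1.0, 10.0, 100.0])  # 1/s, illustrative
for n in (0.8, 0.4, 0.1):   # the power-law indices studied above
    print(n, apparent_viscosity(shear_rates, K=1.0, n=n).round(3))
```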

Keywords: Ansys Fluent, CFD, periodic vortex shedding, pseudoplastic fluid flow

Procedia PDF Downloads 193
610 Metaphysics of the Unified Field of the Universe

Authors: Santosh Kaware, Dnyandeo Patil, Moninder Modgil, Hemant Bhoir, Debendra Behera

Abstract:

The unified field theory has been an area of intensive research for many decades. This paper focuses on the philosophy and metaphysics of unified field theory at the Planck scale, and on its relationship with superstring theory and quantum vacuum dynamics. We examine the epistemology of questions such as: (1) What is the unified field of the universe? (2) Can it (a) permeate the complete universe, (b) be localized in bound regions of the universe, (c) extend into the extra dimensions, or (d) live only in extra dimensions? (3) What should be the emergent ontological properties of the unified field? (4) How does the universe manifest through its quantum vacuum energies? (5) How is the space-time metric coupled to the unified field? We present a number of ansätze, outlined below. It is proposed that the unified field possesses consciousness as well as a memory, a recording of past history, analogous to the 'consistent histories' interpretation of quantum mechanics. We propose a Planck-scale geometry for the unified field with a circle-like topology and 32 energy points on its periphery, connected to each other by 10-dimensional meta-strings, which are the sources from which the fundamental forces and particles of the universe manifest through its quantum vacuum energies. It is also proposed that the sub-energy levels of the 'conscious unified field' drive the creation, preservation, and rejuvenation of the universe over time by means of negentropy. These epochs can apply to the complete universe or to localized regions such as galaxies or clusters of galaxies. It is proposed that the unified field operates through geometric patterns of its quantum vacuum energies, manifesting as various elementary particles by giving spin to zero-point energy elements. The epistemological relationship between unified field theory and superstring theories is examined. The properties of 'consciousness' and 'memory' cascade from the universe into macroscopic objects, and further onto the elementary particles, via a fractal pattern; other properties of fundamental particles, such as mass, charge, spin, and isospin, also flow from such a cascade. The manifestations of the unified field can reach into parallel universes or the 'multiverse' and essentially have an existence independent of space-time. It is proposed that the mass, length, and time scales of the unified theory lie below even the Planck scale, at a level we call that of 'super quantum gravity (SQG)'.

Keywords: super string theory, Planck scale geometry, negentropy, super quantum gravity

Procedia PDF Downloads 272
609 Doing Durable Organisational Identity Work in the Transforming World of Work: Meeting the Challenge of Different Workplace Strategies

Authors: Theo Heyns Veldsman, Dieter Veldsman

Abstract:

Organisational Identity (OI) refers to who and what the organisation is, what it stands for and does, and what it aspires to become. OI explores the perspectives of how we see ourselves, are seen by others, and aspire to be seen, and provides the rationale, the 'why', for the organisation's continued existence. The most widely accepted differentiating features of OI are encapsulated in the organisation's core, distinctive, differentiating, and enduring attributes. OI finds its concrete expression in the organisation's Purpose, Vision, Strategy, Core Ideology, and Legacy. In the emerging new order infused by hyper-turbulence and hyper-fluidity, the VICCAS world, OI provides a secure anchor and steady reference point for the organisation; this is reflected particularly in the growing widespread focus on Purpose, which is indicative of the organisation's sense of social citizenship. However, the transforming world of work (TWOW), particularly the potent mix of ongoing disruptive innovation, the 4th Industrial Revolution, and the gig economy with the totally unpredicted COVID-19 pandemic, has resulted in the adoption of different workplace strategies by organisations in terms of how, where, and when work takes place. Different employment relations (transient to permanent), work locations (on-site to remote), work time arrangements (full-time at work to flexible work schedules), and technology enablement (face-to-face to virtual) now form the basis of the employer/employee relationship. These different workplace strategies, fueled by the demands of the TWOW, pose a substantive challenge to organisations in doing durable OI work that fulfils OI's critical attributes of being core, distinctive, differentiating, and enduring. OI work is contained in the ongoing, reciprocally interdependent stages of sense-breaking, sense-giving, internalisation, enactment, and affirmation. The objective of our paper is to explore how to do durable OI work relative to different workplace strategies in the TWOW. Using a conceptual-theoretical approach from a practice-based orientation, the paper distinguishes different workplace strategies based on a time/place continuum; explicates, stage by stage, the differential organisational content and process consequences of these strategies for durable OI work; indicates the critical success factors of durable OI work under these differential conditions; recommends guidelines for OI work relative to the TWOW; and points out the ethical implications of the above.

Keywords: organisational identity, workplace strategies, new world of work, durable organisational identity work

Procedia PDF Downloads 194
608 Cytokine Profiling in Cultured Endometrial Cells after Hormonal Treatment

Authors: Mark Gavriel, Ariel J. Jaffa, Dan Grisaru, David Elad

Abstract:

The human endometrium-myometrium interface (EMI) is the uterine inner barrier, which has no separating layer. It is composed of endometrial epithelial cells (EEC) and endometrial stromal cells (ESC) in the endometrium and myometrial smooth muscle cells (MSMC) in the myometrium. The EMI undergoes structural remodeling during the menstrual cycle that is essential for human reproduction. Recently, we co-cultured a layer-by-layer in vitro model of EEC, ESC, and MSMC on a synthetic membrane for mechanobiology experiments, and treated the model with progesterone and β-estradiol to mimic the in vivo receptive uterus. In the present study, we analyzed the cytokine profile of a single EEC layer in the hormonally treated in vitro model of the EMI. The methodology relies on simple tissue engineering. First, we cultured commercial EEC (RL95-2, ATCC® CRL-1671™) in a 24-well plate. We then applied a hormonal stimulation protocol with 17-β-estradiol and progesterone at time-dependent concentrations that mimic the human menstrual cycle. Cell supernatant samples were collected at the control, pre-ovulation, ovulation, and post-ovulation periods for analysis of the secreted proteins and cytokines. Cytokine profiling was performed using the Proteome Profiler Human XL Cytokine Array Kit (R&D Systems, Inc., USA), which can detect 105 human soluble cytokines; relative quantification of all cytokines will be analyzed using xMAP (Luminex). We conducted a fishing expedition with the four Proteome Profiler membranes: we processed the images, quantified the spot intensities, and normalized these values by the negative-control and reference spots on each membrane. Analysis of the relative quantities that changed by more than 5% with respect to the kit's control points clearly showed significant changes in cytokine levels along the inflammation and angiogenesis pathways. Analysis of tissue-engineered models of the uterine wall will enable deeper investigation of the molecular and biomechanical aspects of early reproductive stages (e.g., the window of implantation) and of the development of pathologies.
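A minimal sketch of the spot-quantification step described above, normalizing intensities by the membrane's reference and negative-control spots and flagging changes above the 5% threshold; all intensities and cytokine values below are hypothetical:

```python
import numpy as np

# Hypothetical raw spot intensities for a handful of cytokines
raw = {"IL-6": 1820.0, "IL-8": 2410.0, "VEGF": 990.0, "MCP-1": 1540.0}
reference = 2000.0        # mean of the membrane's reference spots
negative = 150.0          # mean of the negative-control spots

def normalise(value):
    """Subtract the negative control, scale by the reference spots."""
    return (value - negative) / (reference - negative)

# Hypothetical normalized control-membrane values for the same cytokines
control = {"IL-6": 0.70, "IL-8": 1.05, "VEGF": 0.38, "MCP-1": 0.80}
for name, v in raw.items():
    rel = normalise(v) / control[name] - 1.0   # relative change vs control
    flag = "CHANGED" if abs(rel) > 0.05 else "-"   # the 5% threshold above
    print(f"{name:6s} {rel:+.1%} {flag}")
```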

Keywords: tissue-engineering, hormonal stimuli, reproduction, multi-layer uterine model, progesterone, β-estradiol, receptive uterine model, fertility

Procedia PDF Downloads 127
607 High Altitude Glacier Surface Mapping in Dhauliganga Basin of Himalayan Environment Using Remote Sensing Technique

Authors: Aayushi Pandey, Manoj Kumar Pandey, Ashutosh Tiwari, Kireet Kumar

Abstract:

Glaciers play an important role in climate change and are sensitive indicators in the global climate change scenario. Glaciers in the Himalayas are unique, as they are predominantly of the valley type and are located in tropical, high-altitude regions. These glaciers are often covered with debris, which greatly affects their ablation rate and acts as a sensitive indicator of glacier health. The aim of this study is to map a high-altitude glacier surface, with a focus on glacial lake and debris estimation, using different techniques on the Nagling glacier of the Dhauliganga basin in the Himalayan region. Different image classification techniques, i.e., thresholding on different band ratios and supervised classification using the maximum likelihood classifier (MLC), were applied to high-resolution Sentinel-2A Level-1C satellite imagery of 14 October 2017. The Near Infrared (NIR)/Shortwave Infrared (SWIR) ratio image was used to extract the glaciated classes (snow, ice, and ice-mixed debris, IMD) from the non-glaciated terrain classes; the SWIR/Blue ratio image was used to map valley rock and debris, while the Green/NIR ratio image was found most suitable for mapping the glacial lake. Accuracy assessment was performed against high-resolution (3 m) PlanetScope imagery using 60 stratified random points. The overall accuracy of the MLC was 85%, while that of the band ratios was 96.66%. According to the band-ratio technique, the total areal extent of the glaciated classes (snow, ice, IMD) in Nagling glacier was 10.70 km², nearly 38.07% of the study area, comprising 30.87% snow-covered area, 3.93% ice, and 3.27% IMD. Non-glaciated classes (vegetation, glacial lake, debris, and valley rock) covered 61.93% of the total area, of which valley rock was dominant with 33.83% coverage, followed by debris covering 27.7%. The glacial lake and debris were accurately mapped using the band-ratio technique. Hence, the band-ratio approach appears to be useful for mapping debris-covered glaciers in the Himalayan region.
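A minimal sketch of the band-ratio thresholding described above; the threshold values and reflectance patches are hypothetical (for Sentinel-2, NIR, SWIR, Blue, and Green correspond to bands B8, B11, B2, and B3):

```python
import numpy as np

def band_ratio_classify(nir, swir, blue, green,
                        t_glacier=2.0, t_lake=1.1, t_rock=1.5):
    """Threshold the three band ratios described above (hypothetical
    thresholds). Returns 0 = other, 1 = snow/ice/IMD, 2 = glacial lake,
    3 = valley rock / debris."""
    eps = 1e-6
    classes = np.zeros(nir.shape, dtype=np.uint8)
    classes[(swir / (blue + eps)) > t_rock] = 3     # SWIR/Blue: rock, debris
    classes[(nir / (swir + eps)) > t_glacier] = 1   # NIR/SWIR: glaciated
    classes[(green / (nir + eps)) > t_lake] = 2     # Green/NIR: glacial lake
    return classes

# Tiny synthetic reflectance patches standing in for Sentinel-2 bands
nir   = np.array([[0.60, 0.10, 0.15]])
swir  = np.array([[0.10, 0.30, 0.40]])
blue  = np.array([[0.50, 0.20, 0.20]])
green = np.array([[0.40, 0.30, 0.10]])
print(band_ratio_classify(nir, swir, blue, green))  # -> [[1 2 3]]
```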

Keywords: band ratio, Dhauliganga basin, glacier mapping, Himalayan region, maximum likelihood classifier (MLC), Sentinel-2 satellite image

Procedia PDF Downloads 223
606 Gut Microbiota in Patients with Opioid Use Disorder: A 12-week Follow up Study

Authors: Sheng-Yu Lee

Abstract:

Aim: Opioid use disorder is characterized by repetitive drug-seeking and drug-taking behaviors with severe public health consequences. Animal models have shown that opioid-induced perturbations in the gut microbiota causally relate to neuroinflammation, deficits in reward responding, and opioid tolerance. We therefore propose that dysbiosis of the gut microbiota may be associated with the pathogenesis of opioid dependence. In the current study, we explored the differences in gut microbiota between patients and normal controls, and within patients before and after 12 weeks of a methadone treatment program. Methods: Patients with opioid use disorder aged between 20 and 65 years were recruited from the methadone maintenance outpatient clinics of two medical centers in southern Taiwan. Healthy controls without any family history of major psychiatric disorders (schizophrenia, bipolar disorder, and major depressive disorder) were recruited from the community. After initial screening, 15 patients with opioid use disorder joined the study for initial evaluation (Week 0); 12 of them completed the 12-week follow-up while receiving methadone treatment and ceasing heroin use (Week 12). Fecal samples were collected from the patients at baseline and at the end of the 12th week; a one-time fecal sample was collected from the healthy controls. The microbiota of the fecal samples was investigated using 16S rRNA V3-V4 amplicon sequencing, followed by bioinformatic and statistical analyses. Results: We found no significant differences in species diversity in opioid-dependent patients between Week 0 and Week 12, nor between patients at either time point and controls. For beta diversity, principal component analysis showed no significant differences between patients at Week 0 and Week 12; however, both patient groups differed significantly from controls (P = 0.011). Furthermore, linear discriminant analysis effect size (LEfSe) was used to identify differentially enriched bacteria between opioid-use patients and healthy controls. Compared to controls, the relative abundances of Lactobacillus (family Lactobacillaceae), Megasphaera hexanoica, and Caecibacter massiliensis were increased in patients at Week 0, while the family Atopobiaceae (order Coriobacteriales), Acidaminococcus intestini, and Tractidigestivibacter scatoligenes were increased in patients at Week 12. Conclusion: We suggest that the gut microbiome community may be linked to opioid use disorder, and that such differences may persist even after 12 weeks of cessation of opioid use.
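For illustration, alpha (species) diversity of the kind compared above is commonly summarized by the Shannon index H' = -Σ p_i ln p_i over taxon relative abundances; a minimal sketch with hypothetical read counts:

```python
import numpy as np

def shannon_index(counts):
    """Shannon diversity H' = -sum(p_i * ln p_i) over observed taxa."""
    counts = np.asarray(counts, dtype=float)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log(p)).sum())

# Hypothetical 16S read counts per taxon for one patient sample
week0  = [120, 80, 40, 30, 10]
week12 = [150, 60, 50, 25, 15]
print(shannon_index(week0), shannon_index(week12))
```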

Keywords: opioid use disorder, gut microbiota, methadone treatment, follow up study

Procedia PDF Downloads 104
605 Temperature Dependence of the Optoelectronic Properties of InAs(Sb)-Based LED Heterostructures

Authors: Antonina Semakova, Karim Mynbaev, Nikolai Bazhenov, Anton Chernyaev, Sergei Kizhaev, Nikolai Stoyanov

Abstract:

At present, heterostructures are used for the fabrication of almost all types of optoelectronic devices. Our research focuses on the optoelectronic properties of InAs(Sb) solid solutions, which are widely used in the fabrication of light emitting diodes (LEDs) operating in the middle-wavelength infrared range (MWIR). This spectral range (2-6 μm) is relevant for laser-diode spectroscopy of gases and molecules, systems for the detection of explosive substances, medical applications, and environmental monitoring. The fabrication of MWIR LEDs that operate efficiently at room temperature is mainly hindered by the predominance of non-radiative Auger recombination of charge carriers over radiative recombination, which complicates the practical application of LEDs. However, non-radiative recombination can be partly suppressed in quantum-well structures; in this regard, studies of such structures are quite topical. In this work, the electroluminescence (EL) of LED heterostructures based on InAs(Sb) epitaxial films, with the molar fraction of InSb ranging from 0 to 0.09, and of multi-quantum-well (MQW) structures was studied in the temperature range 4.2-300 K. The heterostructures were grown by metal-organic chemical vapour deposition on InAs substrates, with a wide-bandgap InAsSb(Ga,P) barrier formed on top of the active layer. At low temperatures (4.2-100 K), stimulated emission was observed; as the temperature increased, the emission became spontaneous. The transition from stimulated to spontaneous emission occurred at different temperatures for structures with different InSb contents in the active region. The temperature-dependent carrier lifetimes, limited by radiative recombination and by the most probable Auger processes (for the materials under consideration, CHHS and CHCC), were calculated within the framework of the Kane model. The effect of the various recombination processes on the carrier lifetime was studied, and the dominant role of Auger processes was established. For the MQW structures, the quantization energies of electrons, light holes, and heavy holes were calculated. A characteristic feature of the experimental EL spectra of these structures was the presence of peaks whose energies differed from the calculated optical transitions between the first quantization levels of electrons and heavy holes. The results show a strong effect of the specific electronic structure of InAsSb on the energy and intensity of optical transitions in nanostructures based on this material. For the structure with MQWs in the active layer, a very weak temperature dependence of the EL peak was observed at high temperatures (>150 K), which makes it attractive for fabricating temperature-resistant gas sensors operating in the mid-infrared range.
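For orientation, quantization energies of the kind calculated above can be estimated to first order with the infinite-well formula E_n = n²π²ħ²/(2m*L²); the sketch below uses the InAs electron effective mass (~0.023 m₀) and a hypothetical 10 nm well width:

```python
import math

HBAR = 1.054571817e-34   # J s
M0 = 9.1093837015e-31    # electron rest mass, kg
EV = 1.602176634e-19     # J per eV

def infinite_well_energy(n, m_eff, width_m):
    """E_n = n^2 * pi^2 * hbar^2 / (2 m* L^2), returned in eV."""
    return (n * math.pi * HBAR) ** 2 / (2 * m_eff * width_m ** 2) / EV

# Illustrative estimate: InAs electron mass ~0.023 m0, hypothetical 10 nm well
for n in (1, 2):
    print(n, round(infinite_well_energy(n, 0.023 * M0, 10e-9), 4), "eV")
```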

Keywords: electroluminescence, InAsSb, light emitting diode, quantum wells

Procedia PDF Downloads 206
604 Computer Aided Discrimination of Benign and Malignant Thyroid Nodules by Ultrasound Imaging

Authors: Akbar Gharbali, Ali Abbasian Ardekani, Afshin Mohammadi

Abstract:

Introduction: Thyroid nodules have an incidence of 33-68% in the general population, and 5-15% of these nodules are malignant. Early detection and treatment of thyroid nodules increase the cure rate and enable optimal treatment. Among medical imaging methods, ultrasound is the imaging technique of choice for the assessment of thyroid nodules. Confirming the diagnosis usually demands repeated fine-needle aspiration biopsy (FNAB), so current management carries morbidity and non-zero mortality. Objective: To explore the diagnostic potential of automatic texture analysis (TA) methods in differentiating benign from malignant thyroid nodules in ultrasound images, in order to support reliable diagnosis and monitoring of thyroid nodules in their early stages without the need for biopsy. Material and Methods: The thyroid ultrasound image database consisted of 70 patients (26 benign and 44 malignant), reported by a radiologist and proven by biopsy. Two slices per patient were loaded into MaZda software version 4.6 for automatic texture analysis. Regions of interest (ROIs) were defined within the abnormal part of each thyroid nodule ultrasound image. Gray levels within each ROI were normalized according to three schemes: N1, the default (original) gray levels; N2, dynamic intensity limited to μ ± 3σ; and N3, intensity limited to the 1%-99% percentile range. Up to 270 multiscale texture feature parameters per ROI per normalization scheme were computed with the well-known statistical methods employed in MaZda. From a statistical point of view, not all computed texture feature parameters are useful for texture analysis, so the feature set was reduced to the 10 best and most effective features per normalization scheme, based on the maximum Fisher coefficient and on the minimum probability of classification error combined with average correlation coefficients (POE+ACC). These features were analyzed in two standardization states, standard (S) and non-standard (NS), with principal component analysis (PCA), linear discriminant analysis (LDA), and nonlinear discriminant analysis (NDA). A 1-NN classifier was used to distinguish benign from malignant tumors. The confusion matrix and receiver operating characteristic (ROC) curve analysis were used to formulate reliable criteria for the performance of the employed texture analysis methods. Results: The results demonstrated the influence of the normalization schemes and reduction methods on the effectiveness of the obtained feature descriptors, their discrimination power, and the classification results. The feature subset selected under 1%-99% normalization, POE+ACC reduction, and NDA texture analysis yielded a high discrimination performance, with an area under the ROC curve (Az) of 0.9722 in distinguishing benign from malignant thyroid nodules, corresponding to a sensitivity of 94.45%, a specificity of 100%, and an accuracy of 97.14%. Conclusions: Our results indicate that computer-aided diagnosis is a reliable method and can provide useful information to help radiologists in the detection and classification of benign and malignant thyroid nodules.
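A minimal sketch of the reduce-then-classify pipeline described above, using scikit-learn with PCA feature reduction and a 1-NN classifier on synthetic stand-in data (MaZda's POE+ACC selection and NDA are not reproduced here; PCA stands in for the reduction step):

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Synthetic stand-in: 140 ROIs (2 slices x 70 patients) x 270 texture features
X = rng.normal(size=(140, 270))
y = np.repeat([0, 1], [52, 88])     # 0 = benign (26 pts), 1 = malignant (44 pts)
X[y == 1, :10] += 1.0               # give the malignant class a separable signal

# Standardize, reduce to 10 components, classify with 1-NN
clf = make_pipeline(StandardScaler(), PCA(n_components=10),
                    KNeighborsClassifier(n_neighbors=1))
auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
print(f"Cross-validated Az: {auc:.3f}")
```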

Keywords: ultrasound imaging, thyroid nodules, computer aided diagnosis, texture analysis, PCA, LDA, NDA

Procedia PDF Downloads 276