2971 The Study on Corpse Floating Time in Shanghai Region of China
Authors: Hang Meng, Wen-Bin Liu, Bi Xiao, Kai-Jun Ma, Jian-Hui Xie, Geng Fei, Tian-Ye Zhang, Lu-Yi Xu, Dong-Chuan Zhang
Abstract:
Bodies of drowning victims are often recovered in coastal regions, along rivers, or in lake areas. In China, the examination of bodies recovered from water is conducted by forensic doctors working in the public security bureau. Because the time at which most victims entered the water is unknown, and surveillance images and other information are often lacking, determining the time of entry into the water is very difficult. After a corpse enters the water, it first sinks; putrefaction gases then accumulate, reducing the density of the body below that of water, so that it rises again. Temperature is therefore the decisive factor for the floating time of a corpse. This study is based on temperature data for the Shanghai region of China (Shanghai has a northern subtropical marine monsoon climate, with an average annual temperature of about 17.1℃; the hottest month is July, with an average monthly temperature of 28.6℃, and the coldest is January, with an average monthly temperature of 4.8℃). About 100 cases with a definite time of entry into the water and a definite floating time were selected and analyzed to derive an empirical law of corpse floating time. For example, in the Shanghai region, for bodies entering the water on June 15th or October 15th, the floating time is about 1.5 days. Bodies that enter the water in early December surface around January 1st of the following year, while bodies that enter in late December float in March of the following year. The results of this study can be used to roughly estimate the time of entry into the water for victims in Shanghai, and forensic doctors elsewhere can also draw on them to infer when the bodies of victims in the water will surface.
Keywords: corpse enter water time, corpse floating time, drowning, forensic pathology, victims in the water
Procedia PDF Downloads 196
2970 Fully Automated Methods for the Detection and Segmentation of Mitochondria in Microscopy Images
Authors: Blessing Ojeme, Frederick Quinn, Russell Karls, Shannon Quinn
Abstract:
The detection and segmentation of mitochondria in fluorescence microscopy images are crucial for understanding the complex structure of the nervous system. However, the constant fission and fusion of mitochondria and image distortion in the background make detection and segmentation challenging. In the literature, a number of open-source software tools and artificial intelligence (AI) methods have been described for analyzing mitochondrial images, achieving remarkable classification and quantitation results. However, the combined medical and AI expertise required to utilize these tools poses a challenge to their full adoption and use in clinical settings. Motivated by the advantages of automated methods in terms of good performance, minimal detection time, ease of implementation, and cross-platform compatibility, this study proposes a fully automated framework for the detection and segmentation of mitochondria using both image shape information and descriptive statistics. Using the free, open-source Python and OpenCV libraries, the algorithms are implemented in three stages: pre-processing, image binarization, and coarse-to-fine segmentation. The proposed model is validated on a mitochondrial fluorescence dataset. Ground-truth labels generated using Labkit were also used to evaluate the performance of the detection and segmentation model. The study produces good detection and segmentation results and reports the challenges encountered during the analysis of mitochondrial morphology in the fluorescence dataset. A discussion of the methods and of future perspectives for fully automated frameworks concludes the paper.
Keywords: 2D, binarization, CLAHE, detection, fluorescence microscopy, mitochondria, segmentation
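As a rough illustration of the binarization and coarse segmentation stages described above, the sketch below thresholds a grayscale frame with Otsu's method and counts connected components. It is a minimal NumPy-only stand-in for the OpenCV calls (e.g., `cv2.threshold` with `THRESH_OTSU` and `cv2.connectedComponents`) a pipeline like the authors' would use; the synthetic image and function names are illustrative, not the authors' code.

```python
import numpy as np

def otsu_threshold(img):
    """Return the Otsu threshold of an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    total = hist.sum()
    cum_w = np.cumsum(hist)                   # cumulative class weight
    cum_m = np.cumsum(hist * np.arange(256))  # cumulative intensity mass
    best_t, best_var = 0, -1.0
    for t in range(255):
        w0, w1 = cum_w[t], total - cum_w[t]
        if w0 == 0 or w1 == 0:
            continue
        m0 = cum_m[t] / w0
        m1 = (cum_m[-1] - cum_m[t]) / w1
        var = w0 * w1 * (m0 - m1) ** 2        # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def label_components(mask):
    """4-connected component labelling by flood fill; returns (labels, count)."""
    labels = np.zeros(mask.shape, dtype=int)
    h, w = mask.shape
    count = 0
    for i in range(h):
        for j in range(w):
            if mask[i, j] and labels[i, j] == 0:
                count += 1
                stack = [(i, j)]
                labels[i, j] = count
                while stack:
                    y, x = stack.pop()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and labels[ny, nx] == 0:
                            labels[ny, nx] = count
                            stack.append((ny, nx))
    return labels, count

# Synthetic frame with two bright "mitochondria" on a dark background.
frame = np.full((32, 32), 12, dtype=np.uint8)
frame[3:8, 4:10] = 210
frame[18:25, 15:24] = 190
t = otsu_threshold(frame)
mask = frame > t
labels, n_objects = label_components(mask)
```

In a real pipeline the pre-processing stage (e.g., CLAHE) would precede the threshold, and the coarse labels would seed the fine segmentation.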
Procedia PDF Downloads 357
2969 The Impact of E-Commerce in Changing Shopping Lifestyle of Urban Communities in Jakarta
Authors: Juliana Kurniawati, Helen Diana Vida
Abstract:
Visiting malls is part of the lifestyle of Indonesian communities living in urban areas. Indonesian people, especially those living in Jakarta, use shopping malls as favourite places for leisure. Mall visitors come from various social classes and use the shopping mall as a place to identify themselves as urban people. Jakarta has a number of great shopping malls, such as Plaza Indonesia, Plaza Senayan, and Pondok Indah Mall. Shopping malls have become popular places partly because Jakarta's public spaces, such as parks and playgrounds, are very limited in number compared to malls. In Jakarta, people do not come to a shopping mall only for shopping; sometimes they go there to look around, meet friends, or watch a movie. Everything can be found in the malls, and the principle of one-stop shopping is an attractive offer for urban people. The items for sale range from cheap goods to expensive ones. A new era in consumer culture began with the advent of modern shopping, which first took shape in France in the 19th century. Since the development of online stores and easier access to the internet, everyone can shop 24 hours a day, anywhere. The emergence of online stores has indirectly affected the viability of conventional stores. In October 2017, two branded-goods outlets in Indonesia, Lotus and Debenhams, were closed. This may be a result of increasingly widespread online stores and a shift in the shopping style of urban society. Advances in technology have influenced the development of e-commerce in Indonesia. Everyone can access e-commerce, although those who actually do so are mainly middle- to upper-class consumers. E-commerce in Indonesia has developed quite fast, as can be observed in the emergence of various online shopping sites on platforms such as Zalora, Berrybenka, Bukalapak, Lazada, and Tokopedia.
E-commerce increasingly affects people's lives, in line with changing lifestyles and rising incomes. This research aims to understand why urban consumers choose e-commerce as a medium for shopping, how e-commerce affects their shopping styles, and why they place their trust in online stores. The research uses David Chaney's theory of lifestyle. The subjects are members of urban society based in Jakarta who actively shop online on Zalora; the Zalora site was chosen because it sells branded goods. The research is expected to explain in detail the shift of the urban community from the shopping mall to digital media, emphasizing the aspect of public confidence in online stores.
Keywords: e-commerce, shopping, lifestyle, changing
Procedia PDF Downloads 298
2968 Body Fluids Identification by Raman Spectroscopy and Matrix-Assisted Laser Desorption/Ionization Time-of-Flight Mass Spectrometry
Authors: Huixia Shi, Can Hu, Jun Zhu, Hongling Guo, Haiyan Li, Hongyan Du
Abstract:
The identification of human body fluids during forensic investigations is a critical step in determining key details and presenting strong evidence against a suspect. With the ubiquity of DNA analysis and improved detection technology, several questions must still be resolved: whether a suspect's DNA derived from saliva or semen, from menstrual or peripheral blood; how to confirm that a red substance or an aged trace found at a scene is blood; and how to determine who contributed which component to a mixed stain. In recent years, molecular approaches based on mRNA, miRNA, DNA methylation, and microbial markers have been developing rapidly, but they are expensive, time-consuming, and destructive. Physicochemical methods such as scanning electron microscopy/energy-dispersive spectroscopy and X-ray fluorescence are used frequently, but their results reveal only one or two characteristics of a body fluid and fail on unknown or mixed stains. This paper focuses on using Raman spectroscopy and matrix-assisted laser desorption/ionization time-of-flight (MALDI-TOF) mass spectrometry to discriminate peripheral blood, menstrual blood, semen, saliva, vaginal secretions, urine, and sweat. First, the non-destructive, confirmatory, convenient, and fast Raman method, combined with the more accurate MALDI-TOF method, can fully distinguish these body fluids from one another. Second, 11 spectral signatures and specific metabolic molecules were obtained from the analysis of 70 samples. Third, the Raman results showed that peripheral and menstrual blood, and saliva and vaginal secretions, have highly similar spectroscopic features, so advanced statistical analysis of the multiple Raman spectra is required to classify them.
On the other hand, lactic acid appears to differentiate peripheral from menstrual blood in the MALDI-TOF measurements, but it is not a specific metabolic molecule; more sensitive markers will be analyzed in a follow-up study. These results demonstrate the great potential of the developed chemical methods for forensic applications, although more work is needed for method validation.
Keywords: body fluids, identification, Raman spectroscopy, matrix-assisted laser desorption/ionization time-of-flight mass spectrometry
Procedia PDF Downloads 137
2967 The Diversity of Contexts within Which Adolescents Engage with Digital Media: Contributing to More Challenging Tasks for Parents and a Need for Third Party Mediation
Authors: Ifeanyi Adigwe, Thomas Van der Walt
Abstract:
Digital media has been integrated into the social and entertainment life of young people, and its impact appears to affect young people of all ages; it is believed that it will continue to shape their world. Since technological advancement presents adolescents with diverse contexts, platforms, and avenues for engaging with digital media outside the home environment and away from parental supervision, a wide range of new challenges has further complicated the already difficult task of parenting and altered its landscape. Although adolescents now have access to a wide range of digital media technologies both at home and in learning environments, parental mediation practices such as active, restrictive, co-use, participatory, and technical mediation remain important in mitigating the online risks adolescents may encounter as a result of digital media use. However, these mediation practices focus only on the home environment, including the digital media present there, and do not necessarily extend to the school and other learning environments where adolescents use digital media for school work and other activities. This raises the question of who mediates adolescents' digital media use outside the home. The learning environment can be a 'loose platform' where an adolescent can maximise digital media use, given that there may be little restriction on content or on the time allotted to digital media during school hours. An adolescent can thus play the 'bad boy' online at school, where restrictions are minimal and exposure to online risks is high, and the 'good boy' at home under 'heavy' parental mediation.
This is one reason why parental mediation practices have been ineffective: a parent cannot track an adolescent's digital media use across the diversity of contexts, platforms, and avenues involved. This study argues that, due to the diverse nature of digital media technology, parents may be unable to monitor the 'whereabouts' of their children in the digital space, because adolescents' digital media usage is not confined to the home but extends to learning environments such as schools. This calls for urgent attention on the part of teachers to understand how digital media continues to shape the world in which young people develop and learn. It is therefore imperative for parents to liaise with their children's schools to mediate digital media use during school hours. The implications of parent-teacher mediation practices are discussed. The article concludes by suggesting that third-party mediation by teachers in schools and other learning environments should be encouraged, and that future research should consider the emergent strategy of teacher-child mediation and its policy implications for both the home and learning environments.
Keywords: digital media, digital age, parent mediation, third party mediation
Procedia PDF Downloads 158
2966 Computational and Experimental Determination of Acoustic Impedance of Internal Combustion Engine Exhaust
Authors: A. O. Glazkov, A. S. Krylova, G. G. Nadareishvili, A. S. Terenchenko, S. I. Yudin
Abstract:
The presented material concerns the design of the exhaust system for a certain internal combustion engine. The exhaust system can be divided into two parts. The first comprises the engine exhaust manifold, turbocharger, and catalytic converters, collectively called the "hot part." The second is the gas exhaust system, which contains elements intended exclusively to reduce exhaust noise (mufflers, resonators), conventionally designated the "cold part." Designing the exhaust system from the point of view of acoustics, that is, reducing exhaust noise to a predetermined level, means working on the second part. Modern computer technology and software make it possible to design the "cold part" with high accuracy in a given frequency range, but only if the input parameters are specified accurately, namely the amplitude spectrum of the input noise and the acoustic impedance of the noise source, i.e., the engine with its "hot part." Obtaining these data is a difficult problem: high temperatures, high exhaust gas velocities (turbulent flows), and high sound pressure levels (non-linear regime) prevent calculated results from being applied with sufficient accuracy. The aim of this work is to obtain the most reliable acoustic output parameters of an engine with a "hot part" on the basis of combined computational and experimental studies. The presented methodology includes several parts. The first is a finite element simulation of the "cold part" of the exhaust system (taking into account the radiation impedance of the outlet pipe into open space), yielding the input impedance of the "cold part." The second is a finite element simulation of the "hot part" (taking into account the acoustic characteristics of the catalytic units and the geometry of the turbocharger), yielding the input impedance of the "hot part."
The third part of the technique consists of mathematical processing of the results according to the proposed formula for the convergence of the series formed by summing the multiple reflections of the acoustic signal between the "cold part" and the "hot part." This is followed by a set of tests on an engine stand with two high-temperature pressure sensors measuring pulsations in the nozzle between the "hot part" and the "cold part" of the exhaust system, and subsequent processing of the test results according to a well-known technique in order to separate the "incident" and "reflected" waves. The final stage consists of mathematical processing of all calculated and experimental data to obtain, as a result, the amplitude spectrum of the engine noise and its acoustic impedance.
Keywords: acoustic impedance, engine exhaust system, FEM model, test stand
Procedia PDF Downloads 59
2965 Biosignal Recognition for Personal Identification
Authors: Hadri Hussain, M. Nasir Ibrahim, Chee-Ming Ting, Mariani Idroas, Fuad Numan, Alias Mohd Noor
Abstract:
Biometric security systems have become an important application in client identification and verification. A conventional biometric system is normally based on a unimodal biometric that depends on either behavioural or physiological information for authentication. Behavioural biometrics depend on body signals such as speech, while biosignal biometrics include the electrocardiogram (ECG) and the phonocardiogram or heart sound (HS). The speech signal is commonly used in biometric recognition systems, while the ECG and the HS have been used to identify a person's diseases uniquely related to their cluster. However, conventional biometric systems are liable to spoofing attacks that degrade system performance. Therefore, a multimodal biometric security system was developed based on the ECG, HS, and speech signals. The biosignal data are first segmented, and for each segment the Mel-Frequency Cepstral Coefficients (MFCC) method is used to extract features. A Hidden Markov Model (HMM) is used to model each client and to classify unknown input with respect to the models. The recognition system involves training and testing sessions, known as client identification (CID). In this project, twenty clients were tested with the developed system. At 44 kHz, the best overall performance was 93.92% for ECG, and the worst overall performance for ECG was 88.47%. These results were compared against performance as the number of clients increased: the best overall performance at 44 kHz was then 90.00% for HS, and the worst fell to 79.91% for ECG.
It can be concluded that the choice of modality has a substantial effect on the performance of a multimodal biometric system and that, as the amount of data grows, performance still decreases slightly even at higher sampling frequencies, as predicted.
Keywords: electrocardiogram, phonocardiogram, hidden Markov model, Mel-frequency cepstral coefficients, client identification
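The client-identification step described above (score an unknown sample against each client's HMM and pick the best-scoring client) can be sketched as follows. This is a minimal discrete-observation forward algorithm in NumPy with toy parameters standing in for MFCC feature streams and trained models, so all values and names here are illustrative assumptions, not the authors' system.

```python
import numpy as np

def forward_loglik(pi, A, B, obs):
    """Scaled forward algorithm: log P(obs | HMM) for start distribution pi,
    transition matrix A, and discrete emission matrix B (states x symbols)."""
    alpha = pi * B[:, obs[0]]
    scale = alpha.sum()
    loglik = np.log(scale)
    alpha /= scale
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        scale = alpha.sum()
        loglik += np.log(scale)
        alpha /= scale
    return loglik

def identify_client(models, obs):
    """Return the client whose HMM assigns the observation the highest likelihood."""
    return max(models, key=lambda c: forward_loglik(*models[c], obs))

# Two toy 2-state client models over a 2-symbol alphabet:
# client A mostly emits symbol 0, client B mostly emits symbol 1.
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3],
              [0.4, 0.6]])
B_a = np.array([[0.9, 0.1],
                [0.8, 0.2]])
B_b = np.array([[0.2, 0.8],
                [0.1, 0.9]])
models = {"client_A": (pi, A, B_a), "client_B": (pi, A, B_b)}
obs = [0, 0, 1, 0, 0]
best = identify_client(models, obs)
```

A real CID system would replace the discrete symbols with (vector-quantized or Gaussian-emission) MFCC frames per segment.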
Procedia PDF Downloads 280
2964 Perceived Structural Empowerment and Work Commitment among Intensive Care Nurses in SMC
Authors: Ridha Abdulla Al Hammam
Abstract:
Purpose: To measure the extent of perceived structural empowerment and work commitment among intensive care unit nurses in SMC. Background: Nurses' access to power structures (information, resources, opportunity, and support) directly influences their productivity, retention, and job satisfaction. Exploring the level and sources of nurses' work commitment (affective, normative, and continuance) is essential to guide nursing leaders in making decisions that improve the work environment and facilitate effective nursing care. Neither concept (structural empowerment or work commitment) had previously been investigated in our critical care unit. Methods: A sample of 50 nurses was obtained from the (adult) Intensive Care Unit. The Conditions for Workplace Effectiveness Questionnaire and the Three-Component Model Employee Commitment Survey were used to measure the two concepts, respectively. The study is quantitative, descriptive, and correlational in design. Results: The participants reported moderate structural empowerment in their workplace (M = 15 out of 20). The sample perceived high access to opportunity, mainly through gaining more skills (M = 4.45 out of 5), while the remaining power structures were perceived with moderate accessibility. The participants' affective commitment (M = 5.6 out of 7) to working in the ICU outweighed their normative and continuance commitment (M = 5.1 and M = 4.9 out of 7), implying a stronger emotional connection with their unit. Strong, positive, and significant correlations were observed between the participants' structural empowerment scores and all sources of work commitment. Conclusion: These results provide insight into aspects of the work environment that need to be fostered and improved in our intensive care unit, aspects directly linked to nurses' work commitment and, potentially, to the quality of care they provide.
Keywords: structural empowerment, commitment, intensive care, nurses
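The correlational analysis reported above (empowerment scores against each commitment source) boils down to Pearson product-moment correlations. A minimal sketch with made-up per-nurse scores (not the study's data) is:

```python
import numpy as np

# Hypothetical per-nurse scores; illustrative only, not the study's data.
empowerment = np.array([13.0, 15.5, 11.0, 17.0, 14.5, 16.0, 12.5, 18.0])
affective   = np.array([5.0, 5.8, 4.6, 6.4, 5.5, 6.0, 4.9, 6.7])

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    xc, yc = x - x.mean(), y - y.mean()
    return (xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc))

r = pearson_r(empowerment, affective)
```

The same computation would be repeated for the normative and continuance subscales, with a significance test on each coefficient.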
Procedia PDF Downloads 287
2963 Reducing Defects through Organizational Learning within a Housing Association Environment
Authors: T. Hopkin, S. Lu, P. Rogers, M. Sexton
Abstract:
Housing Associations (HAs) contribute circa 20% of the UK's housing supply. HAs are, however, under increasing pressure as a result of funding cuts and rent reductions. Due to this pressure, a number of processes are currently being reviewed by HAs, especially how they manage and learn from defects. Learning from defects is considered a useful approach to achieving defect reduction within the UK housebuilding industry. This paper contributes to our understanding of how HAs learn from defects by undertaking an initial round-table discussion with key HA stakeholders, as part of an ongoing collaborative research project with the National House Building Council (NHBC) to better understand how house builders and HAs learn from defects in order to reduce their prevalence. The initial discussion shows that defect information passes through a number of groups, both internal and external to a HA, during both the defect management process and the organizational learning (OL) process. Furthermore, HAs rely on capturing and recording defect data as the foundation of the OL process. During the OL process, defect data analysis is the primary enabler for recognizing a need to change organizational routines. When a need for change has been recognized, new options are typically pursued to design out defects via updates to a HA's Employer's Requirements. Proposed solutions are selected by a review board and committed to organizational routine. After implementing a change, both structured and unstructured feedback is sought to establish the change's success. The findings from the HA discussion demonstrate that OL can achieve defect reduction within the house building sector in the UK. The paper concludes by outlining a potential 'learning from defects' model for the housebuilding industry and describing future work.
Keywords: defects, new homes, housing association, organizational learning
Procedia PDF Downloads 316
2962 On the Existence of Homotopic Mapping Between Knowledge Graphs and Graph Embeddings
Authors: Jude K. Safo
Abstract:
Knowledge Graphs (KG) and their relation to Graph Embeddings (GE) represent a unique data structure in the landscape of machine learning (relative to image, text, and acoustic data). Unlike the latter, GEs are the only data structure sufficient for representing the hierarchically dense, semantic information needed for use cases like supply chain data and protein folding, where the search space exceeds the limits of traditional search methods (e.g., PageRank, Dijkstra, etc.). While GEs are effective for compressing low-rank tensor data, at scale they begin to introduce a new problem of 'data retrieval', which we also observe in Large Language Models. Notable attempts such as TransE, TransR, and other prominent industry standards have shown peak performance just north of 57% on the WN18 and FB15K benchmarks, insufficient for practical industry applications. They are also limited, in scope, to next-node/link predictions. Traditional linear methods like Tucker, CP, PARAFAC, and CANDECOMP quickly hit memory limits on tensors exceeding 6.4 million nodes. This paper outlines a topological framework for linear mapping between concepts in KG space and GE space that preserves cardinality. Most importantly, we introduce a traceable framework for composing dense linguistic structures, and we demonstrate the performance this model achieves on the WN18 benchmark. The model does not rely on Large Language Models (LLM), though the applications are certainly relevant there as well.
Keywords: representation theory, large language models, graph embeddings, applied algebraic topology, applied knot theory, combinatorics
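For context on the TransE baseline mentioned above: link prediction there scores a triple (h, r, t) by how closely the translated head h + r lands on the tail t. A minimal sketch with toy 2-d embeddings (illustrative values, not trained WN18/FB15K vectors) is:

```python
import numpy as np

def transe_score(h, r, t):
    """TransE plausibility score: higher (less negative) when h + r is near t."""
    return -np.linalg.norm(h + r - t)

# Toy embeddings for a link-prediction query (head, relation, ?).
h = np.array([1.0, 0.0])
r = np.array([0.0, 1.0])
candidates = {
    "tail_good": np.array([1.0, 1.1]),   # close to h + r
    "tail_bad":  np.array([4.0, -2.0]),  # far from h + r
}
best_tail = max(candidates, key=lambda k: transe_score(h, r, candidates[k]))
```

Benchmark metrics such as hits@k are computed by ranking every entity in the graph with this score, which is where the limitation to next-node/link prediction noted above comes from.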
Procedia PDF Downloads 68
2961 Numerical Simulation of Flow and Heat Transfer Characteristics with Various Working Conditions inside a Reactor of Wet Scrubber
Authors: Jonghyuk Yoon, Hyoungwoon Song, Youngbae Kim, Eunju Kim
Abstract:
Recently, with the rapid growth of the semiconductor industry, much interest has been focused on after-treatment systems that remove the polluted gas produced by semiconductor manufacturing processes, and the wet scrubber is one of the most widely used. In terms of the removal mechanism, the polluted gas is first treated by chemical reaction in a reactor section; the gas stream is then brought into contact with the scrubbing liquid by spraying. Effective design of the reactor inside the wet scrubber is highly important, since the removal performance of the reactor plays a major role in overall performance and stability. In the present study, a CFD (Computational Fluid Dynamics) analysis was performed to characterize the thermal and flow behaviour inside a unit reactor of a wet scrubber. To verify the numerical result, the temperature distribution at various monitoring points was compared to experimental data. The average error rates between them were 12~15%, and the numerical temperature distribution was in good agreement with the experimental data. Using the validated numerical method, the effect of the reactor geometry on the heat transfer rate was also examined, and the uniformity of the temperature distribution was improved by about 15%. Overall, the results of the present study provide useful information for identifying fluid behaviour and thermal performance in various scrubber systems. This project is supported by the 'R&D Center for the Reduction of Non-CO₂ Greenhouse Gases (RE201706054)' funded by the Korea Ministry of Environment (MOE) as part of the Global Top Environment R&D Program.
Keywords: semiconductor, polluted gas, CFD (Computational Fluid Dynamics), wet scrubber, reactor
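The validation step described above, comparing simulated and measured temperatures at monitoring points, reduces to a point-wise relative error calculation. The temperature values below are made up for illustration only; the abstract reports average errors in the 12~15% range.

```python
import numpy as np

# Hypothetical temperatures (K) at four monitoring points; illustrative values.
t_experiment = np.array([450.0, 520.0, 610.0, 480.0])
t_simulation = np.array([500.0, 580.0, 530.0, 540.0])

def avg_relative_error_pct(sim, exp):
    """Mean point-wise relative error between simulation and experiment, in %."""
    return float(np.mean(np.abs(sim - exp) / np.abs(exp)) * 100.0)

err = avg_relative_error_pct(t_simulation, t_experiment)
```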
Procedia PDF Downloads 144
2960 Mapping Soils from Terrain Features: The Case of Nech Sar National Park of Ethiopia
Authors: Shetie Gatew
Abstract:
Current soil maps of Ethiopia do not accurately represent the soils of Nech Sar National Park. In the framework of studies on the ecology of the park, we prepared a soil map based on field observations and a digital terrain model derived from SRTM data with a 30-m resolution. The landscape comprises volcanic cones, lava and basalt outflows, undulating plains, horsts, alluvial plains, and river deltas. SOTER-like terrain mapping units were identified. First, the DTM was classified into 128 terrain classes defined by slope gradient (4 classes), relief intensity (4 classes), potential drainage density (2 classes), and hypsometry (4 classes). A soil-landscape relation between the terrain mapping units and WRB soil units was then established on the basis of 34 soil profile pits. Based on this relation, the terrain mapping units were either merged or split to produce a comprehensive soil and terrain map. The soil map indicates that Leptosols (30%), Cambisols (26%), Andosols (21%), Fluvisols (12%), and Vertisols (9%) are the most widespread Reference Soil Groups in the park. In contrast, the harmonized soil map of Africa, derived from the FAO soil map of the world, indicates that Luvisols (70%), Vertisols (14%), and Fluvisols (16%) would be the most common Reference Soil Groups. These latter mapping units, however, are not consistent with the topography, nor did we find such extensive areas occupied by Luvisols during the field survey. This case study shows that, with the now freely available SRTM data, it is possible to improve current soil information layers with relatively limited resources, even in terrain as complex as that of Nech Sar National Park.
Keywords: Andosols, Cambisols, digital elevation model, Leptosols, soil-landscape relation
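The first step of the terrain classification above (slope gradient into 4 classes from the 30-m DTM) can be sketched as below. The slope-class breakpoints (2, 8, 16 percent) are illustrative assumptions, not necessarily the breakpoints used in the study.

```python
import numpy as np

def slope_percent(dem, cellsize):
    """Slope gradient in percent from a DEM via finite differences."""
    gy, gx = np.gradient(dem.astype(float), cellsize)
    return np.hypot(gx, gy) * 100.0

def slope_classes(dem, cellsize, breaks=(2.0, 8.0, 16.0)):
    """Classify slope into 4 classes (0..3) using percent-slope breakpoints."""
    return np.digitize(slope_percent(dem, cellsize), breaks)

# A flat plain and a uniform ramp rising 5 m per 30-m cell (~16.7% slope).
flat = np.zeros((10, 10))
ramp = np.outer(np.arange(10), np.ones(10)) * 5.0
flat_cls = slope_classes(flat, cellsize=30.0)
ramp_cls = slope_classes(ramp, cellsize=30.0)
```

Combining 4 slope classes with 4 relief-intensity classes, 2 drainage-density classes, and 4 hypsometry classes yields the 4 x 4 x 2 x 4 = 128 terrain classes of the SOTER-like scheme.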
Procedia PDF Downloads 105
2959 Multi-Stage Classification for Lung Lesion Detection on CT Scan Images Applying Medical Image Processing Technique
Authors: Behnaz Sohani, Sahand Shahalinezhad, Amir Rahmani, Aliyu Aliyu
Abstract:
Recently, medical imaging, and specifically medical image processing, has become one of the most dynamically developing areas of medical science, leading to new approaches to the prevention, diagnosis, and treatment of various diseases. In diagnosing lung cancer, medical professionals rely on computed tomography (CT) scans, in which failure to correctly identify masses can lead to incorrect diagnosis or unnecessary sampling of lung tissue. The identification and demarcation of masses, i.e., detecting cancer within lung tissue, is a critical challenge in diagnosis. In this work, an image-processing segmentation system has been applied for detection purposes. In particular, the use and validation of a novel lung cancer detection algorithm are presented through simulation, performed on CT images using multilevel thresholding. The proposed technique consists of segmentation, feature extraction, and feature selection and classification. In more detail, the features carrying useful information are selected after feature extraction. The output lung cancer image is obtained with 96.3% and 87.25% accuracy. The purpose of feature extraction in the proposed approach is to transform the raw data into a more usable form for subsequent statistical processing. Future steps will involve employing the current feature extraction method to achieve more accurate results, with further detail made available to machine vision systems for recognising objects in lung CT scan images.
Keywords: lung cancer detection, image segmentation, lung computed tomography (CT) images, medical image processing
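A minimal sketch of the multilevel thresholding at the core of the segmentation stage, assuming 8-bit input and two thresholds (three intensity classes): it extends Otsu's between-class-variance criterion by exhaustive search over threshold pairs, and is illustrative rather than the authors' implementation.

```python
import numpy as np

def two_level_otsu(img):
    """Exhaustive two-threshold Otsu: return (t1, t2) maximizing the
    between-class variance of the three resulting intensity classes."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    cw = np.cumsum(p)                   # cumulative class weight
    cm = np.cumsum(p * np.arange(256))  # cumulative intensity mass
    mg = cm[-1]                         # global mean
    best, best_var = (0, 0), -1.0
    for t1 in range(1, 255):
        for t2 in range(t1 + 1, 256):
            w = (cw[t1], cw[t2] - cw[t1], 1.0 - cw[t2])
            m = (cm[t1], cm[t2] - cm[t1], mg - cm[t2])
            var = sum(wk * (mk / wk - mg) ** 2 for wk, mk in zip(w, m) if wk > 0)
            if var > best_var:
                best_var, best = var, (t1, t2)
    return best

# Synthetic "CT slice" with three tissue intensity levels.
img = np.zeros((24, 24), dtype=np.uint8)
img[:, :8] = 10       # background
img[:, 8:16] = 100    # soft-tissue-like region
img[:, 16:] = 220     # lesion-like bright region
t1, t2 = two_level_otsu(img)
segmented = np.digitize(img, [t1, t2], right=True)  # class labels 0, 1, 2
```

The feature extraction and classification stages would then operate on the regions of the `segmented` label map.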
Procedia PDF Downloads 101
2958 Analyzing the Permissibility of Demonstration in Islamic Perspective: Case Study of Former Governor of Jakarta Basuki Tjahaja Purnama
Authors: Ahmad Syauqi
Abstract:
This paper analyzes, from an Islamic point of view, the permissibility of demonstrations against a leader's decisions, policies, and statements that contradict Islamic values. At the end of 2016, a large demonstration in Jakarta involving many people, mostly from Muslim society, against the former Governor of Jakarta, Basuki Tjahaja Purnama, was considered a threat to harmony and the unity of religious communities in Indonesia. Hence, this paper aims to answer the question that became the subject of a long debate among Indonesian Muslims after the immense demonstration known as the 212 movement: how exactly does Islam view such acts of demonstration? Is there any particular source in Islamic history that mentions information related to demonstrations? A phenomenological qualitative method was implemented throughout this research to study the perspectives of various Muslim scholars by reviewing and comparing their opinions through classical sources of Islamic history and the Hadith literature. One of the main roots of this extensive debate is the extremist position that bans all forms of demonstration, assuming that such acts came from the West and are an unknown culture in Islamic history; its proponents also claim that all demonstrators are Bughat. Some other groups, by contrast, freely declare that demonstrations can be held anytime and anywhere, without any associated terms and regulations. The findings of this research illustrate that protests in the form of what we now call demonstrations have existed since ancient times, even from the time of the Prophet Muhammad (peace be upon him). The paper presents strong evidence that demonstration is justified in Islamic law and has historical roots, which can therefore serve as a proposition for its permissibility.
However, there are still a number of things one has to be aware of when it comes to demonstrations, and clearly not all demonstrations are legal from the Islamic perspective.
Keywords: Basuki Tjahaja Purnama, demonstration, Muslim scholars, protest
Procedia PDF Downloads 130
2957 Neuroecological Approach for Anthropological Studies in Archaeology
Authors: Kalangi Rodrigo
Abstract:
The term neuroecology denotes the study of adaptive variation in cognition and the brain. The field was born in the 1980s, when researchers began to apply methods of comparative evolutionary biology to cognitive processes and the underlying neural mechanisms of cognition. In archaeology and anthropology, we observe behaviors such as social learning skills, innovative feeding and foraging, tool use, and social manipulation to determine the cognitive processes of ancient mankind. In comparative studies, brainstem size was used as a control variable, and phylogeny was controlled using independent contrasts. Both disciplines need to be enriched with comparative literature and with neurological, experimental, and behavioral studies among tribal peoples as well as primate groups, which will lead the research to its full potential. Neuroecology examines the relations between ecological selection pressures and species or sex differences in cognition and the brain. The goal of neuroecology is to understand how natural selection acts on perception and its neural apparatus. Furthermore, neuroecology will eventually lead both principal disciplines to ethology, where human behavior and social organization are studied from a biological perspective, whether ethnoarchaeological or prehistoric. Archaeology should adopt the general approach of neuroecology: phylogenetic comparative methods can be used in the field, alongside new findings on the cognitive mechanisms and brain structures involved in mating systems, social organization, communication, and foraging. The contribution of neuroecology to archaeology and anthropology is the information it provides on the selective pressures that have influenced the evolution of human cognition and brain structure. It will shed new light on the path of evolutionary studies, including behavioral ecology, primate archaeology, and cognitive archaeology.
Keywords: neuroecology, archaeology, brain evolution, cognitive archaeology
Procedia PDF Downloads 120
2956 Evaluation of the Weight-Based and Fat-Based Indices in Relation to Basal Metabolic Rate-to-Weight Ratio
Authors: Orkide Donma, Mustafa M. Donma
Abstract:
Basal metabolic rate is questioned as a risk factor for weight gain. The relations between basal metabolic rate and body composition have not been clarified yet, and the impact of fat mass on basal metabolic rate is also uncertain. Within this context, indices based upon total body mass as well as total body fat mass are available. In this study, the aim is to investigate the potential clinical utility of these indices in the adult population. 287 individuals, aged 18 to 79 years, were included in the study. Based upon body mass index values, 10 underweight, 88 normal, 88 overweight, 81 obese, and 20 morbidly obese individuals participated. Anthropometric measurements, including height (m) and weight (kg), were performed. Body mass index, diagnostic obesity notation model assessment index I, diagnostic obesity notation model assessment index II, and the basal metabolic rate-to-weight ratio were calculated. Total body fat mass (kg), fat percent (%), basal metabolic rate, metabolic age, visceral adiposity, fat mass of the upper and lower extremities and trunk, and obesity degree were measured by a TANITA body composition monitor using bioelectrical impedance analysis technology. Statistical evaluations were performed with the statistical package SPSS for Windows, Version 16.0. Scatterplots of individual measurements for the correlated parameters were drawn, and linear regression lines were displayed. The statistical significance level was accepted as p < 0.05. Strong correlations between body mass index and diagnostic obesity notation model assessment index I as well as index II were obtained (p < 0.001). A much stronger correlation was detected between basal metabolic rate and diagnostic obesity notation model assessment index I in comparison with that calculated for basal metabolic rate and body mass index (p < 0.001).
Upon consideration of the associations between the basal metabolic rate-to-weight ratio and these three indices, the best association was observed with diagnostic obesity notation model assessment index II. In a similar manner, this index was highly correlated with fat percent (p < 0.001). Independently of the indices, a strong correlation was found between fat percent and the basal metabolic rate-to-weight ratio (p < 0.001). Visceral adiposity was much more strongly correlated with metabolic age than with chronological age (p < 0.001). In conclusion, all three indices were associated with metabolic age, but not with chronological age. Diagnostic obesity notation model assessment index II values were highly correlated with body mass index values throughout all ranges, from underweight to morbid obesity. This index is the best in terms of its association with the basal metabolic rate-to-weight ratio, which can be interpreted as a basal metabolic rate unit.
Keywords: basal metabolic rate, body mass index, children, diagnostic obesity notation model assessment index, obesity
Procedia PDF Downloads 150
2955 Experimental Investigation on the Effect of Prestress on the Dynamic Mechanical Properties of Conglomerate Based on 3D-SHPB System
Authors: Wei Jun, Liao Hualin, Wang Huajian, Chen Jingkai, Liang Hongjun, Liu Chuanfu
Abstract:
The Kuqa Piedmont is rich in oil and gas resources and has great development potential in the Tarim Basin, China. However, a huge, thick gravel layer has developed there, with high gravel content, wide distribution, and large variation in gravel size, leading to strong heterogeneity. As a result, the drill string is in a state of severe vibration and the drill bit wears seriously while drilling, which greatly reduces rock-breaking efficiency, and a complex load state of impact combined with three-dimensional in-situ stress acts on the rock at the bottom of the hole. The dynamic mechanical properties of conglomerate, the main component of the gravel layer, and their influencing factors are the basis of engineering design, efficient rock-breaking methods, and theoretical research. Limited by previous experimental techniques, few works have been published on conglomerate, especially under dynamic load. On this basis, a 3D SHPB system, in which three-dimensional prestress can be applied to simulate in-situ stress conditions, is adopted for the dynamic testing of the conglomerate. The results show that the dynamic strength is obviously higher than the static strength: while the three-dimensional prestress is 0 and the loading strain rate is 81.25~228.42 s⁻¹, the true triaxial equivalent strength is 167.17~199.87 MPa, and the dynamic-to-static strength growth factor is 1.61~1.92. The higher the impact velocity, the greater the loading strain rate, the higher the dynamic strength, and the greater the failure strain, all of which increase linearly. There is a critical prestress in the impact direction and in the direction vertical to it. In the impact direction, while the prestress is less than the critical one, the dynamic strength and the loading strain rate increase linearly; otherwise, the strength decreases slightly and the strain rate decreases rapidly.
In the direction vertical to the impact load, the strength increases and the strain rate decreases linearly before the critical prestress is reached; beyond it, the trends reverse. The dynamic strength of the conglomerate can be reduced appropriately by lowering the amplitude of the impact load, so that the service life of rock-breaking tools can be prolonged while drilling in strata rich in gravel. The research provides an important reference for drilling speed-up technology and theoretical research while drilling in gravel layers.
Keywords: huge thick gravel layer, conglomerate, 3D SHPB, dynamic strength, deformation characteristics, prestress
Procedia PDF Downloads 209
2954 Influence of Travel Time Reliability on Elderly Drivers Crash Severity
Authors: Ren Moses, Emmanuel Kidando, Eren Ozguven, Yassir Abdelrazig
Abstract:
Although older drivers (defined as those aged 65 and above) are less involved in speeding, alcohol use, and night driving, they are more vulnerable to severe crashes, with frailty and medical complications as major contributing factors. Several studies have evaluated the factors contributing to crash severity; however, few have established the impact of travel time reliability (TTR) on road safety. In particular, the impact of TTR on senior adults, who face several challenges in driving, including hearing difficulties, declining processing skills, and cognitive problems, is not well established. Therefore, this study focuses on determining the possible impacts of TTR on traffic safety, with a focus on elderly drivers. Historical travel speed data from freeway links in the study area were used to calculate travel time and the associated TTR metrics, namely the planning time index, the buffer index, the standard deviation of travel time, and the probability of congestion. Four years of data on crashes occurring on these freeway links were acquired. A binary logit model, estimated using the Markov Chain Monte Carlo (MCMC) sampling technique, was used to evaluate variables that could influence elderly crash severity. Preliminary results of the analysis suggest that TTR is statistically significant in affecting the severity of a crash involving an elderly driver: a one-unit increase in the probability of congestion reduces the likelihood of a severe elderly-driver crash by nearly 22%. These findings will enhance the understanding of TTR and its impact on elderly crash severity.
Keywords: highway safety, travel time reliability, elderly drivers, traffic modeling
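The model class described above, a binary logit estimated by MCMC sampling, can be illustrated with a minimal sketch. This is not the authors' code or data: the random-walk Metropolis sampler below fits a logit of crash severity on a synthetic "probability of congestion" covariate, and every name and coefficient value is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_posterior(beta, X, y):
    """Log-posterior of a binary logit model with a weak N(0, 10^2) prior."""
    eta = X @ beta
    loglik = np.sum(y * eta - np.logaddexp(0.0, eta))  # Bernoulli/logit log-likelihood
    logprior = -0.5 * np.sum(beta ** 2) / 100.0
    return loglik + logprior

def metropolis_logit(X, y, n_iter=8000, step=0.1):
    """Random-walk Metropolis sampler for the logit coefficients."""
    beta = np.zeros(X.shape[1])
    lp = log_posterior(beta, X, y)
    samples = np.empty((n_iter, beta.size))
    for i in range(n_iter):
        proposal = beta + step * rng.standard_normal(beta.size)
        lp_prop = log_posterior(proposal, X, y)
        if np.log(rng.uniform()) < lp_prop - lp:  # accept or keep current state
            beta, lp = proposal, lp_prop
        samples[i] = beta
    return samples

# Synthetic data: severity ~ intercept + probability of congestion (hypothetical)
n = 1000
X = np.column_stack([np.ones(n), rng.uniform(size=n)])
true_beta = np.array([-1.0, 1.5])
p = 1.0 / (1.0 + np.exp(-X @ true_beta))
y = (rng.uniform(size=n) < p).astype(float)

samples = metropolis_logit(X, y)
post_mean = samples[4000:].mean(axis=0)  # posterior mean after discarding burn-in
```

In a real application, the marginal effect of each TTR metric would then be computed from the posterior draws rather than from a single point estimate.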
Procedia PDF Downloads 493
2953 Evidence Theory Based Emergency Multi-Attribute Group Decision-Making: Application in Facility Location Problem
Authors: Bidzina Matsaberidze
Abstract:
It is known that, in emergency situations, multi-attribute group decision-making (MAGDM) models are characterized by insufficient objective data and a lack of time to respond to the task. Evidence theory is an effective tool for describing such incomplete information in decision-making models when the expert and his knowledge are involved in the estimation of the MAGDM parameters. We consider an emergency decision-making model in which expert assessments on humanitarian aid distribution centers (HADC) are represented as q-rung orthopair fuzzy numbers, and the data structure is described via bodies of evidence. Based on focal probability construction and the experts' evaluations, an objective function, a distribution centers' selection ranking index, is constructed. Our approach to solving the constructed bicriteria partitioning problem consists of two phases. In the first phase, based on the covering matrix, we generate a matrix whose columns allow us to find all possible partitionings of the HADCs with the service centers; some constraints are also taken into consideration while generating the matrix. In the second phase, based on this matrix and using our exact algorithm, we find the partitionings (allocations of the HADCs to the centers) that correspond to the Pareto-optimal solutions. For an illustration of the obtained results, a numerical example is given for the facility location-selection problem.
Keywords: emergency MAGDM, q-rung orthopair fuzzy sets, evidence theory, HADC, facility location problem, multi-objective combinatorial optimization problem, Pareto-optimal solutions
Procedia PDF Downloads 92
2952 Scalable UI Test Automation for Large-scale Web Applications
Authors: Kuniaki Kudo, Raviraj Solanki, Kaushal Patel, Yash Virani
Abstract:
This research concerns optimizing UI test automation for large-scale web applications. The test target is the HHAexchange homecare management web application, which seamlessly connects providers, state Medicaid programs, managed care organizations (MCOs), and caregivers through one platform with large-scale functionality. This study focuses on user interface automation testing for that web application. The quality assurance team must execute many manual user interface test cases during the development process to confirm there are no regression bugs. The team automated 346 test cases, and the UI automation test execution time was over 17 hours. The business requirement was to reduce the execution time in order to release high-quality products quickly, so the quality assurance automation team modernized the test automation framework to optimize the execution time. The base of the web UI automation test environment is Selenium, and the test code is written in Python. Adopting a compiled language for test code leads to an inefficient workflow when introducing scalability into a traditional test automation environment, so a scripting language was adopted. The scalability implementation mainly relies on AWS serverless technology, the Elastic Container Service. Scalability here means the ability to automatically set up computers for test automation and increase or decrease the number of computers running those tests. This scalable mechanism lets test cases run in parallel, so the test execution time decreases dramatically. Introducing scalable test automation does more than reduce execution time: because test cases can be executed at the same time, some challenging bugs, such as race conditions, may also be detected.
If API and unit tests are implemented, test strategies can be adopted more efficiently alongside this scalability testing. However, in web applications, as a practical matter, API and unit testing cannot cover 100% of functional testing since they do not reach front-end code. This study applied a scalable UI automation testing strategy to the large-scale homecare management system and confirmed both the optimization of test case execution time and the detection of a challenging bug. The study first describes the detailed architecture of the scalable test automation environment, then reports the actual reduction in execution time and an example of challenging issue detection.
Keywords: aws, elastic container service, scalability, serverless, ui automation test
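The core of the scalable mechanism described above, splitting a suite into shards that run in parallel on separate machines, can be sketched in miniature. The code below is illustrative, not HHAexchange's actual framework: threads stand in for ECS containers, and `run_shard` is a placeholder for executing a Selenium suite inside one container.

```python
import concurrent.futures

def shard(test_cases, n_workers):
    """Split test cases into n_workers roughly equal shards (round-robin)."""
    shards = [[] for _ in range(n_workers)]
    for i, case in enumerate(test_cases):
        shards[i % n_workers].append(case)
    return shards

def run_shard(cases):
    """Stand-in for 'run these UI tests inside one container'.

    In the real setup each shard would be handed to its own ECS task running
    the Selenium suite; here every case simply 'passes' so the sketch runs.
    """
    return {case: "passed" for case in cases}

def run_parallel(test_cases, n_workers=8):
    """Fan the shards out to workers and merge their result dictionaries."""
    results = {}
    with concurrent.futures.ThreadPoolExecutor(max_workers=n_workers) as pool:
        for shard_result in pool.map(run_shard, shard(test_cases, n_workers)):
            results.update(shard_result)
    return results

cases = [f"test_case_{i:03d}" for i in range(346)]  # the 346 automated cases
results = run_parallel(cases)
```

With 8 even shards, a 17-hour serial suite would in the ideal case finish in roughly one-eighth of the time, bounded in practice by the slowest shard.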
Procedia PDF Downloads 107
2951 Classical and Bayesian Inference of the Generalized Log-Logistic Distribution with Applications to Survival Data
Authors: Abdisalam Hassan Muse, Samuel Mwalili, Oscar Ngesa
Abstract:
A generalized log-logistic distribution with variable shapes of the hazard rate is introduced and studied, extending the log-logistic distribution by adding an extra parameter to the classical distribution and thereby gaining greater flexibility in analysing and modeling various data types. The proposed distribution contains a large number of well-known lifetime distributions as special sub-models, such as the Weibull, log-logistic, exponential, and Burr XII distributions. Its basic mathematical and statistical properties are derived. The method of maximum likelihood is adopted for estimating the unknown parameters of the proposed distribution, and a Monte Carlo simulation study is carried out to assess the behavior of the estimators. The importance of this distribution lies in its ability to model both monotone (increasing and decreasing) and non-monotone (unimodal and bathtub-shaped or reversed bathtub-shaped) hazard rate functions, which are quite common in survival and reliability data analysis. Furthermore, the flexibility and usefulness of the proposed distribution are illustrated on a real-life data set and compared to its sub-models (the Weibull, log-logistic, and Burr XII distributions) and to other 3-parameter parametric survival distributions, such as the exponentiated Weibull, the 3-parameter lognormal, the 3-parameter gamma, the 3-parameter Weibull, and the 3-parameter log-logistic (also known as shifted log-logistic) distributions. The proposed distribution provided a better fit than all of the competing distributions based on the goodness-of-fit tests, the log-likelihood, and the information criterion values. Finally, Bayesian analysis and an assessment of the performance of Gibbs sampling for the data set are also carried out.
Keywords: hazard rate function, log-logistic distribution, maximum likelihood estimation, generalized log-logistic distribution, survival data, Monte Carlo simulation
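The generalized distribution proposed in the paper is not available in standard libraries, but the workflow it describes (maximum likelihood fitting followed by model comparison via information criteria) can be sketched with the classical log-logistic, which SciPy implements as `scipy.stats.fisk`. The parameter values and the Weibull comparison below are illustrative assumptions, not the paper's data set.

```python
import numpy as np
from scipy import stats

# Simulated 'survival times' from a log-logistic (Fisk) distribution
data = stats.fisk.rvs(c=2.0, loc=0, scale=1.5, size=3000, random_state=12345)

# Fit the log-logistic by maximum likelihood (location fixed at 0)
c_hat, loc_hat, scale_hat = stats.fisk.fit(data, floc=0)

# Fit a competing Weibull model and compare via AIC = 2k - 2*loglik
k_hat, wloc, wscale = stats.weibull_min.fit(data, floc=0)
ll_fisk = np.sum(stats.fisk.logpdf(data, c_hat, loc_hat, scale_hat))
ll_weib = np.sum(stats.weibull_min.logpdf(data, k_hat, wloc, wscale))
aic_fisk = 2 * 2 - 2 * ll_fisk      # two free parameters each (shape, scale)
aic_weibull = 2 * 2 - 2 * ll_weib
```

Because the data were generated from a log-logistic, the Fisk fit should recover the true shape and scale and achieve the lower AIC, mirroring the comparison strategy the abstract describes.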
Procedia PDF Downloads 202
2950 Flood Devastation Assessment Through Mapping in Nigeria-2022 using Geospatial Techniques
Authors: Hafiz Muhammad Tayyab Bhatti, Munazza Usmani
Abstract:
Floods, among nature's most destructive occurrences, do immense damage to communities and cause heavy economic losses. Nigeria, specifically southern Nigeria, is known for being prone to flooding. Although flooding occurs frequently in Nigeria, the floods of 2022 were the worst since those of 2012. Flood vulnerability analysis and mapping are still lacking in this region because historical hydrological measurements and surveys of flood effects are very limited, which makes it difficult to develop and put into practice efficient flood protection measures. Remote sensing and Geographic Information Systems (GIS) are useful approaches for detecting, determining, and estimating flood extent and its impacts. In this study, NOAA VIIRS flood-water fraction data have been used to extract the flood extent, which was then fused with GIS data for zonal statistical analysis. The estimated flooded areas are validated using satellite imagery from the Moderate Resolution Imaging Spectroradiometer (MODIS). The goal is to map and study flood extent, flood hazards, and their effects on the population, schools, and health facilities for each state of Nigeria. The resulting flood hazard maps clearly show areas with high risk levels and serve as an important reference for planning and implementing future flood mitigation and control strategies. Overall, the study demonstrated the viability of using the chosen GIS and remote sensing approaches to detect at-risk regions in order to protect local populations and enhance disaster response capabilities during natural disasters.
Keywords: flood hazards, remote sensing, damage assessment, GIS, geospatial analysis
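The zonal statistical step described above, overlaying a flood-water fraction raster with population data per administrative zone, can be sketched as follows. The tiny rasters and the 0.4 flood threshold are invented for illustration and do not come from the study.

```python
import numpy as np

# Toy rasters on a common grid: flood-water fraction per cell (0..1),
# population per cell, and a zone (state) id per cell.
flood_fraction = np.array([[0.0, 0.2, 0.9],
                           [0.5, 0.0, 1.0],
                           [0.0, 0.0, 0.3]])
population = np.array([[100, 200, 50],
                       [80,  10,  40],
                       [300, 60,  20]])
zones = np.array([[1, 1, 2],
                  [1, 2, 2],
                  [3, 3, 3]])

def zonal_flood_stats(flood_fraction, population, zones, threshold=0.4):
    """Per-zone share of flooded cells and population exposed to flooding."""
    flooded = flood_fraction >= threshold   # binary flood mask
    stats = {}
    for z in np.unique(zones):
        in_zone = zones == z
        stats[int(z)] = {
            "flooded_cell_share": float(flooded[in_zone].mean()),
            "exposed_population": int(population[in_zone & flooded].sum()),
        }
    return stats

zone_stats = zonal_flood_stats(flood_fraction, population, zones)
```

In the actual workflow the same aggregation would run over full-resolution VIIRS and population rasters, with one zone per Nigerian state.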
Procedia PDF Downloads 137
2949 Cricket Injury Surveillence by Mobile Application Technology on Smartphones
Authors: Najeebullah Soomro, Habib Noorbhai, Mariam Soomro, Ross Sanders
Abstract:
The demands on cricketers are increasing, with more matches being played in a shorter period of time and at greater intensity. A ten-year report on injury incidence for Australian elite cricketers between the 2000 and 2011 seasons revealed an injury incidence rate of 17.4% [1]. In the 2009-10 season, 24% of Australian fast bowlers missed matches through injury [1]. Injury rates are even higher in junior cricketers, with an injury incidence of 25%, or 2.9 injuries per 100 player hours, reported [2]. Traditionally, injury surveillance has relied on the use of paper-based forms or complex computer software [3,4], which makes injury reporting laborious for the staff involved. The purpose of this presentation is to describe a smartphone-based mobile application as a means of improving injury surveillance in cricket. Methods: The researchers developed the CricPredict mobile app for Android, the world's most widely used smartphone platform. It uses the Qt SDK (Software Development Kit) as the IDE (Integrated Development Environment), with C++ as the programming language and the Qt framework, which provides cross-platform capability that will allow the app to be ported to other operating systems (iOS, Mac, Windows) in the future. The wireframes (graphical user interface) were developed using Justinmind Prototyper Pro Edition (Ver. 6.1.0). CricPredict enables convenient and immediate recording of injury and training status. When an injury is reported, automated follow-up questions cover the site of injury, nature of injury, mechanism of injury, initial treatment, referral, and action taken after injury. Direct communication with the player then enables assessment of severity and diagnosis. CricPredict also allows the coach to maintain and track each player's attendance at matches and training sessions. Workload data can also be recorded by either the player or the coach by logging the number of balls bowled or played in a day.
This is helpful in formulating injury rates and time lost due to injuries. All data are stored on a secure, password-protected server. Outcomes and Significance: CricPredict offers a simple, user-friendly tool for the coaching or medical staff associated with teams to predict, record, and report injuries. This system will assist teams in capturing injury data with ease, thus allowing a better understanding of injuries associated with cricket and potentially optimizing player performance.
Keywords: injury, cricket, surveillance, smartphones, mobile
Procedia PDF Downloads 459
2948 The Impact of the Enron Scandal on the Reputation of Corporate Social Responsibility Rating Agencies
Authors: Jaballah Jamil
Abstract:
KLD Research & Analytics (Peter Kinder, Steve Lydenberg and Amy Domini) is an independent intermediary of social performance information that adopts an investor-pay model. The KLD rating agency does not explicitly monitor the rated firms, which suggests that KLD ratings may not include private information. Moreover, the incapacity of KLD to accurately predict the extra-financial rating of Enron casts doubt on the reliability of KLD ratings. Therefore, we first investigate whether KLD ratings affect investors' perception by studying the effect of KLD rating changes on firms' financial performance. Second, we study the impact of the Enron scandal on investors' perception of KLD rating changes by comparing the effect of KLD rating changes on firms' financial performance before and after the failure of Enron. We propose an empirical study that relates the returns of a number of equally-weighted portfolios, excess stock returns, and the book-to-market ratio to different dimensions of KLD social responsibility ratings. We first find that over the last two decades KLD rating changes have had a significant negative influence on the stock returns and book-to-market ratio of rated firms. This finding suggests that a rise in a firm's corporate social responsibility rating lowers its risk. Second, to assess the Enron scandal's effect on the perception of KLD ratings, we compare the effect of KLD rating changes before and after the scandal and find that the significant effect disappears afterwards. This finding supports the view that the Enron scandal annihilated KLD's effect on socially responsible investors. Therefore, our findings may call into question the results of recent studies that use KLD ratings as a proxy for corporate social responsibility behavior.
Keywords: KLD social rating agency, investors' perception, investment decision, financial performance
Procedia PDF Downloads 439
2947 High-Performance Thin-layer Chromatography (HPTLC) Analysis of Multi-Ingredient Traditional Chinese Medicine Supplement
Authors: Martin Cai, Khadijah B. Hashim, Leng Leo, Edmund F. Tian
Abstract:
Analysis of traditional Chinese medicine (TCM) supplements has always been a laborious task, particularly in the case of multi-ingredient formulations. Traditionally, herbal extracts are analysed using one or a few marker compounds. In recent years, however, pharmaceutical companies have been introducing health supplements of TCM active ingredients to cater to the needs of consumers in today's fast-paced society. As such, new problems arise in the identification of composition as well as in quality analysis. In most products or supplements formulated with multiple TCM herbs, the chemical composition and nature of each raw material differ greatly from the others in the formulation. This results in a requirement for individual analytical processes in order to identify the marker compounds in the various botanicals. Thin-layer chromatography (TLC) is a simple, cost-effective, yet well-regarded method for the analysis of natural products, both as a Pharmacopeia-approved method for the identification and authentication of herbs and as a great analytical tool for discovering the chemical composition of herbal extracts. Recent technical advances introduced high-performance TLC (HPTLC), where, with the help of automated equipment and improvements in the chromatographic materials, both quality and reproducibility are greatly improved, allowing for highly standardised analysis with greater detail. Here we report an industrial consultancy project with ONI Global Pte Ltd for the analysis of LAC Liver Protector, a TCM formulation aimed at improving liver health. The aim of this study was to identify 4 key components of the supplement using HPTLC, following protocols derived from Chinese Pharmacopeia standards.
By comparing the TLC profiles of the supplement to extracts of the herbs listed on the label, this project proposes a simple and cost-effective analysis of the presence of the 4 marker compounds in the multi-ingredient formulation using 4 different HPTLC methods. With the increasing trend of small and medium-sized enterprises (SMEs) bringing natural products and health supplements to the market, it is crucial that the quality of both raw materials and end products be well assured for the protection of consumers. With HPTLC, science can be incorporated to help SMEs with their quality control, thereby ensuring product quality.
Keywords: traditional Chinese medicine supplement, high performance thin layer chromatography, active ingredients, product quality
Procedia PDF Downloads 280
2946 Assessing the Feasibility of Commercial Meat Rabbit Production in the Kumasi Metropolis of Ghana
Authors: Nana Segu Acquaah-Harrison, James Osei Mensah, Richard Aidoo, David Amponsah, Amy Buah, Gilbert Aboagye
Abstract:
The study aimed at assessing the feasibility of commercial meat rabbit production in the Kumasi Metropolis of Ghana. Structured and unstructured questionnaires were used to obtain information from two hundred meat consumers and 15 meat rabbit farmers. Data were analyzed using the Net Present Value (NPV), Internal Rate of Return (IRR), and Benefit-Cost Ratio (BCR)/Profitability Index (PI) techniques, percentages, and the chi-square contingency test. The study found that the current demand for rabbit meat is low (36%), but the desirable nutritional attributes of rabbit meat and other socio-economic factors of meat consumers make the potential demand high (69%). It was estimated that GH¢5,292 (approximately $2,672) was needed as start-up capital for a 40-doe meat rabbit farm in the Kumasi Metropolis. The cost of breeding animals, housing, and equipment formed 12.47%, 53.97%, and 24.87%, respectively, of the initial estimated capital. A Net Present Value of GH¢5,910.75 (approximately $2,984) was obtained at the end of the fifth year, with an internal rate of return and profitability index of 70% and 1.12, respectively. The major constraints identified in meat rabbit production were the low price of rabbit meat, shortage of fodder, pests and diseases, high cost of capital, high cost of operating materials, and veterinary care. Based on the analysis, it was concluded that meat rabbit production is feasible in the Kumasi Metropolis of Ghana. The study recommends mass advertisement, farmer associations, and the adoption of new technologies in the production process to help enhance productivity.
Keywords: feasibility, commercial meat rabbit, production, Kumasi, Ghana
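The NPV/IRR/PI technique used in the analysis can be reproduced in a few lines. The functions below implement the standard definitions; the year-by-year cash flows and the discount rate are hypothetical, since the abstract reports only the start-up capital (GH¢5,292), the fifth-year NPV (GH¢5,910.75), the IRR (70%), and the PI (1.12), not the underlying flows.

```python
def npv(rate, cashflows):
    """Net present value; cashflows[0] occurs at time 0 (the initial outlay)."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.99, hi=10.0, tol=1e-9):
    """Internal rate of return by bisection (assumes one sign change in NPV)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if npv(mid, cashflows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def profitability_index(rate, cashflows):
    """PI = present value of future inflows / initial outlay."""
    return npv(rate, [0.0] + list(cashflows[1:])) / -cashflows[0]

# Hypothetical cash flows for a 40-doe farm (illustrative only): GH¢5,292
# start-up capital followed by five years of assumed net inflows.
flows = [-5292.0, 2500.0, 2800.0, 3000.0, 3200.0, 3400.0]
rate = 0.20  # assumed discount rate
project_npv = npv(rate, flows)
project_irr = irr(flows)
project_pi = profitability_index(rate, flows)
```

A project is accepted under these criteria when NPV > 0, IRR exceeds the discount rate, and PI > 1, which is the decision logic behind the feasibility conclusion above.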
Procedia PDF Downloads 132
2945 A Morphological Examination of Urban Renewal Processes: The Sample of Konya City
Authors: Muzaffer Ali Yaygın, Mehmet Topçu
Abstract:
This research aims to investigate morphological changes in urban patterns in urban renewal areas by using geographic information systems, and to reveal the pattern differences that occur before and after urban renewal processes by applying a morphological analysis. The concept of urban morphology is not incorporated into urban renewal and urban planning practices in Turkey. This situation destroys the structural characteristics of urban space that appear as a consequence of changes at the city, street, or plot level. Different approaches and renewal interventions in urban settlements, which are formed as a reflection of cultural issues, may have positive and negative results. In this study, a morphological analysis has been applied to an urban renewal area covering 325 ha in Konya, a city in which urban renewal projects have gained speed with increasing economic investment. The study discusses the relationship between urban renewal and urban morphology, the varied academic approaches to urban morphology, the components of urban morphology, changes in the plot pattern, the numerical differences in road, construction, and green space ratios before and after the renewal project, and the results of the morphological analysis. The built-up area shows significant differences when compared to the previous situation: the amount of green area decreased significantly in quantitative terms, the transportation system was changed completely, and property ownership was reconstructed without taking the previous situation into account. The findings show that urban renewal projects in Turkey are put into practice with a rent-oriented approach and without in-depth analysis. The paper discusses the morphological dimension of urban renewal projects in Turkey through a case study from the city of Konya.
Keywords: Konya, pattern, urban morphology, urban renewal
Procedia PDF Downloads 369
2944 Omni-Modeler: Dynamic Learning for Pedestrian Redetection
Authors: Michael Karnes, Alper Yilmaz
Abstract:
This paper presents the application of the Omni-Modeler to pedestrian redetection. The pedestrian redetection task creates several challenges for deep neural networks (DNNs) due to the variation of pedestrian appearance with camera position, the variety of environmental conditions, and the specificity required to recognize one pedestrian from another. DNNs require significant training sets and are not easily adapted to changes in class appearances or changes in the set of classes held in their knowledge domain. Pedestrian redetection requires an algorithm that can actively manage its knowledge domain as individuals move in and out of the scene, as well as learn individual appearances from a few frames of a video. The Omni-Modeler is a dynamically learning few-shot visual recognition algorithm developed for tasks with limited training data availability. It adapts the knowledge domain of pre-trained deep neural networks to novel concepts with a calculated localized language encoder. The Omni-Modeler knowledge domain is generated by creating a dynamic dictionary of concept definitions, which is directly updatable as new information becomes available; query images are identified through nearest-neighbor comparison to the learned object definitions. The study presented in this paper evaluates its performance in re-identifying individuals as they move through a scene in both single-camera and multi-camera tracking applications. The results demonstrate that the Omni-Modeler shows potential for across-camera pedestrian redetection and is highly effective for single-camera redetection, with 93% accuracy across 30 individuals using 64 example images per individual.
Keywords: dynamic learning, few-shot learning, pedestrian redetection, visual recognition
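The dynamic dictionary with nearest-neighbor identification that the Omni-Modeler is built around can be sketched as follows. This is a simplified stand-in, not the published implementation: in the real system the embeddings would come from the pre-trained DNN encoder, whereas here they are toy vectors, and matching uses plain cosine similarity.

```python
import numpy as np

class ConceptDictionary:
    """Dynamic dictionary of concept definitions; queries are matched by
    nearest-neighbor cosine similarity to the stored embeddings."""

    def __init__(self):
        self.defs = {}  # concept name -> list of embedding vectors

    def enroll(self, name, embedding):
        """Add an example embedding for a concept (e.g. a pedestrian)."""
        self.defs.setdefault(name, []).append(np.asarray(embedding, float))

    def forget(self, name):
        """Drop a concept when the individual leaves the scene."""
        self.defs.pop(name, None)

    def identify(self, query):
        """Return the best-matching concept name and its cosine similarity."""
        q = np.asarray(query, float)
        q = q / np.linalg.norm(q)
        best_name, best_sim = None, -1.0
        for name, vecs in self.defs.items():
            for v in vecs:
                sim = float(q @ (v / np.linalg.norm(v)))
                if sim > best_sim:
                    best_name, best_sim = name, sim
        return best_name, best_sim

# Toy usage: two 'pedestrians' enrolled from hypothetical encoder outputs
cd = ConceptDictionary()
cd.enroll("pedestrian_a", [1.0, 0.0, 0.0])
cd.enroll("pedestrian_b", [0.0, 1.0, 0.0])
who, sim = cd.identify([0.9, 0.1, 0.0])
```

The `enroll`/`forget` pair is what makes the knowledge domain directly updatable as individuals enter and leave the scene, the property the abstract highlights.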
Procedia PDF Downloads 76
2943 Holographic Art as an Approach to Enhance Visual Communication in Egyptian Community: Experimental Study
Authors: Diaa Ahmed Mohamed Ahmedien
Abstract:
It cannot be denied that the most important trends in interactive art have emerged from significant developments in the modern sciences, and holographic art is no exception: it is one of the major contemporary trends in interactive visual art. The holographic technique arose from modern physics in the late 1940s, when Dennis Gabor developed it to improve the quality of electron microscope images; it later reached the art world through Margaret Benyon's exhibitions and has since passed through many technical and visual refinements over more than 70 years in the visual arts. As a modest extension of these efforts, this research enrolled a sample of ordinary members of the Egyptian community in a holographic recording program to record objects or antiques they value, and thereby examined their ability to interact with modern techniques in visual communication art. The research addresses three main questions: can analog holographic techniques unleash new theoretical and practical knowledge of interactive art for the public in the Egyptian community? To what extent can holographic art become familiar to the public and enable them to produce interactive artistic samples? Is it possible to build a holographic interactive program for ordinary people that leads them to a better understanding of visual communication and an awareness of interactive art trends?
In its first part, this research relied on experimental methods: it was conducted in the laser lab at Cairo University using an Nd:YAG laser (532 nm) and a holographic optical layout, with selected Egyptian participants who were asked to record an object they value after learning the recording methods. In its second part, discussion panels were held to examine the results and how participants felt about their holographic artistic products, through surveys, questionnaires, note-taking, and critiques of the holographic artworks. The practical experiments and final discussions indicate that the experiment enabled most participants to undergo a paradigm shift in their visual and conceptual experience toward greater interaction with contemporary visual art trends. The work emphasizes the mature relationship between art, science, and technology, and the role of that relationship in spreading interactive art through our societies, particularly among those who have never before taken part in practical art programs.
Keywords: Egyptian community, holographic art, laser art, visual art
Procedia PDF Downloads 479
2942 Hip Resurfacing Makes for Easier Surgery with Better Functional Outcomes at Time of Revision: A Case Controlled Study
Authors: O. O. Onafowokan, K. Anderson, M. R. Norton, R. G. Middleton
Abstract:
Revision total hip arthroplasty (THA) is known to be a challenging procedure with the potential for poor outcomes. Because it does not encroach on the metaphysis, hip resurfacing arthroplasty (HRA) is classified as a bone-conserving procedure. Although the literature postulates that this is an advantage at the time of revision surgery, there is no evidence to support or refute the claim. We identified 129 hips that had undergone HRA and 129 controls undergoing first revision THA, and recorded the clinical assessment and survivorship of implants in a multi-surgeon, single-centre, retrospective case-control series for both arms, matched for age and sex. Data collected included demographics, indications for surgery, Oxford Hip Score (OHS), length of surgery, length of hospital stay, blood transfusion, implant complexity, and further surgical procedures. Significance was taken as p < 0.05. Mean follow-up was 7.5 years (1 to 15). There was a significant 6-point difference in postoperative OHS in favour of the revision resurfacing group (p = 0.0001). The revision HRA group also recorded a 48-minute shorter length of surgery (p < 0.0001), a 2-day shorter hospital stay (p = 0.018), a reduced need for blood transfusion (p = 0.0001), less complex revision implants (p = 0.001), and a reduced probability of requiring further surgery (p = 0.003). While we acknowledge the limitations of this study, our results suggest that, in contrast to THA, the bone-conserving element of HRA may make for a less traumatic revision procedure with better functional outcomes. Use of HRA has declined dramatically because of concerns about metallosis. Nevertheless, this information remains relevant when counselling young, active patients about their arthroplasty options and may become pertinent in the future if the promise of ceramic hip resurfacing is realized.
Keywords: hip resurfacing, metallosis, revision surgery, total hip arthroplasty
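A between-group comparison like the OHS difference reported above can be illustrated with a simple two-sample statistic. The score lists below are invented stand-ins (chosen so the groups differ by about 6 points on the 0-48 OHS scale), not the study's data, and the abstract does not state which test its p-values came from; a Welch t-statistic is used here purely as an example.

```python
# Illustrative sketch of a two-group outcome comparison (revision HRA vs
# revision THA). The samples are fabricated placeholders, not study data.
import math
import statistics

def welch_t(sample_a, sample_b):
    """Welch's t-statistic for two independent samples with unequal variance."""
    mean_a, mean_b = statistics.mean(sample_a), statistics.mean(sample_b)
    var_a, var_b = statistics.variance(sample_a), statistics.variance(sample_b)
    se = math.sqrt(var_a / len(sample_a) + var_b / len(sample_b))
    return (mean_a - mean_b) / se

# Assumed postoperative OHS samples (0-48 scale); illustrative values only.
ohs_hra = [40, 42, 38, 44, 41, 39, 43, 40]
ohs_tha = [34, 36, 32, 38, 35, 33, 37, 34]

diff = statistics.mean(ohs_hra) - statistics.mean(ohs_tha)
t = welch_t(ohs_hra, ohs_tha)
print(f"mean difference: {diff:.1f} points, Welch t = {t:.2f}")
```

With the real samples, the t-statistic would be converted to a p-value against the relevant t-distribution (e.g. via `scipy.stats.ttest_ind` with `equal_var=False`).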
Procedia PDF Downloads 88