Search results for: AI interpretation of GPR exploration images
391 Automation of Finite Element Simulations for the Design Space Exploration and Optimization of Type IV Pressure Vessel
Authors: Weili Jiang, Simon Cadavid Lopera, Klaus Drechsler
Abstract:
Fuel cell vehicles have become the most competitive solution for the transportation sector in the hydrogen economy. The Type IV pressure vessel is currently the most popular and widely developed technology for on-board storage, owing to its high reliability and relatively low cost. Due to stringent requirements on mechanical performance, the pressure vessel consumes a large amount of composite material, a major cost driver for hydrogen tanks. Evidently, the optimization of the composite layup design shows great potential for reducing overall material usage, yet it requires a comprehensive understanding of the underlying mechanisms as well as of the influence of different design parameters on mechanical performance. Given the materials and manufacturing processes by which Type IV pressure vessels are made, their design and optimization are a nuanced subject. The manifold of possible stacking sequences and fiber orientation variations has an outstanding effect on vessel strength due to the anisotropy of carbon fiber composites, which makes the design space high dimensional. Each variation of the design parameters requires computational resources. Using finite element analysis to evaluate different designs is the most common method; however, the modeling, setup, and simulation process can be very time consuming and result in high computational cost. For this reason, it is necessary to build a reliable automation scheme to set up and analyze the diverse composite layups. In this research, the simulation of different tank designs with respect to various parameters is conducted and automated in the commercial finite element analysis framework Abaqus. Notably, the model of the composite overwrap is generated automatically using the Abaqus-Python scripting interface.
The prediction of the winding angle of each layer and of the corresponding thickness variation in the dome region is the most crucial step of the modeling; it is calculated and implemented using analytical methods. Subsequently, the different composite layups are simulated as axisymmetric models to reduce the computational complexity and the calculation time. Finally, the results are evaluated and compared with respect to the ultimate tank strength. By automatically modeling, evaluating, and comparing various composite layups, this system is applicable to the optimization of tank structures. As mentioned above, the mechanical properties of the pressure vessel are highly dependent on the composite layup, which requires a large number of simulations. Consequently, automating the simulation process provides a rapid way to compare the various designs and indicate the optimal one. Moreover, this automation process can also be used to create a databank of layups and corresponding mechanical properties, with few preliminary configuration steps, for further case analysis; machine learning could then, for example, retrieve the optimum directly from the data pool without running simulations.

Keywords: type IV pressure vessels, carbon composites, finite element analysis, automation of simulation process
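The analytical winding-angle prediction mentioned in the abstract above is commonly based on Clairaut's relation for geodesic winding, with the dome thickness following from fiber continuity. A minimal sketch, not the authors' implementation; the function names and the simplified continuity model are assumptions:

```python
import math

def geodesic_winding_angle(r, r_polar):
    """Clairaut's relation for geodesic winding on a dome of revolution:
    sin(alpha) = r_polar / r, valid for r >= r_polar (degrees)."""
    return math.degrees(math.asin(r_polar / r))

def dome_thickness(r, r_polar, t_cyl, r_cyl):
    """Layer thickness on the dome from fiber-volume continuity:
    t(r) * r * cos(alpha(r)) is constant along the meridian, anchored
    to the thickness t_cyl at the cylinder radius r_cyl."""
    alpha = math.asin(r_polar / r)
    alpha_cyl = math.asin(r_polar / r_cyl)
    return t_cyl * (r_cyl * math.cos(alpha_cyl)) / (r * math.cos(alpha))
```

As expected, the winding angle approaches 90° and the layer thickens as the radius nears the polar opening.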
Procedia PDF Downloads 134
390 Efficient Reuse of Exome Sequencing Data for Copy Number Variation Callings
Authors: Chen Wang, Jared Evans, Yan Asmann
Abstract:
With the quick evolution of next-generation sequencing techniques, whole-exome or exome-panel data have become a cost-effective way to detect small exonic mutations, but there is a growing desire to accurately detect copy number variations (CNVs) as well. To address this research and clinical need, we developed a sequencing coverage pattern-based method for copy number detection, data integrity checks, CNV calling, and visualization reports. The developed methodology includes complete automation to increase usability, genome content-coverage bias correction, CNV segmentation, data quality reports, and publication-quality images. Poor-quality outlier samples are identified and removed automatically. Multiple experimental batches are routinely detected and further reduced to a clean subset of samples before analysis. Algorithmic improvements were also made to enhance both somatic CNV detection and germline CNV detection in trio families. Additionally, a set of utilities is included to help users produce CNV plots for genes of interest. We demonstrate the somatic CNV enhancements by accurately detecting CNVs in whole-exome data from The Cancer Genome Atlas cancer samples and in a lymphoma case study with paired tumor and normal samples. We also show efficient reuse of existing exome sequencing data for improved germline CNV calling in a trio family from the phase-III study of the 1000 Genomes Project, detecting CNVs with various modes of inheritance. The performance of the developed method is evaluated by comparing its CNV calls with results from orthogonal copy number platforms.
Through our case studies, reusing exome sequencing data for calling CNVs offers several notable benefits, including better quality control of exome sequencing data, improved joint analysis with single nucleotide variant calls, and novel genomic discoveries from under-utilized existing whole-exome and custom exome-panel data.

Keywords: bioinformatics, computational genetics, copy number variations, data reuse, exome sequencing, next generation sequencing
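The core signal behind coverage pattern-based CNV calling of this kind is typically a per-exon log2 ratio of normalized coverage between sample and reference. A minimal sketch; the function name and the simple median normalization are illustrative assumptions, not the authors' pipeline:

```python
import math
from statistics import median

def log2_ratios(sample_cov, reference_cov):
    """Per-exon log2 copy ratios after median normalization of each
    coverage profile; 0 suggests diploid, +1 a doubling, -1 a loss."""
    s_med = median(sample_cov)
    r_med = median(reference_cov)
    return [math.log2((s / s_med) / (r / r_med))
            for s, r in zip(sample_cov, reference_cov)]
```

Segmentation and bias correction, as described in the abstract, would then operate on these ratios rather than on raw coverage.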
Procedia PDF Downloads 255
389 Detection and Classification of Strabismus Using a Convolutional Neural Network and Spatial Image Processing
Authors: Anoop T. R., Otman Basir, Robert F. Hess, Eileen E. Birch, Brooke A. Koritala, Reed M. Jost, Becky Luu, David Stager, Ben Thompson
Abstract:
Strabismus refers to a misalignment of the eyes. Early detection and treatment of strabismus in childhood can prevent the permanent vision loss caused by abnormal development of visual brain areas. We developed a two-stage method for strabismus detection and classification based on photographs of the face. The first stage detects the presence or absence of strabismus, and the second stage classifies the type of strabismus. The first stage comprises face detection using a Haar cascade, facial landmark estimation, face alignment, aligned-face landmark detection, segmentation of the eye region, and detection of strabismus using a VGG-16 convolutional neural network. Face alignment transforms the face to a canonical pose to ensure consistency in subsequent analysis. Using the facial landmarks, the eye region is segmented from the aligned face and fed into the VGG-16 CNN, which has been trained to classify strabismus. The CNN determines whether strabismus is present and classifies its type (exotropia, esotropia, or vertical deviation). If stage 1 detects strabismus, the eye region image is fed into stage 2, which starts with the estimation of the pupil center coordinates using a Mask R-CNN deep neural network. Then, the distance between the pupil coordinates and the eye landmarks is calculated, along with the angles that the pupil coordinates make with the horizontal and vertical axes. The distance and angle information is used to characterize the degree and direction of the strabismic eye misalignment. The model was tested on 100 clinically labeled images of children with (n = 50) and without (n = 50) strabismus. The True Positive Rate (TPR) and False Positive Rate (FPR) of the first stage were 94% and 6%, respectively. The classification stage produced TPRs of 94.73%, 94.44%, and 100% for esotropia, exotropia, and vertical deviation, respectively.
The method also had FPRs of 5.26%, 5.55%, and 0% for esotropia, exotropia, and vertical deviation, respectively. Adding one more feature related to the location of corneal light reflections may reduce the FPR, which was primarily due to children with pseudo-strabismus (the appearance of strabismus caused by a wide nasal bridge or skin folds on the nasal side of the eyes).

Keywords: strabismus, deep neural networks, face detection, facial landmarks, face alignment, segmentation, VGG 16, mask R-CNN, pupil coordinates, angle deviation, horizontal and vertical deviation
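The distance-and-angle characterization in stage 2 reduces to simple plane geometry between the detected pupil center and an eye landmark. A hedged sketch; the function name and coordinate convention (image x/y) are assumptions:

```python
import math

def pupil_deviation(pupil, eye_center):
    """Distance and direction (angle versus the horizontal axis, in
    degrees) from an eye landmark to the detected pupil center."""
    dx = pupil[0] - eye_center[0]
    dy = pupil[1] - eye_center[1]
    return math.hypot(dx, dy), math.degrees(math.atan2(dy, dx))
```

A horizontal deviation (angle near 0° or 180°) would suggest eso-/exotropia, while an angle near ±90° would suggest a vertical deviation.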
Procedia PDF Downloads 92
388 Application of Ground-Penetrating Radar in Environmental Hazards
Authors: Kambiz Teimour Najad
Abstract:
The basic methodology of GPR involves the use of a transmitting antenna to send electromagnetic waves into the subsurface, which then bounce back to the surface and are detected by a receiving antenna. The transmitter and receiver antennas are typically placed on the ground surface and moved across the area of interest to create a profile of the subsurface. The GPR system consists of a control unit that powers the antennas and records the data, as well as a display unit that shows the results of the survey. The control unit sends a pulse of electromagnetic energy into the ground, which propagates through the soil or rock until it encounters a change in material or structure. When the electromagnetic wave encounters a buried object or structure, some of the energy is reflected back to the surface and detected by the receiving antenna. The GPR data is then processed using specialized software that analyzes the amplitude and travel time of the reflected waves. By interpreting the data, GPR can provide information on the depth, location, and nature of subsurface features and structures. GPR has several advantages over other geophysical survey methods, including its ability to provide high-resolution images of the subsurface and its non-invasive nature, which minimizes disruption to the site. However, the effectiveness of GPR depends on several factors, including the type of soil or rock, the depth of the features being investigated, and the frequency of the electromagnetic waves used. In environmental hazard assessments, GPR can be used to detect buried structures, such as underground storage tanks, pipelines, or utilities, which may pose a risk of contamination to the surrounding soil or groundwater. GPR can also be used to assess soil stability by identifying areas of subsurface voids or sinkholes, which can lead to the collapse of the surface. 
Additionally, GPR can be used to map the extent and movement of groundwater contamination, which is critical in designing effective remediation strategies. In summary, the methodology of GPR in environmental hazard assessments involves the use of electromagnetic waves to create high-resolution images of the subsurface, which are then analyzed to provide information on the depth, location, and nature of subsurface features and structures. This information is critical for identifying and mitigating environmental hazards, and the non-invasive nature of GPR makes it a valuable tool in this field.

Keywords: GPR, hazard, landslide, rock fall, contamination
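The depth estimate that underlies the travel-time analysis described above follows directly from the two-way travel time and the medium's relative permittivity. A minimal sketch; the function name and units are illustrative:

```python
def gpr_depth(two_way_time_ns, eps_r):
    """Reflector depth in meters from the two-way travel time (ns):
    the wave speed in the medium is v = c / sqrt(eps_r), and the
    pulse travels down and back, hence the division by two."""
    c = 0.299792458  # speed of light in m/ns
    v = c / eps_r ** 0.5
    return v * two_way_time_ns / 2.0
```

This also illustrates why soil type matters so much for GPR: wetter soils have a higher relative permittivity, so the same travel time corresponds to a shallower reflector.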
Procedia PDF Downloads 80
387 Impact of Short-Term Drought on Vegetation Health Condition in the Kingdom of Saudi Arabia Using Space Data
Authors: E. Ghoneim, C. Narron, I. Iqbal, I. Hassan, E. Hammam
Abstract:
The scarcity of water is becoming a more prominent threat, especially in areas that are already arid. Although the Kingdom of Saudi Arabia (KSA) is an arid country, its southwestern region offers a wide variety of botanical landscapes, many of which are wooded forests, while the eastern and northern regions contain large areas of groundwater-irrigated farmland. At present, some parts of the KSA, including forests and farmlands, have witnessed protracted and severe drought due to changing rainfall patterns resulting from global climate change. Such prolonged drought, lasting several consecutive years, is expected to cause deterioration of forested and pastured lands as well as crop failure (e.g., of the wheat yield) in the KSA. An analysis to determine vegetation drought vulnerability and severity during the growing season (September-April) over a fourteen-year period (2000-2014) in the KSA was conducted using MODIS Terra imagery. The Vegetation Condition Index (VCI), derived from the Normalized Difference Vegetation Index (NDVI), and the Temperature Condition Index (TCI), derived from Land Surface Temperature (LST) data, were extracted from the MODIS Terra images. The VCI and TCI were then combined to compute the Vegetation Health Index (VHI), which reveals the overall vegetation health of the area under investigation. A preliminary outcome of the modeled VHI over the KSA, using monthly vegetation data averaged over the 14-year period, revealed that vegetation health is deteriorating over time in both naturally vegetated areas and irrigated farmlands. The derived drought map for the KSA indicates that both extreme and severe drought occurrences increased considerably over the same study period.
Moreover, based on the cumulative average of drought frequency in each governorate of the KSA, it was determined that the Makkah and Jizan governorates, to the east and southwest, experience the highest frequency of extreme drought, whereas Tabuk, to the northwest, exhibits the lowest. Areas where drought is extreme or severe would most likely suffer negative effects on agriculture, ecosystems, tourism, and even human welfare. With the drought risk map, the Kingdom could make informed land management decisions, including where to continue agricultural endeavors, where to protect forested areas, and even where to develop new settlements.

Keywords: drought, vegetation health condition, TCI, Saudi Arabia
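The indices combined in the study above follow Kogan's standard definitions: VCI rescales NDVI against its multi-year extremes, TCI inverts LST (hotter means more stressed), and VHI blends the two. A sketch under those standard formulas; the equal weighting alpha = 0.5 is a common default, not necessarily the study's choice:

```python
def vci(ndvi, ndvi_min, ndvi_max):
    """Vegetation Condition Index: NDVI rescaled to 0-100 against its
    multi-year minimum and maximum for the same pixel and period."""
    return 100.0 * (ndvi - ndvi_min) / (ndvi_max - ndvi_min)

def tci(lst, lst_min, lst_max):
    """Temperature Condition Index: inverted, since a higher land
    surface temperature indicates greater thermal stress."""
    return 100.0 * (lst_max - lst) / (lst_max - lst_min)

def vhi(vci_value, tci_value, alpha=0.5):
    """Vegetation Health Index: weighted blend of VCI and TCI;
    low values flag drought stress."""
    return alpha * vci_value + (1.0 - alpha) * tci_value
```

Drought severity classes (e.g., extreme vs. severe) are then obtained by thresholding the resulting VHI values per pixel.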
Procedia PDF Downloads 384
386 Mesoporous Titania Thin Films for Gentamicin Delivery and Bone Morphogenetic Protein-2 Immobilization
Authors: Ane Escobar, Paula Angelomé, Mihaela Delcea, Marek Grzelczak, Sergio Enrique Moya
Abstract:
The antibacterial capacity of bone-anchoring implants can be improved by the use of antibiotics that are released into the surrounding medium after surgery. Mesoporous films have shown great potential for drug delivery in orthopedic applications, since pore size and film thickness can be tuned to produce different surface areas and free volumes inside the material. This work shows the synthesis of mesoporous titania films (MTFs) by sol-gel chemistry and evaporation-induced self-assembly (EISA) on top of glass substrates. Pores with a diameter of 12 nm were observed by Transmission Electron Microscopy (TEM), and a film thickness of 100 nm was measured by Scanning Electron Microscopy (SEM). Gentamicin was used to study antibiotic delivery from the film by means of high-performance liquid chromatography (HPLC). A Staphylococcus aureus strain was used to evaluate the effectiveness of the gentamicin-loaded films in inhibiting bacterial colonization. MC3T3-E1 pre-osteoblast proliferation experiments showed that MTFs have good biocompatibility and are a suitable surface for MC3T3-E1 cell proliferation. Moreover, images taken by confocal fluorescence microscopy using labeled vinculin showed good adhesion of the MC3T3-E1 cells to the MTFs, as well as complex actin filament arrangements. In order to improve cell proliferation, Bone Morphogenetic Protein-2 (BMP-2) was adsorbed on top of the mesoporous film. Deposition of the protein was confirmed by contact angle measurements, which showed an increase in hydrophobicity with increasing protein concentration. By measuring the dehydrogenase activity of MC3T3-E1 cells cultured on mesoporous titania films dually functionalized with gentamicin and BMP-2, an improvement in cell proliferation can be observed. For this purpose, the absorption of a yellow formazan dye, the product of the reduction of a water-soluble salt (WST-8) by the dehydrogenases, is measured.
In summary, this study shows that by modifying the surface of MTFs with proteins and loading them with gentamicin, it is possible to achieve an antibacterial effect and improved cell growth.

Keywords: antibacterial, biocompatibility, bone morphogenetic protein-2, cell proliferation, gentamicin, implants, mesoporous titania films, osteoblasts
Procedia PDF Downloads 161
385 Effect of Natural and Urban Environments on the Perception of Thermal Pain – Experimental Research Using Virtual Environments
Authors: Anna Mucha, Ewa Wojtyna, Anita Pollak
Abstract:
The environment in which an individual resides, and which he or she observes, may play a meaningful role in well-being and related constructs. Contact with nature may have a positive influence on individuals, improving mood and psychophysical sensations, for example by relieving pain. Conversely, urban settings dominated by concrete elements might lead to mood decline and heightened stress levels. The situation may be similar for the perception of virtual environments; however, this topic requires further exploration, especially in the context of its relationship with pain. These matters served as the basis for designing and conducting the experimental research outlined here within the realm of environmental psychology, leveraging new technologies, notably virtual reality (VR), which is progressively gaining prominence in the domain of mental health. The primary objective was to investigate the impact of a simulated virtual environment mirroring a natural setting abundant in greenery on the perception of acute pain induced by thermal stimuli (high temperature), encompassing intensity, unpleasantness, and pain tolerance. Comparative analyses were conducted between a virtual natural environment (intentionally constructed in the likeness of a therapeutic garden), a virtual urban environment, and a control group without virtual projection. Secondary objectives aimed to determine the mutual relationships among variables such as positive and negative emotions, preferences regarding virtual environments, sense of presence, and restorative experience in the context of the perception of the presented virtual environments and the induced thermal pain. The study encompassed 126 physically healthy Polish adults, with 42 individuals in each of the three comparison groups. Oculus Rift VR technology and the TSA-II neurosensory analyzer were used in the experiment.
Alongside demographic data, participants' subjective feelings concerning virtual reality and pain were evaluated using the Visual Analogue Scale (VAS), the original Restorative Experience in the Virtual World questionnaire (Doświadczenie Regeneracji w Wirtualnym Świecie), and an adapted Slater-Usoh-Steed (SUS) questionnaire. The results of statistical and psychometric analyses, such as Kruskal-Wallis tests, Wilcoxon tests, and contrast analyses, underscored the positive impact of the virtual natural environment on individual pain perception and mood. The virtual natural environment outperformed the virtual urban environment and the control group without virtual projection, particularly on subjective pain components such as intensity and unpleasantness. Variables such as restorative experience, sense of presence, and virtual environment preference also proved pivotal in pain perception and in alterations of the pain tolerance threshold, contingent on the specific conditions. This implies considerable application potential for virtual natural environments across diverse areas of psychology and related fields, among others as a supportive analgesic approach and as a form of relaxation following psychotherapeutic sessions.

Keywords: environmental psychology, nature, acute pain, emotions, virtual reality, virtual environments
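The Kruskal-Wallis comparison named above is based on the rank-sum H statistic for k independent samples. A stdlib-only sketch of the statistic, with average ranks for ties but without the variance tie-correction factor; a real analysis like the study's would use a statistics package and report p-values:

```python
def kruskal_wallis_h(*groups):
    """Kruskal-Wallis H statistic: rank all observations jointly,
    then compare per-group rank sums against their expectation."""
    pooled = sorted(v for g in groups for v in g)
    n = len(pooled)
    # average 1-based rank for each distinct value (handles ties)
    rank = {}
    i = 0
    while i < n:
        j = i
        while j < n and pooled[j] == pooled[i]:
            j += 1
        rank[pooled[i]] = (i + j + 1) / 2.0
        i = j
    rank_sum_term = sum(
        sum(rank[v] for v in g) ** 2 / len(g) for g in groups
    )
    return 12.0 / (n * (n + 1)) * rank_sum_term - 3.0 * (n + 1)
```

Under the null hypothesis, H is approximately chi-squared distributed with k - 1 degrees of freedom, which is how significance would be assessed.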
Procedia PDF Downloads 62
384 Understanding Stock-Out of Pharmaceuticals in Timor-Leste: A Case Study in Identifying Factors Impacting on Pharmaceutical Quantification in Timor-Leste
Authors: Lourenco Camnahas, Eileen Willis, Greg Fisher, Jessie Gunson, Pascale Dettwiller, Charlene Thornton
Abstract:
Stock-out of pharmaceuticals is a common issue at all levels of health services in Timor-Leste, a small post-conflict country. This leads to the research questions: what are the current methods used to quantify pharmaceutical supplies, and what factors contribute to the ongoing pharmaceutical stock-outs? The study examined factors that influence the pharmaceutical supply chain system. Methodology: Privett and Goncalvez's dependency model was adopted for the design of the qualitative interviews. The model examines pharmaceutical supply chain management at three levels: management of individual pharmaceutical items, health facilities, and health systems. The interviews were conducted to collect information on inventory management, the logistics management information system (LMIS), and the provision of pharmaceuticals. Andersen's behavioural model of healthcare utilization also informed the interview schedule, specifically factors linked to the environment (the healthcare system and the external environment) and the population (enabling factors). Forty health professionals (bureaucrats, clinicians) and six senior officers from a United Nations agency, a global multilateral agency, and a local non-governmental organization were interviewed on their perceptions of the factors (healthcare system/supply chain and wider environment) impacting stock-outs. Additionally, policy documents for the entire healthcare system, along with population data, were collected. Findings: An analysis using Pozzebon's critical interpretation identified a range of difficulties within the system, from poor coordination to failure to adhere to policy guidelines, along with major difficulties in inventory management, quantification, forecasting, and budgetary constraints. A weak logistics management information system and a lack of capacity in inventory management, monitoring, and supervision are additional organizational factors that contributed to the issue.
Various methods of quantification of pharmaceuticals were applied in the government sector and by non-governmental organizations. A lack of reliable data is one of the major problems in pharmaceutical provision. The Global Fund has the best quantification methods, fed by consumption data and malaria case counts. Other issues worsen stock-outs: political intervention, work ethic, and basic infrastructure such as unreliable internet connectivity. Major issues impacting pharmaceutical quantification have been identified. However, the data collected so far revealed limitations of the Andersen model, specifically a failure to take account of predictors in the healthcare system and the environment (culture/politics/social). The next steps are to (a) compare the models used by three non-governmental agencies with the government model; (b) run the Andersen explanatory model for pharmaceutical expenditure on 2 to 5 drug items used by these three development partners, to see how it correlates with the present model in terms of quantification and forecasting of needs; (c) repeat objectives (a) and (b) using the government model; and (d) draw conclusions about their relative strengths.

Keywords: inventory management, pharmaceutical forecasting and quantification, pharmaceutical stock-out, pharmaceutical supply chain management
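Consumption-based quantification of the kind discussed above is typically built on average monthly consumption scaled by the lead time plus a safety buffer. A simplified sketch; the function name and the exact formula variant are assumptions, and real programs also adjust for stock-out months and wastage:

```python
def order_quantity(monthly_consumption, lead_time_months,
                   safety_stock_months, stock_on_hand):
    """Consumption-method estimate: average monthly consumption (AMC)
    times the months to cover (lead time plus safety buffer),
    minus the usable stock already on hand."""
    amc = sum(monthly_consumption) / len(monthly_consumption)
    required = amc * (lead_time_months + safety_stock_months)
    return max(0.0, required - stock_on_hand)
```

The dependence on reliable consumption records is visible here: if the monthly data are incomplete or distorted, the AMC and therefore the whole order estimate are wrong, which is exactly the data-quality problem the study identifies.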
Procedia PDF Downloads 242
383 qPCR Method for Detection of Halal Food Adulteration
Authors: Gabriela Borilova, Monika Petrakova, Petr Kralik
Abstract:
Nowadays, European producers are increasingly interested in the production of halal meat products. Halal meat has increasingly appeared in the EU market network, and meat products from European producers are being exported to Islamic countries. Halal criteria relate mainly to the origin of the muscle used in production, and also to the way products are obtained and processed. Although the EU has legislatively addressed the question of food authenticity, the events of previous years, when products with undeclared horse or poultry meat content appeared on EU markets, raised the question of the effectiveness of control mechanisms. The replacement of expensive or unavailable types of meat with low-priced meat has been occurring on a global scale for a long time. Likewise, halal products may be contaminated (falsified) with pork or with food components obtained from pigs. These components include collagen, offal, pork fat, mechanically separated pork, emulsifiers, blood, dried blood, dried blood plasma, gelatin, and others. These substances can influence the sensory properties of meat products (color, aroma, flavor, consistency, and texture), or they are added for preservation and stabilization. Food manufacturers sometimes resort to these substances mainly because of their ready availability and low prices. However, their use is not always declared on the product packaging. Verification of the presence of declared ingredients, including the detection of undeclared ingredients, is among the basic control procedures for determining the authenticity of food. Molecular biology methods based on DNA analysis offer rapid and sensitive testing. The PCR method and its modifications can be used successfully to identify animal species in single- and multi-ingredient raw and processed foods, and qPCR is the first choice for food analysis. Like all PCR-based methods, it is simple to implement, and its greatest advantage is the absence of post-PCR visualization by electrophoresis.
qPCR allows detection of trace amounts of nucleic acids, and by comparing an unknown sample with a calibration curve, it can also provide information on the absolute quantity of individual components in the sample. Our study addresses the problem that most molecular-biological work on the identification and quantification of animal species is based on constructing specific primers that amplify a selected section of the mitochondrial genome. In addition, the sections amplified in conventional PCR are relatively long (hundreds of bp) and unsuitable for qPCR, because when DNA is fragmented, the amplification of long target sequences is quite limited. Our study therefore focuses on finding a suitable genomic DNA target and optimizing qPCR to reduce the variability and distortion of results, which is necessary for the correct interpretation of quantification results. For halal products, the impact of falsifying meat products with pig-derived components is all the greater because it concerns not only economic but above all religious and social aspects. This work was supported by the Ministry of Agriculture of the Czech Republic (QJ1530107).

Keywords: food fraud, halal food, pork, qPCR
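The calibration-curve quantification mentioned above rests on the linear relation between the Ct value and the log10 of the starting quantity; the slope of that line also yields the amplification efficiency. A minimal sketch with a least-squares fit; the function names are illustrative:

```python
def standard_curve(log10_qty, ct):
    """Least-squares fit of Ct = slope * log10(quantity) + intercept.
    A slope of about -3.32 corresponds to 100 % efficiency."""
    n = len(ct)
    mx = sum(log10_qty) / n
    my = sum(ct) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(log10_qty, ct))
    sxx = sum((x - mx) ** 2 for x in log10_qty)
    slope = sxy / sxx
    intercept = my - slope * mx
    efficiency = 10.0 ** (-1.0 / slope) - 1.0  # 1.0 means 100 %
    return slope, intercept, efficiency

def quantify(ct_unknown, slope, intercept):
    """Absolute quantity of an unknown sample from its Ct value
    and a fitted standard curve."""
    return 10.0 ** ((ct_unknown - intercept) / slope)
```

In practice the standards are serial dilutions of known copy number, and efficiencies far from 1.0 flag a problematic assay before any quantification is attempted.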
Procedia PDF Downloads 246
382 Evaluating Daylight Performance in an Office Environment in Malaysia, Using Venetian Blind Systems
Authors: Fatemeh Deldarabdolmaleki, Mohamad Fakri Zaky Bin Ja'afar
Abstract:
This paper presents a fenestration analysis that studies the balance between utilizing daylight and eliminating disturbing parameters in a private office room with interior venetian blinds, taking into account different slat angles. The mean luminance of the scene and of the window, the luminance ratio of the work plane and window, the work plane illuminance, and the daylight glare probability (DGP) were calculated as functions of the venetian blind design properties. Recently developed software for analyzing High Dynamic Range Images (HDRI captured by a CCD camera), such as the Radiance-based evalglare and hdrscope, helps to investigate luminance-based metrics. An eight-day measurement experiment was conducted to investigate the impact of different venetian blind angles in an office environment under daylight conditions in Serdang, Malaysia. The detailed results for the selected case study showed that artificial lighting is necessary during the morning session for Malaysian buildings with southwest-facing windows, regardless of the venetian blind's slat angle. However, under some afternoon conditions, such as the 10° and 40° slat angles, the work plane illuminance exceeds the maximum of 2000 lx. Generally, a rising trend in mean window luminance is observed during the day. All conditions have less than 10% of pixels exceeding 2000 cd/m² before 1:00 P.M.; however, 40% of the selected hours have more than 10% of scene pixels above 2000 cd/m² after 1:00 P.M. Surprisingly, in the no-blind condition there is no extreme case of the window/task ratio, whereas extreme cases occur for the 20°, 30°, 40°, and 50° slat angles. As expected, the mean window luminance is higher than 2000 cd/m² after 2:00 P.M. in most cases, except for the 60° slat angle condition. Regarding daylight glare probability, no DGP value higher than 0.35 occurred in this experiment, owing to the window's direction, the location of the building, and the studied work plane.
Specifically, this paper reviews the response of the different blind angles to the metrics suggested by previous standards; finally, conclusions and knowledge gaps are summarized and next steps for research are suggested. Addressing these gaps is critical for the continued progress of the energy efficiency movement.

Keywords: daylighting, office environment, energy simulation, venetian blind
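The pixel-based criterion reported above (the share of pixels whose luminance exceeds 2000 cd/m²) is straightforward to compute from an HDR luminance map. A sketch; the flat list of pixel luminances is an illustrative simplification of an HDR image array as produced by tools like hdrscope:

```python
def percent_pixels_above(luminances, threshold=2000.0):
    """Share (%) of pixels whose luminance (cd/m^2) exceeds the
    glare-relevant threshold used in the study."""
    over = sum(1 for v in luminances if v > threshold)
    return 100.0 * over / len(luminances)
```

The study's 10%-of-pixels criterion would then be a simple comparison on the returned percentage, evaluated per captured hour.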
Procedia PDF Downloads 226
381 Femoral Neck Anteversion and Neck-Shaft Angles: Determination and Their Clinical Implications in Fetuses of Different Gestational Ages
Authors: Vrinda Hari Ankolekar, Anne D. Souza, Mamatha Hosapatna
Abstract:
Introduction: A precise anatomical assessment of femoral neck anteversion (FNA) and the neck-shaft angle (NSA) is essential for diagnosing pathological conditions involving the hip joint and its ligaments. An FNA of greater than 20 degrees is considered excessive femoral anteversion, whereas a torsion angle of less than 10 degrees is considered femoral retroversion. Excessive femoral torsion is not uncommon and has been associated with certain neurologic and orthopedic conditions. The enlargement and maturation of the hip joint increase at the 20th week of gestation, and the NSA ranges from 135° to 140° at birth. Material and methods: 48 femurs were tagged according to gestational age (GA), and two photographs of each femur were taken using a Nikon digital camera. Each femur was placed on a flat, hard surface; an end-on image of the upper end was taken for the estimation of the FNA, and a photograph in the perpendicular plane was taken to calculate the NSA. The images were transferred to a computer and stored in TIFF format. Microsoft Paint was used to mark the points, and ImageJ software was used to calculate the angles digitally. 1. Calculation of the FNA: The midpoints of the femoral head and neck were marked, and a line was drawn joining them. The angle made by this line with the horizontal plane was measured as the FNA. 2. Calculation of the NSA: The midpoints of the femoral head and neck were marked, and a line was drawn joining them. A vertical line was drawn through the tip of the greater trochanter to the intercondylar notch. The angle formed by these lines was calculated as the NSA. Results: The paired t-test for inter-observer variability showed no significant difference between the values of the two observers (FNA: t = -1.06, p = 0.31; NSA: t = -0.09, p = 0.9). The FNA ranged from 17.08° to 33.97° on the right and 17.32° to 45.08° on the left. The NSA ranged from 139.33° to 124.91° on the right and 143.98° to 123.8° on the left.
An unpaired t-test comparing the mean angles between the second and third trimesters showed no statistical significance, indicating that the FNA and NSA of the femur did not vary significantly during the third trimester. The FNA and NSA were correlated with GA using Pearson's correlation. The FNA appeared to increase with GA (r = 0.5), but the increase was not statistically significant. A decrease in the NSA with GA was also noted (r = -0.3), which was likewise not statistically significant. Conclusion: The present study evaluates the FNA and NSA of the femur in fetuses and correlates their development with GA during the second and third trimesters. The FNA and NSA did not vary significantly during the third trimester.

Keywords: anteversion, coxa antetorsa, femoral torsion, femur neck shaft angle
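The angle measurements from digitized landmark points described above reduce to simple vector geometry: the angle at a vertex between the lines to two other marked points. A hedged sketch of such a computation; the point names are illustrative and this is not the exact ImageJ workflow:

```python
import math

def angle_at(vertex, p1, p2):
    """Angle (degrees) at `vertex` between the lines running to the
    marked points p1 and p2, computed from the dot product of the
    two direction vectors."""
    v1 = (p1[0] - vertex[0], p1[1] - vertex[1])
    v2 = (p2[0] - vertex[0], p2[1] - vertex[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    return math.degrees(math.acos(dot / norm))
```

For the NSA, the vertex would sit where the head-neck line meets the shaft axis, with p1 toward the femoral head and p2 along the shaft.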
Procedia PDF Downloads 316
380 Automated Adaptions of Semantic User- and Service Profile Representations by Learning the User Context
Authors: Nicole Merkle, Stefan Zander
Abstract:
Ambient Assisted Living (AAL) describes a technological and methodological stack (e.g., formal model-theoretic semantics, rule-based reasoning and machine learning) for modeling different aspects of human behavior, activities and characteristics. Hence, a semantic representation of the user environment and its relevant elements is required in order to allow assistive agents to recognize situations and deduce appropriate actions. Furthermore, the user and his/her characteristics (e.g., physical and cognitive abilities, preferences) need to be represented with a high degree of expressiveness in order to allow software agents a precise evaluation of the users' context models. The correct interpretation of these context models highly depends on temporal and spatial circumstances as well as individual user preferences. In most AAL approaches, model representations of real-world situations capture the current state of a universe of discourse at a given point in time, neglecting transitions between states. The AAL domain currently lacks sufficient approaches that contemplate the dynamic adaptation of context-related representations. Semantic representations of relevant real-world excerpts (e.g., user activities) help cognitive, rule-based agents to reason and make decisions in order to assist users in appropriate tasks and situations. However, rules and reasoning on semantic models alone are not sufficient for handling uncertainty and fuzzy situations. A given situation can require different (re-)actions in order to achieve the best results with respect to the user and his/her needs. But what is the best result? To answer this question, we need to consider that every smart agent must achieve an objective, but this objective is mostly defined by domain experts, who can also fail in their estimation of what is and is not desired by the user.
Hence, a smart agent has to be able to learn from context history data and estimate or predict what is most likely in certain contexts. Furthermore, different agents with contrary objectives can cause collisions, as their actions influence the user's context and its constituting conditions in unintended or uncontrolled ways. We present an approach for dynamically updating a semantic model with respect to the current user context that allows flexibility of the software agents and enhances their conformance in order to improve the user experience. The presented approach adapts rules by learning from sensor evidence and user actions using probabilistic reasoning, based on given expert knowledge. The semantic domain model consists essentially of device, service and user profile representations. In this paper, we present how this semantic domain model can be used to compute the probability of matching rules and actions. We apply this probability estimation to compare the current domain model representation with the computed one in order to adapt the formal semantic representation. Our approach aims at minimizing the likelihood of unintended interferences in order to eliminate conflicts and unpredictable side-effects by updating pre-defined expert knowledge according to the most probable context representation. This enables agents to adapt to dynamic changes in the environment, which enhances the provision of adequate assistance and positively affects user satisfaction.
Keywords: ambient intelligence, machine learning, semantic web, software agents
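The probability update at the heart of such an approach can be sketched as a plain Bayesian step over one rule and one piece of sensor evidence. The rule name, prior and likelihood values below are hypothetical illustrations, not figures from the paper:

```python
def rule_posterior(prior, p_evidence_given_rule, p_evidence_given_not_rule):
    """Posterior probability that a context rule applies, given one piece
    of sensor evidence, via Bayes' rule."""
    evidence = (p_evidence_given_rule * prior
                + p_evidence_given_not_rule * (1.0 - prior))
    return p_evidence_given_rule * prior / evidence

# Hypothetical expert-defined prior for the rule "user is preparing a meal",
# updated after a (hypothetical) stove sensor fires.
prior = 0.3
posterior = rule_posterior(prior, p_evidence_given_rule=0.9,
                           p_evidence_given_not_rule=0.1)
# If the posterior diverges far enough from the expert prior, the stored rule
# weight in the semantic model would be adapted toward the learned value.
```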
Procedia PDF Downloads 281
379 Rare Internal Organ Trauma in Adolescent Athletes: Insights from a Pancreatic Injury Case Study
Authors: Muhandiram Rallage Ruvini Nisansala Yatigammana, Anuruddhika Kumudu Kumari Rajakaruna Jayathilaka
Abstract:
Sports injuries are common among teenagers and children engaged in organized sports. While most sports injuries are typical, some rare occurrences involve conditions such as eye, dental, cervical, and internal organ injuries, including pancreatic injuries. These injuries, especially traumatic pancreatitis, require prompt attention due to their potential for severe and sometimes fatal complications. This case revolves around a real accident involving a 12-year-old girl, Piyumi, who suffered a face-to-face collision during netball practice, resulting in severe abdominal pain. After a medical examination, she was diagnosed with a pancreatic injury, which is rarer in children than in adults. Piyumi had a grade 3 pancreatic injury and underwent non-surgical management, with the injury healing successfully without surgery. The study attempts to fill empirical and population gaps by addressing a rarely discussed injury experienced by a 12-year-old female netball player, and provides an in-depth understanding of pancreatic injury as a rare sports injury. The study's main objective was to investigate the incidence and characteristics of pancreatic trauma among children and adolescents engaged in high-impact sports such as netball. This research adopted a case study strategy, employing interviews as the primary data collection method. Interviews were conducted with Piyumi, her parents, and the two specialist doctors directly involved in her treatment, providing firsthand accounts and insights. By examining the case, the paper arrives at three main conclusions. Firstly, pancreatic damage is uncommon, especially in sports, and proper diagnosis is essential to avoid health complications, particularly in minors. Secondly, Computed Tomography (CT) was useful in locating the injury, as pancreatic injuries can be diagnosed reliably with CT imaging.
Finally, and most importantly, although pancreatic injuries are infrequent, they can still occur, particularly in high-impact sports or in accidents involving extreme force or falls. These injuries should be accurately diagnosed and treated promptly.
Keywords: child athlete, pancreatic injury, rare sports injuries, sportswoman
Procedia PDF Downloads 72
378 Apple in the Big Tech Oligopoly: An Analysis of Disruptive Innovation Trends and Their Influence on the Capacity of Conserving a Positive Social Impact as Primary Purpose
Authors: E. Loffi Borghese
Abstract:
In this comprehensive study, we delve into the intricate dynamics of the big tech oligopoly, focusing particularly on Apple as a case study. The core objective is to scrutinize the evolving relationship between a firm's commitment to positive social impact as its primary purpose and its resilience in the face of disruptive innovations within the big tech market. Our exploration begins with a theoretical framework, emphasizing the significance of distinguishing between corporate social responsibility and social impact as a primary purpose. Drawing on insights from Drumwright and Bartkus and Glassman, we underscore the transformative potential when a firm aligns its core business with a social mission, transcending mere side activities. Examining successful firms, such as Apple, we adopt Sinek's perspective on inspirational leadership and the "golden circle." This framework sheds light on why some organizations, like Apple, succeed in making positive social impact their primary purpose. Apple's early-stage life cycle is dissected, revealing a profound commitment to challenging the status quo and promoting simpler alternatives that resonate with its users' lives. The study then navigates through industry life cycles, drawing on Klepper's stages and Christensen's disruptive innovations. Apple's dominance in the big tech oligopoly is contrasted with companies like Harley Davidson and Polaroid, illustrating the consequences of failing to adapt to disruptive innovations. The data and methods employed encompass a qualitative approach, leveraging sources like ECB, Forbes, World in Data, and scientific articles. A secondary data analysis probes Apple's market evolution within the big tech oligopoly, emphasizing the shifts in market context and innovation trends that demand strategic adaptations. The subsequent sections scrutinize Apple's present innovation strategies, highlighting its diversified product portfolio and intensified focus on big data. 
We examine the implications of these shifts on Apple's capacity to maintain positive social impact as its primary purpose, pondering potential consequences on its brand perception. The study culminates in a reflection on the broader implications of the big tech oligopoly's dominance. It contemplates the diminishing competitiveness in the market and the potential sidelining of positive social impact as a competitive advantage. The expansion of tech firms into diverse sectors raises concerns about negative societal impacts, prompting a call for increased regulatory attention and awareness. In conclusion, this research serves as a catalyst for heightened awareness and discussion on the intricate interplay between firms' social impact goals, disruptive innovations, and the broader societal implications within the evolving landscape of the big tech oligopoly. Despite limitations, this study aims to stimulate further research, urging a conscious and responsible approach to shaping the future economic system.
Keywords: innovation trends, market dynamics, social impact, tech oligopoly
Procedia PDF Downloads 72
377 Accessing Motional Quotient for All Round Development
Authors: Zongping Wang, Chengjun Cui, Jiacun Wang
Abstract:
The concept of intelligence has been widely used to assess an individual's cognitive abilities to learn, form concepts, understand, apply logic, and reason. According to the multiple intelligence theory, there are eight distinguished types of intelligence. One of them is bodily-kinaesthetic intelligence, which relates to an individual's capacity to control his body and work with objects. Motor intelligence, on the other hand, reflects the capacity to understand, perceive and solve functional problems through motor behavior. Both bodily-kinaesthetic intelligence and motor intelligence refer directly or indirectly to bodily capacity. Inspired by these two concepts, this paper introduces motional intelligence (MI). MI is two-fold. (1) Body strength, the capacity of various organ functions manifested by muscle activity under the control of the central nervous system during physical exercise. It can be measured by the magnitude of muscle contraction force, the frequency of repeating a movement, the time to finish a movement or change of body position, the duration for which muscles can be maintained in a working state, etc. Body strength reflects the objective aspect of MI. (2) The level of psychological willingness toward physical events. This is subjective, determined by an individual's self-consciousness about physical events and resistance to fatigue; as such, we call it subjective MI. Subjective MI can be improved through education and appropriate social events, and its improvement can in turn improve objective MI. A quantitative score of an individual's MI is the motional quotient (MQ). MQ is affected by several factors, including genetics, physical training, diet and lifestyle, family and social environment, and personal awareness of the importance of physical exercise. Genes determine one's body strength potential. Physical training, in general, makes people stronger, faster and swifter. Diet and lifestyle have a direct impact on health.
Family and social environment largely affect one's passion for physical activities, as does personal awareness of the importance of physical exercise. The key to the success of the MQ study is developing an acceptable and efficient system that can be used to assess MQ objectively and quantitatively. Different assessment systems should be applied to different groups of people according to their ages and genders. Field tests, laboratory tests and questionnaires are essential components of MQ assessment. A scientific interpretation of the MQ score is also part of an MQ assessment system, as it helps an individual to improve his MQ. IQ (intelligence quotient) and EQ (emotional quotient) and their tests have been studied intensively. We argue that IQ and EQ study alone is not sufficient for an individual's all-round development. The significance of MQ study is that it complements IQ and EQ study: MQ reflects an individual's mental as well as bodily level of intelligence in physical activities. It is well known that the American Springfield College seal includes the Luther Gulick triangle with the words "spirit," "mind," and "body" written within it. MQ, together with IQ and EQ, echoes this education philosophy. Since its inception in 2012, MQ research has spread rapidly in China; by now, six prestigious universities in China have established research centers on MQ and its assessment.
Keywords: motional intelligence, motional quotient, multiple intelligence, motor intelligence, all round development
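As a purely illustrative sketch of what a quantitative MQ assessment could look like, the fragment below standardizes a hypothetical field-test result against a matched cohort and mixes it with a questionnaire score. The test, weights and numbers are all assumptions, not the assessment system proposed by the authors:

```python
from statistics import mean, pstdev

def z_score(value, cohort):
    """Standardize one field-test result against an age/gender-matched cohort."""
    return (value - mean(cohort)) / pstdev(cohort)

def composite_mq(field_z_scores, questionnaire_score, w_obj=0.7, w_subj=0.3):
    """Hypothetical MQ composite: a weighted mix of objective field/lab-test
    z-scores and a subjective questionnaire score scaled to 0..1."""
    return w_obj * mean(field_z_scores) + w_subj * questionnaire_score

# Hypothetical 50 m sprint times (seconds) for a matched cohort; lower is
# better, so the z-score is negated to make "better than average" positive.
cohort_sprint = [8.2, 8.5, 7.9, 8.8, 8.1]
z_sprint = -z_score(8.0, cohort_sprint)
mq = composite_mq([z_sprint], questionnaire_score=0.8)
```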
Procedia PDF Downloads 161
376 Post-Exercise Recovery Tracking Based on Electrocardiography-Derived Features
Authors: Pavel Bulai, Taras Pitlik, Tatsiana Kulahava, Timofei Lipski
Abstract:
A method of electrocardiography (ECG) interpretation for post-exercise recovery tracking was developed. Metabolic indices (aerobic and anaerobic) were designed using ECG-derived features. This study reports the associations between these aerobic and anaerobic indices and classical parameters of a person's physiological state, including blood biochemistry, glycogen concentration and VO2max changes. During the study, 9 participants (healthy, physically active, medium-trained men and women who trained 2-4 times per week for at least 9 weeks) underwent (i) ECG monitoring using an Apple Watch Series 4 (AWS4); (ii) blood biochemical analysis; (iii) a maximal oxygen consumption (VO2max) test; and (iv) bioimpedance analysis (BIA). ECG signals from a single-lead wrist-wearable device were processed with QRS-complex detection. The aerobic index (AI) was derived as the normalized slope of the QR segment; the anaerobic index (ANI) as the normalized slope of the SJ segment. Biochemical parameters, glycogen content and VO2max were evaluated eight times within 3-60 hours after training. ECGs were recorded 5 times per day, plus before and after training, cycle ergometry and BIA. Negative correlations between AI and blood markers of the muscles' functional status were observed, including creatine phosphokinase (r=-0.238, p<0.008), aspartate aminotransferase (r=-0.249, p<0.004) and uric acid (r=-0.293, p<0.004). ANI was also correlated with creatine phosphokinase (r=-0.265, p<0.003), aspartate aminotransferase (r=-0.292, p<0.001) and lactate dehydrogenase (LDH) (r=-0.190, p<0.050). Thus, when the level of muscular enzymes increases during post-exercise fatigue, AI and ANI decrease; during recovery, the level of metabolites is restored and a rise in the metabolic indices is registered. It can be concluded that AI and ANI adequately reflect the physiology of the muscles during recovery.
One of the markers of an athlete's physiological state is the ratio between testosterone and cortisol (TCR). TCR provides a relative indication of anabolic-catabolic balance and is considered to be more sensitive to training stress than measuring testosterone and cortisol separately. AI shows a strong negative correlation with TCR (r=-0.437, p<0.001) and correctly represents post-exercise physiology. In order to reveal the relation between the ECG-derived metabolic indices and the state of the cardiorespiratory system, direct measurements of VO2max were carried out at various time points after training sessions. A negative correlation between AI and VO2max (r=-0.342, p<0.001) was obtained. These data, indicating a rise in VO2max during fatigue, are controversial; however, some studies have revealed increased stroke volume after training, which agrees with our findings. It is important to note that a post-exercise increase in VO2max does not mean an athlete is ready for the next training session, because the recovery of the cardiovascular system occurs over a substantially longer period. Negative correlations registered for ANI with glycogen (r=-0.303, p<0.001), albumin (r=-0.205, p<0.021) and creatinine (r=-0.268, p<0.002) reflect the dehydration status of participants after training. The correlations between the designed metabolic indices and physiological parameters revealed in this study can be considered sufficient evidence for using these indices to assess the state of a person's aerobic and anaerobic metabolic systems after training, during fatigue, recovery and supercompensation.
Keywords: aerobic index, anaerobic index, electrocardiography, supercompensation
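Given the fiducial points of a single beat, the two indices reduce to segment slopes. The normalization by R-peak amplitude and all numeric values below are assumptions for illustration, since the abstract does not spell out the exact formula:

```python
def segment_slope(p1, p2):
    """Slope (mV/s) of the straight line between two ECG fiducial points,
    each given as (time_s, amplitude_mV)."""
    return (p2[1] - p1[1]) / (p2[0] - p1[0])

def normalized_index(slope, r_amplitude):
    """Normalize a segment slope; dividing by R-peak amplitude is an
    assumption standing in for the paper's unspecified normalization."""
    return slope / r_amplitude

# Hypothetical fiducial points for one beat
q = (0.000, -0.10)
r = (0.030, 1.20)
s = (0.060, -0.25)
j = (0.100, 0.05)
ai = normalized_index(segment_slope(q, r), r[1])   # aerobic index ~ QR slope
ani = normalized_index(segment_slope(s, j), r[1])  # anaerobic index ~ SJ slope
```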
Procedia PDF Downloads 114
375 Documentary Filmmaking as Activism: Case Studies in Advocacy and Social Justice
Authors: Babatunde Kolawole
Abstract:
This paper embarks on an exploration of the compelling interplay between documentary filmmaking and activism, delving into their symbiotic relationship and profound impact on advocacy and social justice causes. Through an in-depth analysis of diverse case studies, it seeks to illuminate the instances where documentary films have emerged as potent tools for effecting social change and advancing the principles of justice. This research underscores the vital role played by documentary filmmakers in harnessing the medium's unique capacity to engage, educate, and mobilize audiences while advocating for societal transformation. The primary focus of this study is on a selection of compelling case studies spanning various topics and causes, each exemplifying the marriage between documentary filmmaking and activism. These case studies encompass a broad spectrum of subjects, from environmental conservation and climate change to civil rights movements and human rights struggles. By examining these real-world instances, this paper endeavors to provide a comprehensive understanding of the strategies, challenges, and ethical considerations that underpin the practice of documentary filmmaking as a form of activism. Throughout the paper, it becomes evident that the potency of documentary filmmaking lies in its ability to blend artistry with social impact. The selected case studies vividly demonstrate how documentary filmmakers, armed with cameras and a passion for change, have emerged as critical agents of societal transformation. Whether it be exposing environmental atrocities, shedding light on systemic inequalities, or giving voice to marginalized communities, these documentaries have played a pivotal role in pushing the boundaries of advocacy and social justice. One of the key themes explored in this paper is the evolving nature of documentary filmmaking as a tool for activism. 
It delves into the shift from traditional observational documentaries to more participatory and immersive approaches, highlighting the dynamic ways in which filmmakers engage with their subjects and audiences. This evolution is exemplified in case studies where filmmakers have collaborated with the communities they document, fostering a sense of agency and empowerment among those whose stories are being told. Furthermore, this research underscores the ethical considerations inherent in the intersection of documentary filmmaking and activism. It scrutinizes questions surrounding representation, objectivity, and the responsibility of filmmakers in portraying complex social issues. By dissecting ethical dilemmas faced by documentary filmmakers in these case studies, this paper encourages a critical examination of the ethical boundaries and obligations in the realm of advocacy-driven filmmaking. In conclusion, this paper aims to shed light on the remarkable potential of documentary filmmaking as a catalyst for activism and social justice. Through the lens of compelling case studies, it illustrates the transformative power of the medium in effecting change, amplifying underrepresented voices, and mobilizing global audiences. It is hoped that this research will not only inform the discourse on documentary activism but also inspire filmmakers, scholars, and advocates to continue leveraging the cinematic art form as a formidable force for a more just and equitable world.
Keywords: film, filmmaker, documentary, human right
Procedia PDF Downloads 51
374 Monitoring Land Cover/Land Use Change in Rupandehi District by Optimising Remotely Sensed Image
Authors: Hritik Bhattarai
Abstract:
Land use and land cover play a crucial role in preserving and managing Earth's natural resources. Various factors, such as economic, demographic, social, cultural, technological, and environmental processes, contribute to changes in land use and land cover (LULC). Rupandehi District is significantly influenced by a combination of driving forces, including its geographical location, rapid population growth, economic opportunities, globalization, tourism activities, and political events. Urbanization and urban growth in the region have been occurring in an unplanned manner, with internal migration and natural population growth being the primary contributors. Internal migration, particularly from neighboring districts in the higher and lower Himalayan regions, has been high, leading to increased population growth and density. This study utilizes geospatial technology, specifically geographic information system (GIS), to analyze and illustrate the land cover and land use changes in the Rupandehi district for the years 2009 and 2019, using freely available Landsat images. The identified land cover categories include built-up area, cropland, Das-Gaja, forest, grassland, other woodland, riverbed, and water. The statistical analysis of the data over the 10-year period (2009-2019) reveals significant percentage changes in LULC. Notably, Das-Gaja shows a minimal change of 99.9%, while water and forest exhibit increases of 34.5% and 98.6%, respectively. Riverbed and built-up areas experience changes of 95.3% and 39.6%, respectively. Cropland and grassland, however, show concerning decreases of 102.6% and 140.0%, respectively. Other woodland also indicates a change of 50.6%. The most noteworthy trends are the substantial increase in water areas and built-up areas, leading to the degradation of agricultural and open spaces. This emphasizes the urgent need for effective urban planning activities to ensure the development of a sustainable city. 
While Das-Gaja seems unaffected, the decreasing trends in cropland and grassland, accompanied by the increasing built-up areas, are unsatisfactory. It is imperative for relevant authorities to be aware of these trends and implement proactive measures for sustainable urban development.
Keywords: land use and land cover, geospatial, urbanization, geographic information system, sustainable urban development
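The percentage-change statistics reported above follow from a straightforward comparison of classified areas between the two survey years. A minimal sketch; the class areas below are hypothetical placeholders, not the study's measurements:

```python
def percent_change(area_start, area_end):
    """Percentage change of one land-cover class between two survey years."""
    return (area_end - area_start) / area_start * 100.0

# Hypothetical class areas (hectares) for 2009 and 2019
areas_2009 = {"built-up": 5200.0, "cropland": 88000.0, "water": 3100.0}
areas_2019 = {"built-up": 7260.0, "cropland": 80100.0, "water": 4170.0}
changes = {cls: percent_change(areas_2009[cls], areas_2019[cls])
           for cls in areas_2009}
```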
Procedia PDF Downloads 56
373 Measuring Fluctuating Asymmetry in Human Faces Using High-Density 3D Surface Scans
Authors: O. Ekrami, P. Claes, S. Van Dongen
Abstract:
Fluctuating asymmetry (FA) has been studied for many years as an indicator of developmental stability or 'genetic quality', based on the assumption that perfect symmetry is the ideal expected outcome for a bilateral organism. Further studies have also investigated the possible link between FA and attractiveness or levels of masculinity or femininity. These hypotheses have mostly been examined using 2D images, with the structure of interest usually represented by a limited number of landmarks. Such methods have the downside of simplifying and reducing the dimensionality of the structure, which in turn increases the error of the analysis. In an attempt to reach more conclusive and accurate results, in this study we have used high-resolution 3D scans of human faces and have developed an algorithm to measure and localize FA, taking a spatially dense approach. A symmetric, spatially dense anthropometric mask with paired vertices is non-rigidly mapped onto target faces using an Iterative Closest Point (ICP) registration algorithm. A set of 19 manually indicated landmarks was used to examine the precision of the mapping step. The protocol's accuracy in measuring and localizing FA is assessed using simulated faces with known amounts of asymmetry added to them. The validation results show that the algorithm is fully capable of locating and measuring FA in 3D simulated faces. With such an algorithm, the additional information captured on asymmetry can be used to improve studies of FA as an indicator of fitness or attractiveness. The algorithm can be of particular benefit in studies with large numbers of subjects due to its automated and time-efficient nature. Additionally, the spatially dense approach provides information about the locality of FA, which is impossible to obtain using conventional methods.
It also enables us to analyze the asymmetry of morphological structures in a multivariate manner; this can be achieved using methods such as Principal Components Analysis (PCA) or factor analysis, which can be a step towards understanding the underlying processes of asymmetry. The method can also be used in combination with genome-wide association studies to help unravel the genetic bases of FA. To conclude, we introduce an algorithm to study and analyze asymmetry in human faces, with the possibility of extending the application to other morphological structures, in an automated, accurate and multivariate framework.
Keywords: developmental stability, fluctuating asymmetry, morphometrics, 3D image processing
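Once paired vertices have been mapped onto a face, a per-pair asymmetry score can be sketched as the distance between each vertex and the mirror image of its bilateral counterpart. This assumes the face is already aligned so that the midsagittal plane sits at x = 0 (in practice the ICP registration step handles alignment), and the coordinates below are hypothetical:

```python
import math

def mirror(point, axis=0):
    """Reflect a 3D point across the midline plane (x = 0 by default)."""
    p = list(point)
    p[axis] = -p[axis]
    return p

def asymmetry_scores(vertices, pairs):
    """For each (left, right) vertex pair, the distance between the left
    vertex and the reflected position of its right counterpart."""
    return [math.dist(vertices[i], mirror(vertices[j])) for i, j in pairs]

# Hypothetical paired vertices in a face-centered coordinate system
vertices = [(-30.0, 10.0, 5.0),   # 0: left eye corner
            (30.5, 10.2, 5.1)]    # 1: right eye corner
scores = asymmetry_scores(vertices, [(0, 1)])
```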
Procedia PDF Downloads 139
372 A Corpus-Based Study on the Lexical, Syntactic and Sequential Features across Interpreting Types
Authors: Qianxi Lv, Junying Liang
Abstract:
Among the various modes of interpreting, simultaneous interpreting (SI) is regarded as a 'complex' and 'extreme' cognitive task, while in consecutive interpreting (CI) the interpreter does not have to share processing capacity between concurrent tasks. Given that SI exerts great cognitive demand, it makes sense to posit that the output of SI may be more compromised than that of CI in its linguistic features. The bulk of the research has stressed the varying cognitive demands and processes involved in different modes of interpreting; however, related empirical research is sparse. In keeping with our interest in investigating the quantitative linguistic factors discriminating between SI and CI, the current study examines potential lexical simplification, syntactic complexity and sequential organization mechanisms using a self-built inter-modal corpus of transcribed simultaneous and consecutive interpretations, translated speech and original speech texts, with a total running-word count of 321,960. The lexical features are extracted in terms of lexical density, list head coverage, hapax legomena and type-token ratio, as well as core vocabulary percentage. Dependency distance, an index of syntactic complexity reflective of processing demand, is employed. The frequency motif, a sequential unit not bound to grammar, is used to visualize the local function distribution of the interpreting output. While SI is generally regarded as multitasking with high cognitive load, our findings show that CI may tax cognitive resources differently, and even more heavily, and hence yields more lexically and syntactically simplified output. In addition, the sequential features show that SI and CI organize sequences from the source text into the output in different ways, each minimizing its respective cognitive load. We interpret these results within a framework in which cognitive demand is exerted on both the maintenance and coordination components of working memory.
On the one hand, the information maintained in CI is inherently larger in volume than in SI. On the other hand, time constraints directly influence the sentence reformulation process. The temporal pressure from the input in SI allows interpreters to keep only a small chunk of information in the focus of attention. Thus, SI interpreters usually produce output that largely retains the source structure, so as to release information from working memory immediately after it is formulated in the target language. Conversely, CI interpreters receive at least a few sentences before reformulation, when they are more self-paced. CI interpreters may thus tend to retain and generate the information in a way that lessens the demand. In other words, interpreters cope with the high demand in the reformulation phase of CI by generating output with densely distributed function words, more content words of higher frequency values and fewer variations, simpler structures and more frequently used language sequences. We consequently propose a revised effort model based on these results for a better illustration of cognitive demand during both interpreting types.
Keywords: cognitive demand, corpus-based, dependency distance, frequency motif, interpreting types, lexical simplification, sequential units distribution, syntactic complexity
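Two of the measures named above are straightforward to compute from tokenized, dependency-parsed output. A minimal sketch; the toy sentence and head indices below are illustrative only, not drawn from the study's corpus:

```python
def type_token_ratio(tokens):
    """Lexical variation: distinct word forms divided by total tokens."""
    return len(set(tokens)) / len(tokens)

def mean_dependency_distance(heads):
    """Mean absolute distance between each word (1-indexed position) and its
    syntactic head; the root (head index 0) is conventionally excluded."""
    distances = [abs(head - pos) for pos, head in enumerate(heads, start=1)
                 if head != 0]
    return sum(distances) / len(distances)

tokens = "the interpreter kept the structure of the source".split()
ttr = type_token_ratio(tokens)              # 6 types over 8 tokens

# Head indices for "the cat sleeps": the->cat(2), cat->sleeps(3), sleeps->root(0)
mdd = mean_dependency_distance([2, 3, 0])
```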
Procedia PDF Downloads 175
371 Migration as a Trigger Causing Change to the Levant Literary Modernism
Authors: Aathira Peedikaparambil Somasundaran
Abstract:
The beginning of the 20th century marked the period when a new generation of Lebanese radicals sowed the seeds for the second phase of Levant literary modernism. During this era, Beirut fit every radical writer's criteria owing to its weakened censorship and political control, even though the prevailing political unsettlement offered no protective womb for the development of literary modernism. The time period examined in this paper coincides with the third stage of literary modernization, in which scholars used Western-inspired critical techniques to better understand their own cultures; this internationally inspired critical analysis of native cultural stimulants raised questions among Arab freethinking intellectuals. Locals who ventured abroad recognized the difference between the West's progress and their own nations' stagnation. The awareness of this 'gap of success' aroused ambition among journalists, authors, and proletarian revolutionaries who had studied in Europe and developed enlightened ideas. Some Middle Eastern authors and artists adopted contemporary social and political frameworks only after discovering Western modernity. After learning about the upheavals that were happening in the West, these thinkers aspired to bring about equally broad and drastic developments in the social, political, and cultural milieu of their own countries. These occurrences illustrate the power of migration to alter the cultural and literary scene in the Levant. This paper discusses the different effects of migration that contributed to Levant literary modernism.
The exploration of these factors as causes begins by addressing the politically influenced activism that has always been a relevant part of Beirut, and then turns to the psychological effects of migration on individuals in the society, which may over time have induced an accommodation of foreign thoughts and ideas as a coping mechanism. Nature and environmental stimuli, common triggers for any creative output that often exert their greatest influence during travel, are identified and analysed to inspect the extent of their impact on the exchange of ideas that resulted in Levant modernism. The efficiency of both the stimulating component of travel and the diaspora of the indigenous population, a by-product of travel, in catalysing modernism in the Levant has to be established in order to understand how migration indirectly affected the transmission and adoption of ideas in Levant literature. The paper revisits the events revolving around key players and platforms like Shir to understand how Lebanese literature, rooted in poetry, drastically mutated under the leadership of Adonis, Yusuf al-Khal, and other pioneers of Levant literary modernism. The conclusion identifies the triggers that helped authors overcome personal and geographical barriers to unite the West and the Levant, and investigates the extent to which bi-directional migration prompted a transformation in the local poetry. Consequently, the paper aims to shed light on the unique factors that provoked the shift in the literary scene of the twentieth-century Middle East.
Keywords: literature, modernism, Middle East, levant, Beirut
Procedia PDF Downloads 80
370 The Achievements and Challenges of Physics Teachers When Implementing Problem-Based Learning: An Exploratory Study Applied to Rural High Schools
Authors: Osman Ali, Jeanne Kriek
Abstract:
Introduction: The current instructional approach, entrenched in memorization, does not assist conceptual understanding in science. Instructional approaches that encourage research, investigation, and experimentation, which depict how scientists work, should be encouraged. One such teaching strategy is problem-based learning (PBL). PBL has many advantages: enhanced self-directed learning and improved problem-solving and critical thinking skills. However, despite its many advantages, PBL has challenges. Research confirms that it is time-consuming and that ill-structured questions are difficult to formulate. Professional development interventions are needed for in-service educators to adopt the PBL strategy. The purposively selected educators had to implement PBL in their classrooms after the intervention to develop their practice and then reflect on the implementation. They had to indicate their achievements and challenges. This study differs from previous studies in that the rural educators implemented PBL in their classrooms and reflected on their experiences, beliefs, and attitudes regarding PBL. Theoretical Framework: The study was grounded in Vygotskian sociocultural theory. According to Vygotsky, a child's cognitive development is sustained by the interaction between the child and more able peers in the immediate environment. The theory suggests that social interactions in small groups create an opportunity for learners to form concepts and skills better than working individually. PBL emphasizes learning in small groups. Research Methodology: An exploratory case study was employed, because the study did not aim to produce conclusive evidence. Non-probability purposive sampling was adopted to choose eight schools from 89 rural public schools. In each school, two educators teaching physical sciences in grades 10 and 11 were approached (N = 16). The research instruments were questionnaires, interviews, and a lesson observation protocol. 
Two open-ended questionnaires were developed, administered before and after the intervention, and analyzed thematically; three themes were identified. The semi-structured interview responses were transcribed and coded into three themes. Subsequently, the Reformed Teaching Observation Protocol (RTOP) was adopted for lesson observation and was analyzed using five constructs. Results: Evidence from analyzing the questionnaires before and after the intervention shows that participants knew better what was required to develop an ill-structured problem during the implementation. Furthermore, indications from the interviews are that participants had positive views about the PBL strategy. They stated that they only act as facilitators, and that learners' problem-solving and critical thinking skills are enhanced. They suggested a change in curriculum to adopt the PBL strategy. However, most participants may not continue to apply the PBL strategy, stating that it is time-consuming and makes it difficult to complete the Annual Teaching Plan (ATP). They complained about materials and equipment and about learners' readiness to work. Evidence from the RTOP shows that after the intervention, participants learned to encourage exploration and to use learners' questions and comments to determine the direction and focus of classroom discussions.
Keywords: problem-solving, self-directed, critical thinking, intervention
Procedia PDF Downloads 119
369 Nostalgia in Photographed Books for Children – the Case of Photography Books of Children in the Kibbutz
Authors: Ayala Amir
Abstract:
The paper presents interdisciplinary research which draws on literary study and the cultural study of photography to explore a literary genre defined by nostalgia: the photographed book for children. This genre, which was popular in the second half of the 20th century, presents the romantic, nostalgic image of childhood created in the visual arts in the 18th century (as suggested by Anne Higonnet). At the same time, it capitalizes on the nostalgia inherent in the event of photography as formulated by Jennifer Green-Lewis: photography frames a moment in the present while transforming it into a past longed for in the future. Unlike Freudian melancholy, nostalgia is an affect that enables representation by acknowledging the loss and containing it in the very experience of the object. The representation and preservation of the lost object (nature, childhood, innocence) are at the center of the genre of children's photography books, a modern version of the ancient pastoral. In it, the unique synergy of word and image results in a nostalgic image of childhood in an era already conquered by modernization. The nostalgic effect works both in the representation of space, an Edenic image of nature already shadowed by its demise, and of time, an image of childhood imbued with what Gil Bartholeyns calls the "looking backward aesthetics", under the sign of loss. Little critical attention has been devoted to this genre, with the exception of the work of Bettina Kümmerling-Meibauer, who noted the nostalgic effect of the well-known series of photography books by Astrid Lindgren and Anna Riwkin-Brick. This research aims to elaborate on Kümmerling-Meibauer's approach using theories from the study of photography and word-image studies, as well as current studies of childhood. The theoretical perspectives are implemented in a case study of photography books created in one of the most innovative social structures of our time: the Israeli Kibbutz. 
This communal way of life designed a society in which children would experience their childhood in a parentless rural environment that would save them from the fate of the Oedipal fall. It is suggested that in documenting these children in a fictional format, photographers and writers, images and words, cooperated in creating nostalgic works situated on the border between nature and culture, imagination and reality, utopia and its realization in history.
Keywords: nostalgia, photography, childhood, children's books, kibbutz
Procedia PDF Downloads 139
368 Use of 3D Printed Bioscaffolds from Decellularized Umbilical Cord for Cartilage Regeneration
Authors: Tayyaba Bari, Muhammad Hamza Anjum, Samra Kanwal, Fakhera Ikram
Abstract:
Osteoarthritis, a degenerative condition, affects more than 213 million individuals globally. Since articular cartilage has few or no blood vessels, it is unable to rejuvenate after deteriorating. Traditional approaches for cartilage repair, like autologous chondrocyte implantation, microfracture and cartilage transplantation, are often associated with postoperative complications and lead to further degradation. Decellularized human umbilical cord has gained interest as a viable treatment for cartilage repair. Decellularization removes all cellular contents as well as debris, leaving a biologically active 3D network known as the extracellular matrix (ECM). This matrix is biodegradable and non-immunogenic, and provides a microenvironment for homeostasis, growth and repair. UC-derived bioink functions as a 3D scaffolding material that mediates not only cell-matrix interactions but also the adherence, proliferation and propagation of cells for 3D organoids. This study comprises different physical, chemical and biological approaches to optimize the decellularization of human umbilical cord (UC) tissues, followed by solubilization of these tissues for bioink formation. The decellularization process consisted of two freeze-thaw cycles in which the umbilical cord, frozen at -20˚C, was thawed at room temperature and then dissected into small sections of 0.5 to 1 cm. Decellularization with the ionic and non-ionic detergents sodium dodecyl sulfate (SDS) and Triton X-100 revealed that both tested SDS concentrations, i.e., 0.1% and 1%, were effective in the complete removal of cells from the small UC tissues. The results of decellularization were further confirmed by running the samples on a 1% agarose gel. Histological analysis revealed the efficacy of decellularization: paraffin-embedded samples were cut into 4 μm sections and processed for hematoxylin-eosin-saffron and 4,6-diamidino-2-phenylindole (DAPI) staining. 
ECM preservation was confirmed by Alcian blue and Masson’s trichrome staining on consecutive sections, and images were obtained. Sulfated GAG content was determined by the 1,9-dimethyl-methylene blue (DMMB) assay, while collagen was quantified by the hydroxyproline assay. This 3D bioengineered scaffold will provide an environment typical of the extracellular matrix of the tissue and will be seeded with mesenchymal cells to generate the desired 3D ink for in vitro and in vivo cartilage regeneration applications.
Keywords: umbilical cord, 3D printing, bioink, tissue engineering, cartilage regeneration
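Colorimetric assays such as DMMB are typically read off a linear standard curve; the following is a minimal sketch of that quantification step only, with invented standard values and function names, and is not the authors' laboratory protocol:

```python
def fit_standard_curve(concentrations, absorbances):
    """Ordinary least-squares line through the standards:
    absorbance = slope * concentration + intercept."""
    n = len(concentrations)
    mean_x = sum(concentrations) / n
    mean_y = sum(absorbances) / n
    slope = sum((x - mean_x) * (y - mean_y)
                for x, y in zip(concentrations, absorbances))
    slope /= sum((x - mean_x) ** 2 for x in concentrations)
    intercept = mean_y - slope * mean_x
    return slope, intercept

def concentration_from_absorbance(absorbance, slope, intercept):
    """Invert the fitted line to read an unknown sample off the standards."""
    return (absorbance - intercept) / slope

# Hypothetical sGAG standards (concentration in ug/mL vs. absorbance).
slope, intercept = fit_standard_curve([0.0, 1.0, 2.0, 3.0],
                                      [0.1, 0.3, 0.5, 0.7])
sample_sgag = concentration_from_absorbance(0.4, slope, intercept)
```

The same two helpers would serve the hydroxyproline readout, since both assays reduce to interpolating an unknown against a standard line.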
Procedia PDF Downloads 97
367 Depictions of Human Cannibalism and the Challenge They Pose to the Understanding of Animal Rights
Authors: Desmond F. Bellamy
Abstract:
Discourses about animal rights usually assume an ontological abyss between human and animal. This supposition of non-animality allows us to utilise and exploit non-humans, particularly those with commercial value, with little regard for their rights or interests. We can and do confine them, inflict painful treatments such as castration and branding, and slaughter them at an age determined only by financial considerations. This paper explores the way images and texts depicting human cannibalism reflect this deprivation of rights back onto our species and examines how this offers new perspectives on our granting or withholding of rights to farmed animals. The animals we eat – sheep, pigs, cows, chickens and a small handful of other species – are, during processing, de-animalised, turned into commodities, and made unrecognisable as formerly living beings. To do the same to a human requires the cannibal to enact another step – humans must first be considered as animals before they can be commodified or de-animalised. Different iterations of cannibalism in a selection of fiction and non-fiction texts will be considered: survivalism (necessitated by catastrophe or dystopian social collapse), the primitive savage of colonial discourses, and the inhuman psychopath. Each type of cannibalism shows alternative ways humans can be animalised and thereby dispossessed of both their human and animal rights. Human rights, summarised in the UN Universal Declaration of Human Rights as ‘life, liberty, and security of person’, are stubbornly denied to many humans and are refused to virtually all farmed non-humans. How might this paradigm be transformed by seeing the animal victim replaced by an animalised human? People are fascinated as well as repulsed by cannibalism, as demonstrated by the upsurge of films on the subject in the last few decades. Cannibalism is, at its most basic, about envisaging and treating humans as objects: meat. 
It is on the dinner plate that the abyss between human and ‘animal’ is most challenged. We grasp at a conscious level that we are a species of animal and may become, if in the wrong place (e.g., shark-infested water), ‘just food’. Culturally, however, strong traditions insist that humans are much more than ‘just meat’ and deserve a better fate than torment and death. The billions of animals on death row awaiting human consumption would ask the same if they could. Depictions of cannibalism demonstrate in graphic ways that humans are animals, made of meat, and that we can also be butchered and eaten. These depictions of us as having the same fleshiness as non-human animals remind us that they have the same capacities for pain and pleasure as we do. Depictions of cannibalism, therefore, unconsciously aid in deconstructing the human/animal binary and give a unique glimpse into the often unnoticed repudiation of animal rights.
Keywords: animal rights, cannibalism, human/animal binary, objectification
Procedia PDF Downloads 137
366 The Analysis of Noise Harmfulness in Public Utility Facilities
Authors: Monika Sobolewska, Aleksandra Majchrzak, Bartlomiej Chojnacki, Katarzyna Baruch, Adam Pilch
Abstract:
The main purpose of the study is to perform measurement and analysis of noise harmfulness in public utility facilities. The World Health Organization reports that the number of people suffering from hearing impairment is constantly increasing. Most alarming is the number of young people appearing in the statistics. The majority of scientific research in the field of hearing protection and noise prevention concerns industrial and road traffic noise as the source of health problems. As a result, corresponding standards and regulations defining noise level limits are enforced. However, there is another field left uncovered by profound research: leisure time. Public utility facilities such as clubs, shopping malls, sport facilities or concert halls all generate high-level noise while remaining outside proper juridical control. Among European Union Member States, the highest legislative act concerning noise prevention is the Environmental Noise Directive 2002/49/EC. However, it omits the problem discussed above, and even for traffic, railway and aircraft noise it does not set limits or target values, leaving these issues to the discretion of the Member State authorities. Without explicit and uniform regulations, noise level control at places designed for relaxation and entertainment is often the responsibility of people having little knowledge of hearing protection, unaware of the risk that noise pollution poses. Exposure to high sound levels in clubs, cinemas, at concerts and sports events may result in progressive hearing loss, especially among young people, who are the main target group of such facilities and events. The first step toward changing this situation and raising general awareness is to perform reliable measurements whose results will emphasize the significance of the problem. This project presents the results of more than a hundred measurements performed in most types of public utility facilities in Poland. 
As the most suitable measuring instruments for such research, personal noise dosimeters were used to collect the data. Each measurement is presented in the form of numerical results, including equivalent and peak sound pressure levels, and a detailed description considering the type of the sound source, the size and furnishing of the room, and a subjective sound level evaluation. In the absence of a direct reference point for the interpretation of the data, the limits specified in EU Directive 2003/10/EC were used for comparison. They set maximum sound level values for workers in relation to the length of their working time. The analysis of the examined problem leads to the conclusion that during leisure time, people are exposed to noise levels significantly exceeding safe values. As hearing problems progress gradually, most people downplay the problem, ignoring the first symptoms. Therefore, an effort has to be made to specify noise regulations for public utility facilities. Without any action, in the foreseeable future the majority of Europeans will be dealing with serious hearing damage, which will have a negative impact on society as a whole.
Keywords: hearing protection, noise level limits, noise prevention, noise regulations, public utility facilities
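The equivalent sound pressure level a dosimeter reports is the energy average of the sampled levels over the measurement period. A minimal sketch of that standard computation (not the instrument's own firmware; the sample values below are invented):

```python
import math

def equivalent_level(spl_samples_db):
    """Leq: convert each dB sample to relative energy, average the
    energies, and convert the mean back to decibels."""
    mean_energy = sum(10 ** (level / 10) for level in spl_samples_db)
    mean_energy /= len(spl_samples_db)
    return 10 * math.log10(mean_energy)

# A steady level averages to itself; a single loud interval dominates
# the energy average, which is why peaks matter for hearing risk.
steady = equivalent_level([80.0, 80.0, 80.0])      # 80 dB
with_peak = equivalent_level([80.0, 80.0, 95.0])   # well above 80 dB
```

This energy averaging explains why a venue can exceed the EU Directive 2003/10/EC exposure limits even when its quieter intervals feel tolerable.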
Procedia PDF Downloads 222
365 Cytokine Profiling in Cultured Endometrial Cells after Hormonal Treatment
Authors: Mark Gavriel, Ariel J. Jaffa, Dan Grisaru, David Elad
Abstract:
The human endometrium-myometrium interface (EMI) is the uterine inner barrier without a separating layer. It is composed of endometrial epithelial cells (EEC) and endometrial stromal cells (ESC) in the endometrium and myometrial smooth muscle cells (MSMC) in the myometrium. The EMI undergoes structural remodeling during the menstrual cycle that is essential for human reproduction. Recently, we co-cultured a layer-by-layer in vitro model of EEC, ESC and MSMC on a synthetic membrane for mechanobiology experiments. We also treated the model with progesterone and β-estradiol in order to mimic the in vivo receptive uterus. In the present study, we analyzed the cytokine profile in a single EEC layer of the hormone-treated in vitro model of the EMI. The methodology of this research relies on simple tissue engineering. First, we cultured commercial EEC (RL95-2, ATCC® CRL-1671™) in a 24-well plate. Then, we applied a hormonal stimulation protocol with 17-β-estradiol and progesterone at time-dependent concentrations mimicking the menstrual cycle of human physiology. We collected cell supernatant samples of the control, pre-ovulation, ovulation and post-ovulation periods for analysis of the secreted proteins and cytokines. Cytokine profiling was performed using the Proteome Profiler Human XL Cytokine Array Kit (R&D Systems, Inc., USA), which can detect 105 human soluble cytokines. The relative quantification of all the cytokines will be analyzed using xMAP – LUMINEX. We conducted a fishing expedition with the four membranes of the Proteome Profiler. We processed the images, quantified the spot intensities and normalized these values by the negative control and reference spots on the membrane. Analysis of the relative quantities that reflected a change greater than 5% of the kit's control points clearly showed significant changes in cytokine levels for the inflammation and angiogenesis pathways. 
Analysis of tissue-engineered models of the uterine wall will enable deeper investigation of molecular and biomechanical aspects of early reproductive stages (e.g., the window of implantation) and of the development of pathologies.
Keywords: tissue engineering, hormonal stimuli, reproduction, multi-layer uterine model, progesterone, β-estradiol, receptive uterine model, fertility
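The spot-intensity normalization and the 5%-change screen described above can be sketched as follows; the function names, the exact form of the criterion, and the intensity values are assumptions for illustration, not the study's actual image-processing pipeline:

```python
def normalize_spots(raw_intensities, reference, negative):
    """Background-subtract each spot by the negative-control intensity
    and scale by the reference spots so membranes are comparable."""
    span = reference - negative
    return {name: (value - negative) / span
            for name, value in raw_intensities.items()}

def changed_cytokines(treated, control, fraction=0.05):
    """Flag cytokines whose normalized level differs from control by
    more than the given fraction of the control value."""
    return sorted(name for name, level in treated.items()
                  if abs(level - control[name]) > fraction * control[name])

# Hypothetical spot intensities for two cytokines on two membranes.
control = normalize_spots({"IL-6": 600.0, "VEGF": 850.0}, 1100.0, 100.0)
treated = normalize_spots({"IL-6": 850.0, "VEGF": 860.0}, 1100.0, 100.0)
hits = changed_cytokines(treated, control)   # ["IL-6"]
```

Normalizing against the membrane's own reference and negative spots, rather than comparing raw intensities, is what makes the pre-ovulation, ovulation and post-ovulation membranes comparable to one another.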
Procedia PDF Downloads 130
364 On the Bias and Predictability of Asylum Cases
Authors: Panagiota Katsikouli, William Hamilton Byrne, Thomas Gammeltoft-Hansen, Tijs Slaats
Abstract:
An individual who demonstrates a well-founded fear of persecution or faces a real risk of being subjected to torture is eligible for asylum. In Danish law, the exact legal thresholds reflect those established by international conventions, notably the 1951 Refugee Convention and the 1950 European Convention on Human Rights. These international treaties, however, remain largely silent when it comes to how states should assess asylum claims. As a result, national authorities are typically left to determine an individual’s legal eligibility on a narrow basis consisting of an oral testimony, which may itself be hampered by several factors, including imprecise language interpretation, insecurity, or lacking trust towards the authorities among applicants. The shaky ground on which authorities must base their subjective perceptions of asylum applicants' credibility raises the question of whether, in all cases, adjudicators make the correct decision. Moreover, the subjective element in these assessments raises questions about whether individual asylum cases could be afflicted by implicit biases or stereotyping amongst adjudicators. In fact, recent studies have uncovered significant correlations between decision outcomes and the experience and gender of the assigned judge, as well as correlations between asylum outcomes and entirely external events such as weather and political elections. In this study, we analyze a publicly available dataset containing approximately 8,000 summaries of asylum cases that were initially rejected and then re-tried by the Refugee Appeals Board (RAB) in Denmark. First, we look for variations in the recognition rates with regard to a number of applicants’ features: their country of origin/nationality, their identified gender, their identified religion, their ethnicity, whether torture was mentioned in their case and, if so, whether the claim was supported or not, and the year the applicant entered Denmark. 
In order to extract those features from the text summaries, as well as the final decision of the RAB, we applied natural language processing and regular expressions, adjusted for the Danish language. We observed interesting variations in recognition rates related to the applicants’ country of origin, ethnicity, year of entry, and the support or not of torture claims, whenever those were made in the case. The appearance (or not) of significant variations in the recognition rates does not necessarily imply (or rule out) bias in the decision-making process. None of the considered features, with the possible exception of the torture claims, should be decisive factors for an asylum seeker’s fate. We therefore investigate whether the decision can be predicted on the basis of these features and, consequently, whether biases are likely to exist in the decision-making process. We employed a number of machine learning classifiers and found that, when using the applicant’s country of origin, religion, ethnicity and year of entry, a random forest classifier and a decision tree reach prediction accuracies as high as 82% and 85%, respectively, suggesting that these features carry potentially predictive properties with regard to the outcome of an asylum case. Our analysis and findings call for further investigation of the predictability of the outcome on a larger dataset of 17,000 cases, which is underway.
Keywords: asylum adjudications, automated decision-making, machine learning, text mining
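The regular-expression feature extraction step can be sketched with the standard library; the study parsed Danish-language summaries, so the English patterns, field names and example sentence below are purely illustrative assumptions:

```python
import re

# Illustrative patterns only; the real pipeline targeted Danish text.
PATTERNS = {
    "nationality": re.compile(r"citizen of (?P<value>[A-Z][a-z]+)"),
    "entry_year": re.compile(r"entered Denmark in (?P<value>\d{4})"),
}
TORTURE = re.compile(r"\btorture\b", re.IGNORECASE)

def extract_features(summary):
    """Turn one free-text case summary into a structured record
    suitable for feeding a classifier."""
    record = {name: (m.group("value") if (m := p.search(summary)) else None)
              for name, p in PATTERNS.items()}
    record["torture_mentioned"] = bool(TORTURE.search(summary))
    return record

example = extract_features(
    "The applicant, a citizen of Eritrea, entered Denmark in 2015 "
    "and stated that he had been subjected to torture."
)
```

Records of this shape, one per case, are what a random forest or decision tree would then be trained on after the categorical fields are encoded.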
Procedia PDF Downloads 92
363 Using Photogrammetric Techniques to Map the Mars Surface
Authors: Ahmed Elaksher, Islam Omar
Abstract:
For many years, the surface of Mars has been a mystery for scientists. Lately, with the help of geospatial data and photogrammetric procedures, researchers have been able to capture some insights about this planet. Two of the most important data sources for exploring Mars are the High Resolution Imaging Science Experiment (HiRISE) and the Mars Orbiter Laser Altimeter (MOLA). HiRISE is one of six science instruments carried by the Mars Reconnaissance Orbiter, launched August 12, 2005, and managed by NASA. The MOLA sensor is a laser altimeter carried by the Mars Global Surveyor (MGS), launched on November 7, 1996. In this project, we used MOLA-based DEMs to orthorectify HiRISE optical images in order to generate a more accurate and trustworthy surface model of Mars. The MOLA data were interpolated using the kriging interpolation technique. Corresponding tie points were digitized from both datasets and employed in co-registering the datasets using GIS analysis tools. We employed three different 3D-to-2D transformation models: the parallel projection (3D affine) transformation model, the extended parallel projection transformation model, and the Direct Linear Transformation (DLT) model. The digitized tie points were split into two sets: Ground Control Points (GCPs), used to estimate the transformation parameters using least squares adjustment techniques, and check points (ChkPs), used to evaluate the computed transformation parameters. Results were evaluated using the RMSEs between the precise horizontal coordinates of the digitized check points and those estimated through the transformation models using the computed transformation parameters. For each set of GCPs, three different configurations of GCPs and check points were tested, and average RMSEs are reported. It was found that for the 2D transformation models, average RMSEs were in the range of five meters. 
Increasing the number of GCPs from six to ten points improved the accuracy of the results by about two and a half meters. Further increasing the number of GCPs didn't improve the results significantly. Using the 3D-to-2D transformation models provided an accuracy of two to three meters. The best results were reported using the DLT transformation model; however, increasing the number of GCPs didn't have a substantial effect. The results support the use of the DLT model, as it provides the required accuracy for ASPRS large-scale mapping standards. However, a well-distributed set of GCPs is key to achieving such accuracy. The model is simple to apply and doesn't need substantial computations.
Keywords: Mars, photogrammetry, MOLA, HiRISE
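As an illustration of the simplest of the three models, the 8-parameter parallel (3D affine) projection and the check-point RMSE used for evaluation can be sketched as follows; the parameter values here are invented for the example, whereas in the project they would come from the least squares adjustment over the GCPs:

```python
import math

def parallel_project(params, ground_point):
    """8-parameter parallel (3D affine) projection of a ground point
    (X, Y, Z) to image coordinates (x, y):
    x = a1*X + a2*Y + a3*Z + a4,  y = b1*X + b2*Y + b3*Z + b4."""
    a1, a2, a3, a4, b1, b2, b3, b4 = params
    X, Y, Z = ground_point
    return (a1 * X + a2 * Y + a3 * Z + a4,
            b1 * X + b2 * Y + b3 * Z + b4)

def horizontal_rmse(estimated, reference):
    """RMSE between transformed check points and their precise
    reference coordinates, the evaluation metric of the study."""
    squared = sum((xe - xr) ** 2 + (ye - yr) ** 2
                  for (xe, ye), (xr, yr) in zip(estimated, reference))
    return math.sqrt(squared / len(estimated))

# Identity-like parameters with a translation, for illustration only.
params = (1.0, 0.0, 0.0, 5.0, 0.0, 1.0, 0.0, 5.0)
projected = parallel_project(params, (2.0, 3.0, 7.0))   # (7.0, 8.0)
```

The DLT model adds a rational denominator in the same spirit, which is why it can absorb perspective effects that the parallel projection cannot.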
Procedia PDF Downloads 56
362 Carlos Guillermo ‘Cubena’ Wilson's Literary Texts as Platforms for Social Commentary and Critique of Panamanian Society
Authors: Laverne Seales
Abstract:
When most people think of Panama, they immediately think of the Canal; however, its construction and the people who made it possible are often omitted and seldom acknowledged. The reality is that the construction of this waterway was achieved through forced migration and discriminatory practices toward people of African descent, specifically black people from the Caribbean. From the colonial period to the opening and subsequent operation of the Panama Canal by the United States, this paper goes through the rich layers of Panamanian history to examine the lives of Afro-Caribbeans and their descendants in Panama. It also considers the role of the United States in Panama, exploring how the United States' presence forged a racially complex country that made the integration of Afro-Caribbeans and their descendants difficult. After laying a historical foundation, the experiences of Afro-Caribbean people and Panamanians of Afro-Caribbean descent are analyzed through Afro-Panamanian writer Carlos Guillermo ‘Cubena’ Wilson's novels, short stories, and poetry. This study focuses on how Cubena addresses racism, discrimination, inequality, and social justice issues concerning Afro-Caribbeans and their descendants who traveled to Panama to construct the Canal. Content analysis methodology can yield several significant contributions, and analyzing Carlos Guillermo Wilson's literature under this framework allows us to consider social commentary and critique of Panamanian society. It identifies the social issues and concerns of Afro-Caribbeans and people of Afro-Caribbean descent, such as inequality, corruption, racism, political oppression, and cultural identity. The methodology also allows us to explore how Cubena's literature engages with questions of cultural identity and belonging in Panamanian society. 
By examining themes related to race, ethnicity, language, and heritage, this research uncovers the complexities of Panamanian cultural identity, allowing us to interrogate power dynamics and social hierarchies in Panamanian society. Analyzing the portrayal of different social groups, institutions, and power structures helps uncover how power is wielded, contested, and resisted; Cubena's fictional world allows us to see how it functions in Panama. Content analysis methodology also provides a means of critiquing political systems and governance in Panama. By examining the representation and presentation of political figures, institutions, and events in Cubena's literature, we uncover his commentary on corruption, authoritarianism, governance, and the role of the United States in Panama. Content analysis further highlights how Wilson's literature amplifies the voices and experiences of marginalized individuals and communities in Panamanian society. By centering the narratives of Afro-Panamanians and other marginalized groups, this research uncovers Cubena's commitment to social justice and inclusion in his writing and helps the reader engage with historical narratives and collective memory in Panama. Overall, analyzing Carlos Guillermo ‘Cubena’ Wilson's literature as a platform for social commentary and critique of Panamanian society using content analysis methodology provides valuable insights into the cultural, social, and political dimensions of Afro-Panamanian life during and after the construction of the Panama Canal.
Keywords: Afro-Caribbean, Panama Canal, race, Afro-Panamanian, identity, history
Procedia PDF Downloads 41