Search results for: push time
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 18420

14880 Big Data Analysis Approach for Comparison New York Taxi Drivers' Operation Patterns between Workdays and Weekends Focusing on the Revenue Aspect

Authors: Yongqi Dong, Zuo Zhang, Rui Fu, Li Li

Abstract:

The records generated by GPS-equipped taxicabs are of vital importance for studying human mobility behavior; here we focus on taxi drivers' operation strategies on workdays and weekends, both temporally and spatially. We identify a group of valuable characteristics of large-scale driver behavior in a complex metropolitan environment. Based on the daily operations of 31,000 taxi drivers in New York City, we classify drivers into top, ordinary, and low-income groups according to their monthly working load, daily income, daily ranking, and the variance of the daily rank. We then apply big data analysis and visualization methods to compare the characteristics of top, ordinary, and low-income drivers in their choice of working time and working area, as well as their strategies on workdays versus weekends. The results verify that top drivers do have special operation tactics that help them serve more passengers and travel faster, thus earning more money per unit time. This research opens new possibilities for fully utilizing urban taxicab data to estimate human behavior, which is useful not only for individual taxicab drivers but also for policy-makers in city authorities.
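As an illustration of the grouping step described above, the following sketch (not from the paper; the column names, file name, and tercile cut-offs are assumptions) shows how daily trip records could be aggregated per driver and split into top, ordinary, and low-income groups with pandas.

```python
import pandas as pd

# trips: one row per driver-day; column names assumed for illustration only
trips = pd.read_csv("nyc_taxi_daily.csv")  # driver_id, date, daily_income, hours_worked

# Aggregate each driver's operation over the study period
per_driver = trips.groupby("driver_id").agg(
    monthly_load=("hours_worked", "sum"),
    mean_daily_income=("daily_income", "mean"),
    rank_variance=("daily_income", lambda s: s.rank(pct=True).var()),
)

# Split drivers into three income groups by terciles of mean daily income
per_driver["group"] = pd.qcut(
    per_driver["mean_daily_income"], q=3, labels=["low", "ordinary", "top"]
)

# Compare groups, e.g. average working load per group
print(per_driver.groupby("group", observed=True)["monthly_load"].mean())
```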

Keywords: big data, operation strategies, comparison, revenue, temporal, spatial

Procedia PDF Downloads 228
14879 An Enhanced Hybrid Backoff Technique for Minimizing the Occurrence of Collision in Mobile Ad Hoc Networks

Authors: N. Sabiyath Fatima, R. K. Shanmugasundaram

Abstract:

In Mobile Ad hoc Networks (MANETs), every node acts as both transmitter and receiver. The existing backoff models do not accurately predict the performance of the wireless network, and they also suffer from a high rate of packet collisions. Every time a collision happens, the station’s contention window (CW) is doubled until it reaches its maximum value. The main objective of this paper is to reduce collisions by means of a Contention Window Multiplicative Increase Decrease Backoff (CWMIDB) scheme. The intention of increasing the CW is to reduce the collision probability by spreading the traffic over a larger interval of time. Within wireless ad hoc networks, the CWMIDB algorithm dynamically controls the contention window of the nodes experiencing collisions. During packet transmission, the backoff counter is uniformly selected from the range [0, CW-1]. Here, CW denotes the contention window, and its value depends on the number of unsuccessful transmission attempts made for the packet. On the initial transmission attempt, CW is set to its minimum value (CWmin); if the transmission attempt fails, the value is doubled, and it is reset to the minimum value after a successful transmission. CWMIDB is simulated in the NS2 environment and its performance is compared with the Binary Exponential Backoff algorithm. The simulation results show an improvement in transmission probability compared to that of the existing backoff algorithm.
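A minimal sketch of the contention-window update rule described above (the CWmin/CWmax values and the multiplicative decrease factor are assumptions for illustration; the abstract only states that the window grows on collision and is reduced on success):

```python
import random

CW_MIN, CW_MAX = 32, 1024          # assumed window bounds (slots)
INCREASE, DECREASE = 2.0, 0.5      # multiplicative factors (illustrative)

def next_backoff(cw: int, collided: bool) -> tuple[int, int]:
    """Update the contention window after a transmission attempt and
    draw the next backoff counter uniformly from [0, CW-1]."""
    if collided:
        cw = min(int(cw * INCREASE), CW_MAX)   # multiplicative increase on collision
    else:
        cw = max(int(cw * DECREASE), CW_MIN)   # multiplicative decrease on success
    return cw, random.randint(0, cw - 1)

cw = CW_MIN
for outcome in [True, True, False, True, False, False]:   # example collision pattern
    cw, slot = next_backoff(cw, collided=outcome)
    print(f"collision={outcome}, CW={cw}, backoff slot={slot}")
```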

Keywords: backoff, contention window, CWMIDB, MANET

Procedia PDF Downloads 280
14878 Unveiling the Potential of MoSe₂ for Toxic Gas Sensing: Insights from Density Functional Theory and Non-equilibrium Green’s Function Calculations

Authors: Si-Jie Ji, Santhanamoorthi Nachimuthu, Jyh-Chiang Jiang

Abstract:

With the rapid development of industrialization and urbanization, air pollution poses significant global environmental challenges, contributing to acid rain, global warming, and adverse health effects. Therefore, it is necessary to monitor the concentration of toxic gases in the atmospheric environment in real-time and to deploy cost-effective gas sensors capable of detecting their emissions. In this study, we systematically investigated the sensing capabilities of the two-dimensional MoSe₂ for seven key environmental gases (NO, NO₂, CO, CO₂, SO₂, SO₃, and O₂) using density functional theory (DFT) and non-equilibrium Green’s function (NEGF) calculations. We also investigated the impact of H₂O as an interfering gas. Our results indicate that the MoSe₂ monolayer is thermodynamically stable and exhibits strong gas-sensing capabilities. The calculated adsorption energies indicate that these gases can stably adsorb on MoSe₂, with SO₃ exhibiting the strongest adsorption energy (-0.63 eV). Electronic structure analysis, including projected density of states (PDOS) and Bader charge analysis, demonstrates significant changes in the electronic properties of MoSe₂ upon gas adsorption, affecting its conductivity and sensing performance. We find that oxygen (O₂) adsorption notably influenced the deformation of MoSe₂. To comprehensively understand the potential of MoSe₂ as a gas sensor, we used the NEGF method to assess the electronic transport properties of MoSe₂ under gas adsorption, evaluating current-voltage (I-V), resistance-voltage (R-V) characteristics, and transmission spectra to determine sensitivity, selectivity, and recovery time compared to pristine MoSe₂. Sensitivity, selectivity, and recovery time are analyzed at a bias voltage of 1.7V, showing excellent performance of MoSe₂ in detecting SO₃, among other gases. The pronounced changes in electronic transport behavior induced by SO₃ adsorption confirm MoSe₂’s strong potential as a high-performance gas-sensing material. Overall, this theoretical study provides new insights into the development of high-performance gas sensors, demonstrating the potential of MoSe₂ as a gas-sensing material, particularly for gases like SO₃.

Keywords: density functional theory, gas sensing, MoSe₂, non-equilibrium Green’s function, SO

Procedia PDF Downloads 25
14877 Study of Rehydration Process of Dried Squash (Cucurbita pepo) at Different Temperatures and Dry Matter-Water Ratios

Authors: Sima Cheraghi Dehdezi, Nasser Hamdami

Abstract:

Air-drying is the most widely employed method for preserving fruits and vegetables. Most dried products must be rehydrated by immersion in water prior to their use, so the study of rehydration kinetics, in order to optimize the rehydration phenomenon, is of great importance. Rehydration typically comprises three simultaneous processes: the imbibition of water into the dried material, the swelling of the rehydrated product, and the leaching of soluble solids into the rehydration medium. In this research, squash (Cucurbita pepo) fruits were cut into slices 0.4 cm thick and 4 cm in diameter. The squash slices were then blanched in a steam chamber for 4 min. After cooling to room temperature, the slices were dehydrated in a hot air dryer, under an air flow of 1.5 m/s and an air temperature of 60°C, down to a moisture content of 0.1065 kg H2O per kg d.m. Dehydrated samples were kept in polyethylene bags and stored at 4°C. Squash slices of specified weight were rehydrated by immersion in distilled water at different temperatures (25, 50, and 75°C) and various dry matter-water ratios (1:25, 1:50, and 1:100), agitated at 100 rpm. At specified time intervals, up to 300 min, the squash samples were removed from the water, and the weight, moisture content, and rehydration indices of the samples were determined. The texture characteristics were examined over a 180 min period. The results showed that rehydration time and temperature had significant effects on moisture content, water absorption capacity (WAC), dry matter holding capacity (DHC), rehydration ability (RA), maximum force, and stress in dried squash slices. The dry matter-water ratio had a significant effect (p < 0.01) on all squash slice properties except DHC. Moisture content, WAC, and RA of squash slices increased, whereas DHC and texture firmness (maximum force and stress) decreased with rehydration time. The maximum moisture content, WAC, and RA and the minimum DHC, force, and stress were observed in squash slices rehydrated in 75°C water. The lowest moisture content, WAC, and RA and the highest DHC, force, and stress were observed in squash slices immersed in water at a 1:100 dry matter-water ratio. In general, for all rehydration conditions, the highest water absorption rate occurred during the first minutes of the process, after which the rate decreased. The highest rehydration rate and amount of water absorbed occurred at 75°C.
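For readers unfamiliar with the rehydration indices mentioned above, the sketch below computes WAC, DHC, and RA from sample masses and dry-matter fractions using one common set of definitions (Lewicki-type indices); the exact definitions used in the study are not given in the abstract, so treat these formulas and the example numbers as assumptions.

```python
def rehydration_indices(m0, s0, md, sd, mr, sr):
    """Rehydration indices from mass (g) and dry-matter fraction (-)
    before drying (m0, s0), after drying (md, sd) and after rehydration (mr, sr).
    WAC and DHC are dimensionless; RA = WAC * DHC."""
    wac = (mr * (1 - sr) - md * (1 - sd)) / (m0 * (1 - s0) - md * (1 - sd))
    dhc = (mr * sr) / (md * sd)
    ra = wac * dhc
    return wac, dhc, ra

# Illustrative numbers only (not measured values from the study)
wac, dhc, ra = rehydration_indices(m0=10.0, s0=0.08, md=1.0, sd=0.90, mr=7.5, sr=0.11)
print(f"WAC={wac:.2f}, DHC={dhc:.2f}, RA={ra:.2f}")
```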

Keywords: dry matter-water ratio, squash, maximum force, rehydration ability

Procedia PDF Downloads 314
14876 Different Sampling Schemes for Semi-Parametric Frailty Model

Authors: Nursel Koyuncu, Nihal Ata Tutkun

Abstract:

A frailty model is a survival model that takes into account unobserved heterogeneity when exploring the relationship between the survival of an individual and several covariates. In recent years, the proposed survival models have become more complex, and this causes convergence problems, especially in large data sets. Therefore, the selection of a sample from these big data sets is very important for the estimation of parameters. In the sampling literature, some authors have defined new sampling schemes to estimate the parameters correctly. To this end, we examine the effect of the sampling design in a semi-parametric frailty model. We conducted a simulation study in R to estimate the parameters of the semi-parametric frailty model for different sample sizes and censoring rates under classical simple random sampling and ranked set sampling schemes. In the simulation study, we used as the population a data set recording 17,260 male civil servants aged 40–64 years with complete 10-year follow-up. Time to death from coronary heart disease is treated as the survival time, and age and systolic blood pressure are used as covariates. We selected 1,000 samples from the population using the different sampling schemes and estimated the parameters. From the simulation study, we concluded that the ranked set sampling design performs better than simple random sampling in each scenario.
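To make the comparison concrete, here is a minimal sketch of how a ranked set sample could be drawn from such a population (the set size, number of cycles, and stand-in survival-time distribution are assumptions; in practice the ranking is often done on an inexpensive auxiliary variable rather than the study variable itself).

```python
import numpy as np

rng = np.random.default_rng(42)

def ranked_set_sample(population, set_size=3, cycles=10):
    """Draw a ranked set sample: in each cycle, form `set_size` random sets of
    `set_size` units, rank each set, and keep the i-th order statistic from the i-th set."""
    sample = []
    for _ in range(cycles):
        for i in range(set_size):
            candidates = rng.choice(population, size=set_size, replace=False)
            sample.append(np.sort(candidates)[i])   # keep the i-th smallest
    return np.array(sample)

population = rng.lognormal(mean=4.0, sigma=0.5, size=17260)   # stand-in for survival times
rss = ranked_set_sample(population, set_size=3, cycles=10)     # n = 30
srs = rng.choice(population, size=rss.size, replace=False)     # same-size SRS for comparison
print(rss.mean(), srs.mean(), population.mean())
```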

Keywords: frailty model, ranked set sampling, efficiency, simple random sampling

Procedia PDF Downloads 214
14875 Distributional and Developmental Analysis of PM2.5 in Beijing, China

Authors: Alexander K. Guo

Abstract:

PM2.5 poses a large threat to people’s health and the environment and is an issue of large concern in Beijing, brought to the attention of the government by the media. In addition, both the United States Embassy in Beijing and the government of China have increased monitoring of PM2.5 in recent years and have made real-time data available to the public. This report utilizes hourly historical data (2008-2016) from the U.S. Embassy in Beijing for the first time. The first objective was to attempt to fit probability distributions to the data to better predict the number of days exceeding the standard, and the second was to uncover any yearly, seasonal, monthly, daily, and hourly patterns and trends that may arise, to better inform air control policy. In these data, 66,650 hours and 2,687 days provided valid data. Lognormal, gamma, and Weibull distributions were fit to the data through an estimation of parameters. The chi-squared test was employed to compare the actual data with the fitted distributions. The data were used to uncover trends, patterns, and improvements in PM2.5 concentration over the period with valid data, in addition to specific periods of time that received large amounts of media attention, which were analyzed to gain a better understanding of the causes of air pollution. The data show a clear indication that Beijing’s air quality is unhealthy, with an average of 94.07 µg/m3 across all 66,650 hours with valid data. It was found that no distribution fit the entire dataset of all 2,687 days well, but each of the three above distribution types was optimal in at least one of the yearly data sets, with the lognormal distribution found to fit recent years better. An improvement in air quality beginning in 2014 was discovered, with the first five months of 2016 reporting an average PM2.5 concentration that is 23.8% lower than the average of the same period in all years, perhaps the result of various new pollution-control policies. It was also found that the winter and fall months contained more days in both the good and extremely polluted categories, leading to a higher average but a comparable median in these months. Additionally, the evening hours, especially in the winter, reported much higher PM2.5 concentrations than the afternoon hours, possibly due to the prohibition of trucks in the city in the daytime and the increased use of coal for heating in the colder months when residents are home in the evening. Lastly, through analysis of special intervals that attracted media attention for either unnaturally good or bad air quality, the government’s temporary pollution control measures, such as more intensive road-space rationing and factory closures, are shown to be effective. In summary, air quality in Beijing is improving steadily and its PM2.5 concentrations follow standard probability distributions to an extent, but it still needs improvement. The analysis will be updated when new data become available.
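A minimal sketch of the fitting step described above using scipy (the data file name, the binning choice, and the fixed location parameter are assumptions; the report’s exact estimation and chi-squared procedure may differ):

```python
import numpy as np
from scipy import stats

pm25 = np.loadtxt("pm25_hourly.txt")          # hourly PM2.5, invalid hours removed
pm25 = pm25[pm25 > 0]                          # lognormal/gamma/Weibull need positive data

candidates = {
    "lognormal": stats.lognorm,
    "gamma": stats.gamma,
    "weibull": stats.weibull_min,
}

# Common bins for an approximate chi-squared goodness-of-fit comparison
obs, edges = np.histogram(pm25, bins=30)
for name, dist in candidates.items():
    params = dist.fit(pm25, floc=0)                       # estimate parameters
    cdf = dist.cdf(edges, *params)
    exp = np.diff(cdf) * pm25.size                        # expected counts per bin
    chi2 = np.sum((obs - exp) ** 2 / np.maximum(exp, 1e-9))
    print(f"{name:9s} chi2 = {chi2:.1f}")
```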

Keywords: Beijing, distribution, patterns, pm2.5, trends

Procedia PDF Downloads 247
14874 Influence of Processing Parameters on the Reliability of Sieving as a Particle Size Distribution Measurements

Authors: Eseldin Keleb

Abstract:

In the pharmaceutical industry, particle size distribution is an important parameter for the characterization of pharmaceutical powders. The powder flowability, reactivity, and compatibility, which have a decisive impact on the final product, are determined by particle size and size distribution. Therefore, the aim of this study was to evaluate the influence of processing parameters on particle size distribution measurements. Different size fractions of α-lactose monohydrate with 5% polyvinylpyrrolidone were prepared by wet granulation and used for the preparation of samples. The influence of sieve load (50, 100, 150, 200, 250, 300, and 350 g), processing time (5, 10, and 15 min), sample size ratios (high percentages of small and large particles), type of disturbance (vibration and shaking), and process reproducibility were investigated. The results showed that a sieve load of 50 g produced the best separation; a further increase in sample weight resulted in incomplete separation even after extending the processing time to 15 min. Sieving using vibration was faster and more efficient than shaking. Between-day reproducibility showed that particle size distribution measurements are reproducible. However, for samples containing 70% fines or 70% large particles, processed at the optimized parameters, incomplete separation was always observed. These results indicate that sieving reliability is highly influenced by the particle size distribution of the sample, and care must be taken with samples whose particle size distribution is skewed.

Keywords: sieving, reliability, particle size distribution, processing parameters

Procedia PDF Downloads 617
14873 Estimating Knowledge Flow Patterns of Business Method Patents with a Hidden Markov Model

Authors: Yoonjung An, Yongtae Park

Abstract:

Knowledge flows are a critical source of faster technological progress and stronger economic growth. Knowledge flows have been accelerated dramatically with the establishment of a patent system in which each patent is required by law to disclose sufficient technical information for the invention to be recreated. Patent analysis, thus, has been widely used to help investigate technological knowledge flows. However, the existing research is limited in terms of both subject and approach. In particular, in most of the previous studies, business method (BM) patents were not covered, although they are important drivers of knowledge flows just as other patents are. In addition, these studies usually focus on the static analysis of knowledge flows. Some use approaches that incorporate the time dimension, yet they still fail to trace a true dynamic process of knowledge flows. Therefore, we investigate dynamic patterns of knowledge flows driven by BM patents using a Hidden Markov Model (HMM). An HMM is a popular statistical tool for modeling a wide range of time series data, with no general theoretical limit with regard to statistical pattern classification. Accordingly, it enables characterizing knowledge patterns that may differ by patent, sector, country, and so on. We run the model on sets of backward citations and forward citations to compare the patterns of knowledge utilization and knowledge dissemination.
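A minimal sketch of fitting an HMM to yearly citation-count sequences with hmmlearn (the three-state choice, the Gaussian emission model, and the toy data are assumptions; the abstract does not specify the emission model or state count used in the paper):

```python
import numpy as np
from hmmlearn import hmm

# Toy data: yearly forward-citation counts for three BM patents (illustrative only)
sequences = [
    np.array([0, 1, 3, 6, 9, 7, 4, 2]),
    np.array([1, 2, 2, 3, 3, 2, 1, 0]),
    np.array([0, 0, 1, 4, 10, 12, 8, 3]),
]
X = np.concatenate(sequences).reshape(-1, 1).astype(float)
lengths = [len(s) for s in sequences]

# Three hidden "knowledge flow" states (e.g. latent, active, declining) -- an assumption
model = hmm.GaussianHMM(n_components=3, covariance_type="diag", n_iter=200, random_state=0)
model.fit(X, lengths)

states = model.predict(X, lengths)           # most likely state sequence
print(model.transmat_.round(2))              # estimated state-transition pattern
print(states)
```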

Keywords: business method patents, dynamic pattern, Hidden-Markov Model, knowledge flow

Procedia PDF Downloads 331
14872 Hybrid Genetic Approach for Solving Economic Dispatch Problems with Valve-Point Effect

Authors: Mohamed I. Mahrous, Mohamed G. Ashmawy

Abstract:

A hybrid genetic algorithm (HGA) is proposed in this paper to determine the economic scheduling of electric power generation over a fixed time period under various system and operational constraints. The proposed technique can outperform conventional genetic algorithms (CGAs) in the sense that the HGA makes it possible to improve the quality of the solution while reducing the computing expense. In contrast, any carefully designed GA can only balance the exploration and exploitation of the search effort, which means that an increase in the accuracy of a solution can only occur at the sacrifice of convergence speed, and vice versa; it is unlikely that both can be improved simultaneously. The proposed hybrid scheme is developed in such a way that a simple GA acts as a base-level search, which makes a quick decision to direct the search towards the optimal region, and a local search method (a pattern search technique) is then employed to do the fine tuning. The aim of the strategy is to achieve the cost reduction within a reasonable computing time. The effectiveness of the proposed hybrid technique is verified on two real public electricity supply systems with 13 and 40 generator units, respectively. The simulation results obtained with the HGA for the two real systems are very encouraging with regard to the computational expense and the cost reduction of power generation.
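For context, the sketch below shows the standard fuel-cost model with valve-point effect that such dispatch studies typically minimize, together with a penalized total-cost objective under a demand constraint (the coefficients, the two-unit data, and the penalty weight are illustrative assumptions, not the paper’s 13- and 40-unit test systems):

```python
import numpy as np

def fuel_cost(p, a, b, c, e, f, p_min):
    """Fuel cost of one unit with valve-point effect:
    F(P) = a + b*P + c*P^2 + |e * sin(f * (Pmin - P))|"""
    return a + b * p + c * p**2 + abs(e * np.sin(f * (p_min - p)))

# Two illustrative units: (a, b, c, e, f, p_min, p_max)
units = [
    (550.0, 8.10, 0.00028, 300.0, 0.035, 100.0, 680.0),
    (309.0, 8.10, 0.00056, 200.0, 0.042, 50.0, 360.0),
]
demand = 700.0  # MW

def total_cost(dispatch):
    """Objective evaluated by the GA / pattern-search hybrid: total fuel cost,
    with a simple penalty for violating the power-balance constraint."""
    cost = sum(fuel_cost(p, *u[:6]) for p, u in zip(dispatch, units))
    penalty = 1e4 * abs(sum(dispatch) - demand)
    return cost + penalty

print(total_cost([450.0, 250.0]))
```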

Keywords: genetic algorithms, economic dispatch, pattern search

Procedia PDF Downloads 447
14871 Model Predictive Control with Unscented Kalman Filter for Nonlinear Implicit Systems

Authors: Takashi Shimizu, Tomoaki Hashimoto

Abstract:

A class of implicit systems is known as a more general class of systems than the class of explicit systems. To establish a control method for such a generalized class of systems, we adopt the model predictive control method, which is a kind of optimal feedback control with a performance index that has a moving initial time and terminal time. However, the model predictive control method is inapplicable to systems in which not all state variables are exactly known; in other words, it is inapplicable to systems with limited measurable states. In fact, the state variables of systems are usually measured through outputs; hence, only limited parts of them can be used directly. It is also usual for output signals to be disturbed by process and sensor noise. Hence, it is important to establish a state estimation method for nonlinear implicit systems that takes process noise and sensor noise into consideration. To this end, we apply the model predictive control method and the unscented Kalman filter to solve the optimization and estimation problems of nonlinear implicit systems, respectively. The objective of this study is to establish model predictive control with an unscented Kalman filter for nonlinear implicit systems.
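As a reminder of the estimation machinery involved, here is a minimal numpy sketch of the unscented transform at the heart of the UKF (the scaling parameters alpha, beta, kappa and the toy dynamics are assumptions; a full UKF adds the predict/update recursion, process and measurement noise, and the implicit-system dynamics treated in the paper):

```python
import numpy as np

def sigma_points(mean, cov, alpha=1e-3, beta=2.0, kappa=0.0):
    """Merwe scaled sigma points and weights for a state of dimension n."""
    n = mean.size
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)
    pts = np.vstack([mean, mean + S.T, mean - S.T])          # 2n + 1 points
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1 - alpha**2 + beta)
    return pts, wm, wc

def unscented_transform(f, mean, cov):
    """Propagate a Gaussian (mean, cov) through a nonlinear map f."""
    pts, wm, wc = sigma_points(mean, cov)
    fx = np.array([f(p) for p in pts])
    new_mean = wm @ fx
    diff = fx - new_mean
    new_cov = (wc[:, None] * diff).T @ diff
    return new_mean, new_cov

# Toy nonlinear dynamics (illustrative only)
f = lambda x: np.array([x[0] + 0.1 * x[1], 0.9 * x[1] + 0.05 * np.sin(x[0])])
m, P = unscented_transform(f, np.array([1.0, 0.5]), np.eye(2) * 0.01)
print(m, P)
```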

Keywords: optimal control, nonlinear systems, state estimation, Kalman filter

Procedia PDF Downloads 205
14870 Forensic Necropsy-Importance in Wildlife Conservation

Authors: G. V. Sai Soumya, Kalpesh Solanki, Sumit K. Choudhary

Abstract:

Necropsy is the term used for an autopsy, or death examination, in the case of animals. It is a complete standardized procedure involving dissection, observation, interpretation, and documentation. Government bodies such as the National Tiger Conservation Authority (NTCA) have issued standard operating procedures for conducting necropsies. Necropsies are rarely performed compared to autopsies on human bodies; there are no databases that maintain a count of wildlife necropsies, but research in this area points to a very small number. Wildlife forensics came into existence long ago but is gaining prominence nowadays as wildlife crime cases increase, including the smuggling of trophies, poaching, and many more. Physical examination in animal cases is not sufficient to yield fruitful information, and thus postmortem examination plays an important role. Postmortem examination helps in the determination of time since death, cause of death, manner of death, and factors affecting the case under investigation, and thus decreases the amount of time required to solve cases. Increasing the rate of necropsies will help forensic veterinary pathologists build standardized practice and confidence, which will ultimately yield a higher success rate in solving wildlife crime cases.

Keywords: necropsy, wildlife crime, postmortem examination, forensic application

Procedia PDF Downloads 142
14869 Measuring Emotion Dynamics on Facebook: Associations between Variability in Expressed Emotion and Psychological Functioning

Authors: Elizabeth M. Seabrook, Nikki S. Rickard

Abstract:

Examining time-dependent measures of emotion, such as variability, instability, and inertia, provides critical and complementary insights into mental health status. Observing changes in the pattern of emotional expression over time could act as a tool to identify meaningful shifts between psychological well- and ill-being. From a practical standpoint, however, examining emotion dynamics day-to-day is likely to be burdensome and invasive. Utilizing social media data as a facet of lived experience can provide real-world, temporally specific access to emotional expression. Emotional language on social media may provide accurate and sensitive insights into individual and community mental health and well-being, particularly when focus is placed on the within-person dynamics of online emotion expression. The objective of the current study was to examine the dynamics of emotional expression on the social network platform Facebook for active users and their relationship with psychological well- and ill-being. It was expected that greater positive and negative emotion variability, instability, and inertia would be associated with poorer psychological well-being and greater depression symptoms. Data were collected using a smartphone app, MoodPrism, which delivered demographic questionnaires and psychological inventories assessing depression symptoms and psychological well-being, and collected the status updates of consenting participants. MoodPrism also delivered an experience sampling methodology in which participants completed items assessing positive affect, negative affect, and arousal daily for a 30-day period. The number of positive and negative words in posts was extracted and automatically collated by MoodPrism. The relative proportion of positive and negative words out of the total words written in posts was then calculated. Preliminary analyses have been conducted with the data of 9 participants. While these analyses are underpowered due to the sample size, they reveal trends that greater variability in the emotion valence expressed in posts is positively associated with greater depression symptoms (r(9) = .56, p = .12), as is greater instability in emotion valence (r(9) = .58, p = .099). Full data analysis utilizing time-series techniques to explore the Facebook data set will be presented at the conference. Identifying the features of emotion dynamics (variability, instability, inertia) that are relevant to mental health in social media emotional expression is a fundamental step in creating automated screening tools for mental health that are temporally sensitive, unobtrusive, and accurate. The current findings show how monitoring basic social network characteristics over time can provide greater depth in predicting risk and changes in depression and positive well-being.
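For concreteness, the within-person dynamics named above can be computed from a daily valence series as in the sketch below (the series is purely illustrative; in the study, valence would be derived from the proportion of positive and negative words in posts):

```python
import numpy as np

def emotion_dynamics(valence):
    """Variability (SD), instability (mean squared successive difference)
    and inertia (lag-1 autocorrelation) of a daily emotion-valence series."""
    v = np.asarray(valence, dtype=float)
    variability = v.std(ddof=1)
    instability = np.mean(np.diff(v) ** 2)                  # MSSD
    inertia = np.corrcoef(v[:-1], v[1:])[0, 1]              # lag-1 autocorrelation
    return variability, instability, inertia

# Illustrative 30-day valence series (proportion positive minus proportion negative words)
valence = np.array([0.2, 0.1, -0.3, 0.4, 0.0, 0.1, -0.2, 0.3, 0.2, -0.1,
                    0.0, 0.1, 0.2, -0.4, 0.3, 0.1, 0.0, -0.1, 0.2, 0.1,
                    -0.2, 0.0, 0.3, 0.1, -0.1, 0.2, 0.0, 0.1, -0.3, 0.2])
print(emotion_dynamics(valence))
```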

Keywords: emotion, experience sampling methods, mental health, social media

Procedia PDF Downloads 251
14868 Analysis of the Production Time in a Pharmaceutical Company

Authors: Hanen Khanchel, Karim Ben Kahla

Abstract:

Pharmaceutical companies are facing competition. Indeed, the price differences between competing products can be such that it becomes difficult to compensate for them by differences in value added. The conditions of competition are no longer homogeneous for the players involved. The price of a product is a given that puts a company and its customer face to face. However, setting the price obliges the company to consider internal factors relating to production costs and external factors such as customer attitudes, the existence of regulations, and the structure of the market in which the firm operates. In setting the selling price, the company must first take into account internal factors relating to its costs: costs of production fall into two categories, fixed costs and variable costs that depend on the quantities produced. The company cannot consider selling below what the product costs it. It therefore calculates the unit cost of production, to which it adds the unit cost of distribution, enabling it to know the total unit cost of the product. The company adds its margin and thus determines its selling price. The margin is used to remunerate the capital providers and to finance the activity of the company and its investments. Production costs are related to the quantities produced: large-scale production generally reduces the unit cost of production, which is an asset for companies with mass production markets. This shows that small and medium-sized companies with limited market segments need to make greater efforts to ensure their profit margins. As a result, faced with fluctuating market prices for raw materials and increasing staff costs, the company must seek to optimize its production time in order to reduce costs and eliminate waste, so that the customer pays only for value added. Based on this principle, we decided to create a project that deals with the problem of waste in our company, with the objectives of reducing production costs and improving performance indicators. This paper presents the implementation of a Value Stream Mapping (VSM) project in a pharmaceutical company. It is structured as follows: 1) determination of the family of products, 2) drawing of the current state, 3) drawing of the future state, 4) action plan and implementation.

Keywords: VSM, waste, production time, kaizen, cartography, improvement

Procedia PDF Downloads 152
14867 Experiences and Views of Foundation Phase Teachers When Teaching English First Additional Language in Rural Schools

Authors: Rendani Mercy Makhwathana

Abstract:

This paper explores the experiences and views of Foundation Phase teachers when teaching English First Additional Language in rural public schools. Teachers all over the world are pillars of any education system; consequently, any education transformation should start with teachers as critical role players in the education system. As a result, teachers’ experiences and views are worth considering, for they impact learners’ learning and the well-being of education in general. An exploratory qualitative approach with a phenomenological research design was used in this paper. The population for this paper comprised all Foundation Phase teachers in the district. A purposive sampling technique was used to select a sample of 15 Foundation Phase teachers from five rural-based schools. Data were collected through classroom observation and individual face-to-face interviews, and were then categorised, analysed, and interpreted. The findings revealed that teachers regularly experience one or more challenging situations, ranging from learners’ low participation in the classroom to a lack of resources. This paper recommends that teachers be provided with relevant resources and support to effectively teach English First Additional Language.

Keywords: the education system, first additional language, foundation phase, intermediate phase, language of learning and teaching, medium of instruction, teacher professional development

Procedia PDF Downloads 96
14866 Application of GPR for Prospection in Two Archaeological Sites at Aswan Area, Egypt

Authors: Abbas Mohamed Abbas, Raafat El-Shafie Fat-Helbary, Karrar Omar El Fergawy, Ahmed Hamed Sayed

Abstract:

Exploration in archaeological areas requires non-invasive methods, and hence the Ground Penetrating Radar (GPR) technique is a proper candidate for this task. GPR investigation is widely applied in searching for hidden ancient targets, so in this paper the GPR technique has been used for archaeological investigation. The aim of this study was to obtain information about the subsurface and associated structures beneath two selected sites on the western bank of the River Nile at Aswan city. These sites have archaeological structures of different ages, from the 6th and 12th Dynasties to the Greco-Roman period. The first site is called Nag’ El Gulab; its study area was 30 x 16 m with a separation of 2 m between profiles. The second site is Nag’ El Qoba, where the survey was carried out not on a grid but along lines of different lengths. Both sites were surveyed with a GPR model SIR-3000 with a 200 MHz antenna. Besides processing each profile individually, time-slice maps were constructed for the Nag’ El Gulab site to view the amplitude changes in a series of horizontal time slices within the ground. The obtained results show anomalies that may be interpreted as the presence of associated tomb structures. The probable tomb structures are similar in depth to the opened tombs in the studied areas.

Keywords: ground penetrating radar, archeology, Nag’ El Gulab, Nag’ El Qoba

Procedia PDF Downloads 396
14865 The Impact of Childhood Cancer on the Quality of Life of Survivor: A Qualitative Analysis of Functionality and Participation

Authors: Catarina Grande, Barbara Mota

Abstract:

The main goal of the present study was to understand the impact of childhood cancer on the quality of life of survivors and the extent to which the oncologic disease affects the functionality and participation of survivors at the present time compared to the time of diagnosis. Six survivors of pediatric cancer participated in the study. Participants were interviewed using a semi-structured interview, adapted from two instruments in the literature - QALY and QLACS - and piloted in a previous study. This study is based on a qualitative approach using content analysis, allowing the identification of categories and subcategories. The units of meaning were subsequently matched to codes of the International Classification of Functioning, Disability and Health for Children and Youth (ICF-CY), which contributed to a more detailed analysis of the impact on the quality of life of survivors in relation to the domains under study. The results showed significant changes between the moment of diagnosis and the present moment, particularly in the survivor's microsystem. Regarding functionality and participation, the results show that body functions are the most affected domain, with the emotional component currently having the greatest impact on the quality of life of survivors. The present study identified a set of codes for the development of an ICF-CY core set for pediatric cancer survivors and indicated the need for future studies to validate and deepen these issues.

Keywords: cancer, participation, quality of life, survivor

Procedia PDF Downloads 241
14864 Design of a Lumbar Interspinous Process Fixation Device for Minimizing Soft Tissue Removal and Operation Time

Authors: Minhyuk Heo, Jihwan Yun, Seonghun Park

Abstract:

It has been reported that intervertebral fusion surgery, which removes most of the ligaments and muscles of the spine, increases degenerative disease in adjacent spinal segments. Therefore, it is necessary to develop a lumbar interspinous process fixation device that minimizes the risks and side effects of the surgery. The objective of the current study is to design an interspinous process fixation device with simple structures in order to minimize soft tissue removal and operation time during intervertebral fusion surgery. For the design concept of the lumbar fixation device, the principle of the ratchet was first applied to the joining parts of the device in order to shorten the operation time. A coil spring structure was selected for the connecting parts between the spinous processes so that a normal range of motion in the spinal segments is preserved and degenerative spinal diseases do not develop in the adjacent spinal segments. The stiffness of the spring was determined so as not to restrict the motion of the lumbar spine. The designed value of the spring stiffness allows the upper part of the spring to move ~10°, which is higher than the range of flexion and extension for a normal lumbar spine (6°-8°), when a moment of 10 Nm is applied on the upper face of L1. A finite element (FE) model composed of the L1 to L5 lumbar spine was generated to verify the mechanical integrity and the dynamic stability of the designed lumbar fixation device and to further optimize it. The FE model produced the same intervertebral disc pressure and dynamic behavior as the normal intact model reported in the literature; the consistency of this comparison validates the accuracy of the current FE model. Currently, we are generating an abnormal model with defects in one or more components of the normal FE model. The mechanical integrity and dynamic stability of the designed lumbar fixation device will then be analyzed after it is installed in the abnormal model, and the device will be further optimized.
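The stated design point (about 10° of rotation under a 10 Nm moment) implies a torsional stiffness on the order of 1 Nm per degree, as the short check below shows (this back-of-the-envelope calculation is ours, not the authors’):

```python
import math

moment = 10.0                      # Nm applied on the upper face of L1
rotation_deg = 10.0                # allowed rotation of the upper part of the spring
k_per_deg = moment / rotation_deg                 # ~1.0 Nm/deg
k_per_rad = moment / math.radians(rotation_deg)   # ~57.3 Nm/rad
print(f"torsional stiffness = {k_per_deg:.1f} Nm/deg = {k_per_rad:.1f} Nm/rad")
```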

Keywords: lumbar interspinous process fixation device, finite element method, lumbar spine, kinematics

Procedia PDF Downloads 229
14863 The Effect of Mindfulness Meditation on Pain, Sleep Quality, and Self-Esteem in Patients Receiving Hemodialysis in Jordan

Authors: Hossam N. Alhawatmeh, Areen I. Albustanji

Abstract:

Hemodialysis negatively affects physical and psychological health. Pain, poor sleep quality, and low self-esteem are highly prevalent among patients with end-stage renal disease (ESRD) who receive hemodialysis, significantly increasing the mortality and morbidity of those patients. Mind-body interventions (MBI), such as mindfulness meditation, have recently been gaining popularity and have improved pain, sleep quality, and self-esteem in different populations. However, to our best knowledge, their effects on these health problems in patients receiving hemodialysis have not been studied in Jordan. Thus, the purpose of the study was to examine the effect of mindfulness meditation on pain, sleep quality, and self-esteem in patients with ESRD receiving hemodialysis in Jordan. An experimental repeated-measures, randomized, parallel control design was conducted on (n = 60) end-stage renal disease patients undergoing hemodialysis between March and June 2023 in the dialysis center at a public hospital in Jordan. Participants were randomly assigned to the experimental (n = 30) and control (n = 30) groups using a simple random assignment method. The experimental group practiced mindfulness meditation for 30 minutes three times per week for five weeks during their hemodialysis treatments. The control group's patients continued to receive hemodialysis treatment as usual for five weeks during hemodialysis sessions. The study variables for both groups were measured at baseline (Time 0), two weeks into the intervention (Time 1), and at the end of the intervention (Time 2). The numerical rating scale (NRS), the Rosenberg Self-Esteem Scale (RSES-M), and the Pittsburgh Sleep Quality Index (PSQI) were used to measure pain, self-esteem, and sleep quality, respectively. SPSS version 25 was used to analyze the study data. The sample was described by frequency, mean, and standard deviation as appropriate. Repeated measures analysis of variance (ANOVA) tests were run to test the study hypotheses. The results of the repeated measures ANOVA (within-subject) revealed that mindfulness meditation significantly decreased pain by the end of the intervention in the experimental group. Additionally, mindfulness meditation improved sleep quality and self-esteem in the experimental group, and these improvements occurred significantly after two weeks of the intervention and at the end of the intervention. The results of the repeated measures ANOVA (within- and between-subject) revealed that the experimental group, compared to the control group, experienced lower levels of pain and higher levels of sleep quality and self-esteem over time. In conclusion, the results provided substantial evidence supporting the positive impacts of mindfulness meditation on pain, sleep quality, and self-esteem in patients with ESRD undergoing hemodialysis. These results highlight the potential of mindfulness meditation as an adjunctive therapy in the comprehensive care of this patient population. Incorporating mindfulness meditation into the treatment plan for patients receiving hemodialysis may contribute to improved well-being and overall quality of life.
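A minimal sketch of the within-subject analysis described above using statsmodels (the long-format column names and the toy scores are assumptions; the study also ran mixed within-between ANOVAs, which would require a different routine):

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Long-format data: one row per participant per time point (toy values for illustration)
data = pd.DataFrame({
    "subject": [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4],
    "time":    ["T0", "T1", "T2"] * 4,
    "pain":    [7, 5, 4, 8, 6, 5, 6, 6, 4, 7, 4, 3],
})

# One-way repeated-measures ANOVA: does pain change across the three time points?
result = AnovaRM(data, depvar="pain", subject="subject", within=["time"]).fit()
print(result)
```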

Keywords: hemodialysis, pain, sleep quality, self-esteem, mindfulness

Procedia PDF Downloads 89
14862 Presence and Absence: The Use of Photographs in Paris, Texas

Authors: Yi-Ting Wang, Wen-Shu Lai

Abstract:

The subject of this paper is the photography in the 1984 film Paris, Texas, directed by Wim Wenders. Wenders is well known as a film director as well as a photographer. Photographs appear as an element in many of his films; some serve as details within the films, while others play important roles that are relevant to the story. This paper considers photographs in film as a specific type of text, which is the output of both still photography and the film itself. In the film Paris, Texas, three sets of important photographs appear whose symbolic meanings are as dialectical as their text types. The relationship between the existence of these photos and the storyline is both dependent and isolated. The film’s images fly by and progress into other images, while the photos in the film serve a unique narrative function by stopping the continuously flowing images, thus providing the viewer with a space for imagination and contemplation. They are more than just artistic forms; they also contain multiple meanings. The photographs in Paris, Texas play the role of both presence and absence according to their shifting meanings. There are references to their presence: photographs exist between film time and narrative time, so in terms of the interaction between the characters in the film, photographs are a common symbol of the beginning and end of the characters’ journeys. In terms of the audience, the film’s photographs are a link in the viewing frame structure, through which the creative motivation of the film director can be explored. Photographs also point to the absence of certain objects: the scenes in the photos represent an imaginary map of emotion. The town of Paris, Texas is therefore isolated from the physical presence of the photograph, and is far more abstract than the reality in the film. This paper embraces the ambiguous nature of photography and demonstrates its presence and absence in film with regard to the meaning of text. It is worth reflecting, however, that the temporary nature of the interpretation of the film’s photographs is far greater than for any other type of photographic text: the characteristics of the text cause the interpretation to change along with variations in the interpretation process, which makes their meaning a dynamic process. The photographs’ presence or absence in the context of Paris, Texas also demonstrates the presence and absence of the creator, time, the truth, and the imagination. The film becomes more complete as a result of the revelation of the photographs, while the intertextual connection between these two forms simultaneously provides multiple possibilities for the interpretation of the photographs in the film.

Keywords: film, Paris, Texas, photography, Wim Wenders

Procedia PDF Downloads 320
14861 Embedded Visual Perception for Autonomous Agricultural Machines Using Lightweight Convolutional Neural Networks

Authors: René A. Sørensen, Søren Skovsen, Peter Christiansen, Henrik Karstoft

Abstract:

Autonomous agricultural machines act in stochastic surroundings and therefore must be able to perceive the surroundings in real time. This perception can be achieved using image sensors combined with advanced machine learning, in particular Deep Learning. Deep convolutional neural networks excel in labeling and perceiving color images, and since the cost of high-quality RGB cameras is low, the hardware cost of good perception depends heavily on memory and computation power. This paper investigates the possibility of designing lightweight convolutional neural networks for semantic segmentation (pixel-wise classification) with reduced hardware requirements, to allow for embedded usage in autonomous agricultural machines. Using compression techniques, a lightweight convolutional neural network is designed to perform real-time semantic segmentation on an embedded platform. The network is trained on two large datasets, ImageNet and Pascal Context, to recognize up to 400 individual classes. The 400 classes are remapped into agricultural superclasses (e.g. human, animal, sky, road, field, shelterbelt and obstacle) and the ability to provide accurate real-time perception of agricultural surroundings is studied. The network is applied to the case of autonomous grass mowing using the NVIDIA Tegra X1 embedded platform. Feeding case-specific images to the network results in a fully segmented map of the superclasses in the image. As the network is still being designed and optimized, only a qualitative analysis of the method is complete at the abstract submission deadline. Following this deadline, the finalized design will be quantitatively evaluated on 20 annotated grass mowing images. Lightweight convolutional neural networks for semantic segmentation can be implemented on an embedded platform and show competitive performance with regard to accuracy and speed. It is feasible to provide cost-efficient perceptive capabilities related to semantic segmentation for autonomous agricultural machines.
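As an illustration of the kind of lightweight building block such a network can use (the paper’s actual architecture and compression techniques are not specified in the abstract; this PyTorch sketch of a depthwise-separable encoder with a per-pixel classifier is an assumption):

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise + pointwise convolution: far fewer parameters than a standard conv."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, stride, padding=1, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

class TinySegNet(nn.Module):
    """Minimal encoder plus per-pixel classifier for 8 agricultural superclasses."""
    def __init__(self, n_classes=8):
        super().__init__()
        self.encoder = nn.Sequential(
            DepthwiseSeparableConv(3, 32, stride=2),
            DepthwiseSeparableConv(32, 64, stride=2),
            DepthwiseSeparableConv(64, 128, stride=2),
        )
        self.classifier = nn.Conv2d(128, n_classes, 1)
        self.upsample = nn.Upsample(scale_factor=8, mode="bilinear", align_corners=False)

    def forward(self, x):
        return self.upsample(self.classifier(self.encoder(x)))

logits = TinySegNet()(torch.randn(1, 3, 240, 320))   # -> (1, 8, 240, 320) class scores
print(logits.shape)
```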

Keywords: autonomous agricultural machines, deep learning, safety, visual perception

Procedia PDF Downloads 399
14860 Development and Characterization of Wheat Bread with Lupin Flour

Authors: Paula M. R. Correia, Marta Gonzaga, Luis M. Batista, Luísa Beirão-Costa, Raquel F. P. Guiné

Abstract:

The purpose of the present work was to develop an innovative food product with good textural and sensorial characteristics. The product, a new type of bread, was prepared with wheat (90%) and lupin (10%) flours, without the addition of any preservatives. Several experiments were also carried out to find the most appropriate proportion of lupin flour. The optimized product was characterized considering its rheological, physical-chemical, and sensorial properties. The water absorption of wheat flour with 10% lupin was higher than that of the normal wheat flours, and Wheat Ceres flour presented the lowest value, with a lower dough development time and a high stability time. The breads presented low moisture but considerable water activity. The density of the bread decreased with the introduction of lupin flour. The breads were quite white, and during storage the colour parameters decreased. The lupin flour clearly increased the number of alveoli, but the total area increased significantly only for the Wheat Cerealis bread. The addition of lupin flour increased the hardness and chewiness of the breads, but the elasticity did not vary significantly. Lupin bread was sensorially similar to wheat bread produced with WCerealis flour, the main differences being the crust rugosity, colour, and alveoli characteristics.

Keywords: Lupin flour, physical-chemical properties, sensorial analysis, wheat flour

Procedia PDF Downloads 518
14859 The Effect of Supercritical Fluid on the Extraction Efficiency of Heavy Metal from Soil

Authors: Haifa El-Sadi, Maria Elektorowicz, Reed Rushing, Ammar Badawieh, Asif Chaudry

Abstract:

Clay soils have particular properties that affect the assessment and remediation of contaminated sites. In clay soils, electro-kinetic transport of heavy metals has been carried out. The transport of these metals is predicated on maintaining a low pH throughout the cell, which, in turn, keeps the metals in the pore water phase where they are accessible to electro-kinetic transport. Supercritical fluid extraction and acid digestion were used for the analysis of heavy metal concentrations after the completion of the electro-kinetic experiments. Supercritical fluid (carbon dioxide) extraction is a new technique used to extract heavy metals (lead, nickel, calcium, and potassium) from clayey soil. A comparison between supercritical extraction and acid digestion of the different metals was carried out. Supercritical fluid extraction, using ethylenediaminetetraacetic acid (EDTA) as a modifier, proved to be a more efficient and safer technique than acid digestion for extracting metals from clayey soil. The mixing time of the soil with EDTA before extracting heavy metals was also investigated. The optimum and most practical shaking time for the extraction of lead, nickel, calcium, and potassium was two hours.

Keywords: clay soil, heavy metals, supercritical fluid extraction, acid digestion

Procedia PDF Downloads 471
14858 Management of Acute Biliary Pathology at Gozo General Hospital

Authors: Kristian Bugeja, Upeshala A. Jayawardena, Clarissa Fenech, Mark Zammit Vincenti

Abstract:

Introduction: Biliary colic, acute cholecystitis, and gallstone pancreatitis are some of the most common surgical presentations at Gozo General Hospital (GGH). National Institute for Health and Care Excellence (NICE) guidelines advise that suitable patients with acute biliary problems should be offered a laparoscopic cholecystectomy within one week of diagnosis. There has traditionally been difficulty in achieving this, mainly due to the reluctance of some surgeons to operate in the acute setting, limited timely access to MRCP and ERCP, and organizational issues. Methodology: A retrospective study was performed involving all biliary pathology-related admissions to GGH during the two-year period 2019-2020. Patients’ files and the electronic case summary (ECS) were used for data collection, which included demographic data, primary diagnosis, co-morbidities, management, waiting time to surgery, length of stay, readmissions, and reasons for readmission. NICE Clinical Guidance 188 – Gallstone disease was used as the standard. Results: 51 patients were included in the study. The mean age was 58 years, and 35 (68.6%) were female. The main diagnoses on admission were biliary colic in 31 (60.8%) and acute cholecystitis in 10 (19.6%). Others included gallstone pancreatitis in 3 (5.89%), chronic cholecystitis in 2 (3.92%), gall bladder malignancy in 4 (7.84%), and ascending cholangitis in 1 (1.97%). Management included laparoscopic cholecystectomy in 34 (66.7%), conservative management in 8 (15.7%), and ERCP in 6 (11.7%). The mean waiting time for laparoscopic cholecystectomy in patients with acute cholecystitis was 74 days, with a range of 3 to 146 days from the date of diagnosis. Only one patient diagnosed with acute cholecystitis and managed with laparoscopic cholecystectomy underwent surgery within the 7-day time frame. Hospital re-admissions were reported in 5 patients (9.8%), due to vomiting (1), ascending cholangitis (1), and gallstone pancreatitis (3). Discussion: The guidelines were not met for patients presenting to Gozo General Hospital with acute biliary pathology. This resulted in 5 patients being re-admitted to hospital while waiting for definitive surgery. The local issues causing the delay to surgery need to be identified and steps taken to facilitate the provision of urgent cholecystectomy for suitable patients.

Keywords: biliary colic, acute cholecystitis, laparoscopic cholecystectomy, conservative management

Procedia PDF Downloads 164
14857 Empirical Investigation of Gender Differences in Information Processing Style, Tinkering, and Self-Efficacy for Robot Tele-Operation

Authors: Dilruba Showkat, Cindy Grimm

Abstract:

As robots become more ubiquitous, it is important for us to understand how different groups of people respond to possible ways of interacting with the robot. In this study, we focused on gender differences while users were tele-operating a humanoid robot that was physically co-located with them. We investigated three factors during the human-robot interaction: (1) information processing strategy, (2) self-efficacy, and (3) tinkering or exploratory behavior. The experimental results show that the information on how to use the robot was processed comprehensively by the female participants, whereas males processed it selectively (p < 0.001). Males were more confident when using the robot than females (p = 0.0002). Males tinkered more with the robot than females (p = 0.0021). We found that tinkering was positively correlated (p = 0.0068) with task success and negatively correlated (p = 0.0032) with task completion time. Tinkering might thus have resulted in greater task success and lower task completion time for males. Our results show the importance of accounting for gender differences when developing interfaces for interacting with robots; they can be used to inform design decisions for robots and open new research directions.

Keywords: humanoid robots, tele-operation, gender differences, human-robot interaction

Procedia PDF Downloads 170
14856 Massive Intrapartum Hemorrhage Following by Inner Myometrial Laceration during a Vaginal Delivery: A Rare Case Report

Authors: Bahareh Khakifirooz, Arian Shojaei, Amirhossein Hajialigol, Bahare Abdolahi

Abstract:

Laceration of the inner layer of the myometrium can cause massive bleeding during and after childbirth, which can lead to the death of the mother if it is not diagnosed in time. We studied a rare case of massive intrapartum bleeding following myometrial laceration that was diagnosed correctly, and the patient survived with timely treatment. The patient was a 26-year-old woman who was under observation for a term pregnancy with a complaint of rupture of membranes (ROM) and vaginal bleeding. During the spontaneous course of labor, and without receiving oxytocin, she had an estimated total blood loss of 750 mL; despite a normal fetal heart rate, and with a maternal indication for cesarean section, she was transferred to the operating room and underwent a cesarean section. During the cesarean section, the amniotic fluid was clear; after removal of the placenta, severe bleeding was clearly flowing from the posterior wall of the uterus, caused by a laceration of the inner layer of the myometrium in the posterior wall of the lower uterine segment. The myometrial laceration was repaired with absorbable continuous locked sutures, and hemostasis was established. The patient then received uterotonic drugs and, after monitoring, was discharged from the hospital in good condition.

Keywords: intrapartum hemorrhage, inner myometrial laceration, labor, increased intrauterine pressure

Procedia PDF Downloads 30
14855 Solution of Singularly Perturbed Differential Difference Equations Using Liouville Green Transformation

Authors: Y. N. Reddy

Abstract:

The class of differential-difference equations which have characteristics of both classes, i.e., delay/advance and singularly perturbed behaviour, is known as singularly perturbed differential-difference equations. The expressions ‘positive shift’ and ‘negative shift’ are also used for ‘advance’ and ‘delay’, respectively. In general, an ordinary differential equation in which the highest order derivative is multiplied by a small positive parameter and which contains at least one delay/advance term is known as a singularly perturbed differential-difference equation. Singularly perturbed differential-difference equations arise in the modelling of various practical phenomena in bioscience, engineering, and control theory, specifically in variational problems, in describing the human pupil-light reflex, in a variety of models for physiological processes or diseases, and in first exit time problems in the modelling of the determination of the expected time for the generation of action potentials in nerve cells by random synaptic inputs in dendrites. In this paper, we envisage the use of the Liouville-Green transformation to find the solution of singularly perturbed differential-difference equations. First, using a Taylor series, the given singularly perturbed differential-difference equation is approximated by an asymptotically equivalent singular perturbation problem. Then the Liouville-Green transformation is applied to obtain the solution. Several model examples are solved, and the results are compared with other methods. It is observed that the present method gives better approximate solutions.
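The first step described above, replacing the shifted term by its Taylor expansion, can be written as follows for a generic second-order problem with a small delay δ (the specific equation treated in the paper is not given in the abstract, so this form is illustrative):

```latex
% Model problem: a singularly perturbed differential-difference equation with delay \delta
\varepsilon y''(x) + a(x)\,y'(x-\delta) + b(x)\,y(x) = f(x), \qquad 0 < \varepsilon \ll 1 .

% Taylor expansion of the delayed term, valid for small \delta:
y'(x-\delta) \approx y'(x) - \delta\,y''(x) .

% Substituting yields an asymptotically equivalent singular perturbation problem,
% to which the Liouville-Green (WKB) transformation is then applied:
\left(\varepsilon - \delta\,a(x)\right) y''(x) + a(x)\,y'(x) + b(x)\,y(x) = f(x) .
```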

Keywords: difference equations, differential equations, singular perturbations, boundary layer

Procedia PDF Downloads 201
14854 Neighbourhood Walkability and Quality of Life: The Mediating Role of Place Adherence and Social Interaction

Authors: Michał Jaśkiewicz

Abstract:

The relation between walkability, place adherence, social relations and quality of life was explored in a Polish context. A considerable number of studies have suggested that environmental factors may influence the quality of life through indirect pathways. The list of possible psychological mediators includes social relations and identity-related variables. Based on the results of Study 1, local identity is a significant mediator in the relationship between neighbourhood walkability and quality of life. It was assumed that pedestrian-oriented neighbourhoods enable residents to interact and that these spontaneous interactions can help to strengthen a sense of local identity, thus influencing the quality of life. We, therefore, conducted further studies, testing the relationship experimentally in studies 2a and 2b. Participants were exposed to (2a) photos of walkable/non-walkable neighbourhoods or (2b) descriptions of high/low-walkable neighbourhoods. They were then asked to assess the walkability of the neighbourhoods and to evaluate their potential social relations and quality of life in these places. In both studies, social relations with neighbours turned out to be a significant mediator between walkability and quality of life. In Study 3, we implemented the measure of overlapping individual and communal identity (fusion with the neighbourhood) and willingness to collective action as mediators. Living in a walkable neighbourhood was associated with identity fusion with that neighbourhood. Participants who felt more fused expressed greater willingness to engage in collective action with other neighbours. Finally, this willingness was positively related to the quality of life in the city. In Study 4, we used commuting time (an aspect of walkability related to the time that people spend travelling to work) as the independent variable. The results showed that a shorter average daily commuting time was linked to more frequent social interactions in the neighbourhood. Individuals who assessed their social interactions as more frequent expressed a stronger city identification, which was in turn related to quality of life. To sum up, our research replicated and extended previous findings on the association between walkability and well-being measures. We introduced potential mediators of this relationship: social interactions in the neighbourhood and identity-related variables.

Keywords: walkability, quality of life, social relations, analysis of mediation

Procedia PDF Downloads 328
14853 Graphic Calculator Effectiveness in Biology Teaching and Learning

Authors: Nik Azmah Nik Yusuff, Faridah Hassan Basri, Rosnidar Mansor

Abstract:

The purpose of the study is to determine the effectiveness of using graphic calculators (GC) with the Calculator-Based Laboratory 2 (CBL2) in the teaching and learning of Form Four biology for the topics Nutrition, Respiration and Dynamic Ecosystem. Sixty Form Four science-stream students participated in this study. The participants were divided equally into a treatment group and a control group. The treatment group used GC with CBL2 during experiments, while the control group used ordinary conventional laboratory apparatus without GC with CBL2. The instruments in this study were a pre-test, a post-test and a questionnaire. A t-test was used to compare the students' biology achievement, while descriptive statistics were used to analyse the questionnaire responses. The findings of this study indicated that the use of GC with CBL2 in biology had a significant positive effect. The highest mean was 4.43, for the item stating that the use of GC with CBL2 saved time in collecting experimental results. The second highest mean was 4.10, for the item stating that GC with CBL2 saved time in drawing and labelling graphs. The questionnaire outcomes also showed that GC with CBL2 was easy to use and saved time. Thus, teachers should use GC with CBL2 in support of the efforts of the Malaysian Ministry of Education to encourage technology-enhanced lessons.
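
A hedged sketch of the kind of comparison described above: an independent-samples t-test on post-test biology scores of the treatment (GC with CBL2) and control groups, in Python; the scores and sample sizes are placeholders, not the study data:

    # Hypothetical post-test scores; the real study involved sixty students.
    import numpy as np
    from scipy import stats

    treatment = np.array([72, 80, 75, 88, 69, 77, 83, 91, 74, 79])
    control = np.array([65, 70, 68, 72, 60, 66, 71, 74, 63, 69])

    # Welch's t-test, which does not assume equal group variances.
    t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

    # Descriptive statistics of the kind used for the Likert-scale questionnaire items.
    print("treatment mean:", treatment.mean(), "control mean:", control.mean())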

Keywords: biology experiments, Calculator-Based Laboratory 2 (CBL2), graphic calculators, Malaysia Secondary School, teaching/learning

Procedia PDF Downloads 404
14852 Plotting of an Ideal Logic versus Resource Outflow Graph through Response Analysis on a Strategic Management Case Study Based Questionnaire

Authors: Vinay A. Sharma, Shiva Prasad H. C.

Abstract:

The initial stages of any project are often observed to be in a mixed set of conditions. Setting up the project is a tough task, but taking the initial decisions is not especially complex, as some of the critical factors are yet to be introduced into the scenario. These simple initial decisions potentially shape the timeline and the subsequent events that might later be plotted on it. Proceeding towards a solution for the problem is the primary objective in the initial stages. Optimization of the solutions can come later, and hence the resources deployed towards attaining the solution are higher than they would be in the optimized versions. A ‘logic’ that counters the problem is essentially the core of the desired solution. Thus, if the problem is solved, the deployment of resources has led to the required logic being attained. As the project proceeds, the individuals working on it face fresh challenges as a team and become better accustomed to their surroundings. The developed, optimized solutions are then considered for implementation, as the individuals are now experienced, are better aware of the causes and consequences of possible failure, and thus integrate adequate tolerances wherever required. Furthermore, as the team grows in strength, accumulates knowledge and begins to transfer it efficiently, the individuals in charge of the project, along with the managers, focus more on the optimized solutions rather than the traditional ones to minimize the required resources. Hence, as time progresses, the authorities prioritize attainment of the required logic at a lower expenditure of dedicated resources. For empirical analysis of the stated theory, leaders and key figures in organizations are surveyed for their ideas on the appropriate logic required for tackling a problem. Key pointers spotted in successfully implemented solutions are noted from the analysis of the responses, and a metric for measuring logic is developed. A graph is plotted with the quantifiable logic on the Y-axis and the resources dedicated to the solutions of various problems on the X-axis. The dedicated resources are plotted over time, and hence the X-axis is also a measure of time. In the initial stages of the project, the graph is rather linear, as the required logic is attained but the consumed resources are also high. With time, the authorities begin focusing on optimized solutions, since the logic attained through them is higher but the resources deployed are comparatively lower. Hence, the difference between consecutively plotted ‘resources’ decreases and, as a result, the slope of the graph gradually increases. Overall, the graph takes a parabolic shape (beginning at the origin), as with each resource investment, ideally, the difference keeps decreasing and the logic attained through the solution keeps increasing. Even if the resource investment is higher, the managers and authorities ideally make sure that the investment is being made on a proportionally higher logic for a larger problem; that is, ideally, the slope of the graph increases with the plotting of each point.
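
The ideal curve can be sketched under an assumed functional form (shrinking resource increments with a steady gain in logic at each step); the numbers below are illustrative placeholders, not the survey-derived metric:

    # Illustrative plot of logic attained versus cumulative dedicated resources:
    # resource outflow per step shrinks over time while logic keeps accumulating,
    # so the slope increases and the curve, starting at the origin, looks parabolic.
    import numpy as np
    import matplotlib.pyplot as plt

    steps = np.arange(1, 21)                       # successive decisions over time
    increments = 10.0 / steps                      # resource outflow per step, shrinking
    resources = np.cumsum(increments)              # cumulative dedicated resources (X axis)
    logic = 2.0 * steps                            # steadily accumulating logic (Y axis)

    plt.plot(np.insert(resources, 0, 0.0), np.insert(logic, 0, 0.0), marker="o")
    plt.xlabel("Cumulative dedicated resources (also a measure of time)")
    plt.ylabel("Quantified logic attained")
    plt.title("Ideal logic vs. resource outflow (illustrative)")
    plt.show()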

Keywords: decision-making, leadership, logic, strategic management

Procedia PDF Downloads 111
14851 Monte Carlo and Biophysics Analysis in a Criminal Trial

Authors: Luca Indovina, Carmela Coppola, Carlo Altucci, Riccardo Barberi, Rocco Romano

Abstract:

In this paper, a real court case, held in Italy at the Court of Nola, is considered, in which a correct physical description, obtained with both a Monte Carlo and a biophysical analysis, would have been sufficient to arrive at conclusions confirmed by the documentary evidence. This is an example of how forensic physics can be useful in confirming documentary evidence in order to reach hardly questionable conclusions. This was a libel trial in which the defendant, Mr. DS (Defendant for Slander), had falsely accused one of his neighbors, Mr. OP (Offended Person), of having caused him damages. The damages would have been caused by a piece of external plaster that would have detached from the neighbor's property and hit Mr. DS while he was in his garden, much more than a meter away from the facade of the building from which the plaster piece would have detached. In the trial, Mr. DS claimed to have suffered a scratch on his forehead, but he never produced the plaster that had hit him, nor was he able to say from where the plaster would have arrived. Furthermore, Mr. DS presented a medical certificate with a diagnosis of contusion of the cerebral cortex. On the contrary, the images from Mr. OP's security cameras do not show any movement in the garden of Mr. DS in a long interval of time (about 2 hours) around the time of the alleged accident, nor do they show any people entering or leaving the house of Mr. DS in the same interval. The biophysical analysis shows that both the diagnosis in the medical certificate and the wound declared by the defendant, already in conflict with each other, are not compatible with the fall of external plaster pieces too small to be found. The wind was at level 1 of the Beaufort scale, that is, unable even to raise dust (which requires level 4 of the Beaufort scale). Therefore, the motion of the plaster pieces can be described as projectile motion, whereas collisions with the building cornice can be treated using Newton's law of restitution. Numerous Monte Carlo simulations show that the pieces of plaster could not even have reached the garden of Mr. DS, let alone a distance of over 1.30 meters. The results agree with the documentary evidence (the images from Mr. OP's security cameras) that Mr. DS could not have been hit by plaster pieces coming from Mr. OP's property.
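
A greatly simplified Monte Carlo sketch, in Python, in the spirit of the analysis described above; all heights, speeds and restitution coefficients are assumed placeholder values rather than the figures used in the expert report, and only the vertical velocity component is changed by the bounce on the cornice:

    # A plaster fragment detaches from the facade with a tiny horizontal speed
    # (wind at Beaufort level 1), falls to the cornice, bounces once according to
    # Newton's law of restitution, and then falls to the ground; we record the
    # horizontal distance from the facade at which it lands.
    import numpy as np

    rng = np.random.default_rng(1)
    g = 9.81                                   # m/s^2
    n = 100_000
    h_detach = rng.uniform(5.0, 8.0, n)        # fall height above the cornice, m (assumed)
    h_cornice = rng.uniform(2.5, 3.5, n)       # cornice height above the ground, m (assumed)
    vx0 = rng.uniform(0.0, 0.3, n)             # horizontal speed from light wind, m/s (assumed)
    e = rng.uniform(0.1, 0.4, n)               # coefficient of restitution (assumed)

    # Phase 1: free fall from the detachment point down to the cornice.
    t1 = np.sqrt(2.0 * h_detach / g)
    vy_impact = g * t1

    # Phase 2: bounce (vertical velocity reversed and scaled by e), then a
    # projectile flight from cornice height down to the ground.
    vy_up = e * vy_impact
    t2 = (vy_up + np.sqrt(vy_up**2 + 2.0 * g * h_cornice)) / g

    distance = vx0 * (t1 + t2)                 # horizontal distance from the facade, m
    print(f"mean landing distance: {distance.mean():.2f} m")
    print(f"fraction landing beyond 1.30 m: {(distance > 1.30).mean():.4f}")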

Keywords: biophysics analysis, Monte Carlo simulations, Newton’s law of restitution, projectile motion

Procedia PDF Downloads 134