Search results for: loss distribution approach
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 20271

13851 Reconstructed Phase Space Features for Estimating Post Traumatic Stress Disorder

Authors: Andre Wittenborn, Jarek Krajewski

Abstract:

Trauma-related sadness can alter the voice in several ways. The generation of non-linear aerodynamic phenomena within the vocal tract is crucial when analyzing trauma-influenced speech production. These phenomena include non-laminar flow and the formation of jets, rather than well-behaved laminar flow. State-space reconstruction methods based on chaotic dynamics and fractal theory, in particular, have been suggested to describe these aerodynamic, turbulence-related phenomena of the speech production system. To extract the non-linear properties of the speech signal, we used the time-delay embedding method to reconstruct a phase space from the scalar time series (reconstructed phase space, RPS). This approach results in the extraction of 7238 features per .wav file (N = 47, 32 m, 15 f). The speech material was elicited by having participants recount autobiographical, sadness-inducing experiences (sampling rate 16 kHz, 8-bit resolution). After combining these features in a support vector machine based machine learning approach (leave-one-sample-out validation), we achieved a correlation of r = .41 with the well-established self-report ground truth measure (RATS) of post-traumatic stress disorder (PTSD).
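
As an illustration of the time-delay embedding step, here is a minimal sketch (the delay tau and embedding dimension m below are assumptions for demonstration; the abstract does not report the values used):

```python
import numpy as np

def delay_embed(x, m=3, tau=10):
    """Reconstruct a phase space from a scalar time series x via
    time-delay embedding (Takens' theorem). Returns an array of shape
    (len(x) - (m - 1) * tau, m): each row is one RPS point
    [x(t), x(t + tau), ..., x(t + (m - 1) * tau)]."""
    n = len(x) - (m - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(m)])

# Example: embed one second of a 16 kHz speech signal
signal = np.random.randn(16000)        # stand-in for a .wav frame
rps = delay_embed(signal, m=3, tau=10)
print(rps.shape)                       # (15980, 3)
```

Non-linear features such as correlation dimension or Lyapunov-style statistics can then be computed on the rows of the reconstructed phase space.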

Keywords: non-linear dynamics features, post traumatic stress disorder, reconstructed phase space, support vector machine

Procedia PDF Downloads 93
13850 Management and Evaluating Technologies of Tissue Engineering Various Fields of Bone

Authors: Arash Sepehri Bonab

Abstract:

Techniques to switch cells between proliferation and differentiation, which tend to be mutually exclusive, are used in order to supply a large cell mass that can perform the specific differentiated functions required for the tissue to develop. Approaches to tissue engineering center on the need to provide signals to cell populations to promote cell proliferation and differentiation. Current tissue regenerative procedures depend primarily on tissue repair by transplantation of synthetic/natural implants. However, the limitations of existing procedures have increased the demand for tissue engineering approaches. Tissue engineering technology and stem cell research based on tissue building have made great advances in overcoming the problems of tissue and organ damage, functional loss, and surgical complications. Bone tissue has the capability to regenerate itself; however, defects of a critical size prevent the bone from regenerating and require additional support. Bone tissue engineering has therefore been developed to create functional alternatives for regenerating bone. This paper primarily describes current advances in tissue engineering in various fields of bone and discusses the long-term trend of tissue engineering technology in the treatment of complex diseases.

Keywords: tissue engineering, bone, technologies, treatment

Procedia PDF Downloads 85
13849 A Framework for Designing Complex Product-Service Systems with a Multi-Domain Matrix

Authors: Yoonjung An, Yongtae Park

Abstract:

Offering a Product-Service System (PSS) is a well-accepted strategy that companies may adopt to provide a set of systemic solutions to customers. PSSs were initially provided in a simple form but now take diversified and complex forms involving multiple services, products, and technologies. With the growing interest in the PSS, frameworks for PSS development have been introduced by many researchers. However, most of the existing frameworks fail to examine the various relations existing in a complex PSS. Since designing a complex PSS involves the full integration of multiple products and services, it is essential to identify not only product-service relations but also product-product/service-service relations. It is equally important to specify how they are related, for a better understanding of the system. Moreover, as customers tend to view their purchase from a more holistic perspective, a PSS should be developed based on the whole system's requirements, rather than focusing only on the product requirements or service requirements. Thus, we propose a framework to develop a complex PSS that is fully coordinated with the requirements of both worlds. Specifically, our approach adopts a multi-domain matrix (MDM). An MDM identifies not only inter-domain relations but also intra-domain relations, so it helps to design a PSS that includes highly desired and closely related core functions/features. The various dependency types and rating schemes proposed in our approach also support the integration process.
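
As a minimal illustration of the data structure (the domains, items, and dependency ratings below are invented for the example, not taken from the paper), a two-domain MDM can be held as one block matrix whose diagonal blocks capture intra-domain relations and whose off-diagonal blocks capture inter-domain relations:

```python
import numpy as np

# Hypothetical PSS with 2 products (P1, P2) and 2 services (S1, S2).
# Entry [i][j] rates how strongly item j depends on item i (0-3 scale).
labels = ["P1", "P2", "S1", "S2"]
mdm = np.array([
    #  P1  P2  S1  S2
    [  0,  2,  3,  0],   # P1
    [  2,  0,  1,  2],   # P2
    [  3,  1,  0,  1],   # S1
    [  0,  2,  1,  0],   # S2
])

n_products = 2
intra_product = mdm[:n_products, :n_products]   # product-product relations
intra_service = mdm[n_products:, n_products:]   # service-service relations
inter = mdm[:n_products, n_products:]           # product-service relations

# Items with the highest total dependency are candidate core functions
core_ranking = sorted(zip(labels, mdm.sum(axis=1)), key=lambda t: -t[1])
print(core_ranking)
```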

Keywords: inter-domain relations, intra-domain relations, multi-domain matrix, product-service system design

Procedia PDF Downloads 629
13848 Comparative Analysis of High Lift Airfoils for Motorsports Applications

Authors: M. Fozan Ur Rab, Mahrukh, M. Alam, N. Sheikh

Abstract:

The purpose of this study is to analyze various high-lift, low Reynolds number airfoils using a two-dimensional Computational Fluid Dynamics (CFD) code in an isolated flow field, and to select the optimum airfoil for motorsports applications. The airfoil is selected after comparing the stall behavior, transition location, pressure recovery, pressure distribution, and boundary layer characteristics of the candidate airfoils. The prime consideration in selecting an airfoil is the highest Cl that can be sustained over the range of Reynolds numbers encountered on the race track. An increase in Cl is always accompanied by an increase in Cd, but this trade-off must be accepted since the main goal is to increase aerodynamic grip. Increasing the downforce in Formula One (F1)/Formula Student (FS) is always desirable in order to reduce lap time. This paper establishes the criteria for the selection of a high-lift, low Reynolds number airfoil while considering the various parameters which affect airfoil performance.

Keywords: aerodynamics, airfoil, downforce, formula student, lap time

Procedia PDF Downloads 266
13847 Evaluation of the Effect of Auriculotherapy on Pain Control and Sleep Quality in Chronic Patients

Authors: Fagner Luiz P. Salles, Janaina C. Oliveira, Ivair P. Cesar

Abstract:

Statement of the Problem: Auriculotherapy (AT) is a traditional Chinese medicine (TCM) technique that uses seeds instead of needles and is physiologically based on the mechanical stimulation of the cranial nerves. In the context of the WHO's broadened concept of health, AT is an integrative approach to global health care. This study aimed to evaluate the effect of auriculotherapy on pain and sleep quality in patients with chronic pain. Methodology and Theoretical Orientation: This study was performed between February and March 2017 at the Faculdade Estácio de Sá de Vitória, Brazil. Pain was evaluated with a VAS at four levels: maximum, minimum, average, and at the time of evaluation; sleep quality was evaluated with the Pittsburgh Sleep Quality Index. Socio-demographic data included gender, age, use of medication, and BMI. All data are presented as mean (standard deviation); Mann-Whitney and Student's t-tests with p-values < 0.05 were regarded as significant. Findings: Thirty-two individuals participated in this study, with age M = 43.18 (SD = 17.86) and time with pain in years M = 3.67 (SD = 3.68); 81.7% were female, 75% of the individuals used medication, and BMI was M = 26.67 (SD = 6.20). Pain improved significantly at its maximum and average levels, whereas sleep quality showed no statistically significant change. Conclusion and Significance: This study showed that AT is effective for reducing levels of pain. However, AT was not effective in improving sleep quality.

Keywords: auriculotherapy, chronic pain, sleep quality, integrative approach

Procedia PDF Downloads 195
13846 Segmentation of the Liver and Spleen From Abdominal CT Images Using Watershed Approach

Authors: Belgherbi Aicha, Hadjidj Ismahen, Bessaid Abdelhafid

Abstract:

Segmentation is an important step in the processing and interpretation of medical images. In this paper, we focus on the segmentation of the liver and spleen from abdominal computed tomography (CT) images. The importance of our study comes from the fact that segmenting regions of interest (ROIs) from CT images is usually a difficult task: the gray levels of these organs are similar to those of neighboring organs, and the ROIs are connected to the ribs, heart, kidneys, etc. Our proposed method is based on anatomical information and the mathematical morphology tools used in the image processing field. First, we remove the surrounding and connected organs and tissues by applying morphological filters; this step makes the extraction of the regions of interest easier. The second step consists of improving the quality of the image gradient: we propose to reduce its deficiencies by applying spatial filters followed by morphological filters. Thereafter we proceed to the segmentation of the liver and spleen. To validate the proposed segmentation technique, we have tested it on several images, comparing our results with the manual segmentation performed by an expert. The experimental results are described in the last part of this work. The system has been evaluated by computing the sensitivity and specificity between the semi-automatically segmented (liver and spleen) contours and the contours manually traced by radiological experts.
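
A minimal marker-based watershed sketch in the spirit of the pipeline described above (the intensity window and structuring-element sizes are illustrative assumptions, and a real pipeline would operate on Hounsfield-calibrated DICOM slices):

```python
import numpy as np
from skimage.filters import sobel, gaussian
from skimage.morphology import opening, disk
from skimage.segmentation import watershed

def segment_organ(ct_slice, lo, hi):
    """Rough pipeline: morphological cleanup, gradient improvement,
    then marker-based watershed. (lo, hi) is an assumed intensity
    window for the target organ (liver or spleen)."""
    # 1) Morphological filtering to suppress small connected structures
    cleaned = opening(ct_slice, disk(3))

    # 2) Smooth, then compute the gradient (edge map) for the watershed
    gradient = sobel(gaussian(cleaned, sigma=2))

    # 3) Markers: background vs. seeds inside the organ's intensity window
    markers = np.zeros_like(ct_slice, dtype=np.int32)
    markers[cleaned < lo] = 1                       # background
    markers[(cleaned >= lo) & (cleaned <= hi)] = 2  # organ seed

    labels = watershed(gradient, markers)
    return labels == 2

# Usage on a synthetic slice (a real pipeline would load DICOM data)
ct = np.random.rand(256, 256)
mask = segment_organ(ct, lo=0.4, hi=0.8)
```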

Keywords: CT images, liver and spleen segmentation, anisotropic diffusion filter, morphological filters, watershed algorithm

Procedia PDF Downloads 478
13845 Reconstruction of Signal in Plastic Scintillator of PET Using Tikhonov Regularization

Authors: L. Raczynski, P. Moskal, P. Kowalski, W. Wislicki, T. Bednarski, P. Bialas, E. Czerwinski, A. Gajos, L. Kaplon, A. Kochanowski, G. Korcyl, J. Kowal, T. Kozik, W. Krzemien, E. Kubicz, Sz. Niedzwiecki, M. Palka, Z. Rudy, O. Rundel, P. Salabura, N.G. Sharma, M. Silarski, A. Slomski, J. Smyrski, A. Strzelecki, A. Wieczorek, M. Zielinski, N. Zon

Abstract:

The J-PET scanner, which allows for single-bed imaging of the whole human body, is currently under development at the Jagiellonian University. The J-PET detector improves the TOF resolution due to the use of fast plastic scintillators. Since registration of the waveform of signals with duration times of a few nanoseconds is not feasible, novel front-end electronics allowing for sampling in the voltage domain at four thresholds were developed. To take full advantage of these fast signals, a novel scheme for recovering the signal waveform, based on ideas from Tikhonov regularization (TR) and compressive sensing, is presented. The prior distribution of the sparse representation is evaluated based on a linear transformation of the training set of signal waveforms, using Principal Component Analysis (PCA) decomposition. Besides the advantage of including additional information from the training signals, a further benefit of the TR approach is that the signal recovery problem has an optimal solution which can be determined explicitly. Moreover, from Bayes' theory the properties of the regularized solution, especially its covariance matrix, may be easily derived. This step is crucial for introducing and proving the formula for calculating the signal recovery error. It has been proven that the average recovery error is approximately inversely proportional to the number of samples at voltage levels. The method is tested using signals registered by means of a single detection module of the J-PET detector, built from a 30 cm long BC-420 plastic scintillator strip. It is demonstrated that the experimental and theoretical functions describing the recovery errors in the J-PET scenario are largely consistent. The specificity and limitations of the signal recovery method in this application are discussed. It is shown that the PCA basis offers a high level of information compression and an accurate recovery with just eight samples, from four voltage levels, for each signal waveform. Moreover, it is demonstrated that using the recovered signal waveforms, instead of the samples at four voltage levels alone, improves the spatial resolution of the hit position reconstruction. The experiment shows that the spatial resolution evaluated based on information from four voltage levels, without recovery of the signal waveform, is equal to 1.05 cm. After applying the information from the four voltage levels to the recovery of the signal waveform, the spatial resolution improves to 0.94 cm. Moreover, this result is only slightly worse than the one evaluated using the original raw signal, for which the spatial resolution is 0.93 cm. This is important information since limiting the number of threshold levels in the electronic devices to four leads to a significant reduction in the overall cost of the scanner. The developed recovery scheme is general and may be incorporated in any other investigation where prior knowledge about the signals of interest can be utilized.
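
A minimal sketch of the closed-form TR recovery step (the measurement operator, subspace dimension, and regularization weight below are illustrative assumptions; the paper builds its prior from a PCA of training waveforms):

```python
import numpy as np

def tikhonov_recover(A, y, L, lam):
    """Closed-form Tikhonov-regularized recovery: the minimizer of
    ||A x - y||^2 + lam ||L x||^2 is x = (A^T A + lam L^T L)^{-1} A^T y."""
    return np.linalg.solve(A.T @ A + lam * L.T @ L, A.T @ y)

rng = np.random.default_rng(0)
n, m, k = 64, 8, 6            # waveform samples; 8 measurements (4 thresholds
                              # crossed twice); PCA subspace dimension
A = rng.standard_normal((m, n))            # stand-in measurement operator

# PCA-style prior: penalize components outside the training subspace.
# In the paper, U would hold leading principal components of training signals.
U = np.linalg.qr(rng.standard_normal((n, k)))[0]
L = np.eye(n) - U @ U.T                    # projector onto the complement

x_true = U @ rng.standard_normal(k)        # a signal inside the subspace
y = A @ x_true
x_hat = tikhonov_recover(A, y, L, lam=1.0)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))  # ~0
```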

Keywords: plastic scintillators, positron emission tomography, statistical analysis, Tikhonov regularization

Procedia PDF Downloads 429
13844 Developing a Systemic Monoclonal Antibody Therapy for the Treatment of Large Burn Injuries

Authors: Alireza Hassanshahi, Xanthe Strudwick, Zlatko Kopecki, Allison J Cowin

Abstract:

Studies have shown that Flightless (Flii) is elevated in human wounds, including burns, and that reducing the level of Flii is a promising approach for improving wound repair and reducing scar formation. The most effective approach has been to neutralise Flii activity using localized, intradermal application of function-blocking monoclonal antibodies. However, large surface area burns are difficult to treat by intradermal injection of therapeutics, so the aim of this study was to investigate whether a systemic injection of a monoclonal antibody against Flii could improve healing in mice following burn injury. Flii neutralizing antibodies (FnAbs) were labelled with Alexa Fluor 680 for biodistribution studies, and the healing effects of systemically administered FnAbs were assessed in mice with burn injuries. A partial thickness, 7% (70 mm²) total body surface area scald burn injury was created on the dorsal surface of mice (n=10/group), and 100 µL of Alexa Fluor 680-labeled FnAbs was injected into the intraperitoneal (IP) cavity at the time of injury. The burns were imaged on days 0, 1, 2, 3, 4, and 7 using the IVIS Lumina S5 Imaging System, and healing was assessed macroscopically, histologically, and using immunohistochemistry. Fluorescent radiance efficiency measurements showed that the IP-injected Alexa Fluor 680-FnAbs localized at the site of burn injury from day 1, remaining there for the whole 7-day study. The burns treated with FnAbs showed a reduction in macroscopic wound area and an increased rate of epithelialization compared to controls. Immunohistochemistry for NIMP-R14 showed a reduction in the inflammatory infiltrate, while CD31/VEGF staining showed improved angiogenesis post systemic FnAb treatment. These results suggest that systemically administered FnAbs are active within the burn site and can improve healing outcomes. The clinical application of systemically injected Flii monoclonal antibodies could therefore be a potential approach for promoting the healing of large surface area burns immediately after injury.

Keywords: biodistribution, burn, flightless, systemic, FnAbs

Procedia PDF Downloads 158
13843 Dynamic Modeling of Energy Systems Adapted to Low Energy Buildings in Lebanon

Authors: Nadine Yehya, Chantal Maatouk

Abstract:

Low energy buildings have been developed to help achieve global climate commitments by reducing energy consumption. They comprise energy efficient buildings, zero energy buildings, positive energy buildings, and passive house buildings. The reduced energy demands of low energy buildings call for advanced building energy modeling that focuses on studying active building systems such as heating, cooling, and ventilation, on improving system performance, and on developing control systems. Modeling and building simulation have expanded to cover different modeling approaches, i.e., detailed physical models, dynamic empirical models, and hybrid approaches, which are adopted by various simulation tools. This paper uses DesignBuilder with the EnergyPlus simulation engine in order to: first, study the impact of efficiency measures on building energy behavior by comparing a low energy residential model to a conventional one in Beirut, Lebanon; second, choose the appropriate energy systems for the studied case, which is characterized by an important cooling demand; and third, study dynamic modeling of the Variable Refrigerant Flow (VRF) system in EnergyPlus, chosen for its advantages over other systems and its availability in the Lebanese market. Finally, simulating different energy system models with different modeling approaches is necessary to compare the approaches and to investigate the interaction between energy systems and the building envelope, which affects the total energy consumption of low energy buildings.

Keywords: physical model, variable refrigerant flow heat pump, dynamic modeling, EnergyPlus, modeling approach

Procedia PDF Downloads 205
13842 Design Optimisation of Compound Parabolic Concentrator (CPC) for Improved Performance

Authors: R. Abd-Rahman, M. M. Isa, H. H. Goh

Abstract:

A compound parabolic concentrator (CPC) is a well-known non-imaging concentrator that concentrates solar radiation onto a receiver (PV cell). One disadvantage of the CPC is that it is tall and narrow compared with its entry aperture diameter. Therefore, for economic reasons, the CPC is truncated by removing material from the top of the full-height design. This also decreases the concentration ratio, but the reduction is negligible. In this paper, the flux distributions of untruncated and truncated 2-D hollow compound parabolic trough concentrator (hCPTC) designs are presented. The untruncated design has an initial height H = 193.4 mm with a concentration ratio C_(2-D) = 4. This paper presents the optical simulation of the compound parabolic trough concentrator using the ray-tracing software TracePro. Results showed that truncation reduced the CPC height by 45% from the initial height, while the geometrical concentration ratio decreased by only 10%. Thus, reflector and dielectric material costs can be reduced, especially at the manufacturing site.
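
For orientation, a short sketch of the textbook full-height 2-D CPC relations (the 10 mm exit-aperture half-width is an assumption, chosen because it approximately reproduces the quoted H = 193.4 mm at C = 4):

```python
import numpy as np

def cpc_2d(exit_half_width, concentration):
    """Standard full-height 2-D CPC relations from non-imaging optics:
    C = 1 / sin(theta_a) for acceptance half-angle theta_a, and
    H = a' (1 + sin(theta_a)) cos(theta_a) / sin(theta_a)^2."""
    theta_a = np.arcsin(1.0 / concentration)   # acceptance half-angle
    a_exit = exit_half_width
    a_entry = a_exit * concentration           # entry-aperture half-width
    height = a_exit * (1 + np.sin(theta_a)) * np.cos(theta_a) / np.sin(theta_a)**2
    return np.degrees(theta_a), a_entry, height

theta, a_in, H = cpc_2d(exit_half_width=10.0, concentration=4.0)  # mm
print(f"theta_a = {theta:.1f} deg, entry half-width = {a_in:.1f} mm, H = {H:.1f} mm")
# -> H ~ 193.6 mm, close to the quoted 193.4 mm full height for C = 4.
# Truncating the top shortens H sharply while the entry aperture (and
# hence the geometrical concentration ratio) shrinks only slightly.
```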

Keywords: compound parabolic trough concentrator, optical modelling, ray-tracing analysis, improved performance

Procedia PDF Downloads 448
13841 Recognizing Customer Preferences Using Review Documents: A Hybrid Text and Data Mining Approach

Authors: Oshin Anand, Atanu Rakshit

Abstract:

The vast growth of e-commerce ventures makes this area a prominent research stream. Besides several quantified parameters, the textual content of reviews is a storehouse of information that can educate companies and help them earn profit. This study is an attempt in this direction. The article categorizes reviews based on a computed metric that quantifies their influencing capacity, rendering two categories of high and low influential reviews. Further, each of these documents is studied to derive several product feature categories. Each of these categories, along with the computed metric, is converted to linguistic identifiers, which are used in an association mining model. The article makes a novel attempt to combine feature extraction with the quantified metric to categorize review text and finally provide frequent patterns that depict customer preferences. Frequent mentions in highly influential reviews depict customer likes or preferred product features, whereas prominent patterns in low-influence reviews highlight what is not important to customers. This is achieved using a hybrid approach of text mining for feature and term extraction, sentiment analysis, a multi-criteria decision-making technique, and an association mining model.
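
A minimal sketch of the association-mining step (using mlxtend's apriori implementation; the transactions below are invented linguistic identifiers, not the paper's data):

```python
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

# Each row is one review encoded as linguistic identifiers:
# influence level plus mentioned product-feature categories.
transactions = [
    ["high_influence", "battery", "camera"],
    ["high_influence", "battery", "display"],
    ["low_influence", "packaging"],
    ["high_influence", "battery"],
    ["low_influence", "packaging", "manual"],
]
items = sorted({i for t in transactions for i in t})
onehot = pd.DataFrame([{i: (i in t) for i in items} for t in transactions])

frequent = apriori(onehot, min_support=0.4, use_colnames=True)
rules = association_rules(frequent, metric="confidence", min_threshold=0.7)
# Patterns frequent alongside "high_influence" indicate preferred
# features; patterns tied to "low_influence" mark unimportant ones.
print(rules[["antecedents", "consequents", "support", "confidence"]])
```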

Keywords: association mining, customer preference, frequent pattern, online reviews, text mining

Procedia PDF Downloads 376
13840 Failure Mechanism in Fixed-Ended Reinforced Concrete Deep Beams under Cyclic Load

Authors: A. Aarabzadeh, R. Hizaji

Abstract:

Reinforced concrete (RC) deep beams are a special type of beam due to their geometry, boundary conditions, and behavior compared to ordinary shallow beams. For example, the assumption of a linear strain-stress distribution over the cross section is not valid. Few studies have been dedicated to fixed-ended RC deep beams, and most experimental studies have been carried out on simply supported deep beams. Given the recent tendency toward the application of deep beams, the possibility of using fixed-ended deep beams in structures has increased widely. Therefore, it seems necessary to investigate this structural element in more detail. In addition to an experimental investigation of a concrete deep beam under cyclic load, different failure mechanisms of fixed-ended deep beams under this type of loading have been evaluated in the present study. The results show that the failure mechanisms of deep beams under cyclic loads are quite different from those under monotonic loads.

Keywords: deep beam, cyclic load, reinforced concrete, fixed-ended

Procedia PDF Downloads 345
13839 Investigation on Ultrahigh Heat Flux of Nanoporous Membrane Evaporation Using Dimensionless Lattice Boltzmann Method

Authors: W. H. Zheng, J. Li, F. J. Hong

Abstract:

Thin liquid film evaporation in ultrathin nanoporous membranes, which reduce viscous resistance while still maintaining high capillary pressure and efficient liquid delivery, is a promising thermal management approach for cooling high-power electronic devices. Given the challenges and technical limitations of experimental studies (accurate interface temperature sensing, a complex manufacturing process, and the short lifetime of membranes), a dimensionless lattice Boltzmann method capable of recovering the thermophysical properties of the working fluid is derived. The evaporation of R134a into its pure vapour ambient in nanoporous membranes with a pore diameter of 80 nm, a thickness of 472 nm, and three porosities of 0.25, 0.33, and 0.5 is numerically simulated. The numerical results indicate that the highest heat transfer coefficient is about 1740 kW/m²·K and the highest heat flux is about 1.49 kW/cm², with a wall superheat of only 8.59 K in the case of porosity equal to 0.5. The dissipated heat flux scales with porosity because of the increasing effective evaporative area. Additionally, the self-regulation of the shape and curvature of the meniscus under different operating conditions is observed. This work shows a promising approach to forecasting membrane performance for different geometries and working fluids.
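
For context, a minimal single-phase D2Q9 BGK lattice Boltzmann step (a generic textbook sketch; the paper's dimensionless, phase-change-capable scheme is considerably more involved):

```python
import numpy as np

# D2Q9 lattice velocities and weights
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

def equilibrium(rho, u):
    """Second-order equilibrium distribution f_eq(rho, u)."""
    cu = np.einsum("qd,xyd->qxy", c, u)            # c_q . u per cell
    usq = np.einsum("xyd,xyd->xy", u, u)
    return w[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def lbm_step(f, tau):
    """One BGK collide-and-stream update with periodic boundaries."""
    rho = f.sum(axis=0)
    u = np.einsum("qd,qxy->xyd", c, f) / rho[..., None]
    f += -(f - equilibrium(rho, u)) / tau          # BGK collision
    for q in range(9):                             # streaming
        f[q] = np.roll(f[q], shift=tuple(c[q]), axis=(0, 1))
    return f

nx, ny, tau = 64, 64, 0.8                          # tau sets viscosity
f = equilibrium(np.ones((nx, ny)), np.zeros((nx, ny, 2)))
for _ in range(100):
    f = lbm_step(f, tau)
```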

Keywords: high heat flux, ultrathin nanoporous membrane, thin film evaporation, lattice Boltzmann method

Procedia PDF Downloads 149
13838 Application All Digits Number Benford Law in Financial Statement

Authors: Teguh Sugiarto

Abstract:

Background: The research aims to explore whether there is fraud in a financial statement, using Benford's law, which states that in the distribution of all digits, lower digits appear more frequently than higher ones. Research methods: This research uses the all-digits analysis from Benford's law. After obtaining the results of the all-digits analysis, the author distinguishes between digits whose rates of occurrence deviate from the Benford expectation by more than 5% and those that deviate by less. For digits whose rates of occurrence differ by more than 5%, follow-up analysis can be carried out to detect the onset of fraud in the financial statements. The findings: From the research that has been done, it can be concluded that the rates at which digits appear in the financial statements conform on average to the characteristics of Benford's law, and no errors or fraud in the financial statements of PT Medco Energy Tbk were found. Conclusions: The study concludes that Benford's law can serve as an indicator tool for detecting possible irregularities in financial statements, applied here to a case study of PT Medco Energy Tbk for the fiscal years 2000-2010.
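
A minimal sketch of a Benford-style digit test (first-digit variant for brevity; the paper's analysis extends to all digit positions, and the 5% flagging threshold follows the abstract):

```python
import numpy as np

def benford_first_digit_freq():
    """Expected first-digit frequencies under Benford's law:
    P(d) = log10(1 + 1/d) for d = 1..9."""
    d = np.arange(1, 10)
    return np.log10(1 + 1 / d)

def first_digit_test(values, threshold=0.05):
    """Flag digits whose observed frequency deviates from the
    Benford expectation by more than `threshold` (5% in the paper)."""
    digits = np.array([int(str(abs(v)).lstrip("0.")[0]) for v in values])
    observed = np.array([(digits == d).mean() for d in range(1, 10)])
    expected = benford_first_digit_freq()
    flags = np.abs(observed - expected) > threshold
    return observed, expected, flags

# Synthetic ledger amounts (log-uniform data is close to Benford)
rng = np.random.default_rng(7)
amounts = 10 ** rng.uniform(2, 6, size=5000)
obs, exp, flags = first_digit_test(amounts)
print(np.round(obs, 3), np.round(exp, 3), flags.sum())
```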

Keywords: Benford law, first digits, all digits number Benford law, financial statement

Procedia PDF Downloads 229
13837 External Validation of Established Pre-Operative Scoring Systems in Predicting Response to Microvascular Decompression for Trigeminal Neuralgia

Authors: Kantha Siddhanth Gujjari, Shaani Singhal, Robert Andrew Danks, Adrian Praeger

Abstract:

Background: Trigeminal neuralgia (TN) is a heterogeneous pain syndrome characterised by short paroxysms of lancinating facial pain in the distribution of the trigeminal nerve, often triggered by usually innocuous stimuli. TN has a low prevalence of less than 0.1%, of which 80% to 90% is caused by compression of the trigeminal nerve by an adjacent artery or vein. The root entry zone of the trigeminal nerve is most sensitive to neurovascular conflict (NVC), causing dysmyelination. Whilst microvascular decompression (MVD) is an effective treatment for TN with NVC, not all patients achieve long-term pain relief. Pre-operative scoring systems by Panczykowski and Hardaway have been proposed but have not been externally validated. These pre-operative scoring systems are composite scores calculated according to the subtype of TN, the presence and degree of neurovascular conflict, and the response to medical treatments. There is discordance between neurosurgeons and radiologists in the assessment of NVC identified on pre-operative magnetic resonance imaging (MRI). To the best of our knowledge, the prognostic impact for MVD of this difference of interpretation has not previously been investigated in the form of a composite scoring system such as those suggested by Panczykowski and Hardaway. Aims: This study aims to identify prognostic factors and externally validate the proposed scoring systems by Panczykowski and Hardaway for TN. A secondary aim is to investigate the prognostic difference between a neurosurgeon's interpretation of NVC on MRI and a radiologist's. Methods: This retrospective cohort study included 95 patients who underwent de novo MVD in a single neurosurgical unit in Melbourne. Data were recorded from patients' hospital records and neurosurgeons' correspondence from perioperative clinic reviews. Patient demographics, type of TN, distribution of TN, response to carbamazepine, and the neurosurgeon's and radiologist's interpretations of NVC on MRI were clearly described prospectively and preoperatively in the correspondence. Scoring systems published by Panczykowski et al. and Hardaway et al. were used to determine composite scores, which were compared with the recurrence of TN recorded during follow-up over 1 year. Categorical data were analysed using Pearson chi-square testing; independent numerical and nominal data were analysed with logistic regression. Results: Logistic regression showed that a Panczykowski composite score of greater than 3 points was associated with a higher likelihood of a pain-free outcome 1 year post-MVD, with an OR of 1.81 (95%CI 1.41-2.61, p=0.032). The composite score using the neurosurgeon's impression of NVC had an OR of 2.96 (95%CI 2.28-3.31, p=0.048). A Hardaway composite score of greater than 2 points was associated with a higher likelihood of a pain-free outcome 1 year post-MVD, with an OR of 3.41 (95%CI 2.58-4.37, p=0.028). The composite score using the neurosurgeon's impression of NVC had an OR of 3.96 (95%CI 3.01-4.65, p=0.042). Conclusion: The composite scores developed by Panczykowski and Hardaway were validated for the prediction of response to MVD in TN. A composite score based on the neurosurgeon's interpretation of NVC on MRI, when compared with the radiologist's, had a greater correlation with pain-free outcomes 1 year post-MVD.

Keywords: de novo microvascular decompression, neurovascular conflict, prognosis, trigeminal neuralgia

Procedia PDF Downloads 64
13836 Delamination of Scale in a Fe Carbon Steel Surface by Effect of Interface Roughness and Oxide Scale Thickness

Authors: J. M. Lee, W. R. Noh, C. Y. Kim, M. G. Lee

Abstract:

Delamination of the oxide scale is often observed at the interface between Fe carbon steel and the scale. Among the several mechanisms of this delamination behavior, the normal tensile stress at the substrate-scale interface has been described as one of the main factors. The stress distribution at the interface is also known to be affected by the thermal expansion mismatch between substrate and oxide scale, by creep behavior during cooling, and by the geometry of the interface. In this study, stress states near the interface in a Fe carbon steel with oxide scale have been investigated using FE simulations. The thermal and mechanical properties of the oxide scales are taken from the literature, while those of the Fe carbon steel are measured using a tensile testing machine. In particular, the normal and shear stress components developed at the interface during bending are investigated. Preliminary numerical sensitivity analyses are provided to explain the effects of the interface geometry and oxide thickness on the delamination behavior.

Keywords: oxide scale, delamination, FE analysis, roughness, thickness, stress state

Procedia PDF Downloads 332
13835 End-to-End Performance of MPPM in Multihop MIMO-FSO System Over Dependent GG Atmospheric Turbulence Channels

Authors: Hechmi Saidi, Noureddine Hamdi

Abstract:

The performance of a decode-and-forward (DF) multihop free space optical (FSO) scheme deploying a multiple input multiple output (MIMO) configuration under the gamma-gamma (GG) statistical distribution, adopting M-ary pulse position modulation (MPPM) coding, is investigated. We derive both exact and approximate values of the symbol error rate (SER), and a closed-form formula for the probability density function (PDF) of our designed system is expressed. Thanks to the use of the DF multihop MIMO FSO configuration and MPPM signaling, atmospheric turbulence is combatted; hence the quality of the transmitted signal is improved.
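
For reference, the standard single-link gamma-gamma irradiance PDF reads (a textbook formula; the paper's closed-form PDF is for the combined multihop MIMO channel):

```latex
f_I(I) = \frac{2\,(\alpha\beta)^{\frac{\alpha+\beta}{2}}}{\Gamma(\alpha)\,\Gamma(\beta)}\,
         I^{\frac{\alpha+\beta}{2}-1}\,
         K_{\alpha-\beta}\!\bigl(2\sqrt{\alpha\beta I}\bigr), \qquad I > 0,
```

where α and β are the effective numbers of large- and small-scale turbulence eddies and K_ν is the modified Bessel function of the second kind.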

Keywords: free space optical, gamma gamma channel, radio frequency, decode and forward, multiple-input multiple-output, M-ary pulse position modulation, symbol error rate

Procedia PDF Downloads 236
13834 Narrative Psychology and Its Role in Illuminating the Experience of Suffering

Authors: Maureen Gibney

Abstract:

The examination of narrative in psychology has a long tradition, starting with psychoanalytic theory and embracing, over time, cognitive, social, and personality psychology, among others. Narrative use has also been richly detailed in medicine, nursing, and social service. One aspect of narrative that has ready utility in higher education and in clinical work is the exploration of suffering and its meaning. Because it is such a densely examined topic, suffering provides a window into identity, sense of purpose, and views of humanity and of the divine. Storytelling analysis permits an exploration of a host of specific manifestations of suffering such as pain and illness, moral injury, and the impact of prolonged suffering on love and relationships. This presentation will review the origins and current understandings of narrative theory in general, and will draw from psychology, medicine, ethics, nursing, and social service in exploring the topic of suffering in particular. It is suggested that the use of narrative themes such as meaning making, agency and communion, generativity, and loss and redemption allows for a fine-grained analysis of common and more atypical sources of suffering, their resolution, and the acceptance of their continuation when resolution is not possible. Such analysis, used in professional work and in higher education, can enrich one's empathy and one's sense of both the fragility and strength of everyday life.

Keywords: meaning making, narrative theory, suffering, teaching

Procedia PDF Downloads 253
13833 Wear Measuring and Wear Modelling Based On Archard, ASTM, and Neural Network Models

Authors: A. Shebani, C. Pislaru

Abstract:

Wear of materials is an everyday experience and has been observed and studied for a long time. The prediction of wear is a fundamental problem in industry, mainly related to the planning of maintenance interventions and to economy. The pin-on-disc test is the most common test used to study wear behaviour. In this paper, a pin-on-disc rig (AEROTECH UNIDEX 11) is used to investigate the effects of normal load and material hardness on wear under dry sliding conditions. In the pin-on-disc rig, two specimens were used: a steel pin with a tip, positioned perpendicular to a disc made of aluminium. Pin wear and disc wear were measured using the following instruments: a Talysurf profilometer, a digital microscope, and an Alicona instrument; the Talysurf profilometer was used to measure the pin/disc wear scar depth, and the Alicona was used to measure the volume loss of the pin and disc. After that, the Archard model, the American Society for Testing and Materials (ASTM) model, and a neural network model were used for pin/disc wear modelling, with the simulation results implemented in Matlab. This paper focuses on how the Alicona can be considered a powerful tool for wear measurement and how the neural network is an effective algorithm for wear estimation.
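
As a pointer to the first of the three models, a minimal sketch of the Archard wear law (the load, distance, hardness, and wear coefficient are invented illustration values, not measurements from this study):

```python
def archard_wear_volume(k, load_n, sliding_dist_m, hardness_pa):
    """Archard wear model: V = K * W * L / H, where V is the wear
    volume (m^3), K the dimensionless wear coefficient, W the normal
    load (N), L the sliding distance (m), and H the hardness (Pa)."""
    return k * load_n * sliding_dist_m / hardness_pa

# Hypothetical pin-on-disc run: 20 N load, 500 m sliding distance,
# aluminium disc hardness ~0.3 GPa, wear coefficient K = 1e-4.
v = archard_wear_volume(k=1e-4, load_n=20.0, sliding_dist_m=500.0,
                        hardness_pa=0.3e9)
print(f"predicted wear volume: {v * 1e9:.3f} mm^3")   # ~3.333 mm^3
```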

Keywords: wear modelling, Archard model, ASTM model, neural network model, pin-on-disc test, Talysurf, digital microscope, Alicona

Procedia PDF Downloads 438
13832 Hydroxyapatite from Biowaste for the Reinforcement of Polymer

Authors: John O. Akindoyo, M. D. H. Beg, Suriati Binti Ghazali, Nitthiyah Jeyaratnam

Abstract:

Regeneration of bone is fast becoming indispensable due to the many health challenges arising from traumatic bone loss, bone tumours, and other bone infections. Over time, several approaches have been undertaken to mitigate this challenge, including but not limited to xenografts, allografts, and autografts, as well as artificial substitutes like bioceramics, synthetic cements, and metals. However, most of these techniques come with peculiar limitations and problems, such as morbidity, limited availability, disease transmission, collateral site damage, or outright rejection by the body. Hydroxyapatite (HA) is very compatible and suitable for this application. However, most of the common methods for HA synthesis are expensive and environmentally unfriendly. Extraction of HA from bio-wastes is perceived not only to be cost effective but also environment-friendly. In this research, HA was produced from a bio-waste, namely bovine bones, through a combination of hydrothermal chemical processes and ordinary calcination techniques. Structural and property characterization of the HA was carried out using different techniques (TGA, FTIR, DSC, XRD, and BET). The synthesized HA was found to possess properties similar to stoichiometric HA, with highly desirable thermal, degradation, structural, and porous properties. This material is unique for its potentially minimal cost, environmental friendliness, and property controllability. It is also perceived to be suitable for tissue and bone engineering applications.

Keywords: biomaterial, biopolymer, bone, hydroxyapatite

Procedia PDF Downloads 307
13831 Going beyond Stakeholder Participation

Authors: Florian Engel

Abstract:

Only with a radical change to an intrinsically motivated project team, achieved by giving employees the freedom for autonomy, mastery, and purpose, is it possible to develop excellent products. With these changes, combined with a rapid application development approach, the group of users serves as an important indicator for testing market needs, rather than merely as stakeholders for requirements.

Keywords: intrinsic motivation, requirements elicitation, self-directed work, stakeholder participation

Procedia PDF Downloads 325
13830 Statistical Pattern Recognition for Biotechnological Process Characterization Based on High Resolution Mass Spectrometry

Authors: S. Fröhlich, M. Herold, M. Allmer

Abstract:

Early-stage quantitative analysis of host cell protein (HCP) variation is challenging yet necessary for comprehensive bioprocess development. High resolution mass spectrometry (HRMS) provides a high-end technology for accurate identification alongside quantitative information. Here we describe a flexible HRMS assay platform to quantify HCPs relevant in microbial expression systems such as E. coli, in both upstream and downstream development, by means of MVDA tools. Cell pellets were lysed and proteins extracted; purified samples were not further treated before applying the SMART tryptic digest kit. Peptide separation was optimized using an RP-UHPLC separation platform. HRMS-MS/MS analysis was conducted on an Orbitrap Velos Elite applying CID. Quantification was performed label-free, taking into account ionization properties and physicochemical peptide similarities. Results were analyzed using SIEVE 2.0 (Thermo Fisher Scientific) and SIMCA (Umetrics AG). The developed HRMS platform was applied to an E. coli expression set with varying productivity and the corresponding downstream process. Selected HCPs were successfully quantified within the fmol range. Analysing HCP networks based on pattern analysis facilitated low-level quantification and enhanced validity. This approach is of high relevance for high-throughput screening experiments during upstream development, e.g., for titer determination, dynamic HCP network analysis, or product characterization. Considering the downstream purification process, physicochemical clustering of identified HCPs is relevant for adjusting buffer conditions accordingly. Overall, the technology provides an innovative approach to label-free MS-based quantification relying on statistical pattern analysis and comparison. Absolute quantification based on physicochemical properties and a peptide similarity score provides a technological approach without the need for sophisticated sample preparation strategies and is therefore proven to be straightforward, sensitive, and highly reproducible in terms of product characterization.

Keywords: process analytical technology, mass spectrometry, process characterization, MVDA, pattern recognition

Procedia PDF Downloads 237
13829 Nonparametric Specification Testing for the Drift of the Short Rate Diffusion Process Using a Panel of Yields

Authors: John Knight, Fuchun Li, Yan Xu

Abstract:

Based on a new nonparametric estimator of the drift function, we propose a consistent test for the parametric specification of the drift function in the short rate diffusion process, using observations from a panel of yields. The test statistic is shown to follow an asymptotic normal distribution under the null hypothesis that the parametric drift function is correctly specified, and to converge to infinity under the alternative. Taking the daily 7-day European rates as a proxy for the short rate, we use our test to examine whether the drift of the short rate diffusion process is linear or nonlinear, which is an unresolved and important issue in the short rate modeling literature. The test results indicate that none of the drift functions in this literature adequately captures the dynamics of the drift, but nonlinear specifications perform better than linear ones.
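
A single-path sketch of the kind of nonparametric drift estimator involved (a generic Nadaraya-Watson construction on simulated Vasicek data; the paper's estimator uses a panel of yields and a formal test statistic):

```python
import numpy as np

def nw_drift_estimator(x, dt, grid, h):
    """Nadaraya-Watson kernel estimator of the drift of a diffusion
    dX = mu(X) dt + sigma(X) dW from a discretely sampled path:
    mu(g) ~ sum_t K((X_t - g)/h) (X_{t+1} - X_t)/dt / sum_t K((X_t - g)/h),
    with a Gaussian kernel K."""
    dx = np.diff(x) / dt                        # local drift proxies
    xt = x[:-1]
    est = np.empty_like(grid)
    for i, g in enumerate(grid):
        w = np.exp(-0.5 * ((xt - g) / h) ** 2)  # Gaussian kernel weights
        est[i] = np.sum(w * dx) / np.sum(w)
    return est

# Simulate a Vasicek short rate: dr = kappa (theta - r) dt + sigma dW
rng = np.random.default_rng(1)
kappa, theta, sigma, dt, n = 2.0, 0.05, 0.02, 1 / 252, 50_000
r = np.empty(n); r[0] = 0.05
for t in range(n - 1):
    r[t+1] = r[t] + kappa*(theta - r[t])*dt + sigma*np.sqrt(dt)*rng.standard_normal()

grid = np.linspace(0.02, 0.08, 7)
mu_hat = nw_drift_estimator(r, dt, grid, h=0.005)
print(np.round(mu_hat, 3))                  # compare with the true drift:
print(np.round(kappa * (theta - grid), 3))
```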

Keywords: diffusion process, nonparametric estimation, derivative security price, drift function and volatility function

Procedia PDF Downloads 354
13828 Comparison of Number of Waves Surfed and Duration Using Global Positioning System and Inertial Sensors

Authors: João Madureira, Ricardo Lagido, Inês Sousa, Fraunhofer Portugal

Abstract:

Surf is an increasingly popular sport, and its performance evaluation is often qualitative. This work aims at using a smartphone to collect and analyze GPS and inertial sensor data in order to obtain quantitative metrics of surfing performance. Two approaches are compared for the detection of wave rides: computing the number of waves ridden in a surfing session, the starting time of each wave, and its duration. The first approach is based on computing the velocity from the Global Positioning System (GPS) signal and finding the velocity thresholds that allow identifying the start and end of each wave ride. The second approach adds information from the smartphone's Inertial Measurement Unit (IMU) to the velocity thresholds obtained from the GPS unit to determine the start and end of each wave ride. The two methods were evaluated using GPS and IMU data from two surfing sessions and validated against similar metrics extracted from video data collected from the beach. The second method, combining GPS and IMU data, was found to be more accurate in determining the number of waves, their start times, and their durations. This paper shows that it is feasible to use smartphones for the quantification of performance metrics during surfing. In particular, the waves ridden and their durations can be accurately determined using the smartphone GPS and IMU.
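
A minimal sketch of the GPS-only first approach (the velocity thresholds and minimum duration are invented illustration values; the paper calibrates them against video ground truth):

```python
import numpy as np

def detect_wave_rides(speed, t, v_start=2.5, v_end=1.0, min_dur=3.0):
    """Threshold-based wave-ride detection from GPS speed (m/s).
    A ride starts when speed exceeds v_start and ends when it drops
    below v_end; rides shorter than min_dur seconds are discarded."""
    rides, start = [], None
    for i in range(len(speed)):
        if start is None and speed[i] > v_start:
            start = t[i]
        elif start is not None and speed[i] < v_end:
            if t[i] - start >= min_dur:
                rides.append((start, t[i] - start))  # (start time, duration)
            start = None
    return rides

# 1 Hz GPS speed trace with two fast segments (simulated session)
t = np.arange(0, 120, 1.0)
speed = 0.5 + 4.0 * ((t > 20) & (t < 30)) + 4.5 * ((t > 70) & (t < 78))
for start, dur in detect_wave_rides(speed, t):
    print(f"wave at t={start:.0f}s, duration {dur:.0f}s")
```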

Keywords: inertial measurement unit (IMU), global positioning system (GPS), smartphone, surfing performance

Procedia PDF Downloads 389
13827 The Scattering in Flexible Reactive Silencer Containing Rigid Partitioning

Authors: Muhammad Afzal, Junaid Uzair Satti

Abstract:

The noise emanating from the ducting of heating, ventilation, and air-conditioning (HVAC) systems is often attenuated by using dissipative silencers. Such devices work well for high-frequency noise but are less effective in the low-frequency range. The present study analyzes a reactive silencer comprising an expansion chamber with elastic membranes, partitioned symmetrically by a rigid plate. A mode-matching scheme has been developed to solve the governing boundary value problem. The orthogonal and non-orthogonal duct modes of acoustic pressure and normal velocity are matched at the interfaces. This recasts the differential system into an infinite system of linear algebraic equations, which is then truncated and inverted for the solution. The truncated solution is validated through the conservation of energy and the reconstruction of the matching conditions. Results for the scattered energy flux and transmission loss are shown against frequency and the dimensions of the chamber. It is seen that the stop-band of the silencer can be shifted and broadened by changing the dimensions of the chamber and the properties of the elastic membranes. The modeled reactive silencer is most efficient in the low-frequency regime, where passive dissipative devices are least effective.

Keywords: acoustic scattering, elastic membranes mode-matching, reactive silencer

Procedia PDF Downloads 134
13826 Effects of Cattaneo-Christov Heat Flux on 3D Magnetohydrodynamic Viscoelastic Fluid Flow with Variable Thermal Conductivity

Authors: Muhammad Ramzan

Abstract:

A mathematical model has been envisaged to discuss three-dimensional viscoelastic fluid flow with the effect of Cattaneo-Christov heat flux in the presence of magnetohydrodynamics (MHD). Variable thermal conductivity with the impact of homogeneous-heterogeneous reactions and a convective boundary condition is also taken into account. The homotopy analysis method is employed to obtain series solutions. Graphical illustrations depicting the behaviour of sundry parameters on the skin friction coefficient and all involved distributions are also given. It is observed that the velocity components are decreasing functions of the viscoelastic fluid parameter. Furthermore, the strengths of the homogeneous and heterogeneous reactions have opposite effects on the concentration distribution. A comparison with a published paper has also been established, and an excellent agreement is obtained; hence reliable results are presented.
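
For reference, the Cattaneo-Christov model referred to above generalizes Fourier's law q = -k∇T by a thermal relaxation time λ through a frame-indifferent (upper-convected) derivative (standard form from the literature, not reproduced from the paper itself):

```latex
\mathbf{q} + \lambda \left( \frac{\partial \mathbf{q}}{\partial t}
  + \mathbf{V}\cdot\nabla\mathbf{q}
  - \mathbf{q}\cdot\nabla\mathbf{V}
  + (\nabla\cdot\mathbf{V})\,\mathbf{q} \right) = -k\,\nabla T
```

Setting λ = 0 recovers the classical Fourier law.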

Keywords: Cattaneo-Christov heat flux, homogeneous-heterogeneous reactions, magnetic field, variable thermal conductivity

Procedia PDF Downloads 186
13825 Phytoremediation Waste Processing of Coffee in Various Concentration of Organic Materials Plant Using Kiambang

Authors: Siti Aminatu Zuhria

Abstract:

Wet coffee processing can improve the quality of coffee, but it produces liquid waste that can pollute the environment. Much of this liquid waste results from the stripping and washing of the coffee. In this research, the liquid waste from stripping the coffee skin is treated by phytoremediation using kiambang plants. The purpose of this study was to determine the characteristics of the coffee liquid waste and of kiambang as a phytoremediation agent at various concentrations of coffee liquid waste, as well as to determine the concentration at which the treated waste water most closely approaches the quality standard. This research will be conducted in two stages: a preliminary study and the main study. The preliminary study aims to determine the ability of kiambang plants to live as phytoremediation agents in well water, distilled water, and coffee liquid waste. In the main study, the wastewater will be diluted to obtain various COD concentrations. The expected results of this research are a determination of the ability of kiambang plants as agents for phytoremediation in wastewater treatment at various waste concentrations, and of the most optimal concentration for improving waste water quality toward the quality standard.

Keywords: wet coffee processing, phytoremediation, Kiambang plant, variation concentration liquid waste

Procedia PDF Downloads 290
13824 A Fuzzy-Rough Feature Selection Based on Binary Shuffled Frog Leaping Algorithm

Authors: Javad Rahimipour Anaraki, Saeed Samet, Mahdi Eftekhari, Chang Wook Ahn

Abstract:

Feature selection and attribute reduction are crucial problems and widely used techniques in the fields of machine learning, data mining, and pattern recognition, employed to overcome the well-known phenomenon of the curse of dimensionality. This paper presents a feature selection method that efficiently carries out attribute reduction, thereby selecting the most informative features of a dataset. It consists of two components: 1) a measure for feature subset evaluation, and 2) a search strategy. For the evaluation measure, we have employed the fuzzy-rough dependency degree (FRDD) of the lower approximation-based fuzzy-rough feature selection (L-FRFS) due to its effectiveness in feature selection. As for the search strategy, a modified version of the binary shuffled frog leaping algorithm (B-SFLA) is proposed. The proposed feature selection method is obtained by hybridizing the B-SFLA with the FRDD. Nine classifiers have been employed to compare the proposed approach with several existing methods over twenty-two datasets, including nine high-dimensional and large ones, from the UCI repository. The experimental results demonstrate that the B-SFLA approach significantly outperforms other metaheuristic methods in terms of the number of selected features and the classification accuracy.
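
A minimal sketch of a binary SFLA search loop (a generic scheme under simplifying assumptions, not the paper's exact B-SFLA variant; the toy fitness stands in for the FRDD evaluation measure):

```python
import numpy as np

rng = np.random.default_rng(0)

def b_sfla(fitness, n_bits, n_frogs=20, n_memeplexes=4, iters=50):
    """Binary shuffled frog leaping sketch: frogs are binary feature
    masks; in each memeplex the worst frog leaps toward the best by
    flipping bits with a sigmoid-mapped probability."""
    frogs = rng.integers(0, 2, (n_frogs, n_bits))
    for _ in range(iters):
        order = np.argsort([-fitness(f) for f in frogs])   # best first
        frogs = frogs[order]
        for m in range(n_memeplexes):
            idx = np.arange(m, n_frogs, n_memeplexes)      # shuffled split
            best, worst = idx[0], idx[-1]
            step = rng.random(n_bits) * (frogs[best] - frogs[worst])
            prob = 1 / (1 + np.exp(-2 * step))             # sigmoid map
            trial = (rng.random(n_bits) < prob).astype(int)
            if fitness(trial) > fitness(frogs[worst]):
                frogs[worst] = trial                       # accept leap
    return max(frogs, key=fitness)

# Toy fitness standing in for the FRDD: reward matching a target mask
target = rng.integers(0, 2, 16)
best = b_sfla(lambda f: -np.abs(f - target).sum(), n_bits=16)
print(best, target)
```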

Keywords: binary shuffled frog leaping algorithm, feature selection, fuzzy-rough set, minimal reduct

Procedia PDF Downloads 201
13823 Aspiring to Achieve a Fairer Society

Authors: Bintou Jobe

Abstract:

Background: The research is focused on the concept of equality, diversity, and inclusion (EDI) and the need to achieve equity by treating individuals according to their circumstances and needs. The research is rooted in the UK Equality Act 2010, which emphasizes the importance of equal opportunities for all individuals regardless of their background and social life. However, inequality persists in society, particularly for those from minority backgrounds who face discrimination. Research Aim: The aim of this research is to promote equality, diversity, and inclusion by encouraging the regeneration of minds and the eradication of stereotypes. The focus is on promoting good EDI practices in various settings, including schools, colleges, universities, and workplaces, to create environments where every individual feels a sense of belonging. Methodology: The research utilises a literature review approach to gather information on promoting inclusivity, diversity, and inclusion. Findings: The research highlights the significance of promoting equality, diversity, and inclusion practices to ensure that individuals receive the respect and dignity they deserve. It emphasises the importance of treating individuals based on their unique circumstances and needs rather than relying on stereotypes, and it emphasises the benefits of diversity and inclusion in enhancing innovation, creativity, and productivity. The theoretical importance of this research is to raise awareness about the importance of regenerating minds, challenging stereotypes, and promoting equality, diversity, and inclusion; by doing so, it makes a significant contribution to the subject area. However, the methodology could be strengthened by incorporating primary research to complement the literature review approach. Data Collection and Analysis Procedures: The research utilised a literature review approach to gather relevant information on promoting inclusivity, diversity, and inclusion; the NVivo software application was used to analyse and synthesize the findings and to identify themes supporting the research aim and objectives. Question Addressed: This research addresses the question of how to promote inclusivity, diversity, and inclusion and reduce the prevalence of stereotypes and prejudice. It explores the need to treat individuals based on their unique circumstances and needs rather than relying on generic assumptions. Recommendations: Encourage individuals to adopt a more inclusive approach; provide managers with responsibility and training that help them understand the importance of their roles in shaping workplace culture; and have an equality, diversity, and inclusion manager from a majority background at the senior level who can speak up for underrepresented groups and flag any issues that need addressing. Conclusion: The research emphasizes the importance of promoting equality, diversity, and inclusion practices to create a fairer society. It highlights the need to challenge stereotypes, to treat individuals according to their circumstances and needs, and to promote a culture of respect and dignity.

Keywords: equality, fairer society, inclusion, diversity

Procedia PDF Downloads 37
13822 Automated Evaluation Approach for Time-Dependent Question Answering Pairs on Web Crawler Based Question Answering System

Authors: Shraddha Chaudhary, Raksha Agarwal, Niladri Chatterjee

Abstract:

This work demonstrates a web crawler-based, generalized, end-to-end open domain Question Answering (QA) system. An efficient QA system requires a significant amount of domain knowledge to answer any question, with the aim of finding an exact and correct answer in the form of a number, a noun, a short phrase, or a brief piece of text for the user's question. Analysis of the question, searching the relevant documents, and choosing an answer are three important steps in a QA system. This work uses a web scraper (Beautiful Soup) to extract K documents from the web; the value of K can be calibrated on the basis of a trade-off between time and accuracy. This is followed by a passage-ranking process, using a ranker trained on the MS MARCO dataset of 500K queries, to extract the most relevant text passages and shorten the lengthy documents. Further, a QA system is used to extract answers from the shortened documents based on the query and return the top 3 answers. For the evaluation of such systems, accuracy is judged by the exact match between predicted answers and gold answers. But automatic evaluation methods fail due to the linguistic ambiguities inherent in the questions. Moreover, reference answers are often not exhaustive or are out of date; hence, correct answers predicted by the system are often judged incorrect according to the automated metrics. One such scenario arises from the original Google Natural Questions (GNQ) dataset, which was collected and made available in the year 2016. Use of any such dataset proves to be inefficient with respect to questions that have time-varying answers. For illustration, consider the query "Where will the next Olympics be?". The gold answer for this query as given in the GNQ dataset is "Tokyo". Since the dataset was collected in 2016, and the next Olympics after 2016 were held in Tokyo in 2020, this answer was absolutely correct at the time. But if the same question is asked in 2022, the answer is "Paris, 2024". Consequently, any evaluation based on the GNQ dataset will be incorrect. Such erroneous predictions are usually given to human evaluators for further validation, which is quite expensive and time-consuming. To address this erroneous evaluation, the present work proposes an automated approach for evaluating time-dependent question-answer pairs. In particular, it proposes a metric using the current timestamp along with the top-n predicted answers from a given QA system. To test the proposed approach, the GNQ dataset was used, and the system achieved an accuracy of 78% on a test dataset comprising 100 QA pairs. This test data was automatically extracted, using an analysis-based approach, from 10K QA pairs of the GNQ dataset. The results obtained are encouraging. The proposed technique appears to have the potential to develop into a useful scheme for gathering precise, reliable, and specific information in a real-time and efficient manner. Our subsequent experiments will be directed towards establishing the efficacy of the above system for a larger set of time-dependent QA pairs.
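
A minimal sketch of such a time-aware check (the interface and the validity periods below are invented for illustration; the proposed metric combines the current timestamp with the top-n answers):

```python
from datetime import date

def time_aware_match(predicted_top_n, gold_by_period, query_date):
    """Sketch of a time-aware evaluation (hypothetical interface,
    not the paper's exact metric): a prediction counts as correct if
    any of the top-n answers matches the gold answer that was valid
    on the date the question was asked."""
    for start, end, gold in gold_by_period:
        if start <= query_date <= end:
            return any(gold.lower() in p.lower() for p in predicted_top_n)
    return False  # no gold answer recorded for this period

# "Where will the next Olympics be?" with period-dependent gold answers
gold = [
    (date(2016, 8, 22), date(2021, 8, 8), "Tokyo"),
    (date(2021, 8, 9), date(2024, 8, 11), "Paris"),
]
preds = ["Paris, 2024", "Los Angeles", "Tokyo"]
print(time_aware_match(preds, gold, date(2022, 6, 1)))      # True
print(time_aware_match(preds[:1], gold, date(2017, 1, 1)))  # False
```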

Keywords: web-based information retrieval, open domain question answering system, time-varying QA, QA evaluation

Procedia PDF Downloads 89