Search results for: exact controllability
38. Beyond Black Friday: The Value of Collaborative Research on Seasonal Shopping Events and Behavior
Authors: Jasmin H. Kwon, Thomas M. Brinthaupt
Abstract:
There is a general lack of consumer behavior research on seasonal shopping events. Studying these kinds of events is interesting and important for several reasons. First, global shopping opportunities have implications for cross-cultural shopping events and effects on seasonal events in other countries. Second, seasonal shopping events are subject to economic conditions and may wane in popularity, especially with e-commerce options. Third, retailers can expand the success of their seasonal shopping events by taking advantage of cross-cultural opportunities. Fourth, it is interesting to consider how consumers from other countries might take advantage of different countries’ seasonal shopping events. Many countries have seasonal shopping events such as Black Friday. Research on these kinds of events can lead to the identification of cross-cultural similarities and differences in consumer behavior. We compared shopping motivations of college students who did (n=36) and did not (n=81) shop on Cyber Monday. The results showed that the groups did not differ significantly on any of the shopping motivation subscales. The Cyber Monday shoppers reported being significantly more likely to agree than disagree that their online shopping experience was enjoyable and exciting. They were more likely to disagree than agree that their experience was overwhelming. In addition, they agreed that they shopped only for deals, purchased the exact items they wanted, and thought that their efforts were worth it. Finally, they intended to shop again at next year’s Cyber Monday. It appears that there are many positive aspects to online seasonal shopping, independent of one’s typical shopping motivations. Different countries have seasonal events similar to the Black Friday and Cyber Monday shopping holiday (e.g., Boxing Day, Fukubukuro, China’s Singles Day). In Korea, there is increasing interest in taking advantage of U.S. Black Friday and Cyber Monday opportunities. 
Government officials are interested in adapting the U.S. holiday to Korean retailers, essentially recreating the Black Friday/Cyber Monday holiday there. Similarly, the Japanese Fukubukuro ('Lucky Bag') holiday is being adapted by other countries such as Korea and the U.S. International shipping support companies are also emerging that help customers identify and receive products from other countries. U.S. department stores also provide free shipping on international orders for certain items. As these structural changes occur and new options for global shopping emerge, the need to understand the role of shoppers' motivations becomes even more important. For example, the Cyber Monday results are particularly relevant to the new landscape of e-commerce and cross-cultural opportunities, since many of these events involve e-commerce. Within today's global market, the physical location of a retail store is no longer a limitation to growing one's market share. From a consumer perspective, it is important to investigate how shopping motivations are related to e-commerce seasonal events. From a retail perspective, understanding the shopping motivations of international customers would help retailers expand and better tailor their seasonal shopping events beyond the boundaries of their own countries. From a collaborative perspective, research on this topic can include interdisciplinary researchers from fashion merchandising, marketing, retailing, and psychology.
Keywords: Black Friday, cross-cultural research, Cyber Monday, seasonal shopping behavior
37. Temperature-Dependent Post-Mortem Changes in Human Cardiac Troponin-T (cTnT): An Approach in Determining Postmortem Interval
Authors: Sachil Kumar, Anoop Kumar Verma, Wahid Ali, Uma Shankar Singh
Abstract:
Globally, approximately 55.3 million people die each year. In India, there were about 9.5 million (95 lakh) deaths in 2013; deaths resulting from homicides, suicides, and unintentional injuries in the same period numbered about 570,000 (5.7 lakh). The ever-increasing crime rate necessitates the development of methods for determining time since death. An erroneous time-of-death window can lead investigators down the wrong path or possibly focus a case on an innocent suspect. In this regard, research was carried out analyzing the temperature-dependent degradation of the cardiac troponin-T protein (cTnT) in the myocardium postmortem as a marker for time since death. Cardiac tissue samples were collected from medico-legal autopsies (n=6) in the Department of Forensic Medicine and Toxicology, King George's Medical University, Lucknow, India, after informed consent from the relatives, and post-mortem degradation was studied by incubating the cardiac tissue at room temperature (20±2 °C), 12 °C, 25 °C, and 37 °C for different time periods (~5, 26, 50, 84, 132, 157, 180, 205, and 230 hours). The cases included were subjects of road traffic accidents (RTA) without any prior history of disease who died in the hospital and whose exact time of death was known. The analysis involved extraction of the protein, separation by denaturing gel electrophoresis (SDS-PAGE), and visualization by Western blot using cTnT-specific monoclonal antibodies. The area of the bands within a lane was quantified by scanning and digitizing the image using a Gel Doc system. The data show a distinct temporal profile corresponding to the degradation of cTnT by proteases found in cardiac muscle. The disappearance of intact cTnT and the appearance of lower-molecular-weight bands are easily observed. Western blot data clearly showed the intact protein at 42 kDa, two major fragments (27 kDa, 10 kDa), additional minor fragments (32 kDa), and the formation of low-molecular-weight fragments as time increased.
At 12 °C the intensity of the intact cTnT band decreased more gradually than at room temperature, 25 °C, and 37 °C. Overall, both PMI and temperature had a statistically significant effect; the greatest amount of protein breakdown was observed within the first 38 h and at the highest temperature, 37 °C. The combination of high temperature (37 °C) and long postmortem interval (105.15 hrs) had the most drastic effect on the breakdown of cTnT. If the percent intact cTnT is calculated from the total area integrated within a Western blot lane, then the percent intact cTnT shows a pseudo-first-order relationship when plotted against the log of the time postmortem. These plots show a good coefficient of correlation, r = 0.95 (p=0.003), for the regression of the human heart under different temperature conditions. The data presented demonstrate that this technique can provide an extended time range during which the postmortem interval can be more accurately estimated.
Keywords: degradation, postmortem interval, proteolysis, temperature, troponin
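The pseudo-first-order relationship described above (percent intact cTnT regressed against the log of time postmortem) can be sketched as a simple linear fit. The data points below are invented for demonstration only and are NOT the study's measurements; the time points mirror the sampling schedule quoted in the abstract.

```python
import numpy as np

# Hypothetical percent-intact-cTnT values at the abstract's sampling times.
# These numbers are illustrative, not the study's data.
hours = np.array([5, 26, 50, 84, 132, 157, 180, 205, 230], dtype=float)
percent_intact = np.array([95, 78, 65, 52, 40, 35, 31, 27, 24], dtype=float)

# Pseudo-first-order behaviour: percent intact is roughly linear in log(time).
log_t = np.log10(hours)
slope, intercept = np.polyfit(log_t, percent_intact, 1)
r = np.corrcoef(log_t, percent_intact)[0, 1]

print(f"slope = {slope:.2f} %/log10(h), r = {r:.3f}")
```

Inverting the fitted line gives a PMI estimate from a measured band intensity, t ≈ 10^((percent − intercept)/slope); a strongly negative slope with |r| near 1 mirrors the reported correlation (r = 0.95, with the sign depending on orientation).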
36. Validation of Asymptotic Techniques to Predict Bistatic Radar Cross Section
Authors: M. Pienaar, J. W. Odendaal, J. C. Smit, J. Joubert
Abstract:
Simulations are commonly used to predict the bistatic radar cross section (RCS) of military targets since characterization measurements can be expensive and time consuming. It is thus important to accurately predict the bistatic RCS of targets. Computational electromagnetic (CEM) methods can be used for bistatic RCS prediction. CEM methods are divided into full-wave and asymptotic methods. Full-wave methods are numerical approximations to the exact solution of Maxwell’s equations. These methods are very accurate but are computationally very intensive and time consuming. Asymptotic techniques make simplifying assumptions in solving Maxwell's equations and are thus less accurate but require less computational resources and time. Asymptotic techniques can thus be very valuable for the prediction of bistatic RCS of electrically large targets, due to the decreased computational requirements. This study extends previous work by validating the accuracy of asymptotic techniques to predict bistatic RCS through comparison with full-wave simulations as well as measurements. Validation is done with canonical structures as well as complex realistic aircraft models instead of only looking at a complex slicy structure. The slicy structure is a combination of canonical structures, including cylinders, corner reflectors and cubes. Validation is done over large bistatic angles and at different polarizations. Bistatic RCS measurements were conducted in a compact range, at the University of Pretoria, South Africa. The measurements were performed at different polarizations from 2 GHz to 6 GHz. Fixed bistatic angles of β = 30.8°, 45° and 90° were used. The measurements were calibrated with an active calibration target. The EM simulation tool FEKO was used to generate simulated results. The full-wave multi-level fast multipole method (MLFMM) simulated results together with the measured data were used as reference for validation. 
The accuracy of physical optics (PO) and geometrical optics (GO) was investigated. Differences in amplitude, lobing structure, and null positions were observed between the asymptotic, full-wave, and measured data. PO and GO were more accurate at angles close to the specular scattering directions, and the accuracy decreased as the bistatic angle increased. At large bistatic angles, PO did not perform well because the shadow regions were not treated appropriately. PO also did not perform well for canonical structures where multi-bounce was the main scattering mechanism. PO and GO do not account for diffraction, but these inaccuracies tended to decrease as the electrical size of objects increased. It was evident that both asymptotic techniques do not properly account for bistatic structural shadowing. Specular scattering was calculated accurately even when targets did not meet the electrically large criteria. It was evident that the bistatic RCS prediction performance of PO and GO depends on incident angle, frequency, target shape, and observation angle. The improved computational efficiency of the asymptotic solvers yields a major advantage over full-wave solvers and measurements; however, there is still much room for improvement in the accuracy of these asymptotic techniques.
Keywords: asymptotic techniques, bistatic RCS, geometrical optics, physical optics
35. Critical Evaluation of Long Chain Hydrocarbons with Biofuel Potential from Marine Diatoms Isolated from the West Coast of India
Authors: Indira K., Valsamma Joseph, I. S. Bright
Abstract:
Introduction: Biofuels could replace fossil fuels and reduce our carbon footprint on the planet, given the technological advancements needed for sustainable and economic fuel production. Microalgae have proven to be a promising source for meeting current energy demand because of their high lipid content and rapid production of high biomass. Marine diatoms, which are key contributors in the biofuel sector and also play a significant role in primary productivity and ecology, with high biodiversity and genetic and chemical diversity, are less well understood than other microalgae as hydrocarbon producers. Method: Eleven marine diatom samples were selected for hydrocarbon analysis; nine were from the culture collection of NCAAH, and the remaining two were isolated by the serial dilution method to obtain pure cultures from mixed microalgal cultures collected at cruise stations 350 and 357 of FORV Sagar Sampada along the west coast of India. These diatoms were mass cultured in F/2 medium, and the biomass was harvested. The crude extract was obtained from the biomass by homogenizing with n-hexane, the hydrocarbons were further purified by passing the crude extract through a 500 mg Bonna Agela SPE column, and quantitative analysis was done by GC-HRMS using an HP-5 column with helium as the carrier gas (1 ml/min). The injector port temperature was 240 °C, the detector temperature was 250 °C, and the oven was initially kept at 60 °C for 1 minute and increased to 220 °C at a rate of 6 °C per minute for the analysis of the mixture of long chain hydrocarbons. Results: In the qualitative analysis, the most potent hydrocarbon producer was found to be Psammodictyon panduriforme (NCAAH-9), with a hydrocarbon mass of 37.27 mg/g of biomass and 2.1% of the total biomass of 1.395 g; the other potent producer was Biddulphia (NCAAH-6), with a hydrocarbon mass of 25.4 mg/g of biomass and a hydrocarbon percentage of 1.03%.
In the quantitative analysis by GC-HRMS, the long chain hydrocarbons found in most of the marine diatoms were undecane, hexadecane, octadecane 3-ethyl-5-(2-ethylbutyl), eicosane 7-hexyl, hexacosane, heptacosane, heneicosane, octadecane 3-methyl, and triacontane. The exact masses of the long chain hydrocarbons in the marine diatom samples corresponded to nonadecane (¹²C₁₉¹H₄₀); tritriacontane, 13-decyl-13-heptyl (¹²C₅₀¹H₁₀₂); octadecane, 3-ethyl-5-(2-ethylbutyl) (¹²C₂₆¹H₅₄); tetratetracontane (¹²C₄₄¹H₈₉); and eicosane, 7-hexyl (¹²C₂₆¹H₅₄). Conclusion: All the marine diatoms screened produced long chain hydrocarbons that can be used as diesel fuel with good cetane values, for example, hexadecane and undecane. All the long chain hydrocarbons can further undergo catalytic cracking to produce short chain alkanes, which give good octane values and can be used as gasoline. Optimization of hydrocarbon production with the most potent marine diatom yielded long chain hydrocarbons of good fuel quality.
Keywords: biofuel, hydrocarbons, marine diatoms, screening
34. Effect of Supplementation with Fresh Citrus Pulp on Growth Performance, Slaughter Traits and Mortality in Guinea Pigs
Authors: Carlos Minguez, Christian F. Sagbay, Erika E. Ordoñez
Abstract:
Guinea pigs (Cavia porcellus) play prominent roles as experimental models for medical research and as pets. However, in developing regions such as South America, the Philippines, and sub-Saharan Africa, guinea pig meat is an economical source of animal protein for poor and malnourished people, because guinea pigs are mainly fed forage and do not compete directly with human beings for food resources such as corn or wheat. To achieve efficient production of guinea pigs, it is essential to guard against vitamin C deficiency. The objective of this research was to investigate the effect of the partial replacement of alfalfa with fresh citrus pulp (Citrus sinensis) in the diet of guinea pigs on growth performance, slaughter traits, and mortality during the fattening period (between 20 and 74 days of age). A total of 300 guinea pigs were housed in collective cages of about ten animals each (2 x 1 x 0.4 m) and were distributed into two completely randomized groups. Guinea pigs in both groups were fed ad libitum with a standard commercial pellet diet (10 MJ of digestible energy/kg, 17% crude protein, 11% crude fiber, and 4.5% crude fat). The control group was supplied with fresh alfalfa as forage; in the treatment group, 30% of the alfalfa was replaced by fresh citrus pulp. Growth traits, including body weight (BW), average daily gain (ADG), feed intake (FI), and feed conversion ratio (FCR), were measured weekly. On day 74, the animals were slaughtered, and slaughter traits, including live weight at slaughter (LWS), full gastrointestinal tract weight (FGTW), hot carcass weight (with head; HCW), cold carcass weight (with head; CCW), drip loss percentage (DLP), and dressing-out carcass yield percentage (DCY), were evaluated. Contrasts between groups were estimated by generalized least squares. Mortality was evaluated by Fisher's exact test due to low numbers in some cells.
In the first week, there were significant differences in the growth traits BW, ADG, FI, and FCR, which were superior in the control group. These differences may have been due to the origin of the young guinea pigs, which, before weaning, were all raised without fresh citrus pulp and were not familiarized with the new supplement. In the second week, the treatment group had significantly higher ADG than the control group, which may have been the result of compensatory growth. During subsequent weeks, no significant differences were observed between animals raised in the two groups, nor across the total fattening period. No significant differences in slaughter traits or mortality rate were observed between animals from the two groups. In conclusion, although there were no significant differences in growth performance, slaughter traits, or mortality, the use of fresh citrus pulp is recommended. Fresh citrus pulp is a by-product of the orange juice industry and is cheap or free. Forage containing fresh citrus pulp could reduce the quantity of alfalfa in diets of guinea pigs raised for meat by about 30% and, as a consequence, reduce production costs.
Keywords: fresh citrus, growth, Guinea pig, mortality
33. Gender Bias and the Role It Plays in Student Evaluation of Instructors
Authors: B. Garfolo, L. Kelpsh, R. Roak, R. Kuck
Abstract:
Often, student ratings of instructors play a significant role in the career path of an instructor in higher education. So then, how does a student view the effectiveness of instructor teaching? This question has been addressed by literally thousands of studies in the literature. Yet, why does this question still persist? A literature review reveals that while it is true that student evaluations of instructors can be biased, there is still a considerable amount of work to be done in understanding why. As student evaluations of instructors can be used in a variety of settings (formative or summative), it is critical to understand the nature of the bias. The authors believe that not only is some bias possible in student evaluations, it should be expected, for the simple reason that a student evaluation is a human activity and, as such, relies upon perception and interpersonal judgment. Student ratings are therefore affected by the same factors that can potentially affect any rater's judgment, such as stereotypes based on gender, culture, race, etc. Previous study findings suggest that student evaluations of teacher effectiveness differ between male and female raters. However, even though studies have shown that instructor gender does play an important role in influencing student ratings, the exact nature and extent of that role remains the subject of debate. Researchers, in their attempt to define good teaching, have looked for differences in student evaluations based on a variety of characteristics, such as course type, class size, ability level of the student, and grading practices, in addition to instructor and student characteristics (gender, age, etc.), with inconsistent results. If a student evaluation represents more than an instructor's teaching ability, for example, a physical characteristic such as gender, then this information must be taken into account if the evaluation is to have meaning with respect to instructor assessment.
While the authors concede that it is difficult or nearly impossible to separate gender from student perception of teaching practices in person, it is possible to shield an instructor's gender identity in an online teaching experience. The online teaching modality thus presents a unique opportunity to experiment directly with gender identity. Analysis of the differences in individuals' online behavior when they perceive that they are interacting with a male or a female could provide a wealth of data on how gender influences student perceptions of teaching effectiveness. Given the importance of the role student ratings play in hiring, retention, promotion, tenure, and salary deliberations in academic careers, this question warrants further attention, as it is important to be aware of possible bias in student evaluations if they are to be used at all in academic considerations. For experimental purposes, the authors constructed an online class in which each instructor operated under two different gender identities. In this study, each instructor taught multiple sections of the same class using both a male identity and a female identity. The study examined student evaluations of teaching based on certain student and instructor characteristics in order to determine if and where male and female students might differ in their ratings of instructors based on instructor gender. Additionally, the authors examined whether there are differences between undergraduate and graduate students' ratings with respect to the experimental criteria.
Keywords: gender bias, ethics, student evaluations, student perceptions, online instruction
32. Kidney Supportive Care in Canada: A Constructivist Grounded Theory of Dialysis Nurses' Practice Engagement
Authors: Jovina Concepcion Bachynski, Lenora Duhn, Idevania G. Costa, Pilar Camargo-Plazas
Abstract:
Kidney failure is a life-limiting condition for which treatment, such as dialysis (hemodialysis and peritoneal dialysis), can exact a tremendously high physical and psychosocial symptom burden. Kidney failure can be severe enough to require a palliative approach to care. The term supportive care can be used in lieu of palliative care to avoid the misunderstanding that palliative care is synonymous with end-of-life or hospice care. Kidney supportive care, encompassing advance care planning, is an approach to care that improves the quality of life for people receiving dialysis through early identification and treatment of symptoms throughout the disease trajectory. Advance care planning involves ongoing conversations about the values, goals, and preferences for future care between individuals and their healthcare teams. Kidney supportive care is underutilized and often initiated late in this population. There is evidence to indicate nurses are not providing the necessary elements of kidney supportive care. Dialysis nurses' delay or lack of engagement in supportive care until close to the end of life may result in people dying without receiving optimal palliative care services. Using Charmaz's constructivist grounded theory, the purpose of this doctoral study is to develop a substantive theory that explains the process of engagement in supportive care by nurses working in dialysis settings in Canada. Through initial purposeful and subsequent theoretical sampling, 23 nurses with current or recent work experience in outpatient hemodialysis, home hemodialysis, and peritoneal dialysis settings drawn from across Canada were recruited to participate in two intensive interviews using the Zoom© teleconferencing platform.
Concurrent data collection and data analysis, constant comparative analysis of initial and focused codes until the attainment of theoretical saturation, memo-writing, and researcher reflexivity have been undertaken to aid the emergence of concepts, categories, and, ultimately, the constructed theory. At the time of abstract submission, data analysis is at the second level of coding (i.e., the focused coding stage). Preliminary categories include: (a) focusing on biomedical care; (b) multi-dimensional challenges to having the conversation; (c) connecting and setting boundaries with patients; (d) difficulty articulating kidney supportive care; and (e) unwittingly practising kidney supportive care. For the conference, the resulting theory will be presented. Nurses working in dialysis are well-positioned to ensure the delivery of quality kidney supportive care. This study will help to determine the process and the factors enabling and impeding nurse engagement in supportive care in dialysis, to effect change for normalizing advance care planning conversations in the clinical setting. This improved practice will have substantive beneficial implications for the many individuals living with kidney failure and their supporting loved ones.
Keywords: dialysis, kidney failure, nursing, supportive care
31. ExactData Smart Tool for Marketing Analysis
Authors: Aleksandra Jonas, Aleksandra Gronowska, Maciej Ścigacz, Szymon Jadczak
Abstract:
ExactData is a smart tool that helps with meaningful marketing content creation. It helps marketers by analyzing the text of an advertisement before and after its publication on social media sites like Facebook or Instagram. In our research we focus on four areas of natural language processing (NLP): grammar correction, sentiment analysis, irony detection, and advertisement interpretation. Our research has identified a considerable lack of NLP tools for the Polish language which specifically aid online marketers. In light of this, our research team has set out to create a robust and versatile NLP tool for the Polish language. The primary objective of our research is to develop a tool that can perform a range of language processing tasks in this language, such as sentiment analysis, text classification, text correction, and text interpretation. Our team has been working diligently to create a tool that is accurate, reliable, and adaptable to the specific linguistic features of Polish, and that can provide valuable insights for a wide range of marketers' needs. In addition to the Polish language version, we are also developing an English version of the tool, which will enable us to expand the reach and impact of our research to a wider audience. Another area of focus in our research involves tackling the challenge of the limited availability of linguistically diverse corpora for non-English languages, which presents a significant barrier to the development of NLP applications. One approach we have been pursuing is the translation of existing English corpora, which would enable us to use the wealth of linguistic resources available in English for other languages. Furthermore, we are looking into other methods, such as gathering language samples from social media platforms.
By analyzing the language used in social media posts, we can collect a wide range of data that reflects the unique linguistic characteristics of specific regions and communities, which can then be used to enhance the accuracy and performance of NLP algorithms for non-English languages. In doing so, we hope to broaden the scope and capabilities of NLP applications. Our research focuses on several key NLP techniques, including sentiment analysis, text classification, text interpretation, and text correction. To ensure that we achieve the best possible performance for these techniques, we are evaluating and comparing different approaches and strategies for implementing them. We are exploring a range of methods, including transformers and convolutional neural networks (CNNs), to determine which are most effective for different types of NLP tasks. By analyzing the strengths and weaknesses of each approach, we can identify the most effective techniques for specific use cases and further enhance the performance of our tool. Our research aims to create a tool which can provide a comprehensive analysis of advertising effectiveness, allowing marketers to identify areas for improvement and optimize their advertising strategies. The results of this study suggest that a smart tool for advertisement analysis can provide valuable insights for businesses seeking to create effective advertising campaigns.
Keywords: NLP, AI, IT, language, marketing, analysis
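The sentiment-analysis component described above can be illustrated, in grossly simplified form, by a lexicon-based baseline: count positive and negative words and return a normalized score. The word lists below are invented examples; the actual tool compares transformer and CNN models rather than this kind of dictionary lookup.

```python
# Deliberately simple lexicon-based sentiment baseline, for illustration only.
# The actual tool evaluates transformers and CNNs; these word sets are made up.
POSITIVE = {"great", "amazing", "love", "best", "exciting"}
NEGATIVE = {"bad", "boring", "worst", "hate", "disappointing"}

def sentiment_score(ad_text: str) -> float:
    """Return a score in [-1, 1]: (positive - negative word count) / word count."""
    words = [w.strip(".,!?").lower() for w in ad_text.split()]
    if not words:
        return 0.0
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return (pos - neg) / len(words)

print(sentiment_score("The best deal ever, amazing prices!"))  # positive score
print(sentiment_score("Boring offer, the worst campaign."))    # negative score
```

A learned model replaces the fixed word sets with contextual representations, which is what makes handling Polish morphology and irony feasible at all.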
30. Estimation of Effective Mechanical Properties of Linear Elastic Materials with Voids Due to Volume and Surface Defects
Authors: Sergey A. Lurie, Yury O. Solyaev, Dmitry B. Volkov-Bogorodsky, Alexander V. Volkov
Abstract:
Media with voids are considered, and a method for the analytical estimation of the effective mechanical properties in the theory of elastic materials with voids is proposed. The variational model of the porous media is discussed, which is based on the model of media with fields of conserved dislocations. It is shown that this model is fully consistent with the known model of linear elastic materials with voids. In the present work, a generalized model of the porous media is proposed in which specific surface properties are associated with the field of defect-pores in the volume of the deformed body. Unlike the typical surface elasticity model, the strain energy density of the considered model includes a special part of the surface energy with a quadratic form of the free distortion tensor. As a result, the non-classical boundary conditions take the modified form of balance equations of volume and surface stresses. An analytical approach is proposed that yields simple engineering estimates for the effective characteristics of media with free dilatation. In particular, the effective flexural modulus and Poisson's ratio are determined for the problem of pure bending of a beam. Here, the known voids-elasticity solution was extended to the generalized model with surface effects. The received results allow us to compare the deformed state of the porous beam with an equivalent classical beam and to introduce an effective bending rigidity. The obtained analytical expressions for the effective properties depend on the thickness of the beam as a parameter. It is shown that the flexural modulus of the porous beam decreases as its thickness increases, and the effective Poisson's ratio of porous beams can take negative values for certain values of the model parameters. On the other hand, the effective shear modulus is constant under variation of all values of the non-classical model parameters.
Solutions received for pure bending of a beam and for hydrostatic loading of the porous media are compared. It is shown that the analytical estimate for the bulk modulus of the porous material under hydrostatic compression gives the asymptotic value of the effective bulk modulus of the porous beam as the beam thickness increases. Additionally, it is shown that scale effects appear due to the surface properties of the porous media. The obtained results allow us to propose a procedure for experimental identification of the non-classical parameters in the theory of linear elastic materials with voids, based on bending tests of samples with different thicknesses. Finally, the problem of the implementation of the Saint-Venant hypothesis for the transverse stresses in the porous beam is discussed. These stresses are different from zero in the solution of the voids-elasticity theory but satisfy the integral equilibrium equations. In this work, the exact value of the introduced surface parameter was found, which provides the vanishing of the transverse stresses on the free surfaces of a beam.
Keywords: effective properties, scale effects, surface defects, voids elasticity
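The identification step described above, extracting an apparent flexural modulus from bending tests on samples of different thickness, can be sketched with the classical rigidity relation D = E·b·h³/12. The "measured" rigidities below are synthetic, generated with a made-up surface-effect correction of the form (1 + l/h) purely for illustration; the paper's actual thickness dependence follows its non-classical model, and all parameter values here are assumptions.

```python
import numpy as np

b = 0.01          # beam width, m (assumed)
E_bulk = 2.0e9    # bulk flexural modulus of the porous material, Pa (assumed)
l_surf = 0.5e-3   # fictitious surface length-scale parameter, m (assumed)

# Sample thicknesses and synthetic "measured" bending rigidities with an
# illustrative (1 + l/h) surface correction.
h = np.array([1e-3, 2e-3, 4e-3, 8e-3])
D_measured = E_bulk * (1 + l_surf / h) * b * h**3 / 12

# Back out the apparent flexural modulus via the classical relation.
E_eff = 12 * D_measured / (b * h**3)
for hi, Ei in zip(h, E_eff):
    print(f"h = {hi*1e3:.0f} mm -> E_eff = {Ei/1e9:.3f} GPa")
# E_eff decreases toward E_bulk as thickness grows: a scale effect, matching
# the trend reported in the abstract.
```

Fitting E_eff(h) across thicknesses is what would let the non-classical (surface) parameter be identified from experiments, as the abstract proposes.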
29. Alternate Optical Coherence Tomography Technologies in Use for Corneal Diseases Diagnosis in Dogs and Cats
Authors: U. E. Mochalova, A. V. Demeneva, A. G. Shilkin, J. Yu. Artiushina
Abstract:
Objective. In medical ophthalmology, OCT has been actively used in the last decade. It is a modern, non-invasive method of high-precision hardware examination that gives a detailed cross-sectional image of eye tissue structure at high resolution, providing in vivo morphological information at the microscopic level about corneal tissue, the structures of the anterior segment, the retina, and the optic nerve. The purpose of this study was to explore the possibility of using OCT technology in complex ophthalmological examination of dogs and cats and to characterize the pathological structural changes revealed in corneal tissue in cats and dogs with some of the most common corneal diseases. Procedures. Optical coherence tomography of the cornea was performed in 112 animals: 68 dogs and 44 cats. In total, 224 eyes were examined. Pathologies of the organ of vision included dystrophy and degeneration of the cornea, endothelial corneal dystrophy, dry eye syndrome, chronic superficial vascular keratitis, pigmented keratitis, corneal erosion, ulcerative stromal keratitis, corneal sequestration, chronic glaucoma, and the postoperative period after keratoplasty. When performing OCT, we used certified medical devices: "Huvitz HOCT-1/1F", "Optovue iVue 80", and "SOCT Copernicus Revo (60)". Results. The article presents the results of a clinical study on the use of optical coherence tomography (OCT) of the cornea in cats and dogs, performed by the authors as part of the complex diagnosis of keratopathies of various origins: endothelial corneal dystrophy, pigmented keratitis, chronic keratoconjunctivitis, chronic herpetic keratitis, ulcerative keratitis, traumatic corneal damage, corneal sequestration in cats, and chronic keratitis complicating the course of glaucoma. The characteristics of OCT scans of the corneas of cats and dogs without corneal pathologies are given.
OCT scans of various corneal pathologies in dogs and cats, with a description of the revealed pathological changes, are presented. Of great clinical interest are the data obtained during OCT of the corneas of animals undergoing keratoplasty operations using various forms of grafts. Conclusions. OCT makes it possible to assess the thickness and pathological structural changes of the corneal surface epithelium, corneal stroma, and Descemet's membrane. We can measure them, determine their exact localization, and record pathological changes. Clinical observation of the dynamics of the pathological process in the cornea using OCT makes it possible to evaluate the effectiveness of drug treatment. In case of negative dynamics of corneal disease, it is necessary to determine the indications for surgical treatment (to assess the thickness of the cornea, the localization of its thinning zones, and the depth and area of pathological changes). Based on corneal OCT, it is possible to choose the optimal surgical treatment for the patient and the technique and depth of optically reconstructive surgery (penetrating or anterior lamellar keratoplasty), and to determine the depth and diameter of the planned microsurgical trepanation of corneal tissue, which will ensure good adaptation of the edges of the donor material. Keywords: optical coherence tomography, corneal sequestration, optical coherence tomography of the cornea, corneal transplantation, cat, dog
Procedia PDF Downloads 68
28 FracXpert: Ensemble Machine Learning Approach for Localization and Classification of Bone Fractures in Cricket Athletes
Authors: Madushani Rodrigo, Banuka Athuraliya
Abstract:
In today's world of medical diagnosis and prediction, machine learning stands out as a strong tool, transforming older approaches to healthcare. This study examines the use of machine learning in the specialized domain of sports medicine, with a focus on the timely and accurate detection of bone fractures in cricket athletes. Failure to identify bone fractures in real time can result in malunion or non-union. To ensure proper treatment and enhance the bone healing process, fracture locations and types must be identified accurately. Interpretation of X-ray images relies on the expertise and experience of medical professionals, and radiographic images are sometimes of low quality, leading to potential misidentification. Therefore, a proper approach is needed to localize and classify fractures accurately and in real time. The research revealed that the optimal approach must address the stated problem by employing appropriate radiographic image processing techniques and object detection algorithms that localize and classify all types of fractures with high precision and in a timely manner. To overcome the challenges of misidentifying fractures, a distinct model for fracture localization and classification has been implemented. The research also incorporates radiographic image enhancement and preprocessing techniques to overcome the limitations posed by low-quality images. A classification ensemble model has been implemented using ResNet18 and VGG16. In parallel, a fracture segmentation model has been implemented using an enhanced U-Net architecture.
Combining the results of these two models, the FracXpert system can accurately localize exact fracture locations and identify the fracture type among 12 different fracture patterns: avulsion, comminuted, compressed, dislocation, greenstick, hairline, impacted, intraarticular, longitudinal, oblique, pathological, and spiral. The system also generates a confidence score indicating the degree of confidence in the predicted result. The fracture segmentation model, based on the enhanced U-Net architecture, achieved a high accuracy of 99.94%, demonstrating its precision in identifying fracture locations. The classification ensemble model, built on the ResNet18 and VGG16 architectures, achieved an accuracy of 81.0%, showcasing its ability to categorize the various fracture patterns, which is instrumental in the fracture treatment process. In conclusion, FracXpert is a promising ML application in sports medicine, demonstrating the potential to revolutionize fracture detection processes. By leveraging the power of ML algorithms, this study contributes to the advancement of diagnostic capabilities in cricket athlete healthcare, ensuring timely and accurate identification of bone fractures for the best treatment outcomes. Keywords: multiclass classification, object detection, ResNet18, U-Net, VGG16
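The ensemble step described above, averaging class probabilities from the two backbones and reporting the maximum as a confidence score, can be sketched as follows. This is an illustrative soft-voting sketch, not the authors' code; the class count and probability values are made up:

```python
import numpy as np

# Soft-voting ensemble: average the (n_samples, n_classes) probability
# outputs of two classifiers and report the winning class and its
# confidence score, as FracXpert is described as doing.
def soft_vote(probs_a: np.ndarray, probs_b: np.ndarray):
    avg = (probs_a + probs_b) / 2.0
    labels = avg.argmax(axis=1)      # predicted fracture class per sample
    confidence = avg.max(axis=1)     # confidence score per sample
    return labels, confidence

# Toy probabilities for 2 samples over 3 hypothetical fracture classes
p_resnet = np.array([[0.7, 0.2, 0.1], [0.1, 0.3, 0.6]])
p_vgg = np.array([[0.6, 0.3, 0.1], [0.2, 0.2, 0.6]])
labels, conf = soft_vote(p_resnet, p_vgg)
print(labels.tolist(), conf.tolist())
```

In practice the two probability arrays would come from softmax outputs of the trained ResNet18- and VGG16-style networks; averaging tends to smooth out the individual models' errors.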
Procedia PDF Downloads 120
27 Evaluation of Sustained Improvement in Trauma Education Approaches for the College of Emergency Nursing Australasia Trauma Nursing Program
Authors: Pauline Calleja, Brooke Alexander
Abstract:
In 2010, the College of Emergency Nursing Australasia (CENA) undertook sole administration of the Trauma Nursing Program (TNP) across Australia. The original TNP was developed from recommendations by the Review of Trauma and Emergency Services-Victoria. While participant and faculty feedback about the program was positive, issues were identified that are common to industry training programs in Australia. These included didactic approaches, with many lectures and little interaction or activity for participants. The teaching and learning principles underpinning the course did not necessarily encourage deep learning, and participants described having to learn by rote, gaining only a surface understanding of principles that were not always applied to their working context. In Australia, a trauma or emergency nurse may work in variable contexts that impact practice, especially where resources influence the scope and capacity of hospitals to provide trauma care. In 2011, a program review was undertaken, resulting in major changes to the curriculum and to teaching, learning, and assessment approaches. The aim was to improve learning, with greater emphasis on pre-program preparation for participants, the learning environment, and the clinically applicable, contextualized outcomes participants experienced. Previously, participants who wished to undertake assessment were given a take-home examination, which had poor uptake and return and provided no rigor, since it was not invigilated. A new assessment structure was enacted, with an invigilated examination during course hours. These changes were implemented in early 2012 with great improvement in both faculty and participant satisfaction. This presentation reports on a comparison of participant evaluations collected from courses in 2012, post implementation, and in 2015, to evaluate whether the positive changes were sustained.
Methods: Descriptive statistics were applied in analyzing evaluations. Since all questions had more than 20% of cells with expected counts below 5, Fisher's exact test was used to identify significance (p < 0.05) between groups. Results: A total of fourteen group evaluations were included in this analysis: seven CENA TNP groups from 2012 and seven from 2015 (randomly chosen). A total of 173 participant evaluations were collated (n = 81 from 2012 and n = 92 from 2015). All course evaluations were anonymous, and nine of the original 14 questions were applicable for this evaluation. All questions were rated by participants on a five-point Likert scale. While all items showed improvement from 2012 to 2015, significant improvement was noted in two items: the content being delivered in a way that met participant learning needs, and satisfaction with the length and pace of the program. Evaluation of written comments supports these results. Discussion: The aim of redeveloping the CENA TNP was to improve learning and satisfaction for participants. These results demonstrate that the initial improvements in 2012 were maintained and, in two essential areas, significantly improved upon. Changes that increased participant engagement, support, and contextualization of course materials were essential for CENA TNP evolution. Keywords: emergency nursing education, industry training programs, teaching and learning, trauma education
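As a concrete illustration of the test used here, Fisher's exact test on a 2×2 contingency table can be run with SciPy. The counts below are invented for illustration and are not the evaluation data:

```python
from scipy.stats import fisher_exact

# Hypothetical 2x2 table of agree/disagree counts for two cohorts
# (illustrative numbers only).
table = [[30, 51],   # 2012 cohort: agree, disagree
         [60, 32]]   # 2015 cohort: agree, disagree
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"OR = {odds_ratio:.3f}, p = {p_value:.4f}")
```

Unlike the chi-square test, Fisher's exact test computes the exact p-value from the hypergeometric distribution, which is why it is preferred when expected cell counts fall below 5.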
Procedia PDF Downloads 272
26 Recurrent Neural Networks for Classifying Outliers in Electronic Health Record Clinical Text
Authors: Duncan Wallace, M-Tahar Kechadi
Abstract:
In recent years, Machine Learning (ML) approaches have been successfully applied to the analysis of patient symptom data in the context of disease diagnosis, at least where such data is well codified. However, much of the data present in Electronic Health Records (EHR) is unlikely to prove suitable for classic ML approaches. Furthermore, as such data is widely dispersed across both hospitals and individuals, a decentralized, computationally scalable methodology is a priority. The focus of this paper is to develop a method to predict outliers in an out-of-hours healthcare provision center (OOHC). In particular, our research is based upon the early identification of patients who have underlying conditions that will cause them to repeatedly require medical attention. OOHCs act as an ad-hoc delivery of triage and treatment, where interactions occur without recourse to a full medical history of the patient in question. Medical histories of patients contacting an OOHC may reside in several distinct EHR systems in multiple hospitals or surgeries that are unavailable to the OOHC in question. As such, although a local solution is optimal for this problem, the data under investigation is incomplete, heterogeneous, and comprised mostly of noisy textual notes compiled during routine OOHC activities. Through the use of deep learning methodologies, the aim of this paper is to provide the means to identify patient cases, upon initial contact, that are likely to relate to such outliers. To this end, we compare the performance of Long Short-Term Memory, Gated Recurrent Units, and combinations of both with Convolutional Neural Networks. A further aim of this paper is to elucidate the discovery of such outliers by examining the exact terms that provide a strong indication of positive and negative case entries. While free text is the principal data extracted from EHRs for classification, EHRs also contain normalized features.
Although the specific demographic features treated within our corpus are relatively limited in scope, we examine whether it is beneficial to include such features among the inputs to our neural network, or whether these features are more successfully exploited in conjunction with a different form of classifier. In this section, we compare the performance of randomly generated regression trees and support vector machines and determine the extent to which our classification program can be improved by using either of these machine learning approaches in conjunction with the output of our Recurrent Neural Network application. The output of our neural network is also used to help determine the most significant lexemes in the corpus for identifying high-risk patients. By combining the confidence of our classification program with respect to lexemes within true positive and true negative cases with the inverse document frequency of the lexemes related to those cases, we can determine which features act as the primary indicators of frequent-attender and non-frequent-attender cases, providing a human-interpretable appreciation of how our program classifies cases. Keywords: artificial neural networks, data-mining, machine learning, medical informatics
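The lexeme-ranking step described, weighting a per-term classifier confidence by inverse document frequency, might look like the minimal sketch below. The documents, lexemes, and confidence values are all hypothetical, and the confidence-times-IDF score is one plausible reading of the combination the abstract describes:

```python
import math
from collections import Counter

# Toy corpus of case notes, each reduced to its set of lexemes
docs = [{"chest", "pain", "repeat"}, {"cough"}, {"repeat", "visit"}, {"pain"}]
df = Counter(term for doc in docs for term in doc)   # document frequency
n_docs = len(docs)

def idf(term):
    # standard inverse document frequency
    return math.log(n_docs / df[term])

# Hypothetical mean classifier confidence attributed to each lexeme
confidence = {"repeat": 0.9, "pain": 0.6, "cough": 0.2, "visit": 0.5}

# Confidence-weighted IDF: high for terms that are both relatively rare
# and strongly associated with the classifier's decisions
score = {t: confidence[t] * idf(t) for t in confidence}
ranked = sorted(score, key=score.get, reverse=True)
print(ranked)
```

Terms common to nearly every note receive a low IDF and are pushed down the ranking even when the classifier leans on them, which is the point of the weighting.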
Procedia PDF Downloads 131
25 Novel Numerical Technique for Dusty Plasma Dynamics (Yukawa Liquids): Microfluidic and Role of Heat Transport
Authors: Aamir Shahzad, Mao-Gang He
Abstract:
Dusty plasmas have attracted widespread research interest. Over the last two decades, substantial efforts have been made by the scientific and technological community to investigate the transport properties, and their nonlinear behavior, of three-dimensional and two-dimensional nonideal complex (dusty plasma) liquids (NICDPLs). Different calculations have been made to sustain and utilize strongly coupled NICDPLs because of their remarkable scientific and industrial applications. Understanding the thermophysical properties of complex liquids under various conditions is of practical interest in science and technology. The determination of thermal conductivity is also a demanding question for thermophysical researchers; for several reasons, very few results are available for this significant property. The lack of thermal conductivity data for dense and complex liquids at the parameters relevant to industrial developments is a major barrier to quantitative knowledge of the heat flux flow from one medium to another medium or surface. The exact numerical investigation of transport properties of complex liquids is a fundamental research task in thermophysics, as various transport data are closely related to the setup and confirmation of equations of state. Reliable knowledge of transport data is also important for the optimized design of processes and apparatus in various engineering and science fields (thermoelectric devices); in particular, precise data for the parameters of heat, mass, and momentum transport are required. One of the promising computational techniques, the homogeneous nonequilibrium molecular dynamics (HNEMD) simulation, is reviewed here, with special emphasis on its application to transport problems of complex liquids.
This work is, to the authors' knowledge, the first to modify the heat conduction problem so that it leads to a polynomial velocity and temperature profile algorithm for investigating transport properties, and their nonlinear behaviors, in NICDPLs. The aim of the proposed work is to implement a NEMD (Poiseuille flow) algorithm and to deepen the understanding of thermal conductivity behavior in Yukawa liquids. The Yukawa system is equilibrated through a Gaussian thermostat in order to maintain a constant system temperature (canonical ensemble, NVT). The output steps are taken between 3.0×10⁵/ωp and 1.5×10⁵/ωp simulation time steps for the computation of λ data. The HNEMD algorithm shows that the thermal conductivity depends on the plasma parameters and that the minimum value λmin shifts toward higher Γ with an increase in κ, as expected. The new investigations give more reliable simulated data for the plasma conductivity than earlier simulations, generally differing from earlier plasma λ0 values by 2%-20%, depending on Γ and κ. The results obtained at the normalized force field are in satisfactory agreement with various earlier simulation results. The algorithm shows that the new technique provides more accurate results with fast convergence and small size effects over a wide range of plasma states. Keywords: molecular dynamics simulation, thermal conductivity, nonideal complex plasma, Poiseuille flow
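For reference, the interparticle interaction in Yukawa liquids is the screened Coulomb (Yukawa) pair potential, which in reduced units can be written u(r) = exp(-κr)/r. A minimal sketch follows; the choice of reduced units (distances in Wigner-Seitz radii, the prefactor absorbed into the energy scale) is an assumption about normalization, not taken from the abstract:

```python
import numpy as np

# Yukawa pair potential in reduced units: distances in units of the
# Wigner-Seitz radius a, energies in units of Q^2 / (4*pi*eps0*a),
# with kappa = a / lambda_D the screening parameter.
def yukawa(r, kappa):
    r = np.asarray(r, dtype=float)
    return np.exp(-kappa * r) / r

r = np.linspace(0.5, 3.0, 6)
u = yukawa(r, kappa=2.0)   # larger kappa -> stronger screening, faster decay
```

Together with the coupling parameter Γ, this κ is exactly the pair (Γ, κ) that parameterizes the thermal conductivity results discussed above.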
Procedia PDF Downloads 274
24 Numerical Investigation on Design Method of Timber Structures Exposed to Parametric Fire
Authors: Robert Pečenko, Karin Tomažič, Igor Planinc, Sabina Huč, Tomaž Hozjan
Abstract:
Timber is a favourable structural material due to its high strength-to-weight ratio, recycling possibilities, and green credentials. Despite being flammable, it has relatively high fire resistance. Everyday engineering practice around the world is based on an outdated design of timber structures considering standard fire exposure, while modern principles of performance-based design enable the use of advanced, non-standard fire curves. In Europe, the standard for fire design of timber structures, EN 1995-1-2 (Eurocode 5), gives two methods: the reduced material properties method and the reduced cross-section method. In the latter, the fire resistance of structural elements depends on the effective cross-section, i.e., the residual cross-section of uncharred timber reduced additionally by a so-called zero-strength layer. For standard fire exposure, Eurocode 5 gives a fixed value for the zero-strength layer, 7 mm, while for non-standard parametric fires no comments or recommendations on the zero-strength layer are given. Designers therefore often apply the adopted 7 mm rule to parametric fire exposure as well. Since the latest scientific evidence suggests that the proposed value of the zero-strength layer can be on the unsafe side even for standard fire exposure, its use in the case of a parametric fire is also highly questionable, and more numerical and experimental research in this field is needed. Therefore, the purpose of the presented study is to use advanced calculation methods to investigate the thickness of the zero-strength layer and the parametric charring rates used in the effective cross-section method in the case of parametric fire. Parametric studies are carried out on a simple solid timber beam exposed to a large number of parametric fire curves. The zero-strength layer and charring rates are determined from numerical simulations performed with a recently developed advanced two-step computational model.
The first step comprises a hygro-thermal model, which predicts the temperature, moisture, and char depth development and takes into account different initial moisture states of the timber. In the second step, the response of the timber beam simultaneously exposed to mechanical and fire loads is determined. The mechanical model is based on Reissner's kinematically exact beam model and accounts for the membrane, shear, and flexural deformations of the beam. Furthermore, materially non-linear and temperature-dependent behaviour is considered. In the two-step model, the char front is, according to Eurocode 5, assumed to have a fixed temperature of around 300°C. Based on the performed study and observations, improved charring rates and a new thickness of the zero-strength layer for parametric fires are determined. The reduced cross-section method is thus substantially improved to offer practical recommendations for designing the fire resistance of timber structures. Furthermore, correlations between the zero-strength layer thickness and key input parameters of the parametric fire curve (for instance, opening factor, fire load, etc.) are given, representing a guideline for more detailed numerical and experimental research in the future. Keywords: advanced numerical modelling, parametric fire exposure, timber structures, zero strength layer
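The reduced cross-section method under discussion can be sketched for a rectangular beam exposed on three sides. EN 1995-1-2 takes d_ef = d_char,n + k0·d0 with d0 = 7 mm; the constant notional charring rate, k0 = 1 (valid after about 20 min of standard exposure), and the input dimensions below are simplifying assumptions for illustration:

```python
# Sketch of the Eurocode 5 reduced cross-section method for a
# rectangular timber beam exposed to fire on three sides.
# Assumes k0 = 1 (t >= 20 min) and a constant notional charring rate.
def effective_section(b, h, t_min, beta_n=0.8, d0=7.0):
    """Return (b_ef, h_ef) in mm after t_min minutes of fire exposure."""
    d_char_n = beta_n * t_min    # notional char depth, mm
    d_ef = d_char_n + d0         # char depth plus zero-strength layer
    b_ef = b - 2.0 * d_ef        # both vertical faces are charred
    h_ef = h - d_ef              # bottom face is charred
    return b_ef, h_ef

b_ef, h_ef = effective_section(100.0, 200.0, t_min=30)
```

The study's point is precisely that d0 = 7 mm and the standard charring rates in this formula are not validated for parametric fire curves, which is why replacement values are sought numerically.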
Procedia PDF Downloads 168
23 An Epidemiological Study on Cutaneous Melanoma, Basocellular and Epidermoid Carcinomas Diagnosed in a Sunny City in Southeast Brazil in a Five-Year Period
Authors: Carolina L. Cerdeira, Julia V. F. Cortes, Maria E. V. Amarante, Gersika B. Santos
Abstract:
Skin cancer is the most common cancer in several parts of the world, and in a tropical country like Brazil the situation is no different. The Brazilian population is exposed to high levels of solar radiation, increasing the risk of developing cutaneous carcinoma. To encourage prevention measures and the early diagnosis of these tumors, a study was carried out analyzing data on cutaneous melanomas and basal cell and epidermoid carcinomas, using as the primary data source the medical records of 161 patients registered in one pathology service, which performs skin biopsies in a city of Minas Gerais, Brazil. All patients diagnosed with skin cancer at this service from January 2015 to December 2019 were included. The incidence of skin carcinoma cases was correlated with histological type, sex, age group, and topographic location. Correlation between variables was verified by Fisher's exact test at a nominal significance level of 5%, with statistical analysis performed in R software. A significant association was observed between age group and type of cancer (p = 0.0085), age group and sex (p = 0.0298), and type of cancer and body region affected (p < 0.01). The 161 cases analyzed comprised 93 basal cell carcinomas, 66 epidermoid carcinomas, and only two cutaneous melanomas. In the group aged 19 to 30 years, the epidermoid form was most prevalent; from 31 to 45 and from 46 to 59 years, the basal cell form prevailed; in patients aged 60 years or over, both types had higher frequencies. Associating age group and sex, in the groups aged 18 to 30 and 46 to 59 years, women were most affected; in the 31- to 45-year-old group, men predominated; and there was gender balance in the group aged 60 years or over. As for topography, there was a high prevalence in the head and neck, followed by the upper limbs. Relating histological type and topography, both basal cell and epidermoid carcinomas were prevalent in the head and neck.
In the chest, the basal cell form was most prevalent; in the upper limbs, the epidermoid form prevailed. Cutaneous melanoma affected only the chest and upper limbs. About 82% of patients aged 60 years or over had head and neck cancer; from 46 to 59 years and at 60 years or over, the head and neck region and the upper limbs were predominantly affected; the distribution was balanced in the 31- to 45-year-old group. In conclusion, basal cell carcinoma was predominant, whereas cutaneous melanoma was the rarest among the types analyzed. Patients aged 60 years or over were the most affected, showing gender balance. In young adults, the epidermoid form prevailed; in middle-aged patients, basal cell carcinoma was predominant; in the elderly, both forms presented with higher frequencies. There was a higher incidence of head and neck cancers, followed by malignancies affecting the upper limbs. The epidermoid type manifested significantly in the upper limbs. Body regions such as the thorax and lower limbs were less affected, which is explained by the lower exposure of these areas to incident solar radiation. Keywords: basal cell carcinoma, cutaneous melanoma, skin cancer, squamous cell carcinoma, topographic location
Procedia PDF Downloads 129
22 Comparative Study of Outcome of Patients with Wilms Tumor Treated with Upfront Chemotherapy and Upfront Surgery in Alexandria University Hospitals
Authors: Golson Mohamed, Yasmine Gamasy, Khaled EL-Khatib, Anas Al-Natour, Shady Fadel, Haytham Rashwan, Haytham Badawy, Nadia Farghaly
Abstract:
Introduction: Wilms tumor is the most common malignant renal tumor in children. Much progress has been made in the management of patients with this malignancy over the last three decades. Today, treatments are based on several trials and studies conducted by the International Society of Pediatric Oncology (SIOP) in Europe and the National Wilms Tumor Study Group (NWTS) in the USA. It is necessary to understand why we follow either of the protocols: NWTS, which follows the upfront surgery principle, or SIOP, which follows the upfront chemotherapy principle, in all stages of the disease. Objective: The aim is to assess outcomes in patients treated with preoperative chemotherapy and in patients treated with upfront surgery, and to compare their effect on overall survival. Study design: To decide which protocol to follow, a retrospective survey of records from 2010 to 2015 was carried out on patients aged 1 day to 18 years suffering from Wilms tumor who were admitted to the pediatric oncology, pediatric urology, and pediatric surgery departments of Alexandria University Hospital. The transfer sheet was designed and edited following a PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) flow. Data were fed to the computer and analyzed using the IBM SPSS software package, version 20.0. Qualitative data were described using number and percent. Quantitative data were described using range (minimum and maximum), mean, standard deviation, and median. Comparison between groups regarding categorical variables was tested using the chi-square test. When more than 20% of the cells had an expected count of less than 5, correction for chi-square was conducted using Fisher's exact test or Monte Carlo correction. The distributions of quantitative variables were tested for normality using the Kolmogorov-Smirnov, Shapiro-Wilk, and D'Agostino tests; if these revealed a normal distribution, parametric tests were applied.
If the data were abnormally distributed, non-parametric tests were used. For normally distributed data, comparison between two independent populations was done using the independent t-test; for abnormally distributed data, the Mann-Whitney test was used. Significance of the obtained results was judged at the 5% level. Results: A statistically significant difference in survival was observed between the two studied groups, favoring upfront chemotherapy (86.4%) over upfront surgery (59.3%), with p = 0.009. As regards complications, 20 of 27 cases (74.1%) were complicated in the upfront surgery group, while 30 of 44 cases (68.2%) had complications in the upfront chemotherapy group. The incidence of intraoperative complications (rupture) was also lower in the upfront chemotherapy group than in the upfront surgery group. Conclusion: Upfront chemotherapy has superiority over upfront surgery: patients who started with upfront chemotherapy showed a higher survival rate, a lower complication rate, less need for radiotherapy, and a lower recurrence rate. Keywords: Wilms tumor, renal tumor, chemotherapy, surgery
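The test-selection logic described, parametric tests when normality holds and non-parametric tests otherwise, can be sketched with SciPy. The two groups below are synthetic stand-ins, not patient data:

```python
import numpy as np
from scipy.stats import shapiro, mannwhitneyu, ttest_ind

# Synthetic two-group data standing in for two treatment arms
rng = np.random.default_rng(42)
group_a = rng.exponential(2.0, size=40)   # skewed, non-normal
group_b = rng.exponential(3.5, size=40)

# Normality check on each group, then choose the comparison test
_, p_norm_a = shapiro(group_a)
_, p_norm_b = shapiro(group_b)
if p_norm_a < 0.05 or p_norm_b < 0.05:
    stat, p = mannwhitneyu(group_a, group_b, alternative="two-sided")
    test_used = "Mann-Whitney"
else:
    stat, p = ttest_ind(group_a, group_b)
    test_used = "independent t-test"
print(test_used, round(p, 4))
```

This mirrors the abstract's pipeline in miniature; the real analysis additionally used Kolmogorov-Smirnov and D'Agostino tests for the normality step.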
Procedia PDF Downloads 317
21 Rheolaser: Light Scattering Characterization of Viscoelastic Properties of Hair Cosmetics That Are Related to Performance and Stability of the Respective Colloidal Soft Materials
Authors: Heitor Oliveira, Gabriele De-Waal, Juergen Schmenger, Lynsey Godfrey, Tibor Kovacs
Abstract:
Rheolaser MASTER™ makes use of the multiple scattering of light, caused by scattering objects in a continuous medium (such as droplets and particles in colloids), to characterize the viscoelasticity of soft materials. It offers an alternative to conventional rheometers for characterizing the viscoelasticity of products such as hair cosmetics. Up to six measurements at controlled temperature can be carried out simultaneously (10-15 min), and the method requires only minor sample preparation. Conversely to conventional rheometer-based methods, no mechanical stress is applied to the material during the measurements; therefore, the properties of the exact same sample can be monitored over time, as in aging and stability studies. We determined the elastic index (EI) of water/emulsion mixtures (1 ≤ fat alcohols (FA) ≤ 5 wt%) and emulsion/gel-network mixtures (8 ≤ FA ≤ 17 wt%) and compared it with the elastic/storage modulus (G') of the respective samples, measured on a TA conventional rheometer with flat-plate geometry. As expected, it was found that log(EI) vs. log(G') exhibits linear behavior. Moreover, log(EI) increased linearly with solids level over the entire range of compositions (1 ≤ FA ≤ 17 wt%), while rheometer measurements were limited to samples down to a 4 wt% solids level; a concentric cylinder geometry would be required for more dilute samples (FA < 4 wt%), and rheometer results from different sample holder geometries are not comparable. Plots of the Rheolaser output parameters solid-liquid balance (SLB) vs. EI were suitable for monitoring product aging processes. These data could quantitatively describe observations such as the formation of lumps over aging time. Moreover, this method allowed us to identify that the different specifications of a key raw material (RM < 0.4 wt%) in the respective gel-network (GN) product have minor impact on product viscoelastic properties, not perceivable by consumers after a short aging time.
Broadening of an RM spec range typically has a positive impact on cost savings. Furthermore, the photon path length (λ*), which according to Mie theory is proportional to droplet size and inversely proportional to the volume fraction of scattering objects, and the EI were suitable for characterizing product destabilization processes (e.g., coalescence and creaming) and predicting product stability about eight times faster than our standard methods. Using these parameters, we could successfully identify formulation and process parameters that resulted in unstable products. In conclusion, Rheolaser allows quick and reliable characterization of the viscoelastic properties of hair cosmetics that are related to their performance and stability. It operates over a broad range of product compositions and has applications spanning from the formulation of our hair cosmetics to fast release criteria in our production sites. Last but not least, this powerful tool has a positive impact on R&D development time, enabling faster delivery of new products to the market, and consequently on cost savings. Keywords: colloids, hair cosmetics, light scattering, performance and stability, soft materials, viscoelastic properties
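The reported linearity of log(EI) against log(G') amounts to a power-law relation EI ∝ (G')^m, which a simple least-squares fit on log-transformed data can check. The paired values below are illustrative, not the study's measurements:

```python
import numpy as np

# Hypothetical paired measurements: storage modulus G' (Pa) from the
# rheometer and elastic index EI from Rheolaser, on a log-log scale.
log_G = np.log10([10.0, 100.0, 1000.0, 10000.0])
log_EI = np.array([-1.9, -0.95, 0.05, 1.05])   # illustrative EI values

# Degree-1 least-squares fit; a good linear fit on log-log axes
# confirms the power-law relation between the two instruments' readings.
slope, intercept = np.polyfit(log_G, log_EI, 1)
```

A slope near 1 would mean EI scales essentially proportionally with G', letting the faster light-scattering measurement stand in for the rheometer within the calibrated range.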
Procedia PDF Downloads 172
20 Transparency of Algorithmic Decision-Making: Limits Posed by Intellectual Property Rights
Authors: Olga Kokoulina
Abstract:
Today, algorithms are assuming a leading role in various areas of decision-making. Prompted by a promise of increased economic efficiency and solutions to pressing societal challenges, algorithmic decision-making is often celebrated as an impartial and constructive substitute for human adjudication. But in the face of this implied objectivity and efficiency, the application of algorithms is also marred by mounting concerns about embedded biases, discrimination, and exclusion. In Europe, vigorous debates on the risks and adverse implications of algorithmic decision-making largely revolve around the potential of data protection laws to tackle some of the related issues. For example, one of the often-cited avenues for mitigating the impact of potentially unfair decision-making practices is the so-called 'right to explanation'. In essence, the overall right is derived from the provisions of the General Data Protection Regulation ('GDPR') ensuring the right of data subjects to access, and mandating the obligation of data controllers to provide, the relevant information about the existence of automated decision-making and meaningful information about the logic involved. Taking the corresponding rights and obligations in the context of the specific GDPR provision on automated decision-making, the debates mainly focus on the efficacy and the exact scope of the 'right to explanation'. The underlying logic of the argued remedy lies in a transparency imperative: allowing data subjects to acquire as much knowledge as possible about the decision-making process means empowering individuals to take control of their data and take action. In other words, forewarned is forearmed. The related discussions and debates are ongoing, comprehensive, and often heated. However, they are also frequently misguided and isolated: embracing data protection law as the ultimate and sole lens is often not sufficient.
Mandating the disclosure of the technical specifications of employed algorithms in the name of transparency for, and empowerment of, data subjects potentially encroaches on the interests and rights of IPR holders, i.e., the business entities behind the algorithms. The study aims at pushing the boundaries of the transparency debate beyond the data protection regime. By systematically analysing legal requirements and current judicial practice, it assesses the limits posed on the transparency requirement and the right to access by intellectual property law, namely by copyright and trade secrets. It is asserted that trade secrets, in particular, present an often insurmountable obstacle to realising the potential of the transparency requirement. In reaching that conclusion, the study explores the limits of the protection afforded by the European Trade Secrets Directive and contrasts them with the scope of the respective rights and obligations related to data access and portability enshrined in the GDPR. As shown, the far-reaching scope of protection under trade secrecy is evidenced both through the assessment of its subject matter and through the exceptions to such protection. As a way forward, the study scrutinises several possible legislative solutions, such as a flexible interpretation of the public interest exception in trade secrets law and the introduction of a strict liability regime in case of non-transparent decision-making. Keywords: algorithms, public interest, trade secrets, transparency
Procedia PDF Downloads 124
19 A Case Report: The Role of Gut Directed Hypnotherapy in Resolution of Irritable Bowel Syndrome in a Medication Refractory Pediatric Male Patient
Authors: Alok Bapatla, Pamela Lutting, Mariastella Serrano
Abstract:
Background: Irritable Bowel Syndrome (IBS) is a functional gastrointestinal disorder characterized by abdominal pain associated with altered bowel habits in the absence of an underlying organic cause. Although the exact etiology of IBS is not fully understood, one of the leading theories postulates a pathology within the Brain-Gut Axis that leads to an overall increase in gastrointestinal sensitivity and pejorative changes in gastrointestinal motility. Research and clinical practice have shown that Gut Directed Hypnotherapy (GDH) has a beneficial clinical role in improving Mind-Gut control and thereby comorbid conditions such as anxiety, abdominal pain, constipation, and diarrhea. Aims: This study presents a 17-year-old male with underlying anxiety and a one-year history of IBS-Constipation Predominant Subtype (IBS-C), who demonstrated impressive improvement of symptoms with GDH treatment after refractory trials of medications including bisacodyl, senna, docusate, magnesium citrate, lubiprostone, and linaclotide. Method: The patient was referred to a licensed clinical psychologist specializing in clinical hypnosis and cognitive-behavioral therapy (CBT), who implemented “The Standardized Hypnosis Protocol for IBS” developed by Dr. Olafur S. Palsson, Psy.D., at the University of North Carolina at Chapel Hill. The hypnotherapy protocol consisted of a total of seven weekly 45-minute sessions supplemented with a 20-minute audio recording to be listened to once daily. Outcome variables included the GAD-7, PHQ-9 and DCI-2, as well as self-ratings (ranging 0-10) for pain (intensity and frequency), emotional distress about IBS symptoms, and overall emotional distress. All variables were measured at intake prior to administration of the hypnosis protocol and at the conclusion of the hypnosis treatment.
A retrospective IBS Questionnaire (IBS Severity Scoring System) was also completed at the conclusion of the GDH treatment for pre- and post-test ratings of clinical symptoms. Results: The patient showed improvement in all outcome variables and self-ratings, including abdominal pain intensity, frequency of abdominal pain episodes, emotional distress relating to gut issues, depression, and anxiety. The IBS Questionnaire showed a significant improvement, from a severity score of 400 (defined as severe) prior to GDH intervention to 55 (defined as complete resolution) at four months after the last session. IBS Questionnaire subset questions that showed a significant score improvement included abdominal pain intensity, days of pain experienced per 10 days, satisfaction with bowel habits, and overall interference with life caused by IBS symptoms. Conclusion: This case supports the existing research literature showing that GDH has a significantly beneficial role in improving symptoms in patients with IBS. Emphasis is placed on the numerical results of the IBS Questionnaire scoring, which reflect a patient who initially suffered from severe IBS with failed response to multiple medications and subsequently showed full and sustained resolution. Keywords: pediatrics, constipation, irritable bowel syndrome, hypnotherapy, gut-directed hypnosis
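The severity bands behind the pre- and post-treatment scores above can be sketched as a small lookup; the cut-offs used here are the commonly cited IBS-SSS thresholds and are an assumption on our part, since the abstract itself only labels 400 as severe and 55 as complete resolution.

```python
# Hedged sketch of IBS-SSS (IBS Severity Scoring System) banding.
# The thresholds below are the commonly cited cut-offs, not values
# stated in the abstract; they are an illustrative assumption.
def ibs_sss_category(score: int) -> str:
    if score < 75:
        return "remission"
    elif score < 175:
        return "mild"
    elif score < 300:
        return "moderate"
    return "severe"

# Pre-GDH vs post-GDH scores reported in the case
print(ibs_sss_category(400), "->", ibs_sss_category(55))
```

This makes the reported change concrete: a drop from 400 to 55 crosses every band, from the severe range into remission.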
Procedia PDF Downloads 198
18 Approximate-Based Estimation of Single Event Upset Effect on Static Random-Access Memory-Based Field-Programmable Gate Arrays
Authors: Mahsa Mousavi, Hamid Reza Pourshaghaghi, Mohammad Tahghighi, Henk Corporaal
Abstract:
Recently, Static Random-Access Memory-based (SRAM-based) Field-Programmable Gate Arrays (FPGAs) have been widely used in aeronautics and space systems, where high dependability is demanded and considered a mandatory requirement. Since the design's circuit is stored in configuration memory, SRAM-based FPGAs are very sensitive to Single Event Upsets (SEUs). In addition, the adverse effects of SEUs on electronics used in space are much greater than on Earth. Thus, developing fault-tolerant techniques plays a crucial role in the use of SRAM-based FPGAs in space. However, fault tolerance techniques introduce additional penalties in system parameters, e.g., area, power, performance and design time. In this paper, an accurate estimation of configuration memory vulnerability to SEUs is proposed for approximate-tolerant applications. This vulnerability estimation is highly relevant for trading off the overhead introduced by fault tolerance techniques against system robustness. We study applications in which the exact final output value is not always a concern, meaning that some of the SEU-induced changes in output values are negligible. We therefore define and propose an Approximate-based Configuration Memory Vulnerability Factor (ACMVF) estimation to avoid overestimating configuration memory vulnerability to SEUs. We assess the vulnerability of configuration memory by injecting SEUs into configuration memory bits and comparing the output values of a given circuit in the presence of SEUs with the expected correct output. Unlike conventional vulnerability factor calculation methods, which count any deviation from the expected value as a failure, our proposed method considers a threshold margin that depends on the use-case application. Given this threshold margin, a failure occurs only when the difference between the erroneous output value and the expected output value exceeds the margin.
The ACMVF is then calculated as the ratio of failures to the total number of SEU injections. In our paper, a test bench for emulating SEUs and calculating ACMVF is implemented on a Zynq-7000 FPGA platform. This system makes use of the Soft Error Mitigation (SEM) IP core to inject SEUs into configuration memory bits of the target design implemented in the Zynq-7000 FPGA. Experimental results for a 32-bit adder show that, when 1% to 10% deviation from the correct output is tolerated, the number of counted failures is reduced by 41% to 59% compared with the number counted by conventional vulnerability factor calculation. This means that the estimation accuracy of configuration memory vulnerability to SEUs is improved by up to 58% in the case where 10% deviation is acceptable in the output results. Note that less than 10% deviation in an addition result is reasonably tolerable for many applications in the approximate computing domain, such as Convolutional Neural Networks (CNNs). Keywords: fault tolerance, FPGA, single event upset, approximate computing
Procedia PDF Downloads 198
17 On the Bias and Predictability of Asylum Cases
Authors: Panagiota Katsikouli, William Hamilton Byrne, Thomas Gammeltoft-Hansen, Tijs Slaats
Abstract:
An individual who demonstrates a well-founded fear of persecution or faces a real risk of being subjected to torture is eligible for asylum. In Danish law, the exact legal thresholds reflect those established by international conventions, notably the 1951 Refugee Convention and the 1950 European Convention on Human Rights. These international treaties, however, remain largely silent when it comes to how states should assess asylum claims. As a result, national authorities are typically left to determine an individual's legal eligibility on a narrow basis consisting of an oral testimony, which may itself be hampered by several factors, including imprecise language interpretation, insecurity, or a lack of trust in the authorities among applicants. The shaky ground on which authorities must base their subjective perceptions of asylum applicants' credibility raises the question of whether, in all cases, adjudicators make the correct decision. Moreover, the subjective element in these assessments raises the question of whether individual asylum cases could be afflicted by implicit biases or stereotyping among adjudicators. In fact, recent studies have uncovered significant correlations between decision outcomes and the experience and gender of the assigned judge, as well as correlations between asylum outcomes and entirely external events such as weather and political elections. In this study, we analyze a publicly available dataset containing approximately 8,000 summaries of asylum cases initially rejected and re-tried by the Refugee Appeals Board (RAB) in Denmark. First, we look for variations in recognition rates with regard to a number of applicants' features: their country of origin/nationality, their identified gender, their identified religion, their ethnicity, whether torture was mentioned in their case and, if so, whether the claim was supported or not, and the year the applicant entered Denmark.
To extract those features from the text summaries, as well as the final decision of the RAB, we applied natural language processing and regular expressions, adjusting for the Danish language. We observed interesting variations in recognition rates related to the applicants' country of origin, ethnicity, year of entry, and the support or not of torture claims, whenever such claims were made in the case. The appearance (or not) of significant variations in recognition rates does not necessarily imply (or rule out) bias in the decision-making process. None of the considered features, with the possible exception of the torture claims, should be decisive factors for an asylum seeker's fate. We therefore investigate whether the decision can be predicted on the basis of these features and, consequently, whether biases are likely to exist in the decision-making process. We employed a number of machine learning classifiers and found that, when using the applicant's country of origin, religion, ethnicity and year of entry with a random forest classifier or a decision tree, the prediction accuracy is as high as 82% and 85%, respectively. These features thus appear to hold potentially predictive properties with regard to the outcome of an asylum case. Our analysis and findings call for further investigation of the predictability of the outcome on a larger dataset of 17,000 cases, which is ongoing. Keywords: asylum adjudications, automated decision-making, machine learning, text mining
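The classification step described above can be sketched as follows. This is a minimal illustration, not the study's pipeline: the feature values, labels, and encoding choices below are entirely made up, and the real work lies in the regex-based feature extraction from Danish case summaries, which is not reproduced here.

```python
# Hedged sketch: predicting an appeal outcome from categorical applicant
# features with a random forest, as in the study's prediction experiment.
# All rows and labels are illustrative dummies, not RAB data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import OrdinalEncoder

# Hypothetical features: country of origin, religion, ethnicity, year of entry.
X_raw = np.array([
    ["CountryA", "ReligionX", "EthnicityP", "2015"],
    ["CountryB", "ReligionY", "EthnicityQ", "2016"],
    ["CountryC", "ReligionX", "EthnicityR", "2015"],
    ["CountryA", "ReligionX", "EthnicityS", "2017"],
] * 25)  # repeated purely to have enough rows for cross-validation
y = np.array([1, 0, 1, 0] * 25)  # 1 = asylum granted on appeal (dummy labels)

X = OrdinalEncoder().fit_transform(X_raw)  # categorical -> numeric codes
clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)  # 5-fold cross-validated accuracy
print(scores.mean())
```

On this toy data the labels are fully determined by the features, so accuracy is near-perfect; on the real case summaries the reported 82%-85% accuracy is what makes the bias question pressing.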
Procedia PDF Downloads 95
16 Interactive Virtual Patient Simulation Enhances Pharmacology Education and Clinical Practice
Authors: Lyndsee Baumann-Birkbeck, Sohil A. Khan, Shailendra Anoopkumar-Dukie, Gary D. Grant
Abstract:
Technology-enhanced education tools are being rapidly integrated into health programs globally. These tools provide an interactive platform for students and can be used to deliver topics in various modes, including games and simulations. Simulations are of particular interest to healthcare education, where they are employed to enhance clinical knowledge and help to bridge the gap between theory and practice. Simulations often assess competencies for practical tasks, yet limited research examines the effects of simulation on students' perceptions of their learning. The aim of this study was to determine the effects of an interactive virtual patient simulation for pharmacology education and clinical practice on student knowledge, skills and confidence. Ethics approval for the study was obtained from the Griffith University Research Ethics Committee (PHM/11/14/HREC). The simulation was intended to replicate the pharmacy environment and patient interaction. The content was designed to enhance knowledge of proton-pump inhibitor pharmacology, its role in therapeutics, and safe supply to patients. The tool was deployed into a third-year clinical pharmacology and therapeutics course. A number of core practice areas were examined, including the competency domains of questioning, counselling, referral and product provision. Baseline measures of student self-reported knowledge, skills and confidence were taken prior to the simulation using a specifically designed questionnaire. A more extensive questionnaire was deployed following the virtual patient simulation, which also included measures of student engagement with the activity. A quiz assessing student factual and conceptual knowledge of proton-pump inhibitor pharmacology and related counselling information was also included in both questionnaires. Sixty-one students (response rate >95%) from two cohorts (2014 and 2015) participated in the study. Chi-square analyses were performed, and data were analysed using Fisher's exact test.
Results demonstrate that student knowledge, skills and confidence within the competency domains of questioning, counselling, referral and product provision improved following the implementation of the virtual patient simulation. Statistically significant (p<0.05) improvement occurred in ten of the possible twelve self-reported measurement areas. The greatest magnitude of improvement occurred in the area of counselling (student confidence, p<0.0001). Student confidence in all domains (questioning, counselling, referral and product provision) showed a marked increase. Student performance in the quiz also improved, demonstrating a 10% overall improvement in pharmacology knowledge and clinical practice following the simulation. Overall, 85% of students reported the simulation to be engaging and 93% of students felt the virtual patient simulation enhanced learning. The data suggest that the interactive virtual patient simulation developed for clinical pharmacology and therapeutics education enhanced students' knowledge, skills and confidence with respect to the competency domains of questioning, counselling, referral and product provision. These self-reported measures appear to translate to learning outcomes, as demonstrated by the improved student performance in the quiz assessment item. Future research on education using virtual simulation should seek to incorporate modern quantitative measures of student learning and engagement, such as eye tracking. Keywords: clinical simulation, education, pharmacology, simulation, virtual learning
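The pre/post comparison with Fisher's exact test used in this study can be sketched as follows. The contingency-table counts here are illustrative only, not the study's actual questionnaire data; only the cohort size of 61 is taken from the abstract.

```python
# Hedged sketch of a pre- vs post-simulation comparison using
# Fisher's exact test on a 2x2 contingency table.
from scipy.stats import fisher_exact

# Rows: before vs after the simulation; columns: confident vs not confident.
# Counts are made up for illustration (61 students per measurement).
table = [[20, 41],   # before: 20 of 61 students confident in counselling
         [48, 13]]   # after:  48 of 61 students confident
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(p_value < 0.05)  # significance at the study's threshold
```

Fisher's exact test is a sensible choice here because some cells of the self-report tables can be small, where a chi-square approximation would be unreliable.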
Procedia PDF Downloads 338
15 Transcriptional Differences in B cell Subpopulations over the Course of Preclinical Autoimmunity Development
Authors: Aleksandra Bylinska, Samantha Slight-Webb, Kevin Thomas, Miles Smith, Susan Macwana, Nicolas Dominguez, Eliza Chakravarty, Joan T. Merrill, Judith A. James, Joel M. Guthridge
Abstract:
Background: Systemic Lupus Erythematosus (SLE) is an interferon-related autoimmune disease characterized by B cell dysfunction. One of its main hallmarks is a loss of tolerance to self-antigens, leading to increased levels of autoantibodies against nuclear components (ANAs). However, up to 20% of healthy ANA+ individuals will not develop clinical illness. SLE is more prevalent among women and minority populations (African American, Asian American, and Hispanic). Moreover, African Americans have a stronger interferon (IFN) signature and develop more severe symptoms. The exact mechanisms involved in ethnicity-dependent B cell dysregulation and the progression of autoimmune disease from ANA+ healthy individuals to clinical disease remain unclear. Methods: Peripheral blood mononuclear cells (PBMCs) from African American (AA) and European American (EA) ANA- (n=12), ANA+ (n=12) and SLE (n=12) individuals were assessed by multimodal scRNA-Seq/CITE-Seq methods to examine differential gene signatures in specific B cell subsets. Library preparation was done with a 10X Genomics Chromium according to established protocols and sequenced on an Illumina NextSeq. The data were further analyzed for distinct cluster identification and differential gene signatures with the Seurat package in R, and pathway analysis was performed using Ingenuity Pathway Analysis (IPA). Results: Comparing all subjects, 14 distinct B cell clusters were identified using a community detection algorithm and visualized with Uniform Manifold Approximation and Projection (UMAP). The proportion of each of those clusters varied by disease status and ethnicity. Transitional B cells trended higher in ANA+ healthy individuals, especially in AA. A ribonucleoprotein-high (RNP-Hi; HNRNPH1-elevated, heterogeneous nuclear ribonucleoprotein) population of proliferating naïve B cells was more prevalent in SLE patients, specifically in EA. An interferon-induced-protein-high (IFIT-Hi) population of naïve B cells was increased in EA ANA- individuals.
The proportion of memory B cell and plasma cell clusters tends to be expanded in SLE patients. As anticipated, we observed a higher signature of cytokine-related pathways, especially interferon, in SLE individuals. Pathway analysis among AA individuals revealed an NRF2-mediated oxidative stress response signature in the transitional B cell cluster that was not seen in EA individuals. TNFR1/2 and Sirtuin Signaling pathway genes were higher in AA IFIT-Hi naïve B cells, whereas they were not detected in EA individuals. Interferon signaling was observed in B cells of both ethnicities. Oxidative phosphorylation was found in age-related B cells (ABCs) for both ethnicities, whereas Death Receptor Signaling was found only in EA patients in these cells. Interferon-related transcription factors were elevated in ABCs and IFIT-Hi naïve B cells in SLE subjects of both ethnicities. Conclusions: ANA+ healthy individuals have altered gene expression pathways in B cells that might drive apoptosis and subsequent clinical autoimmune pathogenesis. Increases in certain regulatory pathways may delay progression to SLE. Further, AA individuals have more elevated activation pathways that may make them more susceptible to SLE.
Procedia PDF Downloads 175
14 The Use of STIMULAN Resorbable Antibiotic Beads in Conjunction with Autologous Tissue Transfer to Treat Recalcitrant Infections and Osteomyelitis in Diabetic Foot Wounds
Authors: Hayden R Schott, John M Felder III
Abstract:
Introduction: Chronic lower extremity wounds in the diabetic and vasculopathic populations are associated with a high degree of morbidity. When wounds require more extensive treatment than can be offered by wound care centers, more aggressive solutions involve local tissue transfer and microsurgical free tissue transfer to achieve definitive soft tissue coverage. These procedures of autologous tissue transfer (ATT) offer resilient soft tissue coverage of limb-threatening wounds and confer promising limb salvage rates. However, chronic osteomyelitis and recalcitrant soft tissue infections are common in severe diabetic foot wounds and significantly complicate ATT procedures. Stimulan is a resorbable calcium sulfate antibiotic carrier. The use of Stimulan antibiotic beads to treat chronic osteomyelitis is well established in the orthopedic and plastic surgery literature. In these procedures, the beads are placed beneath the skin flap to deliver antibiotics directly to the infection site. The purpose of this study was to quantify the success of Stimulan antibiotic beads in treating recalcitrant infections in patients with diabetic foot wounds receiving ATT. Methods: A retrospective review of clinical and demographic information was performed on patients who underwent ATT with placement of Stimulan antibiotic beads for attempted limb salvage from 2018 to 2021. Patients were analyzed for preoperative wound characteristics, demographics, infection recurrence, and adverse outcomes resulting from product use. The primary endpoint was 90-day infection recurrence, with secondary endpoints including 90-day complications. Outcomes were compared using basic statistics and Fisher's exact tests. Results: In this time span, 14 patients were identified. At the time of surgery, all patients exhibited clinical signs of active infection, including positive cultures and erythema.
57% of patients (n=8) exhibited chronic osteomyelitis prior to surgery, and 71% (n=10) had exposed bone at the wound base. Stimulan beads were placed beneath a free tissue flap in 57% of patients (n=8) and beneath a pedicled tissue flap in 43% of patients (n=6). In all patients, Stimulan beads were applied only once. Recurrent infections were observed in 28% of patients (n=4) at 90 days post-op, and flap nonadherence was observed in 7% (n=1). These were the only Stimulan-related complications observed. Ultimately, lower limb salvage was successful in 85% of patients (n=12). Notably, there was no significant association between the preoperative presence of osteomyelitis and recurrent infections. Conclusions: The use of Stimulan antibiotic beads to treat recalcitrant infections in patients receiving definitive skin coverage of diabetic foot wounds does not appear to introduce undue risk. Furthermore, the lack of a significant association between the preoperative presence of osteomyelitis and recurrent infections indicates the successful use of Stimulan to dampen infection in patients with osteomyelitis, consistent with the literature. Further research, using future cohort and case-control studies with more patients, is needed to confirm Stimulan as the significant contributor to infection treatment. Nonetheless, the use of Stimulan antibiotic beads in patients with diabetic foot wounds demonstrates successful infection suppression and maintenance of definitive soft tissue coverage. Keywords: wound care, Stimulan antibiotic beads, free tissue transfer, plastic surgery, wound, infection
Procedia PDF Downloads 90
13 Librarian Liaisons: Facilitating Multi-Disciplinary Research for Academic Advancement
Authors: Tracey Woods
Abstract:
In the ever-evolving landscape of academia, the traditional role of the librarian has undergone a remarkable transformation. Once considered custodians of books and gatekeepers of information, librarians now have the potential to take on the vital role of facilitators of cross- and inter-disciplinary projects. This shift is driven by the growing recognition of the value of interdisciplinary collaboration in addressing complex research questions in pursuit of novel solutions to real-world problems. This paper shall explore the potential of the academic librarian's role in facilitating innovative, multi-disciplinary projects, both recognising and validating the vital role that the librarian plays in a somewhat underplayed profession. Academic libraries support teaching, the strengthening of knowledge discourse, and, potentially, the development of innovative practices. As the role of the library gradually morphs from a quiet repository of books to a community-based information hub, a potential opportunity arises. The academic librarian's role is to build knowledge across a wide span of topics, from the advancement of AI to subject-specific information, and, whilst librarians are generally not offered the research opportunities and funding that the traditional academic disciplines enjoy, they are often invited to help build research in support of the academic. This identifies that one of the primary skills of any 21st-century librarian must be the ability to collaborate on and facilitate multi-disciplinary projects. In universities seeking to develop research diversity and academic performance, there is an increasing awareness of the need for collaboration between faculties to enable novel directions and advancements. This idea has been documented and discussed by several researchers; however, little literature is available from recent studies.
Having a team based in the library that is adept at creating effective collaborative partnerships is valuable for any academic institution. This paper outlines the development of such a project, initiated within and around an identified library-specific need: the replication of fragile special collections for object-based learning. The research was developed as a multi-disciplinary project involving the faculties of engineering (digital twins lab), architecture, design, and education. Centred around methods for developing a fragile archive into a series of tactile objects, the project furthers knowledge and understanding of the role of the library as a facilitator of projects, chairing and supporting, alongside contributing to the research process and generating ideas through the bank of knowledge found among the staff and their liaising capabilities. This paper shall present the method of project development from the initiation of ideas to the development of prototypes and the dissemination of the objects to teaching departments for analysis. The exact replication of artefacts is also balanced with the adaptation and evolutionary speculations initiated by the design team when adapted as a teaching studio method. The dynamic response required from the library to generate and facilitate these multi-disciplinary projects highlights the information expertise and liaison skills that the librarian possesses. As academia embraces this evolution, the potential for groundbreaking discoveries and innovative solutions across disciplines becomes increasingly attainable. Keywords: liaison librarian, multi-disciplinary collaborations, library innovations, librarian stakeholders
Procedia PDF Downloads 72
12 Multifield Problems in 3D Structural Analysis of Advanced Composite Plates and Shells
Authors: Salvatore Brischetto, Domenico Cesare
Abstract:
Major improvements in future aircraft and spacecraft could depend on an increasing use of conventional and unconventional multilayered structures embedding composite materials, functionally graded materials, piezoelectric or piezomagnetic materials, and soft foam or honeycomb cores. Layers made of such materials can be combined in different ways to obtain structures that are able to fulfil several structural requirements. The next generation of aircraft and spacecraft will be manufactured as multilayered structures under the action of a combination of two or more physical fields. In multifield problems for multilayered structures, several physical fields (thermal, hygroscopic, electric and magnetic ones) interact with each other with different levels of influence and importance. An exact 3D shell model is proposed here for these types of analyses. This model is based on a coupled system including the 3D equilibrium equations, the 3D Fourier heat conduction equation, the 3D Fick diffusion equation, and the electric and magnetic divergence equations. The set of partial differential equations, of second order in z, is written using a mixed curvilinear orthogonal reference system valid for spherical and cylindrical shell panels, cylinders and plates. The order of the partial differential equations is reduced to first order by doubling the number of variables. The solution in the thickness direction z is obtained by means of the exponential matrix method and the correct imposition of interlaminar continuity conditions in terms of displacements, transverse stresses, electric and magnetic potentials, temperature, moisture content and transverse normal multifield fluxes. The investigated structures have simply supported sides in order to obtain a closed-form solution in the in-plane directions. Moreover, a layerwise approach is proposed which allows a correct 3D description of multilayered anisotropic structures subjected to field loads.
Several results will be proposed in tabular and graphical form to evaluate displacements, stresses and strains when mechanical loads, temperature gradients, moisture content gradients, electric potentials and magnetic potentials are applied at the external surfaces of the structures in steady-state conditions. When piezoelectric and piezomagnetic layers are included in the multilayered structures, so-called smart structures are obtained. In this case, a free vibration analysis in open- and closed-circuit configurations and a static analysis for sensor and actuator applications will be proposed. The proposed results will be useful to better understand the physical and structural behaviour of multilayered advanced composite structures in the case of multifield interactions. Moreover, these analytical results could be used as reference solutions by those scientists interested in the development of 3D and 2D numerical shell/plate models based, for example, on the finite element approach or on the differential quadrature methodology. The correct imposition of geometrical and load boundary conditions and interlaminar continuity conditions, and the description of zigzag behaviour due to transverse anisotropy, will also be discussed and verified. Keywords: composite structures, 3D shell model, stress analysis, multifield loads, exponential matrix method, layerwise approach
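The thickness-direction solution procedure described above (order reduction by doubling of variables, then layer-by-layer propagation via the exponential matrix) can be summarised in generic notation; the symbols here are illustrative and not tied to the paper's exact ones.

```latex
% Collect the primary variables u and their first z-derivatives into a
% doubled state vector, turning the second-order system in z into a
% first-order one with (layer-wise) constant coefficient matrix A_j:
\mathbf{U} = \begin{pmatrix} \mathbf{u} \\ \partial \mathbf{u}/\partial z \end{pmatrix},
\qquad
\frac{\partial \mathbf{U}}{\partial z} = \mathbf{A}_j \,\mathbf{U}

% Within layer j of thickness h_j the solution is propagated by the
% exponential matrix, and interlaminar continuity links adjacent layers:
\mathbf{U}(z_j + h_j) = \exp(\mathbf{A}_j h_j)\,\mathbf{U}(z_j),
\qquad
\exp(\mathbf{A}_j h_j) = \sum_{n=0}^{\infty} \frac{(\mathbf{A}_j h_j)^n}{n!}
```

In practice the series is truncated at an order sufficient for convergence, and the layer matrices are chained together with the continuity conditions on displacements, transverse stresses, potentials, temperature and moisture content to yield the through-thickness solution of the whole laminate.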
Procedia PDF Downloads 67
11 SWOT Analysis on the Prospects of Carob Use in Human Nutrition: Crete, Greece
Authors: Georgios A. Fragkiadakis, Antonia Psaroudaki, Theodora Mouratidou, Eirini Sfakianaki
Abstract:
Research: Within the project "Actions for the optimal utilization of the potential of carob in the Region of Crete", which is financed and supervised by the Region in collaboration with the University of Crete and the Hellenic Mediterranean University, a SWOT (strengths, weaknesses, opportunities, threats) survey was carried out to evaluate the prospects of carob in human nutrition in Crete. Results and conclusions: 1). Strengths: A local production of carob for human consumption exists, based on international reports and local product reports. The data on products in the market (over 100 brands of carob food) indicate a sufficiency of carob materials offered in Crete. The variety of carob food products retailed in Crete indicates a strong demand-production-consumption trend. There is a stable core of businesses that invest significantly (Creta Carob, Cretan Mills, etc.). The great majority of the relevant food stores (bakeries, confectioneries, etc.) offer carob products. The presence of carob products produced in Crete is strong on the internet (over 20 professionally designed websites). The promotion of carob food products is based on their variety and on a few historical elements connected with the Cretan diet. 2). Weaknesses: International prices for carob seed affect the sector; the seed had an international price of €20 per kg in 2021-22 and fell to €8 in 2022, causing losses to carob traders. Local producers do not sort the carobs they deliver for processing, causing 30-40% losses of the product in the industry. Occasional high prices trigger the collection of degraded raw material, and large losses may emerge due to the action of insects. There are many carob trees whose fruits are not collected, e.g., in Apokoronas, Chania. The nutritional and commercial value of wild carob fruits is very low.
Carob tree production is recorded by the Greek statistical services as "other cultures", in combination with prickly pear, creating difficulties in retrieving data. The percentage of carob used for human nutrition, as opposed to animal feed, is not known. The exact imports of carob are not closely monitored. We have no data on the recycling of carob by-products in Crete. 3). Opportunities: The development of a culture of respect for the carob trade may improve professional relations in the sector. Monitoring the carob market and connecting production with retailing and industry needs may allow better market stability. Raw material evaluation procedures may be implemented to maintain the carob value chain. The state agricultural services may be further involved in carob health protection. Educating farmers on carob cultivation and management can improve the quality of the product. The selection of productive local varieties may improve the sustainability of the culture. Connecting the consumption of carob with health food products may create added value in the sector. The presence and extent of wild carob trees in Crete represent, potentially, a target for grafting. 4). Threats: The annual fluctuation of carob yield challenges the programming of local food industry activities. Carob is also a forest species; there is a danger of wrongly classifying crops as forest areas where land ownership is not clear. Keywords: human nutrition, carob food, SWOT analysis, Crete, Greece
Procedia PDF Downloads 92
10 Workflow Based Inspection of Geometrical Adaptability from 3D CAD Models Considering Production Requirements
Authors: Tobias Huwer, Thomas Bobek, Gunter Spöcker
Abstract:
Driving forces for enhancements in production are trends like digitalization and individualized production. Currently, such developments are restricted to assembly parts; complex freeform surfaces are not addressed in this context. The need for efficient use of resources and near-net-shape production will require individualized production of complex shaped workpieces. Variations between the nominal model and the actual geometry can force changes to operations in computer-aided process planning (CAPP), which must be managed to make CAPP viable for adaptive serial production. In this context, 3D CAD data can be a key to realizing that objective. Along with developments in geometrical adaptation, a preceding inspection method based on CAD data is required to support the process planner with objective criteria for deciding on the adaptive manufacturability of workpieces. Today, such decisions depend on the experience-based knowledge of humans (e.g., process planners) and are therefore subjective, leading to variability in workpiece quality and potential failures in production. In this paper, we present an automatic part inspection method, based on design and measurement data, which evaluates the actual geometries of individual workpiece preforms. The aim is to automatically determine the suitability of the current shape for further machining, and to provide a basis for an objective decision about subsequent adaptive manufacturability. The proposed method is realized by a workflow-based approach, keeping in mind the requirements of industrial applications. Workflows are a well-known design method for standardized processes; in applications like the aerospace industry, standardization and certification of processes are especially important. Function blocks, providing a standardized, event-driven abstraction of algorithms and data exchange, are used for modeling and executing inspection workflows.
Each analysis step of the inspection, such as positioning of measurement data or checking of geometrical criteria, is carried out by a function block. One advantage of this approach is its flexibility in designing workflows and adapting algorithms to the application domain. In general, it is checked whether a geometrical adaptation is possible within the specified tolerance range. The development of particular function blocks is predicated on workpiece-specific information, e.g. design data. Furthermore, appropriate logics and decision criteria have to be considered for different product lifecycle phases. For example, tolerances for geometric deviations differ in type and size between new-part production and repair processes. In addition to function blocks, appropriate referencing systems are important: they must support exact determination of the position and orientation of the actual geometries to provide a basis for precise analysis. The presented approach provides an inspection methodology for adaptive, part-individual process chains. The analysis of each workpiece results in an inspection protocol and an objective decision about further manufacturability. A representative application domain is the product lifecycle of turbine blades, comprising new-part production and a maintenance process. In both cases, a geometrical adaptation is required to calculate individual production data. In contrast to existing approaches, the proposed initial inspection method provides information to decide between different potential adaptive machining processes. Keywords: adaptive, CAx, function blocks, turbomachinery
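The workflow of chained, event-driven function blocks described in this abstract can be sketched as follows. This is a minimal illustrative model, not the authors' implementation: the block names, the alignment step, and all numeric values are assumptions, and real inspection would operate on 3D measurement and CAD data rather than scalar lists.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class FunctionBlock:
    """Minimal event-driven function block: runs one analysis step and
    passes its result context on to the next block in the workflow."""
    name: str
    run: Callable[[Dict], Dict]

@dataclass
class InspectionWorkflow:
    blocks: List[FunctionBlock] = field(default_factory=list)

    def execute(self, context: Dict) -> Dict:
        for block in self.blocks:
            context = block.run(context)
            if not context.get("ok", True):
                break  # stop once a geometrical criterion fails
        return context

# Two illustrative analysis steps: align measurement data to the
# referencing system, then check deviations against a tolerance band.
def position_measurement(ctx):
    offset = ctx["datum_offset"]
    ctx["aligned"] = [p - offset for p in ctx["measured"]]
    return ctx

def check_tolerance(ctx):
    deviations = [abs(a - n) for a, n in zip(ctx["aligned"], ctx["nominal"])]
    ctx["max_deviation"] = max(deviations)
    ctx["ok"] = ctx["max_deviation"] <= ctx["tolerance"]
    return ctx

workflow = InspectionWorkflow([
    FunctionBlock("positioning", position_measurement),
    FunctionBlock("tolerance check", check_tolerance),
])

result = workflow.execute({
    "measured": [10.12, 20.05, 29.98],  # actual preform geometry (mm)
    "nominal":  [10.00, 20.00, 30.00],  # design (CAD) geometry (mm)
    "datum_offset": 0.05,               # referencing-system correction
    "tolerance": 0.10,                  # adaptive-machining tolerance band
})
print("adaptively machinable:", result["ok"])
```

The flexibility claimed in the abstract shows up here as composition: swapping in a repair-process tolerance check is just a different block in the list.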
9 Modeling and Performance Evaluation of an Urban Corridor under Mixed Traffic Flow Condition
Authors: Kavitha Madhu, Karthik K. Srinivasan, R. Sivanandan
Abstract:
Indian traffic can be considered mixed and heterogeneous due to the presence of various types of vehicles that operate with weak lane discipline. Consequently, vehicles can position themselves anywhere in the traffic stream depending on the availability of gaps. The choice of lateral position is an important component in representing and characterizing mixed traffic. Field data provide evidence that the trajectories of vehicles on Indian urban roads have significantly varying longitudinal and lateral components. Further, the notion of headway, which is widely used in homogeneous traffic simulation, is not well defined in conditions lacking lane discipline. Field data make clear that following is not as strict as in homogeneous, lane-disciplined conditions, and that neighbouring vehicles ahead of a given vehicle, as well as those adjacent to it, can influence the subject vehicle's choice of position, speed, and acceleration. Given these empirical features, the suitability of headway distributions for characterizing mixed traffic in Indian cities is questionable and needs appropriate modification. To address these issues, this paper analyzes the distribution of time gaps between consecutive vehicles (in a time sense) crossing a section of roadway. More specifically, to characterize the complex interactions noted above, the influence of composition, manoeuvre type, and lateral placement characteristics on the time gap distribution is quantified. The developed model is used for evaluating performance measures such as link speed, midblock delay, and intersection delay, which in turn help characterize vehicular fuel consumption and emissions on the urban roads of India. Identifying and analyzing the exact interactions between the various classes of vehicles in the traffic stream is essential for increasing the accuracy and realism of microscopic traffic flow modelling.
In this regard, this study aims to develop and analyze time gap distribution models, quantified by lead-lag pair, manoeuvre type, and lateral position characteristics, in heterogeneous non-lane-based traffic. Once the modelling scheme is developed, it can be used to estimate the vehicle kilometres travelled for the entire traffic system, which helps determine vehicular fuel consumption and emissions. The approach involves: data collection; statistical modelling and parameter estimation; simulation using the calibrated time-gap distribution and its validation; empirical analysis of the simulation results and associated traffic flow parameters; and application to the analysis of illustrative traffic policies. In particular, videographic methods are used for data extraction from urban midblock sections in Chennai, where the data comprise vehicle type, vehicle position (both longitudinal and lateral), speed, and time gap. Statistical tests are carried out to compare the simulated data with the actual data, and the model performance is evaluated. The effect of integrating the above-mentioned factors into vehicle generation is studied by comparing performance measures such as density, speed, flow, capacity, and area occupancy under various traffic conditions and policies. The implications of the quantified distributions and simulation model for estimating PCU (Passenger Car Units), capacity, and level of service of the system are also discussed. Keywords: lateral movement, mixed traffic condition, simulation modeling, vehicle following models
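The calibrate-then-simulate loop outlined in this abstract can be sketched in miniature. This is an illustrative assumption, not the study's model: the lognormal time-gap distribution, the sample gaps, and the fitting method are stand-ins, whereas the actual study conditions the distribution on lead-lag pair, manoeuvre type, and lateral position.

```python
# Hypothetical sketch: fit a lognormal time-gap distribution to observed
# gaps (method of moments on log-gaps), sample synthetic arrivals, and
# derive a flow estimate for the section. All numbers are illustrative.
import math
import random

observed_gaps = [1.2, 0.8, 2.5, 1.1, 0.6, 3.0, 1.8, 0.9, 1.4, 2.1]  # seconds

logs = [math.log(g) for g in observed_gaps]
mu = sum(logs) / len(logs)
sigma = math.sqrt(sum((x - mu) ** 2 for x in logs) / len(logs))

def simulate_gaps(n, seed=42):
    """Sample n synthetic time gaps from the calibrated distribution."""
    rng = random.Random(seed)
    return [rng.lognormvariate(mu, sigma) for _ in range(n)]

sim = simulate_gaps(1000)
mean_gap = sum(sim) / len(sim)
flow_vph = 3600.0 / mean_gap  # vehicles per hour crossing the section
print(f"mean simulated gap: {mean_gap:.2f} s, flow ~ {flow_vph:.0f} veh/h")
```

Validation in the study's sense would then compare the simulated gap sample against held-out field data with statistical tests before using the model for policy analysis.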