Search results for: computational fluid dynamics "CFD"
912 From Makers to Maker Communities: A Survey on Turkish Makerspaces
Authors: Dogan Can Hatunoglu, Cengiz Hakan Gurkanlı, Hatice Merve Demirci
Abstract:
Today, the maker movement is regarded as a socio-cultural movement that represents designing and building objects for innovation. In these creativity-based activities of the movement, individuals from different backgrounds, such as inventors, programmers, craftspeople, DIY'ers, tinkerers, engineers, designers, and hackers, form a community and work collaboratively on mutual, open-source innovations. Today, with the accessibility of recently emerged technologies and digital fabrication tools, the Maker Movement is continuously expanding its scope and has evolved into a new experience; for many, it is now considered a new kind of industrial revolution. In this new experience, makers create new things within their community by using new digital tools and technologies in spots called makerspaces. In these makerspaces, activities of learning, experience sharing, and mentoring have evolved into maker events. Makers who share common interests in making benefit from makerspaces as meeting and working spots. In the literature, there are many sources on the Maker Movement, maker communities, and their activities, especially in the field of business administration. However, there is a gap in the literature about the maker communities in Turkey. This research aims to be an information source on the dynamics and process design of "making" activities in Turkish maker communities and also aims to provide insights to sustain and enhance local maker communities in the future. To this end, semi-structured interviews were conducted with founders and facilitators from selected Turkish maker communities. (1) The perception of the Maker Movement, makers, the activity of making, and the current situation of maker communities, (2) the motivations of individuals who participate in maker communities, and (3) the key drivers (collaboration and decision-making in design processes) of maker activities from the perspectives of the main actors (founders, facilitators) are all examined in depth through questions on personal experiences and perspectives. After a qualitative analysis of data concerning the maker communities in Turkey, this research draws two main conclusions regarding (1) the foundation of the Turkish maker mindset and (2) the emergence of self-sustaining communities.
Keywords: Maker Movement, maker community, makerspaces, open-source design, sustainability
Procedia PDF Downloads 144
911 Inferring Thimlich Ohinga Gender Identity Through Ethnoarchaeological Analysis
Authors: David Maina Muthegethi
Abstract:
The Victoria Basin is associated with the gateway for migration to the southern part of Africa. Different communities migrated through the region, including the Bantu and Nilotic communities that occupy present-day Kenya and Tanzania. A distinct culture of dry-stone technology emerged around the fifteenth century of the current era, a period associated with the peopling of the western Kenya region. One of the biggest dry-stone wall enclosures is the Thimlich Ohinga archaeological site, constructed around the fourteenth century of the current era. The architectural design consisted of oval-shaped stone structures that were around 4 meters and 2 meters in length and width, respectively. The main subsistence strategies of the community were crop farming, pastoralism, fishing, hunting, and gathering. This paper attempts to examine the gender dynamics of Thimlich Ohinga society. To that end, attempts are made to infer gender roles as manifested in the archaeological record. Therefore, the study entails an examination of material evidence excavated from the site. Also, an ethnoarchaeological study of the contemporary Luo community was undertaken in order to make inferences and analogies concerning the gender roles of Thimlich Ohinga society. Overall, the study involved examination of cultural materials excavated from Thimlich Ohinga, an extensive survey of the site, and an ethnography of the Luo community. In total, an extensive survey and interviews of 20 households were undertaken in South Kanyamkango ward, Migori County, in western Kenya. The key findings point out that Thimlich Ohinga gender identities were expressed in material form through architecture, the usage of spaces, subsistence strategies, dietary patterns, and household organization. Also, gender as a social identity was dynamic and responsive to the diversification of subsistence strategies and the intensification of regional trade, as documented in the contemporary Luo community. The paper reiterates the importance of ethnoarchaeological methods in the reconstruction of past social organization as manifested in the material record.
Keywords: ethnoarchaeology, gender, subsistence patterns, Thimlich Ohinga
Procedia PDF Downloads 75
910 Self-Supervised Attributed Graph Clustering with Dual Contrastive Loss Constraints
Authors: Lijuan Zhou, Mengqi Wu, Changyong Niu
Abstract:
Attributed graph clustering can utilize graph topology and node attributes to uncover hidden community structures and patterns in complex networks, aiding the understanding and analysis of complex systems. Utilizing contrastive learning for attributed graph clustering can effectively exploit meaningful implicit relationships within the data. However, existing attributed graph clustering methods based on contrastive learning suffer from the following drawbacks: 1) complex data augmentation increases computational cost, and inappropriate data augmentation may lead to semantic drift; 2) the selection of positive and negative samples neglects the intrinsic cluster structure learned from graph topology and node attributes. Therefore, this paper proposes a method called self-supervised Attributed Graph Clustering with Dual Contrastive Loss constraints (AGC-DCL). Firstly, Siamese multilayer perceptron (MLP) encoders are employed to generate two views separately, avoiding complex data augmentation. Secondly, a neighborhood contrastive loss is introduced to constrain the node representations using the local topological structure while effectively embedding attribute information through attribute reconstruction. Additionally, a clustering-oriented contrastive loss is applied to fully utilize the clustering information in the global semantics for discriminative node representations, regarding the cluster centers from the two views as negative samples to fully leverage effective clustering information from different views. Comparative clustering results against existing attributed graph clustering algorithms on six datasets demonstrate the superiority of the proposed method.
Keywords: attributed graph clustering, contrastive learning, clustering-oriented, self-supervised learning
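A minimal numpy sketch of a neighborhood contrastive (InfoNCE-style) loss in the spirit of the one the abstract describes, where a node's neighbors in the other view act as positives. The function name, similarity measure, and exact weighting are illustrative assumptions, not the authors' AGC-DCL implementation:

```python
import numpy as np

def normalize(z):
    return z / np.linalg.norm(z, axis=1, keepdims=True)

def neighborhood_infonce(z1, z2, adj, tau=0.5):
    # z1, z2: (n, d) node embeddings from the two MLP-encoder views;
    # adj:    (n, n) binary adjacency; a node's neighbours in the other
    #         view (plus itself) act as positives, all other nodes as negatives.
    z1, z2 = normalize(z1), normalize(z2)
    sim = np.exp(z1 @ z2.T / tau)                  # pairwise similarity scores
    pos = sim.diagonal() + (sim * adj).sum(axis=1) # self + neighbour positives
    return float(-np.log(pos / sim.sum(axis=1)).mean())

rng = np.random.default_rng(0)
z1 = rng.normal(size=(6, 4))
z2 = z1 + 0.05 * rng.normal(size=(6, 4))           # slightly perturbed second view
adj = (rng.random((6, 6)) > 0.7).astype(float)
np.fill_diagonal(adj, 0)
print(neighborhood_infonce(z1, z2, adj))
```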
Procedia PDF Downloads 53
909 Influence of Internal Topologies on Components Produced by Selective Laser Melting: Numerical Analysis
Authors: C. Malça, P. Gonçalves, N. Alves, A. Mateus
Abstract:
Regardless of the manufacturing process used, subtractive or additive, and of the material, purpose and application, produced components are conventionally solid masses with more or less complex shapes depending on the production technology selected. Aspects such as reduced component weight, associated with the low volume of material required and almost non-existent material waste, speed and flexibility of production and, primarily, high mechanical strength combined with high structural performance are competitive advantages in any industrial sector, from automotive, molds, aviation, aerospace, construction, pharmaceuticals and medicine to, more recently, human tissue engineering. Such features, properties and functionalities are attained in metal components produced from metal powders using the additive Rapid Prototyping technique commonly known as Selective Laser Melting (SLM), with optimized internal topologies and varying densities. In order to produce components with high strength and high structural and functional performance, regardless of the type of application, three different internal topologies were developed and analyzed using numerical computational tools. The developed topologies were numerically subjected to mechanical compression and four-point bending testing. Finite Element Analysis results demonstrate how different internal topologies can contribute to improved mechanical properties, even with a high degree of porosity relative to fully dense components. The results are very promising, not only from the point of view of mechanical resistance but especially through the achievement of considerable variation in density without loss of high structural and functional performance.
Keywords: additive manufacturing, internal topologies, porosity, rapid prototyping, selective laser melting
Procedia PDF Downloads 331
908 Trajectory Optimization of Re-Entry Vehicle Using Evolutionary Algorithm
Authors: Muhammad Umar Kiani, Muhammad Shahbaz
Abstract:
The performance of any vehicle can be predicted through its design/modeling and optimization; design optimization leads to efficient performance. Following horizontal launch, the air-launched re-entry vehicle undergoes a launch maneuver by introducing a carefully selected angle-of-attack profile. This angle-of-attack profile is the basic element for completing a specified mission. The flight program of said vehicle is optimized under the constraints of the maximum allowed angle of attack and the lateral and axial loads, with the objective of reaching maximum altitude. The main focus of this study is the endo-atmospheric phase of the ascent trajectory. A three-degrees-of-freedom trajectory model is simulated in MATLAB. The optimization process uses an evolutionary algorithm because of its robustness and efficient capacity to explore the design space in search of the global optimum. Evolutionary-algorithm-based trajectory optimization also offers the added benefit of being a generalized method that may work with continuous, discontinuous, linear, and non-linear performance metrics. It also eliminates the requirement of a starting solution. Optimization is particularly beneficial for achieving maximum advantage without increasing the computational cost or affecting the output of the system. In the case of launch vehicles, maximum performance and efficiency under different constraints are keenly sought. In a launch vehicle, the flight program means the prescribed variation of the vehicle pitch angle during the flight, which has a substantial influence on reachable altitude, accuracy of orbit insertion and aerodynamic loading. Results reveal that the angle-of-attack profile significantly affects the performance of the vehicle.
Keywords: endo-atmospheric, evolutionary algorithm, efficient performance, optimization process
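To illustrate the evolutionary loop the abstract describes, here is a hedged Python sketch that evolves an angle-of-attack profile against a deliberately crude point-mass ascent model. The dynamics, selection scheme and constants are illustrative placeholders, not the paper's 3-DOF MATLAB model:

```python
import numpy as np

rng = np.random.default_rng(1)

def altitude(profile):
    # Toy point-mass ascent: AoA knots steer the flight-path angle.
    # A stand-in for the real 3-DOF dynamics, for illustration only.
    v, gamma, h = 250.0, np.deg2rad(5.0), 0.0   # speed m/s, path angle, altitude
    for aoa in np.deg2rad(profile):
        gamma += 0.02 * aoa                      # crude steering effect
        v -= 2.0 + 30.0 * aoa**2                 # drag grows with AoA
        h += max(v, 0.0) * np.sin(gamma)
        if v <= 0:
            break
    return h

def evolve(pop=40, gens=60, knots=8, aoa_max=10.0):
    # Max-AoA constraint enforced by clipping the gene values.
    P = rng.uniform(-aoa_max, aoa_max, (pop, knots))
    for _ in range(gens):
        fit = np.array([altitude(p) for p in P])
        elite = P[np.argsort(fit)[-pop // 2:]]                     # selection
        children = elite[rng.integers(0, len(elite), pop // 2)]
        children = children + rng.normal(0, 0.5, children.shape)   # mutation
        P = np.clip(np.vstack([elite, children]), -aoa_max, aoa_max)
    fit = np.array([altitude(p) for p in P])
    return P[fit.argmax()], fit.max()

best, h = evolve()
print("best AoA profile (deg):", np.round(best, 2), "-> altitude:", round(h, 1))
```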
Procedia PDF Downloads 405
907 Development of an Instrument for Measurement of Thermal Conductivity and Thermal Diffusivity of Tropical Fruit Juice
Authors: T. Ewetumo, K. D. Adedayo, Festus Ben
Abstract:
Knowledge of the thermal properties of foods is of fundamental importance in the food industry for the design of processing equipment. However, for tropical fruit juice there is very little information in the literature, seriously hampering processing procedures. This research work describes the development of an instrument for automated thermal conductivity and thermal diffusivity measurement of tropical fruit juice using a transient thermal probe technique based on the line heat source principle. The system consists of two thermocouple sensors, a constant current source, a heater, a thermocouple amplifier, a microcontroller, a microSD card shield and an intelligent liquid crystal display. A fixed distance of 6.50 mm was maintained between the two probes. When heat is applied, the temperature rise at the heater probe is measured at intervals of 4 s for 240 s. The measuring element conforms as closely as possible to an infinite line source of heat in an infinite fluid. Under these conditions, thermal conductivity and thermal diffusivity are measured simultaneously: thermal conductivity is determined from the slope of a plot of the temperature rise of the heating element against the logarithm of time, while thermal diffusivity is determined from the time taken by the sample to attain a peak temperature and the time duration over a fixed diffusivity distance. A constant current source was designed to apply a power input of 16.33 W/m to the probe throughout the experiment. The thermal probe was interfaced with a digital display and data logger using an application program written in C++. Calibration of the instrument was done by determining the thermal properties of distilled water; error due to convection was avoided by adding 1.5% agar to the water. The instrument has been used to measure the thermal properties of banana, orange and watermelon. Thermal conductivity values of 0.593, 0.598 and 0.586 W/(m·°C) and thermal diffusivity values of 1.053 × 10⁻⁷, 1.086 × 10⁻⁷ and 0.959 × 10⁻⁷ m²/s were obtained for banana, orange and watermelon, respectively. Measured values were stored on a microSD card. The instrument performed very well, as statistical analysis (ANOVA) showed no significant difference (p > 0.05) between the literature standards and the averages estimated with the developed instrument for each sample investigated.
Keywords: thermal conductivity, thermal diffusivity, tropical fruit juice, diffusion equation
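The line-heat-source relation underlying the conductivity estimate is ΔT(t) = q/(4πk) · ln(t) + C, so k follows from the slope of the temperature rise versus ln(t). A minimal Python sketch of that computation, using the stated power input of 16.33 W/m and synthetic readings in place of real thermocouple data:

```python
import numpy as np

q = 16.33  # heater power per unit length, W/m (value from the abstract)

# Synthetic probe readings every 4 s for 240 s, mimicking a sample with
# k = 0.59 W/(m.K); real data would come from the thermocouple logger.
t = np.arange(4, 244, 4, dtype=float)
k_true = 0.59
T = 25.0 + q / (4 * np.pi * k_true) * np.log(t)

# Line-heat-source model: dT = q/(4*pi*k) * ln(t) + C,
# so k follows from the slope of temperature rise vs ln(t).
slope, _ = np.polyfit(np.log(t), T, 1)
k = q / (4 * np.pi * slope)
print(f"estimated k = {k:.3f} W/(m.K)")
```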
Procedia PDF Downloads 357
906 Analysis of Sickle Cell Disease and Maternal Mortality in United Kingdom
Authors: Basma Hassabo, Sarah Ahmed, Aisha Hameed
Abstract:
Aims and Objectives: To determine the incidence of maternal mortality among pregnant women with sickle cell disease (SCD) in the United Kingdom and to determine the exact cause of death in these women. Background: SCD is caused by the 'sickle' gene and is characterized by episodes of severe bone pain and other complications such as acute chest syndrome, chronic pulmonary hypertension, stroke, retinopathy, chronic renal failure, hepato-splenic crises, avascular bone necrosis, sepsis and leg ulcers. SCD is a continual cause of maternal mortality and fetal complications, and it comprises 1.5% of all Direct and Indirect deaths in the UK. Sepsis following premature rupture of membranes with ascending infection, post-partum infection and overwhelming pre-labour septic shock is one of its leading causes of death. Over the last fifty years of maternal mortality reports in the UK, between 1 and 4 pregnant women died in each triennium. Material and Method: This is a retrospective study of pregnant women who died from SCD complications in the UK between 1952 and 2012. Data were collected from the UK Confidential Enquiries into Maternal Deaths and their causes between 1952 and 2012. Prior to 1985, the exact cause of death in this cohort was not recorded. Results: 33 deaths were reported between 1964 and 1984. Seventeen deaths due to sickle cell disease were reported between 1985 and 2012. Five women in this group died of sickle cell crisis, one woman had a liver sequestration crisis, two women died of venous thromboembolism, two had myocardial fibrosis and three died of sepsis. The remaining women died of amniotic fluid embolism, SUDEP, myocardial ischemia and intracranial haemorrhage. Conclusion: The leading causes of death in pregnant women with sickle cell disease are sickle cell crises, sepsis, and venous thrombosis and thromboembolism. Prenatal care for women with SCD should be managed by a multidisciplinary team that includes an obstetrician, nutritionist, primary care physician, and haematologist. In every sick pregnant woman with SCD, sickle cell crisis should be at the top of the list of differential diagnoses. Aggressive treatment of complications, with a low threshold for commencing broad-spectrum antibiotics and LMWH, contributes to better outcomes.
Keywords: incidence, maternal mortality, sickle cell disease (SCD), UK
Procedia PDF Downloads 237
905 A Normalized Non-Stationary Wavelet Based Analysis Approach for a Computer Assisted Classification of Laryngoscopic High-Speed Video Recordings
Authors: Mona K. Fehling, Jakob Unger, Dietmar J. Hecker, Bernhard Schick, Joerg Lohscheller
Abstract:
Voice disorders originate from disturbances of the vibration patterns of the two vocal folds located within the human larynx. Consequently, the visual examination of vocal fold vibrations is an integral part of the clinical diagnostic process. For an objective analysis of the vocal fold vibration patterns, the two-dimensional vocal fold dynamics are captured during sustained phonation using an endoscopic high-speed camera. In this work, we present an approach allowing a fully automatic analysis of the high-speed video data, including a computerized classification of healthy and pathological voices. The approach is based on a wavelet analysis of so-called phonovibrograms (PVG), which are extracted from the high-speed videos and comprise the entire two-dimensional vibration pattern of each vocal fold individually. Using a principal component analysis (PCA) strategy, a low-dimensional feature set is computed from each phonovibrogram. From the PCA space, clinically relevant measures can be derived that objectively quantify vibration abnormalities. In the first part of the work it is shown that, using a machine learning approach, the derived measures are suitable for distinguishing automatically between healthy and pathological voices. Within the approach, the formation of the PCA space, and consequently the extracted quantitative measures, depend on the clinical data used to compute the principal components. Therefore, in the second part of the work we propose a strategy to achieve a normalization of the PCA space by registering it to a coordinate system using a set of synthetically generated vibration patterns. The results show that, owing to the normalization step, potential ambiguity of the parameter space can be eliminated. The normalization further allows a direct comparison of research results based on PCA spaces obtained from different clinical subjects.
Keywords: wavelet-based analysis, multiscale product, normalization, computer-assisted classification, high-speed laryngoscopy, vocal fold analysis, phonovibrogram
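A hedged sketch of the PCA-plus-classifier stage: the abstract does not name the machine learning model, so the SVM and the synthetic wavelet features below are assumptions for illustration only:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Stand-in features: one row of wavelet coefficients per phonovibrogram.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1, (40, 200)),      # "healthy" recordings
               rng.normal(0.6, 1, (40, 200))])     # "pathological" recordings
y = np.array([0] * 40 + [1] * 40)

# Low-dimensional PCA feature set, then a classifier on the PCA space.
clf = make_pipeline(PCA(n_components=10), SVC(kernel="rbf"))
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```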
Procedia PDF Downloads 265
904 Specification Requirements for a Combined Dehumidifier/Cooling Panel: A Global Scale Analysis
Authors: Damien Gondre, Hatem Ben Maad, Abdelkrim Trabelsi, Frédéric Kuznik, Joseph Virgone
Abstract:
The use of a radiant cooling solution would make it possible to lower cooling needs, which is of great interest when the demand is initially high (hot climates). However, radiant systems are not naturally compatible with humid climates, since a low-temperature surface leads to condensation risks as soon as the surface temperature is close to or lower than the dew point temperature. A radiant cooling system combined with a dehumidification system would make it possible to remove humidity from the space, thereby lowering the dew point temperature. The humidity removal needs to be especially effective near the cooled surface. This requirement could be fulfilled by a system using a single desiccant fluid for the removal of both excessive heat and moisture. This work aims at providing an estimation of the specification requirements of such a system in terms of the cooling power and dehumidification rate required to satisfy comfort criteria and to prevent any condensation risk on the cool panel surface. The present paper develops a preliminary study of the specification requirements, performance and behavior of a combined dehumidifier/cooling ceiling panel under different operating conditions. This study has been carried out using the TRNSYS software, which allows nodal calculations of thermal systems. It consists of the dynamic modeling of the heat and vapor balances of a 5 m x 3 m x 2.7 m office space. In a first design estimation, this room is equipped with an ideal heating, cooling, humidification and dehumidification system so that the room temperature is always maintained between 21 °C and 25 °C with a relative humidity between 40% and 60%. The room is also equipped with a ventilation system that includes a heat recovery heat exchanger and another heat exchanger connected to a heat sink. The main results show that the system should be designed to meet a cooling power of 42 W·m⁻² and a desiccant rate of 45 g H₂O·h⁻¹. Subsequently, a parametric study of comfort criteria and system performance was carried out on a more realistic system (including a chilled ceiling) under different operating conditions, enabling an estimation of an acceptable range of operating conditions. This preliminary study is intended to provide useful information for the system design.
Keywords: dehumidification, nodal calculation, radiant cooling panel, system sizing
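The condensation constraint hinges on the dew point. A small Python sketch using the Magnus approximation (a standard formula; the coefficients below are one common parameter set, not taken from the paper) shows the surface-temperature floor at the upper end of the comfort band studied:

```python
import math

def dew_point(temp_c, rh):
    # Magnus approximation; a = 17.62, b = 243.12 degC is a common
    # coefficient choice with adequate accuracy in the comfort range.
    a, b = 17.62, 243.12
    gamma = math.log(rh / 100.0) + a * temp_c / (b + temp_c)
    return b * gamma / (a - gamma)

# Upper end of the comfort band studied in the abstract: 25 degC, 60% RH.
dp = dew_point(25.0, 60.0)
print(f"dew point = {dp:.1f} degC; the panel surface must stay above it")
```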
Procedia PDF Downloads 175
903 Physics-Based Earthquake Source Models for Seismic Engineering: Analysis and Validation for Dip-Slip Faults
Authors: Percy Galvez, Anatoly Petukhin, Paul Somerville, Ken Miyakoshi, Kojiro Irikura, Daniel Peter
Abstract:
Physics-based dynamic rupture modelling is necessary for estimating parameters such as rupture velocity and the slip rate function that are important for ground motion simulation but poorly resolved by observations, e.g. by seismic source inversion. In order to generate a large number of physically self-consistent rupture models, whose rupture process is consistent with the spatio-temporal heterogeneity of past earthquakes, we use multicycle simulations under the heterogeneous rate-and-state (RS) friction law for a 45° dip-slip fault. We performed a parametrization study by fully dynamic rupture modeling, and then a set of spontaneous source models was generated over a large magnitude range (Mw > 7.0). In order to validate the rupture models, we compare the scaling relations of the modeled rupture area S, the average slip Dave and the slip asperity area Sa vs. seismic moment Mo with similar scaling relations from source inversions. Ground motions were also computed from our models; their peak ground velocities (PGV) agree well with the GMPE values. We obtained good agreement of the permanent surface offset values with empirical relations. From the heterogeneous rupture models, we analyzed parameters that are critical for ground motion simulations, i.e. the distributions of slip, slip rate, rupture initiation points, rupture velocities and source time functions. We studied cross-correlations between them and with the friction weakening distance Dc, the only initial heterogeneity parameter in our modeling. The main findings are: (1) high slip-rate areas coincide with, or are located on an outer edge of, the large slip areas; (2) ruptures have a tendency to initiate in small-Dc areas; and (3) high slip-rate areas correlate with areas of small Dc, large rupture velocity and short rise time.
Keywords: earthquake dynamics, strong ground motion prediction, seismic engineering, source characterization
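For reference, the scaling comparison relies on the standard definitions of seismic moment and moment magnitude (Hanks-Kanamori convention; the paper's exact constants may differ slightly):

```latex
M_o = \mu\, S\, D_{\mathrm{ave}}, \qquad
M_w = \tfrac{2}{3}\log_{10} M_o - 6.07 \quad (M_o\ \text{in N·m})
```

where \mu is the shear modulus, S the rupture area, and D_ave the average slip.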
Procedia PDF Downloads 144
902 In Silico Study of Antiviral Drugs Against Three Important Proteins of Sars-Cov-2 Using Molecular Docking Method
Authors: Alireza Jalalvand, Maryam Saleh, Somayeh Behjat Khatouni, Zahra Bahri Najafi, Foroozan Fatahinia, Narges Ismailzadeh, Behrokh Farahmand
Abstract:
Objective: The recent outbreak of coronavirus (SARS-CoV-2) has imposed a global pandemic on the world. Despite the increasing prevalence of the disease, there are no effective drugs to treat it. A suitable and rapid way to obtain an effective drug and treat the global pandemic is a computational drug study. This study used molecular docking methods to examine the potential inhibition by over 50 antiviral drugs of three fundamental proteins of SARS-CoV-2. Methods: Through a literature review, three important proteins (a key protease, the RNA-dependent RNA polymerase (RdRp), and the spike protein) were selected as drug targets. Three-dimensional (3D) structures of the protease, spike and RdRp proteins were obtained from the Protein Data Bank and energy-minimized. Over 50 antiviral drugs were considered candidates for protein inhibition, and their 3D structures were obtained from drug banks. AutoDock 4.2 software was used to define the molecular docking settings and run the algorithm. Results: Five drugs, namely indinavir, lopinavir, saquinavir, nelfinavir and remdesivir, exhibited the highest inhibitory potency against all three proteins, based on the binding energies and drug binding positions deduced from docking and hydrogen-bonding analysis. Conclusions: According to the results, among the drugs mentioned, saquinavir and lopinavir showed the highest inhibitory potency against all three proteins compared to the other drugs. They may enter laboratory-phase studies as a dual-drug treatment to inhibit SARS-CoV-2.
Keywords: covid-19, drug repositioning, molecular docking, lopinavir, saquinavir
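Docking studies of this kind reduce to ranking ligands by predicted binding energy across targets. A hedged Python sketch with placeholder values (not the study's actual AutoDock 4.2 outputs), ranking each drug by its weakest binding among the three proteins:

```python
# Hypothetical binding energies (kcal/mol) per drug and target; more
# negative means stronger predicted binding. Values are placeholders,
# not the study's actual docking results.
energies = {
    "saquinavir": {"protease": -9.2, "RdRp": -8.8, "spike": -8.5},
    "lopinavir":  {"protease": -9.0, "RdRp": -8.6, "spike": -8.1},
    "remdesivir": {"protease": -7.4, "RdRp": -8.2, "spike": -7.0},
}

# Rank candidates by their worst (least negative) score across the three
# targets, so a drug must bind all proteins reasonably well to rank highly.
ranked = sorted(energies, key=lambda d: max(energies[d].values()))
for drug in ranked:
    print(drug, energies[drug])
```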
Procedia PDF Downloads 88
901 Optimal Emergency Shipment Policy for a Single-Echelon Periodic Review Inventory System
Authors: Saeed Poormoaied, Zumbul Atan
Abstract:
Emergency shipments provide a powerful mechanism to alleviate the risk of imminent stock-outs and can result in substantial benefits in an inventory system. Customer satisfaction and a high service level are immediate consequences of utilizing emergency shipments. In this paper, we consider a single-echelon periodic review inventory system consisting of a single local warehouse replenished from a central warehouse with ample capacity, in an infinite horizon setting. Since the structure of the optimal policy appears to be complicated, we analyze this problem under an order-up-to-S inventory control policy framework, the (S, T) policy, with emergency shipments taken into consideration. In each period of the periodic review policy, there is a single opportunity, at any point in time, for an emergency shipment, so that in case of a stock-out an emergency shipment is requested. The goal is to determine the timing and amount of the emergency shipment during a period (the emergency shipment policy) as well as the base-stock periodic review policy parameters (the replenishment policy). We show how taking advantage of an emergency shipment during a period improves the performance of the classical (S, T) policy, especially when the fixed and unit emergency shipment costs are small. Investigating the structure of the objective function, we develop an exact algorithm for finding the optimal solution. We also provide a heuristic and an approximation algorithm for the periodic review inventory system problem. The experimental analyses indicate that the heuristic algorithm is computationally more efficient than the approximation algorithm, but in terms of solution efficiency the approximation algorithm performs very well. We achieve up to 13% cost savings in the (S, T) policy if we apply the proposed emergency shipment policy. Moreover, our computational results reveal that the approximated solution is often within 0.21% of the globally optimal solution.
Keywords: emergency shipment, inventory, periodic review policy, approximation algorithm
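A Monte Carlo sketch of the setting being analyzed: an order-up-to-S review every T periods, with an emergency shipment triggered at a stock-out. The Poisson demand, the cost constants and the simple "ship back up to S" emergency rule are illustrative assumptions, not the paper's optimal timing/quantity policy:

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate(S, T, periods=10_000, lam=4.0,
             c_hold=1.0, c_emerg_fixed=20.0, c_emerg_unit=5.0):
    """Average per-period cost of an order-up-to-S policy reviewed every
    T periods, with one emergency shipment whenever stock runs out."""
    inv, cost = S, 0.0
    for t in range(periods):
        inv -= rng.poisson(lam)                  # demand draw
        if inv < 0:                              # stock-out: emergency shipment
            cost += c_emerg_fixed + c_emerg_unit * (S - inv)
            inv = S
        cost += c_hold * max(inv, 0)             # holding cost
        if (t + 1) % T == 0:                     # periodic review: refill to S
            inv = S
    return cost / periods

for S in (10, 15, 20):
    print(f"S={S}: avg cost {simulate(S, T=3):.2f}")
```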
Procedia PDF Downloads 141
900 Fast Bayesian Inference of Multivariate Block-Nearest Neighbor Gaussian Process (NNGP) Models for Large Data
Authors: Carlos Gonzales, Zaida Quiroz, Marcos Prates
Abstract:
Several spatial variables collected at the same locations that share a common spatial distribution can be modeled simultaneously through a multivariate geostatistical model that takes into account the correlation between these variables and the spatial autocorrelation. The main goal of this model is to perform spatial prediction of these variables in the region of study. Here we focus on a geostatistical multivariate formulation that relies on shared common spatial random effect terms. In particular, the first response variable can be modeled by a mean that incorporates a shared random spatial effect, while the other response variables depend on this shared spatial term in addition to specific random spatial effects. Each spatial random effect is defined through a Gaussian process with a valid covariance function, but in order to improve computational efficiency when the data are large, each Gaussian process is approximated by a Gaussian Markov random field (GMRF), specifically the block nearest neighbor Gaussian process (Block-NNGP). This approach involves dividing the spatial domain into several dependent blocks under certain constraints, where the cross blocks allow capturing the spatial dependence on a large scale, while each individual block captures the spatial dependence on a smaller scale. The multivariate geostatistical model belongs to the class of latent Gaussian models; thus, to achieve fast Bayesian inference, the integrated nested Laplace approximation (INLA) method is used. The good performance of the proposed model is shown through simulations and applications to massive data.
Keywords: Block-NNGP, geostatistics, Gaussian process, GMRF, INLA, multivariate models
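One common way to write the shared-effect formulation the abstract describes, with loading parameters \lambda_j as an assumed detail rather than the paper's exact notation:

```latex
y_1(\mathbf{s}) = \mathbf{x}_1(\mathbf{s})^{\top}\boldsymbol{\beta}_1 + w(\mathbf{s}) + \varepsilon_1(\mathbf{s}), \qquad
y_j(\mathbf{s}) = \mathbf{x}_j(\mathbf{s})^{\top}\boldsymbol{\beta}_j + \lambda_j\, w(\mathbf{s}) + w_j(\mathbf{s}) + \varepsilon_j(\mathbf{s}), \quad j \ge 2
```

where w and the w_j are spatial processes approximated by the Block-NNGP and the \varepsilon terms are independent nugget effects.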
Procedia PDF Downloads 97
899 Advanced Compound Coating for Delaying Corrosion of Fast-Dissolving Alloy in High Temperature and Corrosive Environment
Authors: Lei Zhao, Yi Song, Tim Dunne, Jiaxiang (Jason) Ren, Wenhan Yue, Lei Yang, Li Wen, Yu Liu
Abstract:
Fast-dissolving magnesium (DM) alloy technology has contributed significantly to the "Shale Revolution" in the oil and gas industry. This application requires DM downhole tools to dissolve initially at a slow rate and then rapidly accelerate to a high rate after a certain period of operation time (typically 8 h to 2 days), a contradictory requirement that can hardly be addressed by traditional Mg alloying or processing alone. Premature disintegration of downhole DM tools has been broadly reported in field trials. To address this issue, "temporary" thin polymer coatings of various formulations are currently applied to DM surfaces to delay their initial dissolution. Due to the conveying of parts, harsh downhole conditions, and the high dissolution rate of the base material, the current delay coatings relying on pure polymers are found to perform well only at low temperatures (typically < 100 ℃) and on parts without sharp edges or corners, as severe geometries prevent high-quality thin-film coatings from forming effectively. In this study, a coating technology combining Plasma Electrolytic Oxide (PEO) coatings with advanced thin-film deposition has been developed, which can delay the dissolution of complex DM parts (with sharp corners) in corrosive fluid at 150 ℃ for over 2 days. Synergistic effects between the porous, hard PEO coating and the chemically inert elastic-polymer sealing lead to this improvement in delayed dissolution, and strong chemical/physical bonding between these two layers has been found to play an essential role. The microstructure of this advanced coating and the compatibility between PEO and various polymer selections have been thoroughly investigated, and a model is proposed to explain the delaying performance. This study could not only benefit the oil and gas industry in unplugging High Temperature High Pressure (HTHP) unconventional resources inaccessible before, but also potentially provides a technical route for other industries (e.g., bio-medical, automobile, aerospace) where primer anti-corrosive protection on light Mg alloys is in high demand.
Keywords: dissolvable magnesium, coating, plasma electrolytic oxide, sealer
Procedia PDF Downloads 111
898 Scheduling Jobs with Stochastic Processing Times or Due Dates on a Server to Minimize the Number of Tardy Jobs
Authors: H. M. Soroush
Abstract:
The problem of scheduling products and services for on-time delivery is of paramount importance in today's competitive environments. It arises in many manufacturing and service organizations where it is desirable to complete jobs (products or services) with different weights (penalties) on or before their due dates. In such environments, schedulers should frequently decide whether to schedule a job based on its processing time, due date and the penalty for tardy delivery in order to improve system performance. For example, it is common to measure the weighted number of late jobs or the percentage of on-time shipments to evaluate the performance of a semiconductor production facility or an automobile assembly line. In this paper, we address the problem of scheduling a set of jobs on a server where the processing times or due dates of jobs are random variables and fixed weights (penalties) are imposed on the jobs' late deliveries. The goal is to find the schedule that minimizes the expected weighted number of tardy jobs. The problem is NP-hard to solve; however, we explore three scenarios of the problem wherein: (i) both processing times and due dates are stochastic; (ii) processing times are stochastic and due dates are deterministic; and (iii) processing times are deterministic and due dates are stochastic. We prove that special cases of these scenarios are solvable optimally in polynomial time, and we introduce efficient heuristic methods for the general cases. Our computational results show that the heuristics perform well in yielding either optimal or near-optimal sequences. The results also demonstrate that the stochasticity of processing times or due dates can affect scheduling decisions. Moreover, the proposed problem is general in the sense that its special cases reduce to some new and some classical stochastic single machine models.
Keywords: number of late jobs, scheduling, single server, stochastic
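The objective itself is easy to estimate by simulation for any fixed sequence: completion times are cumulative sums of processing times, and a job is tardy when its completion time exceeds its due date. A hedged Python sketch for scenario (ii), with exponential processing times as an assumed distribution (the paper does not fix one):

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_weighted_tardy(order, mean_p, due, w, reps=5000):
    """Estimate E[sum of w_j * 1{C_j > d_j}] for a fixed job sequence when
    processing times are exponential with the given means (scenario (ii):
    stochastic times, deterministic due dates)."""
    mean_p, due, w = (np.asarray(a, float)[list(order)]
                      for a in (mean_p, due, w))
    p = rng.exponential(mean_p, size=(reps, len(order)))
    C = p.cumsum(axis=1)                 # completion times per replication
    return (w * (C > due)).sum(axis=1).mean()

mean_p = [2.0, 3.0, 1.0]                 # mean processing times
due = [3.0, 6.0, 5.0]                    # deterministic due dates
w = [5.0, 1.0, 2.0]                      # tardiness weights (penalties)
for order in [(0, 1, 2), (2, 0, 1), (0, 2, 1)]:
    print(order, round(expected_weighted_tardy(order, mean_p, due, w), 3))
```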
Procedia PDF Downloads 497
897 Flipped Learning in Interpreter Training: Technologies, Activities and Student Perceptions
Authors: Dohun Kim
Abstract:
Technological innovations have stimulated flipped learning in many disciplines, including language teaching. It is a specific type of blended learning, which combines onsite (i.e. face-to-face) with online experiences to produce effective, efficient and flexible learning. Flipped learning literally 'flips' conventional teaching and learning activities upside down: it leverages technologies to deliver a lecture and direct instruction, as well as other asynchronous activities, outside the classroom, reserving onsite time for interaction and activities in the upper cognitive realms: applying, analysing, evaluating and creating. Unlike the conventional flipped approaches, which focused on a video lecture followed by a face-to-face or onsite session, newer innovative methods incorporate various means and structures to serve the needs of different academic disciplines and classrooms. In the light of such innovations, this study adopted 'student-engaged' approaches to interpreter training and contrasts them with traditional classrooms. To this end, students were encouraged to engage in asynchronous activities online, and innovative technologies, such as Telepresence, were employed. Based on the class implementation, a thorough examination was conducted of how flipped classrooms for language and interpreting training can be structured and implemented while actively engaging learners. This study adopted a quantitative research method, complemented by a qualitative one. The key findings suggest, first, that the significance of the instructor's role does not dwindle, but that the role changes to that of a moderator and a facilitator. Second, flipped learning can be applied to both theory- and practice-oriented modules. Third, students' integration into the community of inquiry is of significant importance for fostering active and higher-order learning. Fourth, cognitive presence and competence can be enhanced through strengthened and integrated teaching and social presences. A well-orchestrated teaching presence stimulates students to identify problems and voice convergences and divergences, while a fluid social presence facilitates the exchange of knowledge and the adjustment of solutions, which eventually contributes to consolidating cognitive presence, a key ingredient that enables the application and testing of solutions and reflection thereon.
Keywords: blended learning, Community of Inquiry, flipped learning, interpreter training, student-centred learning
Procedia PDF Downloads 196
896 Uranoplasty Using Tongue Flap for Bilateral Clefts
Authors: Saidasanov Saidazal Shokhmurodovich, Topolnickiy Orest Zinovyevich, Afaunova Olga Arturovna
Abstract:
Relevance: The bilateral congenital cleft is one of the most complex forms of cleft, which makes it difficult to choose a surgical method of treatment. During primary operations to close the hard and soft palate, there is a shortage of soft tissues, and their lack during standard uranoplasty aggravates the rehabilitation period of these patients. Materials and Methods: The results of surgical treatment of children with bilateral clefts who underwent uranoplasty using a tongue flap were analyzed. The study used clinical and statistical methods, which allowed us to solve the tasks set, based on the principles of evidence-based medicine. Results and Discussion: Our study included 15 patients, 9 boys and 6 girls aged 2.5 to 6 years, who underwent surgical treatment consisting of two-stage uranoplasty using a tongue flap. The first stage was veloplasty. The second stage was uranoplasty using a flap from the tongue. In all patients, the width of the cleft ranged from 1.6 to 2.8 cm. All patients in this group were prepared orthodontically. Using this method, the surgeon can achieve the following results: maximum narrowing of the palatopharyngeal ring, a long soft palate, complete closure of the hard palate and alveolar process, and suturing of the mucous membrane of the nasal cavity, which creates good conditions for the next stage of osteoplastic surgery. Based on the results obtained, patients show positive progress when working with a speech therapist. In all patients, the dynamics were positive, without complications. Conclusions: Based on our observations, tongue flap uranoplasty is one of the effective techniques for patients with wide clefts of the hard and soft palate. The use of a flap from the tongue makes it possible to reduce the number of reoperations and improve the quality of social adaptation of this group of patients, which is one of the important stages of rehabilitation. Upon completion of the stages of rehabilitation, all patients showed maximum improvement in functional, anatomical and social indicators.
Keywords: congenital cleft lip and palate, bilateral cleft, child surgery, maxillofacial surgery
Procedia PDF Downloads 120
895 Gender Justice and Empowerment: A Study of Chhara Bootlegger Women of Ahmedabad
Authors: Neeta Khurana, Ritu Sharma
Abstract:
This paper is an impact assessment study of the rehabilitation work done for Chhara women in the rural precincts of Ahmedabad. The Chharas constitute a denotified tribe and live in abject poverty. The women of this community are infamous absconders from the law and active bootleggers of locally made liquor. As part of a psychological study with a local NGO, the authors headed a training program aimed at rehabilitating these women and providing them with alternative modes of employment, thereby driving them away from a life of crime. The paper centers on the ideas of women's entrepreneurship and women's empowerment. It notes the importance of handholding in a conflict situation. Most of the research on the Chharas either focuses on victimising them through state-sponsored violence or makes a plea for reconditioning them into the mainstream. Going against this trend, this paper, which documents the study, argues that making these poor women self-dependent is a panacea for their sluggish development. The alienation caused by the demonisation of the community has made them abandon traditional modes of employment. This has further led the community astray into making illegal country liquor, causing further damage to their reputation. Women are at the centre of this vicious circle, facing much repression and ostracisation. The study conducted by the PDPU team was an attempt to change this dogmatic alienation of these poor women. It was found that, with consistent support and a reformist approach towards the law, it is possible to drive these women away from a life of penury, repression and crime. The aforementioned study uses empirical tools to verify this claim. Placed at the confluence of the sociology of gender and psychology, this paper argues that law enforcement cannot be effective without sensitisation to the ground realities of conflict. The study from which the paper borrows was a scientific survey focused on markers of the gender and caste realities of the Chharas. The paper describes the various dynamics involved in the training program that paved the way for the successful employment of the women. In an attempt to explain its uniqueness, the paper also has a section comparing similar social experiments.
Keywords: employment, gender, handholding, rehabilitation
Procedia PDF Downloads 131
894 Hardy Type Inequalities of Two-Dimensional on Time Scales via Steklov Operator
Authors: Wedad Albalawi
Abstract:
Mathematical inequalities have been at the core of mathematical study and are used in almost all branches of mathematics, as well as in various areas of science and engineering. The inequalities of Hardy, Littlewood and Pólya were the first significant compilation in this field; that work presented fundamental ideas, results and techniques, and it has had much influence on research in various branches of analysis. Since 1934, various inequalities have been produced and studied in the literature. Furthermore, some inequalities have been formulated via operators: in 1989, weighted Hardy inequalities were obtained for integration operators. Weighted estimates were then obtained for Steklov operators, which were used in the solution of the Cauchy problem for the wave equation. These were improved upon in 2011 to include the boundedness of integral operators from the weighted Sobolev space to the weighted Lebesgue space. Some inequalities have been demonstrated and improved using the Hardy-Steklov operator. Recently, many integral inequalities have been improved by differential operators. The Hardy inequality has been one of the tools used to study the integrability of solutions of differential equations. Dynamic inequalities of Hardy and Copson type have since been extended and improved by various integral operators. These inequalities are interesting to apply in different fields of mathematics (function spaces, partial differential equations, mathematical modeling). Some inequalities involving Copson and Hardy inequalities have appeared on time scales, yielding new special versions of them. A time scale is defined as an arbitrary nonempty closed subset of the real numbers. Inequalities in the time-scale setting have received a lot of attention and form a major field in both pure and applied mathematics. There are many applications of dynamic equations on time scales in quantum mechanics, electrical engineering, neural networks, heat transfer, combinatorics, and population dynamics. This study focuses on double integrals to obtain new time-scale inequalities of Copson type driven by the Steklov operator, to be applied in the solution of the Cauchy problem for the wave equation. The proofs can be carried out by introducing restrictions on the operator in several cases, using concepts from the time-scale calculus such as Fubini's theorem and Hölder's inequality.
Keywords: time scales, inequality of Hardy, inequality of Copson, Steklov operator
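For orientation, the classical Hardy inequality referred to above reads:

```latex
\int_0^{\infty}\left(\frac{1}{x}\int_0^{x} f(t)\,dt\right)^{p} dx
\;\le\; \left(\frac{p}{p-1}\right)^{p}\int_0^{\infty} f(x)^{p}\,dx,
\qquad p>1,\ f\ge 0
```

with the constant (p/(p-1))^p being sharp; the time-scale versions studied here generalize this integral form.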
Procedia PDF Downloads 76
893 Faster Pedestrian Recognition Using Deformable Part Models
Authors: Alessandro Preziosi, Antonio Prioletti, Luca Castangia
Abstract:
Deformable part models achieve high precision in pedestrian recognition, but all publicly available implementations are too slow for real-time applications. We implemented a deformable part model (DPM) algorithm fast enough for real-time use by exploiting information about the camera position and orientation. This implementation is both faster and more precise than alternative DPM implementations. These results are obtained by computing convolutions in the frequency domain and using lookup tables to speed up feature computation. This approach is almost an order of magnitude faster than the reference DPM implementation, with no loss in precision. Knowing the position of the camera with respect to the horizon, it is also possible to prune many hypotheses based on their size and location. The range of acceptable sizes and positions is set by looking at the statistical distribution of bounding boxes in labelled images. With this approach it is not necessary to compute the entire feature pyramid: for example, higher-resolution features are only needed near the horizon. This results in an increase in mean average precision of 5% and an increase in speed by a factor of two. Furthermore, to reduce misdetections involving small pedestrians near the horizon, input images are supersampled near the horizon. Supersampling the image at 1.5 times the original scale results in an increase in precision of about 4%. The implementation was tested against the public KITTI dataset, obtaining an 8% improvement in mean average precision over the best-performing DPM-based method. By allowing for a small loss in precision, computational time can easily be brought down to our target of 100 ms per image, yielding a solution that is faster and still more precise than all publicly available DPM implementations.
Keywords: autonomous vehicles, deformable part model, DPM, pedestrian detection, real time
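The frequency-domain trick is standard: convolving a feature map with a part filter via the FFT matches direct convolution while scaling far better with filter size. A small sketch, with random arrays standing in for HOG-like features and a trained DPM filter:

```python
import numpy as np
from scipy.signal import fftconvolve, convolve2d

rng = np.random.default_rng(0)
feat = rng.normal(size=(256, 256))   # one channel of a HOG-like feature map
filt = rng.normal(size=(12, 12))     # one DPM part filter (placeholder)

# Frequency-domain convolution: O(N log N) rather than O(N * K) per filter,
# which is where much of the order-of-magnitude speed-up comes from.
resp_fft = fftconvolve(feat, filt, mode="valid")

# Direct convolution gives the same response map, only slower.
resp_dir = convolve2d(feat, filt, mode="valid")
print("responses match:", np.allclose(resp_fft, resp_dir))
```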
Procedia PDF Downloads 281
892 A Review Study on the Importance and Correlation of Crisis Literacy and Media Communications for Vulnerable Marginalized People During Crisis
Authors: Maryam Jabeen
Abstract:
In recent times, there has been a notable surge of attention towards diverse literacy concepts such as media literacy, information literacy, and digital literacy. These concepts have garnered escalating interest, spurring the emergence of novel approaches, particularly in the aftermath of the Covid-19 crisis. However, amidst discussions of crises, the domain of crisis literacy remains largely uncharted in academic exploration. Crisis literacy, also referred to as disaster literacy, denotes an individual's aptitude not only to comprehend but also to effectively apply information, enabling well-informed decision-making and adherence to instructions about disaster mitigation, preparedness, response, and recovery. This theoretical and descriptive study seeks to transcend foundational literacy concepts, underscoring the urgency of an in-depth exploration of crisis literacy and its interplay with the realm of media communication. Given the profound impact of the pandemic experience and the looming uncertainty of potential future crises, there is a pressing need to elevate crisis literacy, or disaster literacy, towards heightened autonomy and active involvement within the spheres of critical disaster preparedness, recovery initiatives, and media communication. This research paper is part of my ongoing Ph.D. research, which explores, on a broader level, the encoding and decoding of media communications in relation to crisis literacy. The primary objective of this paper is to expound a descriptive, theoretical research endeavor delving into this domain. The emphasis lies in highlighting the paramount significance of media communications in crisis literacy, coupled with an accentuated focus on their role in providing information to marginalized populations amidst crises. In conclusion, this research bridges the gap in the exploration of the correlation between crisis literacy and media communications, advocating for a comprehensive understanding of its dynamics and its symbiotic relationship with media communications. It intends to foster a heightened sense of crisis literacy, particularly within marginalized communities, catalyzing proactive participation in disaster preparedness, recovery processes, and adept media interactions.
Keywords: covid-19, crisis literacy, crisis, marginalized, media and communications, pandemic, vulnerable people
Procedia PDF Downloads 62
891 Performance Comparison of Deep Convolutional Neural Networks for Binary Classification of Fine-Grained Leaf Images
Authors: Kamal KC, Zhendong Yin, Dasen Li, Zhilu Wu
Abstract:
Intra-plant disease classification based on leaf images is a challenging computer vision task due to similarities in the texture, color and shape of leaves with only slight variations in leaf spots, and due to external environmental changes such as lighting and background noise. The deep convolutional neural network (DCNN) has proven to be an effective tool for binary classification. In this paper, two methods for the binary classification of diseased plant leaves using DCNNs are presented: models created from scratch and transfer learning. Our main contribution is a thorough evaluation of 4 networks created from scratch and transfer learning with 5 pre-trained models. Training and testing of these models were performed on a plant leaf image dataset belonging to 16 distinct classes, containing a total of 22,265 images from 8 different plants, each consisting of a pair of healthy and diseased leaves. We introduce a deep CNN model, Optimized MobileNet. This model, with the depthwise separable CNN as a building block, attained an average test accuracy of 99.77%. We also present a fine-tuning method by introducing the concept of a convolutional block, which is a collection of different deep neural layers. The fine-tuned models proved to be efficient in terms of accuracy and computational cost. Fine-tuned MobileNet achieved an average test accuracy of 99.89% on 8 pairs of [healthy, diseased] leaf image sets.
Keywords: deep convolutional neural network, depthwise separable convolution, fine-grained classification, MobileNet, plant disease, transfer learning
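A hedged Keras sketch of the transfer-learning setup: a frozen MobileNet backbone with a fresh binary head. The input size, dropout rate and head layout are assumptions for illustration, not the paper's Optimized MobileNet or its fine-tuned convolutional blocks:

```python
import tensorflow as tf

# Pre-trained MobileNet backbone with the ImageNet classification head
# removed; only the new binary head is trained in this first stage.
base = tf.keras.applications.MobileNet(weights="imagenet", include_top=False,
                                       input_shape=(224, 224, 3))
base.trainable = False                           # freeze the backbone

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # healthy vs diseased
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()
```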
Procedia PDF Downloads 186
890 Creating Smart and Healthy Cities by Exploring the Potentials of Emerging Technologies and Social Innovation for Urban Efficiency: Lessons from the Innovative City of Boston
Authors: Mohammed Agbali, Claudia Trillo, Yusuf Arayici, Terrence Fernando
Abstract:
The wide-spread adoption of the Smart City concept has introduced a new era of computing paradigms, with opportunities for city administrators and stakeholders in various sectors to rethink the concept of urbanization and the development of healthy cities. With the world population rapidly becoming urban-centric, especially amongst the emerging economies, social innovation will greatly assist in deploying emerging technologies to address the development challenges in core sectors of future cities. In this context, sustainable healthcare delivery and an improved quality of life for the people are considered to be at the heart of the healthy city agenda. This paper examines the Boston innovation landscape from the perspective of smart services and the innovation ecosystem for sustainable development, especially in transportation and healthcare. It investigates the policy implementation process of the Healthy City agenda and eHealth economy innovation based on the experience of the initiatives of the City of Boston, Massachusetts. For this purpose, three emerging areas are emphasized, namely the eHealth concept, the innovation hubs, and the emerging technologies that drive innovation. This was carried out through an empirical analysis of the results of public-sector and industry-wide interviews and surveys about Boston's current initiatives and the enabling environment. The paper highlights a few potential research directions for service integration and social innovation in deploying emerging technologies in the healthy city agenda. The study therefore suggests the need to prioritize social innovation as an overarching strategy for building sustainable Smart Cities in order to avoid technology lock-in. Finally, it concludes that the Boston example of an innovation economy is unique in view of the existing platforms for innovation and the proper understanding of its dynamics, which is imperative for building smart and healthy cities where the quality of life of the citizenry can be improved.
Keywords: computing paradigm, emerging technologies, equitable healthcare, healthy cities, open data, smart city, social innovation
Procedia PDF Downloads 336
889 Effects of Renin Angiotensin Pathway Inhibition on Efficacy of Anti-PD-1/PD-L1 Treatment in Metastatic Cancer
Authors: Philip Friedlander, John Rutledge, Jason Suh
Abstract:
Inhibition of programmed death-1 (PD-1) or its ligand PD-L1 confers therapeutic efficacy in a wide range of solid tumor malignancies. Primary or acquired resistance can develop through the activation of immunosuppressive immune cells such as tumor-associated macrophages. The renin angiotensin system (RAS) systemically regulates fluid and sodium hemodynamics, but its components are also expressed on, and regulate the activity of, immune cells, particularly those of myeloid lineage. We hypothesized that inhibition of the RAS would improve the efficacy of PD-1/PD-L1 treatment. A retrospective analysis was performed through a chart review of patients with solid metastatic malignancies treated with a PD-1/PD-L1 inhibitor between 1/2013 and 6/2019 at Valley Hospital, a community hospital in New Jersey, USA. Efficacy was determined by medical oncologists' documentation of clinical benefit in visit notes and by the duration of time on immunotherapy treatment. The primary endpoint was the determination of differences in efficacy between patients treated with an inhibitor of the RAS (an ACE inhibitor, ACEi, or an angiotensin receptor blocker, ARB) and patients not treated with these inhibitors. To control for broader antihypertensive effects, efficacy as a function of treatment with beta blockers was also assessed. 173 patients treated with PD-1/PD-L1 inhibitors were identified, of whom 52 were also treated with an ACEi or ARB. Chi-square testing revealed a statistically significant relationship between being on an ACEi or ARB and the efficacy of PD-1/PD-L1 therapy (p=0.001). No statistically significant relationship was seen between patients taking or not taking beta blocker antihypertensives (p=0.33). Kaplan-Meier analysis showed a statistically significant improvement in the duration of therapy favoring patients concomitantly treated with an ACEi or ARB compared to patients not exposed to antihypertensives and to those treated with beta blockers. Logistic regression analysis revealed that age, gender, and cancer type did not have significant effects on the odds of experiencing clinical benefit (p=0.74, p=0.75, and p=0.81, respectively). We conclude that retrospective analysis of the treatment of patients with solid metastatic tumors with anti-PD-1/PD-L1 in a community setting demonstrates greater clinical benefit in the context of concomitant ACEi or ARB inhibition, irrespective of gender or age. These data support further prospective assessment through randomized clinical trials.
Keywords: angiotensin, cancer, immunotherapy, PD-1, efficacy
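The primary comparison is a 2x2 chi-square test of ACEi/ARB use against clinical benefit. A sketch with scipy; the cell counts below are invented for illustration (only the 52-of-173 split is reported above), not the study's data:

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x2 counts: rows = on ACEi/ARB vs not, columns =
# clinical benefit vs none. Row totals echo the cohort sizes reported
# (52 of 173 on ACEi/ARB); the benefit split is illustrative only.
table = [[38, 14],    # ACEi/ARB: benefit, no benefit
         [55, 66]]    # no ACEi/ARB: benefit, no benefit
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
```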
Procedia PDF Downloads 76
888 Prompt Design for Code Generation in Data Analysis Using Large Language Models
Authors: Lu Song Ma Li Zhi
Abstract:
With the rapid advancement of artificial intelligence technology, large language models (LLMs) have become a milestone in the field of natural language processing, demonstrating remarkable capabilities in semantic understanding, intelligent question answering, and text generation. These models are gradually penetrating various industries and show particularly significant application potential in the data analysis domain. However, retraining or fine-tuning these models requires substantial computational resources and ample downstream task datasets, which poses a significant challenge for many enterprises and research institutions. Without modifying the internal parameters of large models, prompt engineering techniques can rapidly adapt them to new domains. This paper proposes a prompt design strategy aimed at leveraging the capabilities of large language models to automate the generation of data analysis code. With carefully designed prompts, data analysis requirements can be described in natural language, which the large language model can then understand and convert into executable data analysis code, greatly enhancing the efficiency and convenience of data analysis. This strategy not only lowers the threshold for using large models but also significantly improves the accuracy and efficiency of data analysis. Our approach includes requirements for the precision of natural language descriptions, coverage of diverse data analysis needs, and mechanisms for immediate feedback and adjustment. Experimental results show that with this prompt design strategy, large language models perform exceptionally well on multiple data analysis tasks, generating high-quality code and significantly shortening the data analysis cycle. This method provides an efficient and convenient tool for the data analysis field and demonstrates the enormous potential of large language models in practical applications.
Keywords: large language models, prompt design, data analysis, code generation
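A minimal sketch of the kind of prompt such a strategy might use: a natural-language request plus the dataset schema, with the model asked to return runnable code only. The template wording and the `call_llm` wrapper are hypothetical, not the paper's prompts or API:

```python
# Hedged prompt template: describe the analysis in natural language,
# supply the data schema, and constrain the output to executable code.
PROMPT_TEMPLATE = """You are a data analyst. Write Python (pandas) code only.

Dataset columns: {schema}
Task: {request}

Requirements:
- Load the data from '{path}'.
- Print the result of the analysis.
- Return a single runnable script with no explanations.
"""

prompt = PROMPT_TEMPLATE.format(
    schema="date, region, sales, units",
    request="Report monthly total sales per region and the top region overall.",
    path="sales.csv",
)
# code = call_llm(prompt)  # hypothetical LLM call; review/execute `code` after
print(prompt)
```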
Procedia PDF Downloads 39887 The Relationship between Central Bank Independence and Inflation: Evidence from Africa
Authors: R. Bhattu Babajee, Marie Sandrine Estelle Benoit
Abstract:
The past decades have witnessed a considerable institutional shift towards central bank independence across the economies of the world. The motivation behind such a change is the acceptance that increased central bank autonomy can alleviate inflation bias. Hence, whether central bank independence acts as a significant factor behind price stability in African economies, or whether this macroeconomic outcome results from other economic, political, or social factors, is a pertinent question. The main research objective of this paper is to assess the relationship between central bank autonomy and inflation in African economies, where inflation has proved to be a serious problem. To this end, we measure the degree of CBI in Africa by computing the turnover rates of central bank governors, thereby studying whether decisions made by African central banks are affected by external forces. The purpose of this study is to investigate empirically the association between Central Bank Independence (CBI) and inflation for 10 African economies over a period of 17 years, from 1995 to 2012. The sample includes Botswana, Egypt, Ghana, Kenya, Madagascar, Mauritius, Mozambique, Nigeria, South Africa, and Uganda. In contrast to much of the existing empirical research, we do not use the usual static panel model, as it is prone to misspecification arising from the absence of dynamics. Instead, a dynamic panel data model integrating several control variables is used. Firstly, the analysis includes dynamic terms to capture the persistence of inflation: since inflation inertia is very likely in African countries, lagged inflation must be included in the empirical model. Secondly, because of the known reverse causality between central bank independence and inflation, the system generalized method of moments (GMM) is employed; GMM estimators accommodate unknown forms of heteroskedasticity as well as autocorrelation in the error term. Thirdly, control variables are used to enhance the efficiency of the model. The main finding of this paper is that central bank independence is negatively associated with inflation even after including control variables.Keywords: central bank independence, inflation, macroeconomic variables, price stability
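A minimal sketch, in LaTeX, of the kind of dynamic panel specification the abstract describes; the notation is assumed for illustration and is not the authors' own:

\[
  \pi_{it} = \alpha\,\pi_{i,t-1} + \beta\,\mathrm{CBI}_{it} + \gamma' X_{it} + \mu_i + \varepsilon_{it}
\]

where \(\pi_{it}\) is inflation in country \(i\) in year \(t\), \(\mathrm{CBI}_{it}\) is the governor-turnover proxy for independence, \(X_{it}\) collects the control variables, and \(\mu_i\) is a country effect; system GMM instruments \(\pi_{i,t-1}\) with its deeper lags to address the reverse causality noted above.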
Procedia PDF Downloads 364886 ADP Approach to Evaluate the Blood Supply Network of Ontario
Authors: Usama Abdulwahab, Mohammed Wahab
Abstract:
This paper presents the application of the uncapacitated facility location problem (UFLP) and the 1-median problem to support decision making in blood supply chain networks. A plethora of factors make blood supply chain networks a complex yet vital problem for the regional blood bank: rapidly increasing demand, the criticality of the product, strict storage and handling requirements, and the vastness of the theater of operations. As in the UFLP, facilities can be opened at any of m predefined locations with given fixed costs, and clients have to be allocated to the open facilities. In classical location models, the allocation cost is the distance between a client and an open facility; in this model, the costs comprise allocation, transportation, and inventory costs. To address this problem, the median algorithm is used to analyze inventory, evaluate supply chain status, monitor performance metrics at different levels of granularity, and detect potential problems and opportunities for improvement. Euclidean distance data for a set of Ontario cities (demand nodes) are used to test the developed algorithm. The SITATION software, a Lagrangian relaxation algorithm, and branch-and-bound heuristics are used to solve the model. Computational experiments confirm the efficiency of the proposed approach. Compared to existing modeling and solution methods, the median algorithm approach not only provides a more general modeling framework but also leads to efficient solution times.Keywords: approximate dynamic programming, facility location, perishable product, inventory model, blood platelet, P-median problem
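For illustration, a minimal Python sketch of the 1-median building block the abstract mentions: choose the single open facility among candidate sites that minimizes total demand-weighted Euclidean allocation cost. The coordinates and demands are hypothetical, not the Ontario data, and fixed, transportation, and inventory costs are omitted.

import math

cities = {  # name: (x, y, demand) -- hypothetical values
    "A": (0.0, 0.0, 120),
    "B": (3.0, 4.0, 80),
    "C": (6.0, 1.0, 150),
    "D": (2.0, 7.0, 60),
}

def weighted_cost(site):
    """Total demand-weighted Euclidean distance if `site` hosts the facility."""
    sx, sy, _ = cities[site]
    return sum(d * math.hypot(x - sx, y - sy)
               for x, y, d in cities.values())

# 1-median: open the single site minimizing total allocation cost.
best = min(cities, key=weighted_cost)
print(best, round(weighted_cost(best), 1))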
Procedia PDF Downloads 506885 Identification and Management of Septic Arthritis of the Untouched Glenohumeral Joint
Authors: Sumit Kanwar, Manisha Chand, Gregory Gilot
Abstract:
Background: Septic arthritis of the shoulder has infrequently been discussed, and infection of the untouched shoulder has not heretofore been described. We present four patients with glenohumeral septic arthritis. Methods: Case 1: a 59-year-old male with left shoulder pain in the anterior, posterior, and superior aspects. Case 2: a 60-year-old male with fever, chills, and generalized muscle aches. Case 3: a 70-year-old male with right shoulder pain about the anterior and posterior aspects. Case 4: a 55-year-old male with global right shoulder pain, swelling, and limited range of motion (ROM). Results: In Case 1, the left shoulder was affected. On physical examination, swelling was notable, and there was global tenderness with painful ROM. Laboratory values indicated an erythrocyte sedimentation rate (ESR) of 96 and a C-reactive protein (CRP) of 304.30. On imaging, MRI indicated high suspicion for an abscess with osteomyelitis of the humeral head. In our second case, the left arm was affected; he had swelling, global tenderness, and painful ROM. His ESR was 38 and CRP 14.9, and X-ray showed severe arthritis. In Case 3, the right arm was affected, again with global tenderness and painful ROM. His ESR was 94 and CRP 10.6; X-ray displayed an eroded glenoid space. In our fourth case, the right shoulder was affected, with global tenderness and painful, limited ROM. ESR was 108 and CRP 2.4; X-ray was non-significant. Discussion: Monoarticular septic arthritis of the virgin glenohumeral joint is seldom diagnosed in clinical practice. Common denominators include elevated ESR; painful, limited ROM; and involvement of the dominant arm. The male population is more frequently affected, with an average age of 57. Septic arthritis is managed with incision and drainage or needle aspiration of synovial fluid, supplemented with 3-6 weeks of intravenous antibiotics. Arthroscopy is preferred because of better irrigation and joint visualization; open surgical drainage may be indicated if these methods fail. Conclusion: If a middle-aged male presents with vague anterior or posterior shoulder pain, elevated inflammatory markers, and a low-grade fever, an X-ray should be performed. If this displays degenerative joint disease, complete the further workup with advanced imaging such as MRI, CT, or ultrasound. If these imaging modalities display anterior-space joint effusion with soft tissue involvement, septic arthritis of the untouched glenohumeral joint should be suspected and surgery is indicated.Keywords: glenohumeral joint, identification, infection, septic arthritis, shoulder
Procedia PDF Downloads 422884 Rethinking the Value of Pancreatic Cyst CEA Levels from Endoscopic Ultrasound Fine-Needle Aspiration (EUS-FNA): A Longitudinal Analysis
Authors: Giselle Tran, Ralitza Parina, Phuong T. Nguyen
Abstract:
Background/Aims: Pancreatic cysts (PC) have become an increasingly common entity, often diagnosed as incidental findings on cross-sectional imaging. Clinically, management of these lesions is difficult because of uncertainty about their potential for malignant degeneration. Prior series have reported that carcinoembryonic antigen (CEA), a biomarker measured in fluid from cyst aspiration, has high diagnostic accuracy for discriminating between mucinous and non-mucinous lesions at the patient's initial presentation. To the authors' best knowledge, no prior studies have reported PC CEA levels obtained from endoscopic ultrasound fine-needle aspiration (EUS-FNA) over years of serial EUS surveillance imaging. Methods: We report a consecutive retrospective series of 624 patients who underwent EUS evaluation for a PC between 11/20/2009 and 11/13/2018. Of these, 401 patients had CEA values obtained at the point of entry, and 157 had two or more CEA values obtained over the course of their EUS surveillance. Among these 157 patients (96 F, 61 M; mean age 68 [range, 62-76]), the mean interval of EUS follow-up was 29.7 months [3.5-128] and the mean number of EUS procedures was 3 [2-7]. To assess CEA value fluctuations, we defined an appreciable increase in CEA as a "spike": a two-fold increase in CEA on a subsequent EUS-FNA of the same cyst, with the second CEA value being greater than 1000 ng/mL. Using this definition, cysts with a spike in CEA were compared to those without a spike in a bivariate analysis to determine whether a CEA spike is associated with poorer outcomes and the presence of high-risk features. Results: Of the 157 patients analyzed, 29 had a spike in CEA. Of these 29 patients, 5 had a cyst with a size increase >0.5 cm (p=0.93); 2 had a large cyst, >3 cm (p=0.77); 1 had a cyst that developed a new solid component (p=0.03); 7 had a cyst with a solid component at any time during surveillance (p=0.08); 21 had a complex cyst (p=0.34); 4 had a cyst categorized as "Statistically Higher Risk" based on molecular analysis (p=0.11); and 0 underwent surgical resection (p=0.28). Conclusion: With serial EUS imaging in the surveillance of PC, an increase in CEA level meeting the spike definition did not predict poorer outcomes. Most notably, a spike in CEA did not correlate with the number of patients sent to surgery or with an appreciable increase in cyst size, nor did it correlate with the development of a solid nodule within the PC or with progression on molecular analysis. Future studies should focus on the selective use of CEA analysis when patients undergo EUS surveillance for PCs.Keywords: carcinoembryonic antigen (CEA), endoscopic ultrasound (EUS), fine-needle aspiration (FNA), pancreatic cyst, spike
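The abstract's spike rule is directly computable; below is a minimal Python sketch under the assumption that "subsequent" means the next CEA measurement of the same cyst (the function name is hypothetical).

def has_cea_spike(cea_values):
    """True if any consecutive pair of CEA values (ng/mL) shows at least a
    two-fold increase with the later value exceeding 1000 ng/mL."""
    return any(later >= 2 * earlier and later > 1000
               for earlier, later in zip(cea_values, cea_values[1:]))

print(has_cea_spike([400, 900, 2100]))  # True: 2100 >= 2*900 and 2100 > 1000
print(has_cea_spike([400, 700, 1200]))  # False: 1200 < 2*700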
Procedia PDF Downloads 142883 The Selectivities of Pharmaceutical Spending Containment: Social Profit, Incentivization Games and State Power
Authors: Ben Main, Piotr Ozieranski
Abstract:
State government spending on pharmaceuticals stands at 1 trillion USD globally, prompting criticism of the pharmaceutical industry's monetization of drug efficacy, product cost overvaluation, and health injustice. This paper elucidates the mechanisms behind one state-institutional response to this problem through the sociological lens of the strategic relational approach to state power. To do so, 30 expert interviews and legal and policy documents are drawn on to explain how state elites in New Zealand have successfully sustained a 30-year "pharmaceutical spending containment policy". Proceeding from Jessop's notion of strategic "selectivity", which directs analysis to the features that enable state actors to harness state structures, a theoretical explanation is advanced. First, a strategic context is described, consisting of the dynamics of pharmaceutical dealmaking between the state bureaucracy and the industry, and of pharmaceutical pricing strategies and their effects. Centrally, the pricing strategy of "bundling" (deals for packages of drugs that combine older and newer patented products) reflects how state managers have instigated an "incentivization game" played by state and industry actors, including HTA professionals, over pharmaceutical products both current and in development. Second, a protective context is described, comprising successive legislative-judicial responses to the strategic context and characterized by regulation and the societalisation of commercial law. Third, within the policy, the achievement of increased pharmaceutical coverage (pharmaceutical "mix") alongside contained spending is conceptualized as a state defence of a "social profit". As such, in contrast to scholarly expectations that political and economic cultures of neo-liberalism drive pharmaceutical policy-making processes, New Zealand's state elites are shown to have taken an approach antipathetic to neo-liberalism within an overall capitalist economy. The paper contributes an analysis of state pricing strategies and how they are embedded in state regulatory structures. Additionally, through an analysis of the interconnections of state power and pharmaceutical value, Abraham's neo-liberal corporate bias model of pharmaceutical policy analysis is problematised.Keywords: pharmaceutical governance, pharmaceutical bureaucracy, pricing strategies, state power, value theory
Procedia PDF Downloads 70