Search results for: ramp type demand
1610 Freight Time and Cost Optimization in Complex Logistics Networks, Using a Dimensional Reduction Method and K-Means Algorithm
Authors: Egemen Sert, Leila Hedayatifar, Rachel A. Rigg, Amir Akhavan, Olha Buchel, Dominic Elias Saadi, Aabir Abubaker Kar, Alfredo J. Morales, Yaneer Bar-Yam
Abstract:
The complexity of providing timely and cost-effective distribution of finished goods from industrial facilities to customers makes effective operational coordination difficult, yet effectiveness is crucial for maintaining customer service levels and sustaining a business. Logistics planning becomes increasingly complex with growing numbers of customers, varied geographical locations, the uncertainty of future orders, and sometimes extreme competitive pressure to reduce inventory costs. Linear optimization methods become cumbersome or intractable due to the large number of variables and nonlinear dependencies involved. Here we develop a complex systems approach to optimizing logistics networks based upon dimensional reduction methods and apply our approach to a case study of a manufacturing company. In order to characterize the complexity in customer behavior, we define a “customer space” in which individual customer behavior is described by only the two most relevant dimensions: the distance to production facilities over current transportation routes and the customer's demand frequency. These dimensions provide essential insight into the domain of effective strategies for customers: the direct and indirect strategies. In the direct strategy, goods are sent to the customer directly from a production facility using box or bulk trucks. In the indirect strategy, in advance of an order by the customer, goods are shipped to an external warehouse near the customer using trains and then "last-mile" shipped by trucks when orders are placed. Each strategy applies to an area of the customer space, with an indeterminate boundary between them whose location is generally determined by specific company policies. We then identify the optimal delivery strategy for each customer by constructing a detailed model of the costs of transportation and temporary storage in a set of specified external warehouses.
Customer spaces give an aggregate view of customer behaviors and characteristics. They allow policymakers to compare customers and develop strategies based on the aggregate behavior of the system as a whole. In addition to optimization over existing facilities, using customer logistics and the k-means algorithm, we propose additional warehouse locations. We apply these methods to a medium-sized American manufacturing company with a particular logistics network, consisting of multiple production facilities, external warehouses, and customers, along with three types of shipment methods (box truck, bulk truck, and train). For the case study, our method forecasts 10.5% savings on yearly transportation costs and an additional 4.6% savings with three new warehouses.
Keywords: logistics network optimization, direct and indirect strategies, K-means algorithm, dimensional reduction
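The warehouse-placement step pairs customer locations with the k-means algorithm. As a hedged illustration only (the coordinates below are invented, and the study's actual clustering operated on its own customer-space features, not raw coordinates), a minimal Lloyd's-algorithm sketch in Python might look like:

```python
def kmeans(points, init, iters=20):
    """Minimal Lloyd's k-means on 2-D (x, y) points.

    A sketch of the clustering step only; initial centroids are passed in
    explicitly to keep the example deterministic.
    """
    centroids = list(init)
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid's cluster.
        clusters = [[] for _ in centroids]
        for p in points:
            nearest = min(range(len(centroids)),
                          key=lambda i: (p[0] - centroids[i][0]) ** 2
                                      + (p[1] - centroids[i][1]) ** 2)
            clusters[nearest].append(p)
        # Update step: move each centroid to the mean of its cluster
        # (an empty cluster keeps its previous centroid).
        centroids = [
            (sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c))
            if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return centroids, clusters

# Hypothetical customer coordinates forming two clear groups
customers = [(0.0, 0.0), (0.1, 0.2), (0.2, 0.1),
             (5.0, 5.0), (5.1, 5.2), (4.9, 5.1)]
centroids, clusters = kmeans(customers, init=[customers[0], customers[-1]])
```

The resulting centroids would be candidate warehouse sites; in practice a library implementation (e.g. scikit-learn's KMeans) with multiple random restarts would be used instead of this fixed initialization.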
Procedia PDF Downloads 139
1609 Antimicrobial Properties of SEBS Compounds with Zinc Oxide and Zinc Ions
Authors: Douglas N. Simões, Michele Pittol, Vanda F. Ribeiro, Daiane Tomacheski, Ruth M. C. Santana
Abstract:
The increasing demand for thermoplastic elastomers is related to their wide range of applications, such as the automotive, footwear, and wire and cable industries, adhesives and medical devices, cell phones, sporting goods, toys, and others. These materials are susceptible to microbial attack. Moisture and organic matter present in some areas (such as the shower area and sink) provide favorable conditions for microbial proliferation, which contributes to the spread of diseases and reduces the product life cycle. Compounds based on SEBS copolymers, poly(styrene-b-(ethylene-co-butylene)-b-styrene), are a class of thermoplastic elastomers (TPE), fully recyclable and widely used in domestic items like bath mats and toothbrushes (soft touch). Zinc oxide and zinc ions loaded in personal and home care products have become common in recent years due to their biocidal effect. In that sense, the aim of this study was to evaluate the effect of zinc as an antimicrobial agent in compounds based on SEBS/polypropylene/oil/calcite for use as refrigerator seals (gaskets), bath mats, and sink squeegees. Two zinc oxides from different suppliers (ZnO-Pe and ZnO-WR) and one masterbatch of zinc ions (M-Zn-ion) were used in proportions of 0%, 1%, 3%, and 5%. The compounds were prepared using a co-rotating twin screw extruder (L/D ratio of 40/1 and 16 mm screw diameter). The extrusion parameters were kept constant for all materials. Test specimens were prepared using an injection molding machine. A compound with no antimicrobial additive (standard) was also tested. Compounds were characterized by physical (density), mechanical (hardness and tensile properties), and rheological (melt flow rate - MFR) properties. The Japan Industrial Standard (JIS) Z 2801:2010 was applied to evaluate antibacterial properties against Staphylococcus aureus (S. aureus) and Escherichia coli (E. coli).
The Brazilian Association of Technical Standards (ABNT) NBR 15275:2014 standard was used to evaluate antifungal properties against Aspergillus niger (A. niger), Aureobasidium pullulans (A. pullulans), Candida albicans (C. albicans), and Penicillium chrysogenum (P. chrysogenum). The microbiological assay showed a reduction of over 42% in the E. coli population and over 49% in the S. aureus population. The tests with fungi gave inconclusive results, because the sample without zinc also inhibited fungal development when tested against A. pullulans, C. albicans, and P. chrysogenum. In addition, the zinc-loaded samples showed worse results than the standard sample when tested against A. niger. The zinc addition did not produce significant variation in mechanical properties. However, the density values increased with rising ZnO additive concentration and decreased slightly in the M-Zn-ion samples. There were also differences in the MFR results of all compounds compared to the standard.
Keywords: antimicrobial, home device, SEBS, zinc
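The percent-reduction figures quoted (over 42% for E. coli, over 49% for S. aureus) come from comparing viable counts in the zinc-loaded samples against the additive-free standard. A small illustrative sketch follows; the CFU values are hypothetical, not the study's data, and the log10 "antibacterial activity value" is the companion metric conventionally reported alongside JIS Z 2801 results:

```python
import math

def percent_reduction(control_cfu, treated_cfu):
    """Population reduction relative to the additive-free control sample."""
    return 100.0 * (control_cfu - treated_cfu) / control_cfu

def activity_value(control_cfu, treated_cfu):
    """Log-reduction figure: R = log10(control) - log10(treated)."""
    return math.log10(control_cfu) - math.log10(treated_cfu)

# Hypothetical viable counts (CFU/cm^2), not the study's measurements
r_pct = percent_reduction(1.0e6, 5.0e5)  # halving the population = 50%
r_log = activity_value(1.0e6, 5.0e5)     # ~0.30 log reduction
```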
Procedia PDF Downloads 324
1608 A Review of Critical Framework Assessment Matrices for Data Analysis on Overheating in Buildings Impact
Authors: Martin Adlington, Boris Ceranic, Sally Shazhad
Abstract:
In an effort to reduce carbon emissions, changes in UK regulations, such as Part L (Conservation of fuel and power), dictate improved thermal insulation and enhanced air tightness. These changes were a direct response to the UK Government being fully committed to achieving its carbon targets under the Climate Change Act 2008: the goal is to reduce emissions by at least 80% by 2050. Factors such as climate change are likely to exacerbate the problem of overheating, as this phenomenon is expected to increase the frequency of extreme heat events exemplified by stagnant air masses and successive high minimum overnight temperatures. However, climate change is not the only concern relevant to overheating; research shows that location, design, occupancy, construction type, and layout can also play a part. Because of this growing problem, research indicates that health effects on building occupants could become an issue. Increases in temperature can have a direct impact on the human body's ability to maintain thermoregulation, and heat-related illnesses such as heat stroke, heat exhaustion, heat syncope, and even death can result. This review paper presents a comprehensive evaluation of the current literature on the causes and health effects of overheating in buildings and examines the differing applied assessment approaches used to measure the concept. Firstly, an overview of the topic is presented, followed by an examination of overheating research from the last decade. These papers form the body of the article and are grouped into a framework matrix summarizing the source material and identifying the differing methods of analysis of overheating. Cross-case evaluation has identified systematic relationships between different variables within the matrix.
Key areas of focus include building type and country, occupant behavior, health effects, simulation tools, and computational methods.
Keywords: overheating, climate change, thermal comfort, health
Procedia PDF Downloads 351
1607 Structural Evolution of Na6Mn(SO4)4 from High-Pressure Synchrotron Powder X-ray Diffraction
Authors: Monalisa Pradhan, Ajana Dutta, Irshad Kariyattuparamb Abbas, Boby Joseph, T. N. Guru Row, Diptikanta Swain, Gopal K. Pradhan
Abstract:
Compounds with the vanthoffite crystal structure, with general formula Na₆M(SO₄)₄ (M = Mg, Mn, Ni, Co, Fe, Cu, and Zn), display a variety of intriguing physical properties intimately related to their structural arrangements. The compound Na₆Mn(SO₄)₄ shows antiferromagnetic ordering at low temperature, where the in-plane Mn-O•••O-Mn interactions facilitate antiferromagnetic ordering via a super-exchange interaction between the Mn atoms through the oxygen atoms. The inter-atomic bond distances and angles can easily be tuned by applying external pressure and can be probed using high-resolution X-ray diffraction. Moreover, because the magnetic interactions among the Mn atoms are of the super-exchange type via the Mn-O•••O-Mn path, the variation of the Mn-O•••O-Mn dihedral angle and the Mn-O bond distances under high pressure inevitably affects the magnetic properties. High-pressure studies on magnetically ordered materials therefore shed light on the interplay between their structural properties and magnetic ordering, and help confirm the role of buckling of the Mn-O polyhedra in the origin of the antiferromagnetism. In this context, we carried out pressure-dependent X-ray diffraction measurements in a diamond anvil cell (DAC) up to a maximum pressure of 17 GPa to study the phase transition and to determine the equation of state from the volume compression data. Upon increasing the pressure, we did not observe any new diffraction peaks or sudden discontinuities in the pressure dependences of the d values up to the maximum achieved pressure of ~17 GPa. However, beyond 12 GPa the a and b lattice parameters become identical, while there is a discontinuity in the β value around the same pressure. This indicates a subtle transition to a pseudo-monoclinic phase. Fitting the third-order Birch-Murnaghan equation of state (EOS) to the volume compression data over the entire range, we found the bulk modulus (B0) to be 44 GPa.
Considering the subtle transition at 12 GPa, we also fitted a separate equation of state to the volume data beyond 12 GPa using the second-order Birch-Murnaghan EOS, which gives a bulk modulus of ~34 GPa for this phase.
Keywords: mineral, structural phase transition, high pressure XRD, spectroscopy
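For reference, the standard third-order Birch-Murnaghan equation of state used in such fits has the form

```latex
P(V) = \frac{3B_0}{2}
\left[\left(\frac{V_0}{V}\right)^{7/3} - \left(\frac{V_0}{V}\right)^{5/3}\right]
\left\{1 + \frac{3}{4}\left(B_0' - 4\right)
\left[\left(\frac{V_0}{V}\right)^{2/3} - 1\right]\right\}
```

where $V_0$ is the zero-pressure volume, $B_0$ the bulk modulus, and $B_0'$ its pressure derivative. Fixing $B_0' = 4$ collapses the braced factor to 1, which yields the second-order form applied here to the high-pressure phase.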
Procedia PDF Downloads 87
1606 Comparison between the Roller-Foam and Neuromuscular Facilitation Stretching on Flexibility of Hamstrings Muscles
Authors: Paolo Ragazzi, Olivier Peillon, Paul Fauris, Mathias Simon, Raul Navarro, Juan Carlos Martin, Oriol Casasayas, Laura Pacheco, Albert Perez-Bellmunt
Abstract:
Introduction: The use of stretching techniques in the sports world is frequent and widespread because of their many effects. One of the main benefits is the gain in flexibility and range of motion and the facilitation of sporting performance. Recently the use of the Roller-Foam (RF) has spread in sports practice at both elite and recreational levels, its benefits being similar to those observed with stretching. The objective of the following study is to compare the results of the Roller-Foam with proprioceptive neuromuscular facilitation stretching (PNF), one of the stretching techniques with the most supporting evidence, on the hamstring muscles. Study design: The study is a single-blind, randomized controlled trial with 40 healthy volunteers. Intervention: The subjects were distributed randomly into one of the following groups; PNF stretching intervention group: 4 repetitions of PNF stretching (5 seconds of contraction, 5 seconds of relaxation, 20-second stretch); Roller-Foam intervention group: 2 minutes of Roller-Foam applied to the hamstring muscles. Main outcome measures: hamstring muscle flexibility was assessed at the beginning, during (at 30 seconds of intervention), and at the end of the session using the Modified Sit and Reach test (MSR). Results: The baseline data of the two groups are comparable. The PNF group obtained an increase in flexibility of 3.1 cm at 30 seconds (first series) and of 5.1 cm at 2 minutes (the last of all series). The RF group obtained a 0.6 cm difference at 30 seconds and 2.4 cm after 2 minutes of Roller-Foam application. The results were statistically significant within groups but not between groups. Conclusions: Despite the fact that the use of the Roller-Foam is spreading in the sports and rehabilitation fields, the results of the present study suggest that the gain in hamstring flexibility is greater if PNF-type stretching is used instead of RF.
These results may be due to the fact that the Roller-Foam acts more on the fascial tissue, while stretching acts more on the myotendinous unit. Future studies with larger samples and more diverse types of stretching are needed.
Keywords: hamstring muscle, stretching, neuromuscular facilitation stretching, roller foam
Procedia PDF Downloads 186
1605 The Jurisprudential Evolution of Corruption Offenses in Spain: Before and after the Economic Crisis
Authors: Marta Fernandez Cabrera
Abstract:
The period of economic boom generated by the housing bubble created a climate of social indifference to the problem of corruption. As a result, prosecution and conviction rates for these criminal offenses were low. After the economic recession, social awareness of the problem of corruption increased, leading Spanish citizens to demand that the public authorities try to end the problem in the most effective way possible. In order to respond to continuous social demands for exemplary punishment, the legislator has made changes to the crimes against the public administration in the Spanish Criminal Code. However, from the point of view of criminal law, the social change has served to modify not only the law but also the jurisprudence: after the recession, judges are punishing these conducts more severely than in the past. Before the crisis, it was usual for criminal judges to divert relevant behavior to other areas of the legal system, such as administrative law, and to acquit in the criminal field, on the reasoning that administrative law already has mechanisms that can effectively deal with this type of behavior, in keeping with the principle of subsidiarity or ultima ratio. It was also usual for criminal judges to acquit civil servants due to the absence of requirements unrelated to the applicable offense; for example, they required economic damage to the public administration even when the offense in the criminal code does not require it. Nevertheless, for some years, these arguments have either partially disappeared or been considerably transformed. Since 2010, a jurisprudential stream has been consolidated that aims to provide a more severe response to corruption than it had received until then. This change of opinion, together with greater prosecution of these behaviors by judges and prosecutors, has led to a significant increase in the number of individuals convicted of corruption crimes.
This paper has two objectives. The first is to show that even though judges apply the law impartially, they are responsive to social change. The second is to identify the erroneous arguments the courts have used up until now. To this end, a detailed analysis of Supreme Court judgments before and after 2010 was carried out, and the jurisprudential analysis is complemented with the available statistical data on corruption.
Keywords: corruption, public administration, social perception, ultima ratio principle
Procedia PDF Downloads 146
1604 Physical and Mechanical Behavior of Compressed Earth Blocks Stabilized with Ca(OH)2 on Sub-Humid Warm Weather
Authors: D. Castillo T., Luis F. Jimenez
Abstract:
Compressed earth blocks (CEBs) constitute an alternative constructive element for building homes in regions with high levels of poverty and marginalization. Such is the case of Southeastern Mexico, where the population, predominantly indigenous, build their houses with fragile materials like wood and palm, vulnerable to the extreme weather of the area, because they do not have the financial resources to acquire concrete blocks. CEBs can provide several advantages over traditional vibro-compressed concrete blocks, such as the availability of materials, low manufacturing cost, and reduced CO2 emissions to the atmosphere, since they are not subjected to a firing process. However, to improve their mechanical properties and resistance to adverse weather conditions in terms of the humidity and temperature of sub-humid climate zones, a chemical stabilizer is required; in this case Ca(OH)2 was chosen. The Eades-Grim stabilization method was employed, in accordance with ASTM C977-03. This method measures the optimum amount of lime required to stabilize the soil by raising its pH to 12.4 or higher. The minimum amount of lime required in this experiment was 1% and the maximum was 10%. The material employed was an unconsolidated clay of low to medium plasticity (type CL according to the Unified Soil Classification System). Based on these results, the CEB manufacturing process was determined. Blocks of 10x15x30 cm were obtained using mixtures of soil, water, and lime in different proportions. These blocks were then dried outdoors and subjected to several physical and mechanical tests, such as compressive strength, absorption, and drying shrinkage. The results were compared with the limits established by the Mexican Standard NMX-C-404-ONNCCE-2005 for the construction of housing walls.
In this manner, an alternative and sustainable material was obtained for the construction of rural households in the region, with better conditions of safety, comfort, and cost.
Keywords: calcium hydroxide, chemical stabilization, compressed earth blocks, sub-humid warm weather
Procedia PDF Downloads 401
1603 Methylglyoxal Induced Glycoxidation of Human Low Density Lipoprotein: A Biophysical Perspective and Its Role in Diabetes and Periodontitis
Authors: Minhal Abidi, Moinuddin
Abstract:
Metabolic abnormalities induced by diabetes mellitus (DM) cause oxidative stress, which leads to the pathogenesis of complications associated with diabetes, like retinopathy, nephropathy, periodontitis, etc. The combination of glycation and oxidation, 'glycoxidation', occurs when oxidative reactions affect the early-state glycation products. Low density lipoprotein (LDL) is prone to glycoxidative attack by sugars, and methylglyoxal (MGO), being a strong glycating agent, may have a severe impact on its structure and a consequent role in diabetes. Pro-inflammatory cytokines like IL1β and TNFα, produced by the action of gram-negative bacteria in periodontitis (PD), can in turn lead to insulin resistance. This work discusses modifications to LDL as a result of glycoxidation. The changes in the protein molecule have been characterized by various physicochemical techniques, and the immunogenicity of the modified molecules was also evaluated, as they presented neo-epitopes. Binding of antibodies present in diabetes patients to the native and glycated LDL has been evaluated. The role of modified epitopes in the generation of antibodies in diabetes and periodontitis is discussed. The structural perturbations induced in LDL were analyzed by UV-Vis, fluorescence, circular dichroism and FTIR spectroscopy, molecular docking studies, thermal denaturation studies, the Thioflavin T assay, isothermal titration calorimetry, the comet assay, and MALDI-TOF. Ketoamine moieties, carbonyl content, and HMF content were also quantitated in native and glycated LDL, and IL1β and TNFα levels were measured in type 2 DM and PD patients. We report increased carbonyl content, ketoamine moieties, and HMF content in glycated LDL as compared to its native analogue.
The results substantiate that, in the hyperglycemic state, MGO modification of LDL causes structural perturbations that make the protein antigenic, which could obstruct normal physiological functions and might contribute to the development of secondary complications, like periodontitis, in diabetic patients.
Keywords: advanced glycation end products, diabetes mellitus, glycation, glycoxidation, low density lipoprotein, periodontitis
Procedia PDF Downloads 191
1602 Species Distribution and Incidence of Inducible Clindamycin Resistance in Coagulase-Negative Staphylococci Isolated from Blood Cultures of Patients with True Bacteremia in Turkey
Authors: Fatma Koksal Cakirlar, Murat Gunaydin, Nevri̇ye Gonullu, Nuri Kiraz
Abstract:
During the last few decades, the increasing prevalence of methicillin-resistant CoNS isolates has become a common problem worldwide. Macrolide-lincosamide-streptogramin B (MLSB) antibiotics are used effectively for the treatment of CoNS infections; however, resistance to MLSB antibiotics is prevalent among staphylococci. The aim of this study is to determine the species distribution and the incidence of inducible clindamycin resistance in CoNS isolates that caused nosocomial bacteremia in our hospital. Between January 2014 and October 2015, a total of 484 coagulase-negative staphylococci (CoNS) isolates were obtained from blood samples of patients with true bacteremia who were hospitalized in intensive care units and other departments of Istanbul University Cerrahpasa Medical Hospital. Blood cultures were analyzed with the BACTEC 9120 system (Becton Dickinson, USA). The identification and antimicrobial resistance of isolates were determined with the Phoenix automated system (BD Diagnostic Systems, Sparks, MD). Inducible clindamycin resistance was detected using the D-test. The species distribution was as follows: Staphylococcus epidermidis 211 (43%), S. hominis 154 (32%), S. haemolyticus 69 (14%), S. capitis 28 (6%), S. saprophyticus 11 (2%), S. warneri 7 (1%), S. schleiferi 5 (1%), and S. lugdunensis 1 (0.2%). Resistance to methicillin was detected in 74.6% of CoNS isolates; methicillin resistance was highest in S. haemolyticus isolates (89%). Resistance rates of the CoNS strains to the antibacterial agents were as follows: ampicillin 77%, gentamicin 20%, erythromycin 71%, clindamycin 22%, trimethoprim-sulfamethoxazole 45%, ciprofloxacin 52%, tetracycline 34%, rifampicin 20%, daptomycin 0.2%, and linezolid 0.2%. None of the strains were resistant to vancomycin or teicoplanin.
Fifteen (3%) CoNS isolates were D-test positive, showing the inducible MLSB resistance type (iMLSB phenotype); 94 (19%) were constitutively resistant (cMLSB phenotype); and 237 (46.76%) isolates were D-test negative, indicating the truly clindamycin-susceptible MS phenotype (M-phenotype resistance). The incidence of the iMLSB phenotype was higher in S. epidermidis isolates (4.7%) than in the other CoNS isolates.
Keywords: bacteremia, inducible MLSB resistance phenotype, methicillin-resistant, staphylococci
Procedia PDF Downloads 239
1601 Assessment of Tidal Influence in Spatial and Temporal Variations of Water Quality in Masan Bay, Korea
Abstract:
Slack-tide sampling was carried out at seven stations at high and low tide over a tidal cycle, in summer (July, August, September) and fall (October) of 2016, to determine the differences in water quality according to the tide in Masan Bay. The data were analyzed by Pearson correlation and factor analysis. The mixing state of all the water quality components investigated is well explained by their correlation with salinity (SAL). Turbidity (TURB), dissolved silica (DSi), nitrite and nitrate nitrogen (NNN), and total nitrogen (TN), which find their way into the bay from the streams and have no internal source and sink reactions, showed a strong negative correlation with SAL at low tide, indicating the property of conservative mixing. In contrast, in summer and fall, dissolved oxygen (DO), hydrogen sulfide (H2S), and chemical oxygen demand with KMnO4 (CODMn) of the surface and bottom water, which were sensitive to internal source and sink reactions, showed no significant correlation with SAL at high or low tide. The remaining water quality parameters showed a conservative or a non-conservative mixing pattern depending on the mixing characteristics at high and low tide, determined by the functional relationship between changes in the flushing time and changes in the characteristics of the water quality components of the end-members in the bay. Factor analysis performed on the data sets of concentration differences between high and low tide helped identify their principal latent variables. The concentration differences varied spatially and temporally. Principal factor (PF) score plots for each monitoring situation showed that the variations were highly associated with the monitoring sites.
At sampling station 1 (ST1), temperature (TEMP), SAL, DSi, TURB, NNN, and TN of the surface water in summer; TEMP, SAL, DSi, DO, TURB, NNN, TN, reactive soluble phosphorus (RSP), and total phosphorus (TP) of the bottom water in summer; TEMP, pH, SAL, DSi, DO, TURB, CODMn, particulate organic carbon (POC), ammonia nitrogen (AMN), NNN, TN, and fecal coliform (FC) of the surface water in fall; and TEMP, pH, SAL, DSi, H2S, TURB, CODMn, AMN, NNN, and TN of the bottom water in fall commonly showed up as the most significant parameters, with large concentration differences between high and low tide. At the other stations, the significant parameters differed according to the spatial and temporal variations of the mixing pattern in the bay. In fact, no estuary always maintains steady-state flow conditions. The mixing regime of an estuary might change at any time from linear to non-linear, due to the change of flushing time according to the combination of hydrogeometric properties, inflow of freshwater, and tidal action. Furthermore, the change of end-member conditions due to internal sinks and sources makes the occurrence of concentration differences inevitable. Therefore, when investigating the water quality of an estuary, it is necessary to adopt a sampling method that takes the tide into account in order to obtain average water quality data.
Keywords: conservative mixing, end-member, factor analysis, flushing time, high and low tide, latent variables, non-conservative mixing, slack-tide sampling, spatial and temporal variations, surface and bottom water
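The conservative-vs-non-conservative distinction above rests on the Pearson correlation of each component with salinity: a tracer diluted only by end-member mixing falls linearly with SAL (r near -1), while one with internal sources and sinks does not. A sketch with invented values (not the Masan Bay data):

```python
def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical end-member mixing: the first tracer falls linearly with
# salinity (pure dilution), the second varies independently of it.
salinity = [5.0, 10.0, 15.0, 20.0, 25.0, 30.0]
nitrate  = [60.0, 50.0, 40.0, 30.0, 20.0, 10.0]   # conservative dilution
oxygen   = [4.0, 7.5, 3.0, 8.0, 2.5, 6.0]         # internal source/sink

r_nitrate = pearson(salinity, nitrate)  # near -1: conservative mixing
r_oxygen  = pearson(salinity, oxygen)   # weak: non-conservative behavior
```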
Procedia PDF Downloads 130
1600 Evaluating the ‘Assembled Educator’ of a Specialized Postgraduate Engineering Course Using Activity Theory and Genre Ecologies
Authors: Simon Winberg
Abstract:
The landscape of professional postgraduate education is changing: the focus of these programmes is moving from preparing candidates for a life in academia towards training in the expert knowledge and skills needed to support industry. This is especially pronounced in engineering disciplines, where increasingly complex products draw on a depth of knowledge from multiple fields. This connects strongly with the broader notion of Industry 4.0, where technology and society are being brought together to achieve more powerful and desirable products, but products whose inner workings are also more complex than before. The changes in what we do, and how we do it, have a profound impact on what industry would like universities to provide. One such change is the increased demand for taught doctoral and Masters programmes. These programmes aim to provide skills and training for professionals, to expand their knowledge of state-of-the-art tools and technologies. This paper investigates one such course, namely a Software Defined Radio (SDR) Master's degree course. The teaching support for this course had to be drawn from an existing pool of academics, none of whom were specialists in this field. The paper focuses on the kind of educator, a 'hybrid academic', assembled from available academic staff and bolstered by research. The conceptual framework for this paper combines Activity Theory and Genre Ecology: Activity Theory is used to reason about learning and interactions during the course, and Genre Ecology is used to model the building and sharing of technical knowledge related to using tools and artifacts. Data were obtained from meetings with students and lecturers, logs, project reports, and course evaluations. The findings show how the course, which was initially academically oriented, metamorphosed into a tool-dominant peer-learning structure, largely supported by the sharing of technical tool-based knowledge.
While the academic staff could address gaps in the participants' fundamental knowledge of radio systems, the participants brought with them extensive specialized knowledge and tool experience, which they shared with the class. This created a complicated dynamic in the class, which centered largely on engagements with technology artifacts, such as simulators, from which knowledge was built. The course was characterized by a richness of 'epistemic objects', which is to say objects with knowledge-generating qualities. A significant portion of the course curriculum had to be adapted, and the learning methods changed, to accommodate the dynamic interactions that occurred during classes. This paper explains the SDR Masters course in terms of the conflicts and innovations in its activity system, as well as its continually hybridizing genre ecology, to show how the structuring and resource-dependence of the course transformed from its initial 'traditional' academic structure to a more entangled arrangement over time. It is hoped that insights from this paper will benefit other educators involved in the design and teaching of similar specialized professional postgraduate taught programmes.
Keywords: professional postgraduate education, taught masters, engineering education, software defined radio
Procedia PDF Downloads 92
1599 Histopathological Features of Basal Cell Carcinoma: A Ten Year Retrospective Statistical Study in Egypt
Authors: Hala M. El-hanbuli, Mohammed F. Darweesh
Abstract:
The incidence rates of any tumor vary greatly with geographical location. Basal cell carcinoma (BCC) is one of the most common skin cancers and has many histopathologic subtypes. Objective: The aim was to study the histopathological features of the BCC cases received in the Pathology Department, Kasr El-Aini Hospital, Cairo University, Egypt during the period from Jan 2004 to Dec 2013, and to evaluate their clinical characteristics through the patient data available in the request sheets. Methods: Slides and data of BCC cases were collected from the archives of the pathology department, Kasr El-Aini Hospital. All available slides were reviewed and the cases were histologically classified according to the WHO (2006) classification of BCC. Results: A total of 310 cases of BCC were identified, representing about 65% of the total number of malignant skin tumors examined during the 10-year period in the department. The age ranged from 8 to 84 years; the mean age was 55.7 ± 15.5. Most of the patients (85%) were above the age of 40 years. There was a slight male predominance (55%). Ulcerated BCC was the most common gross picture (60%), followed by nodular lesions (30%) and finally ulcerated nodules (10%). Most of the lesions were situated in high-risk sites (77%): the nose was the most common site (35%), followed by the periocular area (22%), then the periauricular area (15%), and finally the perioral area (5%). No lesion was reported outside the head. The tumor size was less than 2 centimeters in 65% of cases, and from 2-5 centimeters in the greatest dimension in the rest of the cases. Histopathological reclassification revealed that nodular BCC was the most common type (68%), followed by pigmented nodular BCC (18.75%). The histologic high-risk group represented 7.5%, about half of which (3.75%) was basosquamous carcinoma. The total incidence of multiple BCC and second primaries was 12%. Recurrent BCC represented 8%, and all of the recurrent lesions belonged to the histologic high-risk group.
Conclusion: Basal Cell Carcinoma is the most common skin cancer in the 10-year survey. Histopathological diagnosis and classification of BCC cases are essential for the determination of the tumor type and its biological behavior.
Keywords: basal cell carcinoma, high risk, histopathological features, statistical analysis
Procedia PDF Downloads 149
1598 Neural Network Mechanisms Underlying the Combination Sensitivity Property in the HVC of Songbirds
Authors: Zeina Merabi, Arij Dao
Abstract:
The temporal order of information processing in the brain is an important code in many acoustic signals, including speech, music, and animal vocalizations. Despite its significance, surprisingly little is known about its underlying cellular mechanisms and network manifestations. In the songbird telencephalic nucleus HVC, a subset of neurons shows temporal combination sensitivity (TCS). These neurons show a high temporal specificity, responding differently to distinct patterns of spectral elements and their combinations. HVC neuron types include basal-ganglia-projecting HVCX, forebrain-projecting HVCRA, and interneurons (HVCINT), each exhibiting distinct cellular, electrophysiological and functional properties. In this work, we develop conductance-based neural network models connecting the different classes of HVC neurons via different wiring scenarios, aiming to explore possible neural mechanisms that orchestrate the combination sensitivity property exhibited by HVCX, as well as to replicate in vivo firing patterns observed when TCS neurons are presented with various auditory stimuli. The ionic and synaptic currents of each class of neurons represented in our networks are based on pharmacological studies, rendering the networks biologically plausible. We present for the first time several realistic scenarios in which the different types of HVC neurons can interact to produce this behavior. The different networks highlight neural mechanisms that could potentially help to explain some aspects of combination sensitivity, including 1) interplay between inhibitory interneurons' activity and the post-inhibitory firing of the HVCX neurons enabled by T-type Ca2+ and H currents, 2) temporal summation at the TCS site of opposing synaptic inputs that are time- and frequency-dependent, and 3) reciprocal inhibitory and excitatory loops as a potent mechanism to encode information over many milliseconds.
The result is a plausible network model characterizing auditory processing in HVC. Our next step is to test the predictions of the model.
Keywords: combination sensitivity, songbirds, neural networks, spatiotemporal integration
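As a toy illustration of mechanism 2 above (temporal summation of time- and frequency-dependent inputs), a hypothetical order-selective unit can be sketched. This is not the conductance-based model itself; the function, its parameters, and all numerical values are invented for illustration:

```python
import math

def order_selective_response(t_a, t_b, dt=0.1, t_max=300.0,
                             tau_a=100.0, tau_b=5.0, w=6.0, theta=10.0):
    """Toy temporal-combination detector: two exponential EPSPs, a slow one
    triggered at t_a and a fast one at t_b (ms). The summed depolarisation
    crosses the firing threshold theta only when A precedes B closely enough
    for the slow trace to still be elevated when the fast one arrives."""
    fired = False
    t = 0.0
    while t < t_max:
        epsp_a = w * math.exp(-(t - t_a) / tau_a) if t >= t_a else 0.0
        epsp_b = w * math.exp(-(t - t_b) / tau_b) if t >= t_b else 0.0
        if epsp_a + epsp_b >= theta:
            fired = True
        t += dt
    return fired
```

With these numbers the unit responds to input A followed by input B 20 ms later, but not to the reversed order, mimicking the order selectivity of TCS neurons in a purely schematic way.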
Procedia PDF Downloads 65
1597 New Coating Materials Based on Mixtures of Shellac and Pectin for Pharmaceutical Products
Authors: M. Kumpugdee-Vollrath, M. Tabatabaeifar, M. Helmis
Abstract:
Shellac is a natural polyester resin secreted by insects. Pectins are natural, non-toxic and water-soluble polysaccharides extracted from the peels of citrus fruits or the leftovers of apples. Both polymers are approved for use in the pharmaceutical industry and as food additives. SSB Aquagold® is the aqueous solution of shellac and can be used in a coating process as an enteric or controlled-release polymer. In this study, tablets containing 10 mg methylene blue as a model drug were prepared with a rotary press. The tablets were coated with mixtures of shellac and one of several pectin types (i.e. CU 201, CU 501, CU 701 and CU 020), mostly in a 2:1 ratio, or with pure shellac, in a small-scale fluidized bed apparatus. A stable, simple and reproducible three-stage coating process was successfully developed. The drug contents of the coated tablets were determined using a UV-VIS spectrophotometer. The characterization of the surface and the film thickness was performed with scanning electron microscopy (SEM) and light microscopy. Release studies were performed in a dissolution apparatus with a basket. Most of the formulations were enteric coated. The dissolution profiles showed a delayed or sustained release with a lag time of at least 4 h. Dissolution profiles of tablets coated with pure shellac had a very long lag time ranging from 13 to 17.5 h, and the slopes were quite high. The duration of the lag time and the slope of the dissolution profiles could be adjusted by adding the proper type of pectin to the shellac formulation and by varying the coating amount. In order to apply a coating formulation as a colon delivery system, the prepared film should be resistant to gastric fluid for at least 2 h and to intestinal fluid for 4-6 h. The required delay time was achieved with most of the shellac-pectin polymer mixtures. The release profiles were fitted with a modified form of the Korsmeyer-Peppas equation and with the Hixson-Crowell model.
A correlation coefficient (R²) > 0.99 was obtained with the Korsmeyer-Peppas equation.
Keywords: shellac, pectin, coating, fluidized bed, release, colon delivery system, kinetic, SEM, methylene blue
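The Korsmeyer-Peppas fit reported above (R² > 0.99) reduces to a linear regression of the log release fraction on log time; a minimal sketch under that standard formulation (the function name and data handling are ours, not the authors'):

```python
import math

def fit_korsmeyer_peppas(times, fractions):
    """Fit Mt/Minf = k * t**n by least squares in log-log space.
    times: sampling times (h); fractions: released fractions Mt/Minf (0..1).
    Returns (k, n, r_squared) of the log-log regression."""
    xs = [math.log(t) for t in times]
    ys = [math.log(f) for f in fractions]
    m = len(xs)
    mean_x = sum(xs) / m
    mean_y = sum(ys) / m
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    n = sxy / sxx                       # release exponent
    log_k = mean_y - n * mean_x         # intercept -> log(k)
    ss_res = sum((y - (log_k + n * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - mean_y) ** 2 for y in ys)
    r2 = 1.0 - ss_res / ss_tot
    return math.exp(log_k), n, r2
```

The exponent n distinguishes Fickian diffusion from anomalous transport, which is why the equation is the usual first choice for coated-tablet release data.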
Procedia PDF Downloads 407
1596 Impure Water, a Future Disaster: A Case Study of Lahore Ground Water Quality with GIS Techniques
Authors: Rana Waqar Aslam, Urooj Saeed, Hammad Mehmood, Hameed Ullah, Imtiaz Younas
Abstract:
This research was conducted to assess the water quality in and around the Lahore metropolitan area on the basis of three different land uses, i.e. residential, commercial, and industrial. For this, 29 sample sites were selected using a simple random sampling technique. Samples were collected at the source (WASA tube wells). The criterion for selecting sample sites was maximum population concentration within the selected land uses. The results showed that in residential land use, the proportions of nitrate and turbidity are at their highest levels in the areas of Allama Iqbal Town and Samanabad Town. The commercial land uses of Gulberg and Data Gunj Bakhsh Town have the highest proportions of chlorides, calcium, TDS, pH, Mg, total hardness, arsenic and alkalinity. In the industrial land use of Ravi and Wahga Towns, the proportions of arsenic, Mg, nitrate, pH, and turbidity are at their highest levels. The high concentrations of these parameters in these areas are basically due to old and fractured pipelines that allow bacterial as well as physiochemical contaminants to pollute the potable water at the source. Furthermore, in most areas waste water from domestic, industrial and municipal sources is easily discharged into open spaces and water bodies, such as canals, rivers and lakes, where it seeps in and becomes part of the groundwater. In addition, the huge dumps located in Lahore are becoming a cause of groundwater contamination: when rain falls, the water seeps through them into the ground and impairs the groundwater quality. On the basis of the results derived with the help of geospatial technology, ArcGIS 9.3 interpolation (IDW), it is recommended that water filtration plants be installed with specific parameter control. A separate team has to be formed for proper inspection of water quality at the source.
Old water pipelines must be replaced with new ones, and a safe water depth must be ensured at the source end.
Keywords: GIS, remote sensing, pH, nitrate, disaster, IDW
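The IDW surface used here estimates a parameter at an unsampled location as an inverse-distance-weighted average of the sampled sites; a minimal sketch of the method (not the ArcGIS implementation, and the coordinates in the example are invented):

```python
def idw(sample_points, query, power=2.0):
    """Inverse Distance Weighting: estimate a value at `query` = (x, y)
    from `sample_points` = [((x, y), value), ...]. Nearer samples get
    larger weights w = 1 / d**power; a query on a sample site returns
    that site's value exactly."""
    num = den = 0.0
    qx, qy = query
    for (x, y), value in sample_points:
        d2 = (x - qx) ** 2 + (y - qy) ** 2
        if d2 == 0.0:
            return value
        weight = 1.0 / d2 ** (power / 2.0)
        num += weight * value
        den += weight
    return num / den
```

Raising `power` localizes the surface around the 29 tube-well sites; power = 2 is the common default.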
Procedia PDF Downloads 225
1595 Edmonton Urban Growth Model as a Support Tool for the City Plan Growth Scenarios Development
Authors: Sinisa J. Vukicevic
Abstract:
Edmonton is currently one of the youngest North American cities and has achieved significant growth over the past 40 years. This strong urban shift requires a new approach to how the city is envisioned, planned, and built. This approach is evidence-based scenario development, and an urban growth model was a key support tool in framing Edmonton's development strategies, developing urban policies, and assessing policy implications. The urban growth model was developed using the Metronamica software platform. The Metronamica land use model evaluated the dynamics of land use change under the influence of key development drivers (population and employment), zoning, land suitability, and land and activity accessibility. The model was designed following the Big City Moves ideas: become greener as we grow, develop a rebuildable city, ignite a community of communities, foster a healing city, and create a city of convergence. The Big City Moves were converted into three development scenarios: 'Strong Central City', 'Node City', and 'Corridor City'. Each scenario has a narrative story that expresses its high-level goal, its approach to residential and commercial activities, its transportation vision, and its employment and environmental principles. Land use demand was calculated for each scenario according to specific density targets. Spatial policies were analyzed according to their level of importance within the policy set definition for each scenario, but also through the policy measures. The model was calibrated to reproduce the known historical land use pattern, using 2006 and 2011 land use data. Validation was done independently, using data not used for calibration: the model was validated with 2016 data.
In general, the modeling process contains three main phases: 'from qualitative storyline to quantitative modelling', 'model development and model run', and 'from quantitative modelling to qualitative storyline'. The model also incorporates five spatial indicators: distance from residential to work, distance from residential to recreation, distance to the river valley, urban expansion, and habitat fragmentation. The major findings of this research can be looked at from two perspectives: the planning perspective and the technology perspective. The planning perspective evaluates the model as a tool for scenario development. Using the model, we explored the land use dynamics influenced by different sets of policies. The model enables a direct comparison between the three scenarios. We explored the similarities and differences of the scenarios and their quantitative indicators: land use change, population change (and spatial allocation), job allocation, density (population, employment, and dwelling units), habitat connectivity, proximity to objects of interest, etc. From the technology perspective, the model showed one very important characteristic: flexibility. The direction for policy testing changed many times during the consultation process, and the model's flexibility in accommodating all these changes was highly appreciated. The model satisfied our needs as a scenario development and evaluation tool, but also as a communication tool during the consultation process.
Keywords: urban growth model, scenario development, spatial indicators, Metronamica
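Proximity indicators such as 'distance from residential to work' can be computed per scenario as the mean distance from each residential cell to its nearest target cell. A simplified sketch (the cell coordinates and the indicator definition are our assumptions; Metronamica computes its indicators internally):

```python
import math

def mean_distance_to_nearest(source_cells, target_cells):
    """Mean Euclidean distance from each source cell (e.g. residential)
    to its nearest target cell (e.g. work or recreation). Comparable
    across scenarios as a simple proximity indicator."""
    total = 0.0
    for sx, sy in source_cells:
        total += min(math.hypot(sx - tx, sy - ty) for tx, ty in target_cells)
    return total / len(source_cells)
```

Evaluating the same indicator on each scenario's simulated land use grid gives a direct quantitative basis for the scenario comparison described above.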
Procedia PDF Downloads 95
1594 Preparation and Properties of Chloroacetated Natural Rubber Rubber Foam Using Corn Starch as Curing Agent
Authors: Ploenpit Boochathum, Pitchayanad Kaolim, Phimjutha Srisangkaew
Abstract:
In general, rubber foam is produced based on the sulfur curing system. However, the sulfur remaining in rubber product waste is burned to sulfur dioxide gas, causing environmental pollution. To avoid using sulfur as a curing agent in rubber foam products, this research work proposes a non-sulfur curing system using corn starch as the curing agent. Ether crosslinks are proposed to form via bonding between the hydroxyl groups of the starch molecules and the chloroacetate groups added to the natural rubber molecules. The chloroacetated natural rubber (CNR) latex was prepared via the epoxidation reaction of concentrated natural rubber latex; subsequently, the epoxy rings were attacked by chloroacetic acid to produce hydroxyl groups and chloroacetate groups on the rubber molecules. NaHCO3 was selected as the foaming agent owing to its low decomposition temperature of about 50°C. The curing temperature was set at 90°C, which is above the gelatinization temperature of starch (60-70°C). The effect of starch loading (0 phr, 3 phr and 5 phr) on the physical properties of the CNR rubber foam was investigated. It was found that density decreased from 0.81 g/cm3 at 0 phr to 0.75 g/cm3 at 3 phr and 0.79 g/cm3 at 5 phr. The ability of the foam to return to its original thickness after prolonged compressive stress was considerably better at a starch loading of 5 phr than at 3 phr or without starch, as shown by the compression set, which decreased from 66.67% to 40% and 26.67% with increasing starch loading. The mechanical properties of the starch-cured CNR rubber foams, including tensile strength and modulus, increased, while the elongation at break decreased.
In addition, all mechanical properties of the CNR rubber foams cured with 3 phr and 5 phr starch were only slightly different from each other but markedly higher than those of the CNR rubber foam without starch. This research work indicates that starch is applicable as a curing agent for CNR rubber, as confirmed by the higher elastic modulus (G') of the starch-cured CNR rubber foams compared with the foam without curing agent. This type of rubber foam is believed to be a biodegradable and environment-friendly product that can be cured at the low temperature of 90°C.
Keywords: chloroacetated natural rubber, corn starch, non-sulfur curing system, rubber foam
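The compression set values above follow the usual constant-deflection definition; as a sketch (the thickness values in the example are invented, not the study's measurements):

```python
def compression_set(t_original, t_recovered, t_spacer):
    """Compression set (%) under constant deflection:
    100 * (t0 - t_recovered) / (t0 - t_spacer), where t0 is the original
    specimen thickness, t_recovered the thickness measured after release
    and recovery, and t_spacer the compressed (spacer) thickness.
    0% means full recovery; 100% means no recovery at all."""
    return 100.0 * (t_original - t_recovered) / (t_original - t_spacer)
```

On this scale, the drop from 66.67% to 26.67% reported above corresponds to a large gain in the foam's ability to spring back after prolonged compression.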
Procedia PDF Downloads 319
1593 Road Accident Blackspot Analysis: Development of Decision Criteria for Accident Blackspot Safety Strategies
Authors: Tania Viju, Bimal P., Naseer M. A.
Abstract:
This study aims to develop a conceptual framework for a decision support system (DSS) that helps decision-makers dynamically choose appropriate safety measures for each identified accident blackspot. An accident blackspot is a road segment where the frequency of accidents is disproportionately greater than on other sections of the roadway. According to a report by the World Bank, India accounts for the highest share, eleven percent, of global road accident deaths, with just one percent of the world's vehicles. Hence, in 2015 the Ministry of Road Transport and Highways of India gave prime importance to the rectification of accident blackspots. To enhance road traffic safety and reduce the accident rate, effectively identifying and rectifying accident blackspots is of great importance. This study evaluates the existing methods of accident blackspot identification and prediction used around the world and their application to Indian roadways. The decision support system, with the help of IoT, ICT and smart systems, acts as a management and planning tool for the government for employing efficient and cost-effective rectification strategies. In order to develop the decision criteria, several quantitative and qualitative factors that influence the safety conditions of the road are analyzed. Factors include past accident severity data, occurrence time, light, weather and road conditions, visibility, driver conditions, junction type, land use, road markings and signs, road geometry, etc. The framework conceptualizes decision-making by classifying blackspot stretches based on factors like accident occurrence time and different climatic and road conditions, and by suggesting mitigation measures based on these identified factors.
The decision support system will help the public administration dynamically manage and plan the necessary safety interventions required to enhance the safety of the road network.
Keywords: decision support system, dynamic management, road accident blackspots, road safety
Procedia PDF Downloads 144
1592 From Text to Data: Sentiment Analysis of Presidential Election Political Forums
Authors: Sergio V Davalos, Alison L. Watkins
Abstract:
User generated content (UGC) such as website posts has data associated with it: time of the post, gender, location, type of device, and number of words. The text entered in UGC can provide a valuable dimension for analysis. In this research, each user post is treated as a collection of terms (words). In addition to the number of words per post, the frequency of each term is determined per post and as the sum of occurrences across all posts. This research focuses on one specific aspect of UGC: sentiment. Sentiment analysis (SA) was applied to the content (user posts) of two sets of political forums related to the US presidential elections of 2012 and 2016. Sentiment analysis derives data from the text, which enables the subsequent application of data analytic methods. The SASA (SAIL/SAI Sentiment Analyzer) model was used for sentiment analysis, yielding a sentiment score for each post. Based on these scores, there are significant differences in content and sentiment between the 2012 and 2016 presidential election forums. In the 2012 forums, 38% of the forums started with positive sentiment and 16% with negative sentiment. In the 2016 forums, 29% started with positive sentiment and 15% with negative sentiment. There were also changes in sentiment over time: for both elections, as the election drew closer, the cumulative sentiment score became negative. The candidate who won each election appeared in more posts than the losing candidates. In the case of Trump, the number of negative posts exceeded Clinton's highest number of posts, which were positive. KNIME topic modeling was used to derive topics from the posts. There were also changes in topics and keyword emphasis over time: initially, the political parties were the most referenced, and as the election drew closer the emphasis shifted to the candidates.
The SASA method predicted sentiment better than four other methods in SentiBench. The research resulted in deriving sentiment data from text. In combination with other data, the sentiment data provided insight and discovery about user sentiment in the US presidential elections of 2012 and 2016.
Keywords: sentiment analysis, text mining, user generated content, US presidential elections
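The cumulative-sentiment trend described above can be reproduced from per-post scores; a minimal sketch (the scores in the example are placeholders, since the actual scoring comes from the SASA model):

```python
def cumulative_sentiment(scored_posts):
    """scored_posts: list of (timestamp, score) pairs, score in [-1, 1].
    Returns the running cumulative score in chronological order -- the
    series whose sign flip marks the shift toward negative sentiment
    as the election approaches."""
    running = 0.0
    trend = []
    for _, score in sorted(scored_posts):
        running += score
        trend.append(running)
    return trend
```

Plotting this series per forum set makes the "cumulative sentiment became negative" observation directly visible.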
Procedia PDF Downloads 192
1591 The Role of Supply Chain Agility in Improving Manufacturing Resilience
Authors: Maryam Ziaee
Abstract:
This research proposes a new approach and provides an opportunity for manufacturing companies to produce large amounts of products that meet their prospective customers' tastes, needs, and expectations, while simultaneously enabling manufacturers to increase their profit. Mass customization is the production of products or services that meet each individual customer's desires to the greatest possible extent, in high quantities and at reasonable prices. This process takes place at different levels, such as the customization of a good's design, assembly, sale, and delivery status, and falls into several categories. The main focus of this study is on one class of mass customization, called optional customization, in which companies try to provide their customers with as many options as possible to customize their products. These options can range from the design phase to the manufacturing phase, or even to methods of delivery. Mass customization values customers' tastes, but that is only one side of client satisfaction; the other side is fast, responsive delivery by the company. This brings in the concept of agility: the ability of a company to respond rapidly to changes in volatile markets in terms of volume and variety. Indeed, mass customization is not effectively feasible without integrating the concept of agility. To gain customer satisfaction, companies need to respond quickly to their customers' demands, which highlights the significance of agility. This research offers a method that integrates mass customization and fast production in manufacturing industries. It is built upon the hypothesis that the key to being agile in mass customization is to forecast demand, cooperate with suppliers, and control inventory. Therefore, the significance of the supply chain (SC) is most pertinent at this stage.
Since SC behavior is dynamic and changes constantly, companies have to apply a predictive technique to identify the changes in SC behavior and respond properly to any unwelcome events. System dynamics, utilized in this research, is a simulation approach that provides a mathematical model relating different variables in order to understand, control, and forecast SC behavior. The final stage is delayed differentiation, the production strategy considered in this research. In this approach, the main product platform is produced and stocked, and when the company receives an order from a customer, a specific customized feature is assigned to this platform and the customized product is created. The main research question is to what extent applying system dynamics to the prediction of SC behavior improves the agility of mass customization. The research follows a qualitative approach to bring about richer, deeper, and more revealing results. The data are collected through interviews and analyzed with NVivo software. The proposed model offers numerous benefits, such as a reduction in the number of product inventories and their storage costs, improvement in the resilience of companies' responses to their clients' needs and tastes, increased profits, and the optimization of productivity with a minimum level of lost sales.
Keywords: agility, manufacturing, resilience, supply chain
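A system-dynamics view of the SC reduces to stocks and flows: an inventory stock, a production inflow driven by a demand forecast, and a demand outflow. A deliberately minimal sketch (the target level, adjustment time and smoothing constant are illustrative assumptions, not the model built in this research):

```python
def simulate_inventory(demand, target=100.0, adjust_time=4.0, alpha=0.3):
    """One-stock system-dynamics sketch. Each period, production equals the
    smoothed demand forecast plus a correction of the inventory gap spread
    over `adjust_time` periods; the stock is drained by actual demand.
    Returns the inventory trace over time."""
    stock = target
    expected = demand[0]           # initial demand forecast
    trace = []
    for d in demand:
        production = max(0.0, expected + (target - stock) / adjust_time)
        stock += production - d
        expected += alpha * (d - expected)   # exponential smoothing
        trace.append(stock)
    return trace
```

Under constant demand the stock stays at its target; a demand spike produces the dip-and-recover trajectory typical of supply chain stock-flow models, which is the kind of behavior the forecasting step aims to anticipate.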
Procedia PDF Downloads 91
1590 Managing Expatriates' Return: Repatriation Practices in a Sample of Firms in Portugal
Authors: Ana Pinheiro, Fatima Suleman
Abstract:
The literature reveals that companies are strongly aware of expatriation, but issues associated with the repatriation of employees after an international assignment have been overlooked. Repatriation is one of the most challenging human resource practices: it affects how companies benefit from acquired skills and high-potential employees, and how they gain competitive advantage through the networks developed during expatriation. However, the empirical evidence achieved so far suggests that expatriates have been disappointed because companies lack an effective repatriation strategy. Repatriates' professional and emotional needs are often unrecognized, while repatriation is perceived as a non-issue by companies. The underlying assumption is that the return to the parent company, and to the original country, culture and language, does not demand any particular support. Unfortunately, this basic view has non-negligible consequences for repatriates, especially on expatriate retention and turnover rates after expatriation. The goal of our study is to examine the specific policies and practices adopted by companies to support employees after an international assignment. We assume that expatriation is a process which ends with repatriation. The latter is as crucial an issue as expatriation and requires due attention through the appropriate design of human resource management policies and tools. For this purpose, we use data from qualitative research based on interviews with a sample of firms operating in Portugal. We attempt to compare how firms accommodate concerns with repatriation in their policies and practices. Therefore, the interviews collect data on both the expatriation and repatriation processes, namely the selection and skills of candidates for expatriation, training, mentoring, communication and pay policies. The Portuguese labor market seems to be an interesting case study for two main reasons.
On the one hand, the Portuguese Government is encouraging companies to internationalize in the context of an external market-oriented growth model. On the other hand, expatriation is perceived as a job opportunity in a context of high unemployment rates among both skilled and non-skilled workers. This is ongoing research, and the data collected so far indicate that companies follow the pattern described in the literature. The interviewed companies recognize that the repatriation process is even more relevant than expatriation, but disregard specific human resource policies. They perceive that unfavorable labor market conditions discourage mobility across companies. It should be stressed that, according to the companies, employees attach greater importance to stable jobs and far less importance to career development and other benefits after expatriation. However, there are still cases of turnover and difficulties of retention. Managers report non-negligible cases of turnover associated with the lack of effective repatriation programs and non-recognition of good performance. Repatriates seem to have acquired entrepreneurial spirit and skills, and often create their own companies. These results suggest that, even in the context of worsening labor market conditions, there should be greater awareness of the need to retain talented, experienced and highly skilled employees. Otherwise, other companies poach invaluable assets, while internationalized companies risk becoming mere training providers.
Keywords: expatriates, expatriation, international management, repatriation
Procedia PDF Downloads 336
1589 Hematological Malignancies in Children and Parental Occupational Exposure
Authors: H. Kalboussi, A. Aloui, W. Boughattas, M. Maoua, A. Brahem, S. Chatti, O. El Maalel, F. Debbabi, N. Mrizak, Y. Ben Youssef, A. Khlif, I. Bougmiza
Abstract:
Background: In recent decades, the incidence of childhood hematological malignancies has been increasing worldwide, including in Tunisia. Their severity is reflected in the importance of their medical, social and economic impact. This increase remains largely unexplained, and the involvement of genetic, environmental and occupational factors is strongly suspected. Materials and Methods: Our study is a case-control survey conducted in the University Hospital Farhat Hached of Sousse during the period between 1 July 2011 and 30 June 2012, comparing children with acute leukemia to children free of neoplastic disease. Cases and controls were matched by age and gender. Our objectives were to: - Describe the socio-occupational characteristics of the parents of children with acute leukemia. - Identify potential occupational factors implicated in the genesis of acute leukemia. Results: The number of acute leukemia cases in the Hematology Service and day hospital of the University Hospital Farhat Hached during the study period was 66: 40 boys and 26 girls, a sex ratio of 1.53. The risk of leukemia was higher in children of smoking fathers (p = 0.02, OR = 2.24, 95% CI = [1.11-4.52]) and in children of alcoholic fathers (p = 0.009, OR = 3.9, 95% CI = [1.33-11.39]); after adjusting for different variables, the latter difference persisted significantly (pa = 0.03, ORa = 3.5, CIa = [1.09-11.6]). 25.7% of cases had a family history of blood disease and neoplasia, whereas no control did; the difference was statistically significant (p = 0.006, OR = 1.46, 95% CI = [1.38-1.56]). The parental occupational exposures associated with the occurrence of acute leukemia in children were: - Pesticides, with a statistically significant difference (p = 0.03, OR = 2.94, 95% CI = [1.06-8.13]).
This difference persisted after adjustment for the different variables (pa = 0.01, ORa = 3.75, CIa = [1.27-11.03]). - Cement, with a statistically non-significant crude difference (p = 0.2) that became significant after adjustment for the different variables (pa = 0.03, ORa = 2.67, CIa = [1.06-6.7]). Conclusion: Parental exposure to occupational risk factors may play a role in the pathogenesis of acute leukemia in children.
Keywords: hematological malignancies, children, parents, occupational exposure
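The odds ratios and confidence intervals quoted above are standard 2×2-table statistics; a sketch using Woolf's logit method (the cell counts in the example are invented for illustration, not the study's data):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and approximate 95% CI (Woolf's method) from a 2x2 table:
    a = exposed cases, b = exposed controls,
    c = unexposed cases, d = unexposed controls."""
    odds_ratio = (a * d) / (b * c)
    se_log_or = math.sqrt(1.0 / a + 1.0 / b + 1.0 / c + 1.0 / d)
    lower = math.exp(math.log(odds_ratio) - z * se_log_or)
    upper = math.exp(math.log(odds_ratio) + z * se_log_or)
    return odds_ratio, lower, upper
```

A CI whose lower bound exceeds 1 (as for pesticides above) indicates a statistically significant association at the 5% level.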
Procedia PDF Downloads 318
1588 The Role of Hypothalamus Mediators in Energy Imbalance
Authors: Maftunakhon Latipova, Feruza Khaydarova
Abstract:
Obesity is considered a chronic metabolic disease that occurs at any age. Body weight is regulated through the complex interaction of interrelated systems that control the body's energy balance. Energy imbalance, in which the energy supplied by food exceeds the energy needs of the body, is the cause of obesity and overweight. Obesity is closely related to impaired appetite regulation, and the hypothalamus is a key site for the neural regulation of food consumption. The hypothalamic nuclei are interconnected and interdependent in receiving, integrating and sending hunger signals to regulate appetite. Purpose of the study: to identify markers of eating behavior. Materials and methods: Screening was carried out to identify eating disorders in 200 men and women aged 18 to 35 years with overweight and obesity, and the levels of the markers Orexin A and Neuropeptide Y were checked. Questionnaires were administered to the 200 participants, covering eating disorders and hidden depression (on the Zung scale). Anthropometry was measured by OT, OB, BMI, weight, and height. Based on the collected data, three groups were formed: people with obesity, people with overweight, and a control group of healthy people. Results: Of the 200 analysed persons, 86% had eating disorders; of these, 60% of the eating disorders originated in childhood. According to the Zung test results, about 37% were in a normal condition, 20% had mild depressive disorder, 25% moderate depressive disorder, and 18% suffered from severe depressive disorder without knowing it. One group, people with obesity, had eating disorders and moderate to severe depressive disorder; group 2, people with overweight, had mild depressive disorder. According to the laboratory data, the first group had the lowest serum concentrations of Orexin A and Neuropeptide Y.
Conclusions: Overweight and obesity are early signals of many diseases, and the prevention and detection of these disorders will prevent various diseases, including type 2 diabetes. The etiology of obesity is associated with eating disorders and with signal transmission in the orexinergic system of the hypothalamus.
Keywords: obesity, endocrinology, hypothalamus, overweight
Procedia PDF Downloads 76
1587 Assessment of On-Site Solar and Wind Energy at a Manufacturing Facility in Ireland
Authors: A. Sgobba, C. Meskell
Abstract:
The feasibility of on-site electricity production from solar and wind, and the resulting load management, are assessed for a specific manufacturing plant in Ireland. The industry sector accounts, directly and indirectly, for a high percentage of electricity consumption and global greenhouse gas emissions; therefore, it will play a key role in emission reduction and control. Manufacturing plants, in particular, are often located in non-residential areas, since they require open spaces for production machinery, parking facilities for employees, appropriate routes for supply and delivery, and special connections to the national grid, and they have other environmental impacts. Since they have larger spaces than commercial sites in urban areas, they represent an appropriate case study for evaluating the technical and economic viability of integrating low power density technologies, such as solar and wind, for on-site electricity generation. The open space surrounding the analysed manufacturing plant can be used efficiently to produce a discrete quantity of energy that is instantaneously and locally consumed, so transmission and distribution losses can be reduced. Storage is not required, owing to the high and almost constant electricity consumption profile. The energy load of the plant is identified through the analysis of gas and electricity consumption, both internally monitored and reported on the bills. These data are often neither recorded nor available to third parties, since manufacturing companies usually keep track only of overall energy expenditures. The solar potential is modelled over a period of 21 years based on global horizontal irradiation data; the hourly direct and diffuse radiation and the energy produced by the system at the optimum pitch angle are calculated. The model is validated using the PVWatts and SAM tools. Wind speed data are available for the same period at a one-hour step at a height of 10 m.
Since the hub of a typical wind turbine sits at a greater height, complementary data measured at 50 m for a different location are compared, and a model is defined to estimate wind speed at the required height at the site. A Weibull statistical distribution is used to evaluate the wind energy potential of the site. The results show that solar and wind energy are, as expected, generally decoupled. Based on the real case study, the percentage of the load covered every hour by on-site generation (Level of Autonomy, LA) and the resulting electricity bought from the grid (Expected Energy Not Supplied, EENS) are calculated. The economic viability of the project is assessed through Net Present Value (NPV), and the influence of the main technical and economic parameters on NPV is presented. Since the results show that the analysed renewable sources cannot provide enough electricity, integration with a cogeneration technology is studied. Finally, the benefit of integrating wind, solar and cogeneration into the energy system is evaluated and discussed. Keywords: demand, energy system integration, load, manufacturing, national grid, renewable energy sources
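As a reader's aid, the wind-resource estimate described above can be sketched numerically: the mean wind power density follows from integrating the cubed wind speed against the Weibull density. The shape and scale parameters below (k = 2, c = 7 m/s) are illustrative assumptions only, not the values fitted for the Irish site.

```python
import math

def weibull_pdf(v, k, c):
    """Weibull probability density for wind speed v (m/s),
    with shape k and scale c (m/s)."""
    return (k / c) * (v / c) ** (k - 1) * math.exp(-((v / c) ** k))

def mean_power_density(k, c, rho=1.225, v_max=30.0, dv=0.01):
    """Mean wind power density (W/m^2): numerical integral of
    0.5 * rho * v^3 * f(v) over the wind-speed range."""
    total, v = 0.0, dv
    while v < v_max:
        total += 0.5 * rho * v ** 3 * weibull_pdf(v, k, c) * dv
        v += dv
    return total

power_density = mean_power_density(2.0, 7.0)  # ~279 W/m^2 for these parameters
```

Multiplying such a density by the rotor swept area and a capacity factor gives a first-order estimate of hourly on-site generation, from which LA and EENS can be tallied against the measured load.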
Procedia PDF Downloads 129
1586 Connotation Reform and Problem Response of Rural Social Relations under the Influence of the Earthquake: With a Review of Wenchuan Decade
Abstract:
The Wenchuan earthquake of 2008 caused severe damage to the rural areas of Chengdu city, including the rupture of social networks, the stagnation of economic production and the disruption of living space, making post-disaster reconstruction a long-term issue. As an important link maintaining the order of rural social development, the social network should be an important element of post-disaster reconstruction. Therefore, this paper takes rural reconstruction communities in the earthquake-stricken areas of Chengdu as the research object and adopts sociological research methods such as field survey, observation and interview to understand the transformation of rural social networks under the influence of the earthquake and their impact on rural space. It finds that rural societies affected by the earthquake generally experienced three phases: the breakdown of stable social relations, a temporary non-normal transition, and the reorganization of social networks. The connotation of rural social relations changed accordingly in each phase: towards a new division of labor in the social dimension, towards capital flow and redistribution under a new production mode in the capital dimension, and towards relative decentralization after concentration in the spatial dimension. Along with these changes, some social issues have emerged in rural areas, such as the alienation of competition in the new industrial division of labor, weak social connections, a significant redistribution of capital, and the lack of public space. Based on a comprehensive review of these issues, this paper proposes corresponding response mechanisms. First, a reasonable division of labor should be established within the villages to realize diversified commodity supply. Secondly, the villages should adjust their industrial types to promote the equitable participation of groups in capital allocation.
Finally, external public spaces should be added to strengthen the field of social interaction within the communities. Keywords: social relations, social support networks, industrial division, capital allocation, public space
Procedia PDF Downloads 156
1585 Characterization of Ethanol-Air Combustion in a Constant Volume Combustion Bomb Under Cellularity Conditions
Authors: M. Reyes, R. Sastre, P. Gabana, F. V. Tinaut
Abstract:
In this work, an optical characterization of laminar ethanol-air combustion is presented in order to investigate the origin of the instabilities developed during combustion, the onset of the cellular structure and the laminar burning velocity. Experimental tests of ethanol-air mixtures have been performed in an optical cylindrical constant volume combustion bomb equipped with a Schlieren technique to record the flame development and the wrinkling of the flame front surface. With this procedure, it is possible to obtain the flame radius and to identify the time at which instabilities become visible through the appearance of cells and the development of the cellular structure. Ethanol is an aliphatic alcohol with interesting characteristics for use as a fuel in internal combustion engines and can be biologically synthesized from biomass. The laminar burning velocity is an important parameter used in simulations to obtain the turbulent flame speed, whereas the flame front structure and the instabilities developed during combustion are important for understanding the transition to turbulent combustion and for characterizing the increase in flame propagation speed in premixed flames. The cellular structure is spontaneously generated by volume forces and by diffusional-thermal and hydrodynamic instabilities. Many authors have studied the combustion of ethanol-air and of ethanol blended with other fuels. However, there is a lack of works investigating the instabilities and the development of a cellular structure in ethanol flames; only a few works have characterized ethanol-air combustion instabilities in spherical flames. In the present work, a parametric study is made by varying the fuel/air equivalence ratio (0.8-1.4), initial pressure (0.15-0.3 MPa) and initial temperature (343-373 K), using an I-optimal design of experiments. In rich mixtures, it is possible to distinguish the cellular structure formed by the hydrodynamic effect from that formed by the thermo-diffusive effect.
Results show that ethanol-air flames tend to stabilize as the equivalence ratio decreases in lean mixtures and develop a cellular structure with increasing initial pressure and temperature. Keywords: ethanol, instabilities, premixed combustion, schlieren technique, cellularity
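The processing chain implied above, in which Schlieren flame radius histories yield a propagation speed that is then scaled by the burned-to-unburned density ratio to give the laminar burning velocity, can be sketched as follows. The radius samples and the expansion ratio of 7 are hypothetical placeholders, not measured values from this work.

```python
def propagation_speed(times, radii):
    """Least-squares slope dr/dt of the flame radius history (m/s)."""
    n = len(times)
    mt, mr = sum(times) / n, sum(radii) / n
    num = sum((t - mt) * (r - mr) for t, r in zip(times, radii))
    den = sum((t - mt) ** 2 for t in times)
    return num / den

def laminar_burning_velocity(s_b, burned_to_unburned_density):
    """Unstretched laminar burning velocity: S_u = S_b * (rho_b / rho_u)."""
    return s_b * burned_to_unburned_density

# Hypothetical radius samples (m) taken every 1 ms from Schlieren frames.
t = [0.001 * i for i in range(5)]
r = [0.010 + 2.0 * ti for ti in t]        # linear growth at 2 m/s
s_b = propagation_speed(t, r)             # spatial flame propagation speed
s_u = laminar_burning_velocity(s_b, 1.0 / 7.0)
```

In practice the slope is evaluated only over the radius range unaffected by ignition and wall effects, and before cellularity accelerates the flame.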
Procedia PDF Downloads 66
1584 Optimizing the Location of Parking Areas Adapted for Dangerous Goods in the European Road Transport Network
Authors: María Dolores Caro, Eugenio M. Fedriani, Ángel F. Tenorio
Abstract:
The transportation of dangerous goods by lorry throughout Europe must use the roads forming the European Road Transport Network. In this network, there are several parking areas where lorry drivers can park to rest according to the regulations. According to the European Agreement concerning the International Carriage of Dangerous Goods by Road, parking areas where lorries transporting dangerous goods can stop to rest must follow several security stipulations to keep the rest of the road users safe. In this respect, these lorries must be parked in adapted areas with strict and permanent surveillance measures, and drivers must satisfy several restrictions on resting and driving time. Given these facts, one might expect there to be enough parking areas for the transport of this type of goods to comply with the regulations prescribed by the European Union and its member countries. However, the already-existing parking areas are not sufficient to cover all the stops required by drivers transporting dangerous goods. Our main goal is, starting from the already-existing parking areas and the loading-and-unloading locations, to provide an optimal answer to the following question: how many additional parking areas must be built, and where must they be located, to ensure that lorry drivers can transport dangerous goods while following all the stipulations about security and safety for their stops? The word “optimal” refers to the fact that we give a global solution for the location of parking areas throughout the whole European Road Transport Network while keeping the number of additional areas as low as possible. To do so, we have modeled the problem using graph theory, since we are working with a road network. As nodes, we have considered the locations of each already-existing parking area, each loading-and-unloading area and each road bifurcation.
Each road connecting two nodes is considered as an edge in the graph, whose weight corresponds to the distance between the two nodes. By applying a new efficient algorithm, we have found the additional nodes of the network representing the new parking areas adapted for dangerous goods, subject to the constraint that the distance between two consecutive parking areas must be less than or equal to 400 km. Keywords: trans-European transport network, dangerous goods, parking areas, graph-based modeling
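The authors' algorithm is not given in the abstract; as a rough illustration of the graph formulation, the sketch below computes all-pairs road distances with Floyd-Warshall and then adds candidate parking nodes greedily until every node lies within 400 km of some parking area. This set-cover heuristic is an assumption for illustration only, not the paper's method.

```python
def floyd_warshall(n, edges):
    """All-pairs shortest-path distances on an undirected weighted graph.
    `edges` is a list of (u, v, km) tuples over nodes 0..n-1."""
    INF = float("inf")
    d = [[0.0 if i == j else INF for j in range(n)] for i in range(n)]
    for u, v, w in edges:
        d[u][v] = min(d[u][v], w)
        d[v][u] = min(d[v][u], w)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

def greedy_new_parkings(dist, parkings, candidates, radius=400.0):
    """Add candidate nodes until every node is within `radius` km of a
    parking area; returns the list of added nodes (greedy set cover)."""
    uncovered = {i for i in range(len(dist))
                 if min(dist[i][p] for p in parkings) > radius}
    added = []
    while uncovered:
        best = max(candidates,
                   key=lambda c: sum(1 for i in uncovered if dist[i][c] <= radius))
        gained = {i for i in uncovered if dist[i][best] <= radius}
        if not gained:
            break  # remaining nodes are unreachable within the radius
        added.append(best)
        uncovered -= gained
    return added
```

On a toy chain of four nodes spaced 300 km apart with a single parking area at one end, the heuristic adds one mid-chain node to bring every node within range.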
Procedia PDF Downloads 280
1583 Experimental Study of the Efficacy and Emission Properties of a Compression Ignition Engine Running on Fuel Additives with Varying Engine Loads
Authors: Faisal Mahroogi, Mahmoud Bady, Yaser H. Alahmadi, Ahmed Alsisi, Sunny Narayan, Muhammad Usman Kaisan
Abstract:
The Kingdom of Saudi Arabia established Saudi Vision 2030, a government initiative with the goal of promoting greater socioeconomic and cultural diversity. The kingdom, which is dedicated to sustainable development and clean energy, uses cutting-edge approaches to address energy-related issues, including the circular carbon economy (CCE) and a more varied energy mix. Sustainability is essential for Saudi Arabia to achieve its Vision 2030 goal of a net-zero future by 2060. By addressing the energy and climate issues of the modern world with responsibility and innovation, Vision 2030 is becoming a global role model for the transition to a sustainable future. As per the ambitions of the National Environment Strategy of the Saudi Ministry of Environment, Water and Agriculture (MEWA), raising environmental compliance across all sectors and reducing pollution and adverse environmental impacts are critical focus areas. Accordingly, the current study presents an experimental analysis of the performance and exhaust emissions of a diesel engine running partly on waste cooking oil (WCO). The engine used is a one-cylinder, naturally aspirated, direct-injection diesel engine operated at constant speed. The engine performance and emission parameters were investigated when the engine was fueled with a blend of 70% diesel, 10% butanol, 10% WCO and 10% diethyl ether (D70B10W10DD10). The study's findings demonstrated that engine emissions of nitrogen oxides (NOX) and carbon monoxide (CO) varied significantly depending on the applied load. The brake thermal efficiency, cylinder pressure and brake power of the engine were all affected by the load change. Keywords: ICE, waste cooking oil, fuel additives, butanol, combustion, emission characteristics
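For reference, the brake thermal efficiency mentioned above is the ratio of brake power to the rate of fuel chemical energy input; for a blend this depends on the mass-weighted lower heating value. The LHV figures and operating point below are rough, assumed values for illustration, not measurements from this study.

```python
def blend_lhv(fractions_lhv):
    """Mass-weighted lower heating value of a fuel blend (MJ/kg).
    `fractions_lhv` is a list of (mass_fraction, LHV_MJ_per_kg) pairs."""
    return sum(f * lhv for f, lhv in fractions_lhv)

def brake_thermal_efficiency(brake_power_kw, fuel_flow_kg_per_h, lhv_mj_per_kg):
    """BTE = brake power / rate of fuel energy input."""
    fuel_power_kw = fuel_flow_kg_per_h / 3600.0 * lhv_mj_per_kg * 1000.0
    return brake_power_kw / fuel_power_kw

# Assumed LHVs (MJ/kg): diesel ~42.5, butanol ~33.1, WCO ~37.0, DEE ~33.9.
lhv = blend_lhv([(0.70, 42.5), (0.10, 33.1), (0.10, 37.0), (0.10, 33.9)])
bte = brake_thermal_efficiency(3.5, 1.0, lhv)  # hypothetical operating point
```

Repeating the calculation at each load step gives the BTE-versus-load trend that studies of this kind report.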
Procedia PDF Downloads 62
1582 Thomas Kuhn, the Accidental Theologian: An Argument for the Similarity of Science and Religion
Authors: Dominic McGann
Abstract:
Applying Kuhn’s model of paradigm shifts in science to cases of doctrinal change in religion has been a common area of study in recent years. Few authors, however, have sought an explanation for the ease with which this model of theory change in science can be applied to cases of religious change. In order to provide such an explanation, this paper aims to answer one central question: why can a theory intended for the analysis of the history of science be applied to something as disparate as the doctrinal history of religion with little to no modification? By way of answering this question, this paper begins with an explanation of Kuhn’s model and its applications in the field of religious studies. Following this, Massa’s recently proposed explanation for this phenomenon and its notable flaws are examined, framing the central proposal of this article: that the operative parts of scientific and religious changes function on the same fundamental concept of changes in understanding. Focusing its argument on this key concept, this paper seeks to illustrate its operation in cases of religious conversion and in Kuhn’s notion of the incommensurability of different scientific paradigms. The conjecture of this paper is that just as a Pagan-turned-Christian ceases to hear Thor’s hammer in a clap of thunder, so too does a Ptolemaic-turned-Copernican astronomer cease to see the Sun orbiting the Earth in a sunrise. In both cases, the agent in question has undergone a similar change in universal understanding, which provides a fundamental connection between changes in religion and changes in science. Following an exploration of this connection, the paper considers the implications that such a connection has for the supposed division between religion and science.
This leads, in turn, to the conclusion that, with regard to the fundamental notion of understanding, religion and science are more alike than they are opposed, thereby providing an answer to the central question. The major finding of this paper is that Kuhn’s model can be applied to religious cases so easily because changes in science and changes in religion operate on the same type of change in understanding. In summary, therefore, science and religion share a crucial similarity and are not as disparate as they first appear. Keywords: Thomas Kuhn, science and religion, paradigm shifts, incommensurability, insight and understanding, philosophy of science, philosophy of religion
Procedia PDF Downloads 171
1581 Developing the Principal Change Leadership Non-Technical Competencies Scale: An Exploratory Factor Analysis
Authors: Tai Mei Kin, Omar Abdull Kareem
Abstract:
In light of globalization, educational reform has become a top priority for many countries. However, the task of leading change effectively requires a multidimensional set of competencies. Over the past two decades, the technical competencies of principal change leadership have been extensively analysed and discussed. Comparatively little research has been conducted in the Malaysian education context on non-technical competencies, popularly known as emotional intelligence, which are equally crucial for the success of change. This article provides a validation of the Principal Change Leadership Non-Technical Competencies (PCLnTC) Scale, a tool that practitioners can easily use to assess school principals’ level of the change leadership non-technical competencies that facilitate change and maximize change effectiveness. The overall coherence of the PCLnTC model was constructed by incorporating three theories: a) change leadership theory, whereby leading change is the fundamental role of a leader; b) competency theory, in which leadership can be taught and learned; and c) the concept of emotional intelligence, whereby it can be developed, fostered and taught. An exploratory factor analysis (EFA) was used to determine the underlying factor structure of the PCLnTC model. Before conducting the EFA, five pilot test approaches were used to ensure the validity and reliability of the instrument: a) review by academic colleagues; b) verification and comments from a panel; c) evaluation of the questionnaire format, syntax, design and completion time; d) evaluation of item clarity; and e) assessment of internal consistency reliability. A total of 335 teachers from 12 High Performing Secondary Schools in Malaysia completed the survey. The PCLnTC Scale, with a six-point Likert-type response format, was subjected to Principal Components Analysis. The analysis yielded a three-factor solution, namely a) Interpersonal Sensitivity, b) Flexibility and c) Motivation, explaining a total of 74.326 per cent of the variance.
Based on the results, implications for instrument revisions are discussed and specifications for future confirmatory factor analysis are delineated. Keywords: exploratory factor analysis, principal change leadership non-technical competencies (PCLnTC), interpersonal sensitivity, flexibility, motivation
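For readers unfamiliar with how factor retention and "per cent of variance explained" figures such as the one above are derived, the sketch below applies the common Kaiser criterion to the eigenvalues of an item correlation matrix. The ten eigenvalues are hypothetical, chosen only to mimic a three-factor solution; they are not this study's data.

```python
def retained_components(eigenvalues):
    """Kaiser criterion: retain components whose eigenvalue exceeds 1.0."""
    return [ev for ev in eigenvalues if ev > 1.0]

def percent_variance(eigenvalues, retained):
    """Cumulative per cent of total variance explained by the retained
    components (for a correlation matrix, total variance = number of items)."""
    return 100.0 * sum(retained) / sum(eigenvalues)

# Hypothetical eigenvalues for a 10-item scale; three exceed 1.0.
evs = [4.1, 2.0, 1.3, 0.7, 0.5, 0.4, 0.3, 0.3, 0.2, 0.2]
kept = retained_components(evs)
pct = percent_variance(evs, kept)   # three factors, 74.0 per cent here
```

Retention decisions in practice also weigh scree plots and interpretability, and the retained factors are rotated (e.g. varimax) before being labelled.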
Procedia PDF Downloads 425