4920 Investigation of the Speckle Pattern Effect for Displacement Assessments by Digital Image Correlation
Authors: Salim Çalışkan, Hakan Akyüz
Abstract:
Digital image correlation has become established as a versatile and efficient method for measuring displacements on specimen surfaces by comparing reference subsets in the undeformed image with the corresponding target subsets in the deformed image. The theoretical model indicates that the accuracy of digital image correlation displacement data can be predicted from the variance of the image noise and the sum of the squares of the subset intensity gradients. The digital image correlation procedure locates each subset of the original image in the deformed image. The software then determines the displacement values of the centers of the subsets, providing the full-field displacement measurements. In this paper, the effect of the speckle distribution on the measured out-of-plane displacement data as a function of the subset size was investigated. Nine groups of speckle patterns were used in this study: samples were sprayed randomly through pre-manufactured patterns of three different hole diameters, each with three coverage ratios, on a computer numerical control punch press. The resulting displacement values, referenced at the center of the subset, are evaluated based on the average of the displacements of the pixels inside the subset.
Keywords: digital image correlation, speckle pattern, experimental mechanics, tensile test, aluminum alloy
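The subset-matching step described above can be illustrated with a minimal sketch: an integer-pixel search that locates one reference subset in the deformed image by maximizing the zero-normalized cross-correlation. Array names, the subset size and the search radius are illustrative assumptions; the sub-pixel interpolation used by actual DIC software is omitted.

```python
import numpy as np

def match_subset(ref_img, def_img, center, subset=21, search=10):
    """Locate one reference subset in the deformed image by maximizing the
    zero-normalized cross-correlation (integer-pixel search only)."""
    half = subset // 2
    r, c = center
    ref = ref_img[r - half:r + half + 1, c - half:c + half + 1].astype(float)
    ref = (ref - ref.mean()) / (ref.std() + 1e-12)
    best_score, best_uv = -np.inf, (0, 0)
    for du in range(-search, search + 1):
        for dv in range(-search, search + 1):
            tgt = def_img[r + du - half:r + du + half + 1,
                          c + dv - half:c + dv + half + 1].astype(float)
            if tgt.shape != ref.shape:
                continue  # candidate subset ran off the image border
            tgt = (tgt - tgt.mean()) / (tgt.std() + 1e-12)
            score = np.mean(ref * tgt)
            if score > best_score:
                best_score, best_uv = score, (du, dv)
    return best_uv  # displacement of the subset center, in whole pixels
```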
Procedia PDF Downloads 71
4919 Introduction to Various Innovative Techniques Suggested for Seismic Hazard Assessment
Authors: Deepshikha Shukla, C. H. Solanki, Mayank K. Desai
Abstract:
Amongst all natural hazards, earthquakes have the potential to cause the greatest damage. Since earthquake forces are random in nature and unpredictable, quantification of the hazard becomes important. The time and place of a future earthquake are both uncertain. Since earthquakes can neither be prevented nor predicted, engineers have to design and construct in such a way that the damage to life and property is minimized. Seismic hazard analysis plays an important role in the earthquake-resistant design of structures by providing rational values of the input parameters. In this paper, both mathematical and computational methods adopted by researchers globally in the past five years will be discussed. Some mathematical approaches involving the concepts of Poisson's ratio, Convex Set Theory, Empirical Green's Function, Bayesian probability estimation applied to seismic hazard, and the FOSM (first-order second-moment) algorithm will be discussed. Computational approaches and the numerical model SSIFiBo, developed in MATLAB to study dynamic soil-structure interaction problems, are also discussed in this paper. GIS-based tools, which are predominantly used in the assessment of seismic hazards, will also be discussed.
Keywords: computational methods, MATLAB, seismic hazard, seismic measurements
Procedia PDF Downloads 339
4918 A Natural Killer T Cell Subset That Protects against Airway Hyperreactivity
Authors: Ya-Ting Chuang, Krystle Leung, Ya-Jen Chang, Rosemarie H. DeKruyff, Paul B. Savage, Richard Cruse, Christophe Benoit, Dirk Elewaut, Nicole Baumgarth, Dale T. Umetsu
Abstract:
We examined characteristics of a Natural Killer T (NKT) cell subpopulation that developed during influenza infection in neonatal mice, and that suppressed the subsequent development of allergic asthma in a mouse model. This NKT cell subset expressed CD38 but not CD4, produced IFN-γ, but not IL-17, IL-4 or IL-13, and inhibited the development of airway hyperreactivity (AHR) through contact-dependent suppressive activity against helper CD4 T cells. The NKT subset expanded in the lungs of neonatal mice after infection with influenza, but also after treatment of neonatal mice with a Th1-biasing α-GalCer glycolipid analogue, Nu-α-GalCer. These results suggest that early/neonatal exposure to infection or to antigenic challenge can affect subsequent lung immunity by altering the profile of cells residing in the lung and that some subsets of NKT cells can have direct inhibitory activity against CD4+ T cells in allergic asthma. Importantly, our results also suggest a potential therapy for young children that might provide protection against the development of asthma.
Keywords: NKT subset, asthma, airway hyperreactivity, hygiene hypothesis, influenza
Procedia PDF Downloads 238
4917 Analysis of Moment Rotation Curve for Steel Beam Column Joint
Authors: A. J. Shah, G. R. Vesmawala
Abstract:
Connections play a fundamental role in the global behaviour of steel structures. In order to evaluate the real influence of the physical and geometrical parameters that control their behaviour, many experimental tests and analyses have been carried out, but a definitive answer to the problem in question is still lacking. Here, various configurations of bolts were tried and the resulting moment-rotation (M-θ) curves were plotted. The connection configuration is such that two bolts are located above each of the flanges and beside each of the webs. The model considers the combined effects of prying action, the formation of yield lines, and failures due to punching shear and beam section failure. For many types of connections, the stiffness at the service load level falls somewhere between the fully restrained and simple limits, and designers need to account for this behaviour. The M-θ curves are generally assumed to be the best characterization of connection behaviour. The moment-rotation curves are generally derived from experiments on cantilever-type specimens. The moments are calculated directly from the statics of the specimen, while the rotations are measured over a distance that typically extends to the point of loading. Thus, this paper establishes the relationship between the M-θ behaviour of the different types of connections tested and presents the relative strength of various possible arrangements of bolts.
Keywords: bolt, moment, rotation, stiffness, connections
Procedia PDF Downloads 392
4916 Glocalization of Journalism and Mass Communication Education: Best Practices from an International Collaboration on Curriculum Development
Authors: Bellarmine Ezumah, Michael Mawa
Abstract:
Glocalization is often defined as the practice of conducting business according to both local and global considerations - this epitomizes the curriculum co-development collaboration between a journalism and mass communications professor from a university in the United States and Uganda Martyrs University in Uganda, where a brand new journalism and mass communications program was recently co-developed. This paper presents the experiences and research results of this initiative, which was funded through the Institute of International Education (IIE) under the umbrella of the Carnegie African Diaspora Fellowship Program (CADFP). Vital international and national concerns were addressed. On a global level, scholars have questioned and criticized the Western model ingrained in journalism and mass communication curricula and proposed a decolonization of journalism curricula. Another major criticism is the practice of Western-based educators transplanting their curriculum verbatim to other regions of the world without paying greater attention to local needs. To address these two global concerns, an extensive assessment of local needs was conducted prior to the conceptualization of the new program. The needs assessment adopted a participatory action model and captured the knowledge and narratives of both internal and external stakeholders. This involved reviewing pertinent documents including the nation's constitution, governmental briefs and promulgations, interviewing governmental officials, media and journalism educators, media practitioners, and students, and benchmarking the curricula of other tertiary institutions in the nation. Information gathered through this process served as a blueprint and frame of reference for all design decisions. In the area of local needs, four key factors were addressed. First, the realization that most media personnel in Uganda are both academically and professionally unqualified. Second, the practitioners with academic training were found lacking in experience. Third, the current curricula offered at several tertiary institutions are not comprehensive and lack local relevance. The project addressed these problems thus: first, the program was designed to cater to both traditional and non-traditional students, offering opportunities for unqualified media practitioners to get their formal training through evening and weekender programs. Secondly, the challenge of inexperienced graduates was mitigated by designing the program to adopt the experiential learning approach which many refer to as the 'Teaching Hospital Model'. This entails integrating practice with theory - similar to the way medical students engage in hands-on practice under the supervision of a mentor. The university drew up a Memorandum of Understanding (MoU) with reputable media houses for students and faculty to use their studios for hands-on experience and for seasoned media practitioners to guest-teach some courses. With the convergence of functions in today's media industry, graduates should be trained to have adequate knowledge of other disciplines; therefore, the curriculum integrated cognate courses that would render graduates versatile. Ultimately, this research serves as a template for African colleges and universities to follow in their quest to glocalize their curricula. While the general concept of journalism may remain Western, journalism curriculum developers in Africa, through extensive assessment of needs and by focusing on those needs and other societal particularities, can adjust the Western module to fit their local needs.
Keywords: curriculum co-development, glocalization of journalism education, international journalism, needs assessment
Procedia PDF Downloads 127
4915 Gasification of Trans-4-Hydroxycinnamic Acid with Ethanol at Elevated Temperatures
Authors: Shyh-Ming Chern, Wei-Ling Lin
Abstract:
Lignin is a major constituent of woody biomass and exists abundantly in nature. It is a major byproduct of the paper industry and of bioethanol production processes, and these byproducts are mainly used for low-value applications. Instead, lignin can be converted into higher-value gaseous fuel, thereby helping to curtail the ever-growing price of oil and to slow down the trend of global warming. Although biochemical treatment is capable of converting cellulose into liquid ethanol fuel, it cannot be applied to the conversion of lignin. Alternatively, it is possible to convert lignin into gaseous fuel thermochemically. In the present work, trans-4-hydroxycinnamic acid, a model compound that closely resembles the basic building blocks of lignin, is gasified in an autoclave with ethanol at elevated temperatures and pressures above the critical point of ethanol. Ethanol, instead of water, is chosen because ethanol dissolves trans-4-hydroxycinnamic acid easily and helps to convert it into lighter gaseous species relatively well. The major operating parameters for the gasification reaction include temperature (673-873 K), reaction pressure (5-25 MPa) and feed concentration (0.05-0.3 M). Generally, more than 80% of the reactants, including trans-4-hydroxycinnamic acid and ethanol, were converted into gaseous products at an operating condition of 873 K and 5 MPa.
Keywords: ethanol, gasification, lignin, supercritical
Procedia PDF Downloads 238
4914 A Daily Diary Study on Technology-Assisted Supplemental Work, Psychological Detachment, and Well-Being – The Mediating Role of Cognitive Coping
Authors: Clara Eichberger, Daantje Derks, Hannes Zacher
Abstract:
Technology-assisted supplemental work (TASW) involves performing job-related tasks after regular working hours with the help of technological devices. Due to emerging information and communication technologies, such behavior is becoming increasingly common. Since previous findings on the relationships among TASW, psychological detachment, and well-being are mixed, this study aimed to examine the moderating roles of appraisal and cognitive coping. A moderated mediation model was tested with daily diary data from 100 employees. As hypothesized, TASW was positively related to negative affect at bedtime. In addition, psychological detachment mediated this relationship. Results did not confirm appraisal and cognitive coping as moderators. However, additional analyses revealed cognitive coping as a mediator of the positive relationship between TASW and positive affect at bedtime. These results suggest that, on the one hand, engaging in TASW can be harmful to employee well-being (i.e., more negative affect), and on the other hand, it can also be associated with higher well-being (i.e., more positive affect) when it is accompanied by cognitive coping.
Keywords: cognitive coping, psychological detachment, technology-assisted supplemental work, well-being
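The mediation logic summarized above can be illustrated with a simplified, single-level product-of-coefficients sketch; the actual study uses multilevel daily-diary data and a moderated mediation model, which this sketch does not reproduce. Variable names and the synthetic data are assumptions for illustration only.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic day-level data: TASW, psychological detachment, negative affect at bedtime.
rng = np.random.default_rng(0)
n = 500
tasw = rng.normal(size=n)
detach = -0.4 * tasw + rng.normal(size=n)                 # more TASW -> less detachment
neg_affect = -0.3 * detach + 0.1 * tasw + rng.normal(size=n)
df = pd.DataFrame({"tasw": tasw, "detach": detach, "neg_affect": neg_affect})

a_path = smf.ols("detach ~ tasw", df).fit()               # TASW -> detachment
b_path = smf.ols("neg_affect ~ tasw + detach", df).fit()  # detachment -> negative affect
indirect = a_path.params["tasw"] * b_path.params["detach"]  # product of coefficients
print(f"indirect effect of TASW via detachment: {indirect:.3f}")
```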
Procedia PDF Downloads 192
4913 Detection of Keypoint in Press-Fit Curve Based on Convolutional Neural Network
Authors: Shoujia Fang, Guoqing Ding, Xin Chen
Abstract:
The quality of press-fit assembly is closely related to the reliability and safety of the product. This paper proposes a keypoint detection method based on a convolutional neural network to improve the accuracy of keypoint detection in press-fit curves. It provides an auxiliary basis for judging the quality of press-fit assembly. The press-fit curve is a curve of press-fit force versus displacement. Both the force data and the displacement data are time-series data. Therefore, a one-dimensional convolutional neural network is used to process the press-fit curve. After the acquired press-fit data are filtered, a multi-layer one-dimensional convolutional neural network is used to learn the press-fit curve features automatically; the features are then passed to a multi-layer perceptron that finally outputs the keypoint of the curve. We used data from press-fit assembly equipment in the actual production process to train the CNN model, and we used different data from the same equipment to evaluate the detection performance. Compared with existing research results, the detection performance was significantly improved. This method can provide a reliable basis for the judgment of press-fit quality.
Keywords: keypoint detection, curve feature, convolutional neural network, press-fit assembly
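A minimal sketch of the kind of architecture described (a multi-layer one-dimensional CNN followed by a multi-layer perceptron that regresses a keypoint position from a fixed-length press-fit curve) is given below. The layer sizes, curve length and single-keypoint output are illustrative assumptions, not the paper's exact network.

```python
import torch
import torch.nn as nn

class PressFitKeypointNet(nn.Module):
    """1-D CNN feature extractor plus an MLP head that outputs one keypoint
    position, expressed as a fraction of the curve length."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, x):              # x: (batch, 1, curve_length)
        return self.head(self.features(x))

model = PressFitKeypointNet()
curves = torch.randn(8, 1, 512)        # a batch of dummy filtered press-fit curves
print(model(curves).shape)             # torch.Size([8, 1])
```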
Procedia PDF Downloads 226
4912 A Phytochemical and Biological Study of Viscum schemperi Engl. Growing in Saudi Arabia
Authors: Manea A. I. Alqrad, Alaa Sirwi, Sabrin R. M. Ibrahim, Hossam M. Abdallah, Gamal A. Mohamed
Abstract:
Phytochemical study of the methanolic extract of the air-dried powdered parts of Viscum schemperi Engl. (Family: Viscaceae) using different chromatographic techniques led to the isolation of five compounds: -amyrenone (1), betulinic acid (2), (3β)-olean-12-ene-3,23-diol (3), -oleanolic acid (4), and α-oleanolic acid (5). Their structures were established based on physical, chemical, and spectral data. The anti-inflammatory and anti-apoptotic activities of oleanolic acid in a mouse model of acute hepatorenal damage were assessed. This study showed the efficacy of oleanolic acid in counteracting thioacetamide-induced hepatic and kidney injury in mice through the reduction of hepatocyte oxidative damage and the suppression of inflammation and apoptosis. More importantly, oleanolic acid suppressed thioacetamide-induced hepatic and kidney injury by inhibiting NF-κB/TNF-α-mediated inflammation/apoptosis and enhancing the SIRT1/Nrf2/heme-oxygenase signalling pathway. These promising pharmacological activities suggest the potential use of oleanolic acid against hepatorenal damage.
Keywords: oleanolic acid, viscum schimperi, thioacetamide, SIRT1/Nrf2/NF-κB, hepatorenal damage
Procedia PDF Downloads 97
4911 Black-Box-Optimization Approach for High Precision Multi-Axes Forward-Feed Design
Authors: Sebastian Kehne, Alexander Epple, Werner Herfs
Abstract:
A new method for the optimal selection of components for multi-axes forward-feed drive systems is proposed, in which the choice of motors, gear boxes and ball screw drives is optimized. Essential here is the synchronization of the electrical and mechanical frequency behavior of all axes, because even advanced controls (like H∞-controls) can only control a small part of the mechanical modes - namely only those of observable and controllable states whose values can be derived from the positions of external linear length measurement systems and/or rotary encoders on the motor or gear box shafts. Further problems are the unknown process forces, like cutting forces in machine tools during normal operation, which make estimation and control via an observer even more difficult. To start with, the open-source Modelica Feed Drive Library, which was developed at the Laboratory for Machine Tools and Production Engineering (WZL), is extended from single-axis design to multi-axes design. It is capable of simulating the mechanical, electrical and thermal behavior of permanent magnet synchronous machines with inverters, different gear boxes and ball screw drives in a mechanical system. To keep the calculation time down, analytical equations are used for the field- and torque-producing equivalent circuits, heat dissipation and mechanical torque at the shaft. As a first step, a small machine tool with a working area of 635 x 315 x 420 mm is taken apart, and the mechanical transfer behavior is measured with an impulse hammer and acceleration sensors. From the frequency transfer functions, a mechanical finite element model is built up, which is reduced with substructure coupling to a mass-damper system that models the most important modes of the axes. The system is modelled with the Modelica Feed Drive Library and validated by further relative measurements between machine table and spindle holder with a piezo actuator and acceleration sensors. In a next step, the choice of possible components in motor catalogues is limited by derived analytical formulas which are based on well-known metrics for the effective power and torque of the components. The simulation in Modelica is run with different permanent magnet synchronous motors, gear boxes and ball screw drives from different suppliers. To speed up the optimization, different black-box optimization methods (surrogate-based, gradient-based and evolutionary) are tested on the case. The objective chosen is to minimize the integral of the deviations when a step is given on the position controls of the different axes; small values are good measures for highly dynamic axes. In each iteration (evaluation of one set of components), the control variables are adjusted automatically to keep the overshoot below 1%. It is found that the order of the components in the optimization problem has a deep impact on the speed of the black-box optimization. An approach for efficient black-box optimization for multi-axes design is presented in the last part. The authors would like to thank the German Research Foundation DFG for financial support of the project “Optimierung des mechatronischen Entwurfs von mehrachsigen Antriebssystemen (HE 5386/14-1 | 6954/4-1)” (English: Optimization of the Mechatronic Design of Multi-Axes Drive Systems).
Keywords: ball screw drive design, discrete optimization, forward feed drives, gear box design, linear drives, machine tools, motor design, multi-axes design
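The structure of such a component-selection loop can be sketched as follows. The simulate_axis function is only a stub standing in for the expensive Modelica simulation (which would retune the controller so overshoot stays below 1% and return the integrated step-response deviation), and plain random search over the catalogue is used merely to show the evaluation loop; the paper itself compares surrogate-based, gradient-based and evolutionary optimizers. Catalogue entries and the evaluation budget are assumptions.

```python
import itertools
import random

# Placeholder component catalogues (real catalogues would hold supplier data sheets).
motors     = ["M1", "M2", "M3"]
gearboxes  = ["G1", "G2"]
ballscrews = ["B1", "B2", "B3"]

def simulate_axis(combo):
    """Stub for the black-box objective: in the real workflow this would run the
    Modelica feed-drive model and return the integrated step-response deviation."""
    random.seed(hash(combo) % 2**32)          # deterministic dummy value per combination
    return random.uniform(0.1, 1.0)

candidates = list(itertools.product(motors, gearboxes, ballscrews))
budget = 12                                   # allowed number of expensive evaluations
best_combo, best_cost = None, float("inf")
for combo in random.sample(candidates, min(budget, len(candidates))):
    cost = simulate_axis(combo)
    if cost < best_cost:
        best_combo, best_cost = combo, cost
print("best component set:", best_combo, "objective:", round(best_cost, 3))
```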
Procedia PDF Downloads 284
4910 On the Network Packet Loss Tolerance of SVM Based Activity Recognition
Authors: Gamze Uslu, Sebnem Baydere, Alper K. Demir
Abstract:
In this study, the data loss tolerance of a Support Vector Machine (SVM) based activity recognition model and its multi-activity classification performance when data are received over a lossy wireless sensor network are examined. Initially, the classification algorithm we use is evaluated in terms of resilience to random data loss with 3D acceleration sensor data for sitting, lying, walking and standing actions. The results show that the proposed classification method can recognize these activities successfully despite high data loss. Secondly, the effect of differentiated quality of service performance on activity recognition success is measured with activity data acquired from a multi-hop wireless sensor network, which introduces high data loss. The effect of the number of nodes on the reliability and multi-activity classification success is demonstrated in a simulation environment. To the best of our knowledge, the effect of data loss in a wireless sensor network on the activity detection success rate of an SVM-based classification algorithm has not been studied before.
Keywords: activity recognition, support vector machines, acceleration sensor, wireless sensor networks, packet loss
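The first experiment (resilience of an SVM classifier to random data loss) can be sketched as below. The data here are synthetic stand-ins for the 3D acceleration features, and zeroing out dropped values is just one possible way to handle missing samples, so the printed numbers only demonstrate the procedure.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 30))           # stand-in for windowed 3D acceleration features
y = rng.integers(0, 4, size=1000)         # sitting, lying, walking, standing

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)

for loss_rate in (0.0, 0.2, 0.5):         # fraction of feature values randomly lost
    X_lossy = X_te.copy()
    dropped = rng.random(X_lossy.shape) < loss_rate
    X_lossy[dropped] = 0.0                # simple imputation of the lost samples
    acc = accuracy_score(y_te, clf.predict(X_lossy))
    print(f"loss {loss_rate:.0%}: accuracy {acc:.3f}")
```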
Procedia PDF Downloads 475
4909 Surfactant-Free O/W-Emulsion as Drug Delivery System
Authors: M. Kumpugdee-Vollrath, J.-P. Krause, S. Bürk
Abstract:
Most of the drugs used for pharmaceutical purposes are poorly water-soluble. About 40% of all newly discovered drugs are lipophilic, and the number of lipophilic drugs seems to increase more and more. Drug delivery systems such as nanoparticles, micelles or liposomes are applied to improve their solubility and thus their bioavailability. Besides various techniques of solubilization, oil-in-water emulsions are often used to incorporate lipophilic drugs into the oil phase. To stabilize emulsions, surface-active substances (surfactants) are generally used. An alternative method that avoids the application of surfactants was of great interest. One possibility is to develop an O/W emulsion without any addition of surface-active agents, the so-called surfactant-free emulsion (SFE). The aim of this study was to develop and characterize SFE as a drug carrier by varying the production conditions. Lidocaine base was used as a model drug. An injection method was developed. The effects of ultrasound as well as of temperature on the properties of the emulsion were studied. Particle sizes and release were determined. The long-term stability was assessed over 30 days. The results showed that surfactant-free O/W emulsions with pharmaceutical oil as the drug carrier can be produced.
Keywords: emulsion, lidocaine, Miglyol, size, surfactant, light scattering, release, injection, ultrasound, stability
Procedia PDF Downloads 486
4908 Developing an Interpretive Plan for Qubbet El-Hawa North Archaeological Site in Aswan, Egypt
Authors: Osama Amer Mohyeldin Mohamed
Abstract:
Qubbet el-Hawa North (QHN) is an archaeological site in West Aswan. It has not yet been opened to the public and has been under excavation since its discovery in 2013, which resulted from the illegal digging that happened at many sites in Egypt because of the unstable situation and the absence of security. The site has the potential to be one of the most attractive sites in Aswan. Moreover, it deserves to be introduced to visitors in a manner appropriate to its great significance. Interpretation and presentation are crucial, inseparable tools that communicate the archaeological site's significance to the public and raise their awareness. Moreover, they help the public to understand the past and appreciate archaeological assets. People will never learn or see anything from ancient remains unless they are explained; they would only look at them as ancient and charming. They expect a story, and more than knowledge, authenticity, or even supporting preservation actions, they want to enjoy themselves and be entertained. On the other hand, many archaeologists believe that planning an archaeological site for entertaining visitors deteriorates it and affects its authenticity. Thus, it is a challenge to design a model for the visitors' experience that meets their expectations and needs while safeguarding the site's integrity. The article presents a proposal for an interpretation plan for the site of Qubbet el-Hawa North.
Keywords: heritage interpretation and presentation, archaeological site management, Qubbet el-Hawa North, local community engagement, accessibility
Procedia PDF Downloads 27
4907 Comparison of Different k-NN Models for Speed Prediction in an Urban Traffic Network
Authors: Seyoung Kim, Jeongmin Kim, Kwang Ryel Ryu
Abstract:
We consider a database that records average traffic speeds measured at five-minute intervals for all the links in the traffic network of a metropolitan city. While models learned from these data that can predict future traffic speeds would be beneficial for applications such as car navigation systems, building predictive models for every link becomes a nontrivial job if the number of links in a given network is huge. An advantage of adopting k-nearest neighbor (k-NN) as the predictive model is that it does not require any explicit model building. On the other hand, k-NN takes a long time to make a prediction because it needs to search for the k nearest neighbors in the database at prediction time. In this paper, we investigate how much we can speed up k-NN in making traffic speed predictions by reducing the amount of data to be searched, without a significant sacrifice of prediction accuracy. The rationale behind this is that it may suffice to look only at recent data, because traffic patterns not only repeat daily or weekly but also change over time. In our experiments, we build several different k-NN models employing different sets of features, namely the current and past traffic speeds of the target link and of the neighbor links upstream and downstream of it. The performances of these models are compared by measuring the average prediction accuracy and the average time taken to make a prediction using various amounts of data.
Keywords: big data, k-NN, machine learning, traffic speed prediction
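The central idea (searching only a recent slice of the database instead of its full history to speed up k-NN prediction) can be sketched as below with synthetic speed records; the feature layout, window size and k are assumptions rather than the paper's configuration.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(1)
# Each row: current and past speeds of the target link and its up/down-stream neighbors.
X = rng.uniform(10.0, 90.0, size=(50000, 12))
y = X[:, 0] + rng.normal(scale=3.0, size=50000)   # speed in the next interval (synthetic)

def predict_speed(x_query, X_hist, y_hist, window=5000, k=20):
    """Fit k-NN only on the most recent `window` records, trading a little
    accuracy for a much smaller neighbor search at prediction time."""
    knn = KNeighborsRegressor(n_neighbors=k).fit(X_hist[-window:], y_hist[-window:])
    return knn.predict(x_query.reshape(1, -1))[0]

print(predict_speed(X[-1], X[:-1], y[:-1]))
```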
Procedia PDF Downloads 362
4906 The Effect of Energy Consumption and Losses on the Nigerian Manufacturing Sector: Evidence from the ARDL Approach
Authors: Okezie A. Ihugba
Abstract:
The ARDL (2, 2, 2, 2, 0) bounds testing technique for cointegration was used in this study to investigate the effect of energy consumption and energy loss on Nigeria's manufacturing sector from 1981 to 2020. The model was created to determine the relationship between these three variables while also accounting for interactions with control variables such as inflation and commercial bank loans to the manufacturing sector. When the dependent variables are energy consumption and energy loss, the bounds tests show that the variables of interest are bound together in the long run. Because electricity consumption is a critical factor in determining manufacturing value-added in Nigeria, some intriguing observations were made. According to the findings, the relationship between LELC and LMVA is statistically significant, and electricity consumption reduces manufacturing value-added. The target variable (energy loss) is statistically significant and has a positive sign. In Nigeria, a 1% reduction in energy loss increases manufacturing value-added by 36% at the first lag and 35% at the second. According to the study, the government should speed up the ongoing renovation of existing power plants across the country, as well as the construction of new gas-fired power plants. This will address a number of issues, including the overpricing of electricity as a result of grid failure.
Keywords: L60, Q43, H81, C52, E31, ARDL, cointegration, Nigeria's manufacturing
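A minimal sketch of fitting an ARDL model of this kind with statsmodels is shown below, using synthetic annual series in place of the actual Nigerian data; the variable names, lag orders and trend choice are assumptions, and the bounds-testing step itself is not reproduced here.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.ardl import ARDL

rng = np.random.default_rng(0)
n = 40  # annual observations, 1981-2020
df = pd.DataFrame({
    "lmva": np.cumsum(rng.normal(size=n)),   # log manufacturing value-added (synthetic)
    "lelc": np.cumsum(rng.normal(size=n)),   # log electricity consumption (synthetic)
    "loss": np.cumsum(rng.normal(size=n)),   # log energy losses (synthetic)
})

# ARDL with two lags of the dependent variable and of each regressor.
model = ARDL(df["lmva"], lags=2, exog=df[["lelc", "loss"]], order=2, trend="c")
res = model.fit()
print(res.params)
```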
Procedia PDF Downloads 176
4905 Use of Multistage Transition Regression Models for Credit Card Income Prediction
Authors: Denys Osipenko, Jonathan Crook
Abstract:
Because of the variety of cardholders' behaviour types and income sources, each consumer account can move through a variety of states. A consumer account can be inactive, transactor, revolver, delinquent, or defaulted, and each requires an individual model for income prediction. The estimation of transition probabilities between statuses at the account level helps to avoid the memorylessness of the Markov chain approach. This paper investigates transition probability estimation approaches for credit card income prediction at the account level. The key question of the empirical research is which approach gives more accurate results: multinomial logistic regression or multistage conditional logistic regression with a binary target. Both models have shown moderate predictive power. Prediction accuracy for conditional logistic regression depends on the order of the stages in the conditional binary logistic regressions. On the other hand, multinomial logistic regression is easier to use and gives integrated estimates for all states without prioritization. Thus, further investigations can be concentrated on alternative modeling approaches such as discrete choice models.
Keywords: multinomial regression, conditional logistic regression, credit account state, transition probability
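The first of the two compared approaches, a multinomial logistic regression over the account states, can be sketched as below with synthetic features and labels; the feature set and state encoding are assumptions for illustration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
states = ["inactive", "transactor", "revolver", "delinquent", "defaulted"]
X = rng.normal(size=(5000, 8))                 # behavioural features of an account-month
y = rng.integers(0, len(states), size=5000)    # observed next state (synthetic labels)

# With the default lbfgs solver, scikit-learn fits a multinomial logit over all states.
mnl = LogisticRegression(max_iter=1000).fit(X, y)
probs = mnl.predict_proba(X[:1])[0]            # estimated transition probabilities
print({s: round(p, 3) for s, p in zip(states, probs)})
```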
Procedia PDF Downloads 484
4904 Automatic Assignment of Geminate and Epenthetic Vowel for Amharic Text-to-Speech System
Authors: Tadesse Anberbir, Felix Bankole, Tomio Takara, Girma Mamo
Abstract:
In the development of a text-to-speech synthesizer, the automatic derivation of correct pronunciation from the grapheme form of a text is a central problem. Deriving phonological features which are not shown in orthography is particularly challenging. In the Amharic language, geminates and epenthetic vowels are crucial for proper pronunciation, but neither is shown in orthography. In this paper, we propose and integrate a morphological analyzer into an Amharic text-to-speech system, mainly to predict geminate and epenthetic vowel positions, and we prepare a duration modeling method. The Amharic Text-to-Speech system (AmhTTS) is a parametric and rule-based system that adopts a cepstral method, uses a source-filter model for speech production, and employs a Log Magnitude Approximation (LMA) filter as the vocal tract filter. The naturalness of the system after employing the duration modeling was evaluated by a sentence listening test, and we achieved an average Mean Opinion Score (MOS) of 3.4 (68%), which is moderate. By modeling the duration of geminates and controlling the locations of epenthetic vowels, we are able to synthesize good-quality speech. Our system is mainly suitable for being customized for other Ethiopian languages with limited resources.
Keywords: Amharic, gemination, speech synthesis, morphology, epenthesis
Procedia PDF Downloads 85
4903 The Challenges of Business Incubations: A Case of Malaysian Incubators
Authors: Logaiswari Indiran, Zainab Khalifah, Kamariah Ismail
Abstract:
Business incubators have now been recognized as effective tools for providing business assistance to start-up firms. In both developed and developing countries, the number of incubators is growing tremendously. As the birth rate of incubators increases, so do their challenges. Malaysia, as one of the developing countries on the Asian continent, has also established a number of business incubators to breed and foster the growth and survival of start-up firms. Thus, this study discusses the incubation model applied in Malaysia and the challenges faced by these incubators, using secondary data including policies, previous literature, and reports related to Malaysian incubators. The findings of this study call on the government to rethink the key roles of incubator managers and staff, the internal structure of the incubator concept and process, intellectual property management, strategic alliances between universities and industry, and funding support in enhancing the support provided by business incubators in Malaysia. The key challenges highlighted in this study signal important policy lessons for other developing countries that aim to create and map an effective business incubator ecosystem.
Keywords: business incubators, incubation challenges, funding support, incubator managers, internal structure, start-up firms
Procedia PDF Downloads 272
4902 Status Report of the GERDA Phase II Startup
Authors: Valerio D’Andrea
Abstract:
The GERmanium Detector Array (GERDA) experiment, located at the Laboratori Nazionali del Gran Sasso (LNGS) of INFN, searches for the neutrinoless double beta decay (0νββ) of 76Ge. Germanium diodes enriched to ∼86% in the double beta emitter 76Ge (enrGe) are exposed, being both source and detectors of 0νββ decay. Neutrinoless double beta decay is considered a powerful probe to address still-open issues in the neutrino sector of the (beyond) Standard Model of particle physics. Since 2013, just after the completion of the first part of its experimental program (Phase I), the GERDA setup has been upgraded to perform its next step in the 0νββ searches (Phase II). Phase II aims to reach a sensitivity to the 0νββ decay half-life larger than 10^26 yr in about 3 years of physics data taking, exposing a detector mass of about 35 kg of enrGe with a background index of about 10^-3 cts/(keV·kg·yr). One of the main new implementations is the liquid argon scintillation light read-out, to veto those events that deposit only part of their energy in the Ge and part in the surrounding LAr. In this paper, the expected goals of GERDA Phase II, the upgrade work, and a few selected features from the 2015 commissioning and 2016 calibration runs will be presented. The main Phase I achievements will also be reviewed.
Keywords: GERDA, double beta decay, LNGS, germanium
Procedia PDF Downloads 367
4901 Application of Machine Learning Models to Predict Couchsurfers on Free Homestay Platform Couchsurfing
Authors: Yuanxiang Miao
Abstract:
Couchsurfing is a free homestay and social networking service accessible via its website and mobile app. Couchsurfers can directly request free accommodation from others and receive offers from each other. However, it is typically difficult for people to decide whether to accept or decline a request when they receive it from a Couchsurfer, because they do not know each other at all. People expect to meet Couchsurfers who are kind, generous, and interesting, while it is unavoidable to sometimes meet someone unfriendly. This paper utilized machine learning classification algorithms to help people distinguish good Couchsurfers from not-good Couchsurfers on the Couchsurfing website. By using prior information, such as Couchsurfers' profiles, their latest references, and other factors, it becomes possible to recognize what kind of Couchsurfer a requester is and, furthermore, to help people decide whether or not to host them. The value of this research lies in a case study in Kyoto, Japan, where the author hosted 54 Couchsurfers, collected relevant data from them, and finally built a model based on classification algorithms for people to predict Couchsurfers. Lastly, the author offers some feasible suggestions for future research.
Keywords: Couchsurfing, Couchsurfers prediction, classification algorithm, hospitality tourism platform, hospitality sciences, machine learning
Procedia PDF Downloads 131
4900 Determination of Safety Distance Around Gas Pipelines Using Numerical Methods
Authors: Omid Adibi, Nategheh Najafpour, Bijan Farhanieh, Hossein Afshin
Abstract:
Energy transmission pipelines are among the most vital infrastructures of any country, and several strict regulations have been enacted to enhance the safety of these lines and their vicinity. One of these regulations is the safety distance around high-pressure gas pipelines. The safety distance refers to the minimum distance from the pipeline at which people and equipment are not exposed to serious damage. In the present study, safety distances around high-pressure gas transmission pipelines were determined using numerical methods. For this purpose, gas leakage from a cracked pipeline and the resulting jet fires were simulated as continuously ignited, three-dimensional, unsteady, turbulent cases. The numerical simulations were based on the finite volume method, and flow turbulence was modeled using the k-ω SST model. The combustion of the natural gas and air mixture was modeled using the eddy dissipation method. The results show that, due to the high pressure difference between the pipeline and the environment, the flow chokes at the crack and the velocity of the escaping gas reaches the speed of sound. Analysis of the incident radiation results shows that the safety distances around a 42-inch high-pressure natural gas pipeline based on the 5 and 15 kW/m2 criteria are 205 and 272 meters, respectively.
Keywords: gas pipelines, incident radiation, numerical simulation, safety distance
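For orientation, the kind of screening estimate that relates incident radiation to distance can be sketched with a simple point-source model. This is not the CFD approach used in the study, and the heat-release rate and radiative fraction below are assumed placeholder values, so the resulting distances are purely illustrative of the calculation, not of the paper's reported 205 m and 272 m.

```python
import math

def distance_for_threshold(q_total_kw, threshold_kw_m2, radiative_fraction=0.2):
    """Point-source model: incident radiation q = f * Q / (4 * pi * d**2);
    solve for the distance d at which q equals the chosen criterion."""
    return math.sqrt(radiative_fraction * q_total_kw / (4.0 * math.pi * threshold_kw_m2))

Q = 2.0e6  # assumed total heat-release rate of the jet fire in kW (placeholder value)
for criterion in (5.0, 15.0):
    d = distance_for_threshold(Q, criterion)
    print(f"{criterion:>4} kW/m2 criterion -> about {d:.0f} m")
```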
Procedia PDF Downloads 330
4899 Packet Fragmentation Caused by Encryption and Using It as a Security Method
Authors: Said Rabah Azzam, Andrew Graham
Abstract:
This paper examines the fragmentation of packets caused by encryption applied at the network layer of the OSI model in Internet Protocol version 4 (IPv4) networks, as well as the possibility of using fragmentation and Access Control Lists (ACLs) as a method of restricting network access to certain hosts or areas of a network. Using default settings, fragmentation is expected to occur and each fragment to be reassembled at the other end. If this does not occur, then a high number of ICMP messages should be generated back towards the source host, indicating that the packet is too large and that it needs to be made smaller. This result is also expected when the MTU is changed for certain links between devices. When using ACLs and packet fragments to restrict access to hosts or network segments, it is possible that ACLs cannot be set up in this way. If ACLs cannot be set up to allow only fragments, then it is a limitation of the hardware's firmware holding back this particular method. If the ACL on the restricted switch can be set up in such a way as to allow only fragments, then a connection that forces packets to fragment should be allowed to pass through the ACL. This should then establish a network connection to the destination machine, allowing data to be sent to and from it. ICMP messages from the restricted-access switch and host should also be blocked from being sent back across the link, which will be shown in an SSH session into the switch.
Keywords: fragmentation, encryption, security, switch
Procedia PDF Downloads 332
4898 Influence of the Compression Force and Powder Particle Size on Some Physical Properties of Date (Phoenix dactylifera) Tablets
Authors: Djemaa Megdoud, Messaoud Boudaa, Fatima Ouamrane, Salem Benamara
Abstract:
In recent years, the compression of date (Phoenix dactylifera L.) fruit powders (DP) to obtain date tablets (DT) has been suggested as a promising way of valorizing non-commercial but valuable date fruit (DF) varieties. To further improve and characterize DT, the present study investigates the influence of the DP particle size and the compression force on some physical properties of DT. The results show that, independently of particle size, the hardness (y) of the tablets increases with the compression force (x) following a logarithmic law (y = a ln(bx), where a and b are model constants). Further, a two-level full factorial design (FFD), applied to investigate the erosion percentage, reveals that the effects of time and particle size are equal in absolute value and exceed the effect of compression. Regarding the disintegration time, the results, also obtained by means of an FFD, show that the effect of the compression force is more than four times that of the DP particle size. Finally, the color parameters of the DT in the CIELab system, measured immediately after production, are influenced differently by the particle size of the initial powder.
Keywords: powder, tablets, date (Phoenix dactylifera L.), hardness, erosion, disintegration time, color
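Fitting the reported hardness law y = a ln(bx) to measurements is a short curve fit, sketched below; the force and hardness values are invented placeholders, so only the fitting procedure, not the numbers, reflects the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def hardness_model(force, a, b):
    return a * np.log(b * force)          # y = a ln(bx)

# Placeholder data: compression forces and measured tablet hardness values.
force = np.array([5.0, 10.0, 15.0, 20.0, 25.0, 30.0])
hard = np.array([40.0, 62.0, 75.0, 84.0, 91.0, 97.0])

(a, b), _ = curve_fit(hardness_model, force, hard, p0=(30.0, 1.0))
print(f"fitted constants: a = {a:.2f}, b = {b:.3f}")
```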
Procedia PDF Downloads 429
4897 Exploratory Characterization of Antibacterial Efficacy of Synthesized Nanoparticles on Staphylococcus Isolates from Hospital Specimens in Saudi Arabia
Authors: Reham K. Sebaih, Afaf I. Shehata , Awatif A. Hindi, Tarek Gheith, Amal A. Hazzani Anas Al-Orjan
Abstract:
Staphylococcus spp. are ubiquitous gram-positive bacteria often associated with infections, especially nosocomial infections, and with antibiotic resistance. The study of pathogenic bacteria and their use as tools in nanobiology and molecular genetics research is among the latest research trends for the modern characterization and identification of multiresistant bacteria, including staphylococci. Staphylococci are widespread all over the world, and particularly in Saudi Arabia. The present work was conducted to evaluate the effect of five different types of nanoparticles (biosynthesized zinc oxide, and spherical and rod-shaped silver and gold nanoparticles) and their antibacterial impact on Staphylococcus species. Ninety-six isolates of Staphylococcus species (Staphylococcus aureus, Staphylococcus epidermidis, and MRSA) were collected from different sources during the period between March 2011G and June 2011G. All isolates came from inpatient and outpatient departments at the Royal Commission Hospital in Yanbu Industrial, Saudi Arabia. A higher percentage of isolates came from males (55%) than from females (45%). Among males, the percentages were 47% for Staphylococcus epidermidis, 28% for Staphylococcus aureus, and 25% for methicillin-resistant Staphylococcus aureus (MRSA). Among females, Staphylococcus aureus had the highest percentage (47%), followed by MRSA (30%) and Staphylococcus epidermidis (23%). Staphylococcus aureus was most frequently isolated from wound swabs (51.42%), followed by vaginal swabs (25.71%). Staphylococcus epidermidis was found at higher percentages in blood (37.14%) and wound swabs (34.21%) than in other sources. The highest percentage of MRSA (80.77%) was isolated from wound swabs, while 19.23% came from the nostrils. Staphylococcus species were isolated in the highest percentage from the hospital emergency department, with Staphylococcus aureus at 59.37%, MRSA at 28.13%, and Staphylococcus epidermidis at 12.5%. To evaluate the antibacterial properties of zinc oxide, silver, and gold nanoparticles as alternatives to conventional antibacterial agents, the Staphylococcus isolates from hospital sources were screened against them. All isolates of Staphylococcus species were sensitive to the gold and silver nanorods, while zinc oxide nanoparticles gave a sensitivity impact ranging between 52% and 48%. The gold and silver spherical nanoparticles did not show any effect on Staphylococcus species. Zinc oxide nanoparticles had a bactericidal impact on 25% and a bacteriostatic impact on 75% of Staphylococcus species. To detect the association of nanoparticles with the Staphylococcus isolates, imaging by scanning electron microscope (SEM) of some isolates showing a bacteriostatic response to zinc oxide nanoparticles (Staphylococcus aureus, Staphylococcus epidermidis, and MRSA) showed overlapping bacterial cells with reduced numbers and the appearance of appendages with deformities in external shape. Molecular analysis was applied by multiplex polymerase chain reaction (PCR) for the identification of genes within staphylococcal pathogens. A multiplex PCR method was developed using six primer pairs to detect different genes, using 50 bp and 100 bp DNA ladder markers.
Molecular gene typing ranged between 93 bp and 326 bp for Staphylococcus aureus and MRSA using TSST-1, mecA, femA and eta, while the band range was from 546 bp to 682 bp for Staphylococcus epidermidis using icaAB and atlE. Sixteen isolates of Staphylococcus aureus and MRSA were positive for the femA gene at 132 bp, which allowed the use of this gene as an internal positive control. Fifteen isolates of Staphylococcus aureus and MRSA were positive for the mecA gene at 163 bp; this gene is responsible for methicillin resistance. Two isolates of Staphylococcus aureus and MRSA were positive for the TSST-1 gene at 326 bp, which is responsible for toxic shock syndrome in some Staphylococcus species. None were positive for the eta gene at 102 bp, which is responsible for exfoliative toxins. Six isolates of Staphylococcus epidermidis were positive for the atlE gene at 682 bp, which is responsible for initial adherence, and three isolates of Staphylococcus epidermidis were positive for the icaAB gene at 546 bp, which is responsible for mediating biofilm formation. In conclusion, this study demonstrates the ability of gene detection to discriminate between infecting Staphylococcus strains; considered as biological tests, these assays may strengthen the clinical criteria used for the diagnosis of septicemia or catheter-related infections.
Keywords: multiplex polymerase chain reaction, toxic shock syndrome, Staphylococcus aureus, nosocomial infections
Procedia PDF Downloads 336
4896 Determining the Width and Depths of Cut in Milling on the Basis of a Multi-Dexel Model
Authors: Jens Friedrich, Matthias A. Gebele, Armin Lechler, Alexander Verl
Abstract:
Chatter vibrations and process instabilities are the most important factors limiting the productivity of the milling process. Chatter can lead to damage of the tool, the part or the machine tool. Therefore, the estimation and prediction of process stability are very important. The process stability depends on the spindle speed, the depth of cut and the width of cut. In milling, the process conditions are defined in the NC program. While the spindle speed is directly coded in the NC program, the depth and width of cut are unknown. This paper presents a new simulation-based approach for the prediction of the depth and width of cut of a milling process. The prediction is based on a material removal simulation with an analytically represented tool shape and a multi-dexel approach for the workpiece. The new calculation method allows the direct estimation of the depth and width of cut, which are the influencing parameters of process stability, instead of the removed volume as existing approaches do. This knowledge can be used to predict the stability of new, unknown parts. Moreover, with an additional vibration sensor, the stability lobe diagram of a milling process can be estimated and improved based on the estimated depth and width of cut.
Keywords: dexel, process stability, material removal, milling
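A strongly simplified, single-direction dexel (height-field) sketch of how an engaged depth and width of cut could be read off a discretized workpiece at one tool position is given below; the paper's multi-dexel representation and its exact engagement definitions are not reproduced, and all dimensions, the flat-end tool and the width measure are assumptions.

```python
import numpy as np

cell = 0.5                                  # grid resolution in mm
heights = np.full((200, 200), 40.0)         # stock of 100 x 100 x 40 mm as a height field

def mill_step(heights, center, radius, z_bottom):
    """Remove material under a flat-end tool at one path point and report the
    axial depth of cut (largest height removed) and a simple radial width measure."""
    xs = (np.arange(heights.shape[0]) - center[0]) * cell
    ys = (np.arange(heights.shape[1]) - center[1]) * cell
    inside = xs[:, None] ** 2 + ys[None, :] ** 2 <= radius ** 2
    engaged = inside & (heights > z_bottom)
    depth = float((heights[engaged] - z_bottom).max()) if engaged.any() else 0.0
    width = engaged.any(axis=0).sum() * cell   # engaged extent across the feed direction
    heights[engaged] = z_bottom                # update the height field (material removal)
    return depth, width

print(mill_step(heights, center=(100, 100), radius=5.0, z_bottom=37.0))
```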
Procedia PDF Downloads 524
4895 Domain Adaptation Save Lives - Drowning Detection in Swimming Pool Scene Based on YOLOV8 Improved by Gaussian Poisson Generative Adversarial Network Augmentation
Authors: Simiao Ren, En Wei
Abstract:
Drowning is a significant safety issue worldwide, and a robust computer vision-based alert system can easily prevent such tragedies in swimming pools. However, due to the domain shift caused by the visual gap (potentially due to lighting, indoor scene changes, pool floor color, etc.) between the training swimming pool and the test swimming pool, the robustness of such algorithms has been questionable. The annotation cost of labeling each new swimming pool is too high for mass adoption of such a technique. To address this issue, we propose a domain-aware data augmentation pipeline based on the Gaussian Poisson Generative Adversarial Network (GP-GAN). Combined with YOLOv8, we demonstrate that such a domain adaptation technique can significantly improve model performance (from 0.24 mAP to 0.82 mAP) on new test scenes. As the augmentation method only requires background imagery from the new domain (no annotation needed), we believe this is a promising, practical route for preventing swimming pool drowning.
Keywords: computer vision, deep learning, YOLOv8, detection, swimming pool, drowning, domain adaptation, generative adversarial network, GAN, GP-GAN
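Once GP-GAN-blended images of the target pool have been generated (with labels carried over by construction), fine-tuning a detector on them is straightforward; a minimal sketch using the ultralytics package is shown below, where the dataset YAML path, model size and epoch count are placeholders rather than the paper's settings.

```python
from ultralytics import YOLO

# Assumed dataset YAML pointing to source-pool images plus GP-GAN-augmented images
# of the target pool; "pool_drowning.yaml" is a placeholder file name.
model = YOLO("yolov8n.pt")                       # start from a pretrained checkpoint
model.train(data="pool_drowning.yaml", epochs=50, imgsz=640)
metrics = model.val()                            # evaluate on the held-out target pool
print(metrics.box.map)                           # mAP averaged over IoU 0.5-0.95
```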
Procedia PDF Downloads 98
4894 Earnings Management and Firm’s Creditworthiness
Authors: Maria A. Murtiati, Ancella A. Hermawan
Abstract:
The objective of this study is to examine whether a firm's eligibility for a bank loan is influenced by earnings management. Earnings management is divided into accruals-based and real earnings management. Hypothesis testing is carried out with a logistic regression model using a sample of 285 companies listed on the Indonesian Stock Exchange in 2010. The results provide evidence that a greater magnitude of accruals earnings management increases the firm's probability of being eligible for a bank loan. In contrast, real earnings management through abnormal cash flows and abnormal discretionary expenses decreases the firm's probability of being eligible for a bank loan, while real earnings management through abnormal production costs increases this probability. The results of this study suggest that, if earnings management is assumed to serve an opportunistic purpose, accruals-based earnings management can distort banks' credit analysis based on financial statements. Real earnings management has more impact on cash flows, and banks are very concerned about a firm's cash flow ability. Therefore, this study indicates that banks are better able to detect real earnings management, except for abnormal production costs.
Keywords: discretionary accruals, real earnings management, bank loan, creditworthiness
Procedia PDF Downloads 345
4893 The Development of Monk’s Food Bowl Production on Occupational Health Safety and Environment at Work for the Strength of Rattanakosin Local Wisdom
Authors: Thammarak Srimarut, Witthaya Mekhum
Abstract:
This study analysed and developed a model for monk's food bowl production with respect to occupational health, safety and environment at work, in support of Rattanakosin local wisdom in the Banbart Community. The blowpipe welding process required to produce the bowl was very dangerous, with a risk level of 93.59%. After the adoption of a new sitting posture, the work risk was lowered to 48.41%, a moderate risk. In detail, it was found that: 1) the traditional sitting posture created a work risk of 88.89%, while the new sitting posture created a work risk of 58.86%; 2) regarding environmental pollution, with the traditional sitting posture workers were exposed to polluted fumes from welding at 61.11%, while with the new sitting posture the exposure was 40.47%; 3) regarding accident risk, with the traditional sitting posture workers were exposed to accidents from welding at 94.44%, while with the new sitting posture the exposure was 62.54%.
Keywords: occupational health safety, environment at work, Monk’s food bowl, machine intelligence
Procedia PDF Downloads 435
4892 A Study of the Adaptive Reuse for School Land Use Strategy: An Application of the Analytic Network Process and Big Data
Authors: Wann-Ming Wey
Abstract:
With today's popularity and progress of information technology, big data sets and their analysis are no longer a major conundrum. We can now not only use relevant big data to analyze and simulate the possible status of urban development in the near future, but also provide a more comprehensive and reasonable basis for policy implementation to government units and decision-makers via the results of such analysis and simulation. In this research, we take Taipei City as the research scope and use relevant big data variables (e.g., population, facility utilization and related social policy ratings) and the Analytic Network Process (ANP) approach to carry out an in-depth study of the possible reduction of land use by primary and secondary schools in Taipei City. In addition to enhancing urban activities through better utilization of public facilities, the final results of this research could help improve the efficiency of urban land use in the future. Furthermore, the assessment model and research framework established in this research also provide a good reference for future land use and adaptive reuse strategies for schools and other public facilities.
Keywords: adaptive reuse, analytic network process, big data, land use strategy
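The priority-derivation step of the ANP can be illustrated with a minimal numeric sketch: a weighted (column-stochastic) supermatrix is raised to successive powers until it converges, and the columns of the limit matrix give the priorities. The 4x4 matrix below is an arbitrary illustrative example, not the study's actual network of land-use criteria.

```python
import numpy as np

# Arbitrary illustrative weighted supermatrix (each column sums to 1).
W = np.array([
    [0.00, 0.40, 0.30, 0.25],
    [0.30, 0.00, 0.40, 0.25],
    [0.40, 0.30, 0.00, 0.50],
    [0.30, 0.30, 0.30, 0.00],
])

def limit_supermatrix(W, tol=1e-9, max_iter=10_000):
    """Raise the weighted supermatrix to successive powers until it converges;
    the columns of the limit matrix give the ANP priorities."""
    M = W.copy()
    for _ in range(max_iter):
        nxt = M @ W
        if np.max(np.abs(nxt - M)) < tol:
            return nxt
        M = nxt
    return M

print(limit_supermatrix(W)[:, 0].round(3))   # limiting priorities of the elements
```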
Procedia PDF Downloads 203
4891 Investigating the Contemporary Architecture Education Challenges in India
Authors: Vriddhi Prasad
Abstract:
The paper briefly outlines the nature of contemporary architecture education in India and its present challenges, along with theoretically feasible solutions. It explores in detail the arduous position of architecture education owing to the privatization of higher education institutes in India and the ever-changing demands of the technology-driven industry and discipline, along with regional and cultural resources that should be explored academically for the enrichment of graduates. With the government's education policy of supporting privatization, a comprehensive role for the regulating body of architecture education becomes imperative. The paper provides key insights, through empirical research, into the nature of these roles and the areas which need attention in light of the problems. With the aid of critically acclaimed education models like Design Build, contextual retrofits for Indian institutes can be stressed for inclusion in the curriculum. The pairing of a private institute with a public industry/research body, and vice versa, can lead to a pro-economic and pro-social research environment. These reforms, if stressed by an autonomous nationwide regulating body rather than the state, will lead to uniformity and flexibility of the curriculum, which promotes the creation of fresh graduates who are adaptable to changing needs.
Keywords: architecture education, building information modelling, design build, pedagogy
Procedia PDF Downloads 224