Search results for: single machine scheduling

2177 Eosinopenia: Marker for Early Diagnosis of Enteric Fever

Authors: Swati Kapoor, Rajeev Upreti, Monica Mahajan, Abhaya Indrayan, Dinesh Srivastava

Abstract:

Enteric fever is caused by the Gram-negative bacilli Salmonella Typhi and Salmonella Paratyphi. It is associated with high morbidity and mortality worldwide. Timely initiation of treatment is a crucial step for the prevention of complications. Cultures of body fluids are diagnostic, but not always conclusive or practically feasible in most centers; moreover, waiting for culture results delays the initiation of treatment. Serological tests lack diagnostic value. Blood counts can offer a promising option for diagnosis. A retrospective study of the relevance of leucopenia and eosinopenia was conducted on 203 culture-proven enteric fever patients and 159 culture-proven non-enteric fever patients in a tertiary care hospital in New Delhi. The patient details were retrieved from the electronic medical records section of the hospital. Absolute eosinopenia was defined as an absolute eosinophil count (AEC) of less than 40/mm³ (normal range: 40-400/mm³) measured on an LH-750 Beckman Coulter automated machine. Leucopenia was defined as a total leucocyte count (TLC) of less than 4 × 10⁹/l. Blood cultures were performed using the BacT/ALERT FA Plus automated blood culture system before the first antibiotic dose was given. Case and control groups were compared using the Pearson chi-square test. It was observed that an absolute eosinophil count (AEC) of 0-19/mm³ was a significant finding (p < 0.001) in enteric fever patients, whereas leucopenia was not (p = 0.096). Using Receiver Operating Characteristic (ROC) curves, it was observed that patients with both AEC < 14/mm³ and TLC < 8 × 10⁹/l had a 95.6% chance of being diagnosed with enteric fever and only a 4.4% chance of being diagnosed as non-enteric fever. This result was highly significant (p < 0.001). This association of AEC and TLC found in the enteric fever patients of this study can be used for the early initiation of treatment in clinically suspected enteric fever patients.
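
As an illustration of the statistics described above, the following minimal Python sketch (using scipy and scikit-learn) shows how a Pearson chi-square test and an ROC-style joint AEC/TLC threshold rule could be computed. The patient data here are synthetic placeholders, not the study's records, and the distributions are assumptions.

# Illustrative sketch with synthetic data: chi-square test for absolute eosinopenia
# and evaluation of the joint rule AEC < 14/mm3 and TLC < 8 x 10^9/l.
import numpy as np
from scipy.stats import chi2_contingency
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y = np.concatenate([np.ones(203), np.zeros(159)]).astype(int)   # 1 = enteric fever
aec = np.where(y == 1, rng.gamma(1.5, 12.0, y.size), rng.gamma(4.0, 40.0, y.size))  # cells/mm3
tlc = np.where(y == 1, rng.normal(6.0, 2.0, y.size), rng.normal(9.0, 3.0, y.size))  # x10^9/l

# Pearson chi-square: AEC 0-19/mm3 versus diagnosis
table = np.array([[np.sum((aec < 20) & (y == 1)), np.sum((aec < 20) & (y == 0))],
                  [np.sum((aec >= 20) & (y == 1)), np.sum((aec >= 20) & (y == 0))]])
chi2, p, _, _ = chi2_contingency(table)
print(f"chi-square p-value for AEC 0-19/mm3: {p:.4f}")

# ROC discrimination of AEC alone (lower AEC -> more likely enteric fever)
print("AUC (AEC):", roc_auc_score(y, -aec))

# Proportion of joint-rule positives that are culture-proven enteric fever
joint = (aec < 14) & (tlc < 8)
print("PPV of AEC < 14 and TLC < 8:", joint[y == 1].sum() / max(joint.sum(), 1))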

Keywords: absolute eosinopenia, absolute eosinophil count, enteric fever, leucopenia, total leucocyte count

Procedia PDF Downloads 162
2176 Nucleophile Mediated Addition-Fragmentation Generation of Aryl Radicals from Aryl Diazonium Salts

Authors: Elene Tatunashvili, Bun Chan, Philippe E. Nashar, Christopher S. P. McErlean

Abstract:

The reduction of aryl diazonium salts is one of the most efficient ways to generate aryl radicals for use in a wide range of transformations, including Sandmeyer-type reactions, Meerwein arylations of olefins and Gomberg-Bachmann-Hey arylations of heteroaromatic systems. The aryl diazonium species can be reduced electrochemically, by UV irradiation, by inner-sphere and outer-sphere single electron transfer (SET) processes from metal salts, by SET from photo-excited organic catalysts, or by fragmentation of adducts with weak bases (acetate, hydroxide, etc.). This paper details an approach for the metal-free reduction of aryl diazonium salts, which facilitates the efficient synthesis of various aromatic compounds under exceedingly mild reaction conditions. By measuring the oxidation potentials of a number of organic molecules, a series of nucleophiles was identified that reduce aryl diazonium salts via the addition-fragmentation mechanism. This approach leads to unprecedented operational simplicity: the reactions are very rapid and proceed in the open air; there is no need for external irradiation or heating, and the process is compatible with a large number of radical reactions. We illustrate these advantages by using the addition-fragmentation strategy to regioselectively arylate a series of heterocyclic compounds, to synthesize ketones by arylation of silyl enol ethers, and to synthesize benzothiophene and phenanthrene derivatives by radical annulation reactions.

Keywords: diazonium salts, Hantzsch esters, oxygen, radical reactions, synthetic methods

Procedia PDF Downloads 139
2175 Effect of Fines on Liquefaction Susceptibility of Sandy Soil

Authors: Ayad Salih Sabbar, Amin Chegenizadeh, Hamid Nikraz

Abstract:

Investigation of the liquefaction susceptibility of materials used in embankments, slopes, dams, and foundations is essential. Many catastrophic geo-hazards, such as flow slides, failure of foundations, and damage to earth structures, are associated with the static liquefaction that may occur during abrupt shearing of these materials. Many artificial backfill materials are mixtures of sand with fines and other constituents. In order to provide some clarification and evaluation of the role of fines in the static liquefaction behaviour of sandy soils, the effect of fines on the liquefaction susceptibility of sand was experimentally examined in the present work over a range of fines content, relative density, and initial confining pressure. The results of an experimental study on various sand-fines mixtures are presented. Undrained static triaxial compression tests were conducted on saturated Perth sand containing 5% bentonite at three different relative densities (10, 50, and 90%), and on saturated Perth sand containing both 5% bentonite and slag (2%, 4%, and 6%) at a single relative density of 10%. Undrained static triaxial tests were performed at three different initial confining pressures (100, 150, and 200 kPa). The brittleness index was used to quantify the liquefaction potential of the sand-bentonite-slag mixtures. The results demonstrated that the liquefaction susceptibility of the sand-5% bentonite mixture was higher than that of clean sand. However, the liquefaction potential decreased when both fines (bentonite and slag) were used. The liquefaction susceptibility of all mixtures decreased with increasing relative density and initial confining pressure.
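
The abstract does not spell out the brittleness index formula; a commonly used definition in static liquefaction studies (based on peak and residual undrained shear strength) is sketched below, with illustrative strength values rather than the measured ones.

# Brittleness index IB = (Su,peak - Su,residual) / Su,peak  (assumed common definition)
def brittleness_index(su_peak, su_residual):
    """IB close to 1 indicates a brittle, liquefaction-prone response; IB close to 0
    indicates a ductile response with little post-peak strength loss."""
    return (su_peak - su_residual) / su_peak

print(brittleness_index(su_peak=45.0, su_residual=12.0))   # hypothetical test, in kPa -> ~0.73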

Keywords: liquefaction, bentonite, slag, brittleness index

Procedia PDF Downloads 208
2174 Synthesis, Structural, Spectroscopic and Nonlinear Optical Properties of New Picolinate Complex of Manganese (II) Ion

Authors: Ömer Tamer, Davut Avcı, Yusuf Atalay

Abstract:

A novel picolinate complex of the manganese(II) ion, [Mn(pic)2] [pic: picolinate or 2-pyridinecarboxylate], was prepared and fully characterized by single-crystal X-ray structure determination. The manganese(II) complex was characterized by FT-IR, FT-Raman and UV-Vis spectroscopic techniques. The C=O, C=N and C=C stretching vibrations were found to be strong and simultaneously active in the IR and Raman spectra. In order to support these experimental techniques, density functional theory (DFT) calculations were performed with Gaussian 09W. Although supramolecular interactions have some influence on the molecular geometry in the solid-state phase, the calculated data show that the predicted geometries can reproduce the structural parameters. The molecular modeling and the calculations of the IR, Raman and UV-Vis spectra were performed at the DFT level. The nonlinear optical (NLO) properties of the synthesized complex were evaluated by determining the dipole moment (µ), polarizability (α) and hyperpolarizability (β). The obtained results demonstrated that the manganese(II) complex is a good candidate for an NLO material. The stability of the molecule arising from hyperconjugative interactions and charge delocalization was analyzed using natural bond orbital (NBO) analysis. The highest occupied and lowest unoccupied molecular orbitals (HOMO and LUMO), also known as the frontier molecular orbitals, were simulated, and the obtained energy gap confirmed that charge transfer occurs within the manganese(II) complex. The molecular electrostatic potential (MEP) for the synthesized manganese(II) complex displays the electrophilic and nucleophilic regions: the most negative region is located over the carboxyl O atoms, while the positive region is located over the H atoms.
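
The NLO descriptors named above (µ, α, β) are normally assembled from the Cartesian tensor components printed by a DFT code such as Gaussian; the short sketch below uses the standard expressions, with placeholder components rather than the values computed for the Mn(II) complex.

import numpy as np

def total_dipole(mu):                       # mu = (mu_x, mu_y, mu_z), e.g. in Debye
    return np.linalg.norm(mu)

def mean_polarizability(alpha):             # alpha = 3x3 polarizability tensor, a.u.
    return np.trace(alpha) / 3.0

def total_hyperpolarizability(beta):        # beta[i, j, k] first hyperpolarizability, a.u.
    bx = beta[0, 0, 0] + beta[0, 1, 1] + beta[0, 2, 2]
    by = beta[1, 1, 1] + beta[1, 0, 0] + beta[1, 2, 2]
    bz = beta[2, 2, 2] + beta[2, 0, 0] + beta[2, 1, 1]
    return np.sqrt(bx**2 + by**2 + bz**2)

mu = np.array([1.2, -0.4, 2.1])             # placeholder components
alpha = np.diag([95.0, 88.0, 110.0])
beta = np.zeros((3, 3, 3)); beta[0, 0, 0], beta[1, 1, 1] = 120.0, -35.0
print(total_dipole(mu), mean_polarizability(alpha), total_hyperpolarizability(beta))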

Keywords: DFT, picolinate, IR, Raman, nonlinear optic

Procedia PDF Downloads 485
2173 Saving Energy through Scalable Architecture

Authors: John Lamb, Robert Epstein, Vasundhara L. Bhupathi, Sanjeev Kumar Marimekala

Abstract:

In this paper, we focus on the importance of scalable architecture for data centers and buildings in general to help an enterprise achieve environmental sustainability. A scalable architecture helps in many ways: it adapts to business and user requirements and promotes high-availability and disaster-recovery solutions that are cost-effective and low-maintenance. A scalable architecture also plays a vital role in the three core areas of sustainability (economic, environmental, and social), also known as the three pillars of a sustainability model. If the architecture is scalable, it has many advantages; for example, it helps businesses and industries adapt to changing technology, drive innovation, promote platform independence, and build resilience against natural disasters. Most importantly, having a scalable architecture helps industries bring in cost-effective measures for energy consumption, reduce wastage, increase productivity, and enable a robust environment. It also helps in the reduction of carbon emissions through advanced monitoring and metering capabilities. Scalable architectures help reduce waste by optimizing designs to utilize materials efficiently, minimize resources, and decrease carbon footprints by using low-impact, environmentally friendly materials. In this paper, we also emphasize the importance of a cultural shift towards the reuse and recycling of natural resources to support a balanced ecosystem and maintain a circular economy. Also, since all of us are involved in the use of computers, much of the scalable architecture we have studied is related to data centers.

Keywords: scalable architectures, sustainability, application design, disruptive technology, machine learning and natural language processing, AI, social media platform, cloud computing, advanced networking and storage devices, advanced monitoring and metering infrastructure, climate change

Procedia PDF Downloads 74
2172 Feeding Cost, Growth Performance, Meat and some Carcass Characteristics for Algerian “Hamra” Lambs

Authors: Kaddour Ziani, Méghit Boumédiène Khaled

Abstract:

Forty single, non-castrated male Hamra lambs were included in the present study. The traits analyzed were weight at birth (BW) and live weight recorded every 20 days. At 99.15±1.07 days old, the animals were weaned and then divided into two groups of 20 (control and experimental) according to their live weight (24.63±0.47 and 24.35±0.64 kg, respectively). During 59 days, two varieties of feed were given to assess growth performance. The feeding system consisted of supplying a commercial concentrate (corn based) to the control lambs, while a similar amount of an experimental concentrate (barley based) was given to the experimental ones. Both diets were supplemented with 200 g of barley straw per animal per ration. Ten lambs fed the experimental concentrate were slaughtered at 37.85±0.78 kg live weight. The growth performance, the diet cost, and some of the carcass and meat characteristics were evaluated. Chemical analysis of both diets showed an elevated crude fibre content in the commercial concentrate, whereas the experimental concentrate contained higher amounts of calcium. Both groups grew at a similar rate (p > 0.05) and reached the same final body weight. Concerning the cost of the diets, a significant difference was found between them (p ≤ 0.001), which could affect the price of the produced meat. The dressing percentage was 46.65%, with 2.49% carcass shrink. Furthermore, an interesting percentage of total muscle was obtained (63.73%), with a good carcass conformation score of 9.56. Compared to other sheep breeds, the Hamra carcass could be considered the most economically valuable.

Keywords: carcass characteristics, feeding cost, growth performance, Hamra lamb, meat

Procedia PDF Downloads 286
2171 Attitudes towards People with Disability and Career Interest in Disability Studies: A Study of Clinical Medical Students of a Tertiary Institution in Southeastern Nigeria

Authors: Ebele V. Okoli, Emmanuel Nwobi, Dozie Ezechukwu, Ijeoma Itanyi

Abstract:

One in seven people worldwide suffers from a disability, and 80% of people with disabilities live in developing countries. Negative attitudes and misconceptions among health-care providers constitute barriers to optimal health care for people with disabilities. This underscores the relevance of a study of the attitude of Nigerian medical students towards disability and their willingness to work in the disability sector. This was a descriptive cross-sectional study conducted among 254 penultimate and final year medical students of a university in southeastern Nigeria. The mean age of the students was 24.8 ± 3.12 years. The majority of the students were male (75.2%), single (96.9%), of the Igbo tribe (86.6%), Christian (97.6%) and grew up in urban areas (68.1%). Results indicated that the medical students had a predominantly positive attitude towards people with disability: 73.8% had a positive attitude, and the mean attitude score was 67.03 ± 0.14 (positive attitude = 61-120, negative attitude = 0-60). Chi-square analysis did not show any significant effect of demographic and social factors on the students' attitude towards people with disabilities. The students were mostly willing to work in areas that address the challenges of people with disability (70.4%), but a greater proportion had never heard about Disability Studies (67.5%). About a third of the students (33.2%) would like to travel abroad to practice in the disability sector. Conclusions: The students generally had a positive attitude towards people with disability, and a greater percentage were willing to work in the disability sector in their future career. About two-thirds had, however, never heard about Disability Studies. There was some potential for brain drain among the students, as a third of the population intended to practice abroad on graduation.

Keywords: attitudes, career interest, disability, medical students

Procedia PDF Downloads 346
2170 Interpreting Form Based Code in Historic Residential Corridor

Authors: Diljan C. K.

Abstract:

Every location on the planet has a history and culture that give it its own identity and character, making it distinct from others. In today's urbanised world, it is fashionable to remould this original character and impression in a contemporary style. The new character and impression of such places show a complete detachment from their roots. The heritage and cultural values of the place are replaced by new impressions, and as a result, they eventually lose their identity and character and cannot be sustained. In this situation, form-based coding acts as a tool in the urban design process, helping to produce solutions that strongly bind individuals to their neighbourhood and are closely related to culture through the physical spaces with which they are associated. The form-based code was created by pioneers of New Urbanism in the United States in 1987. Since then, it has been used in various projects inside and outside the USA at varied scales, from the design of a single building to the design of a whole community. This research makes an effort to interpret the form-based code in historic corridors in order to establish the association of physical form and space with the public realm and to uphold context and culture. Many historic corridors are undergoing a tremendous transformation of their physical form that disregards their culture and context, and this will lead to a loss of identity in form and function. In the case of Valiyashala in Trivandrum, which is transforming its form and risks losing its identity, the form-based code would be a suitable tool to strengthen its historical value. The study concludes by analysing the existing code (KMBR) governing Valiyashala against form-based coding to identify the requirements of a form-based code for Valiyashala.

Keywords: form based code, urban conservation, heritage, historic corridor

Procedia PDF Downloads 97
2169 Processing Studies and Challenges Faced in Development of High-Pressure Titanium Alloy Cryogenic Gas Bottles

Authors: Bhanu Pant, Sanjay H. Upadhyay

Abstract:

Frequently, the upper stage of high-performance launch vehicles utilizes cryogenic-tank-submerged pressurization gas bottles with high volume-to-weight efficiency to achieve a direct gain in the satellite payload. Titanium alloys, owing to their high specific strength coupled with excellent compatibility with various fluids, are the materials of choice for these applications. Amongst the titanium alloys, two alloys are suitable for cryogenic applications, namely Ti6Al4V-ELI and Ti5Al2.5Sn-ELI. The two-phase alpha-beta alloy Ti6Al4V-ELI is usable down to the LOX temperature of 90 K, while the single-phase alpha alloy Ti5Al2.5Sn-ELI can be used down to the LHe temperature of 4 K. High-pressure gas bottles submerged in LH2 (20 K) can store more gas than bottles of the same volume submerged in LOX (90 K). Thus, the use of these alpha alloy gas bottles stored at 20 K gives a distinct advantage, since fewer gas bottles are needed to store the same amount of high-pressure gas, which in turn leads to a one-to-one gain in the satellite payload. A cost advantage to the tune of 15000 $/kg of weight saved in the upper stages, and thereby a satellite payload gain, is expected from this change. However, the processing of alpha Ti5Al2.5Sn-ELI alloy gas bottles poses challenges due to the lower forgeability of the alloy and the mode of qualification for the critical, severe application environment. The present paper describes the processing and the challenges/solutions encountered during the development of these advanced gas bottles for LH2 (20 K) applications.

Keywords: titanium alloys, cryogenic gas bottles, alpha titanium alloy, alpha-beta titanium alloy

Procedia PDF Downloads 42
2168 Rapid Separation of Biomolecules and Neutral Analytes with a Cationic Stationary Phase by Capillary Electrochromatography

Authors: A. Aslihan Gokaltun, Ali Tuncel

Abstract:

The unique properties of capillary electrochromatography (CEC), such as high performance, high selectivity, and low consumption of both reagents and analytes, make this technique an attractive one for the separation of biomolecules, including nucleosides and nucleotides, peptides, proteins, and carbohydrates. Monoliths have become a well-established separation medium for CEC in a format that can be compared to a single large 'particle' that does not include interparticular voids. Convective flow through the pores of the monolith significantly accelerates the rate of mass transfer and enables a substantial increase in the speed of the separation. In this work, we propose a new approach for the preparation of a cationic monolithic stationary phase for capillary electrochromatography. Instead of utilizing a charge-bearing monomer during polymerization, the desired charge-bearing group is generated on the capillary monolith after polymerization by using the reactive moiety of the monolithic support via a simple one-pot reaction. The optimized monolithic column compensates for the disadvantages of frequently used reversed phases, which are poorly suited to the separation of polar solutes. Rapid separation and high column efficiencies are achieved for the separation of neutral analytes, nucleic acid bases and nucleosides in reversed-phase mode. The capillary monolith showed satisfactory hydrodynamic permeability and mechanical stability, with relative standard deviation (RSD) values below 2%. This promising reactive support, which offers ligand-selection flexibility owing to its reactive functionality, represents a new family of separation media for CEC.

Keywords: biomolecules, capillary electrochromatography, cationic monolith, neutral analytes

Procedia PDF Downloads 206
2167 Thermo-Mechanical Behavior of Steel-Wood Connections of Wooden Structures Under the Effect of a Fire

Authors: Ahmed Alagha, Belkacem Lamri, Abdelhak Kada

Abstract:

Steel-wood assemblies often have complex geometric configurations whose overall behavior under the effect of a fire is conditioned by the thermal response of the combination of the two materials, steel and wood, whose thermal characteristics are greatly influenced by high temperatures. The objective of this work is to study the thermal behavior of a steel-wood connection, with or without insulating material, subjected to the ISO 834 standard fire model. The analysis is developed analytically using the Eurocodes and numerically, by the finite element method, through the ANSYS calculation code. The design of the connections is evaluated at room temperature for the cases of single shear and double shear. The thermal behavior of the connections is simulated in the transient state while taking into account the modes of heat transfer by convection and by radiation. The variation of temperature as a function of time is evaluated at different positions in the connections while taking into account the heat produced and the formation of the char layer. The results relate to the temperature distributions in the connection elements as a function of the duration of the fire. The results of the thermal analysis show that the temperature increases rapidly and reaches more than 260 °C in the steel material after an hour of exposure to fire. The temperature development in the wood material is different from that in steel because of its thermal properties: wood heats up on the outside and burns, and its surface can reach very high temperatures at certain points.
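
The ISO 834 standard fire referred to above prescribes the gas temperature as T(t) = 20 + 345 log10(8t + 1), with t in minutes and T in degrees Celsius; the short sketch below evaluates it, giving roughly 945 °C after 60 minutes.

import math

def iso834_gas_temperature(t_min, t_ambient=20.0):
    """ISO 834 standard temperature-time curve (t in minutes, result in deg C)."""
    return t_ambient + 345.0 * math.log10(8.0 * t_min + 1.0)

for t in (15, 30, 60):
    print(t, "min ->", round(iso834_gas_temperature(t), 1), "C")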

Keywords: Eurocode 5, finite elements, ISO834, simple shear, thermal behaviour, wood-steel connection

Procedia PDF Downloads 71
2166 Estimation of Relative Permeabilities and Capillary Pressures in Shale Using Simulation Method

Authors: F. C. Amadi, G. C. Enyi, G. Nasr

Abstract:

Relative permeabilities are practical factors used to correct the single-phase Darcy's law for application to multiphase flow. For effective characterisation of large-scale multiphase flow in hydrocarbon recovery, relative permeabilities and capillary pressures are used. These parameters are acquired via special core flooding experiments, and the special core analysis (SCAL) module of reservoir simulation is applied by engineers for their evaluation. However, core flooding experiments on shale core samples are expensive and time-consuming before the various flow assumptions, for instance Darcy's law, are satisfied. This makes the application of core-flooding simulations imperative, in which analyses of the relative permeabilities and capillary pressures of multiphase flow can be carried out efficiently and effectively at a reasonable pace. This paper presents a Sendra software simulation of core flooding to obtain relative permeabilities and capillary pressures using different correlations. The approach used in this study consisted of three steps. In the first step, basic petrophysical parameters of the Marcellus shale sample, such as porosity, were determined using laboratory techniques. Secondly, core flooding was simulated for a particular injection scenario using different correlations. Thirdly, the best-fit correlations for the estimation of relative permeability and capillary pressure were obtained. This approach saves cost and time and is very reliable for the computation of relative permeabilities and capillary pressures at steady or unsteady state and in drainage or imbibition processes in the oil and gas industry when compared to other methods.
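
One family of correlations commonly history-matched by SCAL simulators such as Sendra is the Corey/Brooks-Corey model; the sketch below shows its form, with illustrative end points and exponents rather than the values fitted for the Marcellus sample.

# Corey relative permeability and Brooks-Corey capillary pressure (illustrative parameters)
import numpy as np

def corey_krw_kro(sw, swc=0.2, sor=0.25, krw_end=0.3, kro_end=0.8, nw=3.0, no=2.0):
    swn = np.clip((sw - swc) / (1.0 - swc - sor), 0.0, 1.0)   # normalised water saturation
    return krw_end * swn**nw, kro_end * (1.0 - swn)**no

def brooks_corey_pc(sw, swc=0.2, pe=15.0, lam=2.0):           # pe = entry pressure (assumed psi)
    se = np.clip((sw - swc) / (1.0 - swc), 1e-6, 1.0)
    return pe * se**(-1.0 / lam)

sw = np.linspace(0.2, 0.75, 6)
krw, kro = corey_krw_kro(sw)
print(krw, kro, brooks_corey_pc(sw))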

Keywords: relative permeability, porosity, 1-D black oil simulator, capillary pressures

Procedia PDF Downloads 434
2165 Surface Acoustic Wave (SAW)-Induced Mixing Enhances Biomolecules Kinetics in a Novel Phase-Interrogation Surface Plasmon Resonance (SPR) Microfluidic Biosensor

Authors: M. Agostini, A. Sonato, G. Greco, M. Travagliati, G. Ruffato, E. Gazzola, D. Liuni, F. Romanato, M. Cecchini

Abstract:

Since their first demonstration in the early 1980s, surface plasmon resonance (SPR) sensors have been widely recognized as useful tools for detecting chemical and biological species, and the interest of the scientific community toward this technology has grown rapidly in the past two decades owing to their high sensitivity, label-free operation and possibility of real-time detection. Recent works have suggested that a turning point in SPR sensor research would be the combination of SPR strategies with other technologies in order to reduce human handling of samples, improve integration and increase plasmonic sensitivity. In this light, microfluidics has been attracting growing interest. By properly designing microfluidic biochips it is possible to miniaturize the analyte-sensitive areas with an overall reduction of the chip dimensions, reduce the liquid reagent and sample volumes, improve automation, and increase the number of experiments in a single biochip by multiplexing approaches. However, as the fluidic channel dimensions approach the micron scale, laminar flows become dominant owing to the low Reynolds numbers that typically characterize microfluidics. In these environments mixing times are usually dominated by diffusion, which can be prohibitively long and lead to long-lasting biochemistry experiments. An elegant method to overcome these issues is to actively perturb the liquid laminar flow by exploiting surface acoustic waves (SAWs). With this work, we demonstrate a new approach for SPR biosensing based on the combination of microfluidics, SAW-induced mixing and real-time phase-interrogation grating-coupling SPR technology. On a single lithium niobate (LN) substrate, the nanostructured SPR sensing areas, the interdigital transducer (IDT) for SAW generation and the polydimethylsiloxane (PDMS) microfluidic chambers were fabricated. SAWs impinging on the microfluidic chamber generate acoustic streaming inside the fluid, leading to chaotic advection and thus improved fluid mixing, whilst analyte binding detection is performed via the SPR method, based on SPP excitation on a gold metallic grating under azimuthal orientation and phase interrogation. Our device has been fully characterized in order to separate, for the first time, the unwanted SAW heating effect from the fluid stirring inside the microchamber, both of which affect the molecular binding dynamics. The avidin/biotin assay and thiol-polyethylene glycol (bPEG-SH) were exploited as the model biological interaction and non-fouling layer, respectively. The reduction of biosensing kinetics time with SAW-enhanced mixing resulted in a ≈ 82% improvement for bPEG-SH adsorption onto gold and ≈ 24% for avidin/biotin binding (≈ 50% and 18%, respectively, compared to the heating-only condition). These results demonstrate that our biochip can significantly reduce the duration of bioreactions that usually require long times (e.g., PEG-based sensing layers, low-concentration analyte detection). The sensing architecture proposed here represents a new promising technology satisfying the major biosensing requirements: scalability and high-throughput capability. The detection system size and biochip dimensions could be further reduced and integrated; in addition, the possibility of reducing the duration of biological experiments via SAW-driven active mixing could easily be combined with the development of multiplexing platforms for parallel real-time sensing. In general, the technology reported in this study can be straightforwardly adapted to a great number of biological systems and sensing geometries.
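
A back-of-the-envelope check of the low-Reynolds-number, diffusion-limited regime described above is sketched below; the channel width, flow speed and diffusion coefficient are assumed values, not the geometry of the reported chip.

# Reynolds number and diffusive mixing time for an assumed microchannel
rho, mu = 1000.0, 1.0e-3          # water density (kg/m3) and viscosity (Pa s)
w, v = 100e-6, 1e-3               # channel width 100 um, mean velocity 1 mm/s (assumed)
D = 4e-11                         # diffusion coefficient of a ~60 kDa protein (m2/s)

reynolds = rho * v * w / mu       # << 1  -> laminar flow, no turbulent mixing
t_diffusion = w**2 / (2 * D)      # time to diffuse across the channel width
print(f"Re = {reynolds:.3f}, diffusive mixing time = {t_diffusion / 60:.1f} min")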

Keywords: biosensor, microfluidics, surface acoustic wave, surface plasmon resonance

Procedia PDF Downloads 262
2164 Protection against Sodium Arsenate Induced Fetal Toxicity in Albino Mice by Vitamin C and E

Authors: Fariha Qureshi, Mohammad Tahir

Abstract:

Epidemiological evidence indicates that arsenic contamination in drinking water increases the incidence of spontaneous abortion, stillbirth and premature birth in pregnant women. This study was designed to investigate the protective role of vitamins C and E against sodium arsenate induced fetal toxicity in albino mice. Twenty-four pregnant albino mice of the BALB/c strain were randomly divided into 4 groups of 6 animals each. Group A1 served as the control and was injected with 0.1 ml/kg/day of distilled water I/P for 18 days. Groups A2, A3 and A4 received a single I/P injection of sodium arsenate (35 mg/kg) on the 8th gestational day (GD), whereas groups A3 and A4 were also given vitamin C and vitamin E by I/P injection (9 mg/kg/day and 15 mg/kg/day, respectively), starting from the 8th GD and continuing for the rest of the pregnancy period. The early implantation sites, fetal resorptions, weight of live fetuses and crown-rump length were recorded. Gross morphological examination was carried out for malformations. Fetal kidneys were extracted for histological and micrometric analysis. Group A2 exhibited an increased incidence of abortion and fetal resorptions and a significant decrease in litter number and fetal weight; the difference of means was statistically significant among the groups (p < 0.001). In group A2, the fetal kidneys presented glomerulonephritis with acute tubular necrotic changes and interstitial fibrosis. Groups A3 and A4 showed statistically significant improvement in these parameters. The results revealed the antioxidant potential of vitamins C and E in protecting against arsenic-induced fetal toxicity in mice.

Keywords: fetal toxicity, fetal resorptions, interstitial fibrosis, tocopherol

Procedia PDF Downloads 259
2163 Crossing Narrative Waters in World Cinema: Alamar (2009) and Kaili Blues (2015)

Authors: Dustin Dill

Abstract:

The physical movement of crossing over water points to both developing narrative tropes and innovative cinematography in World Cinema today. Two prime examples, Alamar (2009) by Pedro González-Rubio and Kaili Blues (2015) by Bi Gan, demonstrate how contemporary storytelling in film not only rests upon these water shots but also emerges from them. The range of symbolism that these episodes in the story provoke goes hand in hand with the diverse filming sequences found in the respective productions. While González-Rubio decides to cut the scene into long and longer shots, Gan uses a single take. The differing angles depict equally unique directors and film projects: Alamar runs parallel to many definitions of the essay film, while Kaili Blues resonates much more with the mystery and art film. Nonetheless, the water-crossing scenes influence the narratives' subjects despite the generic consequences, and it is within the essay, mystery, and art film genres that a better understanding of World Cinema becomes possible. As Tiago de Luca explains, World Cinema's prerogative of giving form to a certain type of spectator does not always line up. The immense number of interpretations of crossing water (the escape from suffering to find nirvana, rebirth, and colonization) underlines the difficulty of categorizing it. If this type of cross-genre trait once defined World Cinema in its beginnings, this study observes that González-Rubio and Gan question the all-encompassing genre with their experimental shots of a universal narrative trope, the crossing of water.

Keywords: cinematography, genre, narrative, world cinema

Procedia PDF Downloads 265
2162 Numerical Investigation of Soft Clayey Soil Improved by Soil-Cement Columns under Harmonic Load

Authors: R. Ziaie Moayed, E. Ghanbari Alamouty

Abstract:

Deep soil mixing is one of the ground improvement methods in geotechnical engineering and is widely used in soft soils. This article investigates the consolidation behavior of a soft clay soil improved by a soil-cement column (SCC) through numerical modeling using the Plaxis2D program. This behavior is simulated under vertical static and cyclic loads applied at the soil surface. The static load problem is the simulation of a physical model test in an axisymmetric condition with a single SCC at the model center. The results of the numerical modeling, consisting of the settlement of the soft soil composite, the stresses on the soft soil and the column, and the excess pore water pressure in the soil, show a good correspondence with the test results. The response of the soft soil composite to the cyclic load in the vertical direction is also compared with the static results. In addition, the effects of two variables, namely the cement content used in the SCC and the area ratio a (the ratio of the diameter of the SCC to the diameter of the composite soil model), are investigated. The results show that the stress on the column with the higher value of a is lower than the stress on the other columns. A different rate of consolidation and excess pore pressure distribution is observed in the cyclic load problem. Comparison of the soil settlement results also shows higher compressibility in the cyclic load problem.

Keywords: area ratio, consolidation behavior, cyclic load, numerical modeling, soil-cement column

Procedia PDF Downloads 141
2161 Associated Map and Inter-Purchase Time Model for Multiple-Category Products

Authors: Ching-I Chen

Abstract:

The continued rise of e-commerce is the main driver of the rapid growth of global online purchasing. Consumers can buy nearly everything they want in a single occasion through online shopping. Purchase behavior models that focus on a single product category are insufficient to describe online shopping behavior; therefore, the analysis of multi-category purchases is becoming increasingly popular. For example, market basket analysis explores customers' tendency to buy associated product categories together. The information derived from market basket analysis facilitates cross-selling strategies and product recommendation systems. To detect the association between different product categories, we use market basket analysis with the multidimensional scaling technique to build an associated map that describes how likely multiple product categories are to be bought at the same time. In addition, we build an inter-purchase time model for associated products to describe how likely a product is to be bought after its associated product has been bought. We classify the inter-purchase time behaviors of multi-category products into nine types and use a mixture regression model to integrate those behaviors under our assumptions about purchase sequences. Our sample data come from comScore, which provides a panelist-level database that captures detailed browsing and buying behavior of internet users across the United States. We find that the inter-purchase time from books to movies is shorter than the inter-purchase time from movies to books. Based on the model analysis and empirical results, this research finally proposes managerial applications and recommendations.
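
A sketch of the associated-map idea is given below: category-level support and lift are estimated from basket data, and the categories are projected onto a two-dimensional map with multidimensional scaling. The toy baskets and the lift-to-distance transform are illustrative assumptions, not the comScore data or the authors' exact procedure.

import numpy as np
import pandas as pd
from sklearn.manifold import MDS

baskets = pd.DataFrame([                      # one row per shopping occasion (1 = bought)
    {"books": 1, "movies": 1, "music": 0, "games": 0},
    {"books": 1, "movies": 0, "music": 1, "games": 0},
    {"books": 0, "movies": 1, "music": 0, "games": 1},
    {"books": 1, "movies": 1, "music": 1, "games": 0},
    {"books": 0, "movies": 1, "music": 0, "games": 1},
])

support = baskets.mean()                               # P(category bought)
joint = baskets.T @ baskets / len(baskets)             # P(two categories bought together)
lift = joint / np.outer(support, support)              # > 1 means positive association
print(lift.round(2))

# Turn association into distance so strongly associated categories plot close together.
dissim = 1.0 / (1.0 + lift.to_numpy())
np.fill_diagonal(dissim, 0.0)
coords = MDS(n_components=2, dissimilarity="precomputed", random_state=0).fit_transform(dissim)
print(dict(zip(baskets.columns, coords.round(2).tolist())))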

Keywords: multiple-category purchase behavior, inter-purchase time, market basket analysis, e-commerce

Procedia PDF Downloads 356
2160 Preparation and Characterization of Calcium Phosphate Cement

Authors: W. Thepsuwan, N. Monmaturapoj

Abstract:

Calcium phosphate cements (CPCs) are among the most attractive bioceramics due to their moldability and their ability to fill complicated bony cavities or small dental defects. In this study, CPCs were produced using mixtures of tetracalcium phosphate (TTCP, Ca4O(PO4)2) and dicalcium phosphate anhydrous (DCPA, CaHPO4) in an equimolar ratio (1/1) with aqueous solutions of acetic acid (C2H4O2) and disodium hydrogen phosphate dihydrate (Na2HPO4.2H2O), in combination with sodium alginate in order to improve their moldability. The concentrations of the aqueous solutions and of sodium alginate were varied to investigate the effects of different aqueous solutions and alginate contents on the properties of the cements. The cement paste was prepared by mixing cement powder (P) with aqueous solution (L) at a P/L ratio of 1.0 g/0.35 ml. X-ray diffraction (XRD) was used to analyse the phase formation of the cements. The setting times and compressive strength of the set CPCs were measured using the Gilmore apparatus and a universal testing machine, respectively. The results showed that CPCs could be produced using both basic (Na2HPO4.2H2O) and acidic (C2H4O2) solutions. XRD results show the precipitation of hydroxyapatite in all cement samples, with no change in phase formation among cements using different concentrations of the Na2HPO4.2H2O solution. With increasing concentration of the acidic solution, the samples contained less hydroxyapatite and more dicalcium phosphate dihydrate, which led to a shorter setting time. Samples with sodium alginate exhibited higher crystallization of hydroxyapatite than those without alginate, resulting in a shorter setting time in the basic solution but a longer setting time in the acidic solution. The stronger cement was attained from samples using the acidic solution with sodium alginate; however, its strength was still lower than that obtained using the basic solution.

Keywords: calcium phosphate cements, TTCP, DCPA, hydroxyapatite, properties

Procedia PDF Downloads 378
2159 Multi Response Optimization in Drilling Al6063/SiC/15% Metal Matrix Composite

Authors: Hari Singh, Abhishek Kamboj, Sudhir Kumar

Abstract:

This investigation proposes a grey-based Taguchi method to solve multi-response problems. The grey-based Taguchi method is based on Taguchi's design of experiments and adopts Grey Relational Analysis (GRA) to transform multi-response problems into single-response problems. In this investigation, an attempt has been made to optimize the drilling process parameters considering weighted output response characteristics using grey relational analysis. The output response characteristics considered are surface roughness, burr height and hole diameter error under the experimental conditions of cutting speed, feed rate, step angle, and cutting environment. The drilling experiments were conducted using an L27 orthogonal array. A combination of orthogonal array, design of experiments and grey relational analysis was used to ascertain the best possible drilling process parameters that give minimum surface roughness, burr height and hole diameter error. The results reveal that the combination of Taguchi design of experiments and grey relational analysis improves the surface quality of the drilled hole.
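
The grey relational analysis step that collapses the three smaller-the-better responses into a single grade can be sketched as below; the response values, weights and distinguishing coefficient are illustrative, not the L27 measurements.

import numpy as np

def grey_relational_grade(responses, weights=None, zeta=0.5):
    r = np.asarray(responses, dtype=float)                            # rows: runs, cols: responses
    norm = (r.max(axis=0) - r) / (r.max(axis=0) - r.min(axis=0))      # smaller-the-better normalisation
    delta = np.abs(1.0 - norm)                                        # deviation from the ideal sequence
    coeff = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
    w = np.full(r.shape[1], 1.0 / r.shape[1]) if weights is None else np.asarray(weights)
    return coeff @ w                                                  # one grade per experimental run

runs = [[2.1, 0.30, 0.05],    # surface roughness (um), burr height (mm), diameter error (mm)
        [1.6, 0.22, 0.04],
        [2.8, 0.41, 0.07]]
print(grey_relational_grade(runs))                                    # higher grade = better run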

Keywords: metal matrix composite, drilling, optimization, step drill, surface roughness, burr height, hole diameter error

Procedia PDF Downloads 304
2158 An Exploratory Study to Investigate the Impact of Corporate Social Responsibility on Luxury Brand Avoidance in India

Authors: Glyn Atwal, Douglas Bryson

Abstract:

The rapid expansion of a consumer class in India has coincided with an increasing awareness of social and environmental issues. The overall objective of this study is to explore to what extent Corporate Social Responsibility (CSR) can lead to luxury brand avoidance within an Indian context. In-depth interviews were conducted with luxury consumers in New Delhi. The demographic breakdown of those interviewed was 16 males and 9 females, aged between 21 and 44. Antecedents of brand avoidance could be sorted into two main categories. The first category was consumer dissatisfaction due to poor product or service performance; customer service, particularly within the hospitality sector, was identified as a defining source of brand avoidance. The second category was negative stereotypes of brand users. A salient finding was that no single participant explicitly identified CSR as a source of brand avoidance. However, the interviews revealed that luxury consumers are in fact concerned about CSR issues but assume that international luxury brands have a positive record on CSR performance. Interestingly, participants placed greater emphasis on the broader interpretation of 'corporate reputation' rather than on specific social or environmental issues when judging the CSR performance of a luxury brand. The findings reported in this exploratory study suggest that Indian luxury consumers do value the overall CSR performance of luxury brands, expressed as brand responsibility or brand reputation, and that this is a potential source of brand avoidance. International luxury brands therefore need to consider developing, but also communicating, a positive CSR strategy in order to reduce the risk of customers forming negative opinions about the brand.

Keywords: brand avoidance, CSR, luxury

Procedia PDF Downloads 301
2157 X-Ray Diffraction and Crosslink Density Analysis of Starch/Natural Rubber Polymer Composites Prepared by Latex Compounding Method

Authors: Raymond Dominic Uzoh

Abstract:

Starch fillers were extracted from three plant sources, namely amora tuber (a wild variety of Irish potato), sweet potato and yam, and their particle size, pH, and percentage composition of amylose and amylopectin were determined accordingly by high performance liquid chromatography (HPLC). The starch was introduced into natural rubber in the liquid phase (through gelatinization) by the latex compounding method and compounded according to a standard method. The prepared starch/natural rubber composites were characterized on an Instron universal testing machine (UTM) for tensile mechanical properties. The composites were further characterized by X-ray diffraction and crosslink density analysis. The particle size determination showed that the amora starch granules have the largest particle size (156 × 47 μm), followed by yam starch (155 × 40 μm) and then sweet potato starch (153 × 46 μm). The pH test also revealed that amora starch has a near-neutral pH of 6.9, yam 6.8, and sweet potato 5.2. Amylose and amylopectin determination showed that yam starch has the highest percentage of amylose (29.68), followed by potato (22.34) and then amora starch with the lowest value (14.86). The tensile mechanical property testing revealed that yam starch produced the best tensile mechanical properties, followed by amora starch and then sweet potato starch. The structure and crystalline/amorphous nature of the composites were confirmed by X-ray diffraction, while the nature of crosslinking was confirmed by a swelling test in toluene using the Flory-Rehner approach. This research study has provided a workable strategy for enhancing the interfacial interaction between a hydrophilic filler (starch) and a hydrophobic polymeric matrix (natural rubber), yielding moderately good tensile mechanical properties for further development and application in the rubber processing industry.
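
The Flory-Rehner relation used above to obtain crosslink density from equilibrium swelling in toluene is sketched below; the polymer-solvent interaction parameter and the swelling measurement are assumed illustrative values.

import math

def flory_rehner_crosslink_density(vr, chi=0.39, vs=106.3):
    """Crosslink density (mol/cm3) from the polymer volume fraction vr of the swollen gel;
    vs = molar volume of toluene (cm3/mol), chi = assumed NR-toluene interaction parameter."""
    return -(math.log(1.0 - vr) + vr + chi * vr**2) / (vs * (vr**(1.0 / 3.0) - vr / 2.0))

print(flory_rehner_crosslink_density(vr=0.25))   # hypothetical swelling measurement -> ~2.5e-4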

Keywords: natural rubber, fillers, starch, amylose, amylopectin, crosslink density

Procedia PDF Downloads 157
2156 Design of a Virtual Reality System for Children with Developmental Coordination Disorder

Authors: Ya-Ju Ju, Li-Chen Yang, Yi-Chun Du, Rong-Ju Cherng

Abstract:

Introduction: It is estimated that 5-6% of school-aged children may be diagnosed with developmental coordination disorder (DCD). Children with DCD are characterized by motor skill difficulties that cannot be explained by any medical or intellectual reason. Such motor difficulties limit children's participation in sports activities, further affect their physical fitness, cardiopulmonary function and balance, and may lead to obesity. The purpose of the project was to develop an exergaming system for children with DCD aimed at improving their physical fitness, cardiopulmonary function and balance ability. Methods: This study took five steps to build up the system: system planning, task selection, task programming, system integration and usability testing. The system basically adopted virtual reality techniques to integrate self-developed training programs. The training programs were developed through brainstorming among team members and after a literature review. The tasks selected for training in the system were a combination of fundamental movement and motor skills. Results and Discussion: Based on the theory of motor development, we designed the training tasks from easy to hard and from single tasks to dual tasks. The tasks included walking, sit-to-stand, jumping, kicking, weight shifting, side jumping and their combinations. A preliminary study showed that the tasks presented a developmental order. Further study is needed to examine the system's effect on motor skills and cardiovascular fitness in children with DCD.

Keywords: virtual reality, virtual reality system, developmental coordination disorder, children

Procedia PDF Downloads 100
2155 Using LTE-Sim in New Hanover Decision Algorithm for 2-Tier Macrocell-Femtocell LTE Network

Authors: Umar D. M., Aminu A. M., Izaddeen K. Y.

Abstract:

Deployments of miniature base stations, also referred to as femtocells, improve the quality of service of indoor and outdoor users. Nevertheless, mobility management remains a key issue with regard to their deployment. This paper addresses this issue, with an in-depth focus on the most important aspect of mobility management: handover. In handover management, making a handover decision in the LTE two-tier macrocell-femtocell network is a crucial research area. Decision algorithms in this research are classified and comparatively analyzed according to received signal strength, user equipment speed, cost function, and interference. However, it was observed that most of the discussed decision algorithms fail to consider cell selection with a hybrid access policy in a single-macrocell, multiple-femtocell scenario. Another observation was that a majority of these algorithms do not incorporate the user equipment residence time parameter; not including this parameter increases the number of unnecessary handover occurrences. To deal with these issues, a sophisticated handover decision algorithm is proposed. The proposed algorithm considers the user's velocity, received signal strength and residence time, as well as the femtocell base station's access policy. Simulation results have shown that the proposed algorithm reduces the number of unnecessary handovers when compared to a conventional received-signal-strength-based handover decision algorithm.
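
A simplified sketch of the kind of decision logic described above is shown below; it is not the authors' exact algorithm, and the thresholds, hysteresis margin and hybrid-access handling are assumptions.

# Handover to a femtocell only when signal, speed, predicted residence time and
# access policy all allow it (all numeric thresholds are assumed values).
def handover_to_femtocell(rss_femto_dbm, rss_macro_dbm, ue_speed_kmh,
                          predicted_residence_s, access_policy, ue_is_subscriber,
                          hysteresis_db=3.0, max_speed_kmh=30.0, min_residence_s=10.0):
    if access_policy == "closed" and not ue_is_subscriber:
        return False                                   # CSG femtocell rejects non-members
    if access_policy == "hybrid" and not ue_is_subscriber:
        hysteresis_db += 3.0                           # be more conservative for guest users
    if ue_speed_kmh > max_speed_kmh:
        return False                                   # fast UEs stay on the macrocell
    if predicted_residence_s < min_residence_s:
        return False                                   # avoid ping-pong handovers
    return rss_femto_dbm > rss_macro_dbm + hysteresis_db

print(handover_to_femtocell(-72, -80, 5, 25, "hybrid", ue_is_subscriber=False))  # True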

Keywords: user-equipment, radio signal service, long term evolution, mobility management, handoff

Procedia PDF Downloads 111
2154 The Structure and Function Investigation and Analysis of the Automatic Spin Regulator (ASR) in the Powertrain System of Construction and Mining Machines with the Focus on Dump Trucks

Authors: Amir Mirzaei

Abstract:

The powertrain system is one of the most basic and essential components of a machine; motion is practically impossible without it. The power generated by the engine is transmitted by the powertrain system to the wheels, which are the last parts of the system. The powertrain system has different components according to the type of use and the design. When the force generated by the engine reaches the wheels, the frictional force between the tire and the ground determines the amount of traction or slip. On surfaces such as icy, muddy, and snow-covered ground, the friction coefficient between the tire and the ground decreases dramatically, which in turn increases the force loss and drastically reduces vehicle traction. This condition is caused by the phenomenon of slipping, which, in addition to wasting the energy produced, causes premature wear of the driving tires. It also causes the temperature of the transmission oil to rise excessively, which in turn degrades and contaminates the oil and reduces the useful life of the clutch disks and plates inside the transmission. This issue is much more important in road construction and mining machinery than in passenger vehicles and is always one of the most significant design issues to overcome. One of the methods used to overcome it is the automatic spin regulator system, abbreviated ASR. The structure and function of this method, which has solved one of the biggest challenges of the powertrain system in the field of construction and mining machinery, are examined in this research.

Keywords: automatic spin regulator, ASR, methods of reducing slipping, methods of preventing the reduction of the useful life of clutches disk and plate, methods of preventing the premature dirtiness of transmission oil, method of preventing the reduction of the useful life of tires

Procedia PDF Downloads 67
2153 The Importance of Artificial Intelligence in Various Healthcare Applications

Authors: Joshna Rani S., Ahmadi Banu

Abstract:

Artificial Intelligence (AI) has a significant role to play in the healthcare offerings of the future. In the form of machine learning, it is the primary capability behind the development of precision medicine, widely agreed to be a sorely needed advance in care. Although early efforts at providing diagnosis and treatment recommendations have proven challenging, we anticipate that AI will ultimately master that domain as well. Given the rapid advances in AI for imaging analysis, it seems likely that most radiology and pathology images will eventually be examined by a machine. Speech and text recognition are already employed for tasks such as patient communication and the capture of clinical notes, and their use will increase. The greatest challenge to AI in these healthcare domains is not whether the technologies will be capable enough to be useful, but rather ensuring their adoption in daily clinical practice. For widespread adoption to occur, AI systems must be approved by regulators, integrated with EHR systems, standardised to a sufficient degree that similar products work alike, taught to clinicians, paid for by public or private payer organisations, and updated over time in the field. These challenges will ultimately be overcome, but doing so will take much longer than it will take for the technologies themselves to mature. As a result, we expect to see limited use of AI in clinical practice within 5 years and more extensive use within 10 years. It also seems increasingly clear that AI systems will not replace human clinicians on a large scale, but rather will augment their efforts to care for patients. Over time, human clinicians may move toward tasks and job designs that draw on uniquely human skills such as empathy, persuasion, and big-picture integration. Perhaps the only healthcare providers who will risk their careers over time will be those who refuse to work alongside AI.

Keywords: artificial intelligence, health care, breast cancer, AI applications

Procedia PDF Downloads 169
2152 An Assessment of Different Blade Tip Timing (BTT) Algorithms Using an Experimentally Validated Finite Element Model Simulator

Authors: Mohamed Mohamed, Philip Bonello, Peter Russhard

Abstract:

Blade Tip Timing (BTT) is a technology concerned with the estimation of both frequency and amplitude of rotating blades. A BTT system comprises two main parts: (a) the arrival time measurement system, and (b) the analysis algorithms. Simulators play an important role in the development of the analysis algorithms since they generate blade tip displacement data from the simulated blade vibration under controlled conditions. This enables an assessment of the performance of the different algorithms with respect to their ability to accurately reproduce the original simulated vibration. Such an assessment is usually not possible with real engine data since there is no practical alternative to BTT for blade vibration measurement. Most simulators used in the literature are based on a simple spring-mass-damper model to determine the vibration. In this work, a more realistic experimentally validated simulator based on the Finite Element (FE) model of a bladed disc (blisk) is first presented. It is then used to generate the necessary data for the assessment of different BTT algorithms. The FE modelling is validated using both a hammer test and two firewire cameras for the mode shapes. A number of autoregressive methods, fitting methods and state-of-the-art inverse methods (i.e. Russhard) are compared. All methods are compared with respect to both synchronous and asynchronous excitations with both single and simultaneous frequencies. The study assesses the applicability of each method for different conditions of vibration, amount of sampling data, and testing facilities, according to its performance and efficiency under these conditions.
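
As an illustration of the "fitting" class of BTT algorithms compared above, the sketch below fits the amplitude and phase of an assumed synchronous engine-order response to once-per-revolution tip deflections by linear least squares; the probe layout, engine order and noise level are assumptions, not the blisk test data.

import numpy as np

rng = np.random.default_rng(1)
probe_angles = np.deg2rad([0.0, 25.0, 62.0, 110.0])     # assumed probe positions (rad)
eo = 4                                                  # assumed engine order
amp_true, phase_true, dc_true = 0.8, 0.6, 0.1           # mm, rad, mm (simulated response)
y = amp_true * np.sin(eo * probe_angles + phase_true) + dc_true
y += rng.normal(0.0, 0.02, y.size)                      # measurement noise

# A*sin(eo*theta + phi) + c  rewritten as  a*sin + b*cos + c  -> linear least squares
G = np.column_stack([np.sin(eo * probe_angles), np.cos(eo * probe_angles),
                     np.ones_like(probe_angles)])
a, b, c = np.linalg.lstsq(G, y, rcond=None)[0]
print("amplitude:", np.hypot(a, b), "phase:", np.arctan2(b, a), "offset:", c)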

Keywords: blade tip timing, blisk, finite element, vibration measurement

Procedia PDF Downloads 297
2151 Antimicrobial Evaluation of Polyphenon 60 and Ciprofloxacin Loaded Nano Emulsion against Uropathogenic Escherichia coli Bacteria and Its in vivo Analysis

Authors: Atinderpal Kaur, Shweta Dang

Abstract:

Our aim was to develop a nanoemulsion-based delivery system containing Polyphenon 60 (P60) and ciprofloxacin (Cipro) for intravaginal delivery to treat urinary tract infection. In the present study, Polyphenon 60 (P60) and ciprofloxacin (Cipro) were loaded into a single nanoemulsion (NE) system via an ultra-sonication technique and characterized for particle size, in vitro release and antibacterial efficacy against uropathogenic Escherichia coli. To determine the in vivo pharmacokinetic parameters and intravaginal transport of the NE, a gamma scintigraphy and biodistribution study was conducted by radiolabelling the NE with technetium pertechnetate (99mTc). The preliminary antibacterial investigation showed synergy between these compounds, with a FIC index of 0.42. The developed formulation showed a zeta potential of +55.3 mV and a particle size of 151.7 nm, with a PDI of 0.196. The in vitro release percentage of P60 at the end of 7 hours was 94.8 ± 0.9%, whereas the release of Cipro was 75.1 ± 0.15% in simulated vaginal media. The MBC was identified, and the findings demonstrated that in both ESBL (extended-spectrum β-lactamase) and MBL (metallo-β-lactamase) cultures, the P60+Cipro NE inhibited the growth of all the isolates at 2 mg/ml dilution. The percentage of radiolabelled drug per gram was found to be 3.50±0.26 and 3.81±0.30 in the kidney and urinary bladder, respectively, at 3 h. From the findings, it was concluded that the developed P60+Cipro NE was transported efficiently to the target organs, had a long duration of action and showed high biocompatibility via intravaginal administration as compared to oral administration.
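
A fractional inhibitory concentration (FIC) index such as the 0.42 quoted above is conventionally computed from checkerboard MIC data as sketched below; the MIC values are placeholders chosen only to reproduce an index of 0.42.

def fic_index(mic_a_alone, mic_b_alone, mic_a_combo, mic_b_combo):
    """FICI = MIC(A in combination)/MIC(A alone) + MIC(B in combination)/MIC(B alone)."""
    return mic_a_combo / mic_a_alone + mic_b_combo / mic_b_alone

fici = fic_index(mic_a_alone=2.0, mic_b_alone=0.5, mic_a_combo=0.5, mic_b_combo=0.085)
print(fici, "synergy" if fici <= 0.5 else "no synergy")   # FICI <= 0.5 is read as synergy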

Keywords: ciprofloxacin, gamma scintigraphy, intravaginal drug delivery, Polyphenon 60

Procedia PDF Downloads 309
2150 Tool for Maxillary Sinus Quantification in Computed Tomography Exams

Authors: Guilherme Giacomini, Ana Luiza Menegatti Pavan, Allan Felipe Fattori Alves, Marcela de Oliveira, Fernando Antonio Bacchim Neto, José Ricardo de Arruda Miranda, Seizo Yamashita, Diana Rodrigues de Pina

Abstract:

The maxillary sinus (MS), part of the paranasal sinus complex, is one of the most enigmatic structures in modern humans. The literature has suggested that MSs function as olfaction accessories, to heat or humidify inspired air, for thermoregulation, to impart resonance to the voice and others. Thus, the real function of the MS is still uncertain. Furthermore, the MS anatomy is complex and varies from person to person. Many diseases may affect the development process of sinuses. The incidence of rhinosinusitis and other pathoses in the MS is comparatively high, so, volume analysis has clinical value. Providing volume values for MS could be helpful in evaluating the presence of any abnormality and could be used for treatment planning and evaluation of the outcome. The computed tomography (CT) has allowed a more exact assessment of this structure, which enables a quantitative analysis. However, this is not always possible in the clinical routine, and if possible, it involves much effort and/or time. Therefore, it is necessary to have a convenient, robust, and practical tool correlated with the MS volume, allowing clinical applicability. Nowadays, the available methods for MS segmentation are manual or semi-automatic. Additionally, manual methods present inter and intraindividual variability. Thus, the aim of this study was to develop an automatic tool to quantity the MS volume in CT scans of paranasal sinuses. This study was developed with ethical approval from the authors’ institutions and national review panels. The research involved 30 retrospective exams of University Hospital, Botucatu Medical School, São Paulo State University, Brazil. The tool for automatic MS quantification, developed in Matlab®, uses a hybrid method, combining different image processing techniques. For MS detection, the algorithm uses a Support Vector Machine (SVM), by features such as pixel value, spatial distribution, shape and others. The detected pixels are used as seed point for a region growing (RG) segmentation. Then, morphological operators are applied to reduce false-positive pixels, improving the segmentation accuracy. These steps are applied in all slices of CT exam, obtaining the MS volume. To evaluate the accuracy of the developed tool, the automatic method was compared with manual segmentation realized by an experienced radiologist. For comparison, we used Bland-Altman statistics, linear regression, and Jaccard similarity coefficient. From the statistical analyses for the comparison between both methods, the linear regression showed a strong association and low dispersion between variables. The Bland–Altman analyses showed no significant differences between the analyzed methods. The Jaccard similarity coefficient was > 0.90 in all exams. In conclusion, the developed tool to quantify MS volume proved to be robust, fast, and efficient, when compared with manual segmentation. Furthermore, it avoids the intra and inter-observer variations caused by manual and semi-automatic methods. As future work, the tool will be applied in clinical practice. Thus, it may be useful in the diagnosis and treatment determination of MS diseases. Providing volume values for MS could be helpful in evaluating the presence of any abnormality and could be used for treatment planning and evaluation of the outcome. The computed tomography (CT) has allowed a more exact assessment of this structure which enables a quantitative analysis. 
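As an illustration of the per-slice pipeline described above, the following is a minimal Python sketch (not the authors' Matlab® implementation): a pre-trained SVM scores candidate MS pixels from simple features, the most confident pixel seeds a region-growing step, and morphological opening removes small false positives; segmented voxels are summed over slices into a volume. The feature set, growing tolerance, and structuring-element size are illustrative assumptions.

# Illustrative sketch of an SVM + region-growing + morphology segmentation pipeline.
# The SVM is assumed to have been trained beforehand on labeled MS/non-MS pixels.
import numpy as np
from sklearn.svm import SVC
from skimage.segmentation import flood
from scipy.ndimage import binary_opening

def pixel_features(ct_slice):
    """Per-pixel features: intensity plus normalized (row, col) position."""
    rows, cols = np.indices(ct_slice.shape)
    return np.stack([ct_slice.ravel(),
                     rows.ravel() / ct_slice.shape[0],
                     cols.ravel() / ct_slice.shape[1]], axis=1)

def segment_slice(ct_slice, svm: SVC, grow_tolerance=150):
    """Detect a seed with the SVM, grow a region around it, clean up morphologically."""
    scores = svm.decision_function(pixel_features(ct_slice))
    seed = np.unravel_index(int(np.argmax(scores)), ct_slice.shape)  # most confident MS pixel
    region = flood(ct_slice, seed, tolerance=grow_tolerance)         # region growing
    return binary_opening(region, structure=np.ones((3, 3)))         # remove small false positives

def sinus_volume(ct_volume, svm, voxel_volume_mm3):
    """Sum segmented voxels over all slices and convert to mm^3."""
    voxels = sum(segment_slice(s, svm).sum() for s in ct_volume)
    return voxels * voxel_volume_mm3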

Keywords: maxillary sinus, support vector machine, region growing, volume quantification

Procedia PDF Downloads 497
2149 A Study on How to Develop the Usage Metering Functions of BIM (Building Information Modeling) Software under Cloud Computing Environment

Authors: Kim Byung-Kon, Kim Young-Jin

Abstract:

As projects in the Architecture, Engineering and Construction (AEC) industry have grown larger and more complex, the use of BIM (Building Information Modeling) technologies for 3D design and simulation has increased significantly. Typical applications include clash detection and the evaluation of design alternatives based on 3D planning, and these have been extended to construction management for virtual design and construction. To date, commercial BIM software has been operated in single-user environments, so the initial cost of introducing it is very high. Cloud computing, one of the most promising next-generation Internet technologies, enables simple Internet devices to use the services and resources provided with BIM software. Recently in Korea, studies linking BIM and cloud computing technologies have focused on reducing the cost of building BIM-related infrastructure and on providing various BIM services to small- and medium-sized enterprises (SMEs). This study addresses how to develop the usage metering functions of BIM software under a cloud computing architecture so that BIM data can be archived and used, and so that an optimal revenue structure can be created that allows BIM services to grow with the demand for cloud resources. To this end, the authors surveyed relevant cases and analyzed the needs and requirements of the AEC industry. Based on the findings of this survey and analysis, the authors propose how to optimally develop the usage metering functions of cloud BIM software.
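To make the notion of usage metering concrete, here is a small, hypothetical Python sketch of a per-user metering record and accumulator for a cloud BIM service; the field names, rates, and billing formula are illustrative assumptions, not the design proposed in the paper.

# Hypothetical per-user usage metering for a cloud BIM service (illustrative only).
from dataclasses import dataclass
from datetime import datetime
from collections import defaultdict

@dataclass
class UsageRecord:
    user_id: str
    start: datetime        # session start on the cloud BIM server
    end: datetime          # session end
    storage_gb: float      # BIM model data archived during the session

class UsageMeter:
    """Accumulates per-user records and converts them into billable units."""
    def __init__(self, rate_per_hour=0.5, rate_per_gb=0.1):
        self.rate_per_hour = rate_per_hour
        self.rate_per_gb = rate_per_gb
        self.records = defaultdict(list)

    def record(self, rec: UsageRecord) -> None:
        self.records[rec.user_id].append(rec)

    def charge(self, user_id: str) -> float:
        total = 0.0
        for r in self.records[user_id]:
            hours = (r.end - r.start).total_seconds() / 3600.0
            total += hours * self.rate_per_hour + r.storage_gb * self.rate_per_gb
        return total

# Example: one two-hour modelling session archiving 1.5 GB of BIM data.
meter = UsageMeter()
meter.record(UsageRecord("user-01", datetime(2024, 1, 1, 9), datetime(2024, 1, 1, 11), 1.5))
print(meter.charge("user-01"))   # 2 h * 0.5 + 1.5 GB * 0.1 = 1.15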

Keywords: construction IT, BIM (Building Information Modeling), cloud computing, BIM-based cloud computing, 3D design, cloud BIM

Procedia PDF Downloads 489
2148 Improving Fingerprinting-Based Localization System Using Generative AI

Authors: Getaneh Berie Tarekegn

Abstract:

A precise localization system is crucial for many artificial intelligence Internet of Things (AI-IoT) applications in the era of smart cities, including traffic monitoring, emergency alarming, environmental monitoring, location-based advertising, intelligent transportation, and smart health care. The most common way to provide continuous positioning services in outdoor environments is a global navigation satellite system (GNSS). However, due to non-line-of-sight propagation, multipath, and weather conditions, GNSS does not perform well in dense urban, urban, and suburban areas. This paper proposes a generative AI-based positioning scheme for large-scale wireless settings using fingerprinting techniques. We present a semi-supervised deep convolutional generative adversarial network (S-DCGAN)-based radio map construction method for real-time device localization, together with a reliable signal-fingerprint feature extraction method based on t-distributed stochastic neighbor embedding (t-SNE), which extracts dominant features while eliminating noise from hybrid WLAN and Long-Term Evolution (LTE) fingerprints. The proposed scheme reduced the workload of the site surveying required to build the fingerprint database by up to 78.5% and significantly improved positioning accuracy. The results show that the average positioning error of GAILoc is less than 0.39 m, and more than 90% of the errors are less than 0.82 m. According to the numerical results, SRCLoc significantly improves positioning performance and reduces radio map construction costs compared to traditional methods.
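The following is a minimal Python sketch of the fingerprint-matching step behind such a radio-map-based scheme. It omits the S-DCGAN radio-map augmentation and, because t-SNE has no out-of-sample transform, embeds reference and query fingerprints together before a weighted k-nearest-neighbour position estimate; all parameter values are illustrative assumptions rather than the paper's configuration.

# Illustrative fingerprint matching over a surveyed radio map (not the paper's GAILoc/SRCLoc system).
import numpy as np
from sklearn.manifold import TSNE

def localize(ref_rssi, ref_xy, query_rssi, k=3):
    """Weighted k-nearest-neighbour position estimate in a t-SNE feature space.

    ref_rssi  : (N, D) hybrid WLAN/LTE RSSI fingerprints at known survey points
    ref_xy    : (N, 2) coordinates of those survey points
    query_rssi: (M, D) fingerprints observed by the device to be located
    """
    # Embed reference and query fingerprints jointly (t-SNE is transductive).
    combined = np.vstack([ref_rssi, query_rssi])
    embedded = TSNE(n_components=2, perplexity=10, random_state=0).fit_transform(combined)
    ref_emb, query_emb = embedded[:len(ref_rssi)], embedded[len(ref_rssi):]

    estimates = []
    for q in query_emb:
        d = np.linalg.norm(ref_emb - q, axis=1)
        nearest = np.argsort(d)[:k]
        w = 1.0 / (d[nearest] + 1e-6)                      # closer fingerprints weigh more
        estimates.append((w[:, None] * ref_xy[nearest]).sum(axis=0) / w.sum())
    return np.array(estimates)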

Keywords: location-aware services, feature extraction technique, generative adversarial network, long short-term memory, support vector machine

Procedia PDF Downloads 39