[Image: ARGrasp_ICRA2024.png]

Xiwen Dengxiong, Xueting Wang, Shi Bai, Yunbo Zhang, 2024 IEEE International Conference on Robotics and Automation (ICRA 2024), accepted. [Video @ Youtube]

Most existing 6-DoF robot grasping solutions depend on strong supervision of grasp poses to ensure satisfactory performance, which can be laborious and impractical when the robot works in a restricted area. To this end, we propose a self-supervised 6-DoF grasp pose detection framework built on an Augmented Reality (AR) teleoperation system, which efficiently learns from human demonstrations and provides 6-DoF grasp poses without grasp pose annotations. Specifically, the system collects human demonstrations in the AR environment and contrastively learns the grasping strategy from them. In real-world experiments, the proposed system achieves satisfactory grasping performance and learns to grasp unknown objects within three demonstrations.
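
As a rough illustration of the contrastive learning step (our notation; the paper's exact objective may differ), a standard contrastive loss such as InfoNCE pulls the embedding of a demonstrated grasp toward its matching scene observation and pushes it away from non-matching ones:

\[ \mathcal{L} = -\log \frac{\exp\left(\mathrm{sim}(z, z^{+})/\tau\right)}{\exp\left(\mathrm{sim}(z, z^{+})/\tau\right) + \sum_{k}\exp\left(\mathrm{sim}(z, z_{k}^{-})/\tau\right)}, \]

where z is the learned embedding of the current observation, z^{+} comes from the human demonstration, z_{k}^{-} are negatives, sim is a similarity measure (e.g., cosine), and \tau is a temperature.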


[Image: VisualizationError.png]

Wenhao Yang, Yunbo Zhang, Journal of Computing and Information Science in Engineering, Mar 2024, 24(3): 031003,  in the special issue on Extended Reality in Design and Manufacturing. [https://doi.org/10.1115/1.4063350]

Augmented reality (AR) enhances the user's perception of the real environment by superimposing computer-generated virtual images. These virtual images provide additional visual information that complements the real-world view. AR systems are rapidly gaining popularity in various manufacturing fields such as training, maintenance, assembly, and robot programming. In some AR applications, it is crucial for the invisible virtual environment to be precisely aligned with the physical environment so that human users can accurately perceive the virtual augmentation in conjunction with their real surroundings. The process of achieving this accurate alignment is known as calibration. In some robotics applications using AR, we observed instances of misalignment in the visual representation within the designated workspace. This misalignment can potentially impact the accuracy of the robot's operations during the task. Building on previous research on AR-assisted robot programming systems, this work investigates the sources of misalignment errors and presents a simple and efficient calibration procedure to reduce the misalignment in general video see-through AR systems. To accurately superimpose virtual information onto the real environment, it is necessary to identify the sources and propagation of errors. In this work, we outline the linear transformation and projection of each point from the virtual world space to the virtual screen coordinates. An offline calibration method is introduced to determine the offset matrix from the head-mounted display (HMD) to the camera, and experiments are conducted to validate the improvement achieved through the calibration process.
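
As a sketch of the transformation chain outlined above (notation ours), a virtual world-space point p_w reaches the virtual screen through the tracked head pose, the HMD-to-camera offset recovered by the offline calibration, and the camera projection, up to the perspective division:

\[ \mathbf{p}_{\mathrm{screen}} \simeq \mathbf{K}\;\mathbf{T}_{\mathrm{cam}\leftarrow\mathrm{HMD}}\;\mathbf{T}_{\mathrm{HMD}\leftarrow\mathrm{world}}\;\mathbf{p}_{w}, \]

where K holds the camera intrinsics, T_{HMD←world} is the tracked head pose, and T_{cam←HMD} is the offset matrix estimated offline; an error in any factor propagates directly into on-screen misalignment.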


[Image: MR_Cybersecurity.png]

Wenhao Yang, Xiwen Dengxiong, Xueting Wang, Yidan Hu, Yunbo Zhang, Journal of Computing and Information Science in Engineering, Mar 2024, 24(3): 031004, in the special issue on Extended Reality in Design and Manufacturing. [https://doi.org/10.1115/1.4062658][Video@Youtube]

This paper aims to present a potential cybersecurity risk existing in mixed reality (MR)-based smart manufacturing applications, in which digital passwords are deciphered through a single RGB camera that captures the user's mid-air gestures. We first created a test bed, an MR-based smart factory management system consisting of mid-air gesture-based user interfaces (UIs) on a video see-through MR head-mounted display. To interact with the UIs and input information, the user's hand movements and gestures are tracked by the MR system. We set up the experiment as the estimation of passwords input by users through mid-air hand gestures on a virtual numeric keypad. To achieve this goal, we developed a lightweight machine-learning-based hand position tracking and gesture recognition method. This method takes either streaming video or recorded video clips (taken by a single RGB camera in front of the user) as input, where the videos record the users' hand movements and gestures but not the virtual UIs. Assuming the size, position, and layout of the keypad are known, the machine learning method estimates the password through hand gesture recognition and finger position detection. The evaluation results indicate the effectiveness of the proposed method, with high accuracies of 97.03%, 94.06%, and 83.83% for 2-digit, 4-digit, and 6-digit passwords, respectively, using real-time video streaming as input under the known-length condition. Under the unknown-length condition, the proposed method reaches 85.50%, 76.15%, and 77.89% accuracy for 2-digit, 4-digit, and 6-digit passwords, respectively.
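
As a minimal sketch of the final step of such an attack, assuming the keypad's size, position, and 3-by-4 layout are known and a fingertip position has already been estimated in the keypad plane, mapping the fingertip to a digit reduces to a grid lookup (names and coordinates below are illustrative, not the paper's implementation):

```python
# Hypothetical sketch: map estimated fingertip positions (normalized to the known
# keypad plane, origin at the top-left corner) to keys on a 3x4 numeric keypad.
KEYPAD = [["1", "2", "3"],
          ["4", "5", "6"],
          ["7", "8", "9"],
          ["*", "0", "#"]]

def fingertip_to_key(x, y, cols=3, rows=4):
    """Return the key under a normalized fingertip position (x, y) in [0, 1]^2."""
    col = min(int(x * cols), cols - 1)
    row = min(int(y * rows), rows - 1)
    return KEYPAD[row][col]

# Each detected "press" gesture contributes one digit to the password guess.
presses = [(0.18, 0.10), (0.52, 0.40), (0.85, 0.62), (0.50, 0.88)]
print("".join(fingertip_to_key(x, y) for x, y in presses))  # prints "1590"
```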


[Image: MultiFusion_DepthEnhance.png]

Chuhua Xian, Jun Zhang, Wenhao Yang, Yunbo Zhang,  Journal of Intelligent Manufacturing (2024). [https://doi.org/10.1007/s10845-023-02299-7]

The depth images obtained by consumer-level depth cameras generally have low resolution and missing regions due to the limitations of the camera hardware and the method of depth image generation. Although many studies have addressed RGB image completion and super-resolution, a key issue with depth images is that jagged boundaries and a significant loss of geometric information still appear. To address these issues, we introduce a multi-scale progressive fusion network for depth image completion and super-resolution, which has a progressive structure for integrating hierarchical features from different domains. Given a depth image and its corresponding RGB image, we employ two separate branches to learn multi-scale features. The extracted features are then fused into the different-level features of these two branches in a step-by-step manner to reconstruct the final depth image. A multi-dimensional loss is also designed to preserve distinct boundaries and geometric features. Extensive depth completion and super-resolution experiments show that our proposed method outperforms state-of-the-art methods both qualitatively and quantitatively. The proposed method is also applied to two human–robot interaction applications: a remote-controlled robot based on an unmanned ground vehicle (UGV), and AR-based toolpath planning with automatic toolpath extraction. All these experimental results indicate the effectiveness and potential benefits of the proposed method.
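
The following is a minimal PyTorch-style sketch of the two-branch, step-by-step fusion idea described above (a depth branch plus an RGB guidance branch); the layer sizes and the single fusion stage are illustrative assumptions, not the paper's architecture:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoBranchFusion(nn.Module):
    """Toy two-branch network: RGB features guide depth completion and upsampling."""
    def __init__(self, feat=32):
        super().__init__()
        self.depth_enc = nn.Sequential(nn.Conv2d(1, feat, 3, padding=1), nn.ReLU(),
                                       nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU())
        self.rgb_enc = nn.Sequential(nn.Conv2d(3, feat, 3, padding=1), nn.ReLU(),
                                     nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU())
        # Fuse the two branches and predict a residual correction to the depth.
        self.fuse = nn.Sequential(nn.Conv2d(2 * feat, feat, 3, padding=1), nn.ReLU(),
                                  nn.Conv2d(feat, 1, 3, padding=1))

    def forward(self, depth_lr, rgb):
        # Upsample the low-resolution/incomplete depth to the RGB (guidance) resolution.
        depth_up = F.interpolate(depth_lr, size=rgb.shape[-2:], mode="bilinear",
                                 align_corners=False)
        fused = torch.cat([self.depth_enc(depth_up), self.rgb_enc(rgb)], dim=1)
        return depth_up + self.fuse(fused)   # residual refinement of the upsampled depth

depth_lr = torch.rand(1, 1, 60, 80)    # low-resolution depth with missing regions
rgb = torch.rand(1, 3, 240, 320)       # corresponding high-resolution RGB image
print(TwoBranchFusion()(depth_lr, rgb).shape)   # torch.Size([1, 1, 240, 320])
```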


[Image: SystemWorkFlow.png]

Wenhao Yang, Qinqin Xiao, Yunbo Zhang, Journal of Intelligent Manufacturing (2023). [https://doi.org/10.1007/s10845-023-02096-2][Video@Youtube]

In the era of Industry 4.0, manufacturing enterprises are actively adopting collaborative robots (cobots) in their production. Current online and offline robot programming methods are difficult to use and require extensive experience or skills. At the same time, the manufacturing industry is experiencing a labor shortage. An essential question, therefore, is: how can a new robot programming method help novice users complete complex tasks effectively, efficiently, and intuitively? To answer this question, we propose HAR2bot, a novel human-centered augmented reality programming interface with awareness of cognitive load. Using NASA's system design theory and cognitive load theory, a set of guidelines for designing an AR-based human-robot interaction system is obtained through a human-centered design process. Based on these guidelines, we designed and implemented a human-in-the-loop workflow with features for cognitive load management. The effectiveness and efficiency of HAR2bot are verified in two complex tasks in comparison with existing online programming methods. We also evaluated HAR2bot quantitatively and qualitatively through a user study with 16 participants. According to the user study, compared with existing methods, HAR2bot achieves higher efficiency, a lower overall cognitive load, a lower cognitive load of each type, and higher safety.


[Image: ARVisErrorComplete.png]

Wenhao Yang, Yunbo Zhang, In Proceedings of the ASME 2022 17th International Manufacturing Science and Engineering Conference, MSEC 2022, Accepted. [pdf]

Under the fourth industrial revolution (Industry 4.0), Augmented Reality (AR) provides new affordances for a variety of applications, such as AR-based human-robot interaction, virtual assembly assistance, and workforce virtual training. See-through head-mounted displays (STHMDs), based on either optical see-through or video see-through technologies, are the primary AR devices for augmenting the visual perception of the real environment with computer-generated content through a hands-free headset. Specifically, video see-through STHMDs superimpose virtual content on digital images of the real environment and output the combined result to users, while optical see-through STHMDs display virtual content through optics-based near-eye displays while keeping the user's natural view of the real scene. For both types of AR devices, the accuracy of visualization is essential. For example, in AR-based human-robot interaction, inaccurate rendering of 3D virtual objects with respect to the real environment can lead to mistaken user operations and, therefore, invalid tool path planning results. Despite many works on system calibration and error reduction for optical see-through STHMDs, few efforts have been made to characterize the nature and sources of these errors in video see-through STHMDs. In this paper, taking consumer-available video see-through STHMDs as an example, we identify the error sources of registration and build a mathematical model of the display process to describe the error propagation in stereo video see-through systems. Then, based on the mathematical model of the system, the sensitivity of the final registration error to each error source is analyzed. Finally, possible error-correction solutions for general video see-through STHMDs are suggested and summarized.
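
To make the sensitivity analysis concrete (our notation, a sketch of the general idea rather than the paper's exact model): writing the on-screen position of a virtual point as a function x = f(θ_1, …, θ_n) of the registration parameters (tracker pose, display/camera offsets, camera intrinsics), a small error Δθ_i in any parameter propagates, to first order, as

\[ \Delta\mathbf{x} \;\approx\; \sum_{i}\frac{\partial \mathbf{x}}{\partial \theta_i}\,\Delta\theta_i, \]

so the sensitivity of the final registration error to each error source is governed by the corresponding partial derivative evaluated at the nominal calibration.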


[Image: ErrorManagement.png]

Matt Ryan, Yiwen Wang, Qinqin Xiao, Rui Liu, Yunbo Zhang, Error Management-based Virtual Reality Training for CNC Milling Set-up, In Proceedings of the ASME 2022 17th International Manufacturing Science and Engineering Conference, MSEC 2022, Accepted. [pdf] [Video@Youtube]

In order to address the demand for skilled machinists and the limitations of current training programs, we introduce an immersive Virtual Reality (VR) CNC machining training environment for CNC machine setup processes with a novel error-management-based training curriculum. Current machinist training programs take several years, require active mentorship from a skilled individual, and are very costly due to the materials and tools required. Mistakes and errors made during the setup process can create safety risks, waste material, and break equipment, requiring additional time to reset. Existing VR CNC milling training environments fail to address the mistakes that can occur during the setup process. To address these operational challenges, a novel error-management-based VR training is proposed that allows trainees to learn the setup procedure, learn common errors and mistakes, and practice identifying errors in addition to practicing the setup activities. The training first introduces students to the setup procedure, followed by demonstrations of error cases and of identification and management strategies, culminating in practice opportunities. Trainees witness a spatial demonstration of the procedure, guided by auditory and text instructions. Users can actively explore the spatial teaching environment while controlling a virtual CNC milling machine. A preliminary user training test is performed comparing the VR method to a video training and a video training with an error-management curriculum.


[Image: auditoryCNC.png]

Krzysztof Jarosz, Yunbo Zhang, Rui Liu, In Proceedings of the ASME 2022 17th International Manufacturing Science and Engineering Conference, MSEC 2022, Accepted. [pdf]

In the era of Industry 4.0, machining sound has been extensively adopted in tool condition monitoring systems, virtual machining environments, and remote machining solutions. However, only limited attention has been paid to understanding how experienced machinists detect tool wear and improper cutting conditions based on their hearing in the real machining environment. This paper aims to experimentally investigate and analyze the auditory perception of CNC operators during the cutting process and their capability of detecting unfavorable cutting conditions and faults using their sense of hearing and expertise. The sound in the machining environment was analyzed in terms of sound pressure level (SPL). Optimal positions for sound sample acquisition were determined, and audio data was recorded for future analysis. Experimental cutting tests with simulated process faults were conducted, where machinists with varying degrees of experience observed the process, listened to the machining sound, and tried to determine whether cutting conditions were normal or whether faults occurred. The primary research goal was to analyze how well operators can monitor the process using their various senses and to investigate the role of sound and the auditory perception of trained professionals in cutting process supervision and monitoring. SPL measurements have shown that the sound pressure varies substantially in the machining environment, which is expected to affect the quality and volume of recorded machining sound depending on microphone positioning. Cutting tests have shown that machinists use various senses to determine faults in the process, relying most significantly on auditory stimuli, with other factors, such as vibrations or visual examination of the workpiece, having a secondary effect in the assessment of cutting process conditions and outcomes.
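
For reference, the sound pressure level used in these measurements is the standard logarithmic measure of pressure relative to the nominal hearing threshold:

\[ \mathrm{SPL} = 20\,\log_{10}\!\left(\frac{p_{\mathrm{rms}}}{p_{\mathrm{ref}}}\right)\ \mathrm{dB}, \qquad p_{\mathrm{ref}} = 20\ \mu\mathrm{Pa}, \]

so a doubling of sound pressure corresponds to roughly a 6 dB increase, which is why microphone positioning in the machining environment noticeably changes the level of the recorded machining sound.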


[Image: immersiveReview_CG.jpg]

Rui Liu, Chao Peng, Yunbo Zhang, Hannah Husarek, and Qi Yu, Computers & Graphics, Elsevier, 2021. [PDF][https://doi.org/10.1016/j.cag.2021.07.023]

With the expanded digitalization of the manufacturing and product development process, research into the use of immersive technology in smart manufacturing has increased. Immersive technology is theorized to increase the productivity of all steps in the product development process, from the start of concept generation to assembling the final product. Many aspects of immersive technology are considered, including techniques for CAD model conversion and rendering, types of VR/AR displays, interaction modalities, and its integration with different areas of product development. The purpose of this survey paper is to investigate the potential applications of immersive technology and the advantages and potential drawbacks that should be considered when integrating the technology into the workplace. The potential applications are broad, and the possibilities continue to expand as the technology becomes more advanced and more affordable for commercial businesses to implement on a large scale. The technology is currently being utilized in concept generation and in the design and engineering of new products. Additionally, immersive technology has great potential to increase the productivity of assembly line workers and of factory layout and functionality, and it could provide a more hands-on form of training, which leads to the conclusion that immersive technology is a key step toward future smart product development strategies for employers.

[Image: ARobot.png]

Wenhao Yang, Qinqin Xiao, Yunbo Zhang, In Proceedings of the ASME 2021 16th International Manufacturing Science and Engineering Conference, MSEC 2021, June 21-25, 2021, Virtual, Online. [pdf] [https://doi.org/10.1115/MSEC2021-62468] [Video@Youtube]

To address the difficulty of complex robot programming tasks, we propose an Augmented Reality (AR)-based human-robot interface for planning a collision-free path in a complex environment. Current robot programming methods usually require a high level of experience in robot programming (online programming), time-consuming 3D modeling of the working environment for collision detection (offline programming), and tedious and inefficient re-planning to adapt to environment or task changes (both online and offline programming). To address these problems, an end-to-end AR human-robot interface is proposed, which provides a new affordance to users by enabling them to plan the path in the AR environment. A set of user-interactive tools allows users to define and edit waypoints as high-level guidance and as direct inputs for the toolpath planning package, the Kinematics and Dynamics Library (KDL). With fast sensing of the workspace and accurate rendering, an in-situ simulation module is utilized for collision checking and verification through the user's perception. Users repeat the process of (1) waypoint definition and editing and (2) collision checking and path feasibility verification until a satisfactory path is obtained. A preliminary test is conducted in a use case with complex obstacles to verify the effectiveness and efficiency of the proposed interface.
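
As a minimal sketch of the verify-and-edit loop described above, assume the AR-defined waypoints are densely interpolated and each sample is checked against a coarse obstacle model (spheres here); the actual system delegates path generation to KDL and relies on in-situ AR simulation for verification, so the code below is purely illustrative:

```python
import numpy as np

def interpolate(waypoints, samples_per_segment=20):
    """Linearly interpolate a dense path through the user-defined waypoints."""
    pts = []
    for a, b in zip(waypoints[:-1], waypoints[1:]):
        for t in np.linspace(0.0, 1.0, samples_per_segment, endpoint=False):
            pts.append((1 - t) * np.asarray(a) + t * np.asarray(b))
    pts.append(np.asarray(waypoints[-1], dtype=float))
    return np.array(pts)

def collides(path, obstacles, clearance=0.05):
    """Return True if any path sample comes within `clearance` of a spherical obstacle."""
    for p in path:
        for center, radius in obstacles:
            if np.linalg.norm(p - np.asarray(center)) < radius + clearance:
                return True
    return False

waypoints = [(0.0, 0.0, 0.2), (0.3, 0.1, 0.4), (0.6, -0.1, 0.3)]   # meters, user-edited in AR
obstacles = [((0.3, 0.0, 0.3), 0.08)]                              # (center, radius)
path = interpolate(waypoints)
print("edit waypoints again" if collides(path, obstacles) else "path looks feasible")
```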


[Image: FabHandWear_UbiComp2021.png]

Luis Paredes, Sai Swarup Reddy, Subramanian Chidambaram, Devashri Vagholkar, Yunbo Zhang, Bedrich Benes, Karthik Ramani
In Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, Presented at  UbiComp 2021. [pdf] [Video@Youtube]

Current hand wearables have limited customizability; they fit loosely on an individual's hand and lack comfort. The main barrier to customizing hand wearables is the geometric complexity and size variation of hands. Moreover, users may be looking for different functions: some may only want to detect the hand's motion or orientation; others may be interested in tracking their vital signs. Current wearables usually bundle multiple functions and are designed for a universal user, with no or limited customization. There are no specialized tools that facilitate the creation of customized hand wearables for varying hand sizes while providing different functionalities. We envision an emerging generation of customizable hand wearables that supports hand differences and promotes hand exploration with additional functionality. We introduce FabHandWear, a novel system that allows end-to-end design and fabrication of customized, functional, self-contained hand wearables. FabHandWear is designed to work with off-the-shelf electronics, with the ability to connect them automatically and generate a printable pattern for fabrication. We validate our system with illustrative applications, a durability test, and an empirical user evaluation. Overall, FabHandWear offers the freedom to create customized, functional, and manufacturable hand wearables.


[Image: BioPrintReview.png]

Srikanthan Ramesh, Ola LA Harrysson, Prahalad K. Rao, Ali Tamayol, Denis Cormier, Yunbo Zhang, Iris V. Rivero, Bioprinting, 2020, 21 (e00116). [pdf] [https://doi.org/10.1016/j.bprint.2020.e00116]


Extrusion-based bioprinting involves extrusion of bioinks through nozzles to create three-dimensional structures. The bioink contains living organisms with biological relevance for emerging applications such as tissue scaffolds, organs-on-a-chip, regenerative medicine, and drug delivery systems. Bioinks, which are mixtures of biomaterials and living cells, influence the quality of printed constructs through their physical, mechanical, biological, and rheological behavior. Printability is a property of a bioink used to describe its ability to create well-defined structures. Amongst all contributing factors, rheological properties and printing parameters are primary factors that influence the quality of bioprinted constructs. With the increasing popularity of extrusion bioprinting, different approaches for controlling these properties and parameters have emerged. This review highlights the role of rheology and process parameters in extrusion bioprinting and discusses qualitative and quantitative methods proposed to measure and define the printability of bioinks. Finally, an overview of key challenges and future trends in extrusion bioprinting is provided.
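
As a concrete example of the rheological behavior discussed above, bioinks are commonly modeled as shear-thinning power-law fluids, whose apparent viscosity drops as the shear rate rises; this is what lets a bioink flow through the nozzle yet hold its shape after deposition:

\[ \eta(\dot{\gamma}) = K\,\dot{\gamma}^{\,n-1}, \]

where K is the consistency index and a flow index n < 1 indicates shear-thinning behavior (yield-stress extensions such as the Herschel–Bulkley model are also common).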

[Image: HandRecon_CGI20.png]

Hao Peng, Chuhua Xian, Yunbo Zhang, The Visual Computer, selected from Computer Graphics International (CGI) 2020, pp.1-13, July 2020. [pdf] [https://doi.org/10.1007/s00371-020-01908-3]


Most existing methods for RGB-image-based 3D hand analysis focus on estimating hand keypoints or poses, which cannot capture the geometric details of the 3D hand shape. In this work, we propose a novel method to reconstruct a 3D hand mesh from a single monocular RGB image. Different from current parameter-based or pose-based methods, our method directly estimates the 3D hand mesh using a Graph Convolutional Neural Network (GCN) [1]. Our network consists of two modules: a hand localization and mask generation module, and a 3D hand mesh reconstruction module. The first module, a VGG16-based network, localizes the hand region in the input image and generates a binary mask of the hand. The second module takes the high-order features from the first module and uses a GCN-based network to estimate the coordinates of each vertex of the hand mesh, reconstructing the 3D hand shape. To achieve better accuracy, a novel loss based on the differential properties of the discrete mesh is proposed. We also use professional software to create a large synthetic dataset that contains both ground-truth 3D hand meshes and poses for training. To handle real-world data, we use the CycleGAN network to transform the data domain of real-world images to that of our synthetic dataset. We demonstrate that our method produces accurate 3D hand meshes and achieves efficient performance for real-time applications.
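
As a minimal sketch of the graph convolution used to regress mesh vertex coordinates (a generic GCN layer in PyTorch, not the paper's exact architecture), each layer aggregates features over the fixed hand-mesh adjacency and applies a learned linear map:

```python
import torch
import torch.nn as nn

class GraphConv(nn.Module):
    """Generic GCN layer: H' = act(D^-1 (A + I) H W) over a fixed mesh adjacency."""
    def __init__(self, adj, in_dim, out_dim, activate=True):
        super().__init__()
        a_hat = adj + torch.eye(adj.shape[0])               # add self-loops
        self.register_buffer("norm_adj", a_hat / a_hat.sum(dim=1, keepdim=True))
        self.linear = nn.Linear(in_dim, out_dim)
        self.activate = activate

    def forward(self, h):
        h = self.linear(self.norm_adj @ h)                  # neighborhood average, then linear map
        return torch.relu(h) if self.activate else h

# Toy example: 4 mesh vertices with 64-dim image features each -> 3D vertex coordinates.
adj = torch.tensor([[0, 1, 1, 0],
                    [1, 0, 1, 1],
                    [1, 1, 0, 1],
                    [0, 1, 1, 0]], dtype=torch.float32)
layers = nn.Sequential(GraphConv(adj, 64, 32), GraphConv(adj, 32, 3, activate=False))
print(layers(torch.rand(4, 64)).shape)                      # torch.Size([4, 3])
```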


[Image: SmartMfgReview_MDPI_Technologies.png]

Shane Terry, Hao Lu, Ismail Fidan, Yunbo Zhang, Khalid Tantawi, Terry Guo, Bahram Asiabanpour, MDPI Technologies, 2020, 8(2), 31. [pdf][https://doi.org/10.3390/technologies8020031]


Today, the current trend in manufacturing is toward the adaptation and implementation of Smart Manufacturing, a new initiative to turn traditional factories into profitable innovation facilities. However, the concept and technologies are still in a state of infancy, since many manufacturers around the world are not fully knowledgeable about the benefits of Smart Manufacturing compared to their current practices. This article reviews several aspects of Smart Manufacturing and introduces its advantages in terms of energy savings and production efficiency. The article also points out areas that need further research so that Smart Manufacturing can be better shaped.


[Image: iMold.png]

Jonathan Ting, Yunbo Zhang, Sang Ho Yoon, James D. Holbery, Siyuan Ma, In Proceedings of CHI EA '20: Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems. [pdf][https://doi.org/10.1145/3334480.3382804]


In recent years, the in-mold electronics (IME) technique was introduced as a combination of printing electrically functional materials and vacuum plastic forming. IME has gained significant attention across various industries since it enables various electrical functionalities on a wide range of 3D geometries. Although IME shows great application potential, hardships still exist in the design-for-manufacturing stage. For example, printed 2D structures experience mechanical bending and stretching during vacuum forming. This makes it challenging for designers to ensure precise circuit-to-3D-mold registration or to prevent over-deformation of the circuit and attached components. To this end, we propose a software toolkit that provides real-time 2D-to-3D mapping and guided structural and electrical circuit design with an interactive user interface. We present a novel software-guided IME process that leads to fully functional 3D electronic structures with printed conductive traces and assembled surface-mount components.


[Image: surfaceTO_overview.png]

Yunbo Zhang, Tsz-Ho Kwok, Computer-Aided Design, Special Issue on Advances in Generative Design, 111, pp.113-122, 2019. [pdf][https://doi.org/10.1016/j.cad.2019.02.005]


This paper applies the topology optimization (TO) technique to the design of custom compression casts/braces on two-manifold mesh surfaces. Conventional braces or casts, usually made of plaster or fiberglass, have the drawbacks of being heavy and unventilated to wear. To reduce the weight and improve the performance of a custom brace, TO methods can be adopted to optimize the geometry of the brace in three-dimensional (3D) space, but they are computationally expensive. Based on our observation that the brace has a much smaller thickness compared with its other dimensions and that the applied loads are normal forces, this paper presents a novel TO method based on thin plate elements on two-dimensional manifold (2-manifold) surfaces instead of 3D solid elements. Our pipeline starts from a 3D scan of a human body represented by a 2-manifold mesh surface, which is the base design domain for the custom brace. Similar to the concept of isoparametric representation, the 3D design domain is mapped onto a two-dimensional (2D) parametric domain. A Finite Element Analysis (FEA) with bending moments is performed on the parameterized 2D design domain, and the Solid Isotropic Material with Penalization (SIMP) method is applied to optimize the pattern in the parametric domain. After the optimized cast/brace is obtained on the 2-manifold mesh surface, a solid model is generated by our design interface and then sent to a 3D printer for fabrication. Compared with optimization using solid elements, our method is more efficient and controllable due to the high efficiency of solving the FEA in the 2D domain.
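
For readers unfamiliar with SIMP, the method interpolates each element's stiffness with a power law of its density variable and minimizes compliance under a volume budget, which penalizes intermediate densities; a standard statement is:

\[ E(\rho_e) = E_{\min} + \rho_e^{\,p}\,(E_0 - E_{\min}), \qquad \min_{\boldsymbol{\rho}}\ \mathbf{u}^{\mathsf{T}}\mathbf{K}(\boldsymbol{\rho})\,\mathbf{u} \quad \text{s.t.} \quad \mathbf{K}(\boldsymbol{\rho})\,\mathbf{u} = \mathbf{f}, \quad \sum_e \rho_e v_e \le V^{*}, \quad 0 \le \rho_e \le 1, \]

where the penalization exponent p (typically 3) pushes element densities toward 0 or 1; here the elements would be the thin plate elements on the parameterized 2D domain rather than 3D solid elements.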


[Image: ShapeStructuralizer_Teaser.png]

Subramanian Chidambaram*, Yunbo Zhang*, Venkatraghavan Sundararajan, Niklas Elmqvist, Karthik Ramani, In Proceedings of ACM SIGCHI Conference on Human Factors in Computing Systems (CHI 2019), May 4–9, 2019, Glasgow, Scotland, UK. [pdf][https://doi.org/10.1145/3290605.3300893][Video@Youtube] (* Equal Contribution Authors, acceptance rate 23.8%)


Current Computer-Aided Design (CAD) tools lack proper support for guiding novice users towards designs ready for fabrication. We propose Shape Structuralizer (SS), an interactive design support system that repurposes surface models into structural constructions using rods and custom 3D-printed joints. Shape Structuralizer embeds a recommendation system that computationally supports the user during design ideation by providing design suggestions on local refinements of the design. This strategy enables novice users to choose designs that satisfy both stress constraints and their personal design intent. The interactive guidance enables users to repurpose existing surface mesh models, analyze them in-situ for stress and displacement constraints, add movable joints to increase functionality, and attach a customized appearance. This also empowers novices to fabricate even complex constructs while ensuring structural soundness. We validate the Shape Structuralizer tool with a qualitative user study, in which we observed that even novice users were able to generate a large number of structurally safe designs for fabrication.


[Image: ARFab_NAMRC2018.png]

Yunbo Zhang, Tsz-Ho Kwok, In Proceedings of the 46th SME North American Manufacturing Research Conference, NAMRC 46, College Station, Texas, USA, June 18-22, 2018. [pdf][https://doi.org/10.1016/j.promfg.2018.07.140]


In this paper, we apply Augmented Reality (AR) technologies to develop a design and interaction interface for Smart Manufacturing (SmartMFG). This work is motivated by the lack of appropriate human-machine interaction (HMI) tools to support interaction and customization in the SmartMFG environment. To address this research problem, we hypothesize that AR-based design interfaces that communicate with the Machine Control Unit (MCU) directly will increase the degree of interaction and the complexity of instructions performed in Manual Data Input (MDI) systems. To test this hypothesis, we developed a prototype system consisting of an AR tablet device as the input interface and an Ultimaker 3 printer as the machine tool. First, this AR-based system has sensing, design, and control capabilities to interact and communicate with the machine tool via Wi-Fi. Second, a set of sketch-based computational tools is developed for users to design shapes on existing objects easily and efficiently within the AR environment. Finally, the customized design is converted to machine code, which is also customized based on the machine tool and the registration of the virtual model with the existing object. We tested our system by designing two customized shapes onto an existing shape in the AR environment and generating the G-code to control the printer to fabricate them onto the physical object.


[Image: iSoft_UIST2017.png]

Sang Ho Yoon, Ke Huo, Yunbo Zhang, Guiming Chen, Luis Paredes, Subramanian Chidambaram, Karthik Ramani, In Proceedings of the 30th Annual ACM Symposium on User Interface Software & Technology (UIST 2017), Quebec City, Canada, Oct 22-25, 2017. [pdf][https://doi.org/10.1145/3126594.3126654][Highlighted by ACM Interactions Magazine Link][Video@Youtube] (Acceptance rate 22.5%)


Abstract: We present iSoft, a single-volume soft sensor capable of sensing real-time continuous contact and unidirectional stretching. We propose a low-cost and easy way to fabricate such piezoresistive elastomer-based soft sensors for instant interactions. We employ an electrical impedance tomography (EIT) technique to estimate changes in the resistance distribution on the sensor caused by fingertip contact. To compensate for the rebound elasticity of the elastomer and achieve real-time continuous contact sensing, we apply a dynamic baseline update for EIT. The baseline updates are triggered by fingertip contact and movement detection. Further, we support unidirectional stretching sensing using a model-based approach that works separately from continuous contact sensing. We also provide a software toolkit for users to design and deploy personalized interfaces with customized sensors. Through a series of experiments and evaluations, we validate the performance of contact and stretching sensing. Through example applications, we show the variety of examples enabled by iSoft.




[Image: wire-fig-12-301.png]
Min Liu*, Yunbo Zhang*, Jing Bai, Yuanzhi Cao, Jeffrey Alperovich, Karthik Ramani, In Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI 2017), Denver, CO, May 6-11, 2017. [pdf][https://doi.org/10.1145/3025453.3025619][Video@Youtube] (* Equal Contribution Authors, acceptance rate 25%)

Abstract: We propose WireFab, a rapid modeling and prototyping system that uses bent metal wires as the structural framework. WireFab approximates both the skeletal articulation and the skin appearance of the corresponding virtual skin meshes, and it allows users to personalize the designs by (1) specifying joint positions and part segmentations, (2) defining joint types and motion ranges to build a wire-based skeletal model, and (3) abstracting the segmented meshes into mixed-dimensional appearance patterns or attachments. WireFab is designed to allow the user to choose how best to preserve the fidelity of the topological structure and articulation motion while selectively maintaining the fidelity of the geometric appearance. Compared to 3D-printing-based high-fidelity fabrication systems, WireFab increases prototyping speed by ignoring unnecessary geometric details while preserving structural integrity and articulation motion. In addition, other rapid or low-fidelity fabrication systems produce only static models, whereas WireFab produces posable, articulated models as the user desires.

 

[Image: figt-rex.png]

Yunbo Zhang, Tsz-Ho Kwok, Rapid Prototyping Journal, 23(6), 1136-1145, 2017. [pdf][https://doi.org/10.1108/RPJ-08-2016-0129][Video@Youtube]


Abstract: Additive Manufacturing (AM) enables the fabrication of three-dimensional (3D) objects with complex shapes without additional tools and refixturing. However, it is difficult for users to design custom products with traditional computer-aided design tools. In this paper, we present a design system to help users design custom 3D-printable products based on reference freeform shapes. The user can define and edit styling curves on the reference model using our interactive geometric operations for styling curves. Combined with the reference models, these curves can be converted into 3D-printable models through our fabrication interface. We tested our system with four design applications: a hollow patterned bicycle helmet, a T-rex with a skin-frame structure, a face mask with Voronoi patterns, and an AM-specific night dress with hollow patterns. An executable prototype of the presented design framework used in the customization process is publicly available.


 

[Image: TRing_UIST2016.png]

Sang Ho Yoon, Yunbo Zhang, Ke Huo, Karthik Ramani, In Proceedings of the 29th Annual ACM Symposium on User Interface Software & Technology (UIST'16), Tokyo, Japan, 2016. (Acceptance rate 20.6%) [pdf][https://doi.org/10.1145/2984511.2984529][Video@Youtube]


Abstract: We present TRing, a finger-worn input device which provides instant and customizable interactions. TRing offers a novel method for making plain objects interactive using an embedded magnet and a finger-worn device. With a particle filter integrated magnetic sensing technique, we compute the fingertip’s position relative to the embedded magnet. We also offer a magnet placement algorithm that guides the magnet installation location based upon the user’s interface customization. By simply inserting or attaching a small magnet, we bring interactivity to both fabricated and existing objects. In our evaluations, TRing shows an average tracking error of 8.6 mm in 3D space and a 2D targeting error of 4.96 mm, which are sufficient for implementing average-sized conventional controls such as buttons and sliders. A user study validates the input performance with TRing on a targeting task (92% accuracy within 45 mm distance) and a cursor control task (91% accuracy for a 10 mm target). Furthermore, we show examples that highlight the interaction capability of our approach.
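
As a minimal sketch of particle-filter-based position tracking under a magnetic measurement model (the simplified dipole model and all parameters below are generic assumptions, not TRing's calibrated model), each particle hypothesizes a fingertip position and is re-weighted by how well its predicted field magnitude matches the magnetometer reading:

```python
import numpy as np

def field_magnitude(pos, moment=1.0):
    """Simplified dipole field magnitude at offset `pos` from the magnet (angular terms dropped)."""
    r = np.linalg.norm(pos, axis=-1) + 1e-9
    return moment / r**3                       # |B| of a dipole falls off as 1/r^3

def particle_filter_step(particles, weights, measurement, motion_noise=0.02, meas_noise=0.05):
    # 1) Predict: diffuse particles with a simple random-walk motion model.
    particles = particles + np.random.normal(0.0, motion_noise, particles.shape)
    # 2) Update: re-weight particles by the likelihood of the observed field magnitude.
    weights = weights * np.exp(-0.5 * ((field_magnitude(particles) - measurement) / meas_noise) ** 2)
    weights /= weights.sum()
    # 3) Resample when the effective number of particles gets low.
    if 1.0 / np.sum(weights ** 2) < 0.5 * len(weights):
        idx = np.random.choice(len(particles), len(particles), p=weights)
        particles, weights = particles[idx], np.full(len(particles), 1.0 / len(particles))
    return particles, weights

particles = np.random.uniform(-5.0, 5.0, (500, 3))        # fingertip hypotheses around the magnet (cm)
weights = np.full(500, 1.0 / 500)
measurement = field_magnitude(np.array([2.0, 0.0, 1.0]))  # reading for a "true" fingertip position
particles, weights = particle_filter_step(particles, weights, measurement)
print(np.average(particles, axis=0, weights=weights))     # weighted position estimate
```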


 
[Image: realfusion.png]
Cecil Piya, Vinayak, Yunbo Zhang, Karthik Ramani, In Proceedings of Graphics Interface 2016, Victoria, BC, Canada. [pdf][https://doi.org/10.20380/GI2016.11][Video@Youtube]

Abstract: We present RealFusion, an interactive workflow that supports early-stage design ideation in a digital 3D medium. RealFusion is inspired by the practice of found-object art, wherein new representations are created by composing existing objects. The key motivation behind our approach is the direct creation of 3D artifacts during design ideation, in contrast to the conventional practice of 2D sketching. RealFusion comprises three creative states where users can (a) repurpose physical objects as modeling components, (b) modify the components to explore different forms, and (c) compose them into a meaningful 3D model. We demonstrate RealFusion using a simple interface that comprises a depth sensor and a smartphone. To achieve direct and efficient manipulation of modeling elements, we also utilize mid-air interactions with the smartphone. We conduct a user study with novice designers to evaluate the creative outcomes that can be achieved using RealFusion.

 

[Image: cardboardizer.png]

Yunbo Zhang, Wei Gao, Luis Paredes, Karthik Ramani, In Proceedings of the ACM 2016 CHI Conference on Human Factors in Computing Systems, May 7-12, San Jose, CA, USA, pages 897-907, 2016. (Acceptance rate 23.4%) [pdf][https://doi.org/10.1145/2858036.2858362] [Video@Youtube][Media]


Abstract: Computer-aided design of flat patterns allows designers to prototype foldable 3D objects made of heterogeneous sheets of material. We found that origami designs are often characterized by pre-synthesized patterns and automated algorithms. Furthermore, augmenting a desired model with articulated features requires time-consuming synthesis of interconnected joints. This paper presents CardBoardiZer, a rapid cardboard-based prototyping platform that allows everyday sculptural 3D models to be easily customized, articulated, and folded. We develop a building platform that allows the designer to (1) import a desired 3D shape, (2) customize articulated partitions into planar or volumetric foldable patterns, and (3) define rotational movements between partitions. The system unfolds the model into 2D crease-cut-slot patterns ready for die-cutting and folding. In this paper, we developed interactive algorithms and validated the usability of CardBoardiZer using various 3D models. Furthermore, comparisons between CardBoardiZer and Autodesk® 123D Make demonstrated significantly shorter time-to-prototype and ease of fabrication.


 

[Image: optimalfitting_ijamt.png]

Yunbo Zhang, Charlie C.L. Wang, Karthik Ramani, International Journal of Advanced Manufacturing Technology, pp 1-15, 2016. [pdf][doi:10.1007/s00170-016-8669-2]


Abstract: A flattenable mesh surface is a polygonal mesh surface that can be unfolded into a planar patch without stretching any polygon. This paper presents a new method for computing a slightly stretched flattenable mesh surface M from a piecewise-linear surface patch P in 3D, where the shape approximation error between M and P is minimized and the strain of stretching on M is controlled. Prior approaches result in either a flattenable surface that can be quite different from the input shape or a (discrete) developable surface with a relatively simple shape. The techniques investigated in this paper overcome these difficulties. First, we introduce a new surface modeling method that conducts a sequence of nearly isometric deformations to morph a flattenable mesh surface into a new shape that better approximates the input surface. Second, in order to obtain better initial surfaces for fitting and to overcome topological obstacles, a shape perturbation scheme is investigated to obtain the optimal surface fitting result. Last, to improve the scalability of our optimal surface fitting algorithm, a coarse-to-fine fitting framework is exploited so that very dense flattenable mesh surfaces can be modeled and the boundaries of the input surfaces can be interpolated.
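
One way to make "slightly stretched" and "the strain of stretching is controlled" concrete (our notation, a sketch rather than the paper's exact formulation): each near-isometric deformation step pulls the flattenable mesh M toward the input patch P while bounding the relative change of every edge length by a strain tolerance ε,

\[ \min_{M}\ \sum_{v \in M} \mathrm{dist}^2(v,\,P) \quad \text{s.t.} \quad \left|\frac{\|e'\| - \|e\|}{\|e\|}\right| \le \varepsilon \quad \forall\, e \in M, \]

so ε = 0 corresponds to a strictly flattenable (developable-like) surface, while a small positive ε trades a little stretch for a better shape approximation.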


 

[Image: revomaker_uist.png]

Wei Gao*, Yunbo Zhang*, Diogo C. Nazzetta, Karthik Ramani, Raymond J. Cipra, In Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology (UIST'15), Nov 8-11, Charlotte, NC, USA, pages 437-446, 2015. (* Equal Contribution Authors, acceptance rate 23.6%) [pdf][https://doi.org/10.1145/2807442.2807476][Video@Youtube][Presentation][Media]


Abstract: In recent years, 3D printing has gained significant attention from the maker community, academia, and industry to support low-cost and iterative prototyping of designs. Current unidirectional extrusion systems require printing sacrificial material to support printed features such as overhangs. Furthermore, integrating functions such as sensing and actuation into these parts requires additional steps and processes to create “functional enclosures”, since design functionality cannot be easily embedded into prototype printing. All of these factors result in relatively high design iteration times. We present “RevoMaker”, a self-contained 3D printer that creates direct out-of-the-printer functional prototypes, using less build material and with substantially less reliance on support structures. By modifying a standard low-cost FDM printer with a revolving cuboidal platform and printing partitioned geometries around cuboidal facets, we achieve a multidirectional additive prototyping process to reduce the print and support material use. Our optimization framework considers various orientations and sizes for the cuboidal base. The mechanical, electronic, and sensory components are preassembled on the flattened laser-cut facets and enclosed inside the cuboid when closed. We demonstrate RevoMaker directly printing a variety of customized and fully-functional product prototypes, such as computer mice and toys, thus illustrating the new affordances of 3D printing for functional product design.


 

[Image: jmd15selffolding.png]

Tsz-Ho Kwok, Charlie C.L. Wang, Dongping Deng, Yunbo Zhang, Yong Chen, ASME Transactions - Journal of Mechanical Design, Special Issue on Design for Additive Manufacturing, 137(11), 111413, 2015. [pdf] [doi:10.1115/1.4031023] [Video@Youtube]


Abstract: A self-folding structure fabricated by additive manufacturing can be automatically folded into a demanded 3D shape by actuation mechanisms such as heating. However, 3D surfaces can be fabricated as self-folding structures only when they are flattenable, and most generally designed parts are not flattenable. To address this problem, we develop a shape optimization method to modify a non-flattenable surface into a flattenable one. The shape optimization framework is equipped with topological operators for adding interior/boundary cuts to further improve the flattenability. When inserting cuts, self-intersection is locally prevented on the flattened 2D pieces. The total length of the inserted cuts is also minimized to reduce artifacts on the final folded 3D shape.
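
The flattenability condition behind this optimization has a simple discrete statement: a triangle-mesh patch can be unfolded into the plane without stretching only if the angle deficit (discrete Gaussian curvature) vanishes at every interior vertex,

\[ K(v) = 2\pi - \sum_{j}\alpha_j(v) = 0 \quad \text{for every interior vertex } v, \]

where the α_j(v) are the triangle angles incident to v; the shape optimization drives these deficits toward zero, and inserting a cut turns interior vertices into boundary vertices, where the condition no longer needs to hold.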


 

[Image: cadamreview.png]

Wei Gao, Yunbo Zhang, Devarajan Ramanujan, Karthik Ramani, Yong Chen, Christopher B. Williams, Charlie C.L. Wang, Yung C. Shin, Song Zhang, Pablo D. Zavattieri, Computer-Aided Design, Special Issue on Geometric and Physical Modeling for Additive Manufacturing, 69, 65-89, 2015. (Top 1 most cited article from Computer-Aided Design since 2014; Top 1 most downloaded article from Computer-Aided Design in the last 90 days.) [doi:10.1016/j.cad.2015.04.001]

Abstract: Additive manufacturing (AM) is poised to bring about a revolution in the way products are designed, manufactured, and distributed to end users. This technology has gained significant academic as well as industry interest due to its ability to create complex geometries with customizable material properties. AM has also inspired the development of the maker movement by democratizing design and manufacturing. Due to the rapid proliferation of a wide variety of technologies associated with AM, there is a lack of a comprehensive set of design principles, manufacturing guidelines, and standardization of best practices. These challenges are compounded by the fact that advancements in multiple technologies (for example, materials processing and topology optimization) generate a “positive feedback loop” effect in advancing AM. In order to advance research interest and investment in AM technologies, some fundamental questions and trends about the dependencies existing in these avenues need highlighting. The goal of our review paper is to organize this body of knowledge surrounding AM and to present the current barriers, findings, and future trends to researchers. We also discuss fundamental attributes of AM processes, the evolution of the AM industry, and the affordances enabled by the emergence of AM in a variety of areas such as geometry processing, material design, and education. We conclude our paper by pointing out future directions, such as the “print-it-all” paradigm, that have the potential to re-imagine current research and spawn completely new avenues for exploration.


 
 

[Image: cvprrecon2014.png]

Wuyuan Xie, Yunbo Zhang, Charlie C.L. Wang, C.-K. Chung, 2014 IEEE Conference on Computer Vision and Pattern Recognition, Columbus, Ohio, June 24-27, 2014. (Oral presentation, acceptance rate 5.75%) [pdf] [doi:10.1109/CVPR.2014.282] [Video@TechTalks][Video@Youtube] [Video2@Youtube] [Project Page with Code]

Abstract: In this paper, we propose an efficient method to reconstruct surface-from-gradients (SfG). Our method is formulated under the framework of discrete geometry processing. Unlike existing SfG approaches, we transfer the continuous reconstruction problem into a discrete space and efficiently solve it via a sequence of least-squares optimization steps. Our discrete formulation brings three advantages: (1) the reconstruction preserves sharp features, (2) sparse/incomplete sets of gradients can be well handled, and (3) the domains of computation can have irregular boundaries. These strengths of our method help overcome the unwanted distortions during surface reconstruction. Our formulation is direct and easy to implement, and comparisons with the state of the art show the effectiveness of our method.
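
In general terms (not the paper's exact discrete formulation), surface-from-gradients seeks a surface whose gradients match a measured gradient field g in the least-squares sense,

\[ \min_{f}\ \sum_{i}\left\|\nabla f_i - \mathbf{g}_i\right\|^2, \]

and the discrete geometry processing view above replaces one monolithic solve with a sequence of least-squares steps on the mesh itself, which is what makes sharp features, sparse or incomplete gradients, and irregular boundaries easy to handle.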


 
[Image: constraineddeformation.png]

Shuo Jin, Yunbo Zhang, Charlie C.L. Wang, Computer Graphics Forum, Volume 33, Issue 2, pages 429–438, May 2014. [pdf][doi:10.1111/cgf.12331][Video@Youtube]

Abstract: Techniques have been developed to deform a mesh with multiple types of constraints. One limitation of prior methods is that the accuracy of the demanded metrics on the resultant model cannot be guaranteed. Adding metrics directly as hard constraints to an optimization functional often leads to unexpected distortion when the target metrics differ significantly from those of the input model. In this paper, we present an effective framework to deform mesh models by enforcing demanded metrics on length, area, and volume. To approach the target metrics stably and minimize distortion, an iterative scale-driven deformation is investigated, and a global optimization functional is exploited to balance the scaling effect at different parts of a model. Examples demonstrate that our approach provides a user-friendly tool for designers who are used to semantic input.

 

[Image: gmpcrossparavd.png]

Tsz-Ho Kwok, Yunbo Zhang, Charlie C.L. Wang, Graphical Models, Special Issue of the 2012 Geometric Modeling and Processing (GMP) conference, June 20-22, 2012, Mount Huang, Volume 74, Issue 4, July 2012, Pages 152-163. [pdf] [doi:10.1016/j.gmod.2012.03.012] [Project Page with Executable Program]

Abstract: In this paper, we propose a novel algorithm to construct common base domains for cross-parameterization constrained by anchor points. Based on the common base domains, a bijective mapping between the given models can be established. Experimental results show that the distortion of a cross-parameterization generated on our common base domains is much smaller than that of a mapping on domains constructed by prior methods. Different from prior algorithms that generate domains with a heuristic that gives higher priority to linking the shortest paths between anchor points, we compute the surface Voronoi diagram of the anchor points to find the initial connectivity for the base domains. The final common base domains can then be efficiently generated from this initial connectivity. The Voronoi diagram of the anchor points gives better cues than the heuristic of greedily connecting shortest paths, resulting in an efficient and reliable algorithm for constructing common base domains that lead to low distortion in constrained cross-parameterization.

 

 

[Image: CrossPara_TVCG2012.png]

Tsz-Ho Kwok, Yunbo Zhang, Charlie C.L. Wang, IEEE Transactions on Visualization and Computer Graphics, vol.18, no.10, pp.1678-1692, Oct. 2012. [pdf] [doi:10.1109/TVCG.2011.115] [Project Page with Executable Program]

Abstract: Given a set of corresponding user-specified anchor points on a pair of models having similar features and topologies, the cross-parameterization technique can establish a bijective mapping constrained by the anchor points. In this paper, we present an efficient algorithm to optimize the complexes and the shape of common base domains in cross-parameterization for reducing the distortion of the bijective mapping. The optimization is also constrained by the anchor points. We investigate a new signature, the Length-Preserved Base Domain (LPBD), for measuring the level of stretch between surface patches in cross-parameterization. This new signature balances the accuracy of measurement and the computational speed well. Based on LPBD, a set of metrics is studied and compared. The best ones are employed in our domain optimization algorithm, which consists of two major operators, boundary swapping and patch merging. Experimental results show that our optimization algorithm can reduce the distortion in cross-parameterization efficiently.


 

[Image: cadcooling.png]

Yu Wang, Kai-Ming Yu, Charlie C.L. Wang, Yunbo Zhang, Computer-Aided Design, vol.43, no.8, pp.1001-1010, August 2011. [pdf] [doi:10.1016/j.cad.2011.04.011]

Abstract: This paper presents an automatic method for designing a conformal cooling circuit, an essential component that directly affects the quality and timing of products fabricated by rapid tooling. To reduce cooling time and control the uniformity of temperature and volumetric shrinkage, industry expects cooling channels that are conformal to the shape of the products. We achieve the goal of automatically designing such a conformal cooling circuit in two steps. First, the relationship between conformal cooling and the geometric shape of the cooling circuit is formulated. Based on that, we investigate a geometric modeling algorithm to design a cooling circuit that approaches conformal cooling. Simulations have been conducted to verify the advantages of the cooling circuit generated by our algorithm.

 

 

[Image: cadcurvedpolygon.png]

Yuen-Shan Leung, Charlie C.L. Wang, Yunbo Zhang, Computer-Aided Design, vol.43, no.6, pp.573-585, June 2011. [pdf] [doi:10.1016/j.cad.2011.01.010]

 

Abstract: We present a method for refining n-sided polygons on a given piecewise-linear model using local computation, where the curved polygons generated by our method interpolate the positions and normals of vertices on the input model. First, we construct a Bezier curve for each silhouette edge. Second, we employ a new method to obtain C1-continuous cross-tangent functions constructed on these silhouette curves. An important feature of our method is that the cross-tangent functions are produced solely from their corresponding facet parameters. Gregory patches can therefore be locally constructed on every polygon while preserving G1 continuity between neighboring patches. To provide flexible shape control, several local schemes are provided to modify the cross-tangent functions so that sharp features can be retained on the resultant models. Because of the localized construction, our method can be easily accelerated by graphics hardware and run entirely on the Graphics Processing Unit (GPU).
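
For reference (a standard construction whose details may differ from the paper's), each silhouette edge with endpoints p_0, p_3 can be interpolated by a cubic Bezier curve

\[ \mathbf{c}(t) = \sum_{i=0}^{3}\binom{3}{i}(1-t)^{3-i}t^{i}\,\mathbf{p}_i, \qquad t \in [0,1], \]

whose interior control points p_1, p_2 are placed in the tangent planes of the endpoint vertices so that the curve interpolates both positions and normals; the cross-tangent functions and Gregory patches are then built on these boundary curves to achieve G1 continuity between neighboring patches.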


 

[Image: tasewirewarping.png]

Yunbo Zhang, Charlie C.L. Wang, IEEE Transactions on Automation Science and Engineering, vol.8, no.1, pp.205-215, January 2011. [pdf] [doi:10.1109/TASE.2010.2051665]

 

Abstract: Surface flattening has numerous applications in sheet manufacturing industries, such as the garment, shoe, toy, furniture, and ship industries. Motivated by the requirements of those industries, the WireWarping approach presented in [1] is exploited to generate 2D patterns with invariant lengths of feature and boundary curves. However, strict length constraints on all feature curves sometimes cause large distortions in the 2D patterns, especially for 3D surfaces that are highly non-developable. In this paper, we present a flexible and robust extension of WireWarping by introducing a new type of feature curve named the elastic feature, which brings flexibility to the shape control of the resultant 2D patterns. On these new feature curves, instead of strictly preserving the exact lengths, only the ranges of their lengths are controlled. To achieve this, a multi-loop shape control optimization framework is proposed to find the optimized 2D shape among all possible flattening results with different length variations on the elastic feature curves, while the lengths of the other feature curves are kept unchanged. In addition, we present a topology processing algorithm on the network of feature curves to eliminate cases that lead to numerical singularity. Experimental results show that WireWarping++ can successfully flatten surface patches into 2D patterns with more flexible shape control and more robust numerical performance.
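
Concretely, whereas the original WireWarping keeps the 2D length of every feature curve exactly equal to its 3D length ℓ, an elastic feature only constrains the flattened length to a range (our notation),

\[ (1-\epsilon)\,\ell \;\le\; \ell_{2\mathrm{D}} \;\le\; (1+\epsilon)\,\ell, \]

and the multi-loop optimization searches over these admissible length variations for the 2D pattern with the least distortion, while non-elastic feature curves remain strictly length-preserving.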


 

[Image: cgastylingtoproduct.png]

Charlie C.L. Wang, Yunbo Zhang, Hoi Sheung, IEEE Computer Graphics and Applications, vol.20, no.6, pp.74-85, November 2010. [pdf] [doi:10.1109/MCG.2009.155] [Application on WetSuit Design@Youtube] [Application in Facial Mask Design@YouTube] [exuskin.com]


Abstract: This article describes a geometric modeling system that generates the industry-required planar pieces for fabricating user-customized products from styling designs. The processing from styling design to industrial patterns is automated. Pre-stored styling designs can be automatically mapped onto different reference model shapes and then unfolded into planar pieces. In addition, a map-guided algorithm has been developed to locate the unfolded pieces according to industrial requirements.


 

[Image: localflattenable.png]

Hongwei Lin, Yunbo Zhang, Charlie C. L. Wang, Shuming Gao, ASME IDETC/CIE 2010 Conference, 30th Computers and Information in Engineering Conference, Montreal, Quebec, Canada, August 15-18, 2010. [pdf] [doi:10.1115/DETC2010-28301]

 

Abstract: Models represented by polygonal meshes are more and more widely used in CAD/CAM systems. In sheet manufacturing industries, the flattenability of a model is very important. Prior methods for processing the flattenability of a mesh surface usually employ a constrained optimization framework that takes the positions of all non-boundary vertices as variables. For a mesh surface with hundreds of thousands of vertices, solving such an optimization is very time-consuming and may exceed the capacity of main memory. In this paper, we develop a controllable evolution method to process the flattenability of a given mesh patch. It decouples the global optimization problem into a sequence of local controllable evolution steps, each of which has only one variable. Therefore, mesh surfaces with a large number of vertices can be processed.